This book comprises select proceedings of the 63rd Congress of the Indian Society of Theoretical and Applied Mechanics (ISTAM) held in Bangalore, in December 2018. Latest research in computational, experimental, and applied mechanics is presented in the book. The chapters are broadly classified into two sections - (i) fluid mechanics and (ii) solid mechanics. Each section covers computational and experimental studies on various contemporary topics such as aerospace dynamics and propulsion, atmospheric sciences, boundary layers, compressible flow, environmental fluid dynamics, control structures, fracture and crack, viscoelasticity, and mechanics of composites. The contents of this book will serve as a useful reference to students, researchers, and practitioners interested in the broad field of mechanics.
Advances in Bio-Based Fiber
Moving Towards a Green Society
by Sanjay Mavinkere Rangappa, Madhu Puttegowda, Jyotishkumar Parameswaranpillai, Suchart Siengchin, Sergey Gorbatyuk
- Publisher : Woodhead Publishing
- Release : 2021-12-09
- Pages : 834
- ISBN : 0128245441
- Language : En, Es, Fr & De
Advances in Bio-Based Fibres: Moving Towards a Green Society describes many novel natural fibers, their specific synthesis and characterization methods, their environmental sustainability values, their compatibility with polymer composites, and a wide range of innovative commercial engineering applications. As bio-based fiber polymer composites possess excellent mechanical, electrical and thermal properties, along with highly sustainable properties, they are an important technology for manufacturers and materials scientists seeking to improve the sustainability of their industries. This cutting-edge book draws on the latest industry practice and academic research to provide advice on technologies with applications in industries, including packaging, automotive, aerospace, biomedical and structural engineering.
- Provides technical data on advanced material properties, including electrical and rheological
- Gives a comprehensive guide to appraising and applying this technology to improve sustainability, including lifecycle assessment and recyclability
- Includes advice on the latest modeling techniques for designing with these materials
Epoxy Composites
Preparation, Characterization and Applications
by Jyotishkumar Parameswaranpillai, Herikrishnan Pulikkalparambil, Sanjay Mavinkere Rangappa, Suchart Siengchin
- Publisher : John Wiley & Sons
- Release : 2021-06-01
- Pages : 448
- ISBN : 3527346783
- Language : En, Es, Fr & De
Discover a one-stop resource for in-depth knowledge on epoxy composites from leading voices in the field. Used in a wide variety of materials engineering applications, epoxy composites are highly relevant to the work of engineers and scientists in many fields. Recent developments have allowed for significant advancements in their preparation, processing and characterization that are highly relevant to the aerospace and automobile industries, among others. In Epoxy Composites: Fabrication, Characterization and Applications, a distinguished team of authors and editors deliver a comprehensive and straightforward summary of the most recent developments in the area of epoxy composites. The book emphasizes their preparation, characterization and applications, providing a complete understanding of the correlation of rheology, cure reaction, morphology, and thermo-mechanical properties with filler dispersion. Readers will learn about a variety of topics on the cutting edge of epoxy composite fabrication and characterization, including smart epoxy composites, theoretical modeling, recycling and environmental issues, safety issues, and future prospects for these highly practical materials. Readers will also benefit from the inclusion of:
- A thorough introduction to epoxy composites, their synthesis and manufacturing, and micro- and nano-scale structure formation in epoxy and clay nanocomposites
- An exploration of long fiber reinforced epoxy composites and eco-friendly epoxy-based composites
- Practical discussions of the processing of epoxy composites based on carbon nanomaterials and the thermal stability and flame retardancy of epoxy composites
- An analysis of the spectroscopy and X-ray scattering studies of epoxy composites
Perfect for materials scientists, polymer chemists, and mechanical engineers, Epoxy Composites: Fabrication, Characterization and Applications will also earn a place in the libraries of engineering scientists working in industry and process engineers seeking a comprehensive and exhaustive resource on epoxy composites.
Polymer-Silica Based Composites in Sustainable Construction
Theory, Preparation and Characterizations
by Harrison Shagwira, Fredrick Madaraka Mwema, Thomas Ochuku Mbuya
- Publisher : CRC Press
- Release : 2021-12-27
- Pages : 124
- ISBN : 1000527956
- Language : En, Es, Fr & De
This book presents the application of polymer-silica based composites in the construction industry, providing the fundamental framework and knowledge needed for the sustainable and efficient use of these composites as building and structural materials. It also includes characterization of the prepared materials to ascertain their mechanical, chemical, and physical properties, and analyses results obtained using similar methods. Topics such as life cycle analysis of plastics, application of plastics in construction, and elimination of plastic wastes are also discussed. The book also provides information on the outlook and competitiveness of emerging composite materials.
- Covers theory, preparation and characterizations of polymer-silica based composites for green construction
- Discusses technology, reliability, manufacturing cost and environmental impact
- Reviews the classification, application, and processing of polymer-silica composites
- Gives a deeper analysis of the various tests carried out on polymer-silica composites
- Highlights the role of such composites in Industry 4.0 and emerging technologies
This book is aimed at graduate students and researchers in civil engineering, built environment, construction materials, and materials science.
Adsorption: Fundamental Processes and Applications
A Book
by Mehrorang Ghaedi
- Publisher : Academic Press
- Release : 2021-03-19
- Pages : 728
- ISBN : 0128188081
- Language : En, Es, Fr & De
Adsorption: Fundamental Processes and Applications, Volume 33 in the Interface Science and Technology Series, discusses the great technological importance of adsorption and describes how adsorbents are used on a large scale as desiccants, catalysts, catalyst supports, in the separation of gases, the purification of liquids, pollution control, and in respiratory protection. Finally, it explores how adsorption phenomena play a vital role in many solid-state reactions and biological mechanisms, as well as stressing the importance of the widespread use of adsorption techniques in the characterization of surface properties and the texture of fine powders.
- Covers the fundamental aspects of adsorption process engineering
- Reviews the environmental impact of key aquatic pollutants
- Discusses and analyzes the importance of adsorption processes for water treatment
- Highlights opportunity areas for adsorption process intensification
- Edited by a world-leading researcher in interface science
Polymer-Based Composites
Design, Manufacturing, and Applications
by V. Arumugaprabu, R. Deepak Joel Johnson, M. Uthayakumar, P. Sivaranjana
- Publisher : CRC Press
- Release : 2021-08-24
- Pages : 166
- ISBN : 1000433137
- Language : En, Es, Fr & De
The increasing use of composite materials over conventional materials has been a continual trend for over a decade. While the fundamental understanding of fiber reinforcement has not changed, many new material advancements have occurred, especially in manufacturing methods, and there is an ever-growing number of composite material applications across various industries. Polymer-Based Composites: Design, Manufacturing, and Applications presents the concepts and methods involved in the development of various fiber-reinforced composite materials. Features:
- Offers a comprehensive view of materials, mechanics, processing, design, and applications
- Bridges the gap between research, manufacturing science, and analysis and design
- Discusses composite materials composed of continuous synthetic fibers and matrices for use in engineering structures
- Presents codes and standards related to fiber-reinforced polymer composites
- Includes case studies and examples based on industrial, automotive, aerospace, and household applications
This book is a valuable resource for advanced students, researchers, and industry personnel to understand recent advances in the field and achieve practical results in the development, manufacture, and application of advanced composite materials.
Structural Characterization and Seismic Retrofitting of Adobe Constructions
Experimental and Numerical Developments
by Humberto Varum, Fulvio Parisi, Nicola Tarque, Dora Silveira
- Publisher : Springer Nature
- Release : 2021
- Pages : 255
- ISBN : 3030747379
- Language : En, Es, Fr & De
This book provides the reader with a review of the most relevant research on the structural characterization and seismic retrofitting of adobe construction. It offers a complete review of the latest research developments, and hence the relevance of the field. The book starts with an introductory discussion on adobe construction and its use throughout the world over time, highlighting characteristics and performance of adobe masonry structures as well as different contributions for cultural heritage conservation (Chapter 1). Then, the seismic behaviour of adobe masonry buildings is addressed, including examples of real performance during recent earthquakes (Chapter 2). In the following chapters, key research investigations on seismic response assessment and retrofitting of adobe constructions are reviewed. The review deals with the following issues: mechanical characterization of adobe bricks and adobe masonry (Chapters 3 and 4); quasi-static and shaking table testing of adobe masonry walls and structures (Chapters 5 and 6); non-destructive and minor-destructive testing for characterization of adobe constructions (Chapter 7); seismic strengthening techniques for adobe constructions (Chapter 8); and numerical modelling of adobe structures (Chapter 9). The book ends with Chapter 10, where some general conclusions are drawn and research needs are identified. Each chapter is co-authored by a group of experts from different countries to comprehensively address all issues of adobe constructions from a worldwide perspective. The information covered in this book is fundamental to support civil engineers and architects in the rehabilitation and strengthening of existing adobe constructions and also in the design of new adobe buildings. This information is also of interest to researchers, by providing a summary of existing research and suggesting possible directions for future research efforts.
Hybrid Fiber Composites
Materials, Manufacturing, Process Engineering
by Anish Khan, Sanjay Mavinkere Rangappa, Mohammad Jawaid, Suchart Siengchin, Abdullah M. Asiri
- Publisher : John Wiley & Sons
- Release : 2020-06-25
- Pages : 438
- ISBN : 3527824561
- Language : En, Es, Fr & De
Fiber-reinforced composites are exceptionally versatile materials whose properties can be tuned to exhibit a variety of favorable properties such as high tensile strength and resistance against wear or chemical and thermal influences. Consequently, these materials are widely used in various industrial fields such as the aircraft, marine, and automobile industry. After an overview of the general structures and properties of hybrid fiber composites, the book focuses on the manufacturing and processing of these materials and their mechanical performance, including the elucidation of failure mechanisms. A comprehensive chapter on the modeling of hybrid fiber composites from micromechanical properties to macro-scale material behavior is followed by a review of applications of these materials in structural engineering, packaging, and the automotive and aerospace industries.
Failure of Fibre-Reinforced Polymer Composites
A Book
by Mohamed Thariq Hameed Sultan, M Rajesh, K Jayakrishna
- Publisher : CRC Press
- Release : 2021-12-14
- Pages : 188
- ISBN : 1000477452
- Language : En, Es, Fr & De
This book focuses on the failure of polymer composites, covering vital aspects of enhancing failure resistance, constituents and repair, including the associated complexities. It discusses characterization and experimentation of the composites under loading with respect to specific environments and applications. It further includes topics such as green composites, advanced materials, composite joint failure, buckling failure, and fiber-metal composite failure. It explains the preparation and use of composites for weight-sensitive applications, leading to potential applications and formulations, and the fabrication of polymer products based on bio-resources.
- Provides an exhaustive understanding of failure and fatigue of polymer composites
- Covers the failure of fiber reinforced polymer composites, composite joint failure, fiber-metal composite failure, and laminate failure
- Discusses how to enhance the resistance of polymer composites against failure
- Provides input to industry-related and academically oriented research problems
- Represents an organized perspective and analysis of materials processing, material design, and their failure under loading
This book is aimed at researchers and graduate students in composites, fiber reinforcement, failure mechanisms, materials science, and mechanical engineering.
Green Biocomposites for Biomedical Engineering
Design, Properties, and Applications
by Md Enamul Hoque, Ahmed Sharif, Mohammad Jawaid
- Publisher : Woodhead Publishing
- Release : 2021-06-30
- Pages : 474
- ISBN : 0128215542
- Language : En, Es, Fr & De
Green Biocomposites for Biomedical Engineering: Design, Properties, and Applications combines emergent research outcomes with fundamental theoretical concepts relevant to processing, properties and applications of advanced green composites in the field of biomedical engineering. The book outlines the design elements and characterization of biocomposites, highlighting each class of biocomposite separately. A broad range of biomedical applications for biocomposites is then covered, with a final section discussing the ethics and safety regulations associated with manufacturing and the use of biocomposites. With contributions from eminent editors and recognized authors around the world, this book is a vital reference for researchers in biomedical engineering, materials science and environmental science, both in industry and academia.
- Provides comprehensive information regarding current advances in the interdisciplinary field of eco-friendly green composite materials for biomedical applications
- Offers coverage of state-of-the-art physics-based advanced models used in composites
- Lists a broad range of characterization techniques and biomedical applications
Fiber-Reinforced Nanocomposites: Fundamentals and Applications
A Book
by Baoguo Han, Sumit Sharma, Tuan Anh Nguyen, Longbiao Li, K. Subrahmanya Bhat
- Publisher : Elsevier
- Release : 2020-03-13
- Pages : 614
- ISBN : 0128199105
- Language : En, Es, Fr & De
Fiber-reinforced Nanocomposites: Fundamentals and Applications explores the fundamental concepts and emerging applications of fiber-reinforced nanocomposites in the automobile, aerospace, transportation, construction, sporting goods, optics, electronics, acoustics and environmental sectors. In addition, the book provides a detailed overview of the properties of fiber-reinforced nanocomposites, including discussion on embedding these high-strength fibers in matrices. Due to the mismatch in structure, density, strain and thermal expansion coefficients between matrix and fibers, their thermo-mechanical properties strongly depend not only on the preparative methods, but also on the interaction between the reinforcing phase and the matrix phase. This book offers a concise overview of these advances and how they are leading to the creation of stronger, more durable classes of nanocomposite materials.
- Explores the interaction between fiber, nanoreinforcers and matrices at the nanoscale
- Shows how the properties of fiber-reinforced nanocomposites are ideal for use in a variety of consumer products
- Outlines the major challenges to creating fiber-reinforced nanocomposites effectively
Biocomposites: Design and Mechanical Performance
A Book
by Manjusri Misra, Jitendra Kumar Pandey, Amar Mohanty
- Publisher : Woodhead Publishing
- Release : 2015-08-07
- Pages : 524
- ISBN : 178242394X
- Language : En, Es, Fr & De
Biocomposites: Design and Mechanical Performance describes recent research on cost-effective ways to improve the mechanical toughness and durability of biocomposites, while also reducing their weight. Beginning with an introduction to commercially competitive natural fiber-based composites, chapters then move on to explore the mechanical properties of a wide range of biocomposite materials, including polylactic, polyethylene, polycarbonate, oil palm, natural fiber epoxy, polyhydroxyalkanoate, polyvinyl acetate, polyurethane, starch, flax, poly (propylene carbonate)-based biocomposites, and biocomposites from biodegradable polymer blends, natural fibers, and green plastics, giving the reader a deep understanding of the potential of these materials.
- Describes recent research to improve the mechanical properties and performance of a wide range of biocomposite materials
- Explores the mechanical properties of a wide range of biocomposite materials, including polylactic, polyethylene, polycarbonate, oil palm, natural fiber epoxy, polyhydroxyalkanoate, polyvinyl acetate, and polyurethane
- Evaluates the potential of biocomposites as substitutes for petroleum-based plastics in industries such as packaging, electronics, automotive, aerospace and construction
- Includes contributions from leading experts in this field
Biocomposite and Synthetic Composites for Automotive Applications
A Book
by S.M. Sapuan, R.A. Ilyas
- Publisher : Woodhead Publishing
- Release : 2020-11-24
- Pages : 456
- ISBN : 0128209232
- Language : En, Es, Fr & De
Biocomposite and Synthetic Composites for Automotive Applications provides a detailed review of advanced macro and nanocomposite materials and structures, and discusses their use in the transport industry, specifically for automotive applications. This book covers materials selection, properties and performance, design solutions, and manufacturing techniques. A broad range of different material classes are reviewed with emphasis on advanced materials and new research pathways where composites can be derived from agricultural waste in the future, as well as the development and performance of hybrid composites. The book is an essential reference resource for those researching materials development and for industrial design engineers who need a detailed understanding of materials usage in transport structures. Life Cycle Assessment (LCA) analysis of composite products in automotive applications is also discussed, as is the effect of different fiber orientations on crash performance. Synthetic/natural fiber composites for aircraft engine fire-designated zones are linked to automotive applications. Additional chapters include the application and use of magnesium composites compared to biocomposites in the automotive industry; autonomous inspection and repair of aircraft composite structures via vortex robot technology and its relevance to automotive applications; composites in a three-wheeler (tuk tuk); and thermal properties of composites in automotive applications.
- Covers advanced macro and nanocomposites used in automotive structures
- Emphasizes materials selection, properties and performance, design solutions, and manufacturing techniques
- Features case studies of successful applications of biocomposites in automotive structures
Structural Health Monitoring System for Synthetic, Hybrid and Natural Fiber Composites
A Book
by Mohammad Jawaid, Ahmad Hamdan, Mohamed Thariq Hameed Sultan
- Publisher : Springer Nature
- Release : 2020-12-05
- Pages : 229
- ISBN : 9811588406
- Language : En, Es, Fr & De
This book covers the basic principles and challenges of structural health monitoring systems for natural fibre and hybrid composite structural materials in industrial applications such as building, automotive, aerospace and wind turbines. Structural health monitoring (SHM) has become crucial in evaluating the performance of structural applications, especially since it is in line with the high-tech strategy of Industry 4.0. It is a system operated in real time, i.e. online; hence, it also has advantages for damage detection, damage localisation, damage assessment and life prediction compared to non-destructive testing (NDT), which is conducted offline. The book covers the monitoring of composite materials in terms of structural properties and damage evaluation through modelling and prediction of failure in composites. It includes recent examples and real-world engineering applications to illustrate the current state of the technology. The book benefits lecturers, students, researchers, engineers and industrialists working in the civil, aerospace and wind turbine industries. | https://www.seecoalharbour.com/failure-analysis-in-biocomposites-fibre-reinforced-composites-and-hybrid-composites/ |
What’s the Difference between Appraised Value and Assessed Value?
When preparing to sell or renovate your home, it’s essential to understand the current value of your property. Its worth now may be different than when you purchased the house. There are two ways to gauge your home’s value — appraised value and assessed value.
While they may sound similar, they are very different, and this article is meant to help you understand the differences. It is not uncommon for the two values to differ from one another, and as a homeowner you should understand why.
Why is Home Value Important?
Understanding your home’s value gives you more control over insurance premiums and property taxes. Refinancing, home equity lines of credit, insurance premiums, and annual property taxes are all based on your home’s value. For example, you can better evaluate if your property taxes are too high by pulling comps of similar homes. It’s worth the effort if you can lower your tax bill, right?
If you are waiting to put your home on the market, knowing your home’s market value is essential. It is also important if you are considering small renovations or a major remodeling project. Knowing how much equity you have built up in your home, and whether the renovations you are planning will increase your home’s value enough to make the investment worthwhile, can prevent you from making expensive mistakes.
Being able to accurately gauge the equity built up in your home can also offer peace of mind in a turbulent economy. Having access to a home equity line of credit or a home equity loan can offer financial flexibility and stability during times when your family needs to make a significant investment such as putting a kid through college, buying a new vehicle, or planning a family vacation.
However, before making decisions based on your home’s value, it is essential to understand the difference between appraised value and assessed value.
What is an Assessed Value?
The assessed value of your home is what the local government uses to calculate property taxes. A tax assessment is required by state law to be performed at regular intervals that can often be years apart. These assessments help municipalities fairly levy annual taxes against real estate located in their jurisdiction. The purpose of a property assessment is, therefore, to provide a basis for collecting the taxes necessary to meet the municipality’s annual budget, not to provide home buyers with prices they should pay for specific properties.
To calculate property taxes, municipality officials will usually appoint an assessor or appraiser to determine your home’s value. It’s important to note that tax assessors may not be licensed appraisers. While rules will vary, the assessor compares your home to similar ones in the surrounding area. The assessor may consider the following points when determining a property’s value:
- Features and condition of the home
- Curb appeal
- Size and square footage
- Surrounding properties
- Access to public services
Assessors often do not visit each individual property in the area they are assessing, and they rarely see every nook and cranny of those properties. As such, they may not know about improvements made since the last assessment or whether a property is in dire need of fundamental repairs. The assessment process is often aided by computers and databases that contain property records and real estate data, all to help provide a fair and accurate assessed value of your home. Much of the information they base their assessment on is taken from public records and may not necessarily be current or accurate for selling-price purposes.
In general, the assessed value of a home tends to be 20% to 40% lower than the fair market value.
It’s no surprise that the higher the assessed value of your home, the more you’ll pay in property taxes, so most homeowners don’t complain that their assessed value is too low. If they try to have their assessed value changed, it’s usually to lower it even more.
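As a rough illustration of the relationship described above, the sketch below turns an assessed value into an approximate fair-market-value range using the 20% to 40% figure quoted in this article. The percentages and the dollar amount are illustrative assumptions only, not a formula used by any assessor or lender.

```python
# Illustrative only: this article says assessed value is often 20%-40% below
# fair market value, so market value is roughly assessed / (1 - discount).
# Real assessment ratios vary by jurisdiction.

def estimated_market_value_range(assessed_value: float) -> tuple[float, float]:
    low_discount, high_discount = 0.20, 0.40   # assumed range from the article
    low_estimate = assessed_value / (1 - low_discount)    # if assessed is 20% below market
    high_estimate = assessed_value / (1 - high_discount)  # if assessed is 40% below market
    return low_estimate, high_estimate

if __name__ == "__main__":
    assessed = 150_000  # hypothetical assessed value
    low, high = estimated_market_value_range(assessed)
    print(f"Assessed ${assessed:,.0f} -> market value roughly ${low:,.0f} to ${high:,.0f}")
```

Running this with a hypothetical $150,000 assessment gives a market-value range of roughly $187,500 to $250,000, which is why the two numbers should never be used interchangeably.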
What is Appraised Value?
To determine the appraised value of your home, an appraisal is required. An appraisal consists of a thorough inspection of the property and a comparison of recently sold homes in the area to estimate the value.
At Great Midwest Bank, we require appraisers to be licensed in evaluating market data, to be approved by the bank’s Board of Directors, and to provide a valuation that represents the “fair” sales price of the home if it were bought or sold today. The task of an appraiser is to determine a fair market value for your property. Appraisers look for certain things that can affect the price or impact the lender’s decision to loan you money for the home, such as:
- Health and safety hazards
- Structural integrity
- Overall condition of the home
- Upgrades or improvements
- Visible defects
If there are signs of potential issues, an appraiser may request additional inspections such as a roof, pest, or water inspection. If the appraisal or inspection finds any conditions that don’t meet the lender’s requirements, they’ll have to be corrected before you can move in. The findings determine the amount a lender will let you borrow for the property.
When buying a home, the appraised value protects you from paying more than the house is worth. When you’re refinancing your mortgage, it prevents the lender from lending you more than the home is worth.
What is Fair Market Value?
Fair market value is what your house is expected to sell for. It’s important to understand that fair market value is different from a list price or appraised value. List price is the price a seller hopes to get for their home. It’s the price they advertise their property at when they put it up for sale. On the other hand, fair market value is an estimate of what buyers are willing to pay for a home.
The market value is determined based on what the home is sold for before any financing is included in the process. This means if a home sold for $150k when it was listed for $200k, then the $150k becomes the market value.
So, which one is more important?
Overall, the appraisal value will be the most accurate when it comes to lending decisions. However, appraised value will not be the price the home is always sold for. Fair market value is largely determined by the current housing inventory available in the local market. If inventory is very low, a house may sell easily at a listed price far above its assessed value. If inventory is plentiful, a seller may have to lower the listed price below the assessed value of the home in order to make a sale.
We understand that there still may be some confusion about the difference between appraised value, assessed value, and fair market price – especially if you are a first-time home buyer. That’s okay. Our experienced loan officers are here to answer your questions and help guide you through the loan process.
Contact a Great Midwest Bank loan officer today. We’ll be happy to help you better understand the difference between appraised and assessed values and how they relate to the home buying process. | https://greatmidwestbank.com/appraised-value-vs-assessed-value-whats-the-difference/ |
Who says you can’t lose your atmosphere to your red dwarf host star and then regrow it using volcanic activity? After a tumultuous encounter with that star, this resilient world, situated 41 light-years from Earth, looks to be flourishing once more.
The researchers behind the observation studied the Earth-sized, rocky exoplanet GJ 1132 b using NASA’s Hubble Space Telescope and found credible evidence that the exoplanet has an atmosphere, a key requirement for life to survive. But there is still something odd about it: the atmosphere of GJ 1132 b, as we see it now, was not the planet’s initial atmosphere.
The Exoplanet GJ 1132 b and Key Findings
A brief overview of what was discovered about the exoplanet:
We report the detection of an atmosphere on a rocky exoplanet, GJ 1132 b, which is similar to Earth in terms of size and density. The atmospheric transmission spectrum was detected using Hubble WFC3 measurements and shows spectral signatures of aerosol scattering, HCN, and CH4 in a low mean molecular weight atmosphere.
We model the atmospheric loss process and conclude that GJ 1132 b likely lost the original H/He envelope, suggesting that the atmosphere that we detect has been reestablished. We explore the possibility of H2 mantle degassing, previously identified as a possibility for this planet by theoretical studies, and find that outgassing from ultra reduced magma could produce the observed atmosphere. In this way, we use the observed exoplanet transmission spectrum to gain insights into magma composition for a terrestrial planet.
The detection of an atmosphere on this rocky planet raises the possibility that the numerous powerfully irradiated Super-Earth planets, believed to be the evaporated cores of Sub-Neptunes, may, under favorable circumstances, host detectable atmospheres. (From "Detection of an Atmosphere on a Rocky Exoplanet", a publication in The Astronomical Journal.)
In an interview, Raissa Estrela, a co-author of the study and a planetary scientist at NASA’s Jet Propulsion Laboratory in Southern California, said, “It’s incredibly exciting because we think the atmosphere that we see now was reconstructed, so it may be a secondary atmosphere.” “At first, we assumed that these heavily irradiated planets would be dull because they had lost their atmospheres. However, we used Hubble to look at the solar system and current observations of this planet and concluded, “Oh no, there is an atmosphere there.”
Comparing the surface of GJ 1132 b with that of Earth and Saturn's largest moon
Video credits: artist’s impression and animation of exoplanet GJ 1132 b by Robert Hurt; atmosphere escaping an exoplanet (artist’s impression) and artist’s impression of WASP-107b by NASA, ESA and M. Kornmesser (ESA/Hubble); aerials of oozing red lava in Hawaii and of the Puu Oo volcanic vents on Kilauea by Artbeats; exovolcano animation background by Michael Lentz; illustration depicting one interpretation of planet GJ 357 c by Chris Smith.
GJ 1132 b, which completes one orbit of its greedy host star in only 1.5 days, is likely to be subject to tidal heating, in which gravitational forces churn the planet from the inside. Despite its short year, the planet is in an elliptical orbit, resulting in a phenomenon known as “gravitational pumping.” As it swings back and forth, GJ 1132 b is alternately squashed and stretched, and the heat generated by these tidal forces may allow it to retain a liquid mantle.
Let us compare it with the solar system we know the most about
“How many terrestrial planets don’t begin as terrestrials? Some may start as sub-Neptunes, and they become terrestrials through a mechanism that photo-evaporates the primordial atmosphere. This process works early in a planet’s life, when the star is hotter,” said lead author Mark Swain of JPL. “Then the star cools down and the planet’s just sitting there. So you’ve got this mechanism where you can cook off the atmosphere in the first 100 million years, and then things settle down. And if you can regenerate the atmosphere, maybe you can keep it.”
“The question is, what makes the mantle hot enough to remain liquid and power a ton of volcanic activity?” asked Swain. “This system is unique because it has an incentive for a lot of tidal heating.”
- Students will be able to write a lab report that contains a descriptive title, complete and concise abstract, substantive and relevant introduction that includes a testable hypothesis, descriptive methods, description and comparison of results of various testable groups, biological explanation of the results that reflect the testable hypothesis, a conclusion that contains societal implications or scientific impact, and references cited in the document.
- Students will be able to self-identify weaknesses and strengths of their writing.
- Students will understand how to utilize office hours and the writing center to receive feedback on their lab reports.
-
Dynamic Daphnia: An inquiry-based research experience in ecology that teaches the scientific process to first-year...
Learning Objectives: Students will be able to:
- Construct written predictions about 1 factor experiments.
- Interpret simple (2 variables) figures.
- Construct simple (2 variables) figures from data.
- Design simple 1 factor experiments with appropriate controls.
- Demonstrate proper use of standard laboratory items, including a two-stop pipette, stereomicroscope, and laboratory notebook.
- Calculate means and standard deviations.
- Given some scaffolding (instructions), select the correct statistical test for a data set, be able to run a t-test, ANOVA, chi-squared test, and linear regression in Microsoft Excel, and be able to correctly interpret their results (a brief worked example follows this list).
- Construct and present a scientific poster.
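The objectives above have students run these tests in Microsoft Excel; as a hedged illustration of the same idea in a different tool, the sketch below runs an independent-samples t-test in Python with SciPy. The heart-rate numbers and group labels are invented for the example and are not data from the lesson.

```python
# Hypothetical data: Daphnia heart rates (beats/min) for a control group and a
# treatment group. A two-sample t-test asks whether the group means differ.
from statistics import mean, stdev
from scipy import stats

control = [180, 175, 190, 185, 178, 182]
treated = [200, 195, 210, 205, 198, 202]

print("control mean/sd:", round(mean(control), 1), round(stdev(control), 1))
print("treated mean/sd:", round(mean(treated), 1), round(stdev(treated), 1))

t_stat, p_value = stats.ttest_ind(control, treated)  # independent two-sample t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```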
-
Follow the Sulfur: Using Yeast Mutants to Study a Metabolic Pathway
Learning Objectives: At the end of this lesson, students will be able to:
- use spot plating techniques to compare the growth of yeast strains on solid culture media.
- predict the ability of specific met deletion strains to grow on media containing various sulfur sources.
- predict how mutations in specific genes will affect the concentrations of metabolites in the pathways involved in methionine biosynthesis.
-
Discovery Poster Project
Learning Objectives: Students will be able to:
- identify and learn about a scientific research discovery of interest to them using popular press articles and the primary literature
- find a group on campus doing research that aligns with their interests and communicate with the faculty leader of that group
- create and present a poster that synthesizes their knowledge of the research beyond the discovery
-
A first lesson in mathematical modeling for biologists: Rocs
Learning Objectives:
- Systematically develop a functioning, discrete, single-species model of an exponentially-growing or -declining population (a minimal sketch follows this list).
- Use the model to recommend appropriate action for population management.
- Communicate model output and recommendations to non-expert audiences.
- Generate a collaborative work product that most individuals could not generate on their own, given time and resource constraints.
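The first objective describes a discrete, single-species exponential model. A minimal version of such a model is sketched below; the starting population size and per-step growth rate are assumed values for illustration, not numbers from the published lesson.

```python
# Discrete exponential model: N(t+1) = N(t) * (1 + r)
# r > 0 gives growth, r < 0 gives decline. All values below are illustrative.

def project_population(n0: float, r: float, steps: int) -> list[float]:
    sizes = [n0]
    for _ in range(steps):
        sizes.append(sizes[-1] * (1 + r))  # apply the same per-step rate each year
    return sizes

if __name__ == "__main__":
    trajectory = project_population(n0=50, r=0.12, steps=10)  # hypothetical roc population
    for year, size in enumerate(trajectory):
        print(f"year {year}: {size:.1f}")
```

Changing the sign of r is enough to explore the management question in the objectives: a declining population (r < 0) shrinks geometrically toward zero, while a growing one (r > 0) increases without bound unless some intervention changes the rate.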
-
Air Quality Data Mining: Mining the US EPA AirData website for student-led evaluation of air quality issues
Learning Objectives: Students will be able to:
- Describe various parameters of air quality that can negatively impact human health, list priority air pollutants, and interpret the EPA Air Quality Index as it relates to human health.
- Identify an air quality problem that varies on spatial and/or temporal scales that can be addressed using publicly available U.S. EPA air data.
- Collect appropriate U.S. EPA Airdata information needed to answer that/those questions, using the U.S. EPA Airdata website data mining tools.
- Analyze the data as needed to address or answer their question(s) (see the sketch after this list).
- Interpret data and draw conclusions regarding air quality levels and/or impacts on human and public health.
- Communicate results in the form of a scientific paper.
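The analysis step above can be done in a spreadsheet or in code. The sketch below assumes a daily-summary CSV has already been downloaded from the EPA AirData site; the file name and the column names ("Date", "DAILY_AQI_VALUE") are assumptions about the export format and may need to be adjusted to match the actual file.

```python
# Assumes a daily-summary CSV exported from the EPA AirData download tools;
# adjust CSV_PATH and the column names to match your own export.
import pandas as pd

CSV_PATH = "epa_daily_pm25.csv"    # hypothetical file name
DATE_COL = "Date"                  # assumed column names -- check your file
AQI_COL = "DAILY_AQI_VALUE"

df = pd.read_csv(CSV_PATH, parse_dates=[DATE_COL])

# Monthly mean AQI and a simple count of days above the "unhealthy for
# sensitive groups" threshold (AQI > 100).
monthly_aqi = df.set_index(DATE_COL)[AQI_COL].resample("M").mean()
unhealthy_days = (df[AQI_COL] > 100).sum()

print(monthly_aqi.round(1))
print(f"Days with AQI above 100: {unhealthy_days}")
```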
-
Using Synthetic Biology and pClone Red for Authentic Research on Promoter Function: Introductory Biology (identifying...
Learning Objectives:
- Describe how cells can produce proteins at the right time and correct amount.
- Diagram how a repressor works to reduce transcription.
- Diagram how an activator works to increase transcription.
- Identify a new promoter from literature and design a method to clone it and test its function.
- Successfully and safely manipulate DNA and Escherichia coli for ligation and transformation experiments.
- Design an experiment to verify a new promoter has been cloned into a destination vector.
- Design an experiment to measure the strength of a promoter.
- Analyze data showing reporter protein produced and use the data to assess promoter strength.
- Define type IIs restriction enzymes.
- Distinguish between type II and type IIs restriction enzymes.
- Explain how Golden Gate Assembly (GGA) works.
- Measure the relative strength of a promoter compared to a standard promoter.
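The last two objectives involve turning reporter measurements into a relative promoter strength. A common way to do this, used here as an assumption rather than a statement of the lesson's exact protocol, is to normalize the reporter signal by culture density and divide by the same quantity measured for the standard promoter. All numbers below are invented for illustration.

```python
# Relative promoter strength: (reporter signal / culture density) for the test
# promoter divided by the same ratio for a standard promoter.

def normalized_expression(fluorescence: float, od600: float) -> float:
    # Fluorescence per unit of culture density (OD600)
    return fluorescence / od600

standard = normalized_expression(fluorescence=12000, od600=0.48)   # standard promoter
candidate = normalized_expression(fluorescence=30000, od600=0.52)  # newly cloned promoter

relative_strength = candidate / standard
print(f"Relative promoter strength vs. standard: {relative_strength:.2f}x")
```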
-
Discovering Prokaryotic Gene Regulation by Building and Investigating a Computational Model of the lac Operon
Learning Objectives: Students will be able to:
- model how the components of the lac operon contribute to gene regulation and expression.
- generate and test predictions using computational modeling and simulations (a minimal simulation sketch follows this list).
- interpret and record graphs displaying simulation results.
- relate simulation results to cellular events.
- describe how changes in environmental glucose and lactose levels impact regulation of the lac operon.
- predict, test, and explain how mutations in specific elements in the lac operon affect their protein product and other elements within the operon.
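As a toy illustration of the regulatory logic these objectives describe (not the published lesson's model), the lac operon can be reduced to two environmental inputs: expression is high only when lactose is present, which releases the repressor, and glucose is absent, which allows cAMP-CAP to activate transcription.

```python
# Toy logic model of lac operon expression. Real simulations track repressor,
# CAP-cAMP, permease, beta-galactosidase, etc.; this keeps only two inputs.

def lac_expression(lactose_present: bool, glucose_present: bool) -> str:
    if not lactose_present:
        return "off (repressor bound to operator)"
    if glucose_present:
        return "low (repressor released, but little cAMP-CAP activation)"
    return "high (repressor released and cAMP-CAP activates transcription)"

for lactose in (False, True):
    for glucose in (False, True):
        print(f"lactose={lactose!s:5} glucose={glucose!s:5} -> "
              f"{lac_expression(lactose, glucose)}")
```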
-
You and Your Oral Microflora: Introducing non-biology majors to their "forgotten organ"
Learning Objectives: Students will be able to:
- Explain both beneficial and detrimental roles of microbes in human health.
- Compare and contrast DNA replication as it occurs inside a cell versus in a test tube
- Identify an unknown sequence of DNA by performing a BLAST search
- Navigate sources of scientific information to assess the accuracy of their experimental techniques
-
A clicker-based case study that untangles student thinking about the processes in the central dogma
Learning Objectives: Students will be able to:
- explain the differences between silent (no change in the resulting amino acid sequence), missense (a change in the amino acid sequence), and nonsense (a change resulting in a premature stop codon) mutations (illustrated in the sketch after this list).
- differentiate between how information is encoded during DNA replication, transcription, and translation.
- evaluate how different types of mutations (silent, missense, and nonsense) and the location of those mutations (intron, exon, and promoter) differentially affect the processes in the central dogma.
- predict the molecular (DNA size, mRNA length, mRNA abundance, and protein length) and/or phenotypic consequences of mutations.
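To make the silent/missense/nonsense distinction concrete, the sketch below translates one codon before and after a point mutation. The codon table is a small hand-picked subset of the genetic code covering only the codons used in the example.

```python
# Minimal codon table covering only the codons used below.
CODON_TABLE = {
    "GAA": "Glu", "GAG": "Glu",   # glutamate
    "GCA": "Ala",                 # alanine
    "UAC": "Tyr",                 # tyrosine
    "UAA": "STOP",                # stop codon
}

def classify(original: str, mutated: str) -> str:
    before, after = CODON_TABLE[original], CODON_TABLE[mutated]
    if after == before:
        return "silent"
    if after == "STOP":
        return "nonsense"
    return "missense"

examples = [("GAA", "GAG"), ("GAA", "GCA"), ("UAC", "UAA")]
for original, mutated in examples:
    print(f"{original} ({CODON_TABLE[original]}) -> {mutated} ({CODON_TABLE[mutated]}): "
          f"{classify(original, mutated)} mutation")
```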
-
Discovering Prokaryotic Gene Regulation with Simulations of the trp Operon
Learning Objectives: Students will be able to:
- Perturb and interpret simulations of the trp operon (see the minimal sketch after this list).
- Define how simulation results relate to cellular events.
- Describe the biological role of the trp operon.
- Describe cellular mechanisms regulating the trp operon.
- Explain mechanistically how changes in the extracellular environment affect the trp operon.
- Define the impact of mutations on trp operon expression and regulation. | https://www.coursesource.org/courses/search/c/introductory-biology/field_course_level/introductory-271/field_scientific_process_skiils/communicating-results-332/field_vision_change_concepts/information-flow-exchange-and-storage-314 |
This volume would be an extremely useful addition to the bookshelf of anybody with an active interest in the biochemical and pathological processes that underlie some of the more common neurological diseases. In the past the role of proteolysis in these disorders has been largely neglected because it was assumed that it represented a general non-specific metabolic process. In terms of attracting research interest the field also suffered from the confusion in the literature concerning the naming of these enzymes and the fact that the same enzyme might have many different names. However, as the editors point out in their preface, this is no longer the case and they have managed to bring together an impressive array of current research on the involvement of proteases in a wide variety of disorders. From what individually might have been regarded as rather disparate studies, one can now start to see common themes not least of which is the potential therapeutic value of targeting specific proteases and the development of specific inhibitors.
If, like me, you don’t have specialist knowledge of this area, I would recommend going straight to the last chapter on the mammalian proteinase genes. Here you will find a clearly laid out summary of the classification and characteristics of the four main groups of proteases (serine, cysteine, aspartic, and metalloproteinases). I also found the chapter on the ubiquitin/proteasome system and the normal physiological breakdown of proteins particularly informative. Having read these two chapters you then have a wide choice of disorders and proteases to choose from. Perhaps the most widely discussed is Alzheimer’s disease, undoubtedly because of the huge advances that have been made in the understanding of the biochemical processes underlying this disease over the past 15 years. Papain-like cysteine proteases (cathepsins), caspases, calpains, and a novel metalloendopeptidase (EC 3.4.24.15) all appear to have some role in the pathology of Alzheimer’s disease and may, therefore, be potential targets for drug development. There is also a group of Alzheimer’s disease-specific proteases that affect the processing of the amyloid precursor protein (α, β, and γ secretase) and presenilin (presenilinase). Both of these proteins are central to the development of pathology and so these enzymes in particular are key targets for current drug company research.
Apart from the interest in Alzheimer’s disease, there are other chapters covering the role of matrix metalloproteinases and calpain in the demyelination of multiple sclerosis and the key role of calpain in the pathology of traumatic brain and spinal cord injury. Further chapters describe the loss of calcium homeostasis and the subsequent pathological activation of calpain, resulting in the breakdown of key structural proteins in some neuromuscular disorders. In summary, this book has something for everyone in an area of research that holds huge promise for the future in terms of developing useful therapies for treating neurodegenerative disorders.
| https://jnnp.bmj.com/content/74/5/694.2 |
Stephen Petyarre / Mulga Seed Dreaming (SP03)
SKU: SP03
30cm x 30cm Acrylic on Canvas
$180.00
In stock
Artist Profile
Stephen Petyarre was born in 1974 in the Utopia region of the Northern Territory, and is the son of the late Gloria Ngale, a highly respected Elder and artist. Stephen is also the nephew of the late Emily Kame Kngwarreye, one of Australia’s best-known artists. His region is Alhalkere and his language group is Anmatyerre.
Stephen’s sister is Anna Price Petyarre; it was Anna who inspired Stephen to take up painting as a young man. Stephen is married to artist Bernadine Johnston Kemarre; they divide their time between Utopia and Adelaide, where their children attend school.
Stephen is a traditional man, participating in Men’s Business and further learning the dreaming stories of his family and his culture.
Artwork Description
Stephen paints the story of the Mulga Tree, which he calls Ntang Artety. Ntang means seed in Stephen’s language, and Artety is the word for Mulga Tree. This is a very important story for Stephen that belongs to his country. Mulga seeds are an important plant food throughout Central Australia, usually available to collect for several months of the year. The seeds are ground into a paste before being eaten; the paste tastes similar to peanut butter and is highly nutritious. In this painting the concentric circles represent the Dreaming place for Men’s Ceremony. The lines coming out of the circles are the travel lines to and from the meeting places of the men associated with this Dreaming story.
Shipping, Returns & Exchanges
Shipping & Insurance is 100% Free Worldwide
Note: Some countries & local jurisdictions may charge import customs fees. Please check with your local customs office. Free shipping does not include any additional import duties, taxes or fees.
Guarantee & Refunds:
All artworks come with a 30-day 100% money back guarantee.
If, for whatever reason, on delivery of your artwork(s), you are not satisfied with your acquisition, you may return the artworks(s) for a full refund of the purchase price.
When requesting a refund all return shipping charges are to be borne by the customer and as all goods are the responsibility of the customer until they are received by us, we highly recommend that you insure the goods to be returned to the value of the purchase price.
This can usually be done easily through your local postage service or courier.Exchanges:
The Artlandish 30-day exchange program means you may also swap your artwork(s) with something else up to the value of the originally purchased painting(s).
Simply notify Artlandish within 30 days of receiving your artwork that you wish to exchange it for another piece and then return the artwork to be exchanged.
The new artwork(s) chosen via exchange also enjoy Free worldwide shipping! You will only have to cover the return shipping costs of the artwork you wish to exchange.
How Artworks Are Sent:
All ochre artworks are delivered stretched on canvas ready to hang unless stated otherwise and all acrylic artworks are delivered un-stretched on canvas in a post pack tube unless stated otherwise.
If you have any other questions or concerns, please don’t hesitate to contact us at any time. | https://www.aboriginal-art-australia.com/artworks/stephen-petyarre-mulga-seed-dreaming-sp03/ |
‘Two-year conflict leading to mutual destruction,’ Russia warns Syria
Russia on Wednesday urged the regime and rebels in Syria to swiftly halt their almost two-year conflict, warning that seeking a military settlement risked leading to mutual destruction.
“It’s time to end this two-year conflict,” Foreign Minister Sergei Lavrov said after a meeting with Arab League chief Nabil al-Arabi and other top Arab diplomats.
“Neither side can allow itself to bet on a military settlement as this is a path to nowhere, a path to mutual destruction,” he said.
Lavrov, who on Monday is due to host Syrian Foreign Minister Walid Muallem for crucial talks, said Moscow was working to encourage dialogue between the rebels and regime of President Bashar al-Assad.
“There are signs of positive tendencies, signs of tendencies for dialogue both from the side of the government and the opposition,” he said.
But he said it was up to the two sides to decide what kind of dialogue might take place and at what level.
“It is important that they do not come out with any conditions for each other and say that I am going to talk to this person but not that one.”
Moscow, unlike other world powers, still keeps close ties with the regime of Assad and has infuriated the West and some Arab states by refusing to halt military cooperation with Damascus.
Lavrov confirmed that Russia was working on agreeing a trip to Moscow by the head of the Syrian opposition National Coalition Ahmed Moaz al-Khatib who has previously been unwilling to visit Russia over its past support for the regime.
“We are agreeing a date of a visit here by Mr Khatib, which will probably happen at the start of March,” said Lavrov.
He said the diplomacy was aimed at “creating the conditions for the start of direct dialogue” between the regime and opposition.
“What is needed is that the sides sit at the negotiating table,” said Lavrov.
He said there were signs of a new readiness on the part of the Syrian opposition for dialogue and it was vital that this was met by similar moves on the part the Syrian government.
“The government has long talked about this but now has come the time when words have to be put into concrete deeds,” said Lavrov.
“We count on this happening and we will work to make sure it does happen.”
Lavrov was speaking after a meeting of the formal session of the so-called Russian-Arab Forum that was founded in December 2009 but failed to meet as tensions rose between Moscow and regional states over the Arab Spring uprisings.
As well as Arabi, the talks included the foreign ministers of Iraq, Kuwait, Lebanon and Egypt. However, the top diplomats of Qatar and Saudi Arabia, who have been strongly supportive of the Syrian opposition and critical of Moscow, were conspicuously absent.
Russia on Tuesday sent two planes to Syria to pick up Russians wanting to leave the conflict-torn country as the navy despatched four warships to the Mediterranean reportedly for a possible larger evacuation.
Two emergencies ministry planes carrying humanitarian aid for Syria took off from Moscow for the port city of Latakia and would take any Russians wanting to leave on their flight back, the ministry said.
The Russian emergencies ministry Ilyushin-62 and Ilyushin-76 planes were carrying over 40 tonnes of humanitarian aid and would be ready to evacuate any Russians wanting to leave the country, a ministry statement said.
The aid consists of electrical equipment, bedding and tents, as well as foodstuffs such as fish and milk conserves and sugar.
On Tuesday, U.N. humanitarian chief Valerie Amos warned that aid operations are largely unable to reach the opposition-held north of Syria, despite the U.N. saying it has stepped up its operations elsewhere in the country.
“We are watching a humanitarian tragedy unfold before our eyes,” Amos told a news briefing late Tuesday. “We must do all we can to reassure the people that we care and that we will not let them down.”
“Cross-line operations are difficult but they are do-able.
“We are crossing conflict lines, negotiating with armed groups on the ground to reach more in need. But we are not reaching enough of those who require our help. Limited access in the north is a problem that can only be solved using alternative methods of aid delivery,” Amos said, quoted by Reuters news agency.
Some 70,000 people have been killed in the nearly two-year-old revolt against President Bashar al-Assad that has also sent 860,000 refugees fleeing abroad, according to the world body.
In the last few weeks, the U.N. refugee agency reached the northern opposition-held Azaz with aid for the first time. The World Health Organization has delivered vaccines in many opposition-held areas, Amos said.
Syrian opposition representatives told the United Nations this week that some three million people living throughout rebel-held territory require international assistance, she said.
The Syrian government still refuses to allow U.N. convoys to cross from Turkey into northern Syria, as most border crossings are controlled by the Free Syrian Army, she said. | https://english.alarabiya.net/articles/2013%2F02%2F20%2F267342 |
Brazil is one of the group of 17 megadiverse countries, holding around 15 to 20% of the planet’s entire biodiversity. This, combined with local traditional knowledge about the use of elements of that biodiversity, implies immense potential for business opportunities in the area of biotechnology, which brings together techniques that allow the use of living beings, or parts of living beings, modified or not, to generate new products and processes for specific purposes.
Brazil establishes rules for access to its genetic resources (RG) and associated traditional knowledge (CTA), as well as for the sharing of benefits arising from their economic exploitation, through Federal Law No. 13,123/2015, Decree No. 8,772/2016 and the regulations of CGEN (the Genetic Heritage Management Council).
Kasznar Leonardos, with its multidisciplinary team, offers technical and legal assistance for the intellectual protection of inventions derived from access to RG and CTA, as well as, in the event of lawsuits, litigation and case monitoring at all court levels.
Consultancy in the area of contracts related to biotechnology and access to genetic heritage and benefit sharing.
Judicial and administrative litigation in the area of biotechnology and access to genetic heritage. | https://www.kasznarleonardos.com/practice-areas/biodiversity |
What is Nicotinamide Adenine Dinucleotide (NAD+)?
NAD+ is the oxidized form of NADH (the reduced form of nicotinamide adenine dinucleotide). NAD+ is a component of the electron transport chain (ETC) and carries electrons from one biological reaction to another. That is how it becomes a medium for shuttling energy within and outside the cell. It also acts as a mediator for various biological processes in the body, such as post-translational modification of proteins and activation/deactivation of some enzymes.
It is a critical component in maintaining cell-to-cell communication in the body. The neurons present in the blood vessels, intestines, and bladder release NAD+, and it acts as an extracellular signaling molecule to regulate bio-functions in the body. It functions as a cofactor in numerous bodily processes such as immune defense, DNA repair, circadian cycles, and energy conversion. However, like other naturally occurring mediators in the body, its level declines as age advances. Therefore, replenishing the levels of NAD+ in the body can help offset various age-related progressive and degenerative processes. NAD+ is a naturally occurring compound with few to no side effects. It can also be used synergistically with other supplements to obtain multiplied benefits.
Specifications
SYNONYMS: Nicotinamide Adenine Dinucleotide, Beta-NAD, NAD, Endopride
MOLECULAR WEIGHT: 663.43 g/mol
MOLECULAR FORMULA: C21H27N7O14P2
PUBCHEM:CID 925
RECONSTITUTION: Required
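The molecular weight listed above can be cross-checked from the molecular formula using standard atomic masses; the short sketch below does that sum (atomic masses rounded to three decimals).

```python
# Cross-check of the listed molecular weight of NAD+ (C21H27N7O14P2)
# using standard atomic masses in g/mol.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "P": 30.974}
FORMULA = {"C": 21, "H": 27, "N": 7, "O": 14, "P": 2}

molar_mass = sum(ATOMIC_MASS[element] * count for element, count in FORMULA.items())
print(f"Calculated molar mass: {molar_mass:.2f} g/mol")  # ~663.43 g/mol, matching the listing
```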
NAD+ Research
NAD+ is of pivotal significance in numerous biological processes and greatly interested researchers. The goal is to study its natural role in the body and how it implies the clinical well-being of people owing to its immense properties. Below are a few of the many implications of NAD+.
Anti-aging effects:
Mitochondria are considered the powerhouse of the cell. They serve as a platform for primary metabolic functions such as intracellular signaling and regulation of innate immunity. These processes are directly affected by mitochondrial senescence, which ultimately alters cellular metabolism, inflammation, and even stem cell activity. These changes altogether reduce the pace of healing following an injury. That is how mitochondria are involved in age-related tissue and organ function decline. Any way of modulating mitochondrial activity could therefore slow, halt or even reverse aspects of the aging process.
A deficiency of NAD+ in the cell induces a pseudo-hypoxic state which interrupts signaling within the nucleus. Studies have provided evidence supporting the role of NAD+ supplementation in reversing at least some of the age-related decline in mitochondrial function. The mechanism underlying this property involves activation of SIRT1: this gene encodes the enzyme Sirtuin-1 (NAD+-dependent deacetylase sirtuin-1), which regulates mediators involved in metabolism, inflammation, the longevity of cells, and stress-linked processes.
Improvement in muscle function:
The age-related decline in muscle function is associated with mitochondrial senescence. It occurs in two steps. The first, reversible step involves declined expression of mitochondrial genes responsible for oxidative phosphorylation (the process by which mitochondria produce energy). The second, irreversible step consists of a decline in the nuclear genes responsible for oxidative phosphorylation. Mouse experiments have demonstrated reversal of step 1 when NAD+ is administered before the cell progresses to step 2.
The mechanism behind this intervention in mitochondrial aging involves stabilizing the activity of Peroxisome Proliferator-activated Receptor Gamma Co-activator 1-alpha (PGC-1-alpha). Studies indicate that the effect produced in the mitochondria is similar to the effect of exercise on the mitochondria of skeletal muscle, which helps maintain the skeletal muscles' oxidative capacity over a long period.
Role in Neurodegenerative Diseases:
NAD+ is a cofactor that plays a significant neuroprotective role. It does so by improving mitochondrial function and reducing the production of reactive oxygen species (ROS). ROS are responsible for inflammatory changes associated with injury and degenerative changes associated with aging. This association provides the basis for its study in certain neurodegenerative diseases such as Alzheimer's, Huntington's, and Parkinson's disease. Research conducted on mice showed that the administration of NAD+ helps protect against progressive motor deficits and the death of dopamine-producing cells in the substantia nigra. This implies that although NAD+ does not alleviate the symptoms, it may slow the progression of Parkinson's disease, if not entirely prevent its development.
Reduction of inflammation:
NAMPT is an enzyme associated with inflammation and is overexpressed in some types of cancer. An increase in NAMPT levels is related to a decrease in NAD+ levels and vice versa. NAMPT-associated inflammation occurs in cancers, obesity, type 2 diabetes, and nonalcoholic fatty liver disease. NAMPT is a potent activator of inflammation whose levels tend to decrease dramatically following the administration of NAD+.
Treatment of Addiction:
Drug and alcohol addiction is associated with a decline in the levels of NAD+. This addiction leads to significant nutritional deficiencies and cognitive deterioration. Supplementation with NAD+ can help overcome these changes and can be helpful in various addiction disorders.
NAD+ supplementation provides a wide range of benefits when used alone and can also be used in conjunction with other therapies to obtain multiplied effects. | https://biotechpeptides.com/product/nad-100mg-750mg/ |
As we welcome new officers and new members of the Council, we can reflect on our past accomplishments and the directions that our Division needs to go in the years ahead, and particularly for the next bar year. The American Bar Association recently found itself at a crossroads and it needed to take significant steps to reposition itself for the next decade. Membership was being lost at an alarming rate, both in recruiting new members and in retaining existing ones. The Senior Lawyers Division became a major part of that new direction. Just three years ago, the ABA Board of Governors took a significant step by expanding the Senior Lawyers Division to encompass every ABA member age 62 and older. Our Division went from about 3,000 members to approximately 60,000. We took on new responsibilities to provide the senior members of our Association with a meaningful experience as they worked through that significant part of their legal careers and concluded their professional service.
My vision for the Senior Lawyers Division is to make it an important and significant part of the professional lives of lawyers and to execute our mission as the Advocate for Experienced Lawyers and American Families. It is the Chair’s intent for the next bar year to focus on five major areas: (1) Retain Membership; (2) Provide Quality Publications; (3) Present Substantive Programs; (4) Collaborate with other Sections, Divisions, and Forums; and (5) Provide Centers of Excellence based in the Senior Lawyers Division.
Retain Membership
In May of 2019, after over a year of study, the ABA began its program to increase and support membership. The overall program emphasized a more reasonable dues structure while providing members with increased benefits. The emphasis was on recruiting more new members, particularly young lawyers, and solo and small firm lawyers. Our Division has a limited role in recruitment, except through encouraging law firms and state and local bar associations to join and participate in their national professional organization.
The primary role of the Senior Lawyers Division is to convince our members to continue their membership in the ABA as they work toward the end of their legal careers. The Senior Lawyers Division can and should provide a continuing professional experience for our members with communications and publications, valuable programs, and opportunities for continued collegial experiences. Many lawyers, after completing their service in their respective Sections, Divisions, and Forums, need other opportunities for work and leadership in their profession. It is said that the sixties of today are the new forties. If so, the experience gained in working with other ABA groups can be transferred to the activities of the Senior Lawyers Division.
To accomplish this goal of retaining membership, we will (1) Work with ABA marketing and membership staff to develop our retention plan, (2) Engage members in programs and social events, and (3) Provide opportunities for leadership and service positions in all of our activities.
Provide Quality Publications
The one way that we can stay in constant touch with our members is through our publications. The ABA, like most professional organizations, cannot regularly bring its members together. But we can take our Division to our members through magazines, emails, letters, and books. We can also provide an outlet for our members who want to contribute by writing interesting and informative articles and books. Our publications also provide an opportunity for our committees to publicize their programs and work product. We would like each substantive committee to submit an article at least once each year.
In our publications we will: (1) Publish our award-winning magazine, EXPERIENCE, on a quarterly basis. Each issue will have a theme and will concentrate on articles about our profession, about the legal developments, and about advice on interesting subjects for lawyers and their families. (2) Publish our e-newsletter, “Voice of Experience,” monthly with articles on what’s happening in the Division, in the ABA, in the law, and in life. (3) Publish books through the ABA on a variety of subjects of interest to lawyers and America’s families. We will seek two to three new books on senior issues this coming year.
Present Substantive Programs
The Division has demonstrated superiority in several program areas. One significant contribution has been in confronting the national opioid problem. In May of 2018, the Division conducted an Opioid Summit with the support of several ABA entities and other outside organizations. From the Summit came the Report: “Experienced Lawyers, American Families, and the Opioid Crisis.” The Report made nine recommendations for action. These recommendations were accepted as ABA policy by the ABA House of Delegates at its mid-year meeting in January 2019. This year the Division is joining with the Health Law Section to present a CLE Showcase Program at the Annual Meeting in San Francisco, which will feature the President Elect of the American Medical Association, the Co-Chair of the Legal Services Corporation Opioid Task Force, and primary counsel in the current opioid litigation.
This year, we will: (1) Form a Joint Opioid Task Force with the Health Law Section to carry on this work; (2) Encourage our Committees to present programs in their substantive areas using Webinars and stand-alone programs at ABA meetings; (3) Develop programs which can be incorporated into the ABA CLE Library; (4) Join with other ABA entities in presenting programs through our Liaison Representatives; (5) Work with the Special Committee for the Celebration of the 19th Amendment.
Collaborate with Other ABA Sections, Divisions, and Forums
This past year the Division received an award from the Section Officers Conference for its work in collaborating with over twenty sections and entities within the ABA and outside of it. The Division recognizes the strength of combining our efforts with those of other groups.
This year we will: (1) Join with the Health Law Section in forming the Joint Opioid Task Force and present programs and activities related to solving the problems of opioid addiction; (2) Work with the Judicial Division in the development of Drug Courts and other specialized judicial programs; (3) Coordinate with and support State Bars in the operation of their senior lawyer organizations and encourage states without such organizations to organize them; (4) More fully develop a system of liaisons with every ABA Section, Division, Forum, and Special Commission; and (5) Work with other Sections, Divisions, and Forums in the development and presentation of Resolutions to the ABA House of Delegates, with our Division taking the lead in at least one Report and Resolution this year.
Provide Centers of Excellence Based in the Seniors Lawyers Division
We have learned that there are several areas of special interest to our members. These areas need to be further developed and made available to our members. Some of the areas already underway are Elder Law, Opioid and Drug Abuse, Dementia programs for lawyers and caretakers, Disaster Assistance, pro bono service, and legislative/policy sponsorship. We believe that through the experience and leadership of our members, developed by their long years of work and dedication to their law firms and bar associations, they are well positioned to educate, teach, coach, and counsel.
In this area, we will: (1) Create a home for Elder Law practitioners with programs and publications focused on elder law issues; (2) Build on the Opioid programs already established and reach out to state and local groups through the Joint Opioid Task Force; (3) Provide dementia programs for lawyers, caregivers, and others; (4) Create service centers for volunteer assistance in pro bono and for work in disaster relief; and (5) Organize a working group of practicing lawyers in senior positions with their law firms to explore solutions and decision making techniques.
Division Leadership
I am honored to be the Chair of the Senior Lawyers Division, but the success that we will enjoy this year will depend on our officers, members of the Council, committee chairs, and our staff to accomplish our overall mission. We will rely on your dedication, ideas, encouragement, and support.
Respectfully, | https://www.americanbar.org/groups/senior_lawyers/publications/voice_of_experience/2019/august-2019/incoming-chair-albert-harvey-column-august-2019/ |
Establishing and Modelling Norms in Online Courses
In Module 5, you will:
• Identify key points of etiquette and privacy for class videoconferences including tasks before, during and after class
• Implement strategies and recommendations for student participation during videoconferences
• Incorporate promising practices for email communications with students (both individually and as an entire class)
• Promote collaboration among students through the use of discussion forums, blogs and group projects
• Identify efficient scheduling procedures for class events and assignments
In this intro video, Dr. Kellam talks about the key ideas in Module 5 and shows you how to create a video that allows your students to know a little about your interests — which can go a long way to supporting connection in an online course.
Estimated Completion Times
Estimated Reading Time : 1 Hour
Estimated Reflection Time : 15 Minutes
Estimated Practice Time: 20 Minutes
Table of Contents for this Module
- Part 1: Etiquette During Videoconferences
- Part 2: Tips for Videoconference Privacy
- Part 3: Email Communications with Students and Parents
- Part 4: Promoting Collaboration in the Online Classroom
- Part 5: Scheduling for Virtual Classes and Online Learning
5.1 Think Big
Your Digital Classroom: A Personal Experience
All too often, when teachers and students think of online learning they think of a cold, impersonal, regimented solution to learning, or a system that is bound by the constraints of technology. But does it have to be this way? Absolutely not!
I like to think of digital tools as new ways to expand my teaching practices. Yes, there are constraints with regards to in-person interactions, but there are so many tools and techniques that you can use to make your online classroom personal, reliable, fun and motivating for learners. If you can open your mind to the possibilities, you will see the potential. My son had some interesting experiences at his high school during the pandemic, and I will present them in this short case study.
Reflections on High School Online Learning During the Pandemic
As mentioned earlier, the pandemic was an unprecedented challenge for classroom teachers. Below, I share observations of the online teaching practices of four teachers — two who successfully embraced the possibilities of online learning, and two who seemed less sure of how to organize their online materials and communicate effectively. Consistent with research on the development of effective online learning environments (e.g., Sundar, 2008; Sundar et al., 2015) the key determining factors for success were predictability and personalization.
The Positive
Two of my son’s four teachers did an admirable job during the transition. The first was a new, second-year teacher who almost immediately had the schedule for the entire semester up on his class site. My son could see all of the upcoming content, readings, assignments and deadlines. It painted a very clear picture of what was expected and when it was due. Although this teacher only had two class meetings for the entire semester, his clear scheduling made it easy for my son to follow along. He said it was calming to see everything laid out so clearly. The second teacher did not have a very well-organized classroom site, but she insisted on one thing: bi-weekly group meetings. She would meet via videoconference with small groups of 5-6 students. She also changed the groups each month. My son said he got to know his teacher a lot better, and worked collaboratively with students that he never would have gotten to know in class. Interesting how online learning became more personal for him!
The Negative
One of my son’s teachers did not fare so well. She was a very experienced teacher, and in fact was the department head for her subject. While she did schedule online meetings with small groups of students, the groups and activities always stayed the same. Also, the videoconference schedule was all over the map, sometimes twice weekly and other times no meetings for two or more weeks. The teacher also kept adding assignments as the term went on, with no set schedule or syllabus. While they did accomplish a fair amount of work, my son found the schedule to be extremely stressful and difficult to follow. One time, the teacher even missed a 1:1 videoconference with my son and offered no apology or explanation. Her online unreliability managed to undermine what had been a successful classroom relationship between her and my son. She tried to embrace videoconferencing and the digital classroom, but ultimately failed due to poor organization.
Implications for Your Online Classroom
My son’s experiences highlight that there are so many considerations when creating a digital classroom beyond the layout and components of a learning management system. We must consider so many other practices and norms to ensure success:
– How do you effectively communicate with students?
– How do you schedule class time and group work?
– How do you promote collaboration?
– What are best practices for privacy and etiquette during a videoconference?
• This module will begin to answer these questions and present useful tools and templates to get you started…
Part 1: Etiquette During Videoconferences
Before Your Videoconference
A great videoconference takes a lot of careful planning, with most of the work occurring prior to beginning the meeting. With practice you can become adept at these steps and achieve the consistency and professionalism for a great virtual class. Here is a checklist to get you started:
1. Prepare Your Office
– Being comfortable in your physical space is so important when teaching:
• Have a solid background behind you
• Make sure the lighting comes from behind your camera
• Dress professionally, but vary it for fun
• Show your personality with your background objects or pictures
• Have technical support numbers or websites handy
• Wear a comfortable headset
• Have a comfortable chair
2. Know Your Videoconference Software
– This is critical as all software is different
– Login to the software and make sure you can do the following:
• Turn on and off your camera and microphone
• Accept participants into the videoconference
• Share your screen
• Mute all participant microphones
• Open the chat box, create chats (both for the entire class or smaller groups), respond to comments
• Create breakout groups
• Record and post the virtual class
3. Prepare Documentation and Slides
– Have all of your slides ready to go, and test the following:
• Connection to videoconference
• How to share your desktop (this can be tricky if you have two monitors)
• All links to websites, videos and documents to ensure they load and share properly
• Send all documentation to students before the videoconference (be consistent with this – often 15 minutes prior is ideal or have a link on your LMS with all required documents)
4. Set Etiquette Rules for Your Class
– Let’s think of this as “virtual classroom management.”
– Having set guidelines and rules for participation is key, and having a concise one page document on your LMS can help set the tone for your classes (Please see the attached document “Videoconference Etiquette for Students”)
– Also consider the following:
• How will you ask questions? Questions for the entire group are best for the chat function.
• How will you set up smaller groups?
• How will you check in with individual students?
5. Have an Agenda
– Students want to know what is going to be covered in class, and an agenda can keep everyone on the same page and promote inclusivity
– Things to include on your agenda:
• Topics and how they will be covered (lecture, small group breakout discussions, chat, student presentation, video, web search, etc.)
• Timing of each topic
• How students will be assessed
• Links to documents, your presentation, videos and websites
• How to access the recording
• Dates and times of future class meetings
6. Consider Multimodal Learning
– Just like in a face-to-face class, consider a variety of presentation methods
– Design slides with clear titles and content (no more than 4-5 main points per slide for clarity)
– Include videos or audio clips (if you can find an external expert even better)
– Provide opportunities for discussion (both online in the chat box or among students)
– Change up your clothing, location and/or background each class (this makes it fun and engaging, especially for younger learners)
7. Anticipate Challenges and Technical Issues
– How will you deal with disruptive students? Will you mute their microphone or turn off their camera?
– What if students cannot connect? Provide an email where they can reach you before or during the class. A second monitor is great for this.
– What if the videoconference fails? Give clear instructions on how to reconnect to your students.
– What if there is a privacy breach such as a participant not from your class? See more on this in the privacy section.
– What if your camera or microphone isn’t working? Know how to reconnect them or how to contact your videoconference software company for help.
8. Prepare a Two-Page Log Book
I like to have a book open beside my computer, with a class list on the left and a blank page on the right
– The class list allows me to:
• Note any good points or participation
• Note any classroom management issues
• Jot down ideas for engagement of students
• Assess group work or student presentations
– The blank page allows me to:
• Note any successes or failures during class
• Jot down ideas for future lessons, assessments or topics
During Your Videoconference
1. The First 5-10 Minutes
– Use the first few minutes for students to connect and to have a few informal conversations, much like you would in class – here are some ideas:
• Mention a current hot topic
• Ask a student or two to introduce their pets, or talk about a current event
• Talk about a fun story from your week/day
• Ask a student to present a class journal entry
• Make this time personal and inviting
2. Present Your Agenda
– This is important to set the tone for the class and get everyone on the same page for learning
– Things to include on your agenda:
• Topics and how they will be covered (lecture, small group breakout discussions, chat, student presentation, video, web search, etc.)
• Timing of each topic
• How students will be assessed
• Links to documents, your presentation, videos and websites
• How to access the recording
• Dates and times of future class meetings.
3. Use the Chat Feature to Promote Engagement
– Check the chat often to identify student questions and areas of interest or confusion
– If you find this difficult ask a student (or two) to take on special roles as “chat monitors” to keep track of questions or chats that you may miss while presenting
– Use chat to engage students in discussion. Pause often during a lecture to ask a question and invite students to answer or comment – have these planned into your lecture!
– “Call on” students with a text question or comment to elaborate
4. Use Narration to Promote Inclusivity
– Narrate the material that you’re displaying visually on the screen
– Just as you might read materials aloud in class, narrate the material you share on-screen in case students are not able to see essential text
– Don’t read slides word for word, as this is monotonous, but make sure to hit all of the key points for learning
5. Use Polling to Gauge Interest and Identify Points of Discussion
– Use a polling tool to collect student responses, and then share results
– You can use this to check for understanding but also to help “branch” your lecture by asking for student input into the lecture content
– This can lead to group discussions, opportunities for clarification or potential breakout groups
6. Promote Synchronous Group Work
• Use breakout rooms to help students talk in smaller groups, just as they would in a face to face classroom environment
• You can visit the breakout rooms, broadcast messages to various rooms, and end the breakout sessions when it is time to regroup
• Use a Google Doc to allow students to work together either as an entire class or in their breakout groups
• GIVE THEM A TASK – answer questions, brainstorm or jot down their own questions.
7. Use Student Presentations
• Give students the opportunity to present during your class
• You can mark these presentations very much like in-person work, looking for pronunciation, clarity, expression and confidence
• You can also assign presentations to a group, and have members present different portions and “pass” the presentation among group members
• This is also an opportunity for you to meet privately with the group in order to help them prepare as well as evaluate their teamwork and preparation!
8. Take Attendance and Note Participation
• These can be done more easily during a videoconference than they can be done in class!
• Most videoconference software packages include an attendance feature, and the chat is recorded and available after the conference ends (a small script can tally these exports – see the sketch after this list)
• I like to have a printed class list handy and quickly note if students participate during the videoconference
• You can also use these lists and notes to “call on” quiet students during your next class or contact them after class if you have any concerns about engagement or motivation
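If your platform lets you export the attendance list and chat log as CSV files, a small script can speed up the tallying described above. The sketch below is a hypothetical example: the file names and the column headers ("name", "sender") are assumptions and will need to be adjusted to whatever your videoconference software actually exports.

```python
# Tally attendance and chat participation from hypothetical CSV exports.
import csv
from collections import Counter

def load_attendees(attendance_csv):
    """Return the set of names found in the attendance export."""
    with open(attendance_csv, newline="", encoding="utf-8") as f:
        return {row["name"].strip() for row in csv.DictReader(f)}

def count_chat_messages(chat_csv):
    """Count chat messages per sender from the chat export."""
    with open(chat_csv, newline="", encoding="utf-8") as f:
        return Counter(row["sender"].strip() for row in csv.DictReader(f))

attendees = load_attendees("attendance_export.csv")   # hypothetical file name
messages = count_chat_messages("chat_export.csv")     # hypothetical file name
for student in sorted(attendees):
    print(f"{student}: {messages.get(student, 0)} chat message(s)")
```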
9. Let’s Talk About Flipped Classrooms (Again)
• If you record your lectures and have students watch them outside of class time (you can track who has watched them in your LMS), you can use your live videoconferences for the following:
– Problem-solving or workshop activities
– Help sessions with small groups of students with similar skill levels
– Practical examples of your lecture material
– Group collaboration on presentations or projects
– Doing “homework” live with the teacher
• This also takes pressure off of parents for doing homework – they will appreciate it!!
10. The Wrap-up
• At the end of your videoconference, be sure to do the following:
– Ask if anyone has questions about the class (this can be a verbal response for small group meetings, or use the chat feature for large groups)
– Summarize your key learnings for the class
– Schedule your next class videoconference
– List any upcoming assignments or homework responsibilities
– Identify how you will communicate with students/update your LMS/answer questions
After Your Videoconference
Housekeeping and Follow-Up
• Here are some things I like to do after a virtual class:
– Make any final notes in my log book about the class (success, failure, improvements, future topics)
– Note any technical difficulties and seek solutions
– Schedule the next videoconference
– Take attendance and note participation marks
– Respond by email to any student concerns or issues from the videoconference
– Update your LMS class site with any materials from the class (presentation, handouts, group work, etc.)
– Send out or post instructions about upcoming assignments, class events or presentations
Part 2: Tips for Videoconference Privacy
All tips in this section are presented in two ways — in text below, and in this video presentation where Dr. Kellam provides a model of how to record a picture-in-picture video with a set of slides.
1. Run the Latest Version of Videoconference Software
• New versions of software have the latest updates for addressing security vulnerabilities, fixing known bugs or providing new features or functions
• Updating to the latest version of software is critical to keeping in step with hackers as they find new ways to join or disrupt videoconferences
2. Know Your Security Controls in the Software
Specific features vary according to the software you are using, but most have controls that give a meeting host power over an invited participant’s access to and use of the video services
Here are some tips:
– Use Unique Meeting IDs.
• To prevent unauthorized access by persons who received the meeting ID for a prior meeting held by the same host, use the software to generate a unique meeting ID every time a new meeting is created.
– Require Passwords and send them out via email or post them on your class LMS (one way to generate them is sketched after the link below)
https://www.lexology.com/library/detail.aspx?g=a2f1311d-e0a9-4618-a457-1a9f575b78bf
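Most platforms will generate unique meeting IDs and passwords for you, but if you ever need to create your own passcodes to distribute by email or on your LMS, Python's standard secrets module is one safe way to do it. This is only a sketch; the class dates shown are placeholders.

```python
# Generate a fresh random passcode for each scheduled class meeting.
import secrets
import string

def make_passcode(length=10):
    """Return a random passcode of letters and digits for a class meeting."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One fresh passcode per scheduled class, so an old invitation cannot be reused.
for class_date in ["2024-09-05", "2024-09-12", "2024-09-19"]:  # placeholder dates
    print(class_date, make_passcode())
```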
3. Control Access to Your Videoconference
• Waiting Rooms
– Using this feature allows you to validate that only invited students have joined the class
• Muting and Removal
– Most services allow a host to mute specific participants or remove them altogether from a class
• Turn off Screen Sharing or Use it Individually
– Disable the screen sharing function for all students except for the host and any other presenters, unless it is required (like for group presentations or sharing of student work)
•Turn off Recording
– This should be disabled unless the content is needed for purposes such as asynchronous learning. If it is, you should only record the teacher and not the students
• Use Chat Wisely
– If you use the chat feature then make it public between the host and all students (vs. private messaging between students)
• File Sharing Settings
– Disable these and use a school-approved resource like Google Docs
4. Tips for You and Your Students
• Be very mindful of what the camera is showing in the background
• Choose a safe and appropriate place and appropriate attire for conferencing – not in your pajamas in your bedroom!
• Students should use their first name and last initial as a screen name
• Teachers should recognize (say hello to) students as they enter the group. Students should say hello if the teacher doesn’t see them enter.
• Do not share the video conference link with anyone outside the participants that were invited
5. Videoconference Consent
• Parents/guardians of secondary students should be informed of video conferencing and the purpose of the conferences
• Parents/guardians of elementary students must provide consent for students to participate in video conferencing
– You can send the videoconference link to the parent in advance of the date with the time, link, and duration of the conference and ask for an email back giving permission or indicating that they will attend with the student
• For 1 to 1 student to teacher video conferences, you should obtain the consent of the student’s parent/guardian and the consent of your supervisor or principal
• Consent could take the form of email, phone call, letter, or other means of communication sent to the principal and teacher
Key Resource: Ontario College of Teachers Videoconferencing Guidelines
An excellent resource that blends both etiquette and privacy concerns can be found on the OCT website here; it includes a downloadable PDF guide for your reference.
Part 3: Email Communications with Students and Parents
Be Consistent and Set Expectations
• Let students know how you plan to communicate with them, and how often – I find once per week for the whole group works well
• Tell students both how often you expect them to check their email, and how quickly they can expect your response – I have a policy to respond within 24 hours
• Let students know about changes or disruptions as early as possible, even if all the details aren’t in place yet, and let them know when they can expect more specific information
Manage Your Communications Load
• You will likely receive some individual requests for information that could be useful to all your students, so consider keeping track of frequently asked questions and sending those replies out to everyone
– For example changes to due dates, typos, clarifications of objectives and assignments
• Also, create an information page in your LMS, and then encourage students to check there first for answers before emailing you
• Manage Your Communications Load – By Being Responsive!!
My Two Monitor Technique
• Communicating effectively by email can make a student feel heard, respected and create a trusting relationship
• Students nowadays are used to a fast-paced life, so let’s give it to them – it will help your time-management!
• Have two monitors and keep one with your class email open at all times
• Do your best to respond and clear all student emails by the end of the day – quite often they are short clarifications on assignments, due dates or absences
• Schedule meetings or office hours for more detailed requests
• YOUR STUDENTS WILL LOVE YOU FOR IT!!!
When is email the appropriate form of communication to use?
Email is a good way to get your message across when:
• You need to send students an electronic file, such as a document for a course, a rubric, or a marked rough draft of an assignment
• You need to distribute information to a large number of people quickly (for example, an agenda that needs to be sent to the entire class)
• You need a written record of the communication
• This is great for future reference or if you need proof (for example, proof that you responded to a student request or provided feedback)
When is email NOT an appropriate form of communication to use?
• Email is not an effective means of communication when:
• Your message is long and complicated or requires additional discussion that would best be accomplished face-to-face. Set up a meeting instead.
• The information is highly confidential. Email is NEVER private – and it can be stored forever!
• Remember, your message could be forwarded on to other people without your knowledge!
• Your message is emotionally charged or the tone of the message could be misunderstood. If you would hesitate to say something to someone’s face, do not write it in an email.
https://writingcenter.unc.edu/tips-and-tools/effective-e-mail-communication/
Helpful Rules for Emails
• If you are sending an email to a contact list you must include every contact as a blind copy (BCC) – see the sketch after this list
– Otherwise you are sending your email list to the entire class
• The body of the email must be informative, complete and concise (more on this later)
• The grammar, spelling and tone of the message should be perfect – make sure to review it carefully prior to sending…
• It is convenient to assign certain hours to send and read emails, in order to avoid wasting a lot of time on it – or use the two monitor tip!
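For illustration only, here is a minimal Python sketch of sending one class-wide message with every recipient in BCC, so that student addresses are never exposed to the whole list. The mail server, port, account and addresses are placeholders rather than real settings, and many schools will require you to use their approved mail system instead of a script like this.

```python
# Send a class-wide email with all students in BCC (placeholder settings throughout).
import smtplib
from email.message import EmailMessage

def send_class_email(subject, body, sender, bcc_list):
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = sender                 # addressed to yourself...
    msg["Bcc"] = ", ".join(bcc_list)   # ...and blind-copied to every student
    msg.set_content(body)

    with smtplib.SMTP("smtp.example-school.org", 587) as server:  # placeholder server
        server.starttls()
        server.login(sender, "APP_PASSWORD")  # placeholder credential
        server.send_message(msg)

send_class_email(
    subject="Week 4 agenda and readings",
    body="Hello everyone, the agenda for Thursday's class is now posted on the LMS.",
    sender="teacher@example-school.org",
    bcc_list=["student1@example.org", "student2@example.org"],
)
```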
Rules for Class Emails
You should also establish rules for using emails in class:
• Students must be properly informed of how to use the email service used in class – send out a list of rules for them!
• It is best practice to have a specific email account for school communications for both teachers and students – parents may want to use a personal email for simplicity
• You must establish the topics that can be discussed by email and those that cannot
• No insulting, annoying, spamming or doing anything that could have negative consequences
• Encourage respect for others as you would in class
• The emails sent to the teacher must always be answered, preferably with a positive or supportive response
• Establish consequences for those who do not follow the rules of sending emails in class
Writing more effective emails
• Here are some steps you can take to ensure that your message is understood:
• Briefly state your purpose for writing in the very beginning of your message, or even in the title!
• Be sure to provide the reader with some context. If you’re asking a question, cut and paste any relevant text (for example, computer error messages, assignment prompts you don’t understand, part of a previous message, etc.) into the email so that the reader has some frame of reference for your question.
• Use paragraphs to separate thoughts (or consider writing separate emails if you have many unrelated points or questions).
• State the desired outcome at the end of your message
• If you’re requesting a response, let the student know what type of response you require (for example, an email reply, possible times for a meeting, due date for work, etc.)
• If your request has a due date, be sure to highlight that due date in a prominent position in your message
• Ending your email with the next step can be really useful (for example, have this ready for next class, or we will discuss in our meeting next week)
https://writingcenter.unc.edu/tips-and-tools/effective-e-mail-communication/
There are also many things to consider if you are sending emails to parents, and there is an excellent resource from Kathleen Morris here.
Part 4: Promoting Collaboration in the Online Classroom
Collaboration Online: So Many Tools and Advantages
We are fortunate to have so many excellent tools for online collaboration: Google Docs, whiteboards, chat rooms, email and videoconferencing to name just a few.
All of these tools provide students the opportunity to share, reflect, assess and learn from one another, but the key is to set them up for success.
This section highlights some tips on setting the stage for online collaboration and ideas for setting up group projects and collaborative learning.
Get to KNOW YOUR STUDENTS
This is a theme throughout this course, and is so critical for collaborative learning.
• One of your first projects should be an introduction page that is created by each one of your students
• Producing a personal online introduction can result in much more detail and interaction than the usual classroom introduction
• Choose your questions carefully, and think about your grade, subject, and assignments
Topics for Personal Profile Pages
Some useful topics include:
• What are your favorite subjects?
• Hobbies?
• Favorite part of school? Group work? Presentations? Writing? Drawing? Creating videos?
• Least favorite part of school?
• What are your personal strengths?
• Who is your favorite celebrity or sports figure and why?
Answers to these questions not only allow you to get to know your students, but can provide valuable information for forming groups and assigning group roles
Working with Groups Online
Again, how you set up your class will determine how much TIME you will have to work with your students
• Consider using a flipped classroom model, where you record certain lectures or presentations
• You can then use “class” time to meet with small groups of students to coach them on their projects and ensure motivation and success
• This will allow you to use your valuable teaching skills in a practical way, and monitor progress and assess group interactions
Fostering Collaboration
• Technology can make collaborative learning easier
• Collaboration via technology can produce the same results as in-person collaboration: increased learning opportunities
• Start by having students get to know each other’s backgrounds and ideas beforehand on a blog or chat-board – like our personal introduction pages!!
• Value diversity.
• Collaborative learning relies on some buy in
• Students need to respect and appreciate each other’s viewpoints for it to work
• Focus your discussions to promote different perspectives and independent thinking
• Try to model or provide examples where people working together are able to reach complex solutions, or utilize real-world problems that promote problem-solving and teamwork…
Promoting Diversity
• Keep in mind the diversity of groups, and ways that you can promote them.
• Mixed groups that include a range of talents, backgrounds, learning styles, ideas, and experiences are excellent learning opportunities for students
• Mixed aptitude groups can learn more from each other and increase achievement of low performers (your participation and coaching will be important here)
• Rotate groups so students have a chance to learn from others – you can start with groups of similar learners or friends for your first project, then promote diversity in future projects once students are comfortable with the technology and collaboration process
• Establish group goals. Effective collaborative learning involves establishment of group goals, and this keeps the group on task and productive
– Before beginning an assignment, it is best to define goals and objectives to save time.
• Keep groups midsized. Small groups of 3 or fewer lack enough diversity and may not allow divergent thinking to occur
– Groups that are too large create “freeloading” where not all members participate
– A moderate size group of 4-5 is ideal, and make sure each member has a specific role or job (a simple grouping sketch follows this list)
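As referenced above, here is a minimal sketch of one way to shuffle a class list into mixed groups of about 4-5 and re-shuffle for each new project. The student names are placeholders; in practice you would feed in your own class list, and you may still want to adjust groups by hand using what you know about your students.

```python
# Shuffle students into groups of roughly 4-5, spreading any leftovers across groups.
import random

def make_groups(students, size=4):
    """Shuffle students into groups of `size`; fold a too-small last group into the others."""
    shuffled = students[:]
    random.shuffle(shuffled)
    groups = [shuffled[i:i + size] for i in range(0, len(shuffled), size)]
    if len(groups) > 1 and len(groups[-1]) < size:
        leftovers = groups.pop()
        for i, student in enumerate(leftovers):
            groups[i % len(groups)].append(student)
    return groups

class_list = [f"Student {n}" for n in range(1, 23)]  # hypothetical class of 22
for number, group in enumerate(make_groups(class_list), start=1):
    print(f"Group {number}: {', '.join(group)}")
```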
Trust and Communication
• Build trust and promote open communication.
• Building trust is so important for collaboration!
– Deal with emotional issues that arise immediately and any interpersonal problems before moving on
– Attend the first group meeting so you can set the rules, gauge group cohesion and answer questions
– Assignments should encourage team members to explain concepts thoroughly to each other
– Students who provide and receive intricate explanations gain most from collaborative learning, so open and honest communication is key here!
Promoting Interaction
• Establish group interactions and roles.
• As the teacher, you should provide a model of how a successful group functions
• Shared leadership is best, and students can work together on the task and maintenance functions of a group
• Roles are important in group development. Task functions include:
– Initiating Discussions
– Clarifying points
– Summarizing
– Challenging assumptions/devil’s advocate
– Providing or researching information
Authentic Learning
• Use real world problems.
• Authentic learning using open-ended questions can be very engaging for students
• Rather than spending a lot of time designing an artificial scenario, use inspiration from everyday problems
• Real world problems can be used to facilitate project-based learning and often have the right scope for collaborative learning, for example:
– Current world events
– Issues at your school or in your community
– Themed work based on the time of the year
– Issues that matter to the student community
Vary Your Techniques
• Consider using different strategies, like the Jigsaw technique.
• The jigsaw strategy is said to improve social interactions in learning and support diversity
• To do this, separate the assignment into subtasks, where individuals research their assigned area
– Students with the same topic from different groups might meet together to discuss ideas between groups
– This allows students to become “experts” in their assigned topic
– Students then return to their primary group to educate others
A Final Thought on Group Work Online: Have Fun
• This is your chance to coach and interact with your students in a meaningful way.
• Working toward a common goal with a group and seeing the product of their combined learning is one of the best feelings that an educator can have!
• Get creative, use multimodal techniques, and consider fun and engaging projects for your students!
Part 5: Scheduling for Virtual Classes and Online Learning
Scheduling = Consistency
Your students are accustomed to a routine in the live classroom, yet this is one of the most overlooked elements of planning for a virtual class
Routine will also help you organize your lessons, assignments and evaluations
Preparing or participating in an online course is convenient, but it takes weekly discipline to stay on top of modules, activities and assignments
Starting off on the Right Foot: The Course Syllabus
Granted, this is usually reserved for university courses, but a good online course needs a course syllabus. You should seriously consider preparing one for your classes.
A great course syllabus should have the following sections:
• A course description
• Course objectives and teaching strategies
• Course structure
• Assignments and evaluation methods
• Schedule of classes
• Attendance policy
• Privacy and etiquette for online learning
Your Weekly Schedule
For me, there are two weekly schedules: the teacher’s schedule and the class schedule
The Teacher’s Schedule
• It is important to reserve weekly time to work on your online course or else you can fall behind just as easily as your students! Here are key components:
– Preparation and recording of lectures
– Method for responding to emails
– Virtual office hours (weekly, or perhaps only monthly)
– Assessment and evaluation (prep and marking)
– Reading and research
The Class Schedule
• CONSISTENCY is key. You should post dedicated times each week on the LMS for all of the key components (a simple date-generating sketch follows this list):
– When you update the class LMS (Sunday night)
– Weekly live classes or class recordings
– Weekly discussion groups/group work
– Discussion reflections or blog posts (if applicable)
– Readings or web research
– Homework and/or assignment completion dates
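As noted above, a small script can generate the full set of weekly dates once so you can paste a consistent schedule into your LMS or syllabus. This is only a sketch under the assumption of one live class per week; the term dates and the Sunday-before-Thursday LMS update are placeholders.

```python
# Generate weekly class dates for a hypothetical term, plus the LMS update day.
from datetime import date, timedelta

def weekly_dates(first_class, last_class):
    """Return every class date from first_class to last_class, one week apart."""
    dates, current = [], first_class
    while current <= last_class:
        dates.append(current)
        current += timedelta(weeks=1)
    return dates

# Hypothetical term: live class every Thursday, LMS updated the preceding Sunday.
for class_day in weekly_dates(date(2024, 9, 5), date(2024, 12, 12)):
    lms_update = class_day - timedelta(days=4)  # the Sunday before a Thursday class
    print(f"LMS update: {lms_update}  |  Live class: {class_day}")
```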
Scheduling Group Work and Discussions
Group discussions or projects should have their own section in the class schedule. They can include:
• Meetings with the group and/or the teacher
• Important deadlines for group assignment components
• Deadlines for discussion forum posts and summaries
• Dedicated space on the class LMS for group work:
– Google Docs
– Whiteboards
– Group folders
– Group assignment submissions
– Group discussions
Scheduling Assessments and Assignments
One of the biggest mistakes in many online classes is not having any assessments in the first few weeks of class. It is critical to balance the workload for both you and your students, so space out assessments throughout the semester. Here are a few ideas:
• Utilize formative assessments at the beginning that focus on getting to know your students – such as the personal biography pages, students interviewing one another, or small group presentations via video
• Consider having two group assignments, one at the beginning and one at the end of the semester
• I like having reflective exercises throughout the term, with a culminating assembly of the work at the end
• You CAN live without tests in the virtual environment!!
5.2 Reflect
Think of your personal and/or learning experiences using videoconferencing. Was it effective as a communication tool? Why or why not? If not, what could have been done to make it more effective?
What was the best group project you have ever participated in or seen in a classroom environment? Could it be adapted for the online environment?
5.3 Practice
Setting the “Norms” for Your Virtual Classroom
Print out or use an online calendar tool and create both a weekly and monthly calendar for your virtual classroom.
Write on the weekly and monthly calendar the routines that you will create for the following topics:
- Videoconferencing
- When/how often will you hold entire class videoconferences?
- When/how often will you schedule group videoconferences?
- Will you hold 1:1 videoconferences with students? If so, when and how often?
2. Email Communications
- When/how often/for what purpose will you communicate with the entire class via email?
- When/how often/for what purpose will you communicate with parents via email?
- When/how often/for what purpose will you communicate with individual students via email?
3. Group Projects and Collaboration
- Think about topics/projects that you will use for group work
- What will be your due dates, key assignments and check-in points with your groups?
- How will you ensure diversity in your groups?
Ready to Move on to Module 6 (of 6)?
In Module 6 we show you how to address the OCT Standards of Practice during your online practicum.
References for Module 5
Nonthamand, N. (2020). Guideline to develop an instructional design model using video conference in open learning. International Journal of Emerging Technologies in Learning, 15(3), 140-155.
Rehn, N., Maor, D., & McConney, A. (2017). Navigating the challenges of delivering secondary school courses by videoconference. British Journal of Educational Technology, 48, 802-813. doi:10.1111/bjet.12460
Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. In M. J. Metzger & A. J. Flanagin (Eds.), Digital media, youth, and credibility (pp. 72–100). Cambridge, MA: The MIT Press. | https://onlineteaching.ca/module-4/ |
TALLAHASSEE — The Florida Public Service Commission (PSC) will conduct a customer service hearing on Thursday, July 30, 2009, for customers of Progress Energy Florida, Inc. (Progress). In March, Progress filed a petition with the PSC for an increase to its base rates.
This month, a typical 1,000 kilowatt-hour (kWh) monthly bill for residential customers is $122.79. Under Progress's proposal, a typical 1,000 kWh monthly bill for residential customers would increase to $135.79 in 2010. Progress provides electric service to approximately 1.6 million retail customers in 35 Florida counties.
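For readers who want the proposed change in concrete terms, the figures above work out to an increase of $13.00 per month, or roughly 10.6%, on a typical 1,000 kWh residential bill; a quick arithmetic check is sketched below.

```python
# Quick arithmetic check of the proposed residential rate increase quoted above.
current_bill = 122.79   # typical 1,000 kWh monthly bill today, in dollars
proposed_bill = 135.79  # proposed typical 1,000 kWh monthly bill for 2010, in dollars

increase = proposed_bill - current_bill
percent = 100 * increase / current_bill
print(f"Increase: ${increase:.2f} per month ({percent:.1f}%)")  # $13.00, ~10.6%
```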
The hearing allows customers to comment on the proposed rates and any quality-of-service issues relevant to the utility. Customers are invited to attend the hearing at the following time and location:
Thursday, July 30, 2009
1:00 p.m.
Apalachicola Community Center
1 Bay Drive
Apalachicola, Florida
The PSC is committed to making sure that Florida’s consumers receive their electric, natural gas, telephone, water, and wastewater services in a safe, affordable, and reliable manner. The PSC exercises regulatory authority over utilities in the areas of rate base/economic regulation; competitive market oversight; and monitoring of safety, reliability, and service.
For additional information, visit www.floridapsc.com. | https://www.radeylaw.com/2009/07/29/psc-customer-service-hearing-set-for-progress-rate-request-in-apalachicola/ |
This is an exciting time for southwest Indiana with a number of opportunities knocking at the door, such as the Multi-Institutional Academic Health Science and Research Center in downtown Evansville and the I-69 Innovation Corridor. It is crucial we make the most of these opportunities by capitalizing on these four factors. If we can accomplish that, we are in for an era of unprecedented regional growth.
1. KNOW-HOW. Southwest Indiana has a strong legacy in manufacturing, healthcare, logistics, agriculture and energy. Support for our higher education institutions and trade schools will allow the region to transfer traditional know-how into a modern and trained workforce.
2. INNOVATION. We are seeing huge growth in our entrepreneurship and innovation networks, with co-working spaces popping up along the I-69 corridor, the Tech on Tap community, USI's Technology Commercialization Academy and award winning technology transfer programs with Crane Naval Base.
3. DIVERSITY. Diversity fuels the intersection of ideas, concepts and cultures and enhances our ability to create disruptive innovations. We can promote and welcome diversity by attracting, embracing and fostering newcomers from all walks of life, thus increasing our innovation potential.
4. COLLABORATION. It is necessary to embrace a culture of regional collaboration, something USI Outreach and Engagement is always striving to do better. The only way we can compete in the 21st century is by expanding our horizons and using our assets as a region. We live in a global economy, and remaining in isolation is no longer an option if we wish to thrive and keep the next generation here! | https://usi.edu/outreach/engage/2016-archives/four-factors-needed-for-regional-growth-in-southwest-indiana/
Generally categorized as Arctic or alpine, tundra refers to a treeless biome that ranks among the coldest on Earth. Though covered in snow most of the year, the tundra experiences a short summer growing season during which animal and plant activity peaks. Virtually no reptiles or amphibians can live in the tundra's harsh conditions, but many tundra plants and animals have developed adaptations that allow them to survive in such a frigid environment.
Mammals of the Tundra
A number of mammals can survive in tundra habitats thanks to special adaptations and the insulation fur and fat provide. A prominent example is the herbivorous musk ox. One of the largest Arctic tundra mammals, the musk ox has a dense coat which, combined with its large size and short legs and tail, reduces the loss of body heat. Other arctic tundra herbivores include arctic hares, squirrels, voles, lemmings and caribou, which have hooves that support them in snow. Arctic tundra carnivores include arctic foxes and polar bears. In alpine tundra, marmots, mountain goats, pikas, sheep and elk occur.
Birds Inhabit the Tundra
Many birds that occur in Arctic tundra are migratory, which means they only travel to such regions during the warmer summer period. These include ravens, snow buntings, falcons, terns and several gull species. Other birds, however, such as ptarmigan and the lemming-eating snowy owl, are year-round tundra residents. Ptarmigan are brown in summer but white in the winter. Male snowy owls are completely white, which makes it difficult for predators to spot them against snow.
Insects of the Tundra
One insect species that has adapted well to frigid conditions is the tundra bumblebee, which has dense hair that guards against heat loss. It can also use its flight muscles to generate heat through shiver-like movements. Mosquitoes, flies and moths are also found in Arctic tundra regions, while grasshoppers and butterflies occur in both Arctic and alpine tundra.
Fish Are Important Tundra Biome Animals
Cod, flatfish and salmon are a few of the fish found in tundra waters. Some tundra fish have special adaptations, like the Alaska blackfish, which produces a chemical that lowers the freezing point of the fluids in its cells. Many animals that live in tundra environment, including fish, grow and reproduce at slower rates. Unlike trout in other parts of the world, for example, tundra lake trout have been known to take up to 10 years to mature.
Tundra Biome Plants
According to the University of California Museum of Paleontology, 1,700 kinds of plants occur in Arctic tundra. Some of the adaptations that allow vegetation to grow in these regions include short roots and furry or wax-like coatings. The flowers of the woolly lousewort, for example, have dense hairs that generate heat through a greenhouse-like effect. Other Arctic tundra plants include shrubs, sedges, reindeer mosses, liverworts, grasses and several species of lichen. Drainage is limited by permafrost in Arctic tundra, but not so in alpine tundra, where dwarf trees and small-leafed shrubs are plentiful.
About the Author
Since beginning her career as a professional journalist in 2007, Nathalie Alonso has covered a myriad of topics, including arts, culture and travel, for newspapers and magazines in New York City. She holds a B.A. in American Studies from Columbia University and lives in Queens with her two cats. | https://sciencing.com/plants-animals-live-tundra-7830304.html |
The mission of the Sindisa Fund is to support and conduct activities that contribute to the global conservation of endangered species.
Purposes
- Advance the global conservation of wildlife in general, but particularly threatened and endangered species and their habitats.
- Educate communities that live on the edge of wildlife populations about the importance of endangered species, ecosystems and biodiversity conservation.
- Facilitate community development that enhances the capacity of communities adjacent to threatened and endangered species populations to support biodiversity conservation.
- Support and conduct wildlife monitoring that enhances the survival of endangered species populations.
- Support and conduct scientific research that aids in the understanding and conservation of threatened and endangered species.
- Raise funds and receive donations for the above endeavors.
- Support and partner with projects and other organizations for the purpose of fulfilling the above goals.
Latest project
The Sindisa Fund was founded in 2013 and received its US 501(c)3 tax-exempt non-profit status in 2014.
The Sindisa Fund
Global Endangered Species Conservation
PHOTO: The Sindisa Fund Founder and Director, Bruce Lombardo, assists in the conservation of the black rhino in South Africa.
| https://www.sindisafund.org/
The vast majority of Africa’s oldest baobabs are dying. The situation of these ancient trees has worsened over the last 15 years, probably because of climate change. Their disappearance has been described as being “of an unprecedented scale”.
The vast majority of the oldest baobabs in Africa have been dying over the past ten years, researchers warned on Monday, pointing to climate change as a possible cause of this disappearance: “It is shocking and spectacular to witness during our lifetime the disappearance of so many millennia-old trees,” explains Adrian Patrut of Babeş-Bolyai University in Romania, co-author of the study published in the journal Nature Plants. “During the second half of the 19th century, the great baobab trees of southern Africa began to die, but over the past 10 to 15 years their disappearance has increased rapidly due to high temperatures and drought,” the researcher continues.
Aged from 1,100 to 2,500 years and towering toward the sky, the baobabs, with their massive trunks crowned by root-like branches, are among the most emblematic silhouettes of the arid savanna, visible for miles around. But in the past 12 years, nine of the thirteen oldest baobabs have partially or totally died, according to the study.
Among the victims, three symbolic monsters: Panke, native of Zimbabwe, the oldest baobab with 2,450 years on the clock, the Platland tree of South Africa , one of the largest in the world, with a trunk of more than 10 meters in diameter and the famous Chapman baobab of Botswana, on which Livingstone engraved his initials, classified national monument.
The researchers discovered this “unprecedented scale” situation almost by chance: they studied these trees to unlock the secret of their incredible measurements. For this, between 2005 and 2017, Adrian Patrut and his colleagues studied all the largest (and therefore usually the oldest) baobabs in Africa, more than 60 in all.
Baobabs die in a region heavily affected by global warming
Traveling through Zimbabwe, South Africa, Namibia, Mozambique, Botswana and Zambia, they collected samples from different parts of the trees, fragments whose age they then determined using carbon dating.
"The cavity of an old baobab in Zimbabwe is so large that 40 people can shelter inside it," notes the website of Kruger National Park in South Africa. Baobabs have been used as a shop, as a prison or simply as a bus stop, and they have also long served as landmarks for explorers and travelers finding their way.
"Baobabs periodically produce new trunks, as other species produce branches," according to the study. These stems or trunks, often of different ages, then merge together. When too many stems die, the tree collapses. "Before we started our research, we had been informed of the collapse of the Grootboom baobab in Namibia, but we thought it was an isolated event," says Adrian Patrut.
"These deaths were not caused by an epidemic," say the authors, who suggest that climate change could be affecting the baobab's ability to survive in its habitat, even though "further research will be needed to support or refute this hypothesis". But "the area in which the millennial baobabs have died is one of those where warming is fastest in Africa," notes Adrian Patrut.
- Most of the oldest baobabs in Africa are dying.
- These huge trees are emblematic of arid savannas.
- Climate change is probably responsible for their disappearance, which has accelerated over the last fifteen years. | https://abcofagri.com/the-oldest-baobabs-in-africa-disappear-ominously/ |
Anomalous and accessory muscles in the palm are anatomical curiosities until they become symptomatic. Accessory FDS muscles presenting in the palm are rare, and only a few cases have been reported in the literature since they were first described in 1970 by Vichare; when present they can be painful and interfere significantly with hand function. We present the case of a 28 year old male mechanic who presented with a painful swelling over his right thenar eminence following a road traffic accident. His grip strength was compromised and he had reduced abduction of his index finger. No intrinsic muscle rupture was detected and no mass lesion was detected. The swelling increased in size and his overall hand function decreased, so surgical exploration was planned. At operation, the FDP tendon to the index finger and the intrinsic muscles were intact and of normal appearance. A tendon ran proximally from this accessory muscle belly into the forearm. While most anomalous muscles are asymptomatic, ours was causing symptoms, particularly due to underlying muscle spasticity. The other clinical relevance of this is that the ring finger FDP, usually supplied by the ulnar nerve, was in this instance supplied by the median nerve. The authors show clear anatomical photographs of this accessory muscle along with an algorithm for investigating suspected anatomical variations.
A-0285 Radiographic prediction of lunate morphology: reliability, reproducibility, and compatibility with MR arthrography
Ji Hun Park, Tae Wook Kang, Seul Gi Kim, Young Woo Kwon, Jong Woong Park; College of Medicine, Korea University Anam Hospital, Seoul, South Korea
Background: Two major lunate types have been proposed on the basis of the absence (Type I) or presence (Type II) of medial facets. Methods: Plain radiographs of a total of 150 wrists were reviewed by three observers. The lunate types were independently evaluated twice using both PA analysis (Lunate Types I and II) and CTD analysis (Lunate Types I, CTD≤2mm; II, CTD≥4 mm; Intermediate, Others). On the CTD analysis, 76 (50.7%) of the total wrists were classified into the intermediate group; excluding them, 27 of 29 Type II lunates (93.1%) and 39 of 45 Type I lunates (86.7%) were compatible with the MRA findings. Conclusions: Both systems had moderate inter-observer and intra-observer reliabilities. | https://comp-org.ru/takeo-spikes-dating-3242.html
A Colorado professor wondered how racist rhetoric stoked by the 2016 presidential election was impacting Latino students, so he conducted academic research that found exposure to racism often led to self-hatred and acceptance of the offensive cultural beliefs lobbed at young Latinos by politicians, the media and their community.
“Although most people might intuitively know that racism negatively affects Latino undergraduates, the findings of this study provide empirical evidence of racism’s impacts,” said Carlos P. Hipolito-Delgado, an associate professor of counseling at the University of Colorado Denver. “Little by little, it begins to chip away at that sense of self.”
Hipolito-Delgado's interest in studying the subject was piqued during the lead-up to the 2016 presidential election and after Donald Trump referred to Mexicans as rapists, drug dealers and criminals. In Colorado, white supremacist and other extremist organizations have been more emboldened now than in past years, data show, and incidents of hate and bias in the state are rising dramatically, experts said.
The study’s participants, 350 first-generation Latino undergraduate students from colleges across the country, took a survey designed to determine whether exposure to racism and encouragement to accept and assimilate to racist notions were predictive of internalized racism.
Questions included whether participants believed certain racist stereotypes, how much they felt like an American and whether or not they’d endured racist experiences like a clerk following them around a store, expecting them to steal.
The survey’s results indicated that participants did internalize hatred directed at them in a way that was statistically significant.
The study defines racial internalization as the conscious or unconscious acceptance of a racial hierarchy that values white people above people of color. Internalized racism has been linked to marital dissatisfaction, increased depressive symptoms, increased stress, decreased self-esteem and decreased life satisfaction, the study said.
Luis Estrada, an electrical engineering student at Metropolitan State University of Denver, was joined by about a dozen undocumented and refugee college students earlier this month at downtown Denver’s Auraria Higher Education Center to share stories and insights with the Colorado Department of Higher Education.
Estrada pulled up an internet meme of a Spongebob Squarepants character on fire who appeared unbothered by the flames.
“This climate we’re in feels like this,” the first-generation college student said. “You just get used to it. I’ve made peace with the fact that I can’t control anything. I can’t control Congress. I can’t control the president’s mood. I am just trying to find internships for myself and work hard.”
Dan Baer, executive director of the state’s higher education department, acknowledged the difficulties marginalized students face.
“We’re living through a really unsettling political environment that’s not welcoming or comfortable, and it’s especially important we have conversations like this because the people around this table don’t have direct power to change what’s going on in Washington, but we do have the ability to support each other,” Baer said to the group of students and members of his staff.
Hipolito-Delgado hopes the study will encourage counselors to intervene by helping Latino undergraduate students talk through discrimination they face.
Saira Galindo, a senior at MSU and an undocumented student, said counseling and access to mental health care was crucial as she worked through the stresses of the political atmosphere and everyday life of being a Mexican immigrant living in the United States.
“It’s been life-saving and life-changing to attend the free counseling services MSU offers because of our lack of access to health insurance,” Galindo said. “We don’t always have to be strong. It feels like it because we’re so busy taking care of our families and working so hard against all this, but there are resources out there.”
Hipolito-Delgado plans on further studying the impacts of racism and bias on academic achievements and the pursuit of college. | https://www.denverpost.com/2018/11/28/latino-racism-in-colorado/ |
The reaction rate of a chemical process will change by what factor if the pH drops from 6.50 to 2.00?
This is an important question for many chemists and engineers who want to know how changing the pH level will affect their product’s reaction rate.
In this article, we'll discuss how to calculate the hydrogen-ion concentration that corresponds to a given pH, and how that concentration enters the rate expression, in order to determine how changes in suspension pH can alter the reaction rate.
The following relation can be used to calculate the reactant concentration:
[H+] = 10^(-pH), where the concentration is expressed in mol/L (M stands for "moles per liter").
Because the change in rate depends only on the ratio of the initial and final concentrations, the total volume (Vt) and final volume (Vi) do not matter; they cancel each other out. We will now look at an example problem that involves pH changes, worked through below.
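As a hedged worked example for the question posed at the start of this article, and assuming the rate law has the form rate = k[H+]^n with n = 1 (first order in H+; the article does not state the order explicitly), the factor follows directly from the two pH values:

\[
\frac{\text{rate}_2}{\text{rate}_1}
  = \left(\frac{[\mathrm{H^+}]_2}{[\mathrm{H^+}]_1}\right)^{n}
  = \left(\frac{10^{-2.00}}{10^{-6.50}}\right)^{1}
  = 10^{4.5} \approx 3.2 \times 10^{4}
\]

So, under that first-order assumption, the reaction rate increases by a factor of roughly 31,600 when the pH drops from 6.50 to 2.00; for a different reaction order n, the factor would be (10^{4.5})^n instead.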
Suppose a chemical process begins with a suspension’s initial pH level of six but has been adjusted to have an endpoint pH value of two by adding hydrochloric acid HCl(aq). | https://fitose.com/by-what-factor-will-the-rate-of-the-reaction-change-if-the-ph-decreases-from-6-50-to-2-00/ |
There has been tremendous interest in developing autonomous driving systems to relieve human beings from disastrous accidents caused by driver fatigue, drunk driving, or inexperienced drivers' mis-operations. At the same time, however, such application scenarios also impose strong safety and security requirements on autonomous driving systems. It is thus desirable for the system to have a second safety-ensuring sub-module besides the main driving functionality. In this proposal, we plan to develop techniques to detect and diagnose anomalies in system logs to provide a further security guarantee for autonomous driving systems. In particular, we consider the situation where a car fully manages itself without any human intervention. Thus, this proposal is an augmentation of existing work that analyzes the divergence between car predictions and human operations. We will leverage the existing infrastructure and data in the Berkeley Deep Drive center, collect more log data, and deploy our proposed system in an autonomous car for evaluation in real-world settings. Starting with a broad categorization of autonomous driving system logs (Section 1), we first propose an effective logging practice (Section 2), and then propose solutions for using these logged data for anomaly detection (Section 3) and diagnosis (Section 4).
The PI and key personnel have extensive experience in adversarial machine learning and system log analysis. The PI's previous research includes black-box attacks [1,5,14] and adversarial examples against different machine learning models [12,13,17]. The key personnel also have experience in traditional computer system log analysis [6–9], and have recently achieved significant improvements in system log anomaly detection through deep learning.
1 Logs in autonomous driving systems
An autonomous driving system relies on multiple sensor sources to provide streaming inputs. Based on the inputs, an intelligent engine can make predictive decisions to control a vehicle. The system thus can log both the input events and the predictions. Furthermore, an autonomous driving system typically has an operating system and multiple other controllers, which constantly produce system event logs. In particular, we consider three broad classes of log events.
Input event logs. We consider input events as the input from sensors. For example, in autonomous driving systems, there are cameras that continuously record surrounding environments while driving, GPS systems that track car locations in real-time, as well as LiDar and Radar that perceive the locations and distances of other objects relative to a car.
Prediction event logs. An autonomous driving system constantly makes decisions to navigate a car based on real-time environmental situations. Example decisions include how to control the direction; what speed should be maintained; and whether to change lane or not. Such predictive decisions should be recorded for online anomaly detection and subsequent analysis. We call such recorded decisions prediction event logs.
Traditional system logs. Note that an autonomous driving system produces system logs like any other computer system. For example, each BDD vehicle platform currently runs Ubuntu 16.04, so system logs such as authentication logs, booting information and package installation logs would be a valuable data source to analyze. There are also other types of traditional vehicle sensor logs, such as fuel monitors or tire pressure monitors.
2 Effective logging for diagnosing autonomous driving systems
System logging practices have been studied for years [18, 19]. An effective logging practice needs to consider the level of detail and the log size. In the scenario of autonomous driving, the system needs to log information such as high-resolution videos taken from the camera, which require much larger storage than other types of log data. Therefore, a good logging practice becomes even more crucial. In this proposal, we plan to explore the options for logging different types of events. Our design is motivated by two goals: (1) the logged information should be sufficient to detect various abnormal events in the autonomous driving vehicle; and (2) the logged information should be sufficient to diagnose the root cause of an anomaly.
History logs are queried to make online decisions, and for offline anomaly detection and diagnosis. Since we cannot store infinite system logs, we need to make a trade-off between query time and log size. Intuitively, later events should have more relevance to the current system status, and thus should be stored with more details for fast querying. In contrast, older events are less relevant to immediate decisions. As a result, more query time could be afforded, and fewer details are needed in them. Therefore, log size could be significantly reduced for older events. In addition to exploring the most effective compression method for each type of log data, a more interesting direction is to utilize machine learning techniques to remove history logs that are less interesting. Specifically, we will design algorithms to learn critical points of the time-series log data, and only keep detailed information around those points. For example, we could learn when the system experiences a novel setting, and keep detailed logs around that period while reducing others.
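As a rough illustration of the tiered-retention idea described above, the sketch below keeps full detail for recent records and for records near learned "critical points," and thins out everything else. It is only a schematic in Python; the window sizes, the downsampling factor, and the way critical timestamps are obtained are illustrative assumptions, not part of the proposal.

# Sketch: tiered retention of time-stamped log records.
# Keep every record that is recent or close to a critical point;
# keep only every k-th record elsewhere.
from dataclasses import dataclass
from typing import List

@dataclass
class LogRecord:
    timestamp: float      # seconds since start of drive
    payload: str          # raw log line or serialized sensor frame

def thin_history(records: List[LogRecord],
                 critical_times: List[float],
                 now: float,
                 recent_window: float = 300.0,   # keep last 5 minutes in full
                 critical_window: float = 30.0,  # keep 30 s around critical points
                 keep_every: int = 10) -> List[LogRecord]:
    kept = []
    for i, rec in enumerate(records):
        recent = now - rec.timestamp <= recent_window
        near_critical = any(abs(rec.timestamp - t) <= critical_window
                            for t in critical_times)
        if recent or near_critical or i % keep_every == 0:
            kept.append(rec)
    return kept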
3 Anomaly detection in autonomous driving logs
The first use case of autonomous driving logs is anomaly detection, that is, detecting whether the autonomous vehicle is exhibiting unusual behaviors that may have severe security implications.
Adversarial example detection using multiple types of inputs. Adversarial examples are carefully designed input data samples that could misguide machine learning models into generating incorrect outputs. Such examples are known to be hard to detect. To the best of our knowledge, all previous work on adversarial example detection focuses on analyzing a single type of input data source [3,15]. In autonomous cars, we propose to leverage multiple data sources collected at the same time to detect possible adversarial examples. In addition to the existing operation prediction mechanism, we will build multiple machine learning models that make predictions using different combinations of input data sources. If the predictions of these models are not consistent, an adversarial input may exist.
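A minimal sketch of that consistency check is shown below. The per-modality predictors are stand-in placeholder functions (not part of the proposal or of any BDD codebase), and the agreement threshold is an arbitrary assumption; the point is only to illustrate flagging an input when models fed different sensor combinations disagree.

# Sketch: flag a frame as suspicious when models that use different
# sensor combinations disagree about the predicted maneuver.
from collections import Counter
from typing import Callable, Dict, List

def camera_only_model(frame: Dict) -> str:      # placeholder predictor
    return frame.get("camera_guess", "keep_lane")

def lidar_only_model(frame: Dict) -> str:       # placeholder predictor
    return frame.get("lidar_guess", "keep_lane")

def camera_lidar_model(frame: Dict) -> str:     # placeholder predictor
    return frame.get("fused_guess", "keep_lane")

def is_suspicious(frame: Dict,
                  models: List[Callable[[Dict], str]],
                  min_agreement: float = 0.67) -> bool:
    votes = Counter(m(frame) for m in models)
    top_action, top_count = votes.most_common(1)[0]
    agreement = top_count / len(models)
    return agreement < min_agreement   # low agreement -> possible adversarial input

frame = {"camera_guess": "turn_left", "lidar_guess": "keep_lane",
         "fused_guess": "keep_lane"}
print(is_suspicious(frame, [camera_only_model, lidar_only_model, camera_lidar_model]))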
Control sequence anomaly detection. Control sequences are sequences of operational decisions predicted by the self-driving car from input data. We propose two approaches to detect anomalies in such sequences. The first is to check the predictions by comparing them with sensor readings. For example, an anomaly might have occurred if the navigation control sequence does not follow the actual GPS reading data series. The second approach is to model such sequences using machine learning techniques. In particular, we assume that a pattern exists in vehicle control sequences. Thus, we could learn this pattern and check whether the current prediction is probable given the history of the control sequence. We plan to use LSTM as a baseline to evaluate the feasibility of learning such sequences, and to propose more advanced probabilistic models for effective anomaly detection.
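The first approach (cross-checking predictions against sensor readings) can be illustrated with the small sketch below, which compares the heading change implied by the predicted steering commands with the heading change derived from consecutive GPS fixes and flags large discrepancies. The planar bearing approximation, the tolerance, and the sample numbers are invented for illustration only.

# Sketch: compare predicted heading change against GPS-derived heading change.
import math
from typing import List, Tuple

def gps_heading_deg(p1: Tuple[float, float], p2: Tuple[float, float]) -> float:
    # Rough planar bearing between two (lat, lon) fixes, in degrees.
    dlat = p2[0] - p1[0]
    dlon = (p2[1] - p1[1]) * math.cos(math.radians((p1[0] + p2[0]) / 2))
    return math.degrees(math.atan2(dlon, dlat))

def check_control_vs_gps(predicted_heading_changes: List[float],
                         gps_fixes: List[Tuple[float, float]],
                         tolerance_deg: float = 15.0) -> List[int]:
    # Return indices of steps where prediction and GPS-derived motion disagree.
    anomalies = []
    for i in range(1, len(gps_fixes) - 1):
        observed = gps_heading_deg(gps_fixes[i], gps_fixes[i + 1]) - \
                   gps_heading_deg(gps_fixes[i - 1], gps_fixes[i])
        if abs(observed - predicted_heading_changes[i]) > tolerance_deg:
            anomalies.append(i)
    return anomalies

fixes = [(37.0, -122.0), (37.0005, -122.0), (37.0010, -122.0005), (37.0015, -122.0015)]
pred = [0.0, 0.0, 0.0, 0.0]   # the car believes it is driving straight
print(check_control_vs_gps(pred, fixes))   # flags the steps where the GPS track curves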
Traditional system logs anomaly detection. Traditional system logs produced by the operating system and the embedded systems running inside an autonomous car reveal system execution paths. For example, using an identifier to represent a log printing statement (LPS) in system source code, one system execution may go through an LPS sequence of "A A B D S" and generate the corresponding system logs. Such sequence patterns resulting from normal executions can be learned in order to detect unusual system behaviors that show up as rare LPS sequences, e.g., system crashes. Previously we achieved effective anomaly detection on traditional computer system logs using LSTM. In this proposal, we plan to explore this direction further. Specifically, inspired by the fact that a code block may contain multiple LPSs that will always be executed one after another, we propose to learn a segmental structure from system log sequences, and to use this segmentation information to improve anomaly detection. Moreover, we will incorporate domain knowledge and the new log types brought by autonomous driving systems. We propose to analyze the correlations over time among system logs and other sensor readings. For one thing, such correlation gives us a deeper understanding of the system status. For another, different logs could show anomalies within the same time period. This could be caused by the same anomaly, or by cascading failures started by one component failure. Such causality information could be reasoned about through correlation analysis.
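A toy count-based version of the LPS-sequence idea is sketched below: transition counts are learned from sequences assumed to come from normal executions, and a new sequence is flagged when it contains a transition that was rare or unseen during training. This is far simpler than the LSTM approach the proposal refers to; the log keys and the threshold are made up for illustration.

# Sketch: flag log-key (LPS) transitions that were rare in normal executions.
from collections import defaultdict
from typing import Dict, List, Tuple

def learn_transitions(normal_sequences: List[List[str]]) -> Dict[Tuple[str, str], int]:
    counts = defaultdict(int)
    for seq in normal_sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
    return counts

def find_anomalous_steps(sequence: List[str],
                         counts: Dict[Tuple[str, str], int],
                         min_count: int = 2) -> List[int]:
    # Return positions whose transition from the previous key was rare or unseen.
    return [i for i in range(1, len(sequence))
            if counts.get((sequence[i - 1], sequence[i]), 0) < min_count]

normal = [list("AABDS"), list("AABDS"), list("AABBDS")]
model = learn_transitions(normal)
print(find_anomalous_steps(list("AACDS"), model))   # 'A'->'C' and 'C'->'D' are unseen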
4 Automatic diagnosis under abnormal events
After an abnormal behavior has been detected, it is desirable to find the cause of the anomaly and finally resolve it. However, tracing back the reason behind a behavior is typically hard, especially due to the opacity of a deep neural network model. Given an abnormal output of the decision making model, our goal is to find out the reason why the abnormal decision is made. We plan to study different approaches to achieve this goal in order to simplify the diagnosis process along two directions.
Models with justification. Understanding the decision making procedure of a neural network model is hard. In the first direction, we plan to study how to revise the model to not only emit one final prediction, but also produce a justification to support the decision. For example, one possible way to instantiate this idea is to design a neural network that can make a chain of predictions so that later predictions are justified by the earlier ones in the chain. We call such a chain decision chain. For such an approach, training can be challenging if the decision chain is not provided in the ground truth. We are interested in exploring reinforcement learning-based approaches for training. We plan to explore the design space of both neural network architectures and the forms of justifications to find the best approach.
Decision interpreting models. The second approach is to design a decision interpreting model to interpret the decisions made by the decision making model. Consider the decision d and all input sources i1, ..., in, which are used by the decision making model to produce the decision d. We can build another model f, which takes d and one input source ij (for j = 1, ..., n), such that f(d, ij) produces a score sj ∈ [0, 1] indicating the likelihood that the input source ij is the cause of the decision d. We call f the decision interpreting model, and we can use it to understand the influence of each input on the final decision without even opening the black box of the decision making process. Note that the decision interpreting model is related to the influence function that has been well studied in the machine learning literature. An additional benefit over the influence function is that the decision interpreting model can also be trained to interpret the decisions made by non-neural-network components of the system. | https://deepdrive.berkeley.edu/project/detecting-and-diagnosing-abnormal-behaviors-autonomous-driving-logs
A urinary tract infection is an infection that can take place in any region of your urinary tract system, such as the urethra, bladder, kidneys or ureters. Most UTIs are caused by bacteria; however, others are the result of fungi and, in some cases, viruses. There are different types of UTIs depending on the part of your urinary tract that is infected. These are:
- Cystitis (bladder) - Here, you might experience frequent urination or pain while urinating. You may also experience pain in your lower belly and bloody or cloudy urine.
- Pyelonephritis (kidneys) - It causes vomiting, nausea, and fever.
- Urethritis (urethra) - You may experience burning when urinating and discharge.
The most common types of UTIs affect the bladder and the urethra. The ureters and kidneys are affected more rarely, although those infections are more severe.
What Causes a Urinary Tract Infection?
Doctors recommend women wipe themselves from front to back after visiting the toilet to prevent chances of getting UTI. The urethra and the anus are close to each other. The large intestines produce bacteria like E. coli, which can move from your anus to the urethra.
The bacteria can also move up to your bladder, and if the UTI is untreated, your kidneys can become infected too. Since women have shorter urethras than men, the bacteria reach the bladder more easily. Women are also more likely to get a UTI through sexual intercourse: during intercourse, pressure may be placed on a woman's urinary tract, and bacteria may move from the anus to the bladder. The body usually gets rid of these bacteria within 24 hours, but bacteria from the bowel have characteristics that let them stay in the bladder. UTIs can also be caused by decreased levels of estrogen, since low estrogen levels interfere with the bacteria in the vagina, increasing the chances of getting a UTI.
Most of the causes of UTI in men are similar to those of women. However, men with an enlarged prostate have a high risk of getting UTI.
Knowing the Signs of a UTI
Some signs are the same in both men and women. Sometimes, men experience rectal pain for the lower tract UTI, which involves the bladder and the urethra. Women may have pelvic pain with the lower tract UTI. Both men and women experience similar symptoms for the UTI that affects the ureter and the kidneys.
In most cases, the symptoms of UTI will depend on the type of UTI you have. Some of the common bladder and urethra UTIs include:
- Burning sensation with urination
- Frequent urination while passing little urine
- Pelvic pain in women
- Bloody urine
- Rectal pain in men
- Cloudy urine
- Strange smelling urine
For the kidneys and ureters, the symptoms can be life-threatening: the bacteria can move from the infected kidney into the blood, which may result in dangerously low blood pressure and even death. People may experience symptoms such as:
- Nausea
- Chills
- Fever
- Pain or tenderness in your upper back or the abdomen
Treating a Urinary Tract Infection
UTI is treated depending on its cause. Doctors first examine your urine sample to identify the organism causing the UTI then diagnose you. Most UTI infections are caused by bacteria that are treated with antibiotics. UTIs caused by viruses are treated with antivirals, while those caused by fungi are treated with antifungals.
However, the type of antibiotic used will depend on the urinary tract infected. Urethra and bladder UTIs are treated using oral antibiotics, while the ureter and kidneys UTIs with intravenous antibiotics.
Since bacteria may become resistant to antibiotics, doctors usually prescribe UTI patients a treatment course that lasts for only a short time. Your doctor may also use your urine test results to determine the antibiotic treatment that will work best for you. If you get a UTI more than three times a year, you should ask your doctor for a treatment plan.
There are home remedies that you can practice. These remedies cannot cure UTI; instead, they boost your medication to give better results. They include: | https://autoproducts.com/home-maintenance/urinary-tract-infections-can-become-a-plague-to-some-people |
With Japan’s surrender in the Pacific War in August 1945 four decades of Japanese colonial rule ended and U.S. and Soviet troops came to be stationed on the Korean Peninsula to both the south and north of the 38th parallel respectively. This resulted in the division of Korea into two separate countries.
On June 25, 1950, North Korea attacked the South on all fronts, igniting a three-year internecine war. The tragic war was stopped with the signing of the Korean Armistice Agreement on July 27, 1953. The peninsula has remained divided ever since, but a mood for peace has recently developed after years of tension. | https://kccuk.org.uk/en/about-korea/inter-korean-relations/historical-background/ |
Our 7-hour Interview Crash Course offers the perfect balance of theory and practice to give you the best chance of getting that coveted place at Medical School.
Doing well at interview is all about coming across as a friendly, likeable person who is passionate about becoming a doctor, and who has the qualities, knowledge and determination to be a good one. That sounds pretty simple, and indeed it is – there's nothing particularly complicated or difficult about the interview at all. But having said that, preparation is, of course, essential, and as the majority of students have never done anything of the sort before, it can be a significant source of worry and stress.
We want to help with that worry and stress. We can't get rid of it completely, but we can give you the tools you need to succeed at the interview. We'll teach you how to go about structuring your answers, we'll teach you what you need to know about things like medical ethics, the NHS and current affairs, and we'll give personalised feedback to make sure you know what to work on. That's a really important point, by the way – being good at interviews (and indeed, at life in general) requires you to understand your own areas of weakness and actively work on them. We'll do our best to help you identify these issues, but it'll be down to you to improve them yourself.
Morning | 10:00am – 1:00pm
We’re going to spend the morning taking you through the fundamentals – some of what we’ll be going over is listed to the right.
The whole day is going to be very interactive, with plenty of group discussion and very little in the way of plain, spoon-fed teaching. The fact that we’ll have just 15 students in each class means that everyone will get multiple opportunities to contribute and to shine. And that also means that the instructor will be able to subtly identify your individual areas of weakness and give you tips to improve them.
- Communication skills seminar
- Answering common question types
- Structuring your answers
- Medical ethics workshop & group work
- NHS and Current Affairs – Things you should know
- MMI stations – Lateral thinking & breaking bad news
Afternoon | 2:00pm – 5:00pm
This is where the real fun begins! We’ll be giving one lucky volunteer an intense mock interview in front of the group, which will consist of a mix of standard questions, lateral thinking questions, medical ethics and MMI scenarios. If you’re the one being interviewed you can (and should) use the knowledge you gained in the morning to give yourself an advantage, and after your interview, the instructor and the other students will give you constructive feedback about what you did wonderfully and what could be improved. If you’re watching someone else being interviewed, you should be thinking of what your own responses to the questions would be and what you think of their responses are as “the interviewer”.
After the interviews and feedback, we’ll be finishing the day off with a somewhat intense quick-fire session, where we’ll be throwing random questions and scenarios at you to see if you’ve got a grasp of what we spent the day trying to teach you. There’ll also be an opportunity after the course for you to chat to the instructors about the universities you’re applying to, your personal statement, or anything else you’re concerned about.
Throughout the day, constructive honesty will be our default policy. If we think you’re coming across as too arrogant, too ignorant or just socially inept, we will have no qualms about telling you this to your face. We want to create an environment where everyone can improve, and skirting delicately around areas of individual weakness isn’t going to help anyone.
The Interview Crash Course Handbook
Just like with our BMAT and UKCAT courses, every student who attends the Interview Crash Course will get a free copy of our Course e-book. You should treat this as your Bible during the interview season – it will greatly help you understand what’s going on in the medical world, and will therefore be beneficial to your life.
Written by our team of experienced medical students, the Interview Crash Course Handbook is the perfect reference manual for your interview preparation. The content is presented in an informal yet informative style, which makes it much easier to read than the majority of “proper” interview books on the market. We’ve included everything we think you should know before your interview, including easy-to-follow articles about the NHS and Current Affairs that they love to ask about. | https://www.interviewcrashcourse.com/course/ |
As a player for New River United, I pledge to:
- Play for fun and enjoyment of the game.
- Be a good sport.
- Respect my coaches, teammates, parents, opponents, and referees.
- Obey the laws of the game and play within the spirit of the laws.
- Work hard to improve my soccer skills.
- Be a team player, to get along with and cooperate with my coaches and teammates.
As the parent of a New River United player, I pledge to:
- Support New River United and my child’s team in requiring players to abide by the Players Code of Conduct.
- Use language appropriate to youth sports at all times
- Refrain from coaching my child or other children from the sideline during the game.
- Support the coach in providing positive reinforcement to the players.
- Show respect to the game officials and the players and coaches from both teams.
- Remember that the game is for the children – not for the adults.
New River United expects all coaches to set a positive example for the young people participating in our recreational program by demonstrating good sportsmanship, respecting the game, acting ethically, and providing positive instruction.
Coaches should respect the game and its rules. This means that coaches extend respect to their players, their opponents, the officials, and the game itself. Coaches are expected to act ethically at all times, to respect the spirit as well as the letter of the Laws of the Game. Lead by example.
New River United Recreational coaches need to recognize the wide spectrum of experience and development present in all divisions of the program. Recognize the individual abilities of every player and seek to move every player up the developmental spectrum over the course of the season. Coaches should strive to provide a safe and supportive atmosphere at practices and games that fosters development and improvement, not just competition.
New River United depends on the service of hundreds of volunteers every year. We have a strong tradition of excellence in coaching and service to young people. We count on our coaches to uphold that tradition.
As a youth coach, remember that this is a game for young players and strive to make it a positive and memorable experience for them. | https://www.newriverunited.com/Default.aspx?tabid=744578 |
This Access tutorial explains how to add an auto number in an Access Query using an Access VBA Function.
You may also want to read:
Add sorting in Access Table field
Add Auto number in Access Table
In Microsoft Access, go to Design View of a Table and define the Data Type of a field as AutoNumber
Go to data view of the Table, each row of data is assigned a sequence number in ascending order.
However, we cannot add the AutoNumber Data Type in Query Design View, so we need a workaround to add an auto number in an Access Query.
Add Auto number in Access Query
To add an auto number in an Access Query, there are several solutions on Google. Some have very low performance for large data sets, and some assign a ranking rather than a running sequence number. Among all the solutions, I prefer the one from tek-tips.
This solution creates a VBA Function, then you can use the Function directly in Query Expression.
Step 1 – Create User Defined Function
Press ALT + F11 > insert a Module > copy and below and paste to the Module.
Global seqNumber As Long
Global lastcall As Date

Function wAutoNumber(i) As Long
    'Reset the counter if the Function has not been called for more than 4 seconds
    If Now - lastcall > 4 / 60 / 60 / 24 Then
        lastcall = Now
        seqNumber = 0
    End If
    seqNumber = seqNumber + 1
    wAutoNumber = seqNumber
End Function
This Function adds 1 each time the Function is run. The Function is run from row one data to the last row of the Query, therefore row 1 will have an auto number of 1, row 2 will have 2.
The auto number (seqNumber) is stored as a Global variable, meaning the variable will not be reset to zero even after the Function is ended. Therefore we need to define when the variable will reset. The statement
Now - lastcall > 4 / 60 / 60 / 24
defines 4 seconds as the reset interval; you can change 4 to another number. Note that if you run the Query again within 4 seconds, the auto number will continue to increase rather than restart from 1.
If you try to run a Macro that exports more than one Query without waiting 4 seconds in between, try the following Sub, which resets the counter explicitly before each export:
Public Sub exp()
    seqNumber = 0
    DoCmd.TransferSpreadsheet acExport, acSpreadsheetTypeExcel12Xml, "Query1", "C:\test\Query1.xlsx", True
    seqNumber = 0
    DoCmd.TransferSpreadsheet acExport, acSpreadsheetTypeExcel12Xml, "Query2", "C:\test\Query2.xlsx", True
End Sub
Step 2 – Add Expression in Query
Add an Expression in a Query using the above User Defined Function. The parameter is actually meaningless (it can be any field name); it is only included so that the Function can work properly.
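For example (the field, alias and table names here are just placeholders, not taken from the original screenshots), the expression in the Query's Field row might look something like the first line below, and the equivalent SQL view would look like the second:

SeqNo: wAutoNumber([AnyFieldName])

SELECT wAutoNumber([AnyFieldName]) AS SeqNo, Table1.* FROM Table1;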
Result
Run the Query, now the auto number is generated in a new field.
Note that the number begins with 2 instead of 1; I believe this is because Access evaluates the Function once before running it for the first record in this Datasheet View.
However, if you export the Query to Excel, you will see the number begins with 1. | http://access-excel.tips/add-auto-number-in-access-query/ |
Consumers, providers and health care purchasers need high-quality information to help them compare and evaluate their health care options. The CAHPS V project will advance the AHRQ CAHPS mission of improving patients' experiences with health care by developing and evaluating strategies for survey measurement, reporting and quality improvement (QI). We propose a 5-year effort to advance the science and practice of patient experience assessment, continue innovation to ensure relevance to health service delivery and implement best survey practices, further the science of reporting, and evaluate CAHPS QI efforts. We will develop program communication strategies, and disseminate and promote use of CAHPS products. In particular, we will develop a survey to assess patient experiences with end-of-life care; develop new items to assess shared decision-making, care coordination, patient engagement, and patient safety; test alternatives to the standard CAHPS modes of data collection; explore the feasibility of administering a short-form survey dividing CG-CAHPS composites among respondents to reduce response burden; elicit stakeholder feedback about the value of different CAHPS supplement item sets; evaluate "The Your CAHPS Survey," which was designed to help users of the CAHPS surveys compile a survey tailored to their specific needs; and evaluate existing Spanish translations of CAHPS surveys. In addition, we will gather input from stakeholders on best practices for narrative data analysis, develop an approach for using automated Natural Language Processing for analyses of narratives, and construct an algorithmic approach to select representative narratives that reflect and illustrate overall provider ratings. Finally, we will evaluate the contribution of patient narratives to quality improvement efforts in hospital care for children, characterize primary care practices' use of the CG-CAHPS survey and patient-centered medical home items during PCMH transformation, assess the impact of pay-for-performance for care delivered by primary and specialty care safety net providers on CAHPS survey responses, explore the value of new shared decision-making, patient engagement, communication and patient safety items for QI, and identify QI strategies that improve patient experience across various settings. We will also advance analytic methods for CAHPS data. The project team is well suited to achieving the study objectives given its prior accomplishments and established working relationships. The work is innovative and designed to facilitate the use of CAHPS surveys and improve response rates to them, enhance reporting and use of CAHPS survey data, and improve health care QI efforts.
This project will advance public health by developing new CAHPS survey items for end-of-life care, shared decision-making, care coordination, patient engagement, and patient safety, promoting use of CAHPS surveys by implementing and evaluating an interactive database tool designed to assist with the assembly of CAHPS surveys, creating parsimonious variants of CAHPS surveys, evaluating alternative methods of data collection designed to improve response rates, enhancing the collection and use of patient narrative data in reports about health care, and assessing the impact of CAHPS surveys and reports on quality improvement efforts and patient experiences with care. | https://grantome.com/grant/NIH/U18-HS025920-04 |
Douglas Stratton Principal Designer
When it comes to thoughtful, intentional, and beautiful design, Doug has an unparalleled knack for creating a vision on paper and bringing it to life. In addition to his talent for visualizing the potential of a space, Doug’s drive, tireless work ethic, commitment to excellence, and ability to premeditate and overcome challenges sets him apart in the interior design world.
Originally from Los Angeles, Doug spent over a decade in the film industry creating specialized lighting and sets for commercials, music videos, film, and television. This experience in the film industry served to hone his natural talent for design and gave him an intimate understanding of how exceptional lighting is a necessity for a beautiful environment.
After relocating to beautiful Asheville with his wife and daughter, Doug started Stratton Design Group. Now, with more than 15 years of experience and an extensive portfolio, SDG specializes in interior environments for luxury homes and hospitality projects. Doug’s vast understanding of mood, ambiance, character, lighting, and functional, creative design has proven to be a perfect match for the building design and hospitality industries. | https://strattondesigngroup.com/about/meet-our-team/ |
We’re thirsty for insights and curious about what makes things tick.
As kids, our Strategists were the kind of people you’d find dismantling toys to see how they worked or asking ‘why?’ one too many times. That curiosity and hunger to learn is put to good use at Cucumber.
What We Do
Define the Problem
Defining the right problem means that when thinking about a solution we focus on the problem and not on its symptom.
Identify the Benefits
Identifying the benefits you and your customers/users expect from any solution guides the definition of the solution to ensure success.
Understand the Journey
Working collaboratively with you and end users we map out the process or journey to understand the process, people and systems involved.
Confirm Direction
We present insights and recommendations to confirm the focus when defining what the future could look like. | https://www.cucumber.co.nz/services/seed/understand-the-problem/ |
Purpose: This chapter examines two rather extreme examples of non-human entities in home assemblage, interior objects, and companion animals, and how their agency appears distributed with human consumers in assembling home. The authors aim at drawing conceptual contrasts and overlappings in how agency expresses itself in these categories of living and non-living entities, highlighting the multifaceted manifestations of object agency. Methodology/Approach: This chapter employs multiple sets of ethnographically inspired data, ranging from ethnographic interviews and an autoethnographic diary to three types of (auto-)netnographic data. Findings: The findings showcase oscillation of agency between these three analytic categories (human, non-human living, and non-human non-living), focusing on how it is distributed between two of the entities at a time, within the heterogeneous assemblage of home. Furthermore, the findings show instances in which agency emerges as shared between all three entities. Originality/Value: The contribution of this chapter comes from advancing existing discussion on object agency toward the focus on distributed and shared agency. The research adds to the prevailing discussion by exhibiting how agency oscillates between different types of interacting entities in the assemblage, and in particular, how the two types of non-human entities are agentic. The research demonstrates the variability and interwovenness of non-human and human, living and non-living agency as they appear intertwined in home assemblage. | https://harisportal.hanken.fi/en/publications/when-your-dog-matches-your-decor-object-agency-of-living-and-non--2
Advances in technology continue to alter the ways in which we conduct our lives, from the private sphere to how we interact with others in public. As these innovations become more integrated into modern society, their applications become increasingly relevant in various facets of life.
Wearable Technology and Mobile Innovations for Next-Generation Education is an authoritative reference source on the development and implementation of wearables within learning and training environments, emphasizing the valuable resources offered by these advances. Focusing on technical considerations, lessons learned, and real-world examples, this book is ideally designed for instructors, researchers, upper-level students, and policy makers interested in the effectiveness of wearable applications.
The many academic areas covered in this publication include, but are not limited to:
International contributors in teaching, counselor education, rehabilitation counseling, educational leadership, computer applications, and science literacy offer an overview of the development and uses of wearable technologies in business and education. Section 1 deals with mobile and wearable technology introduction, offering four chapters on areas such as purposes and technical considerations of wearable technologies, educational technology, and health and fitness wearable technology. Section 2 covers device extensions in areas such as digital badges, wearable cameras, and educational multimedia. Section 3 describes applications such as wearables for people with disabilities, augmented reality teacher training, and Google AdSense as a mobile technology in education. Section 4 describes applications in applied sciences, such as smart device clickers and wearables technologies for earth science. B&w illustrations and process diagrams are included.
This book is a valuable tool for all the stakeholders in the educational field, especially educators, researchers, policy-makers, and undergraduate and graduate students interested in using technology to develop and implement active learning programs. Not only does it provide information about technology and devices, but it provides the context in which to apply the technology in a collaborative and active learning environment. | https://www.igi-global.com/book/wearable-technology-mobile-innovations-next/142108 |
BLUESTREAM SOLUTIONS has an extensive and robust Information Security Program that consists of a vast array of policies, procedures, controls and measures. This Information Security Policy is the foundation of this program.
Policy Statement
Information and physical security is the protection of the information and data that the Company creates, handles and processes, in terms of its confidentiality, integrity and availability, from an ever-growing number and wider variety of threats, both internal and external. Information security is extremely important as an enabling mechanism for information sharing with other parties. The Company are committed to preserving the Information Security of all physical, electronic and intangible information assets across the business, including, but not limited to, all operations and activities. We aim to provide information and physical security to:
• Protect customer, 3rd party and client data
• Preserve the integrity of The Company and our reputation
• Comply with legal, statutory, regulatory and contractual compliance
• Ensure business continuity and minimum disruption
• Minimise and mitigate against business risk
Purpose
The purpose of this document is to provide the Company’s statement of intent on how it provides information security and to reassure all parties involved with the Company that their information is protected and secure from risk at all times. The information the Company manages will be appropriately secured to protect against the consequences of breaches of confidentiality, failures of integrity, or interruptions to the availability of that information.
Scope
This policy applies to all staff within the Company (meaning permanent, fixed term, and temporary staff, any third-party representatives or sub-contractors, agency workers, volunteers, interns and agents engaged with the Company in Greece or overseas). Adherence to this policy is mandatory and non-compliance could lead to disciplinary action.
Objectives
The Company have adopted the below set of principles and objectives to outline and underpin this policy and any associated information security procedures:
- Information will be protected in line with all our data protection and security policies and the associated regulations and legislation, notably those relating to data protection, human rights and the Freedom of Information Act
- All information assets will be documented on an Information Asset Register (IAR) by the IT Manager and will be assigned a nominated owner who will be responsible for defining the appropriate uses of the asset and ensuring that appropriate security measures are in place to protect it
- All information will be classified according to an appropriate level of security and will only be made available solely to those who have a legitimate need for access and who are authorized to do so
- It is the responsibility of all individuals who have been granted access to any personal or confidential information, to handle it appropriately in accordance with its classification and the data protection principles
- Information will be protected against unauthorised access and we will use encryption methods
- Compliance with this Information Security and associated policies will be enforced and failure to follow either this policy or its associated procedures will result in disciplinary action The IT Manager has the overall responsibility for the governance and maintenance of this document and its associated procedures and will review this policy at least annually to ensure this it is still fit for purpose and compliant with all legal, statutory and regulatory requirements and rules. | https://bluestream.gr/information-security-policy |
The concept of a person held by a group of people is fundamental to understanding not only how a person within such a framework of thought views himself, but also how other matters, such as the ideas of being, morality, knowledge and truth that are essential for the ordering of society, are viewed. This is emphasized by the fact that such a concept encapsulates the role the society expects the individual to play for the attainment of an orderly society, and this makes it inevitable for African scholars to write on the conception of a person from African perspectives. Among the Yoruba of south-western Nigeria, a person is believed to be made up of three important parts. These are the "Ara", which is the material body, including the internal organs of a person; the "Emi", which is the life-giving element; and the "Ori", which is the individuality element that is responsible for a person's personality. In Akan ontology, a person is also made up of three parts, namely the "Okra", the "Sunsum" and the "Honam" or "Nipadua", representing the soul (or life-giving entity), the spirit that gives a personality its force, and the body respectively.
CHAPTER ONE
INTRODUCTION
The position of the human person in the world is what gives meaning to our world. The human person governs and rules over other human persons, every community and society has human persons occupying it, and it can even be argued that society exists because human persons occupy it. This importance of the human person makes it an object of study for scholars who inquire into the ontological and normative conceptions of a person, and African scholars are not exempted. This chapter is divided into seven sections. The first section presents an overview of metaphysics, which is necessary because African scholars do not exclude metaphysics from their account of a person. The second section discusses briefly the African conception of a person in general, and the third section presents an explanation of the Akan conception of a person drawing on the expositions of Gyekye and Kwasi Wiredu. The fourth section presents an explanation of the Yoruba conception of a person according to some African scholars, namely Bolaji Idowu, Barry Hallen and Shodipo, Olusegun Oladipo, Segun Gbadegesin, Kola Abimbola and Wande Abimbola. The fifth section is a comparative analysis of the African conception of a person and the Western conception, and the seventh section is a comparative analysis of the Akan conception of a person and the Yoruba conception.
BACKGROUND TO METAPHYSICS
There are disagreements about the nature of metaphysics. Philosophers' attempts to give a definition of metaphysics have given rise to a variety of subject matters and approaches, which implies that there is uncertainty regarding what metaphysics is. Despite the disagreement, two things can be deduced from the scope of metaphysics. On the one hand, metaphysics is descriptive in nature; that is, it gives an account of what metaphysicians do. On the other hand, it is normative in nature; that is, it attempts to identify what philosophers ought to do when they engage in metaphysics.
However, the term 'metaphysics' is taken from the title of Aristotle's treatise. Aristotle himself never called the treatise by that name; rather, the name 'metaphysics' was conferred by later thinkers who studied under Aristotle. Aristotle called the discipline pursued in his treatise 'first philosophy' or 'theology', which aimed at wisdom. Aristotle also described it as 'knowledge of first causes'. The subsequent use of the title 'metaphysics' makes it reasonable to suppose that what is called metaphysics is the sort of thing done in that treatise. Metaphysics is a discipline that centers on God as the first cause, unlike other disciplines, such as economics and ethics, whose ends are directed towards human action. Moreover, metaphysics is not only interested in explaining first causes but also in the study of 'Being qua Being'. Metaphysics studies beings from the perspective of their being beings that exist. In other words, metaphysics considers things as beings or as existents and seeks to explain the specific properties or features they exhibit insofar as they are beings or existents. Metaphysics explains the concept of being and the general concepts, such as unity or identity, difference, similarity and dissimilarity, that apply to everything that exists.
In the Medieval Aristotelian tradition, there is a dual characterization of what metaphysics is: the medievals believed that the two conceptions of metaphysics are realized in a single discipline. This single discipline aims at explaining the categorical structure of reality on the one hand, and at establishing the existence and nature of divine substance on the other. This view was rejected by the continental rationalists of the seventeenth and eighteenth centuries, which led to an expansion of the scope of the metaphysical enterprise. Nevertheless, the rationalists of that period agreed that metaphysics identifies and characterizes the most general kinds of things that exist, and they also agreed on the idea of the divine substance. The rationalists confronted this idea with an intellectual landscape that led to the ultimate emergence of a general map of metaphysics.
Contemporary philosophers, that is, philosophers from the 20th and 21st centuries, such as John Austin and A. J. Ayer among others, refer to 'metaphysics' as a branch of philosophy distinct from other branches such as ethics and epistemology. Metaphysics as a branch of philosophy attempts to find answers to the most general questions, such as 'what is it?', that is, what kinds of things exist in reality. There is no generally accepted answer to this question, and this has led to disagreement about what objects or things exist in reality. Attempts to answer the question give rise to different theories in metaphysics. At this juncture, I shall proceed by discussing metaphysics as a branch of philosophy, what it centers on and the various questions it proposes, which leads to the discussion of the concept of a person in both Western and African culture.
1.1 METAPHYSICS AS A BRANCH OF PHILOSOPHY
The word 'philosophy' originates from two ancient Greek words, 'philo' and 'sophia', which together mean 'love of wisdom'. Philosophy consists of four branches: epistemology (known as the theory of knowledge), logic (which deals with reasoning), ethics (which deals with moral behavior) and metaphysics (which deals with the nature of what exists).
The word 'metaphysics' is difficult to define. As a result, some twentieth-century philosophers replaced the term with words such as 'meta-language' and 'meta-philosophy', because they viewed it as that branch of philosophy that studies what goes beyond the physical or visible. Metaphysics deals with questions about reality which cannot be answered by scientific observation and explanation. In Western philosophy (philosophy done in the West), metaphysics is the study of the fundamental nature of 'what is', 'why it is' and 'how it can be understood'. It deals with questions like: what is it that exists? What is reality? Does free will exist? (Free will is the doctrine that the conduct of human beings expresses personal choice and is not simply determined by physical or divine forces.) Is there such a process as cause and effect? And do abstract concepts like 'number' exist?
There are three traditional branches of metaphysical inquiry. Ontology: the word is derived from the Greek term 'on', which means reality, and 'logos', which means 'study of'. Ontology is that branch of philosophy that deals with the study of the nature of reality: what is it, how many 'realities' are there, what are its properties, and so on. Theology, on the other hand, is that which treats truths of faith concerning God and His works; it centers on questions such as whether gods exist, what a god is, and what a god wants. Third is universal science, which involves the search for first principles, such as the origin of the universe and the fundamental laws of reasoning. | https://projectwriters.com.ng/projects/a-comparative-analysis-of-akan-and-yoruba-conception-of-a-person/
Introduction {#s1}
============
The ability to navigate one\'s environment is a fundamental survival skill, required to locate sources of food (e.g., restaurants) and other important resources, such as shelter, and simply to navigate between desired locations. Spatial updating enables the navigator to keep track of the spatial relationship between themselves and their surroundings when moving. According to the types of information being used in spatial updating, navigation can be classified as either piloting (landmark-based navigation) or path integration (dead reckoning or velocity-based navigation) (Gallistel, [@B34]; Yoder et al., [@B84]). In piloting, the navigator updates his or her current position and orients within the environment by using external cues, such as significant landmarks (specific buildings, intersections, etc.), in conjunction with a map. In path integration, the navigator integrates self-motion information (e.g., velocity and acceleration information) to estimate his or her current position and orientation relative to the starting point (Gallistel, [@B34]; Etienne, [@B28]). Self-motion (ideothetic) information is derived from the integration of vestibular information from the otoliths and semicircular canals, proprioceptive information from the muscles, tendons, and joints, motor efferent copies, and optical flow. Recent studies suggest that optical flow provides sufficient information for updating position and orientation (Riecke et al., [@B76]; Gramann et al., [@B36]).
Thus, spatial updating allows topographical orientation, which is generally defined as an individual\'s ability to orient and navigate from one place to another in the environment (Maguire et al., [@B57]). Spatial navigation requires many complex cognitive processes, such as attention, perception, memory, and decision-making skills (Redish, [@B74]; Brunsdon et al., [@B9]). Visual mental imagery, in particular, has been suggested to be a cognitive skill critical for successfully navigating in the environment (Farah, [@B29]; Riddoch and Humphreys, [@B75]; Davis and Coltheart, [@B17]; Brunsdon et al., [@B9]). During actual spatial navigation, individuals usually use mental imagery to internally represent spatial information, such as landmarks and routes, and use this information to navigate the environment (Farah, [@B29]; Davis and Coltheart, [@B17]; Brunsdon et al., [@B9]). In this way, individuals create a mental image of the environment in which they are navigating and to manipulate and rotate their spatial map to update their current position with respect to their target location (Palermo et al., [@B67]). Furthermore, neuropsychological studies of patients with brain damage or congenital neurodevelopmental defects suggest that compromised topographical orientation abilities are associated with disturbances in the capacity to form mental images of pathways and landmarks that would be encountered during navigation (De Renzi, [@B20]; Aguirre and D\'Esposito, [@B2]; Iaria et al., [@B43]). These findings suggest that internal representations of the environment, and manipulation of these representations, are indispensable cognitive functions required for spatial navigation.
Recent noninvasive studies that simulate spatial navigation using virtual reality and photos of scenes have identified the brain regions recruited during spatial navigation: the hippocampus, parahippocampal gyrus, posterior cingulate gyrus, temporal cortex, insula, superior and inferior parietal cortex, precuneus, dorsolateral prefrontal cortex, medial prefrontal cortex, premotor area, and supplemental motor area, etc. (Aguirre and D\'Esposito, [@B1]; Aguirre et al., [@B4]; Maguire et al., [@B56]; Burgess et al., [@B11]; Hartley et al., [@B39]; MacEvoy and Epstein, [@B54]; Spiers and Maguire, [@B77],[@B78],[@B79]; Wolbers et al., [@B83]; Iseki et al., [@B44]). Because navigation induces activation of many cortical regions simultaneously, activity in these areas must be integrated and functionally interrelated. Consistent with this idea, parallel coherent activation has been reported during virtual navigation (Li et al., [@B53]; Hori et al., [@B42]).
However, it is unknown whether the above activated brain regions are associated with spatial updating or with other cognitive processes; owing to its low temporal resolution, no fMRI study has investigated brain activity at the moment when subjects explicitly update their spatial locations. Although three previous electroencephalogram (EEG) studies investigated spatial updating (Bellebaum and Daum, [@B8]; Peterburs et al., [@B70], [@B71]), these studies investigated updating of retinal coordinates of images after saccades, but not the updating of one\'s own location. The aim of the present study was to record EEGs while the subjects explicitly updated their spatial locations during virtual navigation. To this end, we set up two task conditions: the control phase of the task required no spatial updating since green lines on the floor indicated the path, while the test phase of the task, without the green lines, required explicit spatial updating based on relationships among multiple landmarks in the virtual space. In the test phase, beep sounds, which were generated at the moment when the subjects successfully reached the spatial reference points, indicated that they were located at the correct places. In the control phase, the same beep sounds were generated when the subjects reached the same spatial locations, although spatial updating was not required. In this study, event-related potentials (ERPs) in response to the beep sounds generated at the moment subjects reached spatial reference points and updated their locations in a virtual environment were recorded. The current source density of the ERP components was analyzed by the standardized low-resolution brain electromagnetic tomography (sLORETA) method (Pascual-Marqui, [@B69]), and compared between the two task conditions.
Furthermore, recent studies suggest different theories: (1) Wang and Spelke ([@B81]) suggest that egocentric spatial representation dominates, wherein the subject is in the center of the reference frame coordinates, whereas (2) Burgess ([@B10]) suggests that both egocentric and allocentric (the center of the reference frame is independent of the subject) representations are processed in parallel during updating and navigation. These differences in spatial representation might underlie individual differences in navigation strategies \[e.g., allocentric (bird-view) or egocentric (landmark) strategies\] (Jordan et al., [@B47]). The results by sLORETA are discussed in terms of these two forms of spatial representation.
Materials and methods {#s2}
=====================
Subjects
--------
Twelve healthy right-handed male university subjects (mean age, 23.3 ± 0.69 years) participated in the study. They were naïve to the task used in the present study, and none of the subjects had a history of neurological problems. All subjects were treated in strict compliance with the Declaration of Helsinki and the U.S. Code of Federal Regulations for the protection of human participants. The experiments were conducted with the full consent of each participant using a protocol approved by the ethical committee at the University of Toyama. The subjects had no previous experience with participation in similar experiments.
Experimental paradigms
----------------------
The subjects were seated 1 m from a 20-inch LCD monitor in a grounded chair within a dimly lit, shielded room. For this task, a large virtual town was created using commercial 3D software (EON Studio ver.2.5.2, EON Reality Inc., Irvine, CA, USA). The virtual town consisted of streets and a series of buildings (Figure [1A](#F1){ref-type="fig"}). The subjects were required to manipulate a joystick with their right hand in order to navigate the virtual town presented on the monitor from a 3D first-person view. They grasped the joystick using their thumb, forefinger, and middle finger in a pronated hand position, and could move the joystick in all directions at a constant speed. The joystick could travel a maximum of 2.5 cm from the center position in any direction, which corresponded to a rotation of the joystick from the perpendicular by 30°. Participants were able to freely navigate at constant speed in the forward, backward, right, and left directions using the joystick.
[Figure 1]{#F1}
After setting up the electrodes, the subjects were given three trials to learn the navigation route and the layout of the virtual town. The navigation route contained 10 circular checkpoints labeled with numbers from 1 to 10, which were sequentially connected by a green line on the streets (Figure [1A](#F1){ref-type="fig"}). The subjects were required to sequentially trace the checkpoints from 1 to 10 along the green line by manipulating the joystick (control phase). When subjects entered each correct checkpoint, a beep sound lasting 0.53 s was generated. When the subjects entered checkpoint 10, the task was terminated. After a 1-min inter-trial interval, the next trial began by displaying a scene near checkpoint 1 in the virtual town. After these three learning trials (control phase), the subjects were required to perform the same task three times, except that the 10 circular checkpoints and green line were not shown in the virtual town (test phase). However, the same beep sound was generated when they reached each checkpoint. EEG recordings were performed throughout the control and test phases of the experiment. EMG recordings were performed in the test phases of the experiment.
Recordings
----------
The EEG (bandpass filtered at 0.3--120 Hz, with a sampling rate of 500 Hz) was recorded from 60 Ag/AgCl electrodes that were mounted on the subject\'s scalp, based on the International 10--20 extended system (Figure [1B](#F1){ref-type="fig"}). These were referenced to the average reference, and impedance was maintained below 5 kΩ. Electrooculograms (EOGs) with the same bandpass and sampling rate were also recorded to detect blinking and eye movements. A ground electrode was placed on the forehead.
Data analysis
-------------
The EEG data were processed using Matlab (V7.10.4) (The Math Works, Natick, MA, USA) with the EEGLAB toolbox (Delorme and Makeig, [@B19]) before the data were analyzed by sLORETA. EEG artifacts due to the task (i.e., eye blink and saccade-related artifacts) were removed by independent component analysis (ICA) (Makeig et al., [@B59], [@B60]; Jung et al., [@B48]; Delorme and Makeig, [@B19]). Epochs containing EEG signals exceeding ±100 μV were also discarded from the data. To analyze the event-related potentials (ERPs) generated when the subjects arrived at the checkpoints (control phase) or the spatial reference points corresponding to the checkpoints (test phase), 2 s of EEG data were extracted, from 1 s before to 1 s after entering each checkpoint or spatial reference point.
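As a concrete illustration of this preprocessing step, the sketch below shows an analogous epoching and artifact-rejection pipeline in Python with MNE, rather than the Matlab/EEGLAB tools actually used in the study. The file name, the event code marking checkpoint arrival, the ICA settings, and the baseline window are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of the epoching/artifact-rejection step described above,
# using MNE-Python instead of the authors' Matlab/EEGLAB pipeline.
# File name, event code, ICA settings, and baseline window are hypothetical.
import mne

raw = mne.io.read_raw_fif("subject01_navigation_raw.fif", preload=True)  # hypothetical file
raw.filter(l_freq=0.3, h_freq=120.0)  # bandpass as reported (0.3-120 Hz)

# Remove blink/saccade components with ICA, analogous to the EEGLAB ICA step.
ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
eog_inds, _ = ica.find_bads_eog(raw)   # assumes EOG channels are present in the recording
ica.exclude = eog_inds
raw_clean = ica.apply(raw.copy())

# Extract 2-s epochs (1 s before to 1 s after arrival at a checkpoint or
# spatial reference point), rejecting epochs that exceed +/-100 microvolts.
events = mne.find_events(raw_clean)    # assumes arrival triggers were stored in a stim channel
ARRIVAL = 1                            # hypothetical event code for "arrival"
epochs = mne.Epochs(raw_clean, events, event_id={"arrival": ARRIVAL},
                    tmin=-1.0, tmax=1.0, baseline=(-1.0, 0.0),
                    reject=dict(eeg=100e-6), preload=True)
erp = epochs.average()                 # ERP aligned to checkpoint arrival
```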
The ERPs were then analyzed by the sLORETA software (Pascual-Marqui, [@B69]) (<http://www.uzh.ch/keyinst/loreta.htm>) to estimate the current source density. Briefly, sLORETA calculates the standardized current source density at each of the 6239 voxels in the gray matter and the hippocampus of the MNI-reference brain. This calculation of the current source density is based upon a linear weighted sum of the scalp electric potentials. sLORETA estimates the underlying sources under the assumption that neighboring voxels should have maximally similar electrical activity. Current source densities in each voxel were compared between the two conditions by a permutation test on paired data. For this comparison, the sLORETA software performs "non-parametric randomization" of the data (see Nichols and Holmes, [@B63], for a detailed description of permutation test theory). The method is therefore non-parametric: it computes the empirical probability distribution and does not rely on, or require, an assumption of normality. Since multiple cerebral cortical areas and the hippocampus are activated during navigation (see Introduction), the cerebral cortical areas including the hippocampus were defined as regions of interest before the sLORETA analysis was performed.
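The core idea behind this voxel-wise comparison is a paired, sign-flip permutation test. The minimal sketch below illustrates that idea for a single voxel; it is not the sLORETA implementation and omits the max-statistic correction for multiple comparisons that the software applies across voxels. The data values are invented.

```python
# Minimal sketch of a paired (sign-flip) permutation test, the core idea behind
# the non-parametric comparison of current source densities between conditions.
# NOT the sLORETA implementation; no voxel-wise multiple-comparison correction.
import numpy as np

def paired_permutation_test(x, y, n_perm=5000, seed=0):
    """Two-sided empirical p-value for the mean paired difference between x and y."""
    rng = np.random.default_rng(seed)
    d = np.asarray(x, float) - np.asarray(y, float)   # within-subject differences
    observed = d.mean()
    count = 0
    for _ in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=d.size)  # randomly swap condition labels
        if abs((signs * d).mean()) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)                 # empirical two-sided p-value

# Example with made-up current source densities for 12 subjects in one voxel:
test_phase = np.array([2.1, 1.8, 2.5, 2.0, 1.9, 2.4, 2.2, 1.7, 2.3, 2.6, 2.0, 1.9])
ctrl_phase = np.array([1.6, 1.5, 2.0, 1.8, 1.7, 1.9, 1.8, 1.6, 1.9, 2.1, 1.7, 1.8])
print(paired_permutation_test(test_phase, ctrl_phase))
```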
All the data are expressed as mean ± s.e.m. Statistical significance was set at *P* \< 0.05. Statistical analyses of ERP amplitudes and task duration were performed with a commercial statistical package, the Statistical Package for the Social Sciences (SPSS, Ver. 19; SPSS Inc., Chicago, IL).
Results {#s3}
=======
Behavioral results
------------------
Figure [2](#F2){ref-type="fig"} shows the mean time required to traverse 10 checkpoints across the three control trials and the mean time required to traverse the 10 spatial reference points across the three test trials. There were significant differences in the time that elapsed among the six trials \[repeated-measures one-way ANOVA with Greenhouse-Geisser correction; *F*~(1.936,\ 21.292)~ = 12.262, *P* = 0.003\]. *Post-hoc* tests indicated that elapsed time was significantly increased in the test phase (Bonferroni test, *P* \< 0.05). These results suggest that cognitive demand was larger in the test than in the control phases.
[Figure 2]{#F2}
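For readers who wish to reproduce this kind of behavioral analysis, the sketch below gives a rough Python analogue of the repeated-measures ANOVA and Bonferroni-corrected pairwise comparisons described above. The authors ran the analysis in SPSS with the Greenhouse-Geisser correction; the AnovaRM class in statsmodels does not apply that correction, so it is omitted here, and the data file and column names are hypothetical.

```python
# Rough Python analogue of the behavioral analysis: a one-way repeated-measures
# ANOVA over the six trials followed by Bonferroni-corrected paired comparisons.
# The Greenhouse-Geisser correction used in SPSS is not applied by AnovaRM.
# The long-format file and its columns ("subject", "trial", "time_s") are hypothetical.
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("elapsed_times_long.csv")   # hypothetical long-format data

# Omnibus repeated-measures ANOVA: elapsed time across the six trials.
anova = AnovaRM(data=df, depvar="time_s", subject="subject", within=["trial"]).fit()
print(anova)

# Bonferroni-corrected paired t-tests between every pair of trials.
trials = sorted(df["trial"].unique())
n_comparisons = len(trials) * (len(trials) - 1) // 2
for i, a in enumerate(trials):
    for b in trials[i + 1:]:
        ta = df.loc[df["trial"] == a].sort_values("subject")["time_s"].to_numpy()
        tb = df.loc[df["trial"] == b].sort_values("subject")["time_s"].to_numpy()
        t, p = stats.ttest_rel(ta, tb)
        print(a, b, round(t, 3), min(p * n_comparisons, 1.0))  # Bonferroni-adjusted p
```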
Representative recordings of joystick movements and EEGs from one subject in the test phase are shown in Figure [3](#F3){ref-type="fig"}. The subject manipulated the joystick to approach the spatial reference points. When the subject entered the spatial reference point at time zero, the beep sound was generated. In response to arriving at the spatial reference point, positive potentials peaking at a latency of around 340 ms were observed.
[Figure 3]{#F3}
Evoked potentials
-----------------
Figure [4](#F4){ref-type="fig"} represents averaged ERPs aligned with the arrival at the checkpoints and spatial reference points. In the test phase, more prominent positive waveforms (blue traces) were observed in the fronto-parieto-occipital area compared to the control phase (red traces). Figure [5](#F5){ref-type="fig"} shows topographical maps of the vertex-positive ERPs at 274 (50 ms before the peak latency)-, 324 (peak latency)-, and 374 (50 ms after the peak latency)-ms latencies around the peak. In the test phase, larger vertex-positive ERPs were observed compared with the control phase. Figure [6A](#F6){ref-type="fig"} shows a comparison between the peak amplitudes of the ERPs in Cz between the control and the test phases. Statistical comparison indicated that the peak amplitudes were significantly larger in the test phase relative to the control phase (paired *t*-test, *P* \< 0.001). Figure [7](#F7){ref-type="fig"} represents averaged ERPs in the first (red traces) and third (blue traces) trials. Almost identical waveforms were observed in both trials. Figure [6B](#F6){ref-type="fig"} shows the comparison of the peak amplitudes of the ERPs in Cz between the first and third trials in the test phase. Statistical comparison indicated that there were no significant differences in the peak amplitudes between the first and third trials in the test phase (paired *t*-test, *P* \> 0.05). These results indicate that these ERPs were not simply novelty-induced potentials.
[Figure 4]{#F4}
[Figure 5]{#F5}
[Figure 6]{#F6}
[Figure 7]{#F7}
Current source localization of the evoked potentials
----------------------------------------------------
We analyzed current source density of the ERPs with the early negative (128--208 ms) and late positive (274--374 ms) peaks. First, we compared the current source densities of the ERPs upon arrival at the spatial reference points in the test phase to the current source densities at baseline, before entering the spatial reference points (Figure [8A](#F8){ref-type="fig"}). In the 128- to 208-ms latency **(Aa)**, current source density of the initial negative deflection was significantly higher in the posterior cingulate cortex, retrosplenial cortex, and bilateral posterior insula cortex **(Ab)**. In the 274- to 374-ms latency, current source density of the vertex-positive ERPs was significantly higher in the posterior cingulate and retrosplenial cortices **(Ac)**.
[Figure 8]{#F8}
Second, we compared the current source densities of the ERPs between the test and control phases (Figure [8B](#F8){ref-type="fig"}). In the 274- to 374-ms latency **(Ba)**, current source density in the test phase was significantly higher in the superior frontal gyrus (area 6) including the medial frontal cortex **(Bb)**. Furthermore, current source density was significantly higher in the right entorhinal cortex/hippocampus, parahippocampal cortex, and posterior cingulate cortex (**Figure 8Bb**).
Then, we analyzed the ERPs in every other 10-ms range around the peak in the same way. Figure [9](#F9){ref-type="fig"} shows five 10-ms time windows subjected to sLORETA analysis in order to compare the current source density of the ERPs in the test phase to that of the ERPs at baseline **(A)** and in the control phase **(B)**. Figure [10](#F10){ref-type="fig"} illustrates the brain areas with significant increases in current source density relative to baseline activity in the test phase. At latencies ranging from 274 to 284 and 294 to 304 ms, current source density was significantly higher in the posterior cingulate gyrus **(A,B)**. At the latencies ranging from 314 to 324, 334 to 344, and 354 to 364 ms, current source density was significantly higher in the entorhinal cortex/hippocampus, parahippocampal cortex, and lingual and fusiform gyri **(C--E)**. Figure [11](#F11){ref-type="fig"} illustrates the brain regions in which significant increases in current source density were observed, in comparison with the control phase. At latencies from 274 to 284, and 294 to 304 ms, current source density was significantly higher in the superior frontal gyrus, including the medial prefrontal cortex **(A)** and the posterior cingulate cortex **(A,B)**. At the latencies between 314 and 324, 334 and 344, and 354 and 364 ms, current source density was significantly higher in the entorhinal and parahippocampal cortices, and lingual and fusiform gyri **(C--E)**. Furthermore, at the latencies from 314 to 324 and 354 to 364 ms, current source density was significantly higher in the left inferior parietal lobule **(C)** and the right posterior middle and inferior temporal cortex **(E)**, respectively.
[Figure 9]{#F9}
[Figure 10]{#F10}
[Figure 11]{#F11}
Discussion {#s4}
==========
Evoked potentials for goal arrival and updating
-----------------------------------------------
In the present study, vertex-positive ERPs were elicited when the subjects entered the spatial reference points. Although the beep sound was presented upon arrival, these vertex-positive potentials were not just sensory evoked potentials. First, peak latencies of the vertex-positive potentials were relatively longer (more than 300 ms) than usual auditory evoked potentials. Second, amplitudes of the vertex-positive ERPs were larger in the test phase than in the control phase although the same beep sound was presented. The difference in cognitive demand between the control and test phases is that the subjects were not required to update their own location in the virtual town in the control phase, whereas the subjects in the test phase were required to update their own locations and to determine the direction of movement toward the next reference points. It has been reported that spatial updating is an automatic (involuntary) cognitive process, which is difficult to suppress (Farrell and Robertson, [@B30]; Farrell and Thomson, [@B31]). Consistent with this, the time required to solve the task was significantly increased in the test phase, suggesting that cognitive demand was larger in the test relative to the control phase. These findings suggest that the ERPs recorded in the present study reflect elevated cognitive processes recruited in spatial updating and in action planning for joystick manipulation while navigating successive reference points in space with no visible guides. Furthermore, these vertex-positive ERPs were not novelty-induced potentials (i.e., novelty P3) (Friedman et al., [@B32]; Ranganath and Rainier, [@B73]). In the present study, there were no significant differences in the vertex-positive ERP amplitudes between the first and third trials in the test phase; although the beep sound was repeatedly presented upon arrival at the reference points in the test phase, the amplitudes of the ERPs did not change over time. These findings also suggest that the vertex-positive ERPs reflected cognitive processes involved in spatial updating and action planning rather than stimulus novelty. Thus, the present study provides the first report of ERPs associated with spatial updating.
Current source density analyses of the arrival-induced ERPs for long duration
-----------------------------------------------------------------------------
Compared with the baseline before arrival, current source densities of the initial negative potentials in the latency ranging from 128--208 ms were significantly higher in the retrosplenial cortex and posterior insular cortex. Previous fMRI studies reported that scene images consistently activated the retrosplenial cortex (O\'Craven and Kanwisher, [@B65]; Park et al., [@B68]). Since the retrosplenial cortex responded more strongly to scene images of familiar locations, this region might be involved in retrieval of scene memory (Epstein et al., [@B26],[@B27]). Furthermore, retrosplenial lesions in humans induce topographical amnesia, in which patients are unable to use landmarks to orient themselves (Aguirre and D\'Esposito, [@B2]; Maguire, [@B55]; Epstein, [@B24]). Rodent neurophysiological studies reported that retrosplenial neurons (head direction cells) encode head direction (Chen et al., [@B14]; Cho and Sharp, [@B15]). These findings suggest that the retrosplenial cortex is involved in guiding navigation based on scene memory. On the other hand, previous fMRI studies reported that the posterior insula cortex encodes sense of self-motion in response to optical flow (Cardin and Smith, [@B13]), and this area was shown to be activated during mental navigation along memorized routes (Ghaem et al., [@B35]).
Compared with the baseline before arrival, the current source densities of the vertex-positive ERPs in the 274- to 374-ms latency were significantly higher in the posterior cingulate cortex. Consistent with the present results, previous human noninvasive studies also reported an increase in activity in the posterior cingulate cortex during virtual navigation (Grön et al., [@B37]; Pine et al., [@B72]) and during recall of known routes (Ghaem et al., [@B35]; Maguire et al., [@B58]). Furthermore, the cingulate sulcus in the posterior cingulate cortex has been reported to be involved in sense of self-motion in response to optical flow (Wall and Smith, [@B80]; Cardin and Smith, [@B13]). In addition, a monkey neurophysiological study reported that posterior cingulate cortical neurons encoded spatial locations in an allocentric reference frame (Dean and Platt, [@B18]). Taken together, the contrast between the baseline and reference point ERPs indicated that the brain regions involved in perception and recognition of sensory inputs during navigation (optical flow, familiar scenes) and those involved in guiding navigation, were activated.
Compared with the control phase, the current source density of the vertex-positive ERPs in the 274- to 374-ms latency was significantly higher in the superior frontal gyrus, including the medial frontal cortex (pre-SMA), entorhinal, and parahippocampal cortices. The superior frontal gyrus including the pre-SMA is activated during virtual driving (Spiers and Maguire, [@B78]), and might be involved in monitoring traffic load and action planning during navigation (Spiers and Maguire, [@B78]) since the pre-SMA has been implicated in performing planned voluntary movements (Lee et al., [@B52]; Cunnington et al., [@B16]; Lau et al., [@B50]). Interestingly, the most anterior part of the medial frontal cortex in the present study roughly corresponds to the area activated during virtual driving, which is an activation that also correlates with goal proximity (distance between current location and the goal) (i.e., distance between the present location and reference points in the present study) (Spiers and Maguire, [@B77]). Furthermore, a human fMRI study reported that activity in the superior frontal gyrus is negatively correlated with random pointing errors in a virtual path integration task, suggesting that this brain region is involved in spatial working memory (Wolbers et al., [@B83]). On the other hand, the entorhinal cortex is also implicated in spatial navigation; a recent human fMRI study reported that characteristics of medial temporal cortical activity during virtual navigation suggested the existence of grid cells in the human entorhinal cortex (Doeller et al., [@B21]), and a neurophysiological study reported grid cells in the human entorhinal and cingulate cortices that were comparable to rodent grid cells (Jacobs et al., [@B45]). Furthermore, previous virtual navigation studies reported that the parahippocampal gyrus contains a region in its posterior extent called the parahippocampal place area, which shows increased activity in response to scenes, such as photographs of landscapes (Epstein et al., [@B25]). Neurophysiological studies reported that monkey and human parahippocampal neurons displayed place-related activities (Matsumura et al., [@B61]; Furuya et al., [@B33]), and also responded to specific landmarks in a viewpoint-dependent manner (Ekstrom et al., [@B23]; Weniger et al., [@B82]; Furuya et al., [@B33]). These findings along with human noninvasive studies (Epstein, [@B24]) suggest that the parahippocampal gyrus processes spatial information in egocentric or viewpoint-specific coordinates. Thus, the results in the present study, along with previous findings, suggest that arrival at the spatial reference points and subsequent spatial updating activate (1) brain regions involved in goal proximity and action planning and (2) brain regions involved in place recognition based on spatial information, including landmarks.
Current source density analyses of the arrival-induced ERPs for short duration
------------------------------------------------------------------------------
Compared with the baseline (Figures [10A,B](#F10){ref-type="fig"}), current source densities of the ERPs in the 274- to 284- and the 294- to 304-ms latencies were significantly higher in the posterior cingulate cortex. The posterior cingulate cortex has been implicated in recalling known routes and sense of self-motion in response to optical flow (see above). Furthermore, current source density of the ERPs in the 314- to 324-, the 334- to 344-, and the 354- to 364-ms latencies were significantly higher in the entorhinal cortex/hippocampus, parahippocampal cortex, and lingual and fusiform gyri, compared to baseline (Figures [10C](#F10){ref-type="fig"}--[E](#F10){ref-type="fig"}). The parahippocampal cortex is implicated in spatial function (see above). The fusiform and lingual gyri were reported to be activated during retrieval of spatial memory as well as during virtual navigation (Ekstrom and Bookheimer, [@B22]; Barra et al., [@B6]). Furthermore, landmark agnosia has been associated with lesions of the lingual gyrus (Aguirre and D\'Esposito, [@B2]). Although the sLORETA does not provide conclusive information for the hippocampus due to inverse problems of source localization, this region also seemed to be activated in the present study. Other studies also reported hippocampal activation by LORETA (Cannon et al., [@B12]; Miyanishi et al., [@B62]). It has been reported that activity in the human hippocampus increases during spatial tasks performed in both real and virtual environments (Aguirre et al., [@B3]; Maguire et al., [@B56]), and damage to the hippocampus produces severe deficits in memory tasks performed in a real or virtual space in monkeys and humans (Astur et al., [@B5]; Hampton et al., [@B38]). The findings in these studies are consistent with those of a cognitive map theory in which the hippocampus acts as a cognitive map of the environment with allocentric coordinates (O\'Keefe and Nadel, [@B66]). Consistent with this theory, the activities of some hippocampal neurons (place cells) increase in monkeys or humans when they navigate within a particular place in the environment in real and virtual navigation tasks (Nishijo et al., [@B64]; Matsumura et al., [@B61]; Ekstrom et al., [@B23]; Hori et al., [@B41]; Furuya et al., [@B33]).
Compared with the control phase (Figures [11A,B](#F11){ref-type="fig"}), current source densities of the ERPs in the 274- to 284- and the 294- to 304-ms latencies were significantly higher in the superior frontal gyrus including the medial prefrontal cortex and the posterior cingulate cortex (for discussion see above). Current source densities of the ERPs in the 314- to 324- and the 334- to 344-ms latencies were significantly higher in the entorhinal cortex, parahippocampal cortex, and lingual and fusiform gyri, compared with the control phase (Figures [11C](#F11){ref-type="fig"}--[E](#F11){ref-type="fig"}). The entorhinal cortex, parahippocampal cortex, and lingual and fusiform gyri have been implicated in navigation and landmark recognition (see above). Furthermore, current source densities of the ERPs in the 314- to 324- and the 354- to 364-ms latencies were significantly higher in the left inferior parietal lobule (Figure [11C](#F11){ref-type="fig"}) and right middle and inferior temporal cortex (Figure [11E](#F11){ref-type="fig"}), respectively. The inferior parietal lobule including its left side has been implicated in spatial attention and navigation accuracy (Maguire et al., [@B56]; Lee et al., [@B51]). Previous noninvasive studies reported that the right middle and inferior temporal cortex were activated during visual imagery of landmarks, and during encoding and recall of spatial relationships with objects (Ghaem et al., [@B35]; Johnsrude et al., [@B46]).
Thus, the short duration analyses indicated that similar brain regions to those in the long duration analyses were activated, and confirmed the results in the long duration analysis. Furthermore, it is noted that short duration analysis allows investigation of activation sequences. In both the comparisons, the brain regions involved in sensory perception and recall (posterior cingulate cortex involved in sense of self-motion, fusiform and lingual gyri involved in visual information processing) or evaluation of present location (medial prefrontal cortex) were initially activated (Figures [10A,B](#F10){ref-type="fig"}, [11A](#F11){ref-type="fig"}--[C](#F11){ref-type="fig"}). In the later phase of the vertex-positive potentials, the parahippocampal gyri including the entorhinal and parahippocampal cortices as well as the hippocampus were activated (Figures [10C](#F10){ref-type="fig"}--[E](#F10){ref-type="fig"}, [11D,E](#F11){ref-type="fig"}). These results suggest that the medial temporal lobe including these brain regions might receive all available information from other brain regions for spatial updating.
Conclusions {#s5}
===========
The present study indicated that arrival at the spatial reference points and subsequent spatial updating elicited vertex-positive ERPs. Current source density analysis of the ERPs indicated that multiple parallel neural systems were active during spatial updating. Humans navigate their environment by dynamically updating spatial relations between their bodies and important landmarks in the surrounding environment using an egocentric system (Wang and Spelke, [@B81]). This dynamic egocentric system includes a path integration subsystem and a view (familiar landmarks)-dependent place recognition subsystem (Wang and Spelke, [@B81]). The present study indicated that these 2 subsystems were activated; the posterior cingulate cortex and posterior insular cortex in self-motion sensation during path integration, and the parahippocampal cortex in a viewpoint-dependent system for landmark-dependent place recognition. A human behavioral study suggests that these two subsystems interact and their information is integrated (Kalia et al., [@B49]). Furthermore, behavioral studies suggest that the egocentric system and allocentric system work in parallel during spatial updating and navigation (Burgess, [@B10]; Harvey et al., [@B40]). The present results indicate a parallel activation of allocentric (hippocampus) and egocentric (parahippocampal gyrus) systems. Our results provide neurophysiological evidence that humans use multiple spatial representations with different reference frames for spatial updating during navigation.
On the other hand, the inferior medial occipital lobe (lingual and fusiform gyri), right inferior temporal cortex, parahippocampal cortex, and hippocampus, which were activated during updating in the present study, are associated with route learning in a real environment (Barrash et al., [@B7]). The medial occipito-temporal cortices (lingual and fusiform gyri) and right inferior temporal cortex might be associated with the ability to quickly and accurately perceive and learn multiple topographical scenes, while the posterior parahippocampal gyrus and hippocampus might be involved in forming an integrated representation of the extended topographical environment (i.e., the appearance of places and spatial relationships between specific places), and consolidating that representation (Barrash et al., [@B7]). Compared with the previous studies that investigated remote spatial memory, which is established for many years (see a review by Spiers and Maguire, [@B79]), the present experiments imposed only six trials including both the control and test trials at one time. These findings suggest that not only updating processes but also learning and consolidation processes take place simultaneously.
Finally, it is noted that human subjects display individual differences in navigation strategies \[e.g., allocentric (bird-view) or egocentric (landmark) strategies\] (Jordan et al., [@B47]). In the present study, we could not classify the subjects based on their navigation strategies since they were required to navigate in the fixed route to receive the same visual stimuli in the virtual space. Further studies are required to investigate brain activation differences based on navigation strategies. The present study at least indicated common neural networks among the subjects during spatial updating.
Author contributions
====================
Hisao Nishijo designed the research; Hai M. Nguyen, Jumpei Matsumoto, and Hisao Nishijo performed research; Hai M. Nguyen, Jumpei Matsumoto, Anh H. Tran, Taketoshi Ono, and Hisao Nishijo analyzed data; and Hai M. Nguyen and Hisao Nishijo wrote the paper.
Conflict of interest statement
------------------------------
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
This work was supported partly by the Japan Society for the Promotion of Science Asian Core Program and the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Scientific Research (B) (25290005). The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
[^1]: Edited by: Nuno Sousa, ICVS, University of Minho, Portugal
[^2]: Reviewed by: Lutz Jäncke, University of Zurich, Switzerland; Sebastian Ocklenburg, University of Bergen, Norway; Nuno Dias, University of Minho, Portugal
[^3]: This article was submitted to the journal Frontiers in Behavioral Neuroscience.
We proudly embrace a healthy work-life balance at Cadwyn. To celebrate the National Work-Life Week 2018 we’d like to share how our flexible working approach makes a difference to our staff.
Flexible working can increase staff motivation, promote work-life balance, reduce employee stress and improve performance and productivity. Our staff benefit from flexible working hours, where they can vary work patterns in view of workload and personal needs and preferences. Cadwyn also support staff with a variety of other types of flexible working, including:
- Part Time hours
- Compressed Hours
- Job Share
- Time off in lieu
- Home working
- Shift swapping
- Staggered hours
- Sabbatical/Career break
Here’s what our staff said: | https://www.cadwyn.co.uk/work-life-week/ |
University of Bergen
The University of Bergen (in Norwegian: Universitetet i Bergen) is located in Bergen, Norway. Although the university was not founded until 1946, academic activity had taken place at Bergen Museum as far back as 1825. The university today serves more than 16,500 students.
Our people
Prof. dr. P.I. Davidsen
Prof. dr. Pål Ingebrigt Davidsen received his academic degree from the University of Bergen in 1983, and his professional career started at the same university. From 1983 until 1991 Professor Davidsen was employed as an associate professor; from 1991 onwards he has served as a professor in the Department of Information Science and the Department of Geography. He has guest lectured at Chalmers Technical University in Gothenburg, Sweden, the University of Minnesota, and the Mikkeli Polytechnic Institute in Finland. For the Erasmus program he gave seminars at the universities of Palermo, Seville, Brandenburg and Goteborg.
Prof. dr. E. Moxnes
Prof. dr. Erling Moxnes is a professor at the Department of Geography at the University of Bergen. In 1982 he completed his PhD in Engineering Sciences at the Resource Policy Center at Dartmouth College in the United States. His main interest lies in understanding why the policies presumed to be best are often ignored in practical decision-making. Using laboratory experiments with students and professionals, he has contributed to the emerging literature on misperceptions of dynamic systems (MODS). For his article on fishery management in Management Science he received the Jay W. Forrester Award.
Prof. dr. B. Kopainsky
Prof. dr. Birgit Kopainsky is professor in System Dynamics at the University of Bergen, Norway. She holds a PhD in agricultural economics and a master’s degree in Geography and Environmental Studies. Her research explores the role that system dynamics can play in facilitating transformation processes in social-ecological systems such as the transformation towards sustainable and resilient agri-food systems. She conducts and supervises research both in Europe and in developing countries and works with a wide range of stakeholders at local, national and international level.
Programme
GEO-SD 302 Fundamentals of Dynamic Social Systems (10 ECTS)
This course teaches the basics of the System Dynamics method. System Dynamics helps explain how change takes place, why people misunderstand change, and why so many policies fail to solve problems. The method builds on a systems perspective where system parts influence each other and where knowledge from different fields of study may be needed. Students learn to recognize typical problem behaviors of dynamic systems, exemplified by global warming, over-utilization of natural resources, epidemics, price fluctuations. These are all problems of importance for sustainable development goals. Students learn to formulate hypotheses for why problems develop, and they learn to represent their hypotheses in simulation models and use the models to test their hypotheses. For models that give likely explanations of problem developments, students learn to formulate and test alternative policies in the very same models. At a more general level, the course gives training in applying the scientific method to socio-economic problems, it provides a common language for interdisciplinary research, and it gives training in project formulations and reporting.
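To give a flavor of what such a simulation model looks like in practice, the sketch below implements a minimal stock-and-flow model of resource over-utilization using simple Euler integration. It is an illustrative example only, not course material, and all parameter values are invented.

```python
# Minimal illustration of the kind of stock-and-flow simulation model built in
# System Dynamics courses: a renewable resource harvested faster than it regrows.
# All parameter values are made up for illustration.
def simulate(years=100, dt=0.25):
    resource = 1000.0          # stock: current resource level
    capacity = 1000.0          # carrying capacity of the resource
    regen_rate = 0.05          # fractional regeneration per year (logistic growth)
    harvest_effort = 0.08      # fraction harvested per year (unsustainably high)
    history = []
    t = 0.0
    while t <= years:
        regeneration = regen_rate * resource * (1.0 - resource / capacity)  # inflow
        harvest = harvest_effort * resource                                 # outflow
        history.append((t, resource, harvest))
        resource += dt * (regeneration - harvest)   # Euler integration of the stock
        t += dt
    return history

# Print the behavior over time roughly every 10 years: the stock collapses
# because harvesting outpaces regeneration, a classic over-utilization pattern.
for t, stock, harvest in simulate()[::40]:
    print(f"year {t:5.1f}  resource {stock:7.1f}  harvest {harvest:5.1f}")
```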
GEO-SD 303 Model-Based Analysis and Policy Design
This is an introduction to System Dynamics analysis of non-linear, dynamic systems with emphasis on the relationship between system structure and behaviors, and on policy design and implementation. Students learn to build, simulate, and test models of social, natural and hybrid systems, to analyze the structural causes of problem behavior and to develop and evaluate policies aimed at addressing such problems. The students gain a deep understanding of the intimate relationship between structure and behavior in complex, dynamic systems; how structure gives rise to behavior and how the resulting behaviors may feedback to change the relative significance of the structural components of the system. This enables the students to analyze problems and to develop and evaluate policies of their own choice. The students also learn to distil the essence of a modelling experience and to communicate their analysis and design conclusions in the form of a compact executive summary.
GEO-SD 304 System Dynamics Modeling Process
In this course, students apply the System Dynamics method to problems in both the public and private sectors. Students will apply and gain reinforcement of skills learned in other system dynamics courses as they follow a structured process for modelling and simulation of dynamic problems in both social and natural systems. Emphasis is on the design of simulation models to explain problem behavior in dynamic systems, and on the re-design of such models to represent the implementation of policies aimed at improving their behavior. Students learn to use the system dynamics modelling process: define the dynamics of problems, develop hypotheses regarding the structure underlying problem behavior, analyze and validate computer simulation models, and design policies to improve systemic behavior. In addition to learning from the lectures and materials, students gain hands-on experience through in-class exercises, assignments, and an in-depth project. The reading list includes a primary textbook and supplemental material.
For more information please visit us here.
Systems Education in Bergen
You can find an academic article on what it is like to study System Dynamics in Bergen here. This article is written by Pål Davidsen, Birgit Kopainsky, Erling Moxnes, Matteo Pedercini and David Wheat. In the article the authors describe how System Dynamics and the EMSD program have evolved over the years and how they have impacted systems thinking on a global level. | https://www.europeansystemdynamics.eu/host-universities/bergen-university/
Use of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) to investigate group and gender differences in schizophrenia and bipolar disorder.
Gender differences exist in schizophrenia and bipolar disorder (BD); therefore, the aim of the present study was to clarify the role of gender in cognitive deficits in these disorders. Cognitive performance was examined in schizophrenia (24M : 14F) and BD (16M : 24F) patients compared with age-, IQ- and gender-matched control participants (21M : 22F). The Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) was used to assess five cognitive domains: immediate memory/learning, visuospatial ability, language, attention, and delayed memory, which are summed to provide a Total score. In comparison to controls, schizophrenia patients showed deficits on all domains, while BD patients had impaired immediate memory/learning, language and Total score. Schizophrenia patients showed deficits compared to BD in the Total score, immediate and delayed memory and visuospatial ability. The Total and domain scores were not different in men and women across or within groups. There were gender effects on four of the 12 individual cognitive tasks, in which female patients outperformed male patients. Further, there were gender differences across groups for three of the individual tasks: female schizophrenia patients showed poorer story memory and story recall compared to male schizophrenia patients; female BD patients had enhanced figure copy performance compared to male BD patients. The RBANS highlighted the cognitive deficits in schizophrenia and BD patients compared to controls and also to each other. There were no overall gender differences in cognition.
This report examines the emergence of the metaverse: its history and characteristics, the factors driving investment, how consumers and businesses are using it today and may in the future, its value-creation potential, and how leaders and policy makers can plan their strategies and near-term actions. Our work began by surveying more than 3,400 consumers and executives on metaverse adoption, its potential, and how it may shift behaviors.

We also interviewed 13 senior leaders and metaverse experts. In analyzing the metaverse’s value-creation potential and total investment landscape, we examined the drivers of activity among major corporations, venture capital, and private-equity funds. We examined the potential impact of the metaverse on sectors most closely tied to its technology and uses, with our work supplemented with additional research, case studies, and real-world examples.
This latest research is the result of collaboration between multiple practices within McKinsey, including Growth, Marketing & Sales, McKinsey Digital, and Telecommunications, Media & Technology. We also drew on the expertise of the McKinsey Technology Council, which comprises more than 60 scientists, engineers, investors, and entrepreneurs from external tech organizations and institutions, along with our own internal experts. This report also leverages an expanding body of knowledge around the metaverse and deep expertise among our McKinsey colleagues, including contributions from: Jiamei Bai, Kim Baroudy, Ian De Bode, Marc Brodherson, Gordon Candelin, Marek Grabowski, Matt Higginson, Klemens Hjartar, Marius Huber, Vinayak HV, Nils JeanMairet, Chau Nguyen, Ichiro Otobe, Kim Rants, Kartik Trehan, and Richard Ward. We also sought the expertise of metaverse expert Matthew Ball, managing partner of EpyllionCo and McKinsey knowledge partner. The project team comprised Inês Araújo Lopes, Antonio Celso Maciel Tavares, Andreas Henriksen, Madalina Kmen, Lotte Lauer, Estelle Menye Zanga, Philibert Parquier, Stephen Schwab, Ewa Starzynska, and Peter Vang.
We would also like to thank Growth, Marketing & Sales’ Global Communications Director Cindy Van Horne, Global Publishing Manager Molly Katz, and Global Publishing Coordinator Hannah McGee, as well as Luke Collins, Jen Thiele, and John-Michael-Maas for their editorial leadership. Additionally, we would like to thank the extended communications team EMEA External Relations Manager Kinga Young, North America External Relations Manager Eric Sherman, Global Digital Specialist Sharon Woo, Communications Specialist Marion Obadia, and Jason Forrest.
Finally, we sincerely thank the senior executives and experts who graciously agreed to be interviewed to provide their perspective on the current state of the metaverse and its potential.
Our ambition is for this report to help drive ongoing dialogue about the development of the metaverse, help leaders of both consumer and business-to-business clients better understand its power and potential, identify strategic imperatives, and act as a force for its positive evolution. This work is independent and has not been commissioned or sponsored in any way by any business, government, or other institution.
Value creation in the metaverse
The metaverse is still being defined, both literally and figuratively. Yet its potential to unleash the next wave of digital disruption seems increasingly clear, with real-life benefits already emerging for early adopting users and companies. As we saw in previous shifts in technology such as the emergence of the internet followed by social media, mobile, and cloud, novel strategies can quickly become table stakes. The metaverse has the potential to impact everything from employee engagement to the customer experience, omnichannel sales and marketing, product innovation, and community building. Examining its potential effect should be part of strategy discussions, with leaders accelerating their analysis of how the metaverse could drive a very different world within the next decade. Of course, many questions remain, including how virtual worlds will be balanced with the physical world to ensure the metaverse is built in a responsible manner, how it can be a safe environment for consumers, how closely it will align with the “open” vision of the next iteration of the internet, and whether technology can advance quickly enough to build the metaverse of our imagination. This report examines the metaverse’s building blocks, investment flows and what is driving them, how consumer and business behavior is evolving, its potential economic impact, and the actions leaders should consider to capture value.
— There continue to be questions around the longevity and potential of the metaverse, with an extreme view regarding it as merely a rebranded gaming platform of little wider interest. We do not share that skepticism and believe the metaverse has the potential to be the next iteration of the internet. It may seamlessly combine our digital and physical lives by featuring a sense of immersion, real-time interactivity, user agency, interoperability across platforms and devices, the ability for thousands of people to interact simultaneously, and use cases spanning activities well beyond gaming. But the pace of its development will depend on multiple technological and user-experience factors, and is not limited to one platform, device, or even technology.
— The metaverse’s technology stack has four core building blocks: content and experiences, platforms (such as game engines), infrastructure and hardware (including devices and networks), and enablers (such as payment mechanisms and security). Ten layers span these components, providing the critical building blocks on which all metaverse experiences are based. One primary question about the future evolution of the metaverse is the extent to which the interoperability of these elements can be advanced.
— Large technology companies, venture capital (VC), private equity (PE), start-ups, and established brands are seeking to capitalize on the metaverse opportunity. Corporations, VC, and PE have already invested more than $120 billion in the metaverse in the first five months of 2022, more than double the $57 billion invested in all of 2021, with a large part of that driven by Microsoft’s planned acquisition of Activision for $69 billion. Large technology companies are the biggest investors—and to a much greater extent than they were for artificial intelligence (AI) at a similar stage in its evolution, for example. Industries currently leading metaverse adoption also plan to dedicate a significant share of their digital investment budgets to it.
— Multiple factors are driving this investor enthusiasm, including ongoing technological advances across the infrastructure required to run the metaverse; demographic tailwinds; increasingly consumer-led brand marketing and engagement; and increasing marketplace readiness as users explore today’s early version of the metaverse largely driven by gaming (with some games boasting tens of millions of active players) with applications emerging that span socializing, fitness, commerce, virtual learning, and others.
— Our survey of more than 3,400 consumers and executives found significant excitement about the potential of the metaverse. Almost 60 percent of consumers using today’s early version of the metaverse are excited about transitioning everyday activities to it, with connectivity among people the biggest driver, followed by the potential to explore digital worlds. Some 95 percent of business leaders expect the metaverse to have a positive impact on their industry within five to ten years, and 61 percent expect it to moderately change the way their industry operates. Industries most likely to be impacted by the metaverse include consumer and retail, media and telecommunications, and healthcare, and those industries are also among those already undertaking metaverse initiatives.
— While estimates of the potential economic value of the metaverse vary widely, our bottom-up view of consumer and enterprise use cases suggests it may generate up to $5 trillion in impact by 2030—equivalent to the size of the world’s third-largest economy today, Japan. It is shaping up to be the biggest new growth opportunity for several industries in the coming decade, given its potential to enable new business models, products, and services, and act as an engagement channel for both business-to-consumer and business-to-business purposes.
— The potential impact of the metaverse varies by industry, although we believe it holds implications for all. For instance, we estimate it may have a market impact of between $2 trillion and $2.6 trillion on e-commerce by 2030, depending on whether a base or upside case is realized. Similarly, we estimate it to have an impact of $180 billion to $270 billion on the academic virtual learning market, a $144 billion to $206 billion impact on the advertising market, and a $108 billion to $125 billion impact on the gaming market. These effects may manifest in very different ways across the value chain, however.
— Companies already leveraging the metaverse may build lasting competitive advantages. Business leaders should develop a strategic stance by defining metaverse goals and the role they want to play; testing, learning, and adopting by launching initial activities, monitoring results, and examining user behavior; and preparing to scale by identifying necessary capabilities and embedding the metaverse in their operating model. They should also explore becoming metaverse users themselves.
— The metaverse also poses urgent challenges that cut across firms, their employees, independent developers and content creators, governments, and, of course, consumers. Part of the workforce will need to be reskilled to take advantage of it rather than compete with it, and cities and countries serious about establishing themselves as hubs for its development will need to join the global competition to attract talent and investment. The metaverse also has obvious societal implications. A variety of stakeholders will need to define a road map toward an ethical, safe, and inclusive metaverse experience. Guidelines may also be necessary around issues including data privacy, security, ethics and regulatory compliance, physical health and safety, sustainability, and equity and fairness. | https://ciso2ciso.com/value-creation-in-the-metaverse-the-real-business-of-the-virtual-world-by-mckinsey-company-june-2022/
CO-CREATE CHANGE
Co-Change supports the implementation of institutional change among research and innovation actors in the areas of research ethics, open science, stakeholder engagement, science education, gender equality and sustainability. The partners form a network of research organisations, research funding and government agencies, firms, ethics councils, and civil society organisations. Together, they develop tools and practices for responsible research and innovation, for example, to reflect on internal norms, values and practices, and to build capacity for governance and management, ethical and social impact monitoring, and assessment.
Latest News
-
2022. 10. 18. Conference on Open Science: Science Europe invites institutional leaders, researchers at all stages of their careers, and experts from the field to join its Conference on Open Science to discuss two key questions: Is Open Science ready to become the norm in research? How do we ensure this becomes an equitable transition? To find answers to these questions, the conference will provide a comprehensive overview of practical and policy initiatives, research assessment reforms, and financial measures that support the transition to Open Science. We will also look forward at new and emerging trends.
-
2022. 08. 15. Of ‘Lighthouses’, ‘Living Labs’ and the ‘Wisdom of the Crowd’ - Social responsibility beyond research and teaching (an NGO perspective): There is a broad consensus that research and innovation (R&I) must be steered towards socially desirable ends, ensuring that science and technology are the driving forces behind social progress. This puts the current R&I system under increasing pressure to become more inclusive and responsive to current and future societal challenges. Although the critical issues of Responsible Research and Innovation (RRI) have been gaining academic awareness and political support as tools to move European R&I governance forward, there is broad recognition that the engagement of civil society organisations and citizens has been suboptimal in defining R&I priorities.
- 2022. 08. 05. Our Idea Competition's winner visited Tecnalia. The CTS/C!S Project 'DEBIAS - Digitally Eliminating Bias In Applicant Selection' placed first in our Idea Competition. Representing the CTS, Florian Cech visited Tecnalia, one of our Change Labs, in Bilbao, Spain for a one-day workshop and guided tour. The visit focused on an exchange of ideas on responsible research and innovation (RRI) and interdisciplinarity in technology development, and on gaining insights into Tecnalia's RRI-related activities and organisational transformations.
Upcoming events
- ECSA 2022 Conference
The 4th ECSA Conference runs under the cross-cutting theme of Citizen Science for Planetary Health. The concept of planetary health is based on the understanding that human health and human civilization depend on thriving natural systems and the sustainable stewardship of those natural systems. This demands not only knowledge, commitment and engagement of health and environmental sciences, but inter- and transdisciplinary efforts from all research fields and societal and political actors.
Date: 2022. 10. 05.
Location: Berlin
- What can Responsible Research and Innovation offer Smart Specialisation Strategies in their reorientation towards sustainability?
This event will take stock of some experiences promoting Responsible Research and Innovation (RRI) to Smart Specialisation Strategies in their reorientation towards sustainability, addressing how RRI can be integrated into regional innovation and development policies. The event is organised by the TetTRRIS consortium as a side event of the European Week of Regions and Cities.
Date: 2022. 10. 13.
Location: online
- STS Conference Graz 2023
The 21st Annual STS Conference Graz 2023, 'Critical Issues in Science, Technology and Society Studies', samples a number of thematic fields as a guideline to address contemporary challenges of the interplay between science, technology and society. The call for session proposals is open until the 3rd of November.
Date: 2023. 05. 08. | https://cochangeproject.eu/ |
On July 9, 2020, Xiaoyun Ye purchased a home at 225 S. Sangamon St. P-70, Chicago from Margarita and Vladislov Gokhberg for $415,000.
The property tax paid for this property in 2019 was $674.09. This is 0.16% of the sale price of the home.
The last time this home sold was May 22, 2014. It last sold for $345,000.
In July 2020, four properties sold in Near West Side. | https://blockshopper.com/news/544011112-xiaoyun-ye-buys-225-s-sangamon-st-p-70-chicago |
I am a conservation geneticist working at the Royal Zoological Society of Scotland’s WildGenes laboratory. I use genetic and genomic tools to help inform conservation management which may include answering questions about individual ID, hybrid status, relatedness, parentage, geographic origin, population diversity, or taxonomic distinction.
My PhD research used evolutionary and population genetic theory to understand the distribution of contemporary genetic diversity in the palmate newt. My research and work now involves the use of similar genetic theories to assess the status of and inform management (both captive and in situ) of species that have been flagged as priorities for conservation action. This includes many endangered and susceptible populations/species both in the UK and overseas.
Conservation of endangered species frequently involves the management of small and fragmented populations and the genetic problems associated with them. Genetic data can be used to manage both individuals and populations so as to maximise the persistence of populations and the evolutionary potential of a species. Similarly, when genetic data is used to confirm species ID or geographic origin, assessments can be made as to whether the animal (or part) has been traded illegally. DNA tools are particularly useful for both monitoring and enforcement as they can be applied to samples that may be otherwise unidentifiable (e.g. when no morphological characteristics remain).
Careful and strategic planning of sampling and laboratory analysis to meet the aims of clearly defined questions can save a lot of time later, even if it means re-thinking your whole strategy. Recognise the limitations of the methods that you are using and don't be fearful of negative results. It is better to know what a method is NOT capable of, as it provides a clue to the direction that new methods and development should take (so limitations are positive rather than negative bits of information!).
Awarded: PhD Scholarship
Field: | https://www.carnegie-trust.org/alumni/dr-gill-murray-dickson/ |
Copyright © 2014 by authors and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/
Received 1 May 2014; revised 29 May 2014; accepted 5 June 2014
ABSTRACT
In order to make a system reliable, it should exhibit guarantees for basic services, data flow, composition of services, and the complete workflow. In service-oriented architecture (SOA), the entire software system consists of an interacting group of autonomous services. Some soft computing approaches have been developed for estimating the reliability of service oriented systems (SOSs), but much more research is still needed to estimate reliability in a better way. In this paper, we propose an approach to SOS reliability estimation based on an adaptive neuro fuzzy inference system (ANFIS). We estimate the reliability based on a set of defined parameters. Moreover, we compare its performance with a plain FIS (fuzzy inference system) on similar data sets and find that the proposed approach gives better reliability estimates.
Keywords: Reliability Estimation, SOA, Fuzzy, Rule-Based, Reliability Model, Soft Computing
1. Introduction
Reliability is one of the most important non-functional requirements for software. Accurately estimating reliability for service oriented systems (SOSs) is not possible. Moreover, soft computing techniques can help to solve problems which are uncertain or unpredictable. Many researchers have proposed different approaches to SOS reliability estimation. IEEE 610.12-1990 defines reliability as “The ability of a system or component to perform its required functions under stated conditions for a specified period of time”. The primary objective of reliability is to guarantee that the resources managed and used by the system are under control. It also guarantees that a user can complete its task with a certain probability when it is invoked.
Software reliability management is defined in IEEE 982.1-1988 as “The process of optimizing the reliability of software through a program that emphasizes software error prevention, fault detection and removal, and use of measurements to maximize reliability in light of project constraints such as resources, schedule, and performance”. Thus any reliable system is one that must guarantee and take care of fault prevention, fault tolerance, fault removal, and fault forecasting. The most suitable models for reliability of Service Oriented Architectures (SOAs) are the ones based on architecture.
Although the reliability of SOA systems cannot be completely estimated, we can estimate the reliability to a larger extent by analyzing the SOA characteristics and identifying the corresponding requirements. This paper continues our previous study on estimating the reliability of service oriented systems. We started with the identification of important factors for SOA, then estimated the reliability of such systems through a fuzzy inference system (FIS) using the MATLAB fuzzy toolbox, and in the present work we extend that approach to provide more accurate reliability estimation by using an adaptive neuro fuzzy inference system (ANFIS). The rest of the paper is organized as follows. Section 1 discusses the basic definitions of SOA, services, fuzzy logic and ANFIS. Section 2 covers the work already done in this area in different research studies. Section 3 discusses the research approach for our work. The experimentation and evaluation results are discussed in Section 4. Finally, the conclusion is drawn in Section 5.
1.1. Service Oriented Architecture
SOA provides a design framework for realizing rapid and low-cost system development and improving total system quality. SOA uses Web services standards and technologies and is rapidly becoming a standard approach for enterprise information systems. SOA is an architectural software concept whose core is based on services: units of functionality that perform specific tasks and support business requirements. In an SOA environment, resources are made available to other participants within the network as independent services that are accessible across the network in a standardized way. Overall, a business-centric SOA approach delivers a number of benefits, which include the following: reduced time to market, improved business alignment for growth, reduced costs, and reduced business risk. Each participant in a Service Oriented Architecture plays one or more of three roles (service provider, service broker/register, or service requestor), as follows:
• A service provider has to make trade-offs between availability and security. It is the web service's responsibility to decide what type of information is exposed;
• A service broker or service register is responsible for making information available to a requestor. A service broker has to decide how much information to transfer;
• The service requestor or Web service client requests a service and binds to the service provider in order to call upon one of its Web services.
1.2. Service
Services are loosely coupled, autonomous, and reusable. They have well-defined platform-independent interfaces, and provide access to data, business processes, and infrastructure, ideally in an asynchronous manner, so that they can receive requests from any source, making no assumptions as to the functional correctness of an incoming request. A service is an implementation of a well-defined business functionality that operates independently of the state of any other service defined within the system. It has a well-defined set of interfaces and operates through a pre-defined contract between the client of the service and the service itself, which must be dynamic and flexible so that services can be added, removed, or modified according to business requirements. A service can be written today without knowing how it will be used in the future and may stand on its own or be part of a larger set of functions that constitute a larger service.
From a dynamic perspective, there are three fundamental concepts that are important to understand: the service must be visible to service providers and consumers; a clear interface for interaction between them is defined; and the real world is affected by the interaction between services. These services should be loosely coupled and have minimum interdependency; otherwise they can cause disruptions when any service fails or changes.
1.3. Neural Networks and Fuzzy Logic
Neural Networks (NNs) and fuzzy logic are the two basic elements of soft computing techniques. Fuzzy means unsure and ambiguous. Fuzzy systems are suitable for approximate reasoning, especially for the system whose mathematical model is hard to derive. Fuzzy logic allows decision making with estimated values under incomplete information. A fuzzy set is a generalization of an ordinary set by allowing a degree (or grade) of membership for each element. The membership-function m(x) of a set maps each element to its degree. A membership degree is a real number on [0, 1]. In extreme cases, if the degree is 0 the element does not belong to the set, and if 1 the element belongs 100% to the set.
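As a concrete illustration of membership degrees, the short MATLAB sketch below evaluates a triangular membership function from the Fuzzy Logic Toolbox; the breakpoints [2 5 8] and the sample point are arbitrary values chosen for this example, not values taken from the paper.

% Triangular membership function: the degree rises linearly from 0 at x = 2
% to 1 at x = 5 and falls back to 0 at x = 8 (breakpoints are illustrative).
x  = 0:0.5:10;                 % universe of discourse
mu = trimf(x, [2 5 8]);        % membership degree of each x, a value in [0, 1]

% A crisp value such as x = 6 belongs to the set with degree 2/3, i.e. it is
% a partial member of the set rather than simply being in or out of it.
disp(trimf(6, [2 5 8]));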
Neural networks are a form of multiprocessor computer system with simple processing elements, a high degree of interconnection and adaptive interaction between elements; such a system is also referred to as an “artificial” neural network (ANN). According to Dr. Robert Hecht-Nielsen, a neural network is “...a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs”. There are many different kinds of learning rules used by neural networks. ANNs can learn from data and feedback and therefore have learning capabilities. Fuzzy logic models, on the other hand, are rule-based models and do not have learning capabilities; instead of learning, a fuzzy inference system performs the following operations (a minimal toolbox sketch follows the list):
• fuzzification of the input variables;
• determination of membership functions for the parameters;
• application of the fuzzy operator in the antecedent;
• implication from the antecedent to the consequent;
• defuzzification.
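A minimal sketch of these operations using the classic command-line API of the MATLAB Fuzzy Logic Toolbox is given below; the variable name, ranges, membership breakpoints and the two example rules are illustrative assumptions rather than the paper's actual rule base, and newer toolbox releases expose the same steps through object-based functions such as mamfis.

% Fuzzification: declare one input and one output, each with triangular MFs.
fis = newfis('reliabilityDemo');                     % Mamdani FIS by default
fis = addvar(fis, 'input',  'AR',          [0 10]);  % illustrative range
fis = addmf(fis, 'input',  1, 'Low',  'trimf', [0 0 5]);
fis = addmf(fis, 'input',  1, 'High', 'trimf', [5 10 10]);
fis = addvar(fis, 'output', 'Reliability', [0 1]);
fis = addmf(fis, 'output', 1, 'Low',  'trimf', [0 0 0.5]);
fis = addmf(fis, 'output', 1, 'High', 'trimf', [0.5 1 1]);

% Rule application and implication, e.g. "if AR is High then Reliability is High"
% (rule rows: input MF index, output MF index, rule weight, AND/OR connective).
fis = addrule(fis, [2 2 1 1; 1 1 1 1]);

% Defuzzification: evalfis aggregates the fired rules into a crisp output value.
crisp = evalfis(8, fis);    % classic argument order evalfis(input, fis)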
1.4. Adaptive Neuro Fuzzy Inference System (ANFIS)
ANFIS was first defined by J.-S. Roger Jang in 1992. It is a technique for learning about a data set in order to compute the membership function parameters that best allow the associated fuzzy inference system to track the given input/output data. The toolbox function “anfis” constructs a fuzzy inference system (FIS) from a given input/output data set, for which the membership function parameters are tuned (adjusted) using either a backpropagation algorithm alone or in combination with a least-squares type of method. ANFIS has the following advantages over an FIS:
• Through learning algorithms, an ANFIS can optimize the parameters of a given FIS by simulating and analyzing the mapping relation between input and output data;
• An ANFIS has a network of nodes and directional links, with learning rules associated with the network, whereas an FIS has no network structure and its behavior depends only on its membership functions;
• The learning method in ANFIS is similar to that of neural networks, whereas an FIS has no learning capability.
2. Related Work
Most of the research on software reliability engineering focuses on system testing and system-level reliability growth models. However, SOA is not taken into account in these approaches, although some soft computing approaches have been developed for estimating the reliability of service oriented systems (SOSs). Goseva-Popstojanova et al. (2001) and Gokhale (2007) did remarkable work on architecture-based empirical software reliability analysis. Significant work done in the direction of estimating the reliability of SOA is summarized below:
Danilecki, A., et al. (2011) proposed a model named ReServE, which ensures that business processes are consistently perceived by clients and services and transparently recovers the state of a business process. When a service fails, its SPU can initiate the rollback-recovery process.
Brosch, F., et al. (2010) proposed SAMM to evaluate the impact of different component topologies on system reliability; the authors conclude that not only the hardware but also different allocation configurations have an influence on the reliability prediction.
Zibin, Z., et al. (2010) proposed Collaborative Reliability Prediction of Service-Oriented Systems, a collaborative framework for predicting the reliability of service-oriented systems that employs past failure data of similar service users to make reliability predictions for the current service user.
Wang, L., et al. (2009) introduce a unified reliability modeling framework and treat service pools as backup alternatives; the reliability of simple services is addressed by considering data reliability, and the authors use Discrete Time Markov Chains (DTMCs) for analyzing the reliability of service compositions.
Wang, et al. (2006) analyzed a stock market system (SMS); the authors mapped its behavior to component failure probabilities and predicted the system reliability. In their work they derived transition probabilities from recorded transitions between components.
Tsai, et al. (2004) proposed SORM, a Service-Oriented Software Reliability Model which tries to determine the reliability of each component and their relationships. It consists of two stages: group testing to evaluate the reliability of atomic services, and evaluation of composite services through the analysis of components and their relationships; the authors used a group testing technique from the medical field to detect faults.
3. Discussion and Research Approach
This work is an extension of our previous work to identify SOA adoption trends and implementation factors and to estimate the reliability of such systems through a fuzzy inference system (FIS) using the MATLAB fuzzy toolbox; the present work extends this to provide more accurate reliability estimation by using an adaptive neuro fuzzy inference system (ANFIS). The research includes three phases as follows:
1) In the first phase, a thorough review of articles and research was carried out to identify the factors that are relevant to SOA implementation and the extent to which each factor is crucial to SOA implementation.
2) In the second phase, using the GQM technique, metrics were proposed and responses were collected from 125 people in the industry. The data, which are based on this feedback and these responses, were grouped into the following three parameters:
a) AR: adhoc requirements/dynamic binding/agility;
b) MG: migration/legacy system integration;
c) BI: business and IT collaboration.
The rules were defined for the inference engine. Three clusters were formed for the input factors (Low, Medium, and High), and five clusters were formed for the output reliability (Very Low, Low, Medium, High, and Very High). Therefore, with 3 clusters and 3 input factors, a total of 3^3 = 27 combinations arise, and these 27 sets or classifications can be used to form 27 rules in the fuzzy model (a sketch of how these combinations can be enumerated appears after this list).
3) In the third phase (i.e., the present work), we follow Sugeno-type inference, defined in the fuzzy logic toolbox, to estimate the reliability of service oriented systems.
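To illustrate how the 3^3 = 27 combinations mentioned in phase 2 can be enumerated mechanically, the sketch below builds the rule-index matrix expected by addrule; the consequent mapping (a rounded average that respects the stated proportionalities) is a placeholder assumption, since the paper's actual consequents were derived from the survey responses.

% Enumerate every (AR, MG, BI) cluster combination: 1 = Low, 2 = Medium, 3 = High.
ruleList = zeros(27, 6);            % columns: [AR MG BI output weight connective]
r = 0;
for ar = 1:3
    for mg = 1:3
        for bi = 1:3
            r = r + 1;
            % Placeholder consequent: an averaged cluster index mapped onto the
            % five output clusters (Very Low .. Very High); (4 - mg) reflects the
            % inverse relation with MG. The real mapping comes from survey data.
            out = round((ar + (4 - mg) + bi) / 9 * 5);
            out = min(max(out, 1), 5);
            ruleList(r, :) = [ar mg bi out 1 1];    % weight 1, AND connective
        end
    end
end
% ruleList can then be attached to a FIS whose three inputs each carry three MFs
% and whose output carries five MFs, e.g. fis = addrule(fis, ruleList);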
3.1. Reliability Parameters for SOA-Based Systems with Its Constituent Factors
1) AR: A system capable of fulfilling the ad hoc, on-demand, changing requirements of the market is assumed to be efficient and reliable. This is based upon the way the rule engine within the model has been trained to perform dynamic binding whenever the demand changes or arises. This also covers agility, which is an important issue when someone moves from present legacy systems to SOA-based systems. It is further concluded that the more capability the system has to handle dynamic binding/ad hoc requirements/agility, the more reliable the system is assumed to be. Therefore, SOA reliability ∝ AR.
2) MG: It is observed that, even if an SOA system is strong enough in terms of its capacity to handle the ad hoc market, it is not effective and will not guarantee system reliability if there is no provision for integrating the legacy system or migrating successfully from the old system to the new one. Moreover, it is observed that mostly small and medium enterprises (SMEs) have their SOA systems built from services developed from scratch. It is therefore concluded that more migration adversely affects system reliability. Therefore, SOA reliability ∝ 1/MG.
3) BI: Within a system, if the collaboration between business processes and strategies is aligned with IT capabilities, the system is assumed to be more reliable. Through surveys, it has been observed that even a powerful IT system will not be of much value to the organization without proper integration with its business strategies. Therefore, SOA reliability ∝ BI.
The factors described in the three parameters above assess different properties and characteristics associated with SOA model reliability. The values of these parameters cannot be used independently to measure reliability. Rather, an integrated approach that considers all three parameters and their relative impact is required for estimating a system’s overall reliability .
3.2. Proposed Approach
1) Conduct a thorough survey of literature to identify the factors that are relevant to SOA implementation and the extent to which each factor is crucial to SOA implementation.
2) Identify reliability parameters in SOA context among these factors.
3) Cluster reliability parameters into three domain clusters of reliability factors.
4) Assemble a database for the value of these factors.
5) Design an inference engine based on the rule for identifying reliability clusters.
6) Using the Sugeno system, perform the following operations (a hedged command-line sketch of this workflow appears after the list):
• Plot the number of inputs, outputs, input membership functions, and output membership functions.
• Load an FIS, or generate an FIS from the loaded data using your chosen number of MFs and rules.
• Train FIS after setting optimization method, error tolerance, and number of epochs. Training adjusts the membership function parameters and plots the training (and/or checking data) error plot(s) in the plot region.
• Test data against the FIS model.
• Compare the FIS model output against the training, checking, or testing data output.
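A hedged sketch of this train-and-test workflow with the classic command-line anfis interface follows (newer toolbox releases wrap the same settings in anfisOptions); the data file name, the 80/20 train/check split and the epoch count are illustrative assumptions rather than the study's actual settings.

% Assemble the data set: one row per observation, columns [AR MG BI reliability].
data = load('soa_reliability.dat');          % hypothetical file name
n    = size(data, 1);
trn  = data(1:round(0.8*n), :);              % assumed 80/20 train/check split
chk  = data(round(0.8*n)+1:end, :);

% Generate an initial Sugeno FIS by grid partition, three MFs per input.
initFis = genfis1(trn, 3, 'gbellmf');

% Train: anfis tunes the MF parameters by hybrid least-squares/backpropagation.
epochs = 100;                                % illustrative epoch count
[fis, trnErr, ~, chkFis, chkErr] = anfis(trn, initFis, epochs, [], chk);

% Test: evaluate the tuned FIS on the checking inputs and compare with targets.
predicted  = evalfis(chk(:, 1:3), chkFis);
avgTestErr = mean(abs(predicted - chk(:, 4)));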
We have a training data set that contains desired input/output data pairs of the target system to be modeled. These training and checking data sets are collected based on observations of the target system and are then stored in separate files. It has been observed that only the checking data set is corrupted by noise.
4. Result and Discussion
For the present modeling we used the Fuzzy Logic Toolbox neuro-adaptive learning techniques incorporated in the anfis command. The parameters can be chosen so as to tailor the membership functions to the input/output data in order to account for these types of variations in the data values. Our experiments simulated the effect of the rules with the MATLAB Fuzzy Logic Toolbox; the reliability values obtained are found to be very close to the calculated values, and thus the results justify our approach by giving better estimates in comparison to the FIS.
For the analysis of the results obtained in the experiment, we used the covariance method to compare the closeness of the values obtained in the experiment with the values collected from the original sample data set. Covariance provides a measure of the strength of the correlation between two or more sets of random variates. The covariance for two random variates X and Y, each with sample size N, is defined by the expectation value
cov(X, Y) = ⟨(X − μ_X)(Y − μ_Y)⟩ (1)

cov(X, Y) = ⟨XY⟩ − μ_X μ_Y (2)

where μ_X = ⟨X⟩ and μ_Y = ⟨Y⟩ are the respective means, which can be written out explicitly as

cov(X, Y) = (1/N) Σ_{i=1..N} (x_i − x̄)(y_i − ȳ)
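As a concrete illustration, a covariance matrix of the kind reported below can be computed with MATLAB's built-in cov function; the two vectors here are short made-up samples, not the study's actual ANFIS and original outputs.

% Hypothetical reliability estimates: ANFIS output vs. original observations.
anfisOut = [0.62 0.71 0.55 0.80 0.68];    % made-up sample values
origOut  = [0.60 0.73 0.54 0.79 0.70];

C = cov(anfisOut, origOut);    % 2x2 matrix: variances on the diagonal and the
                               % (shared) covariance in the off-diagonal entries
% A positive off-diagonal entry indicates that the two series move together,
% which is how the paper reads its positive covariance as "close to original".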
The comparison of the ANFIS output with the original data set is shown in Table 1.
Covariance matrix for (ANFIS, original) =

[0.0137 0.0135; 0.0135 0.0137]

Average testing error = 0.021%. Since the covariance is positive, we can say that we obtain results close to the original values. We generated a plot of the test data against the FIS output; the FIS was trained after setting the optimization method, error tolerance and number of epochs. Figure 1 shows the plot of the testing data against the FIS: the testing data appear on the plot in blue, while the FIS output is shown in red.
After creating the ANFIS model, we compared the output reliability values for different input sets with the original values. We calculated the average testing error of the output obtained by the FIS and by the ANFIS against the original output. The ANFIS reduces the error to 0.021%; hence, the ANFIS performs better than the FIS. In ANFIS, we first trained the FIS, and on the basis of the training data the rules were formed to produce the output of the trained model. We observed during the experiments that for large data sets its execution is somewhat more complex. Our results show that the ANFIS model gives a more accurate measure of reliability than the FIS model. Table 2 shows the comparison for the FIS, ANFIS and original values, and the graph in Figure 2 indicates that the ANFIS output is closer to the original values than the FIS output.
Table 1. Comparison of original data set with ANFIS using Sugeno method.
Figure 1. Plot of testing data: Original vs. ANFIS.
Table 2. Comparison of FIS, ANFIS and original values.
Figure 2. The ANFIS output is much closer to the original values than the FIS output.
The inference system, inference rules, fuzzy inference system, rule viewer and surface viewer for the ANFIS using the Sugeno method are shown in the appendices.
5. Conclusion and Future Work
This paper proposes a neuro fuzzy approach for estimating the reliability of service oriented systems. The proposed approach is based on an ANFIS that requires less computational time than the previously proposed FIS and other traditional approaches. Our results show that the ANFIS gives more accurate estimates than the FIS. Future work may identify other relevant factors that should be used, but currently we only have data available for the factors discussed. Our experience documented in this paper will be helpful for practitioners in collecting the data necessary for reliability prediction. Researchers are provided with a demonstration of how the fuzzy logic toolbox can be used to estimate the reliability of such systems on the basis of certain SOA features.
References
- Kirti, T. and Arun, S. (2012) A Rule-Based Approach for Estimating the Reliability of Component Based Systems. Advances in Engineering Software, Elsevier, Amsterdam, 24-29.
- IEEE Standard 610.12-1990 (2014) IEEE Standard Glossary of Software Engineering Terminology. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=159342
- IEEE Standard 982.1-1988 (2014) IEEE Standard Dictionary of Measures to Produce Reliable Software. http://www.baskent.edu.tr/~zaktas/courses/Bil573/IEEE_standards/982_1_2005.pdf http://standards.ieee.org/findstds/standard/982.1-1988.html
- Ashish, S., Aggarwal, H. and Singla, A. (2012) Service Oriented Architecture Adoption Trends: A Critical Survey at 5th International Conference on Contemporary Computing. Proceedings in Communications in Computer and Information Science, Springer, Berlin.
- Bianco, P., Kotermanski, R. and Merson, P. (2007) Evaluating a Service-Oriented Architecture. Software Engineering Institute. Prepared for the SEI Administrative Agent ESC/XPK, 5 Eglin Street, Hanscom AFB.
- Gokhale S.S. (2007) Architecture-Based Software Reliability Analysis: Overview and Limitations. IEEE Transactions on Dependable and Secure Computing, 4, 132-140. http://dx.doi.org/10.1109/TDSC.2007.4
- Goseva-Popstojanova K. and Trivedi, K.S. (2001) Architecture-Based Approach to Reliability Assessment of Software Systems. Perform Evaluation, 45, 179-204. http://dx.doi.org/10.1016/S0166-5316(01)00034-7
- Danilecki, A., Holenko, M., Kobusinska, A., Szychowiak, M. and Zierhoffer, P. (2011) ReServE Service: An Approach to Increase Reliability in Service Oriented Systems. Parallel Computing Technologies, PaCT 2011, LNCS 6873, 244-256.
- Brosch F., Koziolek, H., Buhnova, B. and Reussner, R. (2010) Parameterized Reliability Prediction for ComponentBased Software Architectures. Proceedings of the 6th International Conference on the Quality of Software Architectures (QoSA’10), Springer, New York, 36-51.
- Zheng, Z. and Lyu, M.R. (2010) Collaborative Reliability Prediction of Service-Oriented Systems. 2010 ACM/IEEE 32nd International Conference on Software Engineering, Cape Town, 2-8 May 2010, 35-44.
- Wang W.L., Pan, D. and Chen, M.H. (2006) Architecture-Based Software Reliability Modeling. Journal of Systems and Software, 79, 132-146. http://dx.doi.org/10.1016/j.jss.2005.09.004
- Wang, L., Bai, X. and Zhou, L. (2009) A Hierarchical Reliability Model of Service-Based Software System. 33rd Annual IEEE International Computer Software and Applications Conference, Seattle, Washington DC, 20-24 July 2009, 199-208.
- Tsai, W., Zhang, D., Chen, Y., Huang, H., Paul, R. and Liao, N. (2004) A Software Reliability Model for Web Services. 8th IASTED International Conference on Software Engineering and Applications, Cambridge, 8-11 November 2004.
- Ashish, S., Aggarwal, H. and Singla, A. (2014) Estimating Reliability of Service-Oriented Systems: A Rule-Based Approach. International Journal of Innovative Computing, Information and Control, ICIC International, 10, 1111-1120.
- Gershteyn, Y. and Perman, L. (2003) Matlab: ANFIS Toolbox. http://csrit.gershteyn.net/courses/nn/Presentations/3-MatLab_ANFIS.pdf
- Arikan, S. (2012) Automatic Reliability Management in SOA-Based Critical Systems. European Conference on Service-Oriented and Cloud Computing, 1-6. http://dspace.icsy.de:12000/dspace/bitstream/123456789/367/1/reliability.pdf
- Becker S. (2008) Coupled Model Transformations for QoS Enabled Component-Based Software Design. Ph.D. Thesis, University of Oldenburg, Oldenburg. | https://file.scirp.org/Html/5-9301892_46672.htm |
This write-up covers the different cardiac cycle phases one by one. The functioning of the heart can be understood through the actions which take place in these phases.
The cardiac cycle is a series of events in which the heart beats in order to carry out the main functions of receiving and pumping out blood. We are now going to learn more about how the heart pumps blood to our entire body. The cardiac cycle is divided into two phases, i.e., the diastole and systole.
All the events which occur during the cardiac cycle are related to blood pressure and blood flow, which are driven by the heartbeat. The heart rate is derived from the frequency at which the cardiac cycle occurs. Let us understand more about these phases here.
Phases of Cardiac Cycle
As stated earlier, the length of a cardiac cycle is divided into the diastole and systole phases. The phase in which the heart muscles contract is called the systole, and the phase in which the muscles relax after the systole is known as the diastole. The diastole and systole too are sub-divided into first and second phases. They are explained here:
First Diastole Phase
In the first diastole phase, the heart receives deoxygenated blood from the superior and inferior venae cavae. For blood to enter the heart through the superior and inferior venae cavae, the ventricles and atria relax. With the relaxation of the atria and ventricles, the atrioventricular valve opens. This facilitates the flow of blood to the ventricles.
All the given events are followed by contraction of the SA node*, which in turn contracts the atria. Blood is then transferred from the right atrium to the right ventricle. A valve known as the tricuspid valve is present between the right ventricle and right atrium. It makes sure that the blood flows unidirectionally, and there is no backflow.
First Systole Phase
In the first systole phase, the right ventricle of the heart contracts as a result of impulses received from the 'Purkinje fibers'*. It is followed by the closure of the atrioventricular valves and opening of the semilunar valves.
These actions cause the de-oxygenated blood to get pumped into the pulmonary artery, which carries it to the lungs. The oxygenated blood is then returned to the heart via the pulmonary veins.
Second Diastole Phase
The beginning of the second diastole phase is marked by the closure of the semilunar valves and opening of the atrioventricular valves. The oxygenated blood brought by the pulmonary veins gets accumulated in the left atrium. Simultaneously, the blood present in the vena cava also gets transferred to the right atrium.
This is followed by contraction of the SA node*, which in turn causes contraction of the atria. All these actions cause the left atrium depositing the blood into the left ventricle. The mitral valve prevents the backflow of the blood.
Second Systole Phase
Closing of the atrioventricular valves and opening of the semilunar valves takes place at the beginning of the second systole phase. Contraction of the left ventricle takes place due to the Purkinje fibers*. This is followed by pumping of oxygenated blood into the aorta.
The blood cannot flow back to the left ventricle from the aorta, since the aortic valve prevents this from happening. The purified/oxygenated blood is distributed to various parts of the body through the aorta.
Impure/deoxygenated blood is brought to the heart by the network of veins by means of the venae cavae. Knowledge of cardiac muscle function should help in understanding the working of the heart better.
*Purkinje Fibers:
These are a specialized group of cells which synchronize the heartbeat after receiving impulses from the SA node.
*SA Node:
The sinoatrial node is a cluster of specialized cells which acts as a natural pacemaker. It is present in the right atrium of the heart. It is because of this node that the heart maintains a regular rhythm.
For 30 years, JNCC has provided robust evidence and trusted advice on nature conservation.
We are well-placed to synthesise evidence and provide advice relating to the natural environment, utilising our unique combination of strengths, which include:
- Deep expertise in biodiversity and how it underpins the ecosystem services that benefit society and the economy.
- A UK and international role that supports devolved implementation of country-specific priorities by providing cost-effective delivery, global leadership and solutions to cross-border environmental challenges.
- A position at the interface between science and policy, providing evidence and advice to enable governments and others to make informed decisions.
- Well-established partnerships with the country nature conservation bodies, other government bodies, academia, research organisations, businesses and NGOs, utilising our convening power to bring organisations together.
- People with a blend of scientific and technical skills, including environmental science, UK and international environmental policy, and data modelling and analysis, that can be deployed across terrestrial, freshwater and marine environments.
Our Staff and teams
We employ approximately 200 scientific, technical and support staff. Our teams work co-operatively with each other on many of our projects, and carry out a wide range of activities in support of our role in providing high-quality evidence and advice on the natural environment to governments and other stakeholders.
Our Corporate Services Teams provide support services, based on specialised knowledge and best practice to the whole organisation. Our corporate services teams include: Finance and Planning, Facilities, Health and Safety, Communications and Corporate Affairs, Business Development and Marketing, and Human Resources.
The Marine Monitoring Team undertakes a range of activities, including: organising offshore seabed surveys with partners to gather information to support designation, monitoring and assessment of offshore MPAs; development of data management and survey standards, guidelines and methods at a UK and international level; the provision of expert advice to national fora; and co-ordination of the UK Marine Monitoring Programme working under the UK Marine Monitoring and Assessment Strategy with partners through the Healthy and Biologically Diverse Seas Evidence Group.
The work of the Marine Evidence Team includes: enhancing understanding of the marine environment through fine- and broad-scale mapping of marine habitats and their relationships to physical and oceanographic parameters; improving understanding of human impacts on the marine environment through spatial mapping of human activity footprints and their pressures, assessment of habitat and species sensitivity and using this information together to assess vulnerability; maintenance of the national benthic marine database (Marine Recorder) and the national marine habitat classification; and provision of expert advice to international fora.
The Marine Ecosystems Team is responsible for delivering advice on the designation, management, monitoring and assessment of offshore MPAs in UK waters, as well as providing advice on best practice around MPAs through a range of international fora such as OSPAR and the Convention on Biological Diversity; developing and implementing modelling tools, metrics, and indicators to measure the condition, health and trends of marine ecosystems to support implementation of management measures, and the production of national and international assessments; exploring the role of marine natural capital in supporting environmental decision making; and providing advice to governments in their response to reporting requirements under a range of Multilateral Environmental Agreements.
The Marine Management Team provides evidence-based advice on the environmental impacts of activities taking place in the offshore marine environment (beyond 12 nautical miles from the coast) to appropriate authorities responsible for their regulation. The advisory functions of the team are centred on helping government(s), regulators and developers meet the requirements of relevant legislation. The majority of the team’s work focuses on advising on impacts of offshore oil and gas, renewable energy, aggregate extraction and fisheries, as well as other industry sectors. The Team also advise on the development of marine spatial plans. As conservation advisers, we have a significant role to play in the management of the offshore marine environment and we work closely with our partners and wider stakeholders to deliver sustainable management approaches. This includes advising on management of the network of MPAs in the UK offshore marine area
The Marine Species Team provides expert advice on conservation of marine species in UK, European and international waters, with a focus on marine mammals (particularly cetaceans) and birds. The team works closely with other teams in the organisation. Our marine bird work covers monitoring on land and at sea; provision of population estimates and trends in abundance and demographics; and the interpretation of evidence to advise on conservation and management, especially working closely with the Marine Management Team on assessing impacts from offshore industries and fisheries. We also undertake seabird research, including through collaboration with scientists in UK and European universities. Marine mammal work is focussed on cetaceans, with some work on seals. It covers supporting and advising on monitoring, status assessment and reporting; interpretation of evidence to provide advice on protected areas management and conservation more widely; and coordination of and collaboration with experts across the UK and Europe to develop evidence, advice and conservation strategies.
The Ecosystems Analysis Team works in partnership with a number of organisations. The team brings together three elements of monitoring and surveillance work to provide scientific nature conservation advice:
- A set of long-term partnerships with bodies undertaking species surveillance in terrestrial, coastal and freshwater ecosystems through networks of volunteers
- An analysis and modelling capability which brings together data on species, habitats and natural capital to inform policy making, implementation and monitoring in the UK and internationally
- An Earth observation data processing and analytical capability, which develops processing standards and methods to facilitate use of these data sources by government and country nature conservation body partners, as well as for use within the organisation for habitat mapping and natural capital assessments.
The International Implementation Team has responsibility for advising on the Ecosystem Approach, natural capital and ecosystem services, environmental economics and managing major thematic work related to these areas within the UK’s Overseas Territories and internationally. The team provides technical assistance to UKOT Governments to support biodiversity and wider environmental management strategies, and scientific advice to the UK Government who provide support to the OTs in relation to the environment and economic security. Within JNCC, the team works collaboratively to deliver technical implementation advice. The team also co-ordinates the Intergovernmental Platform for Biodiversity and Ecosystem Services (IPBES) UK Stakeholder Hub.
The International Advice Team provides, amongst other things, advice across the range of multi-lateral environmental agreements (MEAs) to which the UK is party. These include agreements such as the Convention on Biological Diversity (CBD), the Ramsar Convention on Wetlands, the Convention on Migratory Species (CMS), and the Convention on International Trade in Endangered Species (CITES), for which JNCC is the UK CITES Scientific Authority. The team also advises on other international environment issues, legislation and policy; leads on preparing government reports required under international obligations and contributes to the production of UK biodiversity indicators.
The Nature Conservation Policy and Advice Team plays a key role in nature conservation in the UK, working with partners in the country nature conservation bodies, the UK and devolved governments and the wider nature conservation community. The team maintains an overview of UK nature conservation policy and legislation; facilitates, convenes and, where required, co-ordinates joint UK working on key topics; and works with UK countries on areas of priority delivery where JNCC contributes complementary capabilities of evidence and advice. Within JNCC, the NCPA team works collaboratively across the organisation to deliver shared solutions, for example on multilateral environmental agreements, and in developing evidence-based approaches to addressing environmental threats and opportunities.
The Digital and Data Solutions Team is responsible for delivering a diverse portfolio of projects within JNCC. The Team maintains the organisation’s technical infrastructure by implementing systems, developing innovative approaches to data, and providing policy and advice on information management. DDS also has an internal governance function in areas such as information security and data protection.
Oxalate is endogenously produced as an end product of normal cellular metabolism and is also absorbed from a typical diet. Oxalate is present in many foods, especially healthy foods like plants, including green leafy vegetables, fruits and nuts, because plants utilize oxalate to store calcium. Humans lack the innate capacity to digest oxalate and primarily depend on renal excretion to eliminate it from the body. Although oxalate has no identified biological function, it is known to damage the kidney when present in excess amounts, a condition called hyperoxaluria.
Hyperoxaluria has two main causes. Secondary hyperoxaluria is caused by increased intestinal oxalate absorption associated with underlying gastrointestinal conditions such as inflammatory bowel disease, Crohn’s disease, and short bowel syndrome or bariatric surgery (enteric hyperoxaluria). Secondary hyperoxaluria may also occur without an identified cause (idiopathic hyperoxaluria).
Primary hyperoxaluria is an orphan genetic disease that leads to increased endogenous oxalate production by the liver due to genetic defects in certain enzymes involved in carbohydrate metabolism.
Hyperuricemia, or elevated levels of uric acid in the blood, results from overproduction or insufficient excretion of urate, or often a combination of the two. Humans lack urate oxidase, an enzyme that degrades uric acid in other animals. Hyperuricemia can be a predisposing condition for gout and kidney stones, and is also intricately linked with various metabolic disorders, including hypertension, chronic kidney disease or CKD, glucose intolerance, dyslipidemia, insulin resistance and obesity.
What is an orphan designation?
Are these products available now?
Reloxaliase (formerly ALLN-177), Allena’s lead product candidate that targets oxalate, is currently being studied in clinical trials. It is not commercially available now. For information about clinical trials, please see www.clinicaltrials.gov. | http://www.allenapharma.com/faqs |
The COVID-19 pandemic will likely recede only through development and distribution of an effective vaccine. Although there are many unknowns surrounding COVID-19 vaccine development, vaccine demand will likely outstrip early supply, making prospective planning for vaccine allocation critical for ensuring the ethical distribution of COVID-19 vaccines. Here, we propose three central goals for COVID-19 vaccination campaigns: to reduce morbidity and mortality, to minimise additional economic and societal burdens related to the pandemic and to narrow unjust health inequalities. We evaluate five prioritisation approaches, assess their likely impact on advancing the three goals of vaccine allocation and identify open scientific questions that may alter their outcomes. We argue that no single prioritisation approach will advance all three goals. Instead, we propose a multipronged approach that considers the risk of serious COVID-19 illness, instrumental value and the risk of transmission, and is guided by future research on COVID-19-specific clinical and vaccine characteristics. While we focus this assessment on the USA, our analysis can inform allocation in other contexts.
- allocation of health care resources
- clinical ethics
- health care for specific diseases/groups
- public health ethics
This article is made freely available for use in accordance with BMJ’s website terms and conditions for the duration of the covid-19 pandemic or until otherwise determined by BMJ. You may use, download and print the article for any lawful, non-commercial purpose (including text and data mining) provided that all copyright notices and trade marks are retained. https://bmj.com/coronavirus/usage
Introduction
The COVID-19 pandemic has critically strained nearly every aspect of society within the USA and across the globe. Healthcare organisations are scrambling to stretch limited resources, and the rapid growth in cases of the disease has precipitated the need for tremendous planning. COVID-19 has also impacted national economies, causing rates of unemployment and business closures not seen since the Great Depression.1 While promising therapies are being researched, many experts speculate that widespread vaccination ultimately will be required to enable significant recovery from the pandemic.2
With the elucidation of the genetic sequence of SARS-CoV-2, the virus responsible for COVID-19, major strides have been made in vaccine development.3 As of November 2020, there are over 300 COVID-19 vaccine candidates worldwide.4 Among these vaccines, the methodologies employed to create an immunological response are highly variable and include the use of nucleic acids, viral-like particles, peptides, viral vectors, recombinant proteins and inactivated virus.5 Several vaccine candidates have moved forward into clinical testing, and in the USA, vaccines from Pfizer/BioNTech and Moderna are scheduled to be evaluated for emergency use authorisation by the US Food and Drug Administration (FDA).6 Despite promising advancements in vaccine development, the timeline for public availability remains uncertain, pending adequate safety testing and rigorous proof of effectiveness.7 This is further complicated by the fact that a number COVID-19 clinical characteristics relevant to vaccine efforts are still unclear, and vaccine manufacturing and distribution issues such as adequate storage/transportation may further delay dissemination.8
Although there are many unknowns surrounding COVID-19 vaccine development, proactive planning is critical to ensure equitable and prudent distribution. Healthcare leaders have a moral duty to plan for the challenges presented by this pandemic. Even with unprecedented speed in vaccine development and testing, epidemiologists anticipate there will be a major shortage of COVID-19 vaccines, both within the USA and worldwide.9 Discussion surrounding vaccine allocation both nationally and globally has already begun, and the Centers for Disease Control and Prevention (CDC)’s Advisory Committee on Immunisation Practices is actively deliberating allocation.10 11 By prospectively evaluating the factors that will impact vaccine allocation, we will be better equipped to ensure distribution best addresses the substantial health, economic and social impacts of COVID-19.
To this end, we identify the ethical goals that vaccine distribution should aim to promote, assess the likely impact of different prioritisation strategies on achieving these goals and identify key empirical questions that will shape the likely outcomes associated with different national vaccine prioritisation strategies. While we place our discussion of these topics primarily in the context of the USA, COVID-19 vaccine allocation will take place internationally. Consequently, it is important to examine context-dependent factors within each country when considering the strategies and recommendations presented.
Ethical goals for distributing COVID-19 vaccines
The first step in assessing the ethics of vaccine allocation for COVID-19 is to consider the intended goals of this endeavour.12 We propose that there are three central goals for future COVID-19 vaccination campaigns, none of which is lexically prior to another. The first is the reduction of morbidity and mortality. This is consistent with position of the CDC, which asserts that the primary purpose of vaccine campaigns is ultimately to reduce the impact of disease on health.13 The second is to minimise the pandemic’s effects on societal infrastructure and the economy. This goal is particularly salient for COVID-19, given the magnitude of the economic toll wrought by the pandemic and the importance of maintaining societal infrastructure.12 13 The third is to narrow unjust health inequalities, consistent with the view that the moral foundation of public health is social justice and, therefore, the reduction of inequalities faced by systematically disadvantaged groups.14 This goal also finds particular resonance in the COVID-19 context, given the disproportionate health and economic burdens of the pandemic borne by racial and ethnic minorities as well as those of low socioeconomic status.15
Identifying these goals provides criteria to assess the ethical implications of different vaccine allocation strategies. However, even with these defined objectives in place, numerous complexities remain with respect to how these goals can be best achieved and how potential trade-offs should be weighed against one another. Below, we outline, in no order of priority, five proposed prioritisation approaches to guide vaccine allocation decisions and evaluate the likelihood of each to advance the aforementioned goals for future COVID-19 vaccination programmes.
Prioritisation approaches to guide COVID-19 vaccine allocation
The first proposed strategy is to prioritise those most vulnerable to morbidity and mortality from COVID-19. This approach has been applied across multiple previous pandemics and is a central feature of CDC recommendations for potential influenza pandemics.13 This prioritisation strategy also features prominently in contemporary guidance on allocating scarce medical resources during the COVID-19 pandemic.16 17 Based on the current epidemiological data available for COVID-19, prioritising those most vulnerable to morbidity and mortality would largely entail vaccinating those above 65 years of age, who represent as much as 73.6% of COVID-19-related deaths,18 and those with comorbidities such as hypertension, diabetes, cancer, cardiovascular disease and cerebrovascular disease.19 This approach most directly aligns with the goal to reduce COVID-19’s health impacts. This strategy may also align with the goal of narrowing unjust health inequalities, given communities of colour have higher COVID-19 infection, hospitalisation and mortality rates, reflecting numerous background systemic injustices, ranging from economic marginalisation to racial discrimination in healthcare systems, which put these populations at greater ‘risk of risks’.15 20 Furthermore, as the current economic toll of the pandemic is at least partially a reflection of societal measures to protect the elderly, such as population-wide stay-at-home orders to minimise viral transmission to those vulnerable, this prioritisation strategy may move towards economic revival. However, it may not be the optimal approach to reducing the pandemic’s economic burden, as non-fatal cases will likely continue to propagate, thereby perpetuating the pandemic’s spread and the corresponding strain on healthcare systems.
The second strategy is prioritising by life-cycle, so as to ensure the greatest number of individuals have the opportunity to pass through all stages of life (childhood through old age).17 21 This approach would entail vaccinating those younger than 65 years of age to ensure these individuals do not have a life-cycle cut short by COVID-19. Emerging evidence associating multisystem inflammatory syndrome with COVID-19 in children may provide some support for this approach, given accumulating evidence that children can die from sequelae of COVID-19.22 However, this approach is likely in tension with the goal of minimising overall mortality and morbidity, as current data suggest that children and adults under 65 years of age who do not have comorbidities are at lower risk of mortality from COVID-19.23 Similarly, prioritising by life-cycle does not address social inequalities exacerbated by COVID-19. However, this strategy may be consistent with the goal to revive the economy, as those younger than 65 years are more likely to be employed. Vaccination could therefore enable these individuals to return to participating in the exchange of goods and services, although this may have limited success given the strategy would still leave the elderly and vulnerable at risk and require continued societal measures to protect these populations (ie, stay-at-home orders). Yet, prioritising younger individuals may have the notable benefit of supporting return to in-person education, securing both extensive health and social benefits for students themselves as well as economic benefits for families, given the disruptions to work presented by the challenges of virtual learning.
The third strategy is prioritising individuals who provide ‘instrumental value’. This would entail vaccinating essential healthcare workers, individuals who provide life-saving services (ie, fire department workers, emergency medical services, police and so on) and workers who provide services that are necessary for society to function as normally as possible (ie, food industry employees, essential airport personnel and so on).24 Prioritising these populations is consistent with the principle of reciprocity, recognising the additional risks assumed by essential workers to maintain critical services for society during the pandemic. This approach conflicts with allocating vaccines according to other considerations discussed here (such as vaccinating those with the greatest risk of mortality). Nevertheless, it may be consistent with the goal of minimising the mortality and morbidity of COVID-19—both by ensuring that those who play a key role in the ongoing COVID-19 response are able to continue to serve in this capacity and by reducing the risk of spread, as essential workers generally have far more social contacts than others.25 This prioritisation approach is also consistent with the goal of maintaining the economy and minimising societal impacts of the pandemic, given workers are needed to continue operating essential services. Furthermore, protecting these individuals would most directly preserve the healthcare system as a whole, an important consideration given the persistent impact of COVID-19. Finally, 70% of essential workers do not have college degrees, and 45% are of ethnic minorities, suggesting prioritising essential workers may also support a commitment to addressing social inequalities.26
The fourth prioritisation strategy is that of ensuring equal access. This approach may involve giving equal priority to all individuals for vaccination, respecting each person’s inherent moral equality. On the surface, this may seem achievable by implementing a first-come, first-served policy to vaccine administration or by employing a lottery system to select individuals to receive vaccination. However, such policies fail to acknowledge the background structural inequalities that impact certain groups’ abilities to even access the queue, as illustrated by disparities in access to COVID-19 testing resources.27 Thus, this approach is unlikely to achieve the goal of narrowing health disparities and may even exacerbate them, given that equal access is not equivalent to equitable access. Furthermore, by not targeting those groups most likely to secure the greatest health or economic benefits, an ‘equal access’ policy is unlikely to achieve any of the three outlined goals.
Fifth is prioritising the reduction of spread of COVID-19 infections. One approach to this strategy is to reduce infection spread within confined communities, which would entail vaccinating groups of individuals who are in very close contact with one another, such as nursing homes and prisons. Another approach is to reduce spread through the community as a whole, which would entail vaccinating those most likely to infect large numbers of individuals (such as those regularly attending large community events). This may also include those whose jobs require in-person contact with many others. Application of this approach may be consistent with efforts to reduce the total number of cases and accelerate the reopening of the nation (and therefore, revival of the economy). However, it may not align with the overall goal of reducing morbidity and mortality, given the heterogeneity of risk factors for severe COVID-related illnesses across the various communities with elevated risk of spread. While prioritising certain groups at elevated risk for spreading COVID-19 may narrow some health disparities, the particular impact will largely depend on the specific subpopulations targeted for vaccination.
Major unknowns that may alter the outcomes of different prioritisation strategies
The aforementioned prioritisation strategies can provide guidance in decision-making regarding the ethical allocation of COVID-19 vaccines. However, the specific application of these strategies and their implications for achieving the goals of COVID-19 vaccination inevitably rest on several empirical features, many of which are as yet unknown. It is vital to consider how this information, once elucidated, may influence the ethical dimensions of allocation decisions. The major variables that may impact vaccine distribution can be organised into three categories: COVID-19 clinical characteristics, vaccine clinical characteristics and miscellaneous factors.
While many clinical characteristics of COVID-19 have been discovered, several relevant to vaccine allocation remain unknown. Namely, the risk of children spreading the disease to others is still uncertain, and this may impact the prioritisation of children for vaccination. This is especially relevant given many children are returning to in-person schools in the USA. Preliminary data show that children may have significantly higher viral loads compared with hospitalised adults with COVID-19 but may not spread the virus as readily as adults.28 29 Furthermore, it is likely that children will require separate, pediatric-specific vaccine clinical trials prior to widespread distribution. Additionally, discovering the degree to which individuals are immune to COVID-19 following recovery from infection may change allocation procedures and decisions. For example, if those previously infected are conferred long-term immunity following recovery, testing individuals for immunity prior to vaccination may be warranted, and those with immunity may not require immediate vaccination.
Additional unknowns remain regarding vaccine clinical characteristics, including which COVID-19 vaccine(s) will be the first approved for distribution. Yet the vaccine type may impact dosing and ‘booster’ schedules for given individuals (ie, immunocompromised people may require higher strength or additional doses, as is the case for certain immunocompromised individuals receiving the current hepatitis B vaccine). Similarly, the need for separate vaccines based on patient age (such as for the influenza vaccine), the timeline of development of such vaccines and whether there will be differences in COVID-19 vaccine efficacy based on patient’s age or demographics may impact decisions on who is initially vaccinated.30 Furthermore, the number of doses required to achieve immunity in an individual is unknown, with the majority of candidate vaccines requiring either two or three doses spread across 2–8 weeks, including the front-running Pfizer/BioNTech and Moderna vaccines.5 Similarly, the length of immunity that will be conferred to recipients following vaccination is unclear. Although immunity from infection is the intent of vaccination, it is also currently unknown whether COVID-19 vaccines will prevent infection and transmission among those vaccinated or simply prevent symptomatic disease. This is an important consideration given some allocation strategies proposed rest on the former while others focus on reducing the latter.
Given that multiple initial doses will likely be needed, the impact of any prioritisation approach on the goal of minimising morbidity and mortality may also be influenced by the feasibility of follow-up. This can present a conflict between the goal of advancing overall health and that of narrowing health disparities, given that at least some groups that may present the greatest challenges for reliable follow-up are also those who face systematic patterns of disadvantage.31 Differences in side-effect profile based on age and demographics may also impact the outcomes resulting from different prioritisation strategies.32 For example, if vaccine side-effect risks are too high among the elderly, these individuals may receive lower priority for vaccination, and individuals who come into frequent close contact with the elderly may be prioritised instead. If multiple vaccines are approved near one another temporally, careful consideration of the characteristics of each vaccine (ie, dosing schedules, side-effect profiles) will be vital in order to determine whether one may be better suited for given populations over another. Furthermore, if multiple vaccines demonstrate differing efficacy overall (ie, one vaccine demonstrates 90% overall efficacy while another demonstrates 80%), this may introduce an ethical dilemma surrounding who will receive which particular vaccine. If this occurs, clearly describing the reasoning behind distribution will be paramount for promoting transparency and public buy-in for these allocation decisions.
Underlying all vaccine allocation plans is the broader context of national reopening. With fluid changes to quarantine requirements and shelter-in-place mandates, vaccination priorities may change depending on which groups of people can reliably self-quarantine and reduce their disease risk until a vaccine is available for them. This concept has important implications for health equity, as disparities exist in the ability to self-quarantine based on employment obligations, among other reasons. In addition, nations may prioritise reopening certain establishments, such as schools, prior to others (eg, bars, or other areas for socialisation), under the argument of differential utility to society. In this case, vaccination priorities may need to be adjusted to protect those frequenting establishments with high population densities. Vaccine allocation will also be impacted by the discovery of satisfactory treatments for COVID-19. Bamlanivimab, a neutralising monoclonal antibody, currently has emergency use authorisation from the FDA given its promise among non-hospitalised patients with COVID-19 at risk of disease progression.33 As research into novel treatments continues, the focus of COVID-19 vaccine allocation may shift to reducing morbidity and mortality in individuals who, based on currently unknown clinical factors, are unlikely to respond to treatments. Finally, questions remain about vaccine refusals. Partisan differences in attitudes towards COVID-19 vaccination have been steadily growing, while intention to vaccinate has been declining, with as few as 50% of Americans indicating they would elect to be vaccinated against COVID-19.34 This percentage is not high enough to achieve herd immunity, suggesting the critical need to develop evidence-based strategies that promote support for COVID-19 vaccines among vaccine-hesitant groups.
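To see why roughly 50% uptake falls short, a back-of-the-envelope calculation with the standard herd-immunity threshold 1 − 1/R0 is illustrative. The sketch below is not drawn from the article; the R0 and vaccine-efficacy values are assumptions chosen only to show the arithmetic.

```python
# Illustrative only: herd-immunity threshold under assumed R0 and vaccine efficacy.
# Threshold fraction immune = 1 - 1/R0; required coverage = threshold / efficacy.

def required_coverage(r0: float, efficacy: float) -> float:
    """Fraction of the population that must be vaccinated, assuming vaccination
    is the only source of immunity and that protection blocks transmission."""
    threshold = 1.0 - 1.0 / r0          # herd-immunity threshold (fraction immune)
    return threshold / efficacy          # coverage needed given imperfect efficacy

for r0 in (2.5, 3.0):                    # assumed reproduction numbers
    for eff in (0.9, 0.7):               # assumed efficacy against transmission
        cov = required_coverage(r0, eff)
        print(f"R0={r0}, efficacy={eff:.0%}: need ~{cov:.0%} of population vaccinated")
# With R0=3.0 and 90% efficacy, roughly 74% coverage is needed -- well above a 50% intent to vaccinate.
```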
Recommendations for the distribution of COVID-19 vaccines
As the above analysis indicates, a single prioritisation approach is unlikely to provide comprehensive guidance for COVID-19 vaccine distribution. However, by placing the strategies and current unknowns in the context of the proposed goals of a COVID-19 vaccination programme, we are able to reject approaches that do not reasonably achieve multiple programme goals. Prioritising equal access to vaccines, such as via a first-come first-served or lottery system, would likely not achieve any outlined programme goals. Prioritising life-cycles would likely accelerate reopening of the nation, as this approach would entail vaccinating those <65 years of age; however, this does not align with reducing morbidity and mortality nor does it address health inequities underscored by COVID-19.
To most closely achieve COVID-19 vaccination programme goals, a combination of the other prioritisation strategies likely will be needed. This is similar to what has been proposed previously for the allocation of ventilators and other scarce medical supplies during the pandemic.16 17 For COVID-19 vaccines, the extent to which these strategies are followed may also depend on whether multiple vaccines are approved for distribution simultaneously, increasing supply. Prioritising a combination of individuals above age 65 years, those with comorbidities, individuals who provide ‘instrumental value’ and those at the highest risk of spreading disease would be the most prudent approach. This aligns with CDC guidance on vaccine prioritisation in preparation for possible influenza pandemics.13 Individuals who belong to intersections of these groups should be vaccinated first so as to maximise the immediate benefit. Following this, the application to specific groups will likely vary based on future empirical data, as detailed by the unknowns discussed above.
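As a purely illustrative sketch of how vaccinating the "intersections of these groups" first might be operationalised, the toy function below scores individuals by how many of the combined criteria they meet. The criteria, data and equal weighting are hypothetical assumptions for illustration, not a rule endorsed by the authors or by CDC guidance.

```python
# Hypothetical sketch: rank individuals by how many priority criteria they satisfy.
from dataclasses import dataclass

@dataclass
class Person:
    age: int
    has_comorbidity: bool
    essential_worker: bool
    high_transmission_setting: bool   # e.g. congregate living or a high-contact job

def priority_score(p: Person) -> int:
    """Count how many of the combined prioritisation criteria apply to this person."""
    return sum([
        p.age >= 65,
        p.has_comorbidity,
        p.essential_worker,
        p.high_transmission_setting,
    ])

people = [
    Person(70, True, False, True),    # elderly, comorbid, congregate setting
    Person(45, False, True, True),    # essential worker with many contacts
    Person(30, False, False, False),  # no listed risk factors
]
# Those meeting the most criteria (the "intersections") are vaccinated first.
for person in sorted(people, key=priority_score, reverse=True):
    print(priority_score(person), person)
```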
Our recommendations align largely and should be taken together with the growing discourse surrounding COVID-19 vaccine distribution. In interim guidance provided for vaccine allocation in the USA, Toner et al discuss similar goals for a COVID-19 vaccination programme as presented here, advocating for prioritising essential workers as well as those at greatest risk of developing severe illness and death, while balancing distribution to those with elevated risk of infection and low healthcare access.11 They also discuss that vaccination programmes should aim to promote legitimacy, incorporate the diverse views in a given society and work together with community members. While the many unknowns in vaccine development may shift the practical aspects of vaccine allocation, these goals are achievable in any of the strategies discussed here, as engaging the community is vital to medical decisions made that substantially impact society. This is particularly salient for COVID-19 vaccination programmes given that programme success may rest on public willingness to accept the chosen allocation strategy. These considerations underscore the importance of transparency in communicating decisions regarding vaccine allocation strategies as well as the reasons behind those decisions and how they reflect community values.
Of note, our analysis does not specifically consider questions regarding allocation to COVID-19 vaccine clinical trial participants, including whether specific priority should be given to those who received placebos in earlier trials. Such decisions require careful weighing of the rights of participants against potential societal benefits related to long-term scientific evaluations. While these issues are beyond the scope of this analysis, we welcome continued analysis on these important ethical considerations.
Finally, national prioritisation strategies must be implemented within a larger global allocation context. Liu et al outlined an ethical framework for the future global allocation of COVID-19 vaccines, based on utilitarian resource allotment and equitable access.10 Their framework includes assessing a country’s ability to provide care, ability to implement a vaccination programme and level of reciprocity in worldwide vaccine development efforts. WHO has also developed the COVAX initiative, which is a global initiative focused on ensuring equitable access to COVID-19 vaccines through open discussion, prudent international distribution and financial planning.35 Our recommendations and strategies similarly rely heavily on these factors, namely the ability to provide care and implement an organised vaccination programme.
Conclusion
Ending the COVID-19 pandemic will likely require widespread vaccination. Proactive planning for the ethical distribution of vaccines against COVID-19 is critical to ensuring that any resulting allocation approach advances the intended public health goals for COVID-19 vaccination: namely, to minimise morbidity and mortality, prevent economic harms from the pandemic and to narrow unjust health inequalities. No single prioritisation approach can effectively advance all three goals. Instead, a multipronged approach that considers risk of serious COVID-19 illness, instrumental value and risk of transmission should be implemented, guided by ongoing empirical work regarding, among other factors, clinical and vaccine characteristics specific to COVID-19.
Footnotes
Twitter @smorain
Contributors RG developed the idea for the manuscript. RG and SRM contributed to the literature review, analysis, drafting, editing and proofreading of the manuscript.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient consent for publication Not required.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement There are no data in this work.
Copyright information: | https://jme.bmj.com/content/47/3/137 |
Learning analytics has been used as a tool to improve the learning process, mainly at the micro-level (courses and activities). However, another of the key promises of Learning Analytics research is to create tools that could help educational institutions at the meso- and macro-level to gain a better insight into the inner workings of their programs, in order to tune or correct them. This work presents a set of simple techniques that, applied to readily available historical academic data, could provide such insights. The techniques described are real course difficulty estimation, course impact on the overall academic performance of students, curriculum coherence, dropout paths and load/performance graph. The usefulness of these techniques is validated through their application to real academic data from a Computer Science program. The results of the analysis are used to obtain recommendations for curriculum re-design. | https://learning-analytics.info/journals/index.php/JLA/article/view/4079 |
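The abstract above does not spell out its formulas, so the following is only one plausible reading of the "real course difficulty estimation" technique: a course is comparatively difficult if students systematically score below their own average in other courses. The data and scale below are invented for illustration.

```python
# Illustrative sketch of "real course difficulty" estimation from historical grades.
from collections import defaultdict

records = [                      # (student_id, course_id, grade on a 0-100 scale) -- toy data
    ("s1", "calculus", 62), ("s1", "programming", 80), ("s1", "databases", 78),
    ("s2", "calculus", 55), ("s2", "programming", 90), ("s2", "databases", 84),
]

by_student = defaultdict(list)
for student, course, grade in records:
    by_student[student].append((course, grade))

def difficulty(course_id: str) -> float:
    """Mean gap between a student's average grade elsewhere and their grade in this course."""
    gaps = []
    for student, rows in by_student.items():
        grades = dict(rows)
        if course_id not in grades:
            continue
        others = [g for c, g in rows if c != course_id]
        if others:
            gaps.append(sum(others) / len(others) - grades[course_id])
    return sum(gaps) / len(gaps) if gaps else 0.0

for course in ("calculus", "programming", "databases"):
    print(course, round(difficulty(course), 1))   # larger gap => relatively harder course
```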
On January 30th, 2013 the City of Memphis became the 500th jurisdiction in the United States to adopt a Complete Streets policy, taking its place as a prominent community in the national movement to reclaim streets for people, reexamine the public realm, and challenge some antiquated perceptions about transportation. The policy established a mandate for the development of a Street Design Manual to ensure the adoption of national best practices into routine procedures, and to provide technical support for the agency staff who led transportation projects for the community.
Complete Streets is an exciting approach to transportation planning, design, operations, and maintenance, which provides safe and accessible transportation options for Memphians of all ages and abilities, whether walking, bicycling, riding public transportation, or driving.
This document supports the achievement of the vision set forth in the City’s Complete Streets policy: that is, the creation of an attractive, vibrant public realm that supports the diverse qualities of our neighborhoods and provides a robust, balanced transportation network that is safe, financially responsible, serves all users, and considers multiple modes of transportation.
Realization of this vision depends on the routine application of Complete Streets principles in decision making, the establishment of performance metrics and evaluation, and a commitment to a coordinated project delivery process. This manual presents a structure for understanding and applying these concepts on an everyday basis, folding policy into practice.
This manual is divided into five chapters: Basis, Typologies, Geometrics, Amenities, and Processes. Each chapter provides information to assist planners, designers and decision makers in developing a new design approach to enable better and safer active transportation in their communities. The information is organized to facilitate the design process and to allow the reader to access relevant information at various stages in the development of Complete Streets.
The development of this manual was supported by a HUD Sustainable Communities Regional Planning Grant for the Mid-South Regional Greenprint & Sustainability Plan, administered by the Memphis and Shelby County Office of Sustainability. | https://midsouthgreenprint.org/progress/project/complete-streets-memphis/ |
The Ariel Center – First Temple Period
The First Temple, built by King Solomon in the 10th century BCE, was one of the greatest marvels of the ancient world. Biblical sources tell us that the magnificent structure stood for 400 years, during which it served as the focal point of Jewish worship and a place of pilgrimage on the High Holy Days. The First Temple was destroyed under the rule of Babylonian King Nebuchadnezzar II in 586 BCE.
The Ariel Center for Jerusalem in the First Temple Period, situated in the Jewish Quarter of the Old City, is dedicated to preserving the memory of this mythical piece of cultural and architectural history, and includes a model of the Temple and the city of Jerusalem in ancient times. Also on hand are reproductions of major archaeological discoveries, a sound-and-light show, and a display of authentic artifacts dating back to the time of the First Temple.
Those who plan to visit the archaeological sites of the Old City and wish to inform and enhance their experience should definitely consider first stopping by at the Ariel Center (entrance fee), situated at the corner of Bonei Hahoma and Plugat Hakotel Streets in the Jewish Quarter in the Old City. There are guides available at the center that offer excellent guided tours. Advance reservations are recommended. | https://www.itraveljerusalem.com/ent/the-ariel-center-first-temple-period/ |
Arjun Appadurai’s 1988 essay, “How to Make a National Cuisine: Cookbooks in Contemporary India,” cited the popularity of Indian cookbooks among the Indian diaspora as a textual form of heritage preservation. Currently, in India, there is no single repository, digital or otherwise, that makes community culinary history accessible. Thus, the Indian Community Cookbooks Project (ICCP) was created as an open-access archive of community cookbooks from across India—both those extant in print as well as handwritten forms. The website initially aimed to document a single community cuisine (the Tuluva community) but later expanded to include cuisines across India. This open-access archive exhibits multilingual, regionally specific community cookbooks and documentation of community food memories.
We are three undergraduate students from FLAME University in Pune, India: Ananya Pujary (B.A. in Psychology), Muskaan Pal (B.A. in Psychology), and Khushi Gupta (B.B.A in Marketing). Taking into account our collective passion for food culture, we created this project in April 2019 as a final assignment for our Introduction to Digital Humanities course. Under the guidance of our instructor Maya Dodd, a professor of literary and cultural studies at FLAME University whose research primarily looks into food cultures in digital domains, we expanded this digital archive beyond the classroom.
Currently, ICCP consists of an “Archives” section with food memories and traditional recipes from communities across India. This way, both textual and oral traditions can be archived for food histories to be re-narrated. We use tools such as Knight Lab’s Timeline.JS to record cookbook publications across time in various cuisines. A chronology allows for the historicity of the cookbook to come to life, in a manner which demonstrates the changing socio-cultural, political, and economic conditions at the time of their creations. Likewise, ArcGIS’s mapping software documents contemporary cookbooks that were published after the 1990s industrialization of India (“Modern Cookbook Story”), featuring the densities of cookbook publication and lack of community representation.
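For readers curious how such a timeline is fed, Timeline.JS consumes a JSON document of dated events; the sketch below shows how cookbook publication records might be converted into that structure in Python. The sample records are invented placeholders, and the field layout follows, to the best of my knowledge, the publicly documented TimelineJS3 JSON format rather than anything specified by the project itself.

```python
# Sketch: converting cookbook publication records into a TimelineJS-style JSON document.
# Sample records are placeholders; the structure assumes the TimelineJS3 JSON format
# (an "events" list whose items carry "start_date" and "text" objects).
import json

cookbooks = [  # (title, community/cuisine, year) -- hypothetical entries
    ("Community Cookbook A", "Example community", 1950),
    ("Community Cookbook B", "Another community", 1972),
]

timeline = {
    "title": {"text": {"headline": "Indian Community Cookbooks",
                       "text": "Publication history by community"}},
    "events": [
        {
            "start_date": {"year": str(year)},
            "text": {"headline": title, "text": f"{community} cookbook"},
        }
        for title, community, year in cookbooks
    ],
}

with open("cookbook_timeline.json", "w", encoding="utf-8") as fh:
    json.dump(timeline, fh, ensure_ascii=False, indent=2)
```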
The project is further enriched by contributions from its audience, especially for the “Archives” section. Convenience sampling is the primary mode of data collection, through websites, blogs, academic literature sites, and social media platforms. Our project was created with public interest in mind and the project has the potential to engage with and document lesser-known community cuisines. Additionally, it aims to reach specialists in digital humanities, food studies, women’s studies, and related fields to aid the generation of new knowledge. Eventually, we hope this project becomes a viable resource for many others and an agent in encouraging collaborative effort on further ventures into the field of Indian community cookbooks. In the future, we plan on using translation tools like Google Translate to increase its accessibility and reach.
References
Appadurai, A. (1988). How to Make a National Cuisine: Cookbooks in Contemporary India. Comparative Studies in Society and History, 30(1), 3-24. Retrieved February 12, 2021, http://www.jstor.org/stable/179020
Souvik Mukherjee
The Indian Community Cookbook Project (ICCP) is a collection of cookbooks reflecting the varied cuisine of India (and the subcontinent). Influenced by Arjun Appadurai’s essay “How to Make a National Cuisine: Cookbooks in Contemporary India,” this important student-led project combines a comparative approach to history, particularly oral histories and long-lost cultures. The project aims to contribute to food studies, women studies, and digital humanities. ICCP has far-reaching potential for postcolonial studies and cultural studies in general, and it could be further enhanced by including other texts and cuisines.
This project, which focuses on archiving culinary culture, fills a major gap in studies on the Indian subcontinent, postcolonialism, South-to-South discourses, and, of course, cultural studies. As the project’s creators note, there are currently no organized and curated digital archives on cookbooks and culinary practices in India. Further, as an undergraduate project in digital humanities, situated in the Global South and specifically in the Indian context where few undergraduate projects exist, this is a very commendable undertaking. Given the limited resources available to Indian undergraduate students, this huge achievement is made possible by a bricolage of available out-of-the-box digital humanities tools including Wix (where websites can be built without professional coding skills) and StoryMaps (where events and narratives can be geotagged). I see it as an important example of jugaad, which the Oxford English Dictionary describes as a “flexible approach to problem-solving that uses limited resources in an innovative way” in digital spaces.
As they continue work on the project, the creators should consider providing a bibliography, a basic requirement of such research projects, which would be useful not only to students but to other scholars. This would likely include readings on the subject, such as Utsa Ray’s Culinary Culture in Colonial India (Cambridge University Press, 2014). Additionally, the creators should consider including key texts in the timelines, such as Kalyani Dutta’s Thor Bori Khara (Thema, 2018) in the Bengali timeline. Another important addition would be a methodologies section detailing how these cookbooks were scanned, how the optical character recognition (OCR) process was undertaken, what metadata was added, what metadata conventions were used (e.g. Dublin Core), what percentage of the cookbooks were obtained from born-digital sources, and how digital humanities methodologies have been applied to this project. The creators might also consider whether some texts could be translated, such as the Portuguese cookbooks from Goa. They might also consider integrating cooking traditions from Nepal, Tibet, and Bhutan as well as cuisines of marginalized communities (see Joseph Rozario’s Tribal Cuisine Cookbook for example). The level of transculturation in Indian cuisine also needs some discussion.
This commendable student-led project has been publicized in the local media and has the potential to be relevant to academic research. It will benefit from additional funding and, more importantly, guidance. Engaging with scholars in the area of Indian culinary studies is a critical next step, as is articulating research methods and methodologies. As it stands, the project offers a compelling look at Indian cuisine, and the fact that the three undergraduate students have built it with limited financial support and technical expertise is outstanding indeed. | https://reviewsindh.pubpub.org/pub/indian-community-cookbooks-project/release/3?readingCollection=794727bf |
Food is any substance consumed to provide nutritional support for an organism. Food is usually of plant, animal, or fungal origin, and contains essential nutrients, such as carbohydrates, fats, proteins, vitamins, or minerals. The substance is ingested by an organism and assimilated by the organism's cells to provide energy, maintain life, or stimulate growth. Different species of animals have different feeding behaviours that satisfy the needs of their unique metabolisms, often evolved to fill a specific ecological niche within specific geographical contexts.
Omnivorous humans are highly adaptable and have adapted to obtain food in many different ecosystems. Historically, humans secured food through two main methods: hunting and gathering, and agriculture. As agricultural technologies increased, humans settled into agricultural lifestyles with diets shaped by the agricultural opportunities in their geography. Geographic and cultural differences have led to the creation of numerous cuisines and culinary arts, including a wide array of ingredients, herbs, spices, techniques, and dishes. As cultures have mixed through forces like international trade and globalization, ingredients have become more widely available beyond their geographic and cultural origins, creating a cosmopolitan exchange of different food traditions and practices. (Full article...)
Cooking, cookery, or culinary arts is the art, science and craft of using heat to prepare food for consumption. Cooking techniques and ingredients vary widely, from grilling food over an open fire to using electric stoves, to baking in various types of ovens, reflecting local conditions.
Preparing food with heat or fire is an activity unique to humans. Archeological evidence of cooking fires from at least 300,000 years ago exists, but some estimate that humans started cooking up to 2 million years ago.
The expansion of agriculture, commerce, trade, and transportation between civilizations in different regions offered cooks many new ingredients. New inventions and technologies, such as the invention of pottery for holding and boiling of water, expanded cooking techniques. Some modern cooks apply advanced scientific techniques to food preparation to further enhance the flavor of the dish served. (Full article...)
Palestinian cuisine consists of foods from or commonly eaten by Palestinians, whether in Palestine, Israel, Jordan, refugee camps in nearby countries, or by the Palestinian diaspora. The cuisine is a diffusion of the cultures of civilizations that settled in the region of Palestine, particularly during and after the Islamic era beginning with the Arab Ummayad conquest, then the eventual Persian-influenced Abbasids and ending with the strong influences of Turkish cuisine, resulting from the coming of the Ottoman Turks. It is similar to other Levantine cuisines, including Lebanese, Syrian and Jordanian.
Cooking styles vary, and types of cooking style and ingredients used are generally based on the climate and location of the particular region and on traditions. Rice and variations of kibbee are common in the Galilee. The West Bank engages primarily in heavier meals involving the use of taboon bread, rice and meat, and coastal plain inhabitants frequent fish, other seafood, and lentils. The Gaza cuisine is a variation of the Levant cuisine, but is more diverse in seafood and spices. Gaza's inhabitants heavily consume chili peppers too. Meals are usually eaten in the household but dining out has become prominent particularly during parties where light meals like salads, bread dips and skewered meats are served. (Full article...)
The history of chocolate in Spain is part of the culinary history of Spain as understood since the 16th century, when the colonisation of the Americas began and the cocoa plant was discovered in regions of Mesoamerica, until the present. After the conquest of Mexico, cocoa as a commodity travelled by boat from the port of Nueva España to the Spanish coast. The first such voyage to Europe occurred at an unknown date in the 1520s. However, it was only in the 17th century that regular trade began from the port of Veracruz, opening a maritime trade route that would supply the new demand from Spain, and later from other European countries. In contrast to other new culinary ingredients brought from the Americas, the acceptance and growth in popularity of chocolate in Spain was rapid, reaching its peak at the end of the 16th century. Although chocolate was not immediately adopted by other European societies, it eventually made its way to becoming a high commodity. Once the Europeans realised the societal value of chocolate, they started to incorporate it more into their diet. (Full article...)
Australian cuisine is the food and cooking practices of Australia and its inhabitants. As a modern nation of large-scale immigration, Australia has absorbed culinary contributions and adaptations from various cultures around the world, including British, European, Asian and Middle Eastern. Indigenous Australians have occupied Australia for some 65,000 years, during which they developed a unique hunter-gatherer diet, known as bush tucker, drawn from regional Australian plants and animals. Australia became a collection of British colonies from 1788 to 1900, during which time culinary tastes were strongly influenced by British and Irish migrants, with agricultural products such as beef cattle, sheep and wheat becoming staples in the local diet. The Australian gold rushes introduced more varied immigrants and cuisines, mainly Chinese, whilst post-war immigration programs led to a large-scale diversification of local food, mainly due to the influence of migrants from the Mediterranean, East Asia and South Asia. (Full article...)
Sofrito (Spanish, pronounced [soˈfɾito]), sofregit (Catalan), soffritto (Italian, pronounced [sofˈfritto]), or refogado (Portuguese, pronounced [ʁɨfuˈɣaðu]) is a basic preparation in Mediterranean, Latin American, Spanish, Italian and Portuguese cooking. It typically consists of aromatic ingredients cut into small pieces and sautéed or braised in cooking oil.
In modern Spanish cuisine, sofrito consists of garlic, onion, peppers, and tomatoes cooked in olive oil. This is known as refogado, sufrito, or sometimes as estrugido in Portuguese-speaking nations, where only garlic, onions and olive oil are often essential, tomato and bay laurel leaves being the other most common ingredients. (Full article...)
The muffins pictured are a variation on the classic lemon poppy seed muffin. Made with real lemon zest and covered with a lemon-flavored confectioners glaze, they are an ideal companion for a Sunday brunch.
The durian (/ˈdʊəriən/, /ˈdjʊəriən/) is the edible fruit of several tree species belonging to the genus Durio. There are 30 recognised Durio species, at least nine of which produce edible fruit. Durio zibethinus, native to Borneo and Sumatra, is the only species available in the international market. It has over 300 named varieties in Thailand and 100 in Malaysia, as of 1987. Other species are sold in their local regions. Durians are commonly associated with Southeast Asian cuisine, especially in Indonesia, Malaysia, Singapore, Thailand, Cambodia and Vietnam. Named in some regions as the "king of fruits", the durian is distinctive for its large size, strong odour, and thorn-covered rind. The fruit can grow as large as 30 centimetres (12 inches) long and 15 cm (6 in) in diameter, and it typically weighs 1 to 3 kilograms (2 to 7 pounds). Its shape ranges from oblong to round, the colour of its husk green to brown, and its flesh pale yellow to red, depending on the species. (Full article...)
Ina Garten was primarily mentored by Eli Zabar (owner of Eli's Manhattan and Eli's Breads), Anna Pump, and food connoisseur Martha Stewart. Among her dishes are cœur à la crème, celery root remoulade, pear clafouti, and a simplified version of beef bourguignon. Her culinary career began with her gourmet food store, Barefoot Contessa; Garten then expanded her activities to several best-selling cookbooks, magazine columns, self-branded convenience products, and a popular Food Network television show. (Full article...)
The following are topics relating to food
Beverages: Alcoholic beverage, Beer, Cocktail, Coffee, Distilled beverage, Energy drink, Espresso, Flaming beverage, Foodshake, Juice, Korean beverages, Liqueur, Milk, Milkshake, Non-alcoholic beverage, Slush, Smoothie, Soft drink, Sparkling water, Sports drink, Tea, Water, Wine
Cooking: Baking, Barbecuing, Blanching, Baking Blind, Boiling, Braising, Broiling, Chefs, Coddling, Cookbooks, Cooking school, Cooking show, Cookware and bakeware, Cuisine, Deep frying, Double steaming, Food and cooking hygiene, Food processor, Food writing, Frying, Grilling, Hot salt frying, Hot sand frying, Infusion, Kitchen, Cooking utensils, Macerating, Marinating, Microwaving, Pan frying, Poaching, Pressure cooking, Pressure frying, Recipe, Restaurant, Roasting, Rotisserie, Sautéing, Searing, Simmering, Smoking, Steaming, Steeping, Stewing, Stir frying, Vacuum flask cooking
Cooking schools: Art Institute of Fort Lauderdale, Cambridge School of Culinary Arts, Culinary Institute of America, French Culinary Institute, Hattori Nutrition College, International Culinary Center, Johnson & Wales University, Le Cordon Bleu, Louisiana Culinary Institute, New England Culinary Institute, Schenectady County Community College, State University of New York at Delhi
Dining: Buffet, Catering, Drinkware, Food festival, Gourmand, Gourmet, Picnic, Potluck, Restaurant, Salad bar, Service à la française, Service à la russe, Table d'hôte, Thanksgiving dinner, Vegan, Vegetarian, Waiter, Wine tasting
Foods: Baby food, Beans, Beef, Breads, Burger, Breakfast cereals, Cereal, Cheeses, Comfort food, Condiments, Confections, Convenience food, Cuisine, Dairy products, Delicacies, Desserts, Diet food, Dried foods, Eggs, Fast foods, Finger food, Fish, Flavoring, Food additive, Food supplements, Frozen food, Fruits, Functional food, Genetically modified food, Herbs, Hors d'œuvres, Hot dogs, Ingredients, Junk food, Legumes, Local food, Meats, Noodles, Novel food, Nuts, Organic foods, Pastas, Pastries, Poultry, Pork, Produce, Puddings, Salads, Sandwiches, Sauces, Seafood, Seeds, Side dishes, Slow foods, Soul food, Snack foods, Soups, Spices, Spreads, Staple food, Stews, Street food, Sweets, Taboo food and drink, Vegetables
Food industry: Agriculture, Bakery, Dairy, Fair trade, Farmers' market, Farming, Fishing industry, Food additive, Food bank, Food co-op, Food court, Food distribution, Food engineering, Food processing, Food Salvage, Food science, Foodservice distributor, Grocery store, Health food store, Institute of Food Technologists, Meat packing industry, Organic farming, Restaurant, Software, Supermarket, Sustainable agriculture
Food organizations: American Culinary Federation, American Institute of Baking, American Society for Enology and Viticulture, Chinese American Food Society, European Food Information Resource Network, Food and Agriculture Organization, Institute of Food Science and Technology, Institute of Food Technologists, International Association of Culinary Professionals, International Life Sciences Institute, International Union of Food Science and Technology, James Beard Foundation, World Association of Chefs Societies
Food politics: Committee on the Environment, Public Health and Food Safety, European Food Safety Authority, Food and agricultural policy, Food and Agriculture Organization, Food and Drugs Act, Food and Drug Administration, Food and Nutrition Service, Food crises, Food labelling Regulations, Food Safety and Inspection Service, Food security, Food Stamp Program, Food Standards Agency (UK), Natural food movement, World Food Council, World Food Prize, World Food Programme
Food preservation: Canning, Dried foods, Fermentation, Freeze drying, Food preservatives, Irradiation, Pasteurization, Pickling, Preservative, Snap freezing, Vacuum evaporation
Food science: Appetite, Aristology, Biosafety, Cooking, Danger zone, Digestion, Famine, Fermentation, Flavor, Food allergy, Foodborne illness, Food coloring, Food composition, Food chemistry, Food craving, Food faddism, Food engineering, Food preservation, Food quality, Food safety, Food storage, Food technology, Gastronomy, Gustatory system, Harvesting, Product development, Sensory analysis, Shelf-life, Slaughtering, Taste, Timeline of agriculture and food technology
Meals: Breakfast, Second breakfast, Elevenses, Brunch, Tiffin, Lunch, Tea, Dinner, Supper, Dessert, Snack
Courses of a meal: Amuse bouche, Bread, Cheese, Coffee, Dessert, Entrée, Entremet, Hors d'œuvre, Main course, Nuts, Salad, Soup
Nutrition: Chronic toxicity, Dietary supplements, Diet, Dieting, Diets, Eating disorder, Food allergy, Food energy, Food groups, Food guide pyramid, Food pyramid, Food sensitivity, Healthy eating, Malnutrition, Nootropic, Nutraceutical, Nutrient, Obesity, Protein, Protein combining, Yo-yo dieting
Occupations: Baker, Butcher, Chef, Personal chef, Farmer, Food stylist, Grocer, Waiter
Other: Food chain, Incompatible Food Triad
| https://db0nus869y26v.cloudfront.net/en/Portal:Food
Architectures of decolonization by Marion von Osten
In the post-war period, from the early 1950s until the end of the 1960s, global geopolitical conditions went through their most radical changes. Not only did the systemic competition between the capitalist North-West and the communist North-East, known as the Cold War, take shape in that period; it was also the era of the withdrawal of the colonial empires and of radical state-led modernization programmes. From the late 1940s onwards many countries of the global South gained their independence through anti-colonial struggles and many other forms of resistance and disobedience against the dominance of European colonial rule and its governmental techniques. A variety of projects and alliances of the global South, like the Non-Aligned and the Tricontinental Movement, tried to establish an alternative third way to Cold War ideology. Moreover, decolonization as such challenged the very foundation of Western thought. Social struggles and independence movements against ruling powers in the West and non-West were major actors in changing ideas about the role of the intellectual. Not only in France, decolonization was a movement that constructed new ways of thinking. These movements questioned the epistemological basis of Western knowledge production and opened an alternative map for social struggles and for new political actors like the Civil Rights Movement and the Women’s Movement, as well as for new ways of thinking that are carried on in today’s gender, queer, subaltern and post-colonial studies. Decolonization questioned domination, segregation and discrimination by Western forces and governance techniques, as well as the Eurocentric episteme and capital-led modes of production. Decolonization also brought radical changes in the understanding of the role and function of aesthetic practices. Architecture and planning was one of the many fields in which these political shifts were negotiated.
After the Second World War, modern housing and urban planning projects in Europe acquired a symbolic function for the future-oriented reorganisation of modern societies and their ways of life under Fordist conditions. By the mid-1960s, the social housing complexes built for hundreds of thousands of families in France, England, Holland, Germany, Switzerland and the U.S.A. had already become, and would remain, international symbols of the failure of modernism. Described as inhospitable because of their strict functional separation of work, leisure and housing and their isolation from city centres, post-war modernist architecture, and above all social housing, represents a frequently cited negative backdrop. Most research, however, tends to leave wholly unexplored the context in which these plans were able to arise. Even current historiography largely ignores the broader planning structures in which modernist construction projects were embedded. The architect’s view and the authorship of the object of his or her analysis and planning also remain unquestioned, along with the question of the representation of architecture itself, which is generally photographed in the uninhabited state of its first completion. Above all, however, there is no explanation of the motives behind the large-scale building activities that were first developed in the former French colonies in North Africa. Many architects who had built in the colonies became engaged in the modernization programmes in Europe after the independence of their former working contexts. The colonial and anti-colonial conditions in which the discourse of architecture as town planning arose were also forgotten in the discourse about European post-war modernism, whether it was being vilified or historically reconceptualised. Thus, the influence of both colonialist and anti-colonial movements has been underestimated in the discussion of large-scale housing projects and satellite cities.
Taking the empires’ withdrawal as a starting point for discussing the large epistemological shifts of the late 1950s and the 1960s, the discourses of Modernism and Post-Modernism need a revision today, as they have to be understood as spaces of social struggles and transnational negotiations. Modernism was already an effect of transnational and transcultural encounters. In "The Short Century" of independence, as Okwui Enwezor described the era of decolonization in his eponymous exhibition, Modernism went through a phase of re-appropriation and resulted in a heterogeneity of multiple, local modernisms, which emerged in a constant flux of domination and resistance in the post-war, Cold War era of decolonization. Thus, the relationship of the West to the non-West has also been constantly transformed under colonial, anti-colonial, and post-colonial conditions. Moreover, Modernism has never been a coherent unity, but an internally conflicted movement that created a multiplicity of outcomes. Nevertheless, the disciplines of art and architecture history, as Kobena Mercer has pointed out, often lack this specific perspective in their methodologies and objectives, although:
"Modernism, one might say, was always multicultural ‒ it is simply our consciousness of it that has changed. Each of the ruptures inaugurated in European modernism circa 1910 made contact with a global system of trans-national flows and exchanges ‒ from Malevich’s conception of monochrome painting, shaped by his reading of Vedic philosophy and Indian mysticism, to Duchamp’s ready-mades, which mirrored the de-contextualised mobility of tribal artefacts. Modernist primitivism may be the generic paradigm in which these (unequal) exchanges are most visible, but a broader understanding of cross-culturality as a consequence of modern globalisation also entails the necessity to question the optical model of visuality that determines how cultural differences are rendered legible as ‘readable’ objects of study.“¹
This paradox immediately emerges when trying to grasp transcultural and transnational relations and encounters. As encounters, conflicts and negotiations cannot be easily extracted from an image or an object itself, art and architecture history is challenged to create new ways of hermeneutics and needs to accept its limits. Understanding the different ways in which the various spheres of modernity – socio-economic, artistic, political, and so on – are interrelated, and are still regulated by a regime that changes through negotiations, conflicts, and struggles, is one of the central research demands of post-colonial studies for grasping the fabric of our present.
The crisis of High Modernism in the third phase of globalization (in the post-war era of the 1950s and 1960s) caused the erosion of a whole visual, conceptual and epistemological framework. Many architects at that time attempted to engage with the experience of colonization / decolonization by synthesizing the ways of living of people in the North African colonies, usually apostrophized as "premodern", with the project of modernization into a new and "other" modernism. One path of this movement was to learn from vernacular architecture, to acknowledge the pre-industrial city as well as the dwelling practices of nomadism as major influences for new design and planning methods. Such references are to be found in the influential exhibitions Mostra di Architettura Spontanea by Giancarlo de Carlo in Milan in 1951 and This Is Tomorrow, with the involvement of Alison and Peter Smithson, at the Whitechapel Art Gallery in 1956, or in the famous show Architecture Without Architects by Bernard Rudofsky at the MoMA New York in 1964. Theoretical writings followed, like the influential book The Matrix of Man by Sibyl Moholy-Nagy, published in 1968.
In the writings and projects of the Swiss architect André Studer in North Africa, this concept of a new synthesis between the modern and the pre-modern can be found even more than ten years earlier. His housing complex "Sidi Othman", built in 1952 on the outskirts of Casablanca, reflected these concepts. The building complex was embedded in the larger extension plan of Casablanca designed by the Service de l’Urbanisme, which was led by the architect and urban planner Michel Ecochard. Another path of this post-war modernism engaged with the locus of anti-colonial liberation movements – the bidonville – and drafted from there a new perspective that focused on dwelling practices and hence was critical of previous modern approaches to dwelling. As a dwelling environment, the bidonville was not only the locus of the first encounters and negotiations with the modern city for many people coming from rural areas; above all, it was also the spatial expression of a non-planned way of organizing an urban environment. European architects like George Candilis and Shadrach Woods took the bidonville as an object of study and investigated this environment in an anthropological manner. They "learned" from the inhabitants of the bidonville how everyday dwelling practices enabled an urban neighbourhood through self-organisation. This trajectory of architectural debate acknowledged the self-built environment in the colonial city as a valuable housing practice from which European planners would need to learn.
The studies in Casablanca, and John Turner’s similar studies on self-built housing in the shanty towns of Peru, influenced a generation of non-plan architects as well as participatory planning strategies. Moreover, non-Western architects and planners of the era of decolonization also created new adaptations and methodologies of the modern movement, some directly on the ground of colonial modern town planning in Africa or South America. Architects like Elie Azagury, Patrice de Mazieres, Abdeslem Faraoui, Yona Friedman, Yasmeen Lari, Moshe Safdie, and many others developed approaches and perspectives that related to the colonial condition of the city and to local climate and dwelling practices. As the architecture historian Udo Kultermann – who published Neues Bauen in Afrika as early as 1963 – argues, the process of decolonization did not only change the formerly colonized world, but also questioned the Western hegemony of universal planning methodologies.
Moreover, many new urban developments had been tested in the colonial context and evoked strong reactions from inhabitants and users in Europe and its former colonies. This focus on the colonial modern’s cracks, and on the resistance against it and within it, opens the possibility of new perspectives that correspond with areas of thought opened up by decolonization. In response to the global liberation movements in the post-war era, critics of imperial Europe started to write a new post-colonial modernity, one that wanted to exist outside the realms of dominance, control, and discipline. Even as European countries defined themselves through the colonial realm – and / or through the allegedly "other" people populating it – the contact zones established through colonization produced many ruptures and much criticism. Existing narratives were questioned and revised. New participants entered the stage of history. Moreover, the negotiations (in the colonial and post-colonial context) that continue to take place in the form of different types of aesthetic expression, planning techniques, and the development of modern housing are also the product of physical and / or mediated encounters between different actors, as in the case of the utopian projects of modernist Western architects and planners with non-Western politicians, inhabitants, artists, and activists. The colonial era and the anti-colonial movement were highly transnational. Many intellectuals from the global South studied in Paris, Berlin and London, and the anti-colonial struggles were mostly organized extraterritorially and internationally, as one can witness in the Tricontinental movement, of which Mehdi Ben Barka was such an important member.
This concept of transnational relations and concrete negotiations detaches itself critically from approaches that regard modernism and modernity solely as impositions. And emphasising these lines of transnational relations, connections and conflicts has been important not just because they have been overlooked by historiography and its colonial archives, but also because they point to commonalities, to a post-colonial future, which began then and which is still unfinished and rife with conflict till today.
Today, the self-proclaimed "Indigènes de la République" is a political movement fighting against racism and discrimination. Born from the urban struggles emerging from immigration, this rather controversial movement presents contemporary France as a "neo-colonial Republic". It does not only condemn the social conditions in the banlieues as being the administration of people and social relations, and thus as analogous to techniques of colonial rule. It also aims at the core of the Janus-faced character of modernity, since the colonized, or, as Jacques Rancière has put it, the "uncounted" in general, represent the true meaning of democracy by claiming their rights. Thus, with their critique, they go beyond the conclusions of research into colonialism which demonstrates that certain techniques of rule are (post-)colonial re-imports. What they put on the agenda, rather, is the tension within modernity between the governance of people as populations and their appellation as subjects, as citizens.
Text published in Le Journal des Laboratoires, January-April 2011
¹ Kobena Mercer: “Art History after Globalisation: Formations of the Colonial Modern”, in: Tom Avermaete, Serhat Karakayali, Marion von Osten (eds.), Colonial Modern: Aesthetics of the Past, Rebellions for the Future, London, Black Dog, 2010, pp. 236-237.
AltraBio analysts go beyond the initial statistical output and interpret these results in the context of the biological problem the experiment addresses.
The interpretation phase aims to bring out the various biological processes associated with the effects observed after the statistical analysis. To do this it integrates the current biological knowledge available in the scientific literature and public databases to identify functions, relationships and pathways relevant to changes observed in an analyzed dataset.
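One common, generic way to identify pathways relevant to an observed change is over-representation analysis of differentially expressed genes against annotated gene sets, for example with a hypergeometric test as sketched below. This illustrates the general technique only, not AltraBio's actual pipeline; the gene lists and background size are made-up placeholders.

```python
# Generic illustration of pathway over-representation analysis:
# test whether a pathway's genes are enriched among differentially expressed (DE) genes.
from scipy.stats import hypergeom

background = 20000                  # genes measured in the experiment (assumed)
de_genes = {"TP53", "CDKN1A", "BAX", "MDM2", "GADD45A"}          # toy DE list
pathway = {"TP53", "CDKN1A", "MDM2", "ATM", "CHEK2"}             # toy gene set

overlap = len(de_genes & pathway)
# P(X >= overlap) when drawing len(de_genes) genes from `background` genes,
# of which len(pathway) belong to the pathway.
p_value = hypergeom.sf(overlap - 1, background, len(pathway), len(de_genes))
print(f"overlap={overlap}, p={p_value:.2e}")   # a small p suggests the pathway is enriched
```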
To allow our partners to optimize the use of their data, AltraBio also designs & develops a web-based solution for data and knowledge storage, extraction and browsing. This solution, named WikiBioPath, allows our partners to integrate and understand their experimental data (e.g., microarrays, RNAseq). Using WikiBioPath, one can search for targeted information on genes, proteins and their functional evidence extracted from unstructured textual knowledge repositories such as MEDLINE. WikiBioPath is further enriched by interactive data visualization tools which allow biologists to explore their experimental data and analysis results (Brochure). | https://www.altrabio.com/interpretation
“My muses are always a little bit boyish. I like the idea of an independent and powerful woman. I believe that today those are essential qualities for women,” said Mingyu Du (mingyudu.com), a young fashion designer originally from Qingdao, China. Du recently graduated from the Academy of Art University and participated, with other students, in the latest Mercedes-Benz Fashion Week in New York.
Du’s voice, typically quiet and calm, became animated when she began to talk about mod culture. The Mod movement, popularized by 1960s British youth subcultures, has been a strong source of inspiration for Mingyu. Her Fall 2014 collection is inspired by all things Mod. “What I love the most about mod subculture is its sense of freedom and the urgency of exploration. A deep attitude owned by all the young guys who identified themselves into this trend,” she said. Mingyu’s collection incorporated unique materials such as an army tent, a wool blanket and a parachute.
Mingyu is also passionate about sneakers: these accessories are a must-have in her wardrobe, and she was one of the youngest talents selected to create and design a sportswear collection for Nike Olympic.
Dr. Liliya GNATYUK,
INTRODUCTION
Now the multilevel latent conflict continues and grows: a conflict between public interests and the church, between the church and museums, between government departments and museums, and between various Christian denominations. Obviously, the issues of the life and co-management of World Heritage Sites by religious communities and the state are relevant not only for Ukraine but for the whole world; the main task is to establish a dialogue between religious communities, as users of a World Heritage Site, and the state, as its responsible owner. The parties to this dialogue are the religious communities, which preserve and develop the religion and its traditions, as well as representatives of public authorities, specialists and experts in the relevant fields (architects, conservators, restorers, etc.), facility owners, charitable organizations and other stakeholders.
It should be understood that our heritage is a legacy from the past, one that we live with today, that we use, and that we pass on to future generations. Our cultural and natural heritage is an irreplaceable source of life and inspiration; it defines our criteria, our landmarks, our identity.
PROBLEM DESCRIPTION
The diversity of cultures and heritage in our world is an irreplaceable source of spiritual and intellectual richness for all mankind. Protection and enhancement of cultural heritage and diversity in our world should be actively promoted as an important aspect of human development.
The diversity of cultural heritage exists in time and space, and demands respect for other cultures and all aspects of their belief system. In cases where cultural values come into conflict, respect for cultural diversity requires recognition of the legality of cultural values of all parties.
All cultures and societies are rooted in the particular forms and means of tangible and intangible expression which constitute their heritage, and this should be respected. Culture should be regarded as a set of distinctive spiritual, material, intellectual and emotional features of society or a social group, and it involves not only art and literature, but also lifestyles, ways of living together, value systems, traditions and beliefs. As a source of exchange, innovation and creativity, cultural diversity is as necessary for humankind as biodiversity is necessary for nature. In this sense, it is the common heritage of humanity and it should be recognized and affirmed for the benefit of present and future generations.
Interest in the life of heritage, whether it concerns historical cities, cultural landscapes or religious ensembles, is directly or indirectly associated with events or living traditions, with ideas or beliefs, with artistic and literary works.
In this sense, the role played by religious communities in creating, maintaining and continuously shaping and developing the spiritual culture of society, including sacred sites, is difficult to overestimate. Obviously, the main role in the transfer, expression and support of spiritual identity, and in adding value to sacred sites and places, should belong to religious communities. The main thing is the existence of a living tradition, supported by the communities (unfortunately with long interruptions associated with the political and historical situation) for more than a millennium, which has left a rich spiritual and cultural heritage, both tangible and intangible.
It is important to focus on sacred sites and, just as importantly, not only on the sites but also on the sacred places. Emphasis must be placed not only on the material structure of sacred complexes, but also on their sacred significance and on the role of communities (where present) not only in the management and preservation of the material integrity of individual buildings or complexes, but in the preservation of living cultural heritage, which is expressed first of all in liturgy and other ceremonial activities, and in the need to transfer them to future generations.
A deep understanding of the distinction and separation between the spiritual and the material is essential. However, it is still desirable that the profane serve a better perception of the sacred. «Sacrum et profanum»: «sacrum» means that which belongs to the sacred – the cult, and the people and things that were dedicated to it; «profanum» (literally, that which stands before the «fanum», the dedicated, sacred place or sanctuary) means the secular world, which cherishes claims for autonomy from religion and the Church. This scheme is based on the belief that a particular space of being can be reserved for what is holy and that, accordingly, people and things in this space can be separated from the rest of the world.
It is obvious that different countries have different models and schemes for the care and protection of cultural heritage. It is difficult to single out any particular model as the most effective, because in each case local management models will have their own effect. However, it is necessary to distribute the duties and responsibilities for the management of sacral heritage between religious communities and the state. Regrettably, what often prevails is not a professional approach but a personal vision of a particular aspect of the protection or preservation of the monument (following Picasso: "No matter when and where, it is important who"). Still, the main thing is the focus on the monument, regardless of personal attitudes or of the subordination of the cultural heritage site to this or that agency.
It is also clear that each country has its own problems in the management of sites: management problems, partly financial ones, and other factors, such as the number of local residents, which can have a serious impact on the form of guardianship. Thus, one of the sites nominated for the UNESCO World Heritage List from Norway is located in a fjord village with a population of 23 people. First of all, it is necessary to identify who is responsible for particular sites, what roles are played by the user and the owner of the site, and what the level and scale of their interaction are.
Very often a religious community speaks from the viewpoint of the canonical requirements of a particular denomination and tries to make adjustments, and sometimes to change the spatial organization of sacred complexes radically. Examples include the attempts to restore canonical painting in the Church of St. Andrew and St. Cyril (fortunately, specialists intervened in time). Unfortunately, the priceless frescoes attributed to the hand of Theophanes the Greek were lost during the transfer of the Armenian church in Feodosia (Figure 1. A,B,C). Religious communities also often insist on changing the shape of the upper parts of structures – e.g. reconstructing late 19th century tent-shaped or bulb-shaped domes or, as in Feodosia, a helmet-shaped completion.
However, it is important to note rare cases of positive intervention by the local religious community, as in the village of Sutkovtsi (Yarmolenetski district, Khmelnitski region), where the community, recognizing the uniqueness and value of the church, which was built as a defensive structure, insisted on restoring the Ukrainian church to its 15th century state (Figure 2. A,B,C), even when the opinions of the state officials diverged.
Unfortunately, in our country it is quite often not professional standards or laws that operate, but people, and then the norm becomes an exception. This is the case with the illegal construction activity in the buffer zone of St. Sophia (Figure 3. A,B,C) and directly on the territory of the Kiev-Pechersk Lavra reserve (Figure 4. A,B,C,D). Both sites are on the UNESCO World Heritage List.
At present there is no law on the protection of architectural and urban heritage, although there are separate laws for the protection of archaeological and cultural heritage. Both state protection and professional preservation (maintenance) are essential. One can stand guard with a gun in hand, but the monument will still keep deteriorating; instead, it could be saved for future generations. Obviously, the church cannot be just a consumer in this case. It is worth mentioning the century-old experience when all churches were under the proper supervision of eparchial architects. It is obvious that the present design agencies should operate under the leadership of the Church, while retaining a highly professional approach.
While extending hospitality to pilgrims and visitors, the problems of trivializing the sacred place and turning it into a mere tourist site are obvious; at the same time, it is important to find a balance and to preserve the peace of soul which is necessary for monastic life.
In this connection, it is necessary and recommended to develop a long-term Action Plan for the protection of religious heritage; in particular, a thematic programme for religious World Heritage Sites and the establishment of integrated training programmes in religious site management, in collaboration with ICOMOS, ICCROM and МSОP. All this should be designed to help representatives of religious communities to improve their management skills.
Religious communities in charge of the management of sacred sites of high spiritual value should consider not only their cultural and natural values but also their sacred values, since they reflect an outstanding exchange of cultural and religious ideas. It is important to coordinate the action plan with the religious community in order to elaborate a common concept of sacred heritage protection, aimed at strengthening the role of communities and preventing conflict situations.
It is crucial to highlight the necessity and importance of education and professional support not only for the religious communities involved in the management of a World Heritage Site, but also for the wider public. This would increase understanding and awareness of the values of our heritage and improve community attitudes towards the sites they own. Education is also very important – the organization of seminars and trainings for the community. «Fides quaerens intellectum» – perhaps, when belief is supported by understanding, namely an understanding of the necessity of preservation as well as of the basic methods and techniques of cultural heritage protection, the misunderstandings between professionals and religious communities may disappear. However, all this was written in the Ten Commandments long ago, and if we perform them properly (with a sense of responsibility for future generations) – in particular the commandment "do not steal" – then surely our children and grandchildren will admire and enjoy not only the incredible external beauty of cultural heritage sites, but also their spiritual beauty (and it does not actually matter whether they have global or local significance).
It is very important to carry out permanent work linked with changes in liturgical and functional needs; with the competing demands of the coexistence of religions and individual denominations; with fluctuations in the level of interest in religion and its various forms; with growing secular pressure on places of religious importance; with the museification of religious places and objects; and with raising community awareness of the basics of heritage protection.
CONCLUSIONS
Thus, from all of the above it can be concluded that religious communities play the key role in management. In this case, the basic necessary tools are: management system assessments, integrated studies of the spiritual, cultural and natural heritage, and strong scientific support for management issues, in order to meet the requirements of the World Heritage Convention. In order to achieve an integrated approach to the management of the three types of heritage (spiritual, cultural and natural), interdisciplinary studies should be undertaken.
It is necessary to develop the role of religious communities in the long-term protection, conservation and integrated management of sacred World Heritage Sites. It is necessary to develop: a legal framework for the conservation, use, renewal and management of religious World Heritage Sites; a long-term strategy for the protection, conservation and management of sacred World Heritage Sites; and recommendations for the conservation and integrated management of sacred World Heritage Sites.
literature: | http://rcchd.icomos.org.ge/?l=E&m=4-4&JID=1&AID=10 |
The General Theory of Relativity incorporates gravity into Special Relativity. It was presented by Albert Einstein in 1916.
General Relativity, from 1916, technically includes and modifies the Theory of Special Relativity of 1905. In this section, we will discuss the new or added subjects, which deal mainly with the effects of gravity.
Development of the General Theory of Relativity was necessary to explain accelerated systems and flaws in the Theory of Special Relativity. A stellar example would be the twin paradox.
The justification of GR rests on Einstein’s Principle of Equivalence, published in 1911, which relates to the relativity of time already introduced in the Theory of Special Relativity. This principle adds temporal effects to gravity analogous to the temporal effects of relative velocity in inertial systems.
In this way, accelerated reference systems and those with gravity are non-inertial reference frames.
In other words, changes in velocity –acceleration– would be equivalent to changes in the intensity of the gravitational field. Covertly, this establishes a privileged frame of reference: the gravitational field.
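For reference, the standard textbook expressions for the two temporal effects being compared (these are conventional formulas, not taken from this article, and the Global Physics framework defended here interprets them differently) are:

$$\frac{d\tau}{dt} = \sqrt{1 - \frac{v^2}{c^2}} \qquad \text{(time dilation due to relative velocity, Special Relativity)}$$

$$\frac{d\tau}{dt} = \sqrt{1 - \frac{2GM}{r c^2}} \qquad \text{(gravitational time dilation for a static observer, Schwarzschild solution)}$$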
Atomic clocks provide the most significant confirmation of Einstein’s theories. The book Scientific Experiments in Global Physics comments on various experiments with atomic clocks and on other factors that could also make clock time relative, such as pressure, temperature, bumping and hammering.
Additionally, the book Physics and Global Dynamics explains the physical cause of why a Cesium atom changes its resonance frequency, both with velocity and with the intensity of the gravitational field.
At the time, when some of the predictions of General Relativity were confirmed, part of Special Relativity was indirectly confirmed as well, because it is contained in the former – although in many respects GR contradicts the original SR.
If the distant Michelson-Morley experiment proposed by Global Physics were to show that the tension of the longitudinal curvature of the Global Aether –gravity field or luminiferous aether– drags light, GR would practically cease to exist.
At the same time, to say that gravity is a geometric effect of the curvature of space-time is saying a lot. It is not surprising that there are still aspects awaiting proof or even understanding, and that, a century later, gravity is still taught as a force in every school.
Some things are more likely curvatures of language and mental abstractions than physical realities.
In the philosophical justification of General Relativity, Albert Einstein used on various occasions models of human behavior or emotion, mainly related to love.
Although we have already dedicated the book The Equation of Love to the effects of love and other vital emotions on time, I wanted to recall them here as one of the shortcomings. It is a false preconception always present in the experiments confirming this theory. The subjective and objective points of view should not be mixed so often, and neither should physics and metaphysics.
In other words, if one thinks that time is relative, any complex mathematical game –such as Einstein’s field equations– confirming it will make our mind accept it straightforwardly. In our opinion, this is a tremendous error, both material and formal.
This coincidence of the subjective perspective of time with the imaginary or fictional perspective in the General Theory of Relativity is undoubtedly another of the coincidences or circumstances that helped the acceptance of GR.
A delicate topic is the intuitive vision of GR. When the basic concepts of physics are relative, one completely loses this vision, and all problems become almost purely mathematical in Einstein’s theories. This is how the famous space-time continuum appears, and we move to the four-dimensional mathematical space of Minkowski’s geometry in Special Relativity and of Riemann’s geometry in General Relativity.
If Minkowski’s geometry adds a fourth axis to form the space-time continuum, Riemann’s geometry curves all four axes. Anyone with a particular interest in these topics could also study the Schwarzschild metric; be warned, however, that this could produce emotional tensors in the brain, even after having studied simple cases of Einstein’s field equations.
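For readers who want to see what is being referred to, the standard forms of Einstein's field equations and of the Schwarzschild line element are reproduced below; these are conventional textbook expressions, not derivations from this article:

$$R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}$$

$$ds^2 = -\left(1 - \frac{2GM}{r c^2}\right) c^2\, dt^2 + \left(1 - \frac{2GM}{r c^2}\right)^{-1} dr^2 + r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right)$$

(Some presentations include a cosmological-constant term $\Lambda g_{\mu\nu}$ on the left-hand side of the field equations.)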
General Relativity has undoubtedly managed to explain some known natural phenomena –like the anomalous precession of Mercury’s orbit, already explained by Paul Gerber in 1898– and has made some predictions, but this does not mean that the interpretations or theoretical justifications of the facts are correct. Indeed, there are interpretations of empirical facts that we consider almost correct, while we consider others wrong.
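For context, the standard GR expression for the perihelion advance mentioned above, per orbital revolution (again a textbook result, not something derived in this article), is:

$$\Delta\varphi = \frac{6\pi G M}{c^2\, a\,(1 - e^2)}$$

where $M$ is the solar mass, $a$ the semi-major axis and $e$ the eccentricity of the orbit; accumulated over a century, this gives the well-known value of roughly 43 arcseconds of anomalous precession for Mercury.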
It is still quite amusing how occasionally there are articles about novel experiments designed to verify GR. There must be a reason for it! | https://molwick.com/en/relativity/070-general-relativity.html |
Georgina Hall, a fifth-year Ph.D. student and a Gordon Y. S. Wu Fellow in the Department of Operations Research and Financial Engineering, was awarded the 2016 INFORMS Computing Society (ICS) Student Paper Award for her paper, "DC Decomposition of Nonconvex Polynomials with Algebraic Techniques." The INFORMS Computing Society (ICS) Student Paper Award is given annually to the best paper on computing and operations research by a student author, as judged by a panel of the ICS. She is advised by Assistant Professor Amir Ali Ahmadi.
We consider the optimal learning problem of optimizing an expensive function with a known parametric form but unknown parameters. Observations of the function, which might involve simulations, laboratory or field experiments, are both expensive and noisy.
We consider the problem of estimating the expected value of information for Bayesian learning problems where the belief model is nonlinear in the parameters. Our goal is to maximize some metric, while simultaneously learning the unknown parameters of the nonlinear belief model, by guiding a sequential experimentation process which is expensive.
We present a technique for adaptively choosing a sequence of experiments for materials design and optimization. Specifically, we consider the problem of identifying the choice of experimental control variables that optimizes the kinetic stability of a nanoemulsion, which we formulate as a ranking and selection problem.
We consider the choices and subsequent costs associated with ensemble averaging and extrapolating experimental measurements in the context of optimizing material properties using Optimal Learning (OL). We demonstrate how these two general techniques lead to a trade-off between measurement error and experimental costs, and incorporate this trade-off in the OL framework.
I study the problem of learning the unknown parameters of an expensive function where the true underlying surface can be described by a quadratic polynomial. The motivation for this is that even though the optimal region for most functions might be unknown, it can still be well approximated by a quadratic function.
We research how to help laboratory scientists discover new science through the use of computers, data analysis, machine learning and decision theory. We collaborate with experimentalist teams trying to optimize material properties, or to discover novel materials, using the framework of Optimal Learning, guided by domain expert knowledge and relevant physical modeling.
Our problem is motivated by healthcare applications where the high sparsity and the relatively small number of patients make learning more difficult. Adapting an online boosting framework, we develop a knowledge-gradient (KG) type policy to guide the experiments by maximizing the expected value of information from labeling each alternative, in order to reduce the number of expensive physical experiments.
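As an illustrative aside, and not the authors' implementation, the basic knowledge-gradient score for independent normal beliefs with Gaussian measurement noise can be sketched as follows; the function name and the example numbers are made up for illustration:

```python
# Minimal sketch of the knowledge-gradient (KG) score for independent normal beliefs.
# Illustrative only; not the implementation used in the work described above.
import numpy as np
from scipy.stats import norm

def knowledge_gradient(means, stds, noise_std):
    """Return the KG score of measuring each alternative exactly once.

    means, stds : current posterior means and standard deviations (1-D arrays)
    noise_std   : standard deviation of the Gaussian measurement noise
    """
    means = np.asarray(means, dtype=float)
    stds = np.asarray(stds, dtype=float)
    kg = np.zeros_like(means)
    for x in range(len(means)):
        # Std-dev of the change in the posterior mean after one noisy observation of x
        sigma_tilde = stds[x] ** 2 / np.sqrt(stds[x] ** 2 + noise_std ** 2)
        best_other = np.delete(means, x).max()
        zeta = -abs(means[x] - best_other) / sigma_tilde
        kg[x] = sigma_tilde * (zeta * norm.cdf(zeta) + norm.pdf(zeta))
    return kg

# Example: measure the alternative with the highest KG score next
scores = knowledge_gradient(means=[1.0, 1.2, 0.8], stds=[0.5, 0.4, 0.9], noise_std=0.3)
print(scores.argmax(), scores)
```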
We derived the first finite-time bounds for a knowledge gradient policy. We also introduce a Modular Optimal Learning Testing Environment (MOLTE) which provides a highly flexible environment for testing a range of learning policies on a library of test problems.
| |
Establish a relationship with the fire department or other first responders that would respond to your home or business in the event of a fire or other emergency. The relationship should include:
Developing an evacuation plan with the fire department.
Reviewing the plan with the fire department at least once per year.
Practicing the evacuation plan throughout the year.
Employers, in turn, should review evacuation plans annually, and practice and evaluate them regularly. Even a brief discussion during a staff meeting can help to remind everyone what he or she needs to do. Ultimately, a solid level of preparedness should become part of the fabric of the facility.
Be sure to develop a floor plan that can be shared with people who utilize the building or posted in a public place.
Any evacuation plan should incorporate the following:
Know the locations of your usable exits on the grade level of the building and how to get to them.
Once outside, determine if a wheelchair user can get to a "public way" that is a safe distance away from the building and identify a safe meeting place.
Earlier, this brochure described a protected area for people with limited mobility outside the exit door. In many office buildings, even exits on the grade level of the building are elevated above the adjoining grade. In these instances, landings beyond the exit door should be reviewed to determine if they are adequate to accommodate a wheelchair user. Simply measure the landing. The clear floor space needed for a wheelchair user is 30 inches by 48 inches, but keep in mind that this area must be located beyond the swing of the exit door and clear of the exit path that others will use.
Establish a Floor Warden System. These individuals are responsible for overseeing and coordinating evacuation activities, conducting a final pass through the office space, ensuring that everyone receives the necessary assistance as appropriate, ensuring all doors to the elevator lobby are closed, and reporting the floor evacuation status to the first fire or emergency officials arriving on the scene.
When the alarm goes off, the Floor Warden should immediately verify the circumstances and inform the person with a disability accordingly. It is of great importance to designate an alternate Floor Warden for instances when the initial designee is absent. The names of these designated individuals should be updated and posted on a regular basis.
Identify a location or locations for an area of refuge. In the event of a need for evacuation from an upper floor, wheelchair users should make their way, either accompanied or on their own, to a designated area of refuge or other place of safety on the same floor (e.g., a closed staircase landing as described earlier). They should inform their supervisor, a colleague, or another available person that they will remain in that place of safety and wait for assistance. Two-way radios or a telephone should be provided in these areas to ensure that communication is available. The supervisor or other designated person should inform the first fire or emergency officials arriving on the scene of the person's location.
Evaluate the need for evacuation devices from upper and lower floors. If used, their location(s) should be identified and their use should be practiced during regularly scheduled drills.
The use of evacuation devices can be directed through the installation of signage (e.g., individuals using evacuation chairs must use the east stairwell next to the men's room).
Practice dealing with different circumstances and unforeseen situations, such as blocked paths or exits.
Remember never to open doors that are hot.
Ensure that all workers, including those on other shifts and those who are at the site after typical hours (e.g., cleaning crews, evening meeting coordinators, etc.), are aware of wheelchair users who are typically in the building. Such off-hour employees should be involved in fire emergency drills. | https://askus.unitedspinal.org/index.php?pg=kb.page&id=2742 |
The rapid expansion of drone technology and its applications makes it important to develop tools that connect the development and testing of drones with ethical and legal analysis. Drone technologies make it possible to perceive and to act on a distance, which has many societal implications. In public discussions, military drones have come to play a dominating role, but there are many other types of drones, raising various ethical and legal issues. Drones for surveillance and crowd control, for instance, raise issues of privacy, chilling effects and liability. Drones for (parcel) transport, earth observation and remote sensing, raise issues of environmental nuisance and public safety (EASA, 2015). One of the specific concerns is the combined use of manned and unmanned flights at smaller airports, due to the lack of air traffic control.
In this project, we aim to develop a tool that can be used for analyzing the ethical and legal issues related to the development and use of drone technology. More specifically, it will be a tool that can be used (a) to anticipate how a drone-in-development might have an impact on individual users, social practices, and societal processes; (b) to analyze the normative (legal and ethical) dimensions of this impact; and (c) to bring this analysis to technological and regulatory design processes. The tool combines insights from philosophy of technology (technological mediation, Value Sensitive Design) with a precautionary approach in legal design. This happens in close interaction with actual practices of drone development, testing, and use. | https://www.nwo.nl/onderzoek-en-resultaten/onderzoeksprojecten/i/61/25961.html |
One of the most popular ideas floating around the world of education right now is the concept of the growth mindset and its opposite, the fixed mindset. In developing this concept, Stanford’s Carol Dweck has led the way, and I highly recommend her book Mindset: The New Psychology of Success. This idea is very important, so I’m going to take the time now to explore it in depth. We’ll reference back to this idea again and again in future posts.
In short, someone with a growth mindset believes that he can improve, while someone with a fixed mindset believes he is stuck at his current ability-level.1 These two opposing mindsets apply to our lives in a surprising variety of ways. It’s helpful to have a growth mindset with regard to your athletic skills, your artistic and musical abilities, your social skills, and your ability to focus, as well as your general intelligence and your scholastic abilities.1 All of these traits can be improved with effort.
Since effort is the way for anyone to improve, the belief that you cannot improve is the ultimate self-fulfilling prophecy. If you think improvement is impossible, you’ll have no motivation to try, and without effort, you’re guaranteed to stay stuck where you are. This is the essence of a fixed mindset: the notion that the mind is set in stone, unchangeable.1 This view is not only harmful and demotivating, it is also scientifically inaccurate.
A fixed mindset is the belief that your abilities and your intelligence will never change because they were determined by your genes and your past environment.1 Of course there are genetic and environmental factors that affect who people become, but the fixed mindset takes this grain of truth–that people are different–and turns it into an immovable boulder.
The opposite of a fixed mindset is a growth mindset: the belief that the mind can be improved.1 Growth-minded people accept that there are innate differences between individuals, but don’t believe their genes are as important as how they spend their time. They believe they can improve through increased effort and better strategies.1 As a result, they work hard, seek out better techniques, and steadily make progress.1
The growth mindset is the scientifically accurate view of the mind because the brain really is a dynamic organ, constantly rewiring itself2 and growing new neurons.3 Our minds are capable of incredible growth. This fact, I believe, is the single, most important fact that a person can ever learn because it inspires you to do the work that leads to growth.
If someone believes that improvement is possible, then he’ll be willing to work hard. As a result of his hard work, he’ll start to see improvement. The progress he makes will reinforce his initial belief that improvement is possible. Thus, having a growth mindset can kick-start a feedback loop of hard work and success (illustrated by the feedback-loop figure credited at the end of this post).
How hard a student works has a lot to do with his long-term academic success, and it’s clear that having a growth mindset encourages greater effort.1
Persistence and Resilience
The other primary effect of having a growth mindset is that it helps students deal with challenges in a better way. Growth-minded students are excited by challenges and eager to learn, and they ask for help when they need it.1 They are persistent and resilient when they face difficulties.1 They see mistakes as learning opportunities, and they see failures as inevitable stepping-stones on the path to success.1 Finally, growth-minded students see criticism as valuable feedback that they can use to improve.1
Fixed-minded students, on the other hand, feel threatened by challenges, don’t want to think about their mistakes, and don’t ask for help.1 As a result, they give up quickly when things get difficult.1 Any challenges are signs that they’re simply not good enough.1 Mistakes and failures further prove that they’re not capable.1 They become afraid of trying difficult things, and they become deeply afraid of criticism.1
To everyone, mistakes are understood as information–as data–but the mindset one has determines what type of data is seen.
People with a fixed mindset see a mistake as evidence of their own inadequacy–a sign that they’re incapable. Since people with a fixed mindset believe that they will forever be who they are right now, any feedback that informs them of their “permanent” inadequacy is devastating. Thus, they become fearful of making mistakes and unwilling to look carefully at their errors, preferring to stick their heads in the sand, rather than face their mistakes head-on. Of course, without examining their mistakes, they can’t learn from them, so they end up repeating the same mistakes again and again, “proving” to themselves that they’re incapable of getting better. Since these errors are so emotionally difficult, they start to avoid challenges altogether.
Growth-minded people, on the other hand, see mistakes as information about what to avoid doing in the future. They might see the mistake as a sign of their own inadequacy, but it is a temporary inadequacy–one that can be corrected. Since they are eager to learn from their errors, they face them head-on, examine them carefully, and, as a result, learn a great deal from them. For students, this means less frustration while doing homework, higher test scores, and most importantly, better understanding of the material.
Again, the growth mindset is the factually accurate one: While mistakes don’t feel good, our brains actually grow the most when we learn from our mistakes.1 The willingness to make mistakes is critical to learning anything or creating anything new. That’s why the great teacher Marva Collins said, “If you can’t make a mistake, you can’t make anything.”4
If, for example, a growth-minded student is working on a math problem with a coach, and the coach informs her that she’s made an error, she’ll look over her work and try to discover the mistake on her own. In doing so, she’ll be developing the critical skill of self-correction.
The response of a fixed-minded student in the same situation is very different: She will impulsively erase her work and start over, rather than looking at her work to see what went wrong. This desire to erase the attempt is really about erasing the error from her consciousness. It’s impossible to figure out at which step in the problem things went wrong if she erases the work from the page, so rather than learning from her error, she often just repeats it in her next attempt. This, of course, is very frustrating. Because her approach makes math homework consistently frustrating, the fixed-minded student will often become avoidant, leading to procrastination.
How to Shift Toward a Growth Mindset
By now it should be clear that everyone, especially students, should make an effort to shift toward a growth mindset. Luckily, there are many proven ways to both improve your own mindset and encourage a growth mindset in your children.
“Yet”
The simplest intervention is to utilize the word “yet.” Whenever you say “I can’t,” add the word “yet.” A lot rests on that one, little word. “I can’t” has a definitive and permanent feel to it. “I can’t yet” implies that you will eventually be able to do whatever it is you cannot currently do. It implies a growth mindset. If a student says, “I can’t,” you can playfully say, “Oh, you can’t yet?”1
The Dynamic Human Brain
It turns out that it’s not all that helpful to teach children explicitly about these mindsets and tell them to “have a growth mindset.” It’s far more helpful to explain how the brain changes as we learn, practice, and challenge ourselves.1 Older children can be taught the incredible science of neuroplasticity and neurogenesis, and even young children can understand the idea that their brain changes and grows.
I know this is true because we teach these ideas to students of all ages here at Northwest Educational Services. I know it works because when students come to understand the dynamic nature of their brain, their attitude changes. They handle challenges more resiliently when they know that their brain will improve in response to the challenges. They’re more willing to look at their mistakes when they’re told that their brain grows the most by learning from errors. They practice their skills more often when they understand that strong neural connections are formed through repetition.
Growth-Minded Feedback
For parents and educators trying to instill a growth mindset in students, the most important thing to do is avoid giving fixed-minded feedback. What follows is surprising, but I assure you that it is backed up by rigorous research.
When it comes to giving feedback, don’t praise intelligence. Instead, praise effort and strategy.
Praising effort and strategy increases motivation, resilience, and future success. Praising students for being “smart,” on the other hand, decreases motivation, encourages students to give up when they face challenges, and dramatically lowers future success.1
Likewise, when mistakes are made, emphasize that they are the result of chosen strategies, not inherent character traits. New strategies frequently lead to better outcomes, and learning from mistakes always does.1
Salman Khan, founder of Khan Academy, wrote the following in an article titled “The Learning Myth: Why I’ll Never Tell My Son He’s Smart”:
“I decided to praise my son not when he succeeded at things he was already good at, but when he persevered with things that he found difficult. I stressed to him that by struggling, your brain grows.”
While we should praise strong effort and good strategy, we should especially praise improvements in effort and strategy. When a student shows an increase in the amount of work she’s doing, this is a critical turning point that should be encouraged. If she receives praise for increasing her effort, she’ll learn that working harder results in emotional rewards, and this will motivate her to work harder and harder.
The same is true for an improvement in strategy. If a student demonstrates the open-mindedness to proactively adopt new methods, this is also a critical turning point that must be encouraged. Realizing that the current strategy isn’t paying off is a very important skill. The willingness to change course after this realization is equally important.
Ask Better Questions
“If you ask a terrible question, you’ll get a terrible answer.” –Tony Robbins5
Our questions have built-in assumptions that, when we’re on autopilot, we take for granted.5 But these assumptions may not be true, realistic, or helpful.5 It’s important to recognize the assumptions built into our questions. If you ask a question like, “How have I ruined this?” then you’re assuming that the situation is beyond repair. But if you ask a question like, “How can I turn this around?” then you’re empowering yourself by assuming that a solution exists.5
Any question that assumes the current difficulty is the result of something permanent is a fixed-minded question. Here are some classic fixed-minded questions:
- What’s wrong with me?
- What personal trait do I lack?
- How can I avoid looking dumb?
- How can I avoid this type of challenge in the future?
- Whom or what can I blame?
Growth-minded questions, by contrast, assume that errors are the result of behavioral mistakes rather than character traits. Growth-minded questions are concerned with learning and improving rather than looking smart.
Growth-minded questions have the built-in assumption that challenges can be overcome.
Growth-minded questions sometimes include words that are normally understood as negative, such as “incorrect,” “mistake,” and “forget.” However, for a growth-minded student, these words do not represent bad things. Mistakes are opportunities for learning and improving, so they are seen as good things. Because errors do not feel emotionally threatening, growth-minded students are comfortable exploring them.
Here are some examples of growth-minded questions:
- What did I forget to do? (What should I remember to do next time?)
- What, specifically, did I do incorrectly?
- What did I do right?
- How can I avoid this mistake on my next attempt?
- What other strategies could I have used? (What strategy do I want to try next time?)
- How am I able to respond in a positive way, despite the circumstances?
Process vs. Outcomes
One way to make sure a student has a fixed mindset is to focus all your attention on her grades. When your attention is on her grades, she learns that grades are what matter, and she’ll begin to lose interest in learning the content taught in her classes. As a result, she’ll neglect the ideas, worrying instead about the points. Rather than figuring out how to get the answers, she’ll only want to know what the answers are. Ironically, this shift in attitude is usually detrimental to the student’s GPA.
Conversely, if parents focus their attention on the material a student is learning, she’ll be much more likely to develop a growth mindset. If the parents are curious about the ideas being taught in school, the student will learn to engage with those ideas. She may even become excited about learning the content. Rather than worrying about her grades, she’ll concern herself with figuring out what the content means. Rather than worrying if she’s getting the right answers every time, she’ll focus on understanding how to get the answers. It should now come as no surprise that this attitude usually leads to good grades. As usual, it’s best to keep your eyes on the process.
Perfect Does Not Exist
Perfectionism goes hand-in-hand with having a fixed mindset. The fixed-minded student believes that she shouldn’t try something unless she’s sure she can do it perfectly. Of course, she needs to try in order to improve, so it’s important to shift away from perfectionistic, all-or-nothing thinking. We’ll go into much greater detail on this topic in the future, but for now, please remember that “perfect” is an imaginary ideal that doesn’t exist and can never be reached.
Part of shifting away from perfectionism is just recognizing that every good thing we do is helpful. We don’t have to do everything right in order to improve. Minor improvements to such fundamental health practices as exercise, sleep, and diet result in measurable academic improvements.9,10 Furthermore, taking even tiny steps to improve the physical health of your brain improves your ability to focus your attention, persist through challenges, and retain new information.9,10
The fact that improvements to brain health are noticeably beneficial for students is a powerful example of the human potential for cognitive growth. A student with a growth mindset will actively take care of her brain and therefore see benefits, while a student with a fixed mindset will believe that it’s not worth the effort and therefore never earn these benefits.
Modeling the Growth Mindset
A very powerful intervention is for parents to model the growth mindset. For some, this means first changing your own mindset and then demonstrating it in front of your children. For those who already have a growth mindset, this might mean deliberately displaying it more often. In both cases, some acting on your part may be required.
If you’re thinking, “But I’m not good at acting!” please remember that acting is a skill that grows with practice. You’re not good at acting … yet.
As with most efforts to change behavior in children, the most success comes when parents lead by example. I don’t recommend talking about these mindsets with your children. Instead, my advice would be to have growth-minded discussions about challenges you’re facing, not with your children, but with other adults in front of your children. Your children will hear how you talk and they’ll learn from it. This is generally preferable because, as you probably know all too well, direct advice from parents often falls on deaf ears.
Parents should model resilience and persistence. They should use growth-minded language and ask each other growth-minded questions. They should talk openly about the difficulties they’re facing, mention that they sometimes feel like giving up, and demonstrate overcoming that impulse to quit.
Other Role Models
There are many examples of growth-minded people from real life who can help your children understand how success really works. The most successful people in the world were not born with the talent or genius they appear to have. If you look carefully at the biography of any world-class athlete, famous musician, or prolific inventor, what you’ll find is a story of massive effort, countless mistakes, devastating failures, and dogged persistence.6,7,8
In fact, it’s not uncommon for world-class performers to have put in 10,000 hours of “deliberate practice.”6 Deliberate practice means pushing your limits, trying that which you cannot yet do, and struggling with it.6 When the best musicians in the world practice, they sound terrible because they’re trying to play things that are slightly beyond their current ability-level.6 They don’t practice what’s easy; they practice what’s hard.
Growth-minded role models range from Michael Jordan to Abraham Lincoln, but I’d like to use Dr. Barbara Oakley as an example. She did very poorly in high school math, so, rather than pursuing a career that required math, she became a translator. Many years later, however, she decided that she wanted to become an engineer. In order to achieve this math-heavy goal, she had to learn how to learn. She learned skills and strategies that make it possible for anyone to succeed in math. Eventually, she became both a professor of engineering and an expert on the science of learning.9
Any student can learn the same skills that Dr. Oakley acquired on her journey from translator to engineering professor. In fact, she has a free online class you can take, called “Learning How to Learn.” For a more in-depth approach, please check out her fantastic book, A Mind for Numbers: How to Excel at Math and Science (Even if you Flunked Algebra).
Dr. Oakley is just one of the many growth-minded people we can use as role models.
We’re Born to Learn Through Failure
It also helps to remember that we’re born with a growth mindset. Consider how a baby learns to walk: by falling over and over again. Babies aren’t quitters; they keep trying. This process repeats over and over again as we learn skills such as talking, riding a bike, reading, and math.
Sadly, most people lose their innate growth mindset because they’re raised in a culture that worships talent and genius and ignores the process that creates world-class performers. Thus, we’re fighting an uphill battle as we try to cultivate growth mindsets, but it should be clear by now that it’s a battle worth fighting.
My last growth-mindset tip is to reframe what “failure” means. Since failures are the most powerful way to learn, it’s actually good to fail. It’s good to be wrong.
We must also remember that failure is an action, an event. It’s not a permanent identity. If you fail at something, it doesn’t make you “a failure.” The only ways you can really, truly fail, are to give up or never try in the first place. If you keep trying, you’ll keep growing.
The Process is Slow and Difficult
Lastly, please have patience. Cultivating a growth mindset in yourself or in a child is not easy. No single intervention will suffice. It’s a long game that requires a multipronged approach. Click here for a deeper dive into what it takes to cultivate a growth mindset.
Chris Loper has been an academic coach for Northwest Educational Services since 2014. Along with Greg Smith, Chris is the cocreator of Parenting for Academic Success (and Parental Sanity) – a five-part course offered every summer.
He writes the popular self-improvement blog Becoming Better, so if you liked this article, head on over to becomingbetter.org and check out his other work.
Chris also offers habit coaching, helping busy adults with habit formation and productivity.
In 2021, he published a humorous memoir titled Wood Floats and Other Brilliant Observations, a book that blends crazy stories with practical life lessons, available on Amazon and through most local bookstores.
He lives in Issaquah, WA, where he is the owner of South Cove Tutoring.
Works Cited
1 Dweck, Carol. Mindset: The New Psychology of Success. Ballantine Books, 2007.
2 Doidge, Norman. The Brain That Changes Itself. Penguin Books, 2006.
3 Perlmutter, David, MD. “Neurogenesis: How to Change Your Brain.” The Huffington Post. November 2, 2010.
4 Collins, Marva and Civia Tamarkin. The Marva Collins’ Way: Returning to Excellence in Education. Tarcher, 1990.
5 Robbins, Anthony. Awaken the Giant Within: How to Take Immediate Control of Your Mental, Emotional, Physical and Financial Destiny! Free Press, 1992.
6 Syed, Matthew. Bounce: Mozart, Federer, Picasso, Beckham, and the Science of Success. Harper Perennial, 2011.
7 Gladwell, Malcolm. Outliers: The Story of Success. Back Bay Books, 2011.
8 Csikszentmihalyi, Mihaly. Creativity. Harper Perennial, 1997.
9 Oakley, Barbara. A Mind for Numbers: How to Excel at Math and Science (Even if you Flunked Algebra). Penguin, 2014.
10 MacDonald, Matthew. Your Brain: The Missing Manual. O’Reilly Media, 2008.
Image Credits
Title Image: _DJ_. “human brain on white background.” March 4, 2005. https://www.flickr.com/. Creative Commons 2.0. Image duplicated and manipulated; text added.
Feedback Loop: Loper, Chris. 2015. | https://www.nwtutoring.com/2015/09/13/growth-mindset/ |
To further prove that we should praise children for their hard work rather than just telling them they’re smart, Dweck and her colleagues performed an in-class experiment. The experiment consisted of splitting a group of ninety-one seventh graders, specifically chosen for having low math grades in their sixth-grade year. Roughly half focused solely on study skills and tips, while the other half learned the same study skills as well as learning about the growth mindset and the connections neurons make as we
Effort-based praise leads to learning goals, because children become aware that learning is a process driven by the work they put forth. This results in less frustration because children feel they are not being tested. In addition, they will attribute their performance to effort, which opens the door to growing possibilities in learning. Hence, children can interpret poor performances as correctable through effort, rather than seeing them as deficits in intelligence or ability (Weaver et al., 2004).
The second rule is to encourage a growth mind-set by, “telling stories about achievements that result from hard work…descriptions [like that] of great mathematicians who fell in love with math and developed amazing skills engenders a growth mind-set,” (Dweck, 171-175). Encouraging a growth mind-set allows for a child to have more success in their school life as well as in their social life as a result of motivation and the willingness to be challenged and learn.
Did you know that too much encouragement will make a child overconfident and less likely to work hard? When kids get to feeling like they are really good at something, they feel like all of the hard work is done and that they are at the top. They slow down their effort, allowing others to catch up. They are less likely to work hard because they think they are good enough already. Once a child gets good at what they do, they need to keep going and pushing, because otherwise they will get passed by others. Mindset, by Carol Dweck, explains that kids need praise, but not too much, because their overconfidence will pull them down and others will pass them in life. Sometimes kids that got praise that tore them down took that praise and
In the article “The Perils and Promises of Praise,” the author, Carol S. Dweck, discusses the effects of using inherent praise on students. Dweck, a psychologist, conducted studies and research that led to the discovery that if someone praises a student for their intelligence, it puts the student in a fixed mindset; if you praise the student’s effort, they develop a growth mindset. A fixed mindset is created in someone’s head when they are praised for their intelligence. People with fixed mindsets believe that they are intelligent enough that they should not have to put effort into anything else, because if they’re smart enough then they don’t need to try. People with a growth mindset believe in working hard and expanding their intelligence.
When it comes to the topic of having a growth mindset, most of us will readily agree that students who are praised are motivated to learn. Where this agreement usually ends, however, is on the question of how they are praised. Whereas some are convinced that praising students for their intelligence will motivate them to learn, others maintain that encouraging them for their efforts has a better impact on their motivation.
Teachers and parents have dedicated their time to telling children that they are smart and talented every time they get a good grade. Praising children this way has had an impact on their lives. Dweck said, “many students believe that intelligence is fixed, that each person has a certain amount and that’s that”. Students with a fixed mindset only care about how smart they look or how smart they appear. By having this fixed mindset, they turn down the ability to learn new things. They believe that if you study hard, you are not smart enough, and that if you were smart, things would come to you with no effort. This has made students lose belief in themselves when they face complicated circumstances. Dweck says that the reason for kids to have a fixed mindset is “intelligence
In Carol Dweck's video The Power of not Yet, she claims that when kids are given the grade of “not yet” instead of a failing grade, they tend to succeed more in school. The “not yet” grade gives them hope of achieving the goal instead of believing they will never accomplish the impossible goal, giving them praise for the process, not the grade. While I understand her reasoning behind this and somewhat agree, there are still unanswered questions in her theory. Like what happens when you reward them for so long that the reward becomes meaningless? Or when someone who has actually tried and succeeded no longer sees the point in trying because they all get the same reward no matter the outcome? Where is the challenge? There is a fine line between encouraging them
A growth mindset is usually set in middle school, but you can change many fixed mindsets by telling them otherwise. The way the author describes what causes a fixed mindset is pretty interesting, because I was like this and my grades decreased, but now I’m realizing this more. Teachers need to stop praising students’ intelligence: “We found that intelligence praise encourages a fixed mindset more often than did pats on the back for effort. Those congratulated for their intelligence, for example, shied away from a challenging assignment – they wanted an easy one instead” (25). This means teachers need to stop giving students such treats, because it causes them to do worse in school by making them have a fixed mindset. This would help Anaheim students drastically, because many of the students coming from middle school come in with a growth mindset but then get crushed by how hard the work is, and then they give
People believe that in order to be smart, you have to become smart; in other words, the brain works like a machine – the more you teach it, the more it learns. Usually students with a growth mindset are most likely to succeed in society. The change that should be made in schools is that students should be congratulated on how hard they’ve worked on an assignment, etc.: “Wow… that’s a really good score, you must have worked hard” (25). The researcher experimented on the students with tests to see how they do and how they react. College students may pick up this article to study child behavior, and counselors may also read it to get an idea of how and why students fail or succeed. Schools should compliment students on how they’re doing their work, for it can motivate them: “We found that intelligence praise encouraged a fixed mindset more often than did pats on the back for effort” (25). Comparing the two articles, “Marita’s Bargain” shows how they got their intelligence, unlike this article, which states why students fail or succeed. After all, the students should be praised for their efforts and not their
Basically, individuals with a fixed mindset often feel measured by a failure, sometimes permanently. Unfortunately, failed attempts are viewed as a label rather than an opportunity to plan a new path to success. On the other hand, an individual with a growth mindset views a failed attempt as an opportunity to take action, to confront obstacles, to keep up with their schoolwork, and/or to better manage and organize their time. Growth mindset individuals believe that qualities can be developed, expanded, and eventually result in a successful outcome. A second lesson learned is the power of labels and the stereotype of ability; this lesson is undoubtedly one of the most enlightening. Dweck discovered in one of her studies that, “... ability praise often pushed students right into a fixed mindset, and they showed all the signs of it too. When we gave them a choice, they rejected a challenging new task that they could learn from. They didn’t want to do anything that could expose their flaws and call into question their talent” (72). One’s mindset determines their reaction to labels and stereotypes. An individual with a fixed mindset will settle for a positive label and choose stagnation and permanent inferiority rather than risk losing the label; whereas,
People have acknowledged, “It’s through failure and mistakes that we learn the most” (Merryman). A child does not feel the gratitude of a win unless they have lost before. Young children must work hard in order to earn a trophy; if they are given the trophy without working, it sets them up for failure. Studies have shown, “We must focus on the process and progress, not results and rewards” (Merryman). Children are easily affected by small things when they are young, like a loss that pushed them forward to a victory. If people put children through the process, they will be more prepared for life. If people focus on and critique their child when they are young, they will be preparing their child for future obstacles in
This is important because it’s hard to really see a lot of growth within a short time period. In the article, Carol also backs this up with evidence from a controlled experiment. The experiment focused on the minds of seventh graders and on talking to them in a certain way that influenced their mindsets. The seventh graders were split into two groups to work on an eight-session workshop; however, one of the groups was also taught what a growth mindset is and how it can be applied to their school life. This led to that group having improved by the end of the semester. The experiment showed that just by knowing what a growth mindset is and how it can be applied in life, people are likely to grow more rapidly than people whose minds are fixed on having a limit to their knowledge or skills. This is important because the sooner kids know how to make their brain think that there are endless possibilities for who they can be or what they can do, the more likely they are to excel in anything they attempt, not only in school but in their own personal life. This will allow a person with a fixed mindset to change their way of thinking and start to see more of what they can accomplish with their life. | https://www.bartleby.com/essay/Carol-Dweck-The-Power-Of-Yet-In-PJ6JCB6LE9T |
Canada had the 29th highest divorce rate (one out of every 309 adults is divorced), with a probability of 0.324%.
What percentage of marriages end in divorce in Canada 2020?
Divorce Rate in Canada | 40% of Marriages End in Divorce.
Is divorce rate in Canada decreasing?
Divorce rates in Canada are on the decline – nearly cut in half this past decade. While the annual rates hovered around 10 divorces for every 1,000 marriages in the early 2000s, that has since fallen to six for every 1,000 as of 2016.
What’s the divorce rate in 2020?
The Rate of Divorce for Women
Despite the fact that the rate of marriage is declining faster than rates of divorce, experts predict that somewhere between 40 and 50% of all marriages existing today will ultimately end in divorce.
What country has highest divorce rate?
According to the UN, the country with the highest divorce rate in the world is the Maldives with 10.97 divorces per 1,000 inhabitants per year.
…
| Rank | Country | Divorces per 1,000 inhabitants per year |
| 1 | Maldives | 10.97 |
| 2 | Belarus | 4.63 |
| 3 | United States | 4.34 |
| 4 | Cuba | 3.72 |
How long does the average Canadian marriage last?
According to Statistics Canada, the average length of a marriage, nationally, is 14 years, with the country’s capital, Ottawa, falling not far behind that at 13.8 years.
What is the divorce rate 2021?
Divorce Rate By State 2021
| State | Divorced |
| California | 9.00% |
| Hawaii | 9.00% |
| New Jersey | 9.00% |
| New York | 9.00% |
Why is divorce so common in Canada?
What is the Leading Cause of Divorce in Canada? Although there are various common causes of divorce in Canada, statistics show that money ranks first, followed by cheating and other issues. It may seem surprising, but it is the harsh truth of life.
What country has the lowest divorce rate?
Lowest divorce rate worldwide 2018, by country
As of 2018, Guatemala had the least divorced population in the world, with 0.3 divorces per 1,000 inhabitants. Qatar followed with 0.4 divorces per 1,000 inhabitants.
What is the #1 cause of divorce?
The most commonly reported major contributors to divorce were lack of commitment, infidelity, and conflict/arguing. The most common “final straw” reasons were infidelity, domestic violence, and substance use.
Which gender initiates more divorce?
In fact, nearly 70 percent of divorces are initiated by women. This is according to a 2015 research study conducted by the American Sociological Association (ASA) which suggests two-thirds of all divorces are initiated by women. Among college-educated women, this number jumps up to 90%. | https://experiencethewhiteshell.org/canada-landmarks/what-is-the-canadian-divorce-rate.html |
Cold storage automation is an increasingly common solution in logistics. Automated storage and retrieval systems (ASRS racks) are computer- and robot-aided systems that can retrieve items or store them in specific locations. The system is usually comprised of predefined locations where machines can follow established routes to get items. Contact us immediately for more specific advice on this solution.
The main components of the ASRS system
– Storage and retrieval (S/R) equipment
– Input/output system
– Storage rack
– Computer management system
– The computer management system handles the loading and unloading of SKUs in an AS/RS system via dedicated software that keeps track of inventory details such as the following (a minimal illustrative record is sketched after this list):
+ The specific location of items
+ How long they were in storage
+ Where these items came from
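As a rough sketch only, the inventory details above could be represented by a record like the following; the class and field names are hypothetical and not tied to any particular warehouse-management product:
import java.time.Duration;
import java.time.Instant;

/** Minimal sketch of one stored SKU as the management software might track it. */
public class StoredSku {
    private final String skuId;          // identifier of the stored item
    private final String rackLocation;   // specific location, e.g. "Aisle 3, Level 5, Bay 12"
    private final Instant storedAt;      // when the item entered storage
    private final String origin;         // where the item came from (supplier, dock door, etc.)

    public StoredSku(String skuId, String rackLocation, Instant storedAt, String origin) {
        this.skuId = skuId;
        this.rackLocation = rackLocation;
        this.storedAt = storedAt;
        this.origin = origin;
    }

    /** How long the item has been in storage so far. */
    public Duration timeInStorage() {
        return Duration.between(storedAt, Instant.now());
    }
}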
Types of Automated Storage & Retrieval Systems (AS/RS)
Unit-Load AS/RS
Unit-load AS/RS systems are typically used to handle exceptionally large and heavy loads ranging from 1,000 to 5,500 pounds. This capability allows for unit-load AS/RS to handle full or partial pallets and cases. Usually, unit-load AS/RS consists of narrow aisle racks, which can extend to heights greater than 100 feet and which house pallets of product and inventory. These racks are paired with a crane, which is used to physically place and retrieve pallets as needed.
Fixed-Aisle Unit-Load AS/RS Crane
Moveable-Aisle Unit Load AS/RS Crane
In fixed-aisle unit-load AS/RS systems, pallet racks are arranged with narrow aisles between them. A crane travels between these aisles moving both vertically and horizontally to retrieve and store product. The crane is fixed to a single aisle of pallets.
Moveable-aisle unit load AS/RS works much the same way as fixed-aisle unit-load AS/RS. It consists of a crane moving between narrow aisles of pallets along some kind of track. The key difference is that it is not fixed to a specific aisle. This capability allows a single piece of equipment to service multiple aisles and, ultimately, a greater working space.
Mini-Load AS/RS
Mini-load AS/RS typically handles smaller loads (up to 75 pounds) compared to unit-load systems. Instead of full pallets, mini-load AS/RS handles totes, trays, and/or cartons. Sometimes, these systems are called “case-handling” or “tote-stacking” systems.
Shuttle-based AS/RS
Shuttle-based AS/RS delivers inventory via a shuttle or “bot” that runs on a track between a racking structure. They can operate on a single level or multiple levels, depending on the needs of the operation, and can be battery- or capacitor-powered. The shuttles deliver the tote or carton to a workstation integrated with the system.
AMR-Based High-Density AS/RS
An autonomous mobile robot-based high-density automated storage and retrieval system is designed in a way that uses three-axis AMRs to travel vertically up the storage rack to retrieve the required inventory tote or case. The AMR stores the inventory or tote on itself, and then navigates down the rack and along the floor to any one of the remote order picking workstations. The AMR rides up the workstation’s ramp, and the integrated pick-to-light and software system indicates which item and how many to pick. The operator then places the appropriate item and quantity into one of the batched orders, and the AMR leaves for its next assignment.
Carousel-based AS/RS
Carousel-based AS/RS systems consist of bins of product or inventory which rotate continuously along a track. When the operator requests a particular item, the system will automatically rotate so that the appropriate bin is accessible and the item can be picked. An integrated light tree will indicate to the picker which carousel, shelf, and item to pick.
Vertical Lift Module (VLM)
A vertical lift module (VLM) is an enclosed system consisting of an inserter/extractor in the center and a column of trays on either side. It is a form of goods-to-person technology. When an item is requested, the inserter/extractor locates the necessary tray, retrieves it, and delivers it to an operator, who completes the order. Once the order is complete, the VLM will return the tray to its proper location before retrieving the next requested tray.
Micro-Load Stocker
A Micro-Load Stocker provides discrete or individual totes or carton storage and retrieval. It is ideal for buffering, sequencing, and point-of-use items in a high-density footprint. The system is enclosed, and has an inserter/extractor device that runs in the center of the system, picking a specific queue of inventory and then discharging them onto awaiting conveyor or workstation. Different models store and retrieve differently, by taking either one item or a group of up to five items in one pass.
Warehouse operation with ASRS racks solution
Advantages of automated storage and retrieval systems
Automated storage and retrieval systems offer a few advantages, including:
– Reduced labor costs
– Improved accuracy, efficiency and productivity
– Reduced safety risks for employees (reducing the need to lift and move heavy or bulky items)
In addition, AS/RS can work in environments that aren’t ideal for human workers, such as freezer storage areas. They can also function at heights that are difficult for human workers to navigate, allowing warehouse operators to maximize floor space by making better use of vertical space.
Application of ASRS racks system
– Automotive and electronics spare parts storage
– Preservation of chemicals and pharmaceuticals
– Mechanical engineering
– MRO and storage of maintenance parts
– Healthcare logistics, IV bags, pharmaceuticals, glass slides, blocks, etc. | https://vietposrack.vn/en/operating-dedicated-asrs-racks-for-cold-storage/ |
With the rising stakes of continuous software delivery as a strategic business initiative, millions of dollars are being invested in DevOps initiatives within software organizations. While, in theory, DevOps should increase the speed, quality and transparency of your delivery processes through enhanced collaboration between various disciplines and work groups, real process improvement and return on investment from DevOps initiatives have been difficult to come by. One of the major barriers to DevOps success can be siloed teams, processes and tooling across the application delivery lifecycle, which make it difficult to trace and automate the flow of work, and optimize the delivery pipeline. You need to connect …
As enterprises continue to add more tools to handle specialized portions of software delivery, an alignment has begun to place more emphasis on data than tools. This alignment realizes the value of data — not just processes or applications. The result: a real need to leverage insights into the practices and better optimize them. Multiple technologies, processes, applications, and systems need to be updated and maintained on a regular basis to keep this fragile ecosystem functioning properly. What does “DevOps success” really mean, and why do we need to avoid tool chaos? Of course in a multi-team enterprise environment it …
| |
Q:
Unexpected behaviour of code while executing a program for specific test cases (in Java)
Here's a piece of code that I've written to check if the sum of digits and the sum of squares of digits of a number are prime, for numbers in a given range. If they are both prime, I simply increment the counter, and ultimately I print the counter value.
for(int j = lb; j <= ub ; j++)
{
temp = j;
do
{
sd = sd + (temp%10);
sosd = sosd + ((temp%10) * (temp%10));
temp = temp/10;
}while( temp != 0);
for(int p = 2; p <= (sd/2) ; p++)
{
if( p == sd/2 )
pf = 0;
if( sd % p == 0 )
{
pf = 1;
break;
}
}
for(int p = 2; p <= (sosd/2) ; p++)
{
if( p == sosd/2 )
pff = 0;
if( sosd % p == 0 )
{
pff = 1;
break;
}
}
if( pf == 0 && pff == 0 )
count++;
sd = 0;
sosd = 0;
}
System.out.println(count);
All the variables have been properly defined and declared (please bear with the variable names).
The problem is: when I run for lb = 10 to ub = 20, I get count = 4 (which is correct).
But when I run for lb = 1 to ub = 20, I get count = 3 (it is wrong!! And I'm unable to find how this is so. I tried printing individual values, only to find that there is something wrong with count and it doesn't increment for the last time. And much to my astonishment, it produces the right answer for the first case that I tested, which is a subset of this case!).
Please help!
A:
It would be better and easier to analyze the code after adding proper methods whose names clearly hint at what they are doing. That will remove the need to rely on global variables in such a dangerous way, where they need resetting in the loops, something we tend to forget.
Define three methods: one to sum the digits, one to sum the squares of the digits, and one to check if a number is prime. After that, finding bugs is easier, as you can debug the methods individually.
Check the following rewrite:
static int sumOfDigits(int number){
int sum = 0;
while (number != 0) {
sum += number % 10;
number /= 10;
}
return sum;
}
static int sumOfSquaresOfDigits(int number){
int sum = 0;
int digit=0;
while (number != 0) {
digit=number % 10;
sum += digit*digit;
number /= 10;
}
return sum;
}
static boolean isPrime(int number) {
    // 0, 1 and negative numbers are not prime
    if (number < 2) return false;
    // 2 is the only even prime
    if (number == 2) return true;
    // any other even number is a multiple of 2
    if (number % 2 == 0) return false;
    // if not, then just check the odd divisors
    for (int i = 3; i * i <= number; i += 2) {
        if (number % i == 0)
            return false;
    }
    return true;
}
Now that those methods are defined, the code becomes like this:
int count=0;
int lb=1;
int ub=20;
for(int j = lb; j <= ub ; j++)
if ( isPrime(sumOfDigits(j)) && isPrime(sumOfSquaresOfDigits(j)) )
count++;
System.out.println(count);
Note: the code for summing digits was taken from here, and I modified it to make the version for summing squares.
The code for checking primes was taken from here.
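For reference, here is a self-contained version that simply combines the three helpers above into one class (the class name and the main wrapper are just for illustration); with the bounds from the question, lb = 1 and ub = 20, it prints 4:
public class DigitPrimeCount {

    static int sumOfDigits(int number) {
        int sum = 0;
        while (number != 0) {
            sum += number % 10;
            number /= 10;
        }
        return sum;
    }

    static int sumOfSquaresOfDigits(int number) {
        int sum = 0;
        while (number != 0) {
            int digit = number % 10;
            sum += digit * digit;
            number /= 10;
        }
        return sum;
    }

    static boolean isPrime(int number) {
        if (number < 2) return false;            // 0 and 1 are not prime
        if (number == 2) return true;            // 2 is the only even prime
        if (number % 2 == 0) return false;
        for (int i = 3; i * i <= number; i += 2) {
            if (number % i == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        int lb = 1, ub = 20, count = 0;
        for (int j = lb; j <= ub; j++) {
            if (isPrime(sumOfDigits(j)) && isPrime(sumOfSquaresOfDigits(j))) {
                count++;
            }
        }
        // For 1..20 the qualifying numbers are 11, 12, 14 and 16, so this prints 4
        System.out.println(count);
    }
}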
A:
The problem lies in the two inner for loops:
for (int p = 2; p <= (sd/2) ; p++)
{
if( p == sd/2 )
pf = 0;
Whenever sd is less than 4, this loop body never executes, so the pf variable is never reset to 0 and keeps its value from the previous iteration. Simply try resetting pf and pff to 0 in your outermost for loop:
pf = 0;
pff = 0;
EDIT:
Here's the modification to your code that seems to work fine for me:
for (int j = lb; j <= ub; j++) {
    sd = 0;
    sosd = 0;
    pf = 0;
    pff = 0;
    temp = j;
    do {
        sd = sd + (temp % 10);
        sosd = sosd + ((temp % 10) * (temp % 10));
        temp = temp / 10;
    } while (temp != 0);
    // Check for 1 (1 is not a prime number)
    if (sd == 1 || sosd == 1) continue;
    // with pf and pff reset above, the p == sd/2 trick is no longer needed
    for (int p = 2; p <= (sd / 2); p++) {
        if (sd % p == 0) {
            pf = 1;
            break;
        }
    }
    for (int p = 2; p <= (sosd / 2); p++) {
        if (sosd % p == 0) {
            pff = 1;
            break;
        }
    }
    if (pf == 0 && pff == 0)
        count++;
}
System.out.println(count);
| |
FIELD OF THE INVENTION
BACKGROUND OF THE INVENTION
SUMMARY OF THE INVENTION
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The present invention relates to the field of real-time scheduling management. In particular, it relates to wireless, mobile, real-time workflow and scheduling for building, particularly homebuilding.
Homebuilders, particularly those who are required to simultaneously manage numerous construction sites, are continually searching for methods of improving efficiency in managing these sites and monitoring workflow. In particular, the need to manage and monitor numerous different suppliers, which may also include contractors, sub-contractors and tradespeople, and to coordinate schedules amongst these parties, represents a significant cost in both time and money.
Project management is essential to ensuring that the homebuilding operation proceeds smoothly. At each construction site, different suppliers need to be coordinated to ensure that jobs are completed to the proper standard, house completion deadlines are met, and budgets are kept in control. Each job needs to be verified for quality and confirmed as completed so that payment for the job can be executed, typically by purchase order, or placed on hold. As the homebuilder becomes responsible for multiple construction sites, hereinafter referred to as projects, each project comprised of numerous houses, hereinafter referred to as lots, with each lot comprised of different jobs, hereinafter referred to as tasks, all running simultaneously, the amount of management overhead required can quickly overwhelm profit margins and make the business operation impractical.
The preferred lots are home constructions associated with the project (a construction site) and can be identified by the addresses of the homes. Tasks are materials associated with the lots, such as tiles for a kitchen floor, and can contain the specific task supplier information, if necessary. The tasks cover the assignments of the suppliers, and are organized based on the lot production schedule. For example, the task of tiling the kitchen floor is assigned to a specific supplier and a lot, which belongs to a specific project, and given a projected start date and end date based on the lot construction timetable. The same task would include the detailed task information for the tiles, purchase order information for the tiles (if applicable) and the supplier information for the task.
Products have been developed to attempt to address this need, such as BUILDPRO™ by Hyphen Solutions, however, they are based on centralized or web-based applications, requiring an active Internet connection for ongoing operation. As many projects are in locations that receive intermittent or no Internet access, a product is needed that can be provided as an application on a mobile device, eliminating the need for a continuous, active Internet connection.
As a related issue, current products rely on one-time or “data dump” synchronization. That is, the synchronization between the user units and the central ERP (Enterprise Resource Planning) system occurs once a day, usually at a scheduled time, such as midnight. By transferring all the synchronization data at once, substantial transfer bandwidth is required, necessitating both a substantial Internet connection and time to complete the data transfer. Furthermore, there is a possibility of conflicting updates, as the users only receive updated data on a daily basis. Thus, there is a need for a system that provides synchronization in a manner which reduces bandwidth requirements, as well as reducing the risk of conflicting updates.
There is a need for a project management system that reduces the project management effort, reduces paperwork and data input and reduces financial costs while providing improved ability to meet closing dates, oversee suppliers, and increase the number of projects, lots, and tasks that can be managed simultaneously.
It is an object of this invention to partially or completely fulfill one or more of the above-mentioned needs.
The invention consists of a system for real-time synchronization of work teams and materials for homebuilding developments, comprising: a) an ERP system containing information for projects, including: lots for completion for each project, tasks for completion for each lot, assignment of work teams and material to specific tasks, projected and actual times for completion of tasks, and invoicing information for payment upon task completion; b) a web server enabling communication between the ERP system and mobile devices, and enabling the mobile devices to send, receive and change information in the ERP system in real-time; c) one or more mobile devices assigned to one or more individuals for real-time tracking and recording of production and work status information for the work teams and real-time transmission and intermittent synchronization of updated information to and from the ERP system, and also enabling communication with parties assigned to a task directly from a task list or individual task display; d) one or more software applications coordinating communication, data transmission, synchronization and security for the mobile devices, the web server and the ERP system; where the mobile devices are capable of independent operation from the ERP system and do not require a continuous connection to the ERP system or the web server.
The invention further consists of a method of real-time tracking of production schedules and timetables for one or more homebuilding developments, comprising: a) assigning one or more mobile devices to one or more users associated with projects and lots, the mobile devices providing access to project and lot information, including: lots for completion for each project, tasks for completion for each lot, assignment of suppliers and material to specific tasks, projected and actual times for completion of tasks, contact information for suppliers and invoicing information for payment of purchase orders upon task completion; and each mobile device capable of operating independently without a continuous connection to a central system; b) tracking performance of production tasks for each lot via real-time monitoring by the users; c) reporting task performance data to a central ERP system and comparing real-time performance of the production tasks to scheduled timetables for the production tasks; d) enabling modification and updating of scheduled timetables for production tasks based on performance data for the production tasks and additional real-time input from the individuals; e) coordinating compensation for production activities with the performance data, including enabling payment for completed tasks from the mobile devices via purchase orders; f) synchronizing, on a sequential basis, updates to scheduled timetables and other information between the mobile devices and the ERP system.
The sequential synchronization can take place by triggering from the user, or on an automatic basis, recurring at regular intervals (30 to 480 minutes).
Other and further advantages and features of the invention will be apparent to those skilled in the art from the following detailed description thereof, taken in conjunction with the accompanying drawings.
The inventive system presented herein consists of a wireless, real-time scheduling system for homebuilding that permits the collection and distribution of information from any location by any employee. The system consists of at least three components, as shown in FIG. 1: an ERP (Enterprise Resource Planning) system to store the data necessary for operation of the system, a web server to run a web service (such as OnLocation) for system communications and data transfer, and one or more mobile devices, such as a BlackBerry™, with an on-board application to interact with the web service and the ERP system. The system enables mobile devices to share scheduling and purchase order information with the ERP system and makes the operation of multiple projects, lots and tasks more efficient.
The system also includes one or more software applications as necessary to enable communication and data-sharing between the components, particularly synchronization between the mobile device and the ERP system. As noted above, the primary application is resident on the mobile device, permitting the device to operate independent of the ERP system and web server. This also means the mobile device can be used when no Internet access is available, a common occurrence on projects.
The ERP system includes a database, which contains all the information related to the projects, lots, tasks and suppliers. This data includes contact information for suppliers, purchase order information, and all other information that is required to create and maintain the construction schedules monitored by the ERP system. The mobile devices are then able to access this information from the database as required, eliminating the need for storage on the mobile device.
As applicable, the system and/or a method of executing the instructions for the system can be provided as computer-executable instructions on a computer-readable storage medium. In this context, computer-readable storage medium includes, but is not limited to, physical media, such as CDs, DVDs and flash (solid-state) drives, as well as permanent or temporary media, such as computer ROM, computer RAM, and digital delivery services, either as a single file, or as a multi-file, multi-part file sharing service (e.g. BitTorrent).
Synchronization
There are two types of synchronization used by the system, an initial “deep” synchronization, and the subsequent ongoing sequential synchronization. The deep synchronization initially transfers all the information related to projects, lots, tasks, suppliers, customers, employees and other categories that are tracked by the system to the mobile device. The ongoing sequential synchronization is then limited to transferring information related to those categories and elements that have changed since the previous synchronization. Thus, the use of sequential synchronization keeps the amount of information transferred to a minimum, reducing the network and bandwidth requirements for the system.
Ongoing synchronization can be manually activated by the individual, be automatically initiated by the mobile device (e.g. every 30 to 480 minutes), or a combination thereof. Preferably, synchronization occurs at regular intervals, such as 30 minutes or multiples thereof, up to maximum synchronization period of only once per eight-hour shift (e.g. 480 minutes). The synchronization period is set to keep bandwidth traffic to a minimum, however, the ability to trigger an immediate synchronization should be provided to the user to allow last-minute and important changes to be propagated throughout the system as rapidly as possible.
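As a purely illustrative sketch (the class, interface and method names below are hypothetical and not part of the disclosed system), sequential synchronization amounts to the mobile device requesting only the records that have changed since its last successful exchange with the ERP system:
import java.time.Instant;
import java.util.List;

/** Illustrative delta-synchronization loop for the mobile client. */
public class SequentialSync {

    /** Anything the ERP side can report as changed since a given moment. */
    public interface ChangeFeed {
        List<String> changesSince(Instant lastSync); // e.g. updated tasks, suppliers, POs
    }

    private Instant lastSuccessfulSync = Instant.EPOCH; // a deep sync would populate this initially

    /** Pull only the records that changed since the previous synchronization. */
    public void synchronize(ChangeFeed erp) {
        Instant requestedAt = Instant.now();
        List<String> changedRecords = erp.changesSince(lastSuccessfulSync);
        for (String record : changedRecords) {
            applyLocally(record);                 // update the on-device copy
        }
        lastSuccessfulSync = requestedAt;         // the next sync transfers even less
    }

    private void applyLocally(String record) {
        System.out.println("Applying update: " + record);
    }
}
Keeping only a last-sync timestamp on the device is what keeps each transfer small; a deep synchronization is simply the same exchange with the timestamp reset to the beginning of time.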
Tasks
The system is task-based, defining each project and lot item directly as a task and operating on records of task initiation and completion. A typical project will contain several lots and hundreds of tasks. Each task contains detailed information pertaining to that task, including the start date (real and projected), end date (real and projected), completion status, assigned lot(s) and assigned supplier. Purchase order information is also included as part of the task, as necessary, although it is downloaded to the device on-demand, and only stored on the ERP system. The initial task information is set out at the start of the project, specifically a lot, and is modified to reflect actual lot progress and completion either by the mobile employee supervising the tasks, or at the main server. Changes to the tasks are then recorded and sent out as part of the synchronization process. The addition of new projects and lots can require deep synchronization, due to the amount of data transfer involved.
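For illustration only, the per-task information enumerated above maps naturally onto a small data class; the field and method names below are hypothetical and not taken from the embodiment:
import java.time.LocalDate;

/** Minimal sketch of the information carried by one task. */
public class Task {
    String taskId;
    String lotId;                 // the lot (house) this task belongs to
    String supplierId;            // party responsible for completing the task
    LocalDate projectedStart;
    LocalDate projectedEnd;
    LocalDate actualStart;        // null until work begins
    LocalDate actualEnd;          // null until work is verified complete
    boolean completed;
    String purchaseOrderNumber;   // detailed PO data stays on the ERP side

    boolean isLate(LocalDate today) {
        return !completed && projectedEnd != null && today.isAfter(projectedEnd);
    }
}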
On a homebuilding project, different stages specific to lots can be defined as tasks (e.g. foundation, framing, wiring, plumbing, etc.) and can then be broken down in greater detail, by room, by supplier, or whatever other category is best suited to reflect progress on the tasks and enable proper tracking.
Preferably, each task is assigned to a supplier, who is the party responsible for the completion of the task. On a homebuilding project the supplier is typically a contractor, sub-contractor or tradesperson. Lots can also be assigned tasks, which then track materials allocated to the lot (e.g. floor tiles). The lots also have suppliers, the party responsible for providing the materials. The task entry is further linked to the contact information for the suppliers, so that the user can initiate communication (phone, email, text message, etc.) on the mobile device directly from the task display.
Summary Screen
The system interface is based on a summary screen on the mobile device, as shown in FIG. 2, providing an at-a-glance summary of all current information, as well as enabling direct access to the different categories via a drop-down menu. The fields used in the summary screen are: Projects—the projects assigned to the individual; Lots—the lots assigned to the individual; Tasks—the number of tasks associated with the Projects and Lots. Further information can include the number of Lots closing this Week or this Month, Suppliers and their contacts associated with the Tasks, Buyers associated with the Lots, Lot Options associated with the Lots, Purchase Orders associated with the Tasks, and the date and time of the last synchronization, as well as the number of changed items since the last synchronization.
From the summary screen, also via the drop-down menu, the user can then navigate through and access a list of Projects, Lots, Completed Lots, Lot Details, Buyer Details, Lot Options, Tasks, Completed Tasks, Suppliers, Purchase Order details, perform synchronization, and generally engage in any project tracking/recording activity which they have been assigned to monitor and in any application functionality available in the ERP system.
View Tasks
The most commonly used display is the task listing, which displays the tasks in the lot belonging to a specific project. The list can be filtered to exclude completed tasks, restricted to list only those tasks in progress on a specific date (typically that day), and sorted to show flagged tasks of high priority. From this list, tasks can be marked as completed, have notes added, have a priority flag added or removed, or have further details shown about the task, the associated supplier, or the purchase order allocated to the task. Preferably, the most commonly used options (in progress, flagged) are provided as separate display options for ease of use. The series of screen shots in FIGS. 3A-3D shows the process of selecting a task and viewing task details.
The preferable configuration is to break the tasks into to-do lists, one for the current day, one for the next day, and one for tasks in progress. This setup enables at-a-glance assessment of tasks status and allows the user to prioritize their monitoring and updating of the task schedule.
Update/Create Tasks
The initial task list for the project is preferably generated at the ERP system side and then sent to the mobile devices during synchronization. However, as the homebuilding project progresses, in addition to modifying start/end times for tasks, it can be necessary to add new tasks, either due to omission from the original list or because they become necessary due to changes in the production schedule.
The task display interface on the ERP system side provides the user with the ability to enter new tasks and link them to existing lots and suppliers, as required. Task creation is generally not advised while production schedules are in progress, and new tasks can be created solely from the ERP system side. The series of screen shots in FIGS. 4A-4G shows the different aspects of task viewing and updating.
Daily Task Updates and Tracking
The user, typically a site superintendent or project manager, is preferably the person responsible for physically monitoring task completion on the project. Thus, as the user conducts a review of ongoing tasks at the job site, they record the progress information on the task list. Information can be recorded as notes, and flags set and actual start/end dates modified in real-time to reflect the actual work progress. This information is then shared via the synchronization process with the ERP system, allowing multiple individuals on a single job site and/or multiple job sites to be coordinated from a single central hub.
By providing a mobile device for data input, the user is able to more rapidly act in response to problems on-site, and is further present on site much more often than if they are required to return to an office or other fixed location to provide updates. Additionally, by having real-time updates to the production schedule at hand, the on-site user is granted greater flexibility in the task management process, enabling them to negate or minimize potential delays arising from other areas of the project.
System Updates
As discussed above, updates to tasks and other project information (e.g. supplier contact information) are exchanged between the mobile device and the ERP system during the normal sequential synchronization process. More substantive updates, such as the addition of a new project and lot, can require a new deep synchronization. Software updates can also be included as part of the synchronization process, but can additionally require a reboot or reset of the mobile device, based on the software update requirements. Screen shots in FIGS. 5A-5D illustrate the synchronization process.
Purchase Orders
For security purposes, purchase orders (POs) are handled in two parts. First, the general PO information (number, order date and supplier name) is transferred as part of the synchronization to coordinate with the task list. When the PO is to be completed, the second, more detailed set of information (product codes, quantities and measures, and other line items) is downloaded on-demand by the user. Thus, the detailed information is only provided as needed, reducing system overhead, and can be made subject to an additional security check, preventing errors or abuse. Screenshots in FIGS. 6A-6C illustrate purchase order handling.
Operation
Initially, the task list for the lot is created at the ERP system side. Each task is given a projected start date and end date, along with any further information about the task that is necessary for monitoring, including the user assigned to monitor the task, the materials (and supplier) allocated to the task, and the purchase order or other payment information associated with the task. In home construction, the tasks will be ordered according to standard building practices (e.g. foundation first, then framing, wiring, plumbing, insulation, drywall, finishing) with each task broken down into as much detail as is required to ensure proper task completion and timeline monitoring. For example, a task for “wiring” may have completion based on the entire home, but is further broken down into room-by-room completion targets.
Once the information is in the ERP system, the users are responsible for the initial synchronization (“deep” synchronization) with their mobile devices assigned to specific projects, lots and tasks. Alternatively, the devices can be synchronized by the ERP system administrator and then distributed to those users responsible for monitoring and recording progress on the projects, lots and tasks. On a homebuilding project, assigned users typically can include the project manager, construction manager, site superintendent, assistant superintendent and, if desired, salesperson. Different access levels can be provided to different users based on their authority and role within the project organization. For example, all users may have read access to tasks, but only a few are provided with write access, to minimize errors and control access. Access levels can also be set by task, in addition to global user settings.
The user is required to log in from the mobile device, using their assigned ID and password. The ID and password are preferably linked to the hardware PIN to provide additional security. Once logged in, the mobile device is synchronized to the ERP system (on first use and/or dependent on settings, as discussed above) and the user is presented with the summary screen or home screen, based on the user's settings.
The user is then responsible for monitoring the status of ongoing tasks and recording when tasks are started and completed. Changes in the start times for tasks can either be automatically reflected by a changed end time, or subject to manual changes only. When tasks are noted as completed, the individual with sufficient authority settings within the ERP system will exercise an automated payment process for the completed task, which automatically initiates the transfer of funds to the supplier based on payment settings within the ERP system.
Tasks which are delayed or incomplete can have notes appended detailing the reasons for non-completion and delay. This information can then be used to modify the projected end date, possibly modifying other tasks as necessary to maintain the overall lot schedule target. Additionally, this information can later be used in support of delayed or reduced payments that result from missing the original end date.
As each task is linked to a lot and supplier, the contact information for the supplier is also available from the task entry. Thus, the individual is provided with the means to contact the supplier to determine shipment status, or modify schedule delivery times, in accordance with the full task schedule.
Tasks can also be prioritized and flagged, such that a second task cannot be started until the first task is completed using a predecessor process on the ERP system side. Other tasks can be marked optional, if their completion status is not essential to the completion of the overall lot schedule. Tasks can also be frozen, which will preserve the scheduled start and completion date when recalculating the production schedule due to other affected tasks.
While the above system and method has been presented in the context of monitoring the construction of a single lot, the method is equally applicable to the simultaneous construction of multiple lots on multiple projects and to other building construction.
This concludes the description of a presently preferred embodiment of the invention. The foregoing description has been presented for the purpose of illustration and is not intended to be exhaustive or to limit the invention to the precise form disclosed. It is intended the scope of the invention be limited not by this description but by the claims that follow.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will now be described in more detail, by way of example only, with reference to the accompanying drawings, in which like numbers refer to like elements, wherein:
FIG. 1 is an illustration of the homebuilding scheduling system according to the present invention;
FIG. 2 is a screen shot illustrating the summary screen of the user interface for the system;
FIG. 3A is a screen shot illustrating the task viewing screen;
FIG. 3B is a screen shot illustrating the task viewing screen;
FIG. 3C is a screen shot illustrating the task viewing screen;
FIG. 3D is a screen shot illustrating the task viewing screen;
FIG. 4A is a screen shot illustrating the task update screen;
FIG. 4B is a screen shot illustrating the task update screen;
FIG. 4C is a screen shot illustrating the task update screen;
FIG. 4D is a screen shot illustrating the task update screen;
FIG. 4E is a screen shot illustrating the task update screen;
FIG. 4F is a screen shot illustrating the task update screen;
FIG. 4G is a screen shot illustrating the task update screen;
FIG. 5A is a screen shot illustrating the synchronization screen;
FIG. 5B is a screen shot illustrating the synchronization screen;
FIG. 5C is a screen shot illustrating the synchronization screen;
FIG. 5D is a screen shot illustrating the synchronization screen;
FIG. 6A is a screen shot illustrating the purchase order screen;
FIG. 6B is a screen shot illustrating the purchase order screen; and
FIG. 6C is a screen shot illustrating the purchase order screen. | |
Over the past few decades, a growing body of research has emerged from a variety of disciplines to highlight the importance of cultural evolution in understanding human behavior. Wider application of these insights, however, has been hampered by traditional disciplinary boundaries. To remedy this, leading researchers from theoretical biology, developmental and cognitive psychology, linguistics, anthropology, sociology, religious studies, history, and economics come together in this volume to explore the central role of cultural evolution in different aspects of human endeavor. The contributors take as their guiding principle the idea that cultural evolution can provide an important integrating function across the various disciplines of the human sciences, as organic evolution does for biology. The benefits of adopting a cultural evolutionary perspective are demonstrated by contributions on social systems, technology, language, and religion. Topics covered include enforcement of norms in human groups, the neuroscience of technology, language diversity, and prosociality and religion. The contributors evaluate current research on cultural evolution and consider its broader theoretical and practical implications, synthesizing past and ongoing work and sketching a roadmap for future cross-disciplinary efforts. This book is published in the Strungmann Forum Reports Series. | http://mitpress.universitypressscholarship.com/view/10.7551/mitpress/9780262019750.001.0001/upso-9780262019750?rskey=3lv0po&result=8 |
Defeating Ebola Together Week 3: Ebola's Impact "Security Forecasts and Expectations: What Constitutes an Acceptable Risk?"
Reactions to the Ebola epidemic are characterized by concern about the risks on a worldwide scale.
These risks are supposed to be rapidly spreading, leaving countries rich and poor similarly exposed.
developed in the past 15 years or so.
does this epidemic constitute an acceptable risk or not?
The Ebola epidemic is taking place in a context which sociologists have dubbed the "risk society."
explored the various risks connected with human activity.
became an issue around the end of the 20th century.
of whose existence we are aware because experts measure them and, along with the media, tell us about them.
Moreover, these risks are global, going beyond political and geographic borders, leaving no one, rich or poor, unexposed.
and that these high-risk situations are dangers for the entire planet.
anticipate potential urgent threats and to be ready for them.
should therefore be avoided, or at the very least studied.
our conceptual frame around the end of the 20th century.
new technologies would bring only benefits and that relations on a worldwide scale were becoming pacified.
So there is a new outlook on global risks.
The Ebola epidemic also relates to how we think about global health.
diseases has shot back to the forefront.
that they were no longer a meaningful source of concern.
focusing on these new threats and their devastating potential.
which remained relatively obscure but created awareness and fear of bio-terrorism.
The idea being that infectious diseases are dangerous.
but they can also be used willfully for destructive purposes.
brought these new dangers to the forefront of public awareness in modern societies.
faster than ever due to the ubiquity of personal mobility on a worldwide scale.
have increasingly accounted for new potential threats.
the "risk society" and the rise of worldwide health concerns?
as an event deemed unacceptable.
by Mary Douglas, an American anthropologist, in the 1980s.
which defines certain threats as unacceptable.
remained low compared to what is happening in Africa.
quite irrespective of the actual magnitude of a threat.
Infectious diseases have always created lots of fear.
Today, the reemergence of infectious diseases has reignited these fears.
the creation of the very first set of international health regulations.
and it is now implied that everyone is threatened by dangers such as Ebola. | https://zh.coursera.org/lecture/ebola-vaincre-ensemble/previsions-et-attentes-en-matiere-de-securite-quels-risques-sont-acceptables-dA0cC |
Reaching for neutrality/non-attachment through discernment
Attaining a state of non-attachment means that you, as the listener/viewer, are able to experience an emotionally stable response to the information that you are receiving and processing. When we get triggered and lash out at something that frightens us or when we hear some information that fits a bias we may have towards a particular viewpoint, we incline ourselves towards an immediate opinion that may or may not have a foundation in truth.
In fact, those who are master manipulators have learned the art of appealing to our emotions and pulling us into what can be likened to a trance where all critical thinking seems to go out the window. It is necessary that we develop a balance between our emotional perception of information and our analytical assessment. The emotional component is more of a feminine or intuitive aspect. The analytical component comprises more masculine or mental energies. Both are critically important, and an imbalance in either direction can create that attachment to a particular bias that we mentioned above.
Non-attachment also means that we are willing to let go of long-held beliefs when enough evidence has surfaced to dispel the authenticity of those beliefs. When we refuse to see the truth even when it is right in front of our noses, it is usually because there is some psychological reason why we cannot “see” it. This is called cognitive dissonance, and underlying the dissonance is repressed emotional baggage stemming from things like survival fears, fears of disconnection, pride, etc.
As we all navigate the waters of mass communications during these chaotic times of agenda-bent psychological manipulation and coercion, it is necessary that we are acutely aware that censorship is occurring, that fake news is real, and that this war we are in is a war over control of humanity’s conscience and psyche. Some would say the ultimate prize is our individual souls.
Bottom line: it is necessary that we develop discernment, that we learn how to read our intuitions from a neutral, non-attached center, and that we also balance that by researching for ourselves and using critical thinking.
Below are two videos from James Corbett.
In the first video (“Who will fact check the fact checkers?”) James exposes how groups have been deliberately created to serve as “debunkers” or “Fact Checkers” in order to malign and discredit actual truths that contradict the main stream narrative. From Corbett Report: “we’ve all come across online fact checkers that purport to warn us away from independent media sites under the guise of protecting us from fake news. But who is behind these fact check sites? How do they operate? And if these ham-fisted attempts at soft censorship aren’t the solution to online misinformation, what is? Join this important edition of The Corbett Report podcast, where we explore the murky world of information gatekeeping and ask ‘Who will fact check the fact checkers?’”
In the second video (The Corbett Report: “Fact Check Bill Gates & The God Gene Vaccine”) James uses discernment and critical thinking to debunk a video that has been circulating around the alternative news outlets. This video has been chosen because it illustrates two points:
- Manipulation is coming from every angle, traditional mainstream media as well as the alternative media. The ultimate goal is to confuse the masses and tarnish the reputation of those who are sharing the truth.
- It is a perfect example of how humans get attached to information when they are emotionally biased towards a particular outcome or viewpoint. | https://yogaesoteric.net/en/reaching-for-neutrality-non-attachment-through-discernment/ |
It is almost a crime to visit Norway without seeing one of its many astounding fjords. The fjords look incredible from any angle and dwarf everything else in the landscape; Sognefjord in Norway reaches a depth of 1,308 metres (4,291 feet) and Hardanger Fjord stretches across the landscape for a whopping 179 kilometres (111 miles).
Geiranger Fjord is another of the country’s most popular fjords and a UNESCO protected area, and you’ll find numerous sightseeing, hiking, fishing, rafting and cycling trips to go on from here. Oslo Fjord, meanwhile, provides a beautiful and dramatic backdrop to the Norwegian capital.
Forget the Alps, Norway is the perfect place for snow sports. It is no wonder that Norway has won more medals than any other nation at the Winter Olympics. Lillehammer, the pretty little town two hours north of Oslo, hosted the Olympics in 1994 and is now there for the public to show off their skills, or lack thereof, in both skiing and snowboarding. Hemsedal, Trysil and Geilo are also excellent winter sport resorts. Try your hand at langrenn, something that some Norwegians even practice on roller skis during the summer.
Norway is a European sanctuary for certain species. Moose, bears, wolves and musk oxen are all still spotted (some occasionally) in Norwegian forests, while reindeer can be found both in the wild and as part of Sami herds. Polar bears are reserved for Svalbard; however, diving for king crabs or spotting a white-tailed eagle is possible on the mainland (as long as you head to the north).
Whales visit the Norwegian coast every year, and there are numerous whale-watching tours to allow a glimpse of these magnificent creatures. Sperm whales are most common, and if luck is on your side you might see pilot whales, minke whales, humpbacks, dolphins and killer whales. Head to Vesterålen, Tromsø or Narvik to give yourself the best chance of spotting these impressive beasts.
Edvard Munch’s painting The Scream is probably Norway’s most famous painting and it can be found in the National Gallery in Oslo as well as at the Munch Museum (full disclosure: he painted several versions of many of his paintings). Norway’s art scene is much bigger than Munch, however. A trip to Vigelandsparken, with more than 200 sculptures, is another must-see in Oslo. For some live art, head to the city’s beautiful, modern Opera House.
Do as the Norwegians and pack an orange, a Solo (orange-flavoured soft drink) and a Kvikklunsj (Norweigan chocolate wafer) in a trusty bag and then go and explore one of Norway’s many magnificent hiking trails. The most important thing to remember is to leave things as you find them – take nothing but photos. Some of the country’s favourite hiking trails and viewing points take you around the fjords and include, but are not limited to, Preikestolen, Trolltunga and Fjellstua.
If hiking is not adventurous enough, then glacier hiking might be just perfect. At the Briksdal glacier it is possible to join a tour and hike along one of these constantly-moving, ancient parts of nature. Hiking is a fantastic way to discover Norway’s nature and to get a better understanding of the Norwegians’ close relationship to their surroundings. Oh, and of course the views are just amazing.
Bergen is the second-largest city in Norway and was once the country’s capital. Its national and cultural significance can be explored throughout the city, not least at the beautiful Bryggen Wharf. Bryggen has been a central part of the city since the Viking age, and today comprises a colourful row of houses dating back as far as the 14th century.
Like most of Norway, Bergen is situated right by some wonderful nature. Taking Fløibanen up to Mount Fløyen is one of the city’s most iconic attractions, but you should also look at walking up the steep paved trail called Stoltzekleiven. Relax at the top with a traditional Norwegian waffle or go to explore everything the wonderful city below you has to offer.
From reindeer to mutton to salted and dried fish, traditional Norwegian cuisine has been heavily influenced by Norway’s harsh environment, which made it essential that food could last for many months. Therefore, much of the country’s traditional food has been preserved in creative and unusual ways through history. Many of these classic dishes taste amazing and make for a fun way to learn more about traditional Norwegian ways of life. For an experimental, modern take on Scandinavian food, the three-Michelin-starred Maaemo in Oslo is the ultimate place to go.
Pillaging seafarers or farmers with a keen interest in sailing – call them what you like, but the Vikings have had a huge impact on Scandinavia and the rest of the world. Learning about the Vikings is an essential part of discovering more about Norwegians and where they come from. The Viking Ship Museum in Oslo is perfect for this. The seafaring did not stop with the Vikings, and the Kon-Tiki and Fram Polar Ship Museums have fantastic exhibitions on the many exotic and freezing expeditions that Norwegians have ventured out on.
Norwegian waters are not just rich in oil but also have a huge variety of fish, something that has been essential for survival through many centuries. Traditional fishing villages such as Henningsvær and Reine are still around, and the Lofoten archipelago is dotted with them. People still live in the villages and use them as they have always been used. When visiting, try and stay in one of the traditional fishing huts, which may have a cod or two drying outside.
Oslo is the fastest growing capital in Europe. Head here to get a glimpse of modern Norway. Oslo’s setting makes it something special; despite being the biggest city in Norway, nature is always close by, and it probably has some of the freshest, most delightful air of any capital city in the world.
Norway can be a bit confusing at times, especially if you find yourself above the Arctic Circle at the right time of year, when it is either dark or light all day. Although it can take a toll on the body clock, this phenomenon is exactly that – phenomenal. You’ll find few other places on earth where you can experience the midnight sun and endless nights in the same comfort as in Norway.
Norway is of course also home to another above-the-Arctic-Circle must-see experience: The Northern Lights. The North Cape and Tromsø are perfect places to experience both.
Featured image by Stròlic Furlàn. | https://theculturetrip.com/europe/norway/articles/11-top-things-to-see-and-do-in-norway/ |
Inspiring Leadership: Stories & Conversation with Conscious Leaders
June 22, 2017 @ 6:00 pm - 8:00 pm
Leading a conscious business isn’t always easy. Shortly after Everybody Matters author Bob Chapman became CEO of Barry-Wehmiller, a supplier of manufacturing technology and services, he came to a realization: “Everything I learned was wrong in terms of the relationship that I had with the people in our organization!”
Like many leaders, Chapman soon understood that his role was as much about the company’s financial success as it was about service to its stakeholders (ranging from investors and employees to customers, vendors, the community and the environment).
Leaders face tough choices, make mistakes, and have transformative realizations that shape their approach to leading their companies. There can also be breakthrough mindset shifts like Chapman’s that change the way they view their purpose and steer their companies in new directions.
This latest event in our Conscious Conversations series will feature leaders from a diverse array of companies sharing stories about the ups and downs, the failures and the aha moments, that have shaped their approach to leadership.
These stories will serve as food for thought to stimulate small group conversations about leadership experiences and challenges for conscious businesses. Participants will choose from a set of related themes for a deeper dive, and small groups will enjoy more focused discussions to share personal experiences and challenges, coaching one another on approaches and solutions. | https://impacthubboston.net/event/inspiring-leadership-stories-conversation-with-conscious-leaders/ |
Stepped care is designed to provide mental health treatment in the most effective and efficient way. It aims to provide patients with low intensity interventions in the first instance and only move onto high intensity treatments if outcome is not 'successful'. However, there is a paucity of research about how health professionals make decisions about treatment and the experiences of patients within this decision-making process. Using a multi-method approach, this study aimed to explore health professional and patient decision-making in stepped care for anxiety and depression. 24 health professional interviews from three stepped care sites were conducted, which included the completion of an active information search (AIS) think-aloud task. In addition, 14 patients were interviewed about their experiences of decision-making whilst being managed within the stepped care model. Qualitative interview data was analysed using the principles of Framework analysis, while some of the data collected in the AIS think-aloud task lent itself to quantitative analysis. This study revealed that three core tensions exist when making decisions within the stepped care model. These are 1. The notion of standardisation of outcomes versus the individual needs of patients; 2. The public health orientation of stepped care versus the therapeutic orientation of health professionals; and 3. The rhetoric about patient choices versus the realities of shared decision-making in a resource-limited system. The complexity of decision-making within the stepped care model was highlighted. The success of stepped care relies on ensuring that there is an adequate workforce to deliver the intended interventions; where this is not present, health professionals are faced with difficult decisions, and it is clear that those most affected are the less-experienced frontline workers. Scarcity of resources impacts heavily upon the decisions that are made. This can have a substantial impact upon variability in treatment decisions and on the ability to allow for patient choice to be incorporated. Decisions that are made for a patient are influenced by the need to provide them with the treatment that they want (which may not be regarded as what they need within the stepped care model nor necessarily by the health professional) and the capacity of the service. The problem that exists with primary care mental health is that the current demands exceed capacity. Optimal patient care is, in part, traded off by the need to meet the demands of the service. Improving the flexibility of the service may be one solution to the problem, and adopting a stratified/stepped care approach might help to resolve some of the tensions and help to relieve some of the capacity issues. | https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.538512 |
What is a disease? How do we describe its history? These may seem like simple questions. But answering them can be quite challenging. Definitions of disease change over time and space. There are also numerous ways to describe the history of disease. In this course we will examine alternative approaches to the history of disease, how they are constructed and the contribution each approach makes to the history of medicine. We will do this by examining histories of a number of diseases. Emphasis is on how people sought to comprehend disease in the past, what resources they mobilized to make such meanings, and the prevailing cultural and scientific norms that conditioned their thinking. We investigate the ways in which studying disease control and therapeutics in multiple contexts casts a critical light on the functioning of societies and governments. We also focus on how formulations of disease can shape notions of gender, class, race, and childhood, and vice versa. Students will analyze a variety of methodological approaches that historians have adopted in trying to understand and interpret different diseases.
See here for syllabus.
Course Learning Objectives
By the end of this course, students will be able to:
- Identify how ‘disease’, ‘sickness’ and ‘illness’ differ from one another, and why this distinction matters.
- Describe the main features of the history of a range of infectious and non-infectious diseases.
- Understand how cultural, social, and scientific factors influenced how people have understood and defined disease in the past, and continue to do so.
- Evaluate different methodological approaches to studying the history of disease.
- Critique the “biography of disease” idiom in the history of medicine and public health. | https://hopkinshistoryofmedicine.org/social-and-cultural-histories-of-disease/ |
Prepared Statement to be delivered September 7, 2000,
to the U.S. House of Representatives,
Subcommittee on Space and Aeronautics, Committee on Science
Dr. Molly K. Macauley
Senior Fellow, Resources for the Future
An Economics View of Satellite Solar Power
Mr. Chairman and distinguished members of the subcommittee, thank you for inviting me to meet with you today. My name is Molly K. Macauley and I am a Senior Fellow at Resources for the Future, an independent, nonpartisan research organization established in 1952 to conduct independent analyses of issues concerned with natural resources and the environment. The views I present today are mine alone. Resources for the Future takes no institutional position on legislative, regulatory, judicial, or other public policy matters.
Background
I am an economist and have been a member of the Resources for the Future staff since 1983. During that time, I have specialized in the analysis of space policy issues with a focus on economics. I have conducted research on space transportation and space transportation vouchers; economic incentive-based approaches, including auctions, for the allocation of the geostationary orbit and the electromagnetic spectrum; the management of space debris; the allocation of resources on space stations; the public and private value of remote sensing information; the roles of government and the private sector in commercial remote sensing; and the economic viability of satellite solar power for terrestrial power generation and as a power plug in space for space-based activities. This research has taken the form of books, lectures, and published articles. My research is funded by grants from the National Aeronautics and Space Administration (NASA) and by Resources for the Future (RFF).
Introduction
I have been asked to speak today about the economics of space solar-power generation (SSP). My comments are based on recently completed research sponsored by NASA and conducted with experts from the energy industry. NASA asked us to look at SSP economics around 2020, when many space experts expect SSP to be technically achievable. It is important to note that our purpose was neither to advocate nor to discourage further investment in SSP but to provide a framework by which to gauge its economic feasibility if such investment occurs.
Our daunting task was to characterize the market for electricity during that future period. We were to identify key challenges for SSP in competing with conventional electricity generation in developed and developing countries, discuss the role of market and economic analysis as technical development of SSP continues in the coming years, and suggest future research directions to improve the understanding of the potential economic viability of SSP.
I’ve listed my coauthors at the end of my remarks, as well as other experts with whom we met to discuss specific aspects of the SSP market. These included experts in epidemiology and public health; the economics of the environmental and climate change-related effects of energy use; energy and national security; nuclear power (for lessons learned in introducing new energy technologies); and energy investment in developing countries.
I’d also like to add that our study was funded by NASA but the Agency gave us full liberty to carry out an independent course of study and publish our results. We have presented our findings to NASA managers and technologists working on SSP and many of our recommendations have been acted upon.
Summary of our Study
Satellite solar power (SSP) has been suggested as an alternative to using terrestrial energy resources for electricity generation. In our study we considered the market for electricity from the present to 2020, roughly when many experts expect SSP to be technically achievable. We found that a variety of trends from the present to 2020 should influence decisions about the design, development, financing, and operation of SSP. An important caveat associated with our observations concerns the challenge of looking ahead two decades. We based our observations on what we believe to be plausible estimates of a number of key indicators derived from the work of respected national and international research groups, the information and perspectives shared by the experts whom we consulted for the study, and our own judgment. While we believe this information is a valid basis for considering the competitive environment for SSP, we urge our audience to appreciate the pragmatic process and somewhat intuitive elements involved in their estimation. In what follows, I summarize our study. The full study is available at http://www.rff.org.
Our first set of observations concerns the market for electricity, in particular the key attributes of this market that are most relevant to investment in SSP:
The Market for Electricity
- Current trends indicate increasing global demand for energy in general, and electricity in particular, during this period. Electricity demand growth rates will vary significantly by region of the world and by stage of economic development. The highest growth will be in developing economies.
- Deregulation of electricity internationally will strengthen the trend towards decentralized, private ownership and management of utilities in most countries (developed and developing), a major departure from the tradition of nationalized utilities in many countries.
- Nevertheless, investment in and operation of conventional electricity markets in developing economies likely will continue to be, or will be perceived as, risky due to capital constraints, infrastructure limitations, and institutional and environmental factors.
- Constant-dollar electricity generation costs in 2020 likely will be no higher than prevailing recent levels and very likely will be significantly lower.
- The monetary value of environmental externalities in electricity generation appears to be significantly less than some studies have indicated.
- Global climate change is not presently a major factor in power investment decisions in developing countries. Willingness to pay for “clean” technologies tends to rise with increasing incomes, but in developing countries, clean energy may not be highest ranking among health and environmental concerns.
- Resource constraints on fossil fuels are unlikely to be a factor in this timeframe, other than possible short-term supply disruptions caused by political and economic factors.
Taken together, these observations suggest that conventional electricity generation in both developed and developing countries may be more than adequate in terms of (1) cost, (2) supply, and (3) environmental factors.
Our second set of observations pertains specifically to challenges facing SSP:
Challenges for SSP in Competing with Terrestrial Electricity Generation
- The relative immaturity of the technologies required for SSP makes it difficult to assess the validity of estimated costs and likely competitiveness of SSP. For this reason, as in many space development initiatives, orders-of-magnitude reduction in the costs of space launch and deployment and other key technologies is critical. If these reductions occur, the economic viability of SSP would become more promising. Until then, it is premature for the U.S. government to make commitments such as loan guarantees or tax incentives specifically for SSP.
- State-of-the-art conventional power generation technologies increasingly incorporate numerous environmental controls, eroding somewhat the environmental advantage of non-fossil-fuel technologies such as SSP.
- Actual and/or perceived health risks associated with exposure to electric and magnetic fields generated by SSP are likely to be of significant public concern.
- National security and national economic considerations may discourage some countries from participating in an SSP system operated by another country or group of countries. Countries with these concerns may require equity participation in SSP, limit their reliance on SSP only to a small share of their energy portfolio, or decline use of the technology altogether.
These findings argue for the merits of furthering technical advance in technologies required not only for SSP but also for other space activities, and for special consideration of issues that transcend the technical design of SSP, such as health and national security concerns.
We also urged that economic study continue hand-in-hand with SSP technical design. During the course of our study, we shared our interim findings with the engineering teams working on SSP. All parties agreed that this interchange of ideas was mutually beneficial and contributed markedly to deepening our collective understanding of next steps for both the technical team's engineering studies and our economic analysis. The two must proceed in tandem, we all agreed, and specific recommendations as to further economic and market studies follow:
The Role of Economic and Market Analysis as Technical Considerations of SSP Progress
- The energy industry should be invited to be "at the table" in technical and economic analyses of SSP — that is, to both participate in conducting the analysis and learn about the results. The electric utility industry may be particularly interested in helping to guide the development of SSP technical components that are also capable of application in other terrestrial commercial power markets (for example, development of solar cells).
- Modeling of the economics of SSP should explicitly incorporate analyses of risk and uncertainty; include marketplace data about competition from terrestrial energy markets; and provide a means for structuring an efficient long-term technology development program that includes industry participation.
- Continued public funding of SSP for terrestrial power markets must consider the relative return on taxpayer investment in SSP compared to other technologies, in general, and energy technologies, in particular (for instance, photovoltaics). It should be noted that some past projections of large market penetration of new power generation technologies have not been borne out by actual experience (for example, nuclear, solar).
Finally, we identified specific topics for future research:
Additional Issues for Further Study
- Our focus in this report is on the use of SSP in terrestrial markets. SSP capabilities may have applicability to non-terrestrial systems such as the International Space Station, other large orbiting platforms, lunar bases, and other activities to explore and develop space. The benefits and costs of these opportunities should be investigated in the course of future SSP analyses.
- Real as well as perceived safety, health, and environmental risks associated with SSP in both its terrestrial power and nonterrestrial power markets should be assessed and discussed in public forums, engaging both scientists and the public.
Additional Observations
I’d like to conclude my comments by elaborating on several of our study’s conclusions and making some additional observations relevant to our discussion today.
Our study did not consider the idea of satellites designed to relay power from earth-based generation facilities, but some of the findings in our study might be useful in discussion of that application of SSP.
The cost of power in 2020
Our study predicted U.S. electricity generation costs around the year 2020 — a challenging task, but one to which we brought the best information and analysis that we could find. This estimate can be used as a benchmark for the relay concept: if it were to come on line in 2020 or so, can it provide electricity at less than this cost? If so, it could be economically competitive. The estimate is around 3 cents per kilowatt hour in developed countries, and around 5.5 cents per kilowatt hour in developing countries.
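To make the benchmark concrete, the short sketch below works through the kind of comparison implied here. The 3 and 5.5 cents-per-kilowatt-hour figures are the benchmarks quoted above; every number describing the hypothetical relay system (capital cost, energy delivered, lifetime, discount rate, operating cost) is invented purely for illustration and does not come from the study.

```python
# Illustrative back-of-the-envelope comparison against the study's 2020 benchmarks.
# The relay-system figures below are hypothetical placeholders, not study estimates.

def levelized_cost_cents_per_kwh(capital_usd, annual_mwh, lifetime_years,
                                 discount_rate=0.08, annual_om_usd=0.0):
    """Crude levelized cost: annualized capital (capital recovery factor) plus O&M,
    divided by annual energy delivered, returned in cents per kWh."""
    r, n = discount_rate, lifetime_years
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)          # capital recovery factor
    annual_cost_usd = capital_usd * crf + annual_om_usd
    return annual_cost_usd / (annual_mwh * 1_000) * 100  # MWh -> kWh, USD -> cents

BENCHMARK_DEVELOPED = 3.0    # cents/kWh, quoted above for developed countries
BENCHMARK_DEVELOPING = 5.5   # cents/kWh, quoted above for developing countries

# Hypothetical relay system: $2B capital, 5 TWh delivered per year, 30-year life.
relay = levelized_cost_cents_per_kwh(capital_usd=2e9, annual_mwh=5_000_000,
                                     lifetime_years=30, annual_om_usd=50e6)

print(f"Hypothetical relay cost: {relay:.1f} cents/kWh")
print("Below developed-country benchmark: ", relay < BENCHMARK_DEVELOPED)
print("Below developing-country benchmark:", relay < BENCHMARK_DEVELOPING)
```

On these made-up numbers the relay concept would clear the developing-country benchmark but not the developed-country one; the point is only to show how such a benchmark would be used, not to estimate actual relay costs.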
The environment
We found that the environmental costs of electricity generation tend to be smaller than popular discussion suggests. Issues of pollution, deforestation, and global warming are receiving growing attention by the world community. However, cleaner forms of energy have been introduced into both the developed and developing world in numerous initiatives to ameliorate these problems, and some governments in developing countries have already begun to use renewable energy technologies as a tool of economic development. Recent studies suggest that the damage, or social cost, of electricity generated by conventional means may be relatively small, particularly for the noncoal resources likely to figure increasingly in future capacity additions to electricity supply. The estimate of the social cost is about 2 cents per kilowatt hour.
Gas prices, brown outs, running out of oil
The question, “are we running out of oil?,” has been a concern for at least the last 100 years. During the first half of the twentieth century, analysts and officials of the U.S. Geological Survey predicted an exhaustion of U.S. oil reserves within 10 to 20 years. Since then, there have been other alarming studies about depletion, but time and again these have proven wrong. They fail to distinguish between proved, recoverable reserves and discoverable resources. Technological change, including three-dimensional seismic exploration, horizontal drilling, and deeper drilling in the oceans has led to production prospects that were not predicted twenty-five years ago.
The brown-outs over the past year in the western U.S. have been attributed as much to inadequate management of fuel supplies and transmission capacity as to shortages of fuel. The brown-outs were regional, not nationwide, suggesting that there is no overall shortage but that transmission and distribution are part of the challenge. In addition, the electricity industry estimates that about 30,000 megawatts of additional power could be on line by 2010 if plant constructions that have been announced take place.
Gasoline and home heating oil prices have soared this year — but this is only the fourth time in over thirty years. The price of oil now — about $30 a barrel — is nowhere near what it was in the early 1980s, say, when the inflation-adjusted price was about $70 in today's dollars. The high gas prices have hardly affected the sales of low-mileage auto models like sports utility vehicles and gas consumption is still rising. The high prices were an annoyance for many consumers and a hardship for some low-income families who depend on oil to heat their homes. But for the country as a whole, they have not constituted a real economic crisis and they are now declining. For the future, from time to time, unexpectedly, the world's oil market will swing prices dramatically up, but also down.
Energy security
The perceived risks of dependence on imported energy could lead to support for policies of greater self-sufficiency, leading in turn to higher electricity costs or alternative sources of energy. This question may present a rather unique challenge in the context of an SSP regime. A country may not want to be reliant on another country's space-generated power for a significant portion of its baseload electricity. It therefore may look to equity participation in SSP, seek other means of protecting itself against the potential discontinuity of external supply, or possibly reject SSP out of hand.
Investing in developing countries
Another issue that may arise in the application of SSP in developing countries is the perceived risk associated with investing in these countries. The risk relates to unstable governments, economies, and currencies.
Innovation in power supply
Just as SSP represents a potential innovation in electricity supply, so, too, are new technological approaches being developed with which SSP would have to compete. An example is micropower, small local power plants that do not suffer huge transmission losses. Micropower may be most useful in developing countries as an alternative to building large transmission grids.
I hope these observations are useful in our discussion today, and thank you for the opportunity to meet with you.
Authors and Experts Consulted
The study team for our SSP report included RFF scholars and experts from the energy industry. Listed together with their affiliations at the time of the study, they are:
Joel Darmstadter, Resources for the Future
John N. Fini, Strategic Insight, Inc.
Joel S. Greenberg, Princeton Synergetics, Inc.
Molly K. Macauley, Resources for the Future
John S. Maulbetsch, Energy Power Research Institute
A. Michael Schaal, Energy Ventures Analysis, Inc.
Geoffrey S. W. Styles, Texaco, Inc.
James A. Vedda, Consultant
During the study, the authors met several times with other experts to discuss specific aspects of the SSP market. We are grateful for the information and viewpoints shared with us in briefings by these individuals: | https://space.nss.org/testimony-of-molly-macauley-before-house-science-committee-hearings-on-solar-power-satellites/ |
The Empathic Imagination
The critic and novelist Cynthia Ozick tells us, "Metaphor relies on what has been experienced before and therefore transforms the strange into the familiar. Without metaphor we cannot imagine what it is to be someone else, we cannot imagine the life of the Other" (1991).
The extent to which the self can enter into the other can be seen as an expression of the freedom of the imagination. In imagining the other person, the self is constrained by its own vital needs, and the degree to which it is constrained will in turn limit the complexity that characterizes the image of the other. I can illustrate this by referring to the empathic imagination and contrasting empathy to a phenomenon psychoanalysts have called projective identification. (In the discussion that follows I will restrict the term empathic imagination to refer only to people, and not to literature or inanimate works of art.) We usually think of empathy as a form of voluntary imagination in which there is a sense of the self as agent. The empathic imagination is usually experienced as a kind of pleasurable bonding with the other. It relies on metaphor, for within an empathic connection with the other there is a play of similarity and difference based on metaphor. Empathy requires this play of similarity and difference: one recognizes a sense of identity with the other while at the same time retaining one's sense of self. If this play of similarity and difference is absent, one may experience a sense of total identification with the other, which in some instances may create anxiety. This absence of metaphoric play of similarity and difference can again be linked to trauma. In individuals and families that have been severely traumatized, metaphor becomes degraded: instead of feeling an empathic connection to a parent, a traumatized individual may feel as if he is his parent. This is especially evident in children of Holocaust survivors (Bergmann and Jucovy 1982, Grubrich-Simitis 1984).
The term empathy is a late-nineteenth-century word, a translation of the German term Einfühlung, introduced by the German psychologist Theodor Lipps to denote the projection of the self into the object of perception. For Lipps, the original objects of empathy were works of art. Yet the idea of the self entering into the object of perception did not originate with Lipps, as it can be traced back to Vico. In Isaiah Berlin's account (1969), Vico believed that we can understand the past because others' experience is sufficiently woven into one's own experience and can be revived by means of imagination. Vico was the first to discover that meaning is constructed through imaginatively entering into the minds of others.
Samuel Taylor Coleridge, who had read and admired Vico and was probably influenced by him, described imagination as a coalescence of the subject and the object: "Into the simplest seeming 'datum' a constructing, forming activity from the mind has entered. And the perceiving and the forming are the same. The subject (the self) has gone into what it perceives, and what it perceives is, in this sense, itself. So that the object becomes the subject and the subject the object" (Richards 1969, p. 57). Coleridge is saying, in effect, that we should not take the object as something given to us but as something formed through our imagination.
Coleridge's description of imagination as the self entering into what it perceives comes close to our contemporary understanding of empathy. This transitory loss of distinction between self and other also suggests that the roots of empathy may be found in the mirroring of feeling that occurs between mother and child, which may be accompanied by a temporary sense of merging.
Psychoanalysts also understand empathy as a partial or transitory identification, a process in which the self enters into the other. However, there is an important addition: psychoanalysts have observed that the empathic process can also be involuntary and unconscious. In 1926 Helene Deutsch noted that the analyst's unconscious perception of the patient's feelings became transmuted into an inner experience of the analyst.3 Empathy leads to a pleasurable sense of affective bonding with the other. If, however, the other person unconsciously manipulates our imagination and we do not sense an identification, this is experienced as unpleasurable, and accordingly we do not label such a feeling as empathic.
It appears that we do not have a word that denotes this total, conscious and unconscious, affective impact that one mind has upon another. As I noted, the term empathy usually denotes the pleasurable aspect of entering into the mind of another. However, we are all well aware of the fact that the other's unconscious intentionality may evoke in us a variety of negative feelings, such as anxiety, guilt, or rage. Empathy should include the recognition within oneself of negative feelings toward the other. Empathy may result in a modification of the self as the consequence of knowledge of the other. One's sense of self is impacted and altered in the process of assimilating the feelings of the other. Affective knowledge of the other alters the self, and accordingly the self accommodates itself to what is perceived, very much as in Piaget's (1954) description of the child's construction of external reality.
This view is consistent with biological intentionality. Future intent is communicated to another person by means of emotional signals. Emotions are present whether or not the individual providing the signal is conscious of what they are feeling. From the standpoint of the recipient of the feeling, the individual has unconsciously directed the recipient's imagination. But unlike in the examples provided by Scarry and Zeki, where the imagination is expanded and intensified, when the imagination is directed by the process of projective identification, the result will be a constriction or foreclosure of the imagination, as I shall now describe.
Do Not Panic
This guide, Don't Panic, has tips and additional information on what you should do when you are experiencing an anxiety or panic attack. With so much going on in the world today, from taking care of your family and working full time to dealing with office politics, you could experience a serious meltdown. All of these things could at some point cause you to stress out and snap. | https://www.78stepshealth.us/metaphoric-process/the-empathic-imagination.html
How many ways can you write the word EUROMATHS by starting at the top left hand corner and taking the next letter by stepping one step down or one step to the right in a 5x5 array?
Imagine you have six different colours of paint. You paint a cube using a different colour for each of the six faces. How many different cubes can be painted using the same set of six colours?
A standard die has the numbers 1, 2 and 3 opposite 6, 5 and 4 respectively, so that opposite faces add to 7. If you make standard dice by writing 1, 2, 3, 4, 5, 6 on blank cubes you will find. . . .
Given a 2 by 2 by 2 skeletal cube with one route `down' the cube. How many routes are there from A to B?
How many different ways can I lay 10 paving slabs, each 2 foot by 1 foot, to make a path 2 foot wide and 10 foot long from my back door into my garden, without cutting any of the paving slabs?
A huge wheel is rolling past your window. What do you see?
Imagine you have an unlimited number of four types of triangle. How many different tetrahedra can you make?
Rectangles are considered different if they vary in size or have different locations. How many different rectangles can be drawn on a chessboard?
Triangular numbers can be represented by a triangular array of squares. What do you notice about the sum of identical triangle numbers?
The triangle ABC is equilateral. The arc AB has centre C, the arc BC has centre A and the arc CA has centre B. Explain how and why this shape can roll along between two parallel tracks.
Seven small rectangular pictures have one inch wide frames. The frames are removed and the pictures are fitted together like a jigsaw to make a rectangle of length 12 inches. Find the dimensions of. . . .
ABCD is a regular tetrahedron and the points P, Q, R and S are the midpoints of the edges AB, BD, CD and CA. Prove that PQRS is a square.
Is it possible to remove ten unit cubes from a 3 by 3 by 3 cube so that the surface area of the remaining solid is the same as the surface area of the original?
Is it possible to rearrange the numbers 1,2......12 around a clock face in such a way that every two numbers in adjacent positions differ by any of 3, 4 or 5 hours?
A train leaves on time. After it has gone 8 miles (at 33mph) the driver looks at his watch and sees that the hour hand is exactly over the minute hand. When did the train leave the station?
This problem is about investigating whether it is possible to start at one vertex of a platonic solid and visit every other vertex once only returning to the vertex you started at.
In how many ways can you fit all three pieces together to make shapes with line symmetry?
Blue Flibbins are so jealous of their red partners that they will not leave them on their own with any other blue Flibbin. What is the quickest way of getting the five pairs of Flibbins safely to. . . .
Mathematics is the study of patterns. Studying pattern is an opportunity to observe, hypothesise, experiment, discover and create.
Show that among the interior angles of a convex polygon there cannot be more than three acute angles.
We start with one yellow cube and build around it to make a 3x3x3 cube with red cubes. Then we build around that red cube with blue cubes and so on. How many cubes of each colour have we used?
The whole set of tiles is used to make a square. This has a green and blue border. There are no green or blue tiles anywhere in the square except on this border. How many tiles are there in the set?
Start with a large square, join the midpoints of its sides, you'll see four right angled triangles. Remove these triangles, a second square is left. Repeat the operation. What happens?
These are pictures of the sea defences at New Brighton. Can you work out what a basic shape might be in both images of the sea wall and work out a way they might fit together?
The reader is invited to investigate changes (or permutations) in the ringing of church bells, illustrated by braid diagrams showing the order in which the bells are rung.
Imagine you are suspending a cube from one vertex and allowing it to hang freely. What shape does the surface of the water make around the cube?
To avoid losing think of another very well known game where the patterns of play are similar.
Square numbers can be represented as the sum of consecutive odd numbers. What is the sum of 1 + 3 + ..... + 149 + 151 + 153?
Points P, Q, R and S each divide the sides AB, BC, CD and DA respectively in the ratio of 2 : 1. Join the points. What is the area of the parallelogram PQRS in relation to the original rectangle?
Imagine a large cube made from small red cubes being dropped into a pot of yellow paint. How many of the small cubes will have yellow paint on their faces?
Can you mark 4 points on a flat surface so that there are only two different distances between them?
Generate three random numbers to determine the side lengths of a triangle. What triangles can you draw?
Watch these videos to see how Phoebe, Alice and Luke chose to draw 7 squares. How would they draw 100?
Can you describe this route to infinity? Where will the arrows take you next?
Can you find a way of representing these arrangements of balls?
What is the shape of wrapping paper that you would need to completely wrap this model?
When dice land edge-up, we usually roll again. But what if we didn't...?
How much of the square is coloured blue? How will the pattern continue?
Jo made a cube from some smaller cubes, painted some of the faces of the large cube, and then took it apart again. 45 small cubes had no paint on them at all. How many small cubes did Jo use?
How could Penny, Tom and Matthew work out how many chocolates there are in different sized boxes?
On the graph there are 28 marked points. These points all mark the vertices (corners) of eight hidden squares. Can you find the eight hidden squares?
Bilbo goes on an adventure, before arriving back home. Using the information given about his journey, can you work out where Bilbo lives?
Imagine starting with one yellow cube and covering it all over with a single layer of red cubes, and then covering that cube with a layer of blue cubes. How many red and blue cubes would you need?
How many moves does it take to swap over some red and blue frogs? Do you have a method?
A tilted square is a square with no horizontal sides. Can you devise a general instruction for the construction of a square when you are given just one of its sides?
A bus route has a total duration of 40 minutes. Every 10 minutes, two buses set out, one from each end. How many buses will one bus meet on its way from one end to the other end?
Find a cuboid (with edges of integer values) that has a surface area of exactly 100 square units. Is there more than one? Can you find them all?
Lyndon Baker describes how the Mobius strip and Euler's law can introduce pupils to the idea of topology.
Charlie and Alison have been drawing patterns on coordinate grids. Can you picture where the patterns lead?
The aim of the game is to slide the green square from the top right hand corner to the bottom left hand corner in the least number of moves. | https://nrich.maths.org/public/topic.php?code=-68&cl=3&cldcmpid=741 |
If you are an inventor, you may well have found yourself wondering how much your invention is worth. Unfortunately, an idea by itself rarely brings money, but if you have a patent for a technical decision, you can roughly estimate its value.
A patent is a document issued by the Russian Federal Service for Intellectual Property (hereinafter the Rospatent) certifying the priority, authorship and exclusive right to an invention, utility model or design invention.
By definition, the patent is a document certifying the rights of a patent holder to a specific decision. In other words, a patent only provides information about a technical decision and not the decision itself. A patent is an intangible asset and, as it is, a decision can only be revealed once it is implemented.
How is the value of a patent determined, and what factors affect it? The value of the rights to an invention or other subject matter protected by a patent is entirely determined by the economic benefits that the rights holder may receive from the introduction and use of his or her innovations in business or other areas. These benefits may take the form of creating new goods or giving new properties to old goods, reducing the cost of producing goods or services, expanding markets and solving other important tasks.
Patent valuation has its own specificity, which is that the applicability of comparative and cost assessment approaches to such objects is very limited.
A comparative valuation approach, based on comparing the object of valuation with its market analogues, is very rarely used for valuation of intellectual property, since an invention or a utility model is always a unique object and comparing it with analogues will always be incorrect, as it ignores the uniqueness of the decision. In addition, information on patent sales transactions is almost always closed or difficult to access.
The cost approach also struggles. Its methods calculate value from the cost of creating the object of assessment, but the value of a patent may be only weakly related to the cost of its development; there are many examples of commercially successful developments made on a very modest budget.
The most common approach to patent valuation is the income approach, and it is its methods that reliably determine the market value of a patent. These methods make it possible to calculate the value of the subject matter being valued based on an analysis of the potential income that the rights holder may receive from the use of the invention. In other words, the methods of the income approach link the value of the patent being valued with the commercial efficiency of the patent being valued.
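As a rough illustration of the income approach described above, the sketch below discounts the incremental benefit attributable to a patent over its remaining protection term. All of the figures in it (annual benefit, discount rate, probability of successful commercialization) are hypothetical; in a real valuation they would be derived from market and technical analysis.

```python
# Minimal income-approach sketch: patent value as the present value of the extra
# profit the rights holder expects over the remaining protection term.
# All numbers are hypothetical; a real valuation would derive them from market data.

def patent_value(annual_benefit_usd, remaining_years, discount_rate,
                 commercialization_prob=1.0):
    """Discounted cash flow of the incremental benefit attributable to the patent."""
    pv = sum(annual_benefit_usd / (1 + discount_rate) ** year
             for year in range(1, remaining_years + 1))
    return pv * commercialization_prob

# Hypothetical case: the decision saves $120k per year in production costs,
# 12 years of protection remain, a 20% rate reflects technology and market risk,
# and there is a 70% chance the decision is successfully commercialized.
print(round(patent_value(120_000, remaining_years=12,
                         discount_rate=0.20, commercialization_prob=0.7)))

# The "term of validity" factor discussed below: with only 2 years left,
# the same annual benefit is worth far less.
print(round(patent_value(120_000, remaining_years=2, discount_rate=0.20)))
```

Even this toy calculation shows why the remaining protection term and the riskiness of the market weigh so heavily on the final figure.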
It should be understood that the cost of a patent may be affected by different factors, for example:
- Importance, i.e., how revolutionary and popular the innovation is. The technical decision may be a breakthrough, or it may consist in upgrading a technical decision which is already known. Even a small innovation can bring significant production benefits and force competitors to conduct new research to upgrade their technology.
- The market and its characteristics: this factor is very difficult to overestimate, and even the method of valuation may change depending on the market. Is there competition? Is there a developed patent market in the region of interest? Indicators such as supply and demand are important here. Are there many developers in the particular field, for example many people trying to improve the same detail in the same mechanism? If each of them offers a different patent, a buyer can push the price down on the grounds that this offer is not the only one on the market and something similar is available.
- Term of validity: the remaining term of protection of the intellectual property directly affects its price. If the patent has only a couple of years left, then the rights holder, unfortunately, will not make millions with it, even if the protected technical decision is a breakthrough one. The less time a patent has left to run, the less time remains for the so-called monopoly right to use the decision and the less time is left to get the most out of the purchased right.
- Known background of the invention, i.e., if the patent is just a link in a chain, representing one of many related innovations, the price can go down. On the other hand, if the innovation is independent or the first link in the chain, the price may be higher. This factor is directly related to the market and its characteristics.
The factors presented above do not constitute a complete list. In reality, experts take into account not only the features of the patent and the market but also other global or local trends.
For the purpose of a complete and balanced valuation, complex methods are usually used which combine elements of different approaches. In the specialist literature, this stable set of calculation methods has even acquired its own name: the combined approach.
In order to assess the value of a patent for an invention or utility model, specialists may need a wide range of documents and information.
Thus, it is rather difficult to determine the value of a technical decision protected by a patent. This complexity relates both to the specifics of the object, which is not a tangible asset, and to the large number of subtleties and nuances involved in the valuation. Let us also remember that the patent owner's own asking price will also have a significant impact on the value of this intellectual property. | https://zuykov.com/en/about/articles/patent-invention-and-utility-model-certificate-rel/
Amazon and Salesforce did not let 2017 end without entering the neural machine translation (MT) race. Salesforce joined the fray by announcing their neural MT research during their annual Dreamforce 2017, where chief scientist Richard Socher briefly showcased several AI-powered capabilities the company will soon be launching.
Currently, however, the CRM and cloud computing company is not actually using neural MT.
Teresa Marshall, Sr. Director of Localization and Globalization, told Slator that they did not use any MT at all. She confirmed, however, that they had been “in the early stages of exploration as how best to incorporate MT” over the past year.
Marshall is a well-known name in the industry with a localization career spanning over 15 years.
She earned her Master's in translation and interpretation at Middlebury Institute of International Studies, Monterey, where she also served as faculty from 2010 to 2014. She was a board member of Women in Localization from 2014 to 2016 and remains an active member today. She is also active in the Silicon Valley scene, where she has been part of program and operational management teams in several companies, including the Google Localization team. Since 2009, she has been organizer and co-host of the Annual Localization Unconference.
In 2009, she started working at Salesforce. Currently, her team is responsible for all localization efforts within the company’s engineering organization. Marshall explained that they handled localization for products across the Salesforce product suite.
All in all, they tackle millions of words annually in 34 languages with localized versions across two of three levels of language support. Marshall declined to share their annual translation budget.
Levels of Language Support
Marshall said Salesforce not only localizes products, but also support documentation, training, and tools. This entails multiple efforts, even outside her team, focusing on localization.
Despite the global operations of Salesforce, Marshall said scale was not exactly the issue: “What makes Salesforce unique from a localization perspective is the customizability of our platform.”
She explained this outlook was advantageous from a product perspective, since instead of building particular features for specific markets, they would instead build a platform where clients can adjust or customize.
And all the localization work is supported by different levels of language support:
- Fully Supported Languages: 17 fully supported languages where the entire product (including the UI) is localized
- End-User Languages: 17 end-user languages where only the end-user focused areas of the products are localized
- Platform-Only Languages: platform languages where no localizations are provided, though clients have a mechanism to translate apps built on the platform
Technology and Partnerships
Asked about translation and localization tools, Marshall said they use “a combination of internal tools and standard source control systems.” She said the company continuously reevaluates this approach given an increasing breadth of products, technical requirements, and content volume.
Salesforce’s localization is augmented by an external partner “for the translation and linguistic deliveries.” Marshall said they work very closely with a “medium-sized, highly specialized” partner, which was more like a strategic business partner than a vendor. “[They’re] really an extension of our team and we have a very direct relationship with our translators,” she added.
Marshall’s teams are responsible for localization, internationalization and customer-facing features, such as the Translation Workbench, which allows clients to translate customizations, and Salesforce grammar engines.
From what Marshall explained, these grammar engines are an example of the culmination of Salesforce’s customizability as a platform, its technology toolkit, and its vendor.
“We often think of our UI translation work more as linguistic engineering”
Marshall said they have a feature that allows customers to rename UIs to fit business needs and sales terminology, which requires language-specific grammar engines for each supported language.
“All of our languages are supported by our grammar engines that ensures grammatically correct translations,” she said. “We just recently open-sourced Grammaticus, our grammar engine.”
The language engines require distinct formats for UI files, and the localization process requires some additional steps from translators. “We often think of our UI translation work more as linguistic engineering, since our translators have to add the appropriate grammatical attributes rather than “just” translate a string,” Marshall said. “The format also has particular requirements on our tools, which is why we rely and take advantage of a number of tools as well as the strong partnership we have with our vendor.”
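To make the "linguistic engineering" point concrete, here is a toy sketch of the general idea behind such a grammar engine: labels reference a renamable noun plus grammatical attributes supplied by translators, so a customer's rename stays grammatically correct. This is only an illustration of the concept and does not reflect Grammaticus's actual API or label format.

```python
# Toy illustration of grammar-engine-style label rendering (not Grammaticus's API).
# Translators supply grammatical attributes per noun; labels reference the noun
# instead of hard-coding a string, so renames stay grammatically correct.

# Hypothetical German noun entries a translator might provide.
NOUNS_DE = {
    "account": {"singular": "Account", "plural": "Accounts", "gender": "m"},
    "konto":   {"singular": "Konto",   "plural": "Konten",   "gender": "n"},
}

# German definite articles by gender (nominative case only, to keep the sketch short).
ARTICLES_DE = {"m": "der", "f": "die", "n": "das"}

def label_all_records(noun_key):
    """Render a hypothetical 'All <noun>' label using the noun's plural form."""
    return f"Alle {NOUNS_DE[noun_key]['plural']}"

def label_the_record(noun_key):
    """Render 'the <noun>' with the article matching the noun's grammatical gender."""
    noun = NOUNS_DE[noun_key]
    return f"{ARTICLES_DE[noun['gender']]} {noun['singular']}"

print(label_the_record("account"))   # der Account
print(label_the_record("konto"))     # das Konto  -- a rename stays grammatical
print(label_all_records("konto"))    # Alle Konten
```

The design choice the quote hints at is visible even in this toy: because the label never hard-codes the noun, a customer swapping "Account" for "Konto" gets correct articles and plurals without retranslating every label.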
“Exploring a Machine Translation Strategy”
Localization is a long-time focus for Salesforce, Marshall said: “early on we were able to build a level of awareness and a solid process to support our three releases a year, however, the challenge now is maintaining this level of maturity throughout fairly rapid company growth.”
So where does the recent neural MT announcement fit in all of this, especially given that Salesforce does not currently use any MT for localization?
“Partnering with Richard [Socher]’s research team accelerated our efforts to leverage MT for extending localization,” Marshall said. According to her, there are still no concrete plans for deployment, but she also commented that now they had options and a direct line to the neural MT development team: “it opens up opportunities beyond our own translation needs that may have otherwise been too complex or too expensive to explore.”
Asked about her opinion on how their neural MT R&D will impact Salesforce’s product suite, Marshall maintained it was too early to tell. She did say she was excited to see where all the developments are headed: “we’ve seen an incredible increase in AI-powered features; some of the new development areas are available via APIs first.” She noted one language-related example being Einstein Vision’s Community Sentiment Model, which analyzes text to classify community (e.g. online forums, enterprise social media) sentiment.
On the Future of Salesforce and Neural
Looking to the next five years, Marshall said for Salesforce, “localization is a priority for delivering customer success and I am confident it will grow.”
On how neural MT would affect their current models of working, Marshall was optimistic: “MT won’t replace our efforts but rather augment our options and processes.” She noted MT allows her team to become more strategic in terms of schedules, resources, and budget.
As a whole, the future of neural MT looks bright to her. Marshall described "the overall push for neural MT and technology in general as a great opportunity [that] will enable companies to be more strategic." "I see it as an approach to focus resources on what matters most and to find efficient solutions to mundane tasks," she said.
“Too many companies involved in localization think in terms of supply chain, speed, and efficiency, and tend to forget that to a large extent most localization processes still rely on very skilled linguistic resources”
She cited as an example: “massive volumes of content with a short shelf life don’t require highly-skilled translators, whereas highly functional, tightly designed UI does.”
Marshall understood, however, that there was a less optimistic side to the discussion: “I am aware that my optimism is very much influenced by the fact that I work on the client-side of the localization industry. As a trained translator, I understand why these scenarios may not be encouraging and instead seem scary for some.”
“Too many companies involved in localization think in terms of supply chain, speed, and efficiency, and tend to forget that to a large extent most localization processes still rely on very skilled linguistic resources. I think that this will continue to be the case in the future,” she said. | https://slator.com/features/salesforce-localization-chief-on-nmt-skilled-linguists-and-what-lsps-tend-to-forget/ |
Film noir is a look and an attitude as much as anything else. There’s the darkness, both visually and thematically, and the fatalistic tone which creeps ever nearer the doomed characters treading the fine line dividing shadow and light, hope and despair, in this cinematic moral maze. If it grew out of the bitterness nurtured by the economic hardship of the 1930s, the wounds inflicted on society were then cauterized and desensitized by the horrors experienced in WWII. And the end result? A feeling of jaded weariness, of disenchantment when the post-war promise of a brighter future for all remained tantalizingly and agonizingly just beyond the reach of some. The Prowler (1951) is a film about disappointment and dissatisfaction, and the lengths people will go to, either consciously or unwittingly, in an effort to conquer this.
The opening sees Susan Gilvray (Evelyn Keyes) reacting with shock on realizing that someone has been observing her through her unshaded bathroom window. Naturally, she calls the police to report the incident and has a visit from a squad car containing an old pro on the eve of retirement, Bud Crocker (John Maxwell), and a younger man, Webb Garwood (Van Heflin). It's the latter who takes the keener interest, not so much in the case itself as the lady at the center of it. You see, Garwood is a dissatisfied soul, a man whose youth was taken up with dreams of wealth and success as a professional athlete. When circumstances didn't allow this to come to fruition Garwood became a cop, a second-rate job in his opinion, and he began to brood. Here's a man who feels life has cheated him out of what ought to have been his due, and his nocturnal visit to the luxurious Spanish home with the vulnerable and alluring woman inside has just added to his ethical itch. While our disgruntled cop readies himself to scratch, fully aware of what he's doing, a similar sensation is beginning to come over the woman, just not quite so obviously. She's not happy either, and you read it in her demeanor, drifting listlessly around her well-appointed but empty home, as her husband (notably absent at least in visual terms until the fateful moment) is an older, less exciting man – and it's later revealed that he is leaving her unsatisfied in more than one way. The scene is set therefore for a drama built around betrayal, deceit and ultimately murder.
I guess what I've written above gives a fair indication of how the tale develops. However, I've deliberately left it there – what I mentioned essentially occurs in the first act, and most of it quite early on – as I think it actually moves in slightly unexpected directions, due to some good writing and a pair of strong central performances. The version of the film I watched comes with supplemental contributions from such noir experts as Eddie Muller, James Ellroy and Alan Rode, who make the point of how the film is a critique of corrupt authority and how dangerous it is to put too much trust in this. I certainly don't dispute that reading and I think it's a major element of Dalton Trumbo's script. Nevertheless, I found certain other elements, namely the disenchantment and disillusionment with the hand dealt by life, every bit as noticeable and important. The character of Garwood has been warped and turned in upon itself by a sense of thwarted entitlement; it's there in his words when he speaks of his lousy breaks and it's also writ large on his face as he surveys the comfortable home occupied by Susan and her elusive husband, a marked contrast to the cramped and mean room he lives in. That post-war American Dream wasn't delivering for Garwood.
As I said, the script was from Dalton Trumbo but this was the era of HUAC and the blacklist and so his name wouldn’t appear on the credits. Originally, the story (by Robert Thoeren & Hans Wilhelm) was titled The Cost of Living, a phrase repeated by Susan’s husband during his radio broadcasts (voiced by Trumbo incidentally) and I reckon it’s a more apt one than the admittedly catchy The Prowler. The lead is driven by his materialism and his hunger for social status, and the constant refrain of how the cost of living is going down takes on a decidedly pointed meaning when we think how cheap life becomes in his eyes. Still and all, this isn’t some dull socioeconomic diatribe, it’s a pacy and not entirely predictable thriller, and director Joseph Losey moves his camera around with a calm fluidity – it’s never showy or self-conscious but effortlessly artistic. And the climax had me thinking of Anthony Mann and his penchant for driving his characters towards heights they struggle to scale.
Some years ago I wrote a piece on Act of Violence and remarked then on the way Van Heflin was cast somewhat against type. The Prowler takes that a step further by almost entirely subverting the typical dependability of Heflin’s persona. Having him play a policeman, a figure one associates with protection and security, serves to further heighten the shock value of seeing him as a cold and manipulative schemer. Evelyn Keyes is very good too as the suburban wife bored by her everyday isolation, flattered by the attention yet also horrified by the increasingly chaotic turn of events. While there is some interesting support work, most particularly from an earnest and likeable John Maxwell, this is very much a two-hander and a fine showcase for the talents of the leads.
The Prowler came out on DVD first via VCI in the US and that's the edition I picked up. I was happy enough with the quality at the time and the attractive extra features I referred to earlier were welcome too. A few years later the same company put out a Blu-ray version of the movie, but it didn't sound like a significant upgrade so I just stuck with my older SD copy, and I can't say I'm displeased. Frankly, I feel this is a fine film noir, well cast, well shot, well written, and well worth ninety minutes of anyone's time. | https://livius1.com/2018/12/02/the-prowler/
Q:
Equation of motion Pendulum using $w=e^{ix}$
I'm working with the equation of motion for a pendulum as follows:
$$x''+ \frac{g}{l} \sin (x)=0$$
Where $x$ is the angle between the pendulum and the vertical rest position.
I am required to use the complex variable $w=e^{ix}$ to rewrite the equation of motion in the form $(w')^2= Q (w)$, where $Q$ is a cubic polynomial. So in the form $(u')^2=u^3 + au + b$, with $a$, $b$ constants.
I'm not sure where to start with the question, can anybody help me get going?
Homework help
A:
Multiply the equation through by $x'$ and integrate once to get
$$x'^2-\frac{2 g}{\ell} \cos{x} = C$$
where $C$ is a constant of integration. Now, if $w=e^{i x}$, then $\cos{x}=(w+w^{-1})/2$ and
$$w' = i x' e^{i x} \implies x'=-i w'/w$$
Then the equation is equivalent to
$$-\frac{w'^2}{w^2} - \frac{g}{\ell} \left (w+\frac{1}{w}\right)=C$$
Then, multiplying through by $-w^2$, we get
$$w'^2+\frac{g}{\ell} w^3 + C w^2+\frac{g}{\ell} w=0$$
which is not quite the form specified, but is an equation of the form $w'^2+Q(w)=0$, where $Q$ is a cubic in $w$.
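As a quick sanity check (assuming SymPy is available), substituting $w=e^{ix}$ and the first integral for $C$ into the cubic expression should give identically zero:

```python
# Quick SymPy check that w = exp(i*x) together with the first integral
# C = x'^2 - (2g/l) cos x turns the cubic expression into an identity.

import sympy as sp

t, g, l, C = sp.symbols("t g l C", real=True)
x = sp.Function("x")(t)

w = sp.exp(sp.I * x)
wp = sp.diff(w, t)                               # w' = i x' e^{i x}

cubic = wp**2 + (g/l)*w**3 + C*w**2 + (g/l)*w    # w'^2 + Q(w) from the answer

first_integral = sp.diff(x, t)**2 - (2*g/l)*sp.cos(x)

residual = cubic.subs(C, first_integral).rewrite(sp.exp)
print(sp.expand(residual))                       # prints 0
```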
| |
Crysel Velez is a Registered Dietitian in Mexico and currently practices as a Nutritionist here in Canada. She graduated in 2011 with a Bachelor’s Degree in Dietetics and Nutrition from Vasco de Quiroga University in Morelia, Mexico. Shortly afterward she established a thriving private practice in Lazaro Cardenas, successfully supporting hundreds of clients with their dietary and lifestyle goals. Since moving to the interior of BC in 2014, and later to Victoria in 2019, she has continued her education taking a variety of courses including the International Healthcare Practitioners course from Ryerson University, Toronto and is currently working with the College of Dietitians of B.C. to obtain full registration as a dietitian here in Canada. While attending to her Canadian clients, she continues to work with her Mexican clients online as well as with those in the Latin and international communities here in Victoria.
Growing up in Mexico, Crysel was always passionate about food, cooking and nutrition. She brings this passion to her practice by creating delicious and healthy personalized meal plans for her clients, working with their specific dietary needs. She is focused on a holistic and integrative approach and prides herself in her ability to connect with her clients through compassion and good communication. Nothing brings her more joy than to help clients reach their goals.
Crysel strives to live a balanced and healthy lifestyle and to inspire those around her to do the same. She loves practicing yoga, meditation, spending time in nature, travelling and learning about the cultures and food in other countries. And don’t forget the salsa dancing! | https://purebodyhealthvictoria.ca/crysel-velez.html |
10 Questions Congress Should Ask Killer Drone Policy Architect John Brennan
John Brennan’s confirmation hearing to become head of the CIA will take place at the Senate Intelligence Committee on Thursday, February 7. There is suddenly a flurry of attention around a white paper that lays out the administration’s legal justification for killing Americans with drones overseas, and some of the Senators are vowing to ask Brennan “tough questions,” since Brennan has been the mastermind of the lethal drone attacks. But why have the Senators, especially those on the Intelligence Committee who are supposed to exercise oversight of the CIA, waited until now to make public statements about their unease with the killing of Americans that took place back in September and October of 2011? For over a year human rights groups and activists have been trying, unsuccessfully, to get an answer as to why our government killed the 17-year-old American boy Abdulrahman al-Awlaki, and have had no help from the Senators’ offices.
We look forward to hearing the Senators question Brennan about the legal justifications used by the Obama administration to kill three Americans in Yemen, as we are deeply concerned about their deaths and the precedent it sets for the rights of US citizens.
But we are also concerned about the thousands of Pakistanis, Yeminis and Somalis who have been killed by remote control in nations with whom we are not at war. If CODEPINK had a chance to question John Brennan as his hearing on Thursday, here are some questions we would ask:
1. You have claimed that due to the precision of drone strikes, there have been only a handful of civilian casualties. How many civilian deaths have you recorded, and in what countries? What proportion of total casualties do those figures represent? How do you regard sources such as the Bureau of Investigative Journalism that estimate drone casualties in Pakistan alone range from 2,629 to 3,461, with as many as 891 reported to be civilians and 176 reported to be children? Have you reviewed the photographic evidence of death and injury presented by residents of the drone strike areas? If so, what is your response?
2. According to a report in the New York Times, Washington counts all military-age males in a strike zone as combatants, unless there is explicit intelligence posthumously proving them innocent. Please tell us if this is indeed true, and if so, elaborate on the legal precedent for this categorization. In areas where the US is using drones, fighters do not wear uniforms and regularly intermingle with civilians. How does the CIA distinguish between legitimate and illegitimate targets?
3. In a June 2011 report to Congress, the Obama administration explained that drone attacks did not require congressional approval under the War Powers Resolution because drone attacks did not involve "sustained fighting," "active exchanges of fire," an involvement of US casualties, or a "serious threat" of such casualties. Is it your understanding that the initiation of lethal force overseas does not require congressional approval?
4. If the legal basis for the use of lethal drones is the 2001 Authorization for Use of Military Force (AUMF), can this authorization be extended to any country through Presidential authority? Are there any geographic limitations on the use of drone strikes? Does the intelligence community have the authority to carry out lethal drone strikes inside the United States? How do you respond to the charge that the US thinks it can send drones anywhere it wants and kill anyone it wants, all on the basis of secret information?
5. Assassination targets are selected using a “disposition matrix.” Please identify the criteria by which a person’s name is entered into the matrix. News reports have mentioned that teenagers have been included in this list. Is there an age criterion?
6. In Pakistan and perhaps elsewhere, the CIA has been authorized to conduct “signature strikes,” killing people on the basis of suspicious activity. What are the criteria for authorizing a signature strike? Do you think the CIA should continue to have the right to conduct such strikes? Do you think the CIA should be involved in drone strikes at all, or should this program be turned over to the military? If you think the CIA should return to its original focus on intelligence gathering, why hasn’t this happened? As Director of the CIA, will you discontinue the CIA’s use of lethal drones?
7. Article 51 of the U.N. Charter, which the US has implicitly invoked to justify strikes, requires that “measures taken by Members in the exercise of [their] right to self-defense . . . be immediately reported to the Security Council.” Please elaborate on why the United States uses Article 51 to justify drone strikes but ignores the clause demanding transparency.
8. The majority of prisoners incarcerated at Guantanamo Bay were found to be innocent and were released. These individuals landed in Guantanamo as victims of mistaken identity or as a result of bounties for their capture. How likely is it that the intelligence that gets a person killed by a drone strike may be as faulty as that which put innocent individuals in Guantanamo?
9. You have stated that there is little evidence that drone strikes are causing widespread anti-American sentiment or recruits for extremist groups. Do you stand by this statement now, as we have seen an expansion of Al Qaeda in the Arabian Peninsula, possibly to triple the number that existed when the drone strikes began? Do you have concerns about the “blowback” caused by what General McChrystal has called a “visceral hatred” of U.S. drones?
10. If a civilian is harmed by a drone strike in Afghanistan, the family is entitled to compensation from US authorities. But this is not the case in other countries where the US government is using lethal drones. Why is this the case? Do you think the US government should help people who are innocent victims of our drone strikes and if so, why haven’t you put a program in place to do this? | https://www.alternet.org/2013/02/10-questions-congress-should-ask-killer-drone-policy-architect-john-brennan/ |
PEEL regularly observes that individuals within collaborative teams have a more conscious understanding of their colleagues’ needs and interests. Consequently, they respond earlier and more supportively to others’ frustration, preventing unhealthy conflict, ensuring alignment of values and behaviours, fostering healthy conversations and, ultimately, proactively managing behavioural engagement for high performance and goal achievement.
Team success lies in the ability of team members to work effectively together. This only occurs when each team member has insight into their own contributions and how these can enhance or detract from the team’s overall effectiveness. To enhance team members’ understanding of each other’s approaches to work, we incorporate the Team Management Profile (TMP) Questionnaire. The TMP provides constructive, work-based information outlining an individual’s work preferences and the strengths that they bring to a team.
Our Team Collaboration workshops assist teams in having an insight into:
- How team members see the team currently operating;
- How values and work preferences impact on team collaboration and performance;
- How each team member prefers to communicate, receive information, make decisions and organise their work; and
- What the team needs to do to work to their strengths.
Dogs are welcome at the Gray Ghost Inn. We allow well-behaved dogs to stay with their owners in our guest rooms. There is a charge of $20/night.
When dogs are outside their rooms, they must be on a leash with their owner or caretaker. Please clean up after your dog throughout our property. We have a trash can by the front door where waste can be discarded.
Other pets, including cats, birds, and reptiles are not allowed.
- Upon check-in, all dogs must be registered. Registration states that the guest takes full personal and financial responsibility for their pet and will obey all of our dog policies.
- Dogs may be left alone in your room for short durations (less than four hours).
- Guests must inform the front desk when leaving a pet unattended and provide contact information and an estimated return time.
- If there are any noise issues (excessive barking) due to your dog, the guest agrees to come back to remedy the situation.
- Dog owners are responsible and liable for any damage or clean-up costs incurred to the room, including lost revenue due to the room being out of service.
- Two-dog limit per room. There is a $20 pet charge.
- All dogs must be on a leash when entering and exiting the Inn. Dogs must be on a leash in the Inn.
- No dogs allowed in the dining room during breakfast.
- Please do not allow dogs on the beds. | https://www.grayghostinn.com/guest-rooms/policies/gray-ghost-inn-welcomes-dogs/ |
A recent school question-and-answer session asked students what they think is the most important thing a student must do to attain success. One response that stood out from the rest was practice. Successful people are not born successful; they become successful through hard work and perseverance. If you would like to achieve your goals, keep this in mind! Following are some question-and-answer examples that you can easily use to expand your knowledge and gain insight that will help you sustain your school studies.
Question:
Which of the following best defines sustainable fishing practices?
a. Harvesting practices that maximize yields and profits.
b. Harvesting practices that maximize the yield per fisherman.
c. Harvesting practices that do not reduce the potential for future harvests.
d. Harvesting practices that reduce the potential for future harvests.
Answer:
The correct answer to the question above is option C. The option that best defines sustainable fishing practices is: harvesting practices that do not reduce the potential for future harvests. Sustainable fishing ensures that populations of ocean and freshwater wildlife remain present over time. Hope this answer helps.
Hopefully, the question and answer examples above help students work through the question they were looking for and take note of everything stated in the answer. They could then share it in a group discussion and study the topic with classmates, so that other students also gain some insight and keep up with their school learning.
Political Competition and the Limits of Political Compromise
We consider an economy where competing political parties alternate in office. Due to rent-seeking motives, incumbents have an incentive to set public expenditures above the socially optimum level. Parties cannot commit to future policies, but they can forge a political compromise where each party curbs excessive spending when in office if they expect future governments to do the same. We find that, if the government cannot manipulate state variables, more intense political competition fosters a compromise that yields better outcomes, potentially even the first best. By contrast, if the government can issue debt, vigorous political competition can render a compromise unsustainable and drive the economy to a low-welfare, high-debt, long-run trap. Our analysis thus suggests a legislative trade-off between restricting political competition and constraining the ability of governments to issue debt.
Alexandre B. Cunha and Emanuel Ornelas
12 March 2014 Paper Number CEPDP1263
Download PDF - Political Competition and the Limits of Political Compromise
This CEP discussion paper is published under the centre's programme. | https://cep.lse.ac.uk/_new/publications/abstract.asp?index=4408 |
During the past two decades, methicillin-resistant Staphylococcus aureus (MRSA) has become increasingly common as a source of nosocomial infections. Most studies of MRSA surveillance were performed during outbreaks, so that results are not applicable to settings in which MRSA is endemic. This paper gives an overview of MRSA prevalence in hospitals and other healthcare institutions in non-outbreak situations in Western Europe.
Methods
A keyword search was conducted in the Medline database (2000 through June 2010). Titles and abstracts were screened to identify studies on MRSA prevalence in patients in non-outbreak situations in European healthcare facilities. Each study was assessed using seven quality criteria (outcome definition, time unit, target population, participants, observer bias, screening procedure, swabbing sites) and categorized as 'good', 'fair', or 'poor'.
Results
31 observational studies were included in the review. Four of the studies were of good quality. Surveillance screening of MRSA was performed in long-term care (11 studies) and acute care (20 studies). Prevalence rates varied over a wide range, from less than 1% to greater than 20%. Prevalence in the acute care and long-term care settings was comparable. The prevalence of MRSA was expressed in various ways - the percentage of MRSA among patients (range between 1% and 24%), the percentage of MRSA among S. aureus isolates (range between 5% and 54%), and as the prevalence density (range between 0.4 and 4 MRSA cases per 1,000 patient days). The screening policy differed with respect to time points (on admission or during hospital stay), selection criteria (all admissions or patients at high risk for MRSA) and anatomical sampling sites.
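The three ways of expressing prevalence listed above are simple ratios over different denominators, which is one reason the reported figures are hard to compare across studies. As an illustration only, the short Python sketch below shows the arithmetic behind each measure; all counts in it are hypothetical and are not taken from any of the reviewed studies.

```python
# Illustrative calculation of the three MRSA prevalence measures described above.
# All counts are hypothetical, for demonstration only.

patients_screened = 500    # patients screened during the survey period
s_aureus_carriers = 120    # patients with any S. aureus isolate
mrsa_carriers = 30         # patients with an MRSA isolate
patient_days = 12_000      # total patient days observed

# 1. Percentage of MRSA among screened patients (reported range: 1-24%)
mrsa_among_patients = 100 * mrsa_carriers / patients_screened

# 2. Percentage of MRSA among S. aureus carriers/isolates (reported range: 5-54%)
mrsa_among_s_aureus = 100 * mrsa_carriers / s_aureus_carriers

# 3. Prevalence density: MRSA cases per 1,000 patient days (reported range: 0.4-4)
mrsa_density = 1000 * mrsa_carriers / patient_days

print(f"MRSA among patients:         {mrsa_among_patients:.1f}%")   # 6.0%
print(f"MRSA among S. aureus:        {mrsa_among_s_aureus:.1f}%")   # 25.0%
print(f"MRSA per 1,000 patient days: {mrsa_density:.2f}")           # 2.50
```

Because the denominators differ (screened patients, S. aureus carriers, patient days), the same surveillance data can produce very different-looking figures, which underlines the call for standardized outcome calculations in the conclusions below.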
Conclusions
This review underlines the methodological differences between studies of MRSA surveillance. For comparisons between different healthcare settings, surveillance methods and outcome calculations should be standardized. | https://bmcinfectdis.biomedcentral.com/articles/10.1186/1471-2334-11-138 |
Outfit of the Day (OOTD): Mint Green Dress
Hello my beautiful ladies,
I missed blogging so much. Most of you guys must have noticed I haven’t posted in a while. I’ve been busy with travelling and other stuff. Anyways, this time I decided to do an outfit post. The fashion fanatic in me clearly took over makeup (well, for now at least). All the outfit, accessories and makeup details will be at the end of the post. Keep reading.
Plant responses to insect egg deposition
Hilker, M. ; Fatouros, N.E. - 2015
Annual Review of Entomology 60 (2015). - ISSN 0066-4170 - p. 493 - 515.
sogatella-furcifera horvath - elm leaf beetle - parasitoid anagrus-nilaparvatae - medfly ceratitis-capitata - oryza-sativa l. - defense responses - pieris-brassicae - host location - phytophagous insects - ovicidal substance
Plants can respond to insect egg deposition and thus resist attack by herbivorous insects from the beginning of the attack, egg deposition. We review ecological effects of plant responses to insect eggs and differentiate between egg-induced plant defenses that directly harm the eggs and indirect defenses that involve egg parasitoids. Furthermore, we discuss the ability of plants to take insect eggs as warning signals; the eggs indicate future larval feeding damage and trigger plant changes that either directly impair larval performance or attract enemies of the larvae. We address the questions of how egg-associated cues elicit plant defenses, how the information that eggs have been laid is transmitted within a plant, and which molecular and chemical plant responses are induced by egg deposition. Finally, we highlight evolutionary aspects of the interactions between plants and insect eggs and ask how the herbivorous insect copes with egg-induced plant defenses and may avoid them by counteradaptations.
High-throughput phenotyping of plant resistance to aphids by automated video tracking
Kloth, K.J. ; Broeke, C.J.M. ten; Thoen, H.P.M. ; Hanhart-van den Brink, M. ; Wiegers, G.L. ; Krips, O.E. ; Noldus, L.P.J.J. ; Dicke, M. ; Jongsma, M.A. - 2015
Plant Methods 11 (2015). - ISSN 1746-4811 - 14 p.
green peach aphid - nasonovia-ribisnigri - glucosinolate accumulation - signaling pathways - defense responses - feeding-behavior - myzus-persicae - lettuce aphid - arabidopsis - herbivores
Background: Piercing-sucking insects are major vectors of plant viruses causing significant yield losses in crops.Functional genomics of plant resistance to these insects would greatly benefit from the availability of highthroughput, quantitative phenotyping methods. Results: We have developed an automated video tracking platform that quantifies aphid feeding behaviour on leaf discs to assess the level of plant resistance. Through the analysis of aphid movement, the start and duration of plant penetrations by aphids were estimated. As a case study, video tracking confirmed the near-complete resistance of lettuce cultivar ‘Corbana’ against Nasonovia ribisnigri (Mosely), biotype Nr:0, and revealed quantitative resistance in Arabidopsis accession Co-2 against Myzus persicae (Sulzer). The video tracking platform was benchmarked against Electrical Penetration Graph (EPG) recordings and aphid population development assays. The use of leaf discs instead of intact plants reduced the intensity of the resistance effect in video tracking, but sufficiently replicated experiments resulted in similar conclusions as EPG recordings and aphid population assays. One video tracking platform could screen 100 samples in parallel. Conclusions: Automated video tracking can be used to screen large plant populations for resistance to aphids and other piercing-sucking insects.
A novel approach for multi-domain and multi-gene family identification provides insights into evolutionary dynamics of disease resistance genes in core eudicot plants
Hofberger, J.A. ; Zhou, B. ; Tang, H. ; Jones, J. ; Schranz, M.E. - 2014
BMC Genomics 15 (2014). - ISSN 1471-2164
genome-wide analysis - nb-arc domain - arabidopsis-thaliana - whole-genome - draft genome - phylogenetic analysis - triggered immunity - mildew resistance - defense responses - encoding genes
Background Recent advances in DNA sequencing techniques resulted in more than forty sequenced plant genomes representing a diverse set of taxa of agricultural, energy, medicinal and ecological importance. However, gene family curation is often only inferred from DNA sequence homology and lacks insights into evolutionary processes contributing to gene family dynamics. In a comparative genomics framework, we integrated multiple lines of evidence provided by gene synteny, sequence homology and protein-based Hidden Markov Modelling to extract homologous super-clusters composed of multi-domain resistance (R)-proteins of the NB-LRR type (for NUCLEOTIDE BINDING/LEUCINE-RICH REPEATS), that are involved in plant innate immunity. Results To assess the diversity of R-proteins within and between species, we screened twelve eudicot plant genomes including six major crops and found a total of 2,363 NB-LRR genes. Our curated R-proteins set shows a 50% average for tandem duplicates and a 22% fraction of gene copies retained from ancient polyploidy events (ohnologs). We provide evidence for strong positive selection acting on all identified genes and show significant differences in molecular evolution rates (Ka/Ks-ratio) among tandem- (mean = 1.59), ohnolog (mean = 1.36) and singleton (mean = 1.22) R-gene duplicates. To foster the process of gene-edited plant breeding, we report species-specific presence/absence of all 140 NB-LRR genes present in the model plant Arabidopsis and describe four distinct clusters of NB-LRR "gatekeeper" loci sharing syntenic orthologs across all analyzed genomes. Conclusion By curating a near-complete set of multi-domain R-protein clusters in an eudicot-wide scale, our analysis offers significant insight into evolutionary dynamics underlying diversification of the plant innate immune system. Furthermore, our methods provide a blueprint for future efforts to identify and more rapidly clone functional NB-LRR genes from any plant species.
Phytohormone mediation of interactions between herbivores and plant pathogens
Lazebnik, J. ; Frago, E. ; Dicke, M. ; Loon, J.J.A. van - 2014
Journal of Chemical Ecology 40 (2014)7. - ISSN 0098-0331 - p. 730 - 741.
systemic acquired-resistance - generalist insect herbivores - white-backed planthopper - rice blast fungus - defense responses - multitrophic interactions - necrotrophic pathogens - arabidopsis resistance - pseudomonas-syringae - aphid interactions
Induced plant defenses against either pathogens or herbivore attackers are regulated by phytohormones. These phytohormones are increasingly recognized as important mediators of interactions between organisms associated with plants. In this review, we discuss the role of plant defense hormones in sequential tri-partite interactions among plants, pathogenic microbes, and herbivorous insects, based on the most recent literature. We discuss the importance of pathogen trophic strategy in the interaction with herbivores that exhibit different feeding modes. Plant resistance mechanisms also affect plant quality in future interactions with attackers. We discuss exemplary evidence for the hypotheses that (i) biotrophic pathogens can facilitate chewing herbivores, unless plants exhibit effector-triggered immunity, but (ii) facilitate or inhibit phloem feeders. (iii) Necrotrophic pathogens, on the other hand, can inhibit both phloem feeders and chewers. We also propose herbivore feeding mode as predictor of effects on pathogens of different trophic strategies, providing evidence for the hypotheses that (iv) phloem feeders inhibit pathogen attack by increasing SA induction, whereas (v) chewing herbivores tend not to affect necrotrophic pathogens, while they may either inhibit or facilitate biotrophic pathogens. Putting these hypotheses to the test will increase our understanding of phytohormonal regulation of plant defense to sequential attack by plant pathogens and insect herbivores. This will provide valuable insight into plant-mediated ecological interactions among members of the plant-associated community.
Ecology of plant volatiles: taking a plant community perspective
Pierik, R. ; Ballaré, C.L. ; Dicke, M. - 2014
Plant, Cell & Environment 37 (2014)8. - ISSN 0140-7791 - p. 1845 - 1853.
shade-avoidance - nicotiana-attenuata - neighbor detection - methyl jasmonate - associational resistance - arabidopsis-thaliana - defense responses - natural enemies - predatory mites - canopy light
Although plants are sessile organisms, they can modulate their phenotype so as to cope with environmental stresses such as herbivore attack and competition with neighbouring plants. Plant-produced volatile compounds mediate various aspects of plant defence. The emission of volatiles has costs and benefits. Research on the role of plant volatiles in defence has focused primarily on the responses of individual plants. However, in nature, plants rarely occur as isolated individuals but are members of plant communities where they compete for resources and exchange information with other plants. In this review, we address the effects of neighbouring plants on plant volatile-mediated defences. We will outline the various roles of volatile compounds in the interactions between plants and other organisms, address the mechanisms of plant neighbour perception in plant communities, and discuss how neighbour detection and volatile signalling are interconnected. Finally, we will outline the most urgent questions to be addressed in the future.
Phenotypic analyses of Arabidopsis T-DNA insertion lines and expression profiling reveal that multiple L-type lectin receptor kinases are involved in plant immunity
Wang, Y. ; Bouwmeester, K. ; Beseh, P. ; Shan, W. ; Govers, F. - 2014
Molecular Plant-Microbe Interactions 27 (2014)12. - ISSN 0894-0282 - p. 1390 - 1402.
pattern-triggered immunity - phytophthora-infestans - salicylic-acid - defense responses - innate immunity - thaliana - gene - resistance - biology - roles
L-type lectin receptor kinases (LecRKs) are membrane-spanning receptor-like kinases with putative roles in biotic and abiotic stress responses and in plant development. In Arabidopsis, 45 LecRKs were identified but their functions are largely unknown. Here, a systematic functional analysis was carried out by evaluating phenotypic changes of Arabidopsis LecRK T-DNA insertion lines in plant development and upon exposure to various external stimuli. None of the LecRK T-DNA insertion lines showed clear developmental changes, neither under normal conditions nor upon abiotic stress treatment. However, many of the T-DNA insertion lines showed altered resistance to Phytophthora brassicae, Phytophthora capsici, Pseudomonas syringae or Alternaria brassicicola. One mutant defective in LecRK-V.5 expression, was compromised in resistance to two Phytophthora spp. but showed enhanced resistance to P. syringae. LecRK-V.5 overexpression confirmed its dual role in resistance and susceptibility depending on the pathogen. Combined analysis of these phenotypic data and LecRK expression profiles retrieved from public datasets revealed that LecRKs which are hardly induced upon infection or even suppressed are also involved in pathogen resistance. Computed co-expression analysis revealed that LecRKs with similar function displayed diverse expression patterns. Since LecRKs are widespread in plants, the results presented here provide invaluable information for exploring the potential of LecRKs as novel sources of resistance in crops.
Dynamic hydrolase activities precede hypersensitive tissue collapse in tomato seedlings
Sueldo, D. ; Ali, A. ; Misas-Villamil, J. ; Colby, T. ; Tameling, W.I.L. ; Joosten, M.H.A.J. ; Hoorn, R. van der - 2014
New Phytologist 203 (2014)3. - ISSN 0028-646X - p. 913 - 925.
programmed cell-death - vacuolar processing enzyme - pathogenesis-related proteins - disease resistance - cysteine proteases - defense responses - plant-pathogen - gene-expression - arabidopsis - activation
Hydrolases such as subtilases, vacuolar processing enzymes (VPEs) and the proteasome play important roles during plant programmed cell death (PCD). We investigated hydrolase activities during PCD using activity-based protein profiling (ABPP), which displays the active proteome using probes that react covalently with the active site of proteins. We employed tomato (Solanum lycopersicum) seedlings undergoing synchronized hypersensitive cell death by co-expressing the avirulence protein Avr4 from Cladosporium fulvum and the tomato resistance protein Cf-4. Cell death is blocked in seedlings grown at high temperature and humidity, and is synchronously induced by decreasing temperature and humidity. ABPP revealed that VPEs and the proteasome are not differentially active, but that activities of papain-like cysteine proteases and serine hydrolases, including Hsr203 and P69B, increase before hypersensitive tissue collapse, whereas the activity of a carboxypeptidase-like enzyme is reduced. Similar dynamics were observed for these enzymes in the apoplast of tomato challenged with C. fulvum. Unexpectedly, these challenged plants also displayed novel isoforms of secreted putative VPEs. In the absence of tissue collapse at high humidity, the hydrolase activity profile is already altered completely, demonstrating that changes in hydrolase activities precede hypersensitive tissue collapse.
Intra-specific variation in wild Brassica oleracea for aphid-induced plant responses and consequences for caterpillar-parasitoid interactions
Li, Y. ; Dicke, M. ; Harvey, J.A. ; Gols, R. - 2014
Oecologia 174 (2014)3. - ISSN 0029-8549 - p. 853 - 862.
phloem-feeding insect - induced resistance - defense responses - interspecific interactions - multitrophic interactions - arabidopsis-thaliana - phytophagous insects - nicotiana-attenuata - jasmonic acid - host plants
Herbivore-induced plant responses not only influence the initiating attackers, but also other herbivores feeding on the same host plant simultaneously or at a different time. Insects belonging to different feeding guilds are known to induce different responses in the host plant. Changes in a plant's phenotype not only affect its interactions with herbivores but also with organisms higher in the food chain. Previous work has shown that feeding by a phloem-feeding aphid on a cabbage cultivar facilitates the interaction with a chewing herbivore and its endoparasitoid. Here we study genetic variation in a plant's response to aphid feeding using plants originating from three wild Brassica oleracea populations that are known to differ in constitutive and inducible secondary chemistry. We compared the performance of two different chewing herbivore species, Plutella xylostella and M. brassicae, and their larval endoparasitoids Diadegma semiclausum and M. mediator, respectively, on plants that had been infested with aphids (Brevicoryne brassicae) for 1 week. Remarkably, early infestation with B. brassicae enhanced the performance of the specialist P. xylostella and its parasitoid D. semiclausum, but did not affect that of the generalist M. brassicae, nor its parasitoid M. mediator. Performance of the two herbivore-parasitoid interactions also varied among the cabbage populations and the effect of aphid infestation marginally differed among the three populations. Thus, the effect of aphid infestation on the performance of subsequent attackers is species specific, which may have concomitant consequences for the assembly of insect communities that are naturally associated with these plants.
Modulation of flavonoid metabolites in Arabidopsis thaliana through overexpression of the MYB75 transcription factor: role of kaempferol-3,7-dirhamnoside in resistance to the specialist insect herbivore Pieris brassicae
Onkokesung, N. ; Reichelt, M. ; Doorn, A. van; Schuurink, R.C. ; Loon, J.J.A. van; Dicke, M. - 2014
Journal of Experimental Botany 65 (2014)8. - ISSN 0022-0957 - p. 2203 - 2217.
plant-responses - anthocyanin accumulation - lepidopteran herbivores - coexpression analysis - biosynthetic-pathway - indole-glucosinolate - functional genomics - signaling pathways - defense responses - complex
Anthocyanins and flavonols are secondary metabolites that can function in plant defence against herbivores. In Arabidopsis thaliana, anthocyanin and flavonol biosynthesis are regulated by MYB transcription factors. Overexpression of MYB75 (oxMYB75) in Arabidopsis results in increasing anthocyanin and flavonol levels which enhances plant resistance to generalist caterpillars. However, how these metabolites affect specialist herbivores has remained unknown. Performance of a specialist aphid (Brevicoryne brassicae) was unaffected after feeding on oxMYB75 plants, whereas a specialist caterpillar (Pieris brassicae) gained significantly higher body mass when feeding on this plant. An increase in anthocyanin and total flavonol glycoside levels correlated negatively with the body mass of caterpillars fed on oxMYB75 plants. However, a significant reduction of kaempferol-3,7-dirhamnoside (KRR) corresponded to an increased susceptibility of oxMYB75 plants to caterpillar feeding. Pieris brassicae caterpillars also grew less on an artificial diet containing KRR or on oxMYB75 plants that were exogenously treated with KRR, supporting KRR's function in direct defence against this specialist caterpillar. The results show that enhancing the activity of the anthocyanin pathway in oxMYB75 plants results in re-channelling of quercetin/kaempferol metabolites which has a negative effect on the accumulation of KRR, a novel defensive metabolite against a specialist caterpillar.
Down-regulation of acetolactate synthase compromises Ol-1-mediated resistance to powdery mildew in tomato
Gao, D. ; Huibers, R.P. ; Loonen, A.E.H.M. ; Visser, R.G.F. ; Wolters, A.M.A. ; Bai, Y. - 2014
BMC Plant Biology 14 (2014). - ISSN 1471-2229 - 11 p.
glutamate-dehydrogenase gene - acetohydroxyacid synthase - monogenic-resistance - defense responses - nicotiana-tabacum - ol-genes - arabidopsis - plants - inhibition - biosynthesis
Background - In a cDNA-AFLP analysis comparing transcript levels between powdery mildew (Oidium neolycopersici)-susceptible tomato cultivar Moneymaker (MM) and near isogenic lines (NILs) carrying resistance gene Ol-1 or Ol-4, a transcript-derived fragment (TDF) M11E69-195 was found to be present in NIL-Ol-1 but absent in MM and NIL-Ol-4. This TDF shows homology to acetolactate synthase (ALS). ALS is a key enzyme in the biosynthesis of branched-chain amino acids valine, leucine and isoleucine, and it is also a target of commercial herbicides. Results - Three ALS homologs ALS1, ALS2, ALS3 were identified in the tomato genome sequence. ALS1 and ALS2 show high similarity, whereas ALS3 is more divergent. Transient silencing of both ALS1 and ALS2 in NIL-Ol-1 by virus-induced gene silencing (VIGS) resulted in chlorotic leaf areas that showed increased susceptibility to O. neolycopersici (On). VIGS results were confirmed by stable transformation of NIL-Ol-1 using an RNAi construct targeting both ALS1 and ALS2. In contrast, silencing of the three ALS genes individually by RNAi constructs did not compromise the resistance of NIL-Ol-1. Application of the herbicide chlorsulfuron to NIL-Ol-1 mimicked the VIGS phenotype and caused loss of its resistance to On. Susceptible MM and On-resistant line NIL-Ol-4 carrying a nucleotide binding site and leucine rich repeat (NB-LRR) resistance gene were also treated with chlorsulfuron. Neither the susceptibility of MM nor the resistance of NIL-Ol-4 was affected. Conclusions - ALS is neither involved in basal defense, nor in resistance conferred by NB-LRR type resistance genes. Instead, it is specifically involved in Ol-1-mediated resistance to tomato powdery mildew, suggesting that ALS-induced change in amino acid homeostasis is important for resistance conferred by Ol-1.
Two for all: receptor-associated kinases SOBIR1 and BAK1
Liebrand, T.W.H. ; Burg, H.A. van den ; Joosten, M.H.A.J. - 2014
Trends in Plant Science 19 (2014)2. - ISSN 1360-1385 - p. 123 - 132.
plant innate immunity - pattern-recognition receptors - ethylene-inducing xylanase - arabidopsis-thaliana - cladosporium-fulvum - defense responses - cell-death - signaling pathways - plasma-membrane - protein-kinase
Leucine-rich repeat-receptor-like proteins (LRR-RLPs) are ubiquitous cell surface receptors lacking a cytoplasmic signalling domain. For most of these LRR-RLPs, it remained enigmatic how they activate cellular responses upon ligand perception. Recently, the LRR-receptor-like kinase (LRR-RLK) SUPPRESSOR OF BIR1-1 (SOBIR1) was shown to be essential for triggering defence responses by certain LRR-RLPs that act as immune receptors. In addition to SOBIR1, the regulatory LRR-RLK BRI1-ASSOCIATED KINASE-1 (BAK1) is also required for LRR-RLP function. Here, we compare the roles of SOBIR1 and BAK1 as regulatory LRR-RLKs in immunity and development. BAK1 has a general regulatory role in plasma membrane-associated receptor complexes comprising LRR-RLPs and/or LRR-RLKs. By contrast, SOBIR1 appears to be specifically required for the function of receptor complexes containing LRR-RLPs.
Involvement of phospholipase D-related signal transduction in chemical-induced programmed cell death in tomato cell cultures
Iakimova, E.T. ; Michaeli, R. ; Woltering, E.J. - 2013
Protoplasma 250 (2013)5. - ISSN 0033-183X - p. 1169 - 1183.
phosphatidic-acid accumulation - g-protein activation - suspension cells - plasma-membrane - nitric-oxide - chlamydomonas-reinhardtii - arabidopsis-thaliana - aerenchyma formation - disease resistance - defense responses
Phospholipase D (PLD) and its product phosphatidic acid (PA) are incorporated in a complex metabolic network in which the individual PLD isoforms are suggested to regulate specific developmental and stress responses, including plant programmed cell death (PCD). Despite the accumulating knowledge, the mechanisms through which PLD/PA operate during PCD are still poorly understood. In this work, the role of PLD alpha 1 in PCD and the associated caspase-like proteolysis, ethylene and hydrogen peroxide (H2O2) synthesis in tomato suspension cells was studied. Wild-type (WT) and PLD alpha 1-silenced cell lines were exposed to the cell death-inducing chemicals camptothecin (CPT), fumonisin B1 (FB1) and CdSO4. A range of caspase inhibitors effectively suppressed CPT-induced PCD in WT cells, but failed to alleviate cell death in PLD alpha 1-deficient cells. Compared to WT, in CPT-treated PLD alpha 1 mutant cells, reduced cell death and decreased production of H2O2 were observed. Application of ethylene significantly enhanced CPT-induced cell death both in WT and PLD alpha 1 mutants. Treatments with the PA derivative lyso-phosphatidic acid and mastoparan (agonist of PLD/PLC signalling downstream of G proteins) caused severe cell death. Inhibitors, specific to PLD and PLC, remarkably decreased the chemical-induced cell death. Taken together with our previous findings, the results suggest that PLD alpha 1 contributes to caspase-like-dependent cell death possibly communicated through PA, reactive oxygen species and ethylene. The dead cells expressed morphological features of PCD such as protoplast shrinkage and nucleus compaction. The presented findings reveal novel elements of PLD/PA-mediated cell death response and suggest that PLD alpha 1 is an important factor in chemical-induced PCD signal transduction.
Morphological and biochemical characterization of Erwinia amylovora-induced hypersensitive cell death in apple leaves
Iakimova, E.T. ; Sobiczewski, P. ; Michalczuk, L. ; Wegrzynowicz-Lesiak, E. ; Mikicinski, A. ; Woltering, E.J. - 2013
Plant Physiology and Biochemistry 63 (2013). - ISSN 0981-9428 - p. 292 - 305.
vacuolar-processing-enzyme - 1-aminocyclopropane-1-carboxylic acid synthase - mitochondrial permeability transition - arabidopsis-thaliana - fire blight - oxidative stress - defense responses - salicylic-acid - host plants - disease resistance
In attached apple leaves, spot-inoculated with Erwinia amylovora, the phenotypic appearance of the hypersensitive response (HR) and the participation of ethylene, reactive oxygen species (ROS) and of vacuolar processing enzyme (VPE) (a plant caspase-1-like protease) were analysed. The HR in both the resistant and susceptible genotypes expressed a similar pattern of distinguishable micro HR lesions that progressed into confined macro HR lesions. The HR symptoms in apple were compared to those in non-host tobacco. The morphology of dead cells (protoplast shrinkage and retraction from cell wall) in apple leaves resembled necrotic programmed cell death (PCD). Lesion formation in both cv. Free Redstar (resistant) and cv. Idared (highly susceptible) was preceded by ROS accumulation and elevation of ethylene levels. Treatment of infected leaves with an inhibitor of ethylene synthesis led to a decrease of ethylene emission and suppression of lesion development in both cultivars. In the resistant but not in the susceptible apple cultivar an early and late increase in VPE gene expression was detected. This suggests that VPE might be an underlying component of the response to E. amylovora in resistant apple cultivars. The findings show that in the studied pathosystem the cell death during the HR proceeds through a signal transduction cascade in which ROS, ethylene and VPE pathways play a role.
Two-way plant-mediated interactions between root-associated microbes and insects: from ecology to mechanisms
Pangesti, N.P.D. ; Pineda Gomez, A.M. ; Pieterse, C.M.J. ; Dicke, M. ; Loon, J.J.A. van - 2013
Frontiers in Plant Science 4 (2013). - ISSN 1664-462X - 11 p.
induced systemic resistance - arbuscular mycorrhizal fungi - below-ground interactions - arabidopsis-thaliana - rhizosphere microbiome - defense responses - salicylic-acid - bacterial communities - jasmonic acid - pathogenic microorganisms
Plants are members of complex communities and function as a link between above- and below-ground organisms. Associations between plants and soil-borne microbes commonly occur and have often been found beneficial for plant fitness. Root-associated microbes may trigger physiological changes in the host plant that influence interactions between plants and aboveground insects at several trophic levels. Aboveground, plants are under continuous attack by insect herbivores and mount multiple responses that also have systemic effects on belowground microbes. Until recently, both ecological and mechanistic studies have mostly focused on exploring these below- and above-ground interactions using simplified systems involving both single microbe and herbivore species, which is far from the naturally occurring interactions. Increasing the complexity of the systems studied is required to increase our understanding of microbe-plant-insect interactions and to gain more benefit from the use of non-pathogenic microbes in agriculture. In this review, we explore how colonization by either single non-pathogenic microbe species or a community of such microbes belowground affects plant growth and defense and how this affects the interactions of plants with aboveground insects at different trophic levels. Moreover, we review how plant responses to foliar herbivory by insects belonging to different feeding guilds affect interactions of plants with non-pathogenic soil-borne microbes. The role of phytohormones in coordinating plant growth, plant defenses against foliar herbivores while simultaneously establishing associations with non-pathogenic soil microbes is discussed.
Phenotypic plasticity of plant response to herbivore eggs: effects on resistance to caterpillars and plant development
Pashalidou, F.G. ; Lucas-Barbosa, D. ; Loon, J.J.A. van; Dicke, M. ; Fatouros, N.E. - 2013
Ecology 94 (2013)3. - ISSN 0012-9658 - p. 702 - 713.
insect herbivores - pieris-brassicae - specialist herbivores - arabidopsis-thaliana - mamestra-brassicae - defense responses - bunias orientalis - chemical defense - pinus-sylvestris - getting ready
Herbivory induces direct resistance responses in plants that negatively affect subsequently colonizing herbivores. Moreover, eggs of herbivorous insects can also activate plant resistance, which in some cases prevents hatching larvae from feeding. Until now, plant-mediated effects of eggs on subsequent herbivory, and the specificity of such responses, have remained poorly understood. We studied the specificity and effects of plant resistance induced by herbivore egg deposition against lepidopteran larvae of species with different dietary breadths, feeding on a wild annual plant, the crucifer Brassica nigra. We examined whether this plant-mediated response affects the growth of caterpillars of a specialist (Pieris brassicae) that feeds on B. nigra leaves and flowers, and a generalist (Mamestra brassicae) that rarely attacks this wild crucifer. We measured growth rates of neonate larvae to the end of their second instar after the larvae had hatched on plants exposed to eggs vs. plants without eggs, under laboratory and semi-field conditions. Moreover, we studied the effects of egg deposition by the two herbivore species on plant height and flowering rate before and after larval hatching. Larvae of both herbivore species that developed on plants previously infested with eggs of the specialist butterfly P. brassicae gained less mass compared with larvae that developed on egg-free plants. Plants exposed to butterfly eggs showed accelerated plant growth and flowering compared to egg-free plants. Egg deposition by the generalist moth M. brassicae, in contrast, had no effect on subsequent performance by either herbivore species, or on plant development. Our results demonstrate that B. nigra plants respond differently to eggs of two herbivore species in terms of plant development and induced resistance to caterpillar attack. For this annual crucifer, the retardation of caterpillar growth in response to deposition of eggs by P. brassicae in combination with enhanced growth and flowering likely result in reproductive assurance, after being exposed to eggs from an herbivore whose larvae rapidly reduce the plant's reproductive potential through florivory.
Beneficial microbes in a changing environment: are they always helping plants to deal with insects?
Pineda, A. ; Dicke, M. ; Pieterse, C.M.J. ; Pozo, M.J. - 2013
Functional Ecology 27 (2013)3. - ISSN 0269-8463 - p. 574 - 586.
arbuscular mycorrhizal symbiosis - ultraviolet-b radiation - abscisic-acid - climate-change - induced resistance - defense responses - water-stress - signaling pathways - fungal endophyte - salicylic-acid
Plants have a complex immune system that defends them against attackers (e.g. herbivores and microbial pathogens) but that also regulates the interactions with mutualistic organisms (e.g. mycorrhizal fungi and plant growth-promoting rhizobacteria). Plants have to respond to multiple environmental challenges, so they need to integrate both signals associated with biotic and abiotic stresses in the most appropriate response to survive. Beneficial microbes such as rhizobacteria and mycorrhizal fungi can help plants to 'deal' with pathogens and herbivorous insects as well as to tolerate abiotic stress. Therefore, beneficial microbes may play an important role in a changing environment, where abiotic and biotic stresses on plants are expected to increase. The effects of beneficial microbes on herbivores are highly context-dependent, but little is known on what is driving such dependency. Recent evidence shows that abiotic stresses such as changes in soil nutrients, drought and salt stress, as well as ozone can modify the outcome of plant–microbe–insect interactions. Here, we review how abiotic stress can affect plant–microbe, plant–insect and plant–microbe–insect interactions, and the role of the network of plant signal-transduction pathways in regulating such interactions. Most of the studies on the effects of abiotic stress on plant–microbe–insect interactions show that the effects of microbes on herbivores (positive or negative) are strengthened under stressful conditions. We propose that, at least in part, this is due to the crosstalk of the different plant signalling pathways triggered by each stress individually. By understanding the cross-regulation mechanisms we may be able to predict the possible outcomes of plant–microbe–insect interactions under particular abiotic stress conditions. We also propose that microbes can help plants to deal with insects mainly under conditions that compromise efficient activation of plant defences. In the context of global change, it is crucial to understand how abiotic stresses will affect species interactions, especially those interactions that are beneficial for plants. The final aim of this review is to stimulate studies unravelling when these 'beneficial' microbes really benefit a plant.
Ve1-mediated resistance against Verticillium does not involve a hypersensitive response in Arabidopsis
Zhang, Z. ; Esse, H.P. van; Damme, M. van; Fradin, E.F. ; Liu, Chun-Ming ; Thomma, B.P.H.J. - 2013
Molecular Plant Pathology 14 (2013)7. - ISSN 1464-6722 - p. 719 - 727.
ethylene-inducing xylanase - receptor-like proteins - gated ion-channel - disease resistance - rhynchosporium-secalis - functional-analysis - defense responses - gene family - tomato ve1 - cell-death
The recognition of pathogen effectors by plant immune receptors leads to the activation of immune responses that often include a hypersensitive response (HR): rapid and localized host cell death surrounding the site of attempted pathogen ingress. We have demonstrated previously that the recognition of the Verticillium dahliae effector protein Ave1 by the tomato immune receptor Ve1 triggers an HR in tomato and tobacco. Furthermore, we have demonstrated that tomato Ve1 provides Verticillium resistance in Arabidopsis upon Ave1 recognition. In this study, we investigated whether the co-expression of Ve1 and Ave1 in Arabidopsis results in an HR, which could facilitate a forward genetics screen. Surprisingly, we found that the co-expression of Ve1 and Ave1 does not induce an HR in Arabidopsis. These results suggest that an HR may occur as a consequence of Ve1/Ave1-induced immune signalling in tomato and tobacco, but is not absolutely required for Verticillium resistance.
Non-pathogenic rhizobacteria interfere with the attraction of parasitoids to aphid-induced plant volatiles via jasmonic acid signalling
Pineda, A. ; Soler Gamborena, R. ; Weldegergis, B.T. ; Shimwela, M.M. ; Loon, J.J.A. van; Dicke, M. - 2013
Plant, Cell & Environment 36 (2013)2. - ISSN 0140-7791 - p. 393 - 404.
brevicoryne-brassicae attack - herbivore-induced volatiles - phloem-feeding insects - arabidopsis-thaliana - mycorrhizal fungi - myzus-persicae - multitrophic interactions - mediated interactions - systemic resistance - defense responses
Beneficial soil-borne microbes, such as mycorrhizal fungi or rhizobacteria, can affect the interactions of plants with aboveground insects at several trophic levels. While the mechanisms of interactions with herbivorous insects, that is, the second trophic level, are starting to be understood, it remains unknown how plants mediate the interactions between soil microbes and carnivorous insects, that is, the third trophic level. Using Arabidopsis thaliana Col-0 and the aphid Myzus persicae, we evaluate here the underlying mechanisms involved in the plant-mediated interaction between the non-pathogenic rhizobacterium Pseudomonas fluorescens and the parasitoid Diaeretiella rapae, by combining ecological, chemical and molecular approaches. Rhizobacterial colonization modifies the composition of the blend of herbivore-induced plant volatiles. The volatile blend from rhizobacteria-treated aphid-infested plants is less attractive to an aphid parasitoid, in terms of both olfactory preference behaviour and oviposition, than the volatile blend from aphid-infested plants without rhizobacteria. Importantly, the effect of rhizobacteria on both the emission of herbivore-induced volatiles and parasitoid response to aphid-infested plants is lost in an Arabidopsis mutant (aos/dde2-2) that is impaired in jasmonic acid production. By modifying the blend of herbivore-induced plant volatiles that depend on the jasmonic acid-signalling pathway, root-colonizing microbes interfere with the attraction of parasitoids of leaf herbivores.
Seed and leaf treatments with natural compounds to induce resistance against Peronospora parasitica in Brassica oleracea
Wolf, J.M. van der; Michta, A. ; Zouwen, P.S. van der; Boer, W.J. de; Davelaar, E. ; Stevens, L.H. - 2012
Crop Protection 35 (2012). - ISSN 0261-2194 - p. 78 - 84.
systemic acquired-resistance - induced disease resistance - defense responses - fusarium-wilt - downy mildew - damping-off - plants - protection - cucumber - growth
Seed and leaf treatments with natural compounds having a low risk profile (LRP) were evaluated for their potential to induce resistance in cabbage plants (Brassica oleracea) against Peronospora parasitica, causal organism of downy mildew. The selection of 34 LRP compounds comprised micronutrients, organic compounds such as proline, riboflavin, oligogalacturonides, aminolignosulfonates, bacterial lipopolysaccharides, and bacterial and fungal extracts. Treatments with the synthetic chemical inducers 2,6-dichloroisonicotinic acid (INA), d,l-ß-aminobutyric acid, salicylic acid, benzothiadiazole and the fungicide Previcur™ were included as controls. After seed treatment a maximum reduction of 27% diseased leaf area was found with an extract of a Lysobacter strain, compared to a reduction of 99% for INA, the most effective synthetic inducer. Seed treatments with extracts of Pectobacterium carotovorum subsp. carotovorum, Bacillus macerans, Pseudomonas syringae, Streptomyces and Xanthomonas campestris strains also reduced downy mildew infection significantly. After leaf treatment, a maximum reduction of 85% was again found with the Lysobacter extract, compared to a reduction of 99% for INA, the most effective synthetic inducer. Leaf treatments with CuSO4 (=1 mM), MnCl2 (=10 mM), K2HPO4 (100 mM), and extracts of P. syringae, P. carotovorum subsp. carotovorum, Streptomyces, X. campestris and B. macerans strains also reduced the diseased leaf area, but CuSO4 was highly phytotoxic. For seed and leaf treatments with Lysobacter extract, proline, MnCl2 and INA the effect on the induction of chitinase and glucanase activity was tested, using two pathogenesis-related proteins as markers for induced resistance. For seed treatments only INA and for leaf treatments INA, proline and MnCl2 treatments resulted in increased activity of both enzymes. The rate of enzyme activity induced by INA was dependent on the time seeds were exposed to the compound. Highlights ¿ Seed treatments with isonicotinic acid protects Brassica seedlings from Peronospora infections. ¿ Treatments of seedlings with extracts of Lysobacter protects against Peronospora infections. ¿ Effect of seed treatments is dependent on the time of incubation with the elicitor
Rhizobacteria modify plant–aphid interactions: a case of induced systemic susceptibility
Pineda, A. ; Zheng, S.J. ; Loon, J.J.A. van; Dicke, M. - 2012
Plant Biology 14 (2012)Suppl. s1. - ISSN 1435-8603 - p. 83 - 90.
gene-expression - arabidopsis-thaliana - brevicoryne-brassicae - signaling pathways - induced resistance - insect herbivores - abscisic-acid - disease resistance - defense responses - myzus-persicae
Beneficial microbes, such as plant growth-promoting rhizobacteria and mycorrhizal fungi, may have a plant-mediated effect on insects aboveground. The plant growth-promoting rhizobacterium Pseudomonas fluorescens can induce systemic resistance in Arabidopsis thaliana against several microbial pathogens and chewing insects. However, the plant-mediated effect of these beneficial microbes on phloem-feeding insects is not well understood. Using Arabidopsis as a model, we here report that P. fluorescens has a positive effect on the performance (weight gain and intrinsic rate of increase) of the generalist aphid Myzus persicae, while no effect was recorded on the crucifer specialist aphid Brevicoryne brassicae. Additionally, transcriptional analyses of selected marker genes revealed that in the plant–microbe interaction with M. persicae, rhizobacteria (i) prime the plant for enhanced expression of LOX2, a gene involved in the jasmonic acid (JA)-regulated defence pathway, and (ii) suppress the expression of ABA1, a gene involved in the abscisic acid (ABA) signalling pathway, at several time points. In contrast, almost no effect of the plant–microbe interaction with B. brassicae was found at the transcriptional level. This study presents the first data on rhizobacteria-induced systemic susceptibility to an herbivorous insect, supporting the pattern proposed for other belowground beneficial microbes and aboveground phloem feeders. Moreover, we provide further evidence that at the transcript level, soil-borne microbes modify plant–aphid interactions. | https://library.wur.nl/WebQuery/wurpubs?A320==defense%20responses |
# Portuguese Gold Coast
The Portuguese Gold Coast was a Portuguese colony on the West African Gold Coast (present-day Ghana) along the Gulf of Guinea. Established in 1482, the colony was officially incorporated into Dutch territory in 1642 following Portugal’s defeat in the Dutch-Portuguese War. From their seat of power at the fortress of São Jorge da Mina (located in modern Elmina), the Portuguese commanded a vast internal slave trade, creating a slave network that would expand after the end of Portuguese colonialism in the region. The primary export of the colony was gold, which was obtained through barter with the local population. Portuguese presence along the Gold Coast increased seamanship and trade in the Gulf, introduced American crops (such as maize and cassava) into the African agricultural landscape, and made Portuguese an enduring language of trade in the area.
## History
### Portuguese arrival on the Gold Coast
In 1471, Portuguese explorers encountered fishing villages rich with ivory and gold along the Atlantic coast of modern-day Ghana, which the Portuguese called the Gold Coast. The prospect of trade in the Gold Coast region helped spur the construction of the fortress São Jorge da Mina (St. George of the Mine) in 1482, which soon came to be known as Elmina Castle, derived from the Portuguese term "el mina" ("the mine"). The castle was erected near a populated African town which was also called Elmina. Other major Portuguese settlements on the Gold Coast included the following:
- Fort Santo António de Axim, modern Axim: established 1515
- Fort São Francisco Xavier, modern Osu, a district of Accra: established c. 1557–c. 1578
- Fort São Sebastião, modern Shama: established 1558
The Portuguese decision to construct the fortress at Elmina was influenced by a pre-established trade system between native Elminans and Portuguese merchants in the area. A natural peninsula, enclosed by the Atlantic and the Benya river, was chosen as the site of construction for Elmina Castle to maximize defensibility. A nobleman named Diogo de Azambuja was appointed by the Portuguese king, John II, to construct the coastal fortress. To maintain peace with the native peoples of Elmina, Azambuja entered into negotiations with the native leader Caramansa over their plans to construct Elmina Castle. In a discussion facilitated by a Portuguese merchant and aided by a native translator, Caramansa reacted skeptically to the proposition, as several African homes would have to be destroyed for construction on the castle to begin. After the Portuguese threatened violence, Caramansa met Portuguese demands. However, he prohibited the use of sacred local rock, known to the native Elminans as Kokobo, and forbade the Portuguese from accessing the natives’ freshwater supply. Portuguese settlers, defying Caramansa's demands, mined Kokobo rock for construction purposes. Doing so upset the local population, yet conflict was avoided after the Portuguese bestowed gifts upon the native Elminans. Once constructed, Elmina Castle represented the first major European construction in sub-Saharan Africa and is currently recognized as a UNESCO World Heritage Site.
In order to establish good trade relationships with neighboring African nations, the Portuguese frequently extended gifts to the leaders of interior states, including to the Eguafo state to which Elmina belonged. Their strategy along the coast, however, entailed using force against Africans to prevent them from trading with European competitors. Portuguese violence along the coast soured their relations with neighboring African states; as such, the Portuguese lacked sufficient manpower to enforce their rule across the entire Gulf of Guinea. Portuguese influence along the Gold Coast extended from an area near modern-day New Town, Ghana, in the west to the historic settlement of Adda (near modern-day Denu, Ghana) in the east. Other European nations conducting trade in the Gulf, including the English and Dutch, offered lower-priced commodities than the Portuguese, driving many Africans to accept the risk of Portuguese retaliation in order to yield a larger profit from trade.
### Dutch competition
Competition with European powers coupled with the decline of Portugal's economic might in the early 1600s led to a waning of Portuguese influence in the Gold Coast region. Spurred by reports of the successful Portuguese gold trade in the Gulf of Guinea, Dutch forces began mobilizing against the Portuguese in an effort to wrest control of the region and monopolize the gold trade. In 1625, the Dutch West India Company initiated an attack on São Jorge da Mina, which stood as the trading hub for the Portuguese in West Africa. The Dutch fleet was made up of the combined forces of Captain Jan Dircksz Lam and the remaining ships from Boudewijn Hendricksz's failed venture in Salvador against the Spanish. On October 25, 1625, the Dutch were ambushed by Portuguese forces and their African allies, who had been persuaded to join the fight after the Portuguese promised them compensation. After incurring heavy losses, the Dutch were expelled from the area in what became known as the Battle of Elmina (1625).
In August of 1637, the Dutch West India Company again targeted Elmina, which they saw as both the seat of Portuguese power in the Gulf of Guinea and a potential foothold into the African slave trade. To aid in the conflict, known as the second Battle of Elmina (1637), the Dutch encouraged members of the Elmina, Komenda, and Efutu states to turn against the Portuguese. After gaining some local support, the Dutch were better equipped to take on the opposing Portuguese forces and succeeded in capturing a hill facing the fort of Elmina. After enduring days of cannon fire, the Portuguese conceded, and Elmina Castle officially came under Dutch control on August 29, 1637. Without their stronghold in Elmina, the Portuguese were completely expelled from the region by 1642.
### Donatary captains
Donatary captain (donatário, or Captain-major) was a designation given by the Portuguese Crown to an official tasked with overseeing colonial territory. The following is a list of the known donatary captaincies in São Jorge da Mina:
## Economy
The Portuguese imported slaves to Elmina throughout the sixteenth century, using them primarily to transport goods to and from interior African states, but also to exchange with local Elminans for gold. The main supply of Gold Coast slaves came from the trade route between Benin and Elmina, which also supplied the Portuguese with important commodities such as cotton, cloth, and beads. The slave trade was later expanded to encompass the Niger River delta and the island of São Tome. Cloth, linens, beads, copper and brass pots, pans, bracelets, and slaves were all used as bartering tools to obtain gold from the native merchants of Elmina. Elmina's gold originated from the Asante and Denkyira regions of modern-day Ghana and became the dominant export from the colony along with, to a lesser extent, ivory. Additionally, the inflow of foreign crops into the Gold Coast region globalized the region's agricultural practices and output, introducing sugar, maize, guava, sweet potatoes, coconut, yams, and cassava to the African agricultural landscape. Further, the dominance of the Portuguese trade route along the Gulf Coast in the sixteenth century led to Portuguese becoming the principal language of exchange in the Gulf of Guinea. The language has endured in the area despite the presence of other European powers in the Gulf after the colony was ceded in 1642.
## Legacy
The internal African slave trade established by the Portuguese laid the groundwork for the vast networks of human trafficking that would flourish in the region during the centuries to come, as the Dutch and, later, the British capitalized on pre-established trade routes during the Atlantic slave trade. Further, the shipping might of the Portuguese encouraged new, long-distance river trading amidst West African states, and the volume of trade along the Gulf of Guinea increased as a result of Portuguese presence. Boatbuilding became an important craft that accompanied an increase in coastal trade and seamanship in the Gulf. After generations of intimate contact with local African dialects, Portuguese creole emerged as an important language of trade along the Gulf Coast, second only to Portuguese itself. Further, interbreeding between Portuguese and Africans led to a sizable mixed-race population along the Gold Coast.
Urbanization occurred around Elmina, spurred partly by Portuguese attempts to establish a municipality in the area. Native governors, known as braffos, were given authority by the Portuguese, and migration from the interior to coastal regions increased. The cultivation of maize and cassava, first introduced to the region by the Portuguese through trans-Atlantic trade, flourished in the Gold Coast and became dietary staples throughout West Africa. Further, Portuguese contact and activity along the Gold Coast integrated the region into the global economy. The larger trade volume in the region centralized the small, distinct states that existed prior to Portuguese contact into larger political entities. The advent of global trade in the Gold Coast also consolidated commercial activity in coastal cities, which connected inland African communities with European trade. | https://en.wikipedia.org/wiki/Portuguese_Gold_Coast |
Motorola Droid Solves Rubik's Cube
At the risk of giving away my age, I'll admit that I am a teenager of the '80s, during which the Rubik's Cube was invented to frustrate young and old alike. The Rubik's Cube is a puzzle in which each face is made up of a 3×3 grid of colored squares. To solve the puzzle, you rotate rows of the cube until every square on each face is the same color.
The Rubik's Cube has made a comeback with competitions and world records. The point of the competition is to solve the cube as quickly as possible, and there are world rankings for a variety of different forms of Rubik's Cube competitions. Apparently the Motorola Droid wants in on the competition; watch for yourself:
Summer 2020 is here! Although we may be physically far apart, we can still be together in spirit!
Thank you to everyone who attended the MSE Grants Development Faculty Workshop on May 21. If you missed it or want a refresher, check out this presentation; it provides new information about MSE grants in 2021 and resources for creating a winning proposal. If you currently have an MSE award for a project that is delayed due to COVID-19, please see the presentation for information on submitting a revision by August 31, 2020.
Internal Grant Opportunities
MSE Research Grants
If you received an MSE Research Grant after the 2016–2017 cycle and plan to reapply this fall (the grant deadline has been moved to October 1), you will need to show that you have applied for external funding for your project. Please contact me if you have questions or need information on potential funders.
Experiential Learning Grant
The next deadline for ELGs is July 1. Contact Jaynie Mitchell and Steve Christensen for assistance and preapproval of your proposal.
President’s Innovation Funding for Research-Practice Partnership (RPP)
Developing a mutually beneficial research project that meets the needs of both BYU and your K–12 school partner can create deep, long-lasting partnerships that benefit all involved. Thanks to a recent $60,000 award from the BYU President’s Innovation Fund, the dean will be awarding up to four Research-Practice Partnership–focused grants this fall. CFS-track MSE and EPP faculty are eligible to submit a pre-proposal concept paper that outlines your plan to collaborate with BYU–Public School Partnership schools and/or districts by October 1, 2020. For more information on developing Research-Practice Partnerships and this award opportunity, contact Jaynie Mitchell.
External Grant Opportunities
- July 27, 2020: National Science Foundation (NSF) Early Career Grants
- August 20, 2020: US Dept. of Ed IES Research Grants
- October 7, 2020: National Science Foundation (NSF) Discovery Research PreK–12 Grants
Find more research support information on our website. | https://education.byu.edu/news/mckay-grants-and-awards-spring-summer-2020 |
The events of the past few weeks have given investors plenty of reasons to be fearful. Gone (but not forgotten) are the fears over Brexit and Trade Wars - and instead we have worries about the impact of the Coronavirus, both socially and on the global economic system.
With more than 109,695 cases of COVID-19, including 3,811 deaths worldwide*, the threat of the virus has taken hold of the global economy. The global stock market (as measured by the MSCI World) and the UK stock market, the FTSE 100, had fallen 9.6% and 12.5% respectively between 20 February and 6 March 2020**.
Today (March 9) saw the FTSE 100 fall a further 8% in early trading, as a row between Russia and Saudi Arabia saw oil prices plunge by more than 20%*** - an unnecessary shock for an already fragile market.
The sharp declines in the value of investments could tempt people to head for the safety of cash for the time being. However, as Darius McDermott, our managing director, commented: "Share prices could fall further, but losses are not losses until you crystallise them.
“The worst thing anyone could do now, would be to redeem investments. History tells us that holding your nerve can be the better strategy.”
Although it is not impossible to time the market, it is extremely challenging and, while you may miss some of the worst days, it also leaves you open to missing out on some of the best-performing days.
Research from Schroders found that mistimed decisions on an investment of just £1,000 could have cost you more than £19,000-worth of returns in the past 30 years****.
The company analysed the performance of the FTSE 100, the FTSE 250 and the FTSE All-Share over three decades. It found that, had you invested in the FTSE 250 (the largest 250 companies in the UK) in 1989 and left the investment alone for the next 30 years, it might have been worth £26,831 by the end of 2019****.
However, if you had tried to time your entry in and out of the market, the result could have been very different - and not in a good way.
During the same period, if you missed out on the FTSE 250's 30 best trading days, the same investment might now be worth £7,543, or £19,288 less**** (figures exclusive of charges and inflation).
Essentially, if you had left your investment in the FTSE 250 untouched, you would have made an 11.6% annual return over the last 30 years. This falls starkly to 9.6% if you missed the 10 best trading days and then to 8.2% and 7% if you missed the best 20 or 30 days respectively****.
The story is similar for the FTSE 100 where a £1,000 investment in 1989 would have been worth £13,485 at the end of 2019. However, should you have tried to time the market and missed the best 30 days this would fall to £2,958****.
| What a £1k investment in 1989 is worth now | Invested the whole time | Less 10 best days | Less 20 best days | Less 30 best days |
| --- | --- | --- | --- | --- |
| FTSE 100 | £13,485 | £6,947 | £4,400 | £2,958 |
| FTSE 250 | £26,831 | £15,713 | £10,665 | £7,543 |
| FTSE All-Share | £14,016 | £7,496 | £4,885 | £3,378 |
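These percentages follow directly from compound-growth arithmetic. As a rough illustration (a sketch of the math, not Schroders' own methodology or code), the end values in the table can be converted back into the annualised returns quoted above:

```python
# A minimal sketch that reproduces the annualised-return arithmetic quoted above:
# an investment growing from start_value to end_value over n years compounds at
# (end / start) ** (1 / n) - 1.

def annualised_return(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by a start and end value."""
    return (end_value / start_value) ** (1 / years) - 1

# Figures from the table above: £1,000 invested in the FTSE 250 for 30 years.
scenarios = {
    "invested the whole time": 26_831,
    "missing the 10 best days": 15_713,
    "missing the 20 best days": 10_665,
    "missing the 30 best days": 7_543,
}

for label, final_value in scenarios.items():
    rate = annualised_return(1_000, final_value, 30)
    print(f"FTSE 250, {label}: {rate:.1%} a year")
# Prints roughly 11.6%, 9.6%, 8.2% and 7.0% a year, matching the article's figures.
```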
For those feeling brave, it is also worth remembering that, if you were contemplating investing in the stock market at the start of January, it is much better value today: the UK stock market has fallen some 20% year to date^, which means you would essentially be buying the same assets at a 20% discount.
This is a ‘best ideas’ portfolio, which encompasses any stock regardless of size or sector, although there will usually be around 50% in small and mid-cap stocks. The managers look for firms with ‘intellectual capital’ or strong distribution networks, recurring revenue streams and products with no obvious substitutes. They also like to invest in companies where management teams have a significant personal equity stake.
This fund is run by industry veteran Charles Montanaro and invests in quality growth businesses, backed by strong management teams. The fund seeks to grow its dividend over time. One of its differentiating features is the absence of stocks listed on AIM (Alternative Investment Market), as the team believes these are too risky. Each holding will offer an attractive dividend yield or the potential for dividend growth.
This fund invests primarily in the companies in the FTSE 250. However, while it naturally focuses on medium-sized companies, its manager will be pragmatic about including select opportunities from the smaller companies space, investing early in strong growth stories, as well as letting winning mid-cap holdings grow into larger-sized companies.
Finding undervalued companies that are yet to deliver on their potential is the aim of this fund. Manager Hugh Sergeant uses his three decades of investing experience to identify companies where he believes management have the capability to turn things around, and he looks to add to his holdings at almost fire-sale prices in volatile times, so he has no doubt had a busy few days.
Investec UK Alpha is a well-diversified core UK equity fund. The manager aims to buy quality companies that consistently create value for shareholders and believes that markets are excessively focused on short term factors, and not where a company will be in five years’ time. This creates opportunities to invest in quality companies that will deliver for many years into the future.
*Source: European Centre for Disease Prevention and Control, as at 8am, 9 March 2020
**Source: FE Analytics, total returns in sterling for the MSCI World, FTSE 100 and FTSE 250 20 February 2020 to 6 March 2020
***Source: www.bbc.co.uk, 9 March 2020
****Source: Schroder Insights -The £19k cost of trying to time the market
^Source: FE Analytics, total returns in sterling for the MSCI World, FTSE 100 and FTSE 250, 1 January 2020 to 10 March 2020
Past performance is not a reliable guide to future returns. You may not get back the amount originally invested, and tax rules can change over time. The views expressed are those of the author and do not constitute financial advice. | https://chelseafs.co.uk/news/blog/should-you-invest-when-stock-markets-fall/ |
How do you make a pet casket?
- Carefully lay the top of the coffin on the finished box.
- Place hinges approximately 6 to 12 inches apart, depending on the length of the coffin.
- The number of hinges you'll need will depend upon the size of your dog and the size of the coffin.
How do you make a cat casket?
Select a wood to create your cat’s coffin, such as old pine board or oak. You’ll need a board at least 3/4 inches thick, 3 feet wide and 8 feet long. Sketch the dimensions you’ll need to create the casket. If necessary, measure the cat to ensure the animal will be able to comfortably fit inside.
How do you make a pine box casket?
How to build a Casket ~DIY~ Pine Box –
How do you make a small casket? | https://animaliajournal.com/rabbits/how-to-make-a-wooden-pet-casket.html |
COVID-19 and Climate Change Each Require Targeted Treatment and Improved Behavior.
We’ll meet the COVID challenge by improving our current treatments and practices, not by either jettisoning them or letting the disease run its course. Here’s how that same logic applies to climate change.
2 April 2020 | COVID-19 has flattened the global economy, and it will hurt countless people before we flatten the curve. That, however, hasn’t stopped some from declaring this human tragedy an ecological “blessing” that could also save a lot of lives if it ends up supporting both national climate policies and the Paris Agreement.
Aside from a shared sense that this tragedy can be harvested for good, however, there’s little agreement on how, exactly, this awakening will happen. Some, for example, argue that our global response to COVID-19, if successful, can provide some sort of nebulous energy for tackling climate change. Others argue that we’ll solve our problems by letting airlines and fossil fuel companies die so we can start from scratch. Neither of these notions really takes stock of the existing “vaccines” and “treatments” already being brought to bear in the effort to end climate change, and that could be a tragedy in itself if it means this moment is allowed to pass.
I’m skeptical of those who argue this crisis will automatically turn climate skeptics into climate warriors, but I do believe this moment can be leveraged to provide more general support to a greener economy and to meet the goals of the Paris Agreement through a more aggressive application of existing economic, governmental, and technological “treatments” combined with the organic behavioral changes we’re already making in response to COVID-19.
Clear Skies and Fresh Eyes
One reason to believe people will feel more motivation to act is that our negative impact on the environment has drastically decreased, and many of us are directly experiencing clean air and reviving nature around us. In China, climate policy has actually historically been driven more by a need to reduce air pollution than to mitigate climate impacts. This already led to more solar energy and less coal-powered energy. In recent weeks, the sky has been visibly cleaner over China, northern Italy, South Korea, and even the United Kingdom, satellite images showed.
Source: The Guardian, nitrogen emissions from China
To be clear, these visible reductions mainly concern NO2 emissions (nitrogen oxide) and not greenhouse gasses, but they come from the same sources that also emit greenhouse gas: power plants, factories and motor vehicles.
Here in the Netherlands, with its high population and road density and its busy airport hub, there was clearly less noise nuisance, less nitrogen, a bluer sky and less CO₂.
Structural Low CO2 Emissions Trend and Available Offsets “Vaccine”
Last year, in the “Urgenda” climate ruling, the Supreme Court of the Netherlands ruled that the government was responsible for reducing the country’s greenhouse gas emissions to a level 25 percent below 1990 levels by the end of 2020 instead of its legally binding 20-percent target – a feat that’s inconceivable without extra expenses. In China, however, CO₂ emissions fell 25 percent, or by roughly 100 million metric tons of CO₂, as a result of the country’s response to COVID-19.
The combination of lower CO2 levels, less capital in the business sector and available carbon offsets may offer a welcome vaccine to reach a low carbon economy sooner than we thought. Hence, the deep crisis may teach us not to make climate policy solely a political clash, but a positive and sensible change – a new “business as usual”. Let’s be more pragmatic: the antidote is here!
How do we Make the Change Sustainable?
Trend watcher Li Edelkoort advises us to go through the COVID-19 crisis in a sort of “consumption quarantine” that becomes a “blank page for a new beginning…[that involves] less consumption and less, but cleaner production.”
It’s an admirable ambition, but history shows that this doesn’t happen automatically.
The 2008 financial crisis, for example, also raised hopes for a green re-set, but that re-set failed to materialize. Yes, global CO₂ emissions from fossil fuel combustion and cement production fell 1.4 percent during the crisis, but they rose 5.9 percent when it finished.
This is a pattern we’ve seen after several previous crises, each of which came with declarations of a new, green future.
Previous Crises Also Failed to Yield Lasting Environmental Benefits
Conditional Stimulus: Unused Carrot?
Governments have stepped up to rescue key industries and jobs, but future support should be offered with an eye on accelerating a low-carbon restart. This means attaching sustainability conditions such as the use of renewable energy or adoption of energy efficiency strategies that lead to cleaner air quality and new jobs. The head of the International Energy Agency, Fatih Birol, and the World Resources Institute have already called on countries to put renewable energy at the heart of their stimulus plans to emerge from the crisis.
CO₂ Emissions Trading Systems Support Long-Term Sustainability
The turmoil in the financial markets is also impacting emissions trading. The price of an emission reduction representing a ton of CO₂ either kept out of or removed from the atmosphere under the European Union Emissions Trading System (EU ETS) fell in two weeks from €24 to just under €15. In the United States, California Carbon Allowances (CCAs) traded below the floor price for the first time in nearly three years. The Mar-20 V20 CCA contract tumbled to $16.48, a 25-cent drop on the ICE platform, because there are fewer CO₂ emissions, and as a result the demand for offsets has decreased. At the same time, companies are unloading allowances and offsets to meet their cash-flow requirements, accelerating the price drop.
We may have low price levels for months into the future, and this will not impact the environmental integrity of these programs for now, because the overall budget of allowances will ensure targets are met. Lower price levels make compliance a bit easier and cheaper, which is also suitable in times of a shrinking economy.
The COVID-19 challenge is different from the 2008 financial meltdown because it’s based on a real-world event rather than a structural flaw in the financial system. Because of this, we can expect a rapid recovery once the COVID-19 crisis is over. To prevent an oversupply from accumulating and then driving down prices in the recovery, the EU uses a so-called “Market Stability Reserve” to reduce the number of allowances auctioned. There is simply less auctioning with a surplus, so that the CO₂ price will recover again next year. In California the price floor keeps prices from falling too low as well.
Long-term expectations for cap and trade are still positive, and the European Commission will even propose to increase the CO2 target for 2030 to 50 to 55 percent to comply with the Paris Agreement. This is supported by most EU member states. Ultimately, the number of allowances will decrease: fewer allowances will circulate every year.
Further State Aid is an Opportunity to Make Aviation Sustainable Faster
Let’s take a look at just one sector: aviation. Research agency CAPA expects that, without support, airlines will go bankrupt by the end of May. Indeed, flights from European airports are down anywhere from 50 percent to 88 percent.
EU and US governments understandably focused their first rescue on life support and job preservation, but I see further state aid as an indispensable opportunity to make aviation more sustainable by promoting the use of cleaner fuels and more fuel-efficient aircraft and practices.
I’m in the camp that believes we’ll fly less in the future as distributed working and virtual conferences become the new normal – in part because we’ve gotten used to it, but also because of budgetary constraints after economic activities begin to pick up.
Governments can anticipate this in their next round of aid packages – insisting, for example, on airlines prioritizing the use of newer, more fuel-efficient aircraft and low-emission fuels. The Dutch national carrier, KLM, already plans to take the old, polluting Boeing 747 out of service – why not insist they do it a year earlier?
We can also make any future aid contingent on reduced future emissions, with any reductions that can't be achieved organically covered by high-quality offsets.
CO2 Vaccines are Required, as Well as Behavioral Changes
In summary, the COVID-19 crisis offers the opportunity to come out greener, but that is certainly not automatic. The parallels are, however, striking.
A vaccine is required for COVID-19 – an inhibitory drug that prevents it from taking hold again – but so are behavioral changes, including better treatment of animals.
For the climate test, “CO2 vaccines” are required, as well as behavioral changes. Airlines can adopt better practices and offset those emissions they can’t eliminate, but we can all fly less.
In this way, we can turn Li Edelkoort’s “consumption quarantine” into something permanent.
Please see our Reprint Guidelines for details on republishing our articles. | https://www.ecosystemmarketplace.com/articles/the-climate-challenge-like-the-covid-19-challenge-requires-targeted-treatment-and-improved-behavior/ |
Dog Years to Human Years Calculation: Importance, Ways to Convert & Dog Age Chart to Know How Old Is My Dog?
The popular formula to convert a dog’s age to human years is to multiply the dog’s age by 7. This formula has been around for so long that nobody knows who made it and where it started from.
But recently, veterinarians and dog professionals have pointed out that this formula is too simple, and many variables have an impact on the age of a dog. These factors need to be accounted for when determining your dog’s age in human terms.
This article will help you to find out how to convert your dog’s age into human years and what factors affect the calculation of your dog’s age.
Importance of calculating a dog’s age
People have long tried to translate the age of dogs into human terms. An inscription at Westminster Abbey in London, dating back to the 1200s, compares the lifespans of animals and people: in it, dogs live 9 years while humans live 81 years.
Calculating a dog's age in human terms is not just a matter of curiosity. A dog ages much more quickly than a human being, so knowing your dog's equivalent age helps you determine how best to care for her as she passes through each phase of life.
Ways to convert your dog’s age into human years
Today, you can use three methods if you want to convert your dog’s age into human years. These are:
1. The “x 7” formula.
This is the most popular and most common method that people use in converting a dog’s age into human terms.
The history behind this formula, as discussed above, is not clear. One hypothesis is that it was based on simple statistics: humans live to an average age of 70 years while dogs live to an average age of 10 years, giving a 7:1 ratio between human and dog years.
Dog experts state that this formula is inaccurate based on two reasons:
- a. the first two years of a dog's life are already equivalent to 18–25 human years based on physical and behavioral development; and,
- b. the size and breed of a dog can affect and change the ratio.
2. The “10.5 + 4” formula.
This formula is not as common as the one above. The equation goes:
1st dog year = 10.5 human years
2nd dog year = 10.5 human years
subsequent dog years = 4 human years each
This formula is a little bit more accurate than the first one. But it still does not take into account the size and breed of a dog.
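As a quick illustration, here is a minimal sketch of how the two rules of thumb above translate into code (the function names are my own, chosen for clarity):

```python
# A minimal sketch of the two conversion formulas described above.
# Neither accounts for size or breed -- they are simple rules of thumb only.

def dog_age_x7(dog_years: float) -> float:
    """The traditional 'multiply by 7' rule."""
    return dog_years * 7

def dog_age_10_5_plus_4(dog_years: float) -> float:
    """First two dog years count as 10.5 human years each, then 4 per year."""
    if dog_years <= 2:
        return dog_years * 10.5
    return 2 * 10.5 + (dog_years - 2) * 4

for age in (1, 2, 5, 10, 15):
    print(age, dog_age_x7(age), dog_age_10_5_plus_4(age))
# For example, a 10-year-old dog is 70 "human years" by the x7 rule,
# but 21 + 8 * 4 = 53 by the 10.5 + 4 rule.
```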
3. Size and breed calculators and charts
There are now online calculators that take into account the size and breed of a dog.
One calculator of this kind is Pedigree's (https:/www.pedigree.com/dog-care/dog-age-calculator), and another is available at www.ajdesigner.com/fl_dog_age/dog_age.php. Either can be used to convert a pet's age to human years.
Some charts also take into account the weight and size of dogs to convert a dog’s age into human years.
These calculators and charts are considered more accurate because they take into account factors that affect a dog’s longevity.
Dog years to human years chart
Let’s take a closer look at some of the charts that can be used as a guide to a dog’s age.
The American Veterinary Medical Association (AVMA) uses this chart to estimate the age conversion of dog years to human years:
| Dog years | Human years |
| --- | --- |
| 7 | 44-56 |
| 10 | 56-78 |
| 15 | 76-115 |
| 20 | 96-120 |
The AVMA’s chart is very general and covers dogs from small to large breeds.
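If you want to use the AVMA figures programmatically, a minimal, hedged sketch is a simple nearest-row lookup. The data below is just the table reproduced above; the function itself is hypothetical, not an AVMA tool:

```python
# A simple lookup of the AVMA figures quoted above.
# Each row is (dog_years, human_low, human_high); a query is matched
# to the nearest listed dog age rather than interpolated.

AVMA_CHART = [(7, 44, 56), (10, 56, 78), (15, 76, 115), (20, 96, 120)]

def human_age_range(dog_years):
    """Return the (low, high) human-year range from the nearest chart row."""
    nearest = min(AVMA_CHART, key=lambda row: abs(row[0] - dog_years))
    return nearest[1], nearest[2]

print(human_age_range(12))  # -> (56, 78): the nearest chart row is 10 dog years
```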
Here is another chart that estimates your dog’s age converted to human years:
Factors that affect a dog’s age
As stated above, size and breed are very important factors that affect a dog's age. The third important factor is healthcare.
1. Size of dogs
You can see, based on the charts above, that dogs of smaller breeds tend to live longer than dogs of larger breeds. Large breeds are regarded as "old" or "elderly" when they are 5-6 years of age, while small breeds are regarded as "seniors" at 10-14 years of age.
So let’s look at some of the breeds of dogs and how to determine which category your dog falls under based on the American Kennel Club classification.
| Size | Average Weight (Adult Dog) | Sample |
| --- | --- | --- |
| Giant breeds | 75-120+ pounds | Akita, Bernese Mountain Dog, Great Dane, Great Pyrenees, Newfoundland, Saint Bernard |
| Large breeds | 55-85 pounds | Alaskan Malamute, Boxer, German Shepherd Dog, Golden Retriever, Labrador Retriever, Tibetan Mastiff |
| Medium breeds | 35-65 pounds | Australian Shepherd, Chow Chow, Dalmatian, Samoyed |
| Small breeds | 7-35 pounds | Bichon Frise, Cardigan Welsh Corgi, Dachshund, French Bulldog, Pug, Shiba Inu, Shih Tzu, Whippet |
| Toy breeds | 2-9 pounds | Chihuahua, Maltese, Papillon, Pomeranian, Toy Poodle, Yorkshire Terrier |
Mammals with enormous body mass such as elephants and whales generally live longer. But it seems that dogs are an exception to this rule. No study has yet identified why dogs display a distinct lifespan depending on their size.
One probable cause is that larger breeds develop illnesses related to aging faster than those from smaller breeds.
2. Breed
Here are the most popular dog breeds in the U.S.A. and their usual life expectancy:
| Breed | Life expectancy |
| --- | --- |
| Australian Shepherd | 12-18 years |
| Beagle | 12-15 years |
| Bernese Mountain Dog | 6-8 years |
| Bloodhound | 9-11 years |
| Boston Terrier | 11-15 years |
| Boxer | 9-10 years |
| Bulldog | 8-12 years |
| Cavalier King Charles Spaniel | 9-14 years |
| Chihuahua | 15-20 years |
| Dachshund | 13-15 years |
| Doberman Pinscher | 10-13 years |
| French Bulldog | 8-10 years |
| German Shepherd Dog | 11 years |
| German Shorthaired Pointer | 12-14 years |
| Golden Retriever | 11 years |
| Great Dane | 6-8 years |
| Havanese | 14-16 years |
| Labrador Retriever | 11 years |
| Miniature Schnauzer | 12-14 years |
| Newfoundland | 8-10 years |
| Pembroke Welsh Corgi | 12-15 years |
| Pomeranian | 14 years |
| Poodle | 12 years |
| Rottweiler | 9 years |
| Saint Bernard | 8-10 years |
| Shetland Sheepdog | 12-13 years |
| Shih Tzu | 12-16 years |
| Siberian Husky | 12-15 years |
| Yorkshire Terrier | 13-20 years |
According to records, the longest-lived dog was an Australian Cattle Dog named Bluey, who lived for 29 years and 160 days.
Concerning breeds, some studies have shown that inbreeding may lead to a shorter lifespan in a dog, while crossbreeding could lead to a longer lifespan.
The reason is that an inbred dog carries the diseases prevalent in its breed, whereas a crossbred dog generally inherits its parents' good characteristics and not many of their diseases.
Healthcare
Just like humans, a dog that is adequately cared for, given a proper diet, and frequently exercised has a greater chance of living longer than a dog that is obese and gets no regular physical activity.
Let's further discuss the important points under healthcare.
1. Proper care
A dog should have regular check-ups with a veterinarian. This is like having an annual physical examination in human terms. A vet can check your dog's condition, advise you on the right diet for her, and give her the important shots she needs.
Some studies have established a positive connection between spaying and neutering and longer lifespans among dogs.
Dogs who are neutered or spayed at a young age have reduced risks of developing cancers like ovarian cancer, breast cancer, or testicle cancer.
However, other studies indicate that there is no link between these surgical procedures and disease. Although it is known that spayed females live longer than intact females, some neutered male dogs develop cancer of the urinary tract or prostate cancer.
But there is a marked reduction in stress among dogs that have been neutered and spayed. Less stress is a big factor in longer lives among dogs.
2. Diet
The debate on the ideal diet for a dog will probably never end. This is because each dog is unique.
Some dogs do well on raw diets and live long lives. Some dogs thrive on wet food. Some dogs need special diets because of sensitive stomachs. Some dogs are allergic to white meat. Some dogs are unable to handle grains. Some dogs aren't fond of vegetables. Some dogs can eat anything.
Your dog’s diet has to meet her specific nutritional requirements at each level of her life.
She will need a diet for maximum growth when she is still a puppy. This means she needs protein and calcium for developing muscles and bones.
When she becomes an adult dog, she will need protein, vitamins, and minerals for energy.
When she becomes a senior dog, she might need a softer diet for easier digestion.
Another reason why frequent visits to the vet are essential is selecting the right diet. Your veterinarian can help you track your dog's response to certain foods, find out which ingredients she is allergic to, and answer your questions about which food is best for her.
3. Regular exercise
Obesity among dogs can lead to many other illnesses. This is one of the many reasons why regular exercise is important for dogs.
Many owners have testified that regular exercise has also led to more well-behaved dogs.
Just like diets, exercise should be adjusted according to your dog’s age.
Puppies have soft pads, so those under three months should only run on soft surfaces such as carpet or grass. Don't let your young puppy constantly run up and down the stairs, because she might develop hip dysplasia later on. Also, don't take your young pup on long runs.
When your dog is a teenager or an adult, then you can have long walks and runs with her. You can also play active games and go through intense training with her.
Of course, if your dog becomes pregnant, then her physical activities should be less intense.
This is also true when she becomes a senior. In old age, your dog could suffer from age-related illnesses like arthritis, so her exercise should not be intense or active.
Conclusion
It is a sad fact of life that your dog will have a shorter lifespan than you. She will develop and age faster than you.
This is one reason why every stage of your dog's life is important. Whether your dog is a cute puppy, an energetic adult, or a sweet senior, she will need your attention and care to live a happy life.
Kelli Kennedy is a Research Associate on the Corporations and Food Insecurity in the Global North research project, led by Dr Hannah Lambie-Mumford. The project explores the different dimensions of relationships between food corporations and food charities in the UK, Europe and North America within the context of the global food economy.
Kelli is also completing her doctoral thesis at the University of York in the Department of Social Policy and Social Work, comparatively exploring the contextual factors behind food (in)security in the UK and US.
Kelli also works as a researcher for the project, ‘Understanding family and community vulnerabilities in transition to net zero’, funded by the Nuffield Foundation. She is the founding core member of the Social Policy Association’s Climate Justice and Social Policy group, whose aim is to follow and critically assess the interactions between UK climate and social policy.
- Qualifications
- BA Double Major Political Science and Women's Studies, California State University, Fresno
- MA Comparative and International Social Policy, University of York
- Research interests
Kelli’s research interests revolve around food (in)security in the UK and US and the factors that contribute to food security status both internal and external to one’s household. Her research includes comparative policy analysis, and review of specific local and national policies addressing food insecurity in the UK and US via case studies.
Her other research interests include the intersection between food (in)security, net zero, and environmental social policy.
In 2021 Kelli led an ESRC Impact Accelerator Account project entitled, ‘Supermarket Corporate Social Responsibility Schemes: Working Towards Ethical Schemes Promoting Food Security’ via The University of York Social Science Enterprise Scheme (SSES). The project included a workshop with UK food charity practitioners, knowledge exchange in the UK supermarket industry, and a webinar for wider stakeholders addressing how supermarkets can help end food insecurity. | https://www.sheffield.ac.uk/politics/people/academic-staff/kelli-kennedy |
By Duane Friend
Looking at the title of this article sounds almost like a ′60s rock band, doesn’t it?
Actually, I want to talk about precipitation. You’ve probably heard about acid rain and the environmental concerns associated with it. What it really should be called is “precipitation that is more acidic than normal,” because precipitation is naturally slightly acidic. First let’s give some background on what acidity is.
In simple terms, a substance is acidic when it has the ability to donate a proton (also called a hydrogen ion). The more of these available, the more acidic a substance is. This is measured using a pH scale, which stands for the potential of hydrogen ion activity. The pH scale runs from 0 to 14. Neutral is 7, below 7 is acidic, above 7 is alkaline or basic. For acids, a whole number decrease equates to 10 times more acidity. Two numbers down, 100 times more acidity. For that reason, most pH numbers include a decimal to be more precise.
Precipitation forms thousands of feet above ground in clouds. As it falls through the atmosphere, it interacts with gases like carbon dioxide, creating carbonic acid. This lowers the pH of “natural” precipitation to the 5.3 to 5.5 range. Natural processes in the environment are adapted to and work well with this slight acidity in precipitation. And, yes, I keep saying precipitation instead of just rain, because all precipitation has this acidity.
Human activity has increased the acidity of precipitation. Burning coal releases huge amounts of sulfur dioxide into the atmosphere because most coal contains sulfur. When precipitation interacts with sulfur dioxide, it creates sulfuric acid, lowering the pH of rain and snow to 4.0 or less. Some of the lowest recorded pH values were around 2.1, which is more than 1,000 times more acidic than natural precipitation! The term “acid rain” was coined way back in the 1870s by Robert Angus Smith, a Scottish chemist. Much like greenhouse gases and atmospheric warming, these processes were established as scientific fact long before they became environmental problems.
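Because pH is logarithmic, the difference between two readings converts directly into an acidity ratio of 10 raised to that difference. Here is a minimal sketch of that arithmetic, using the figures above:

```python
# Relative acidity from two pH readings: each whole-unit drop in pH
# means ten times more hydrogen ion activity, so the ratio is
# 10 ** (reference_pH - sample_pH).

def acidity_ratio(ph_reference, ph_sample):
    """How many times more acidic the sample is than the reference."""
    return 10 ** (ph_reference - ph_sample)

print(acidity_ratio(5.4, 4.0))  # acid rain vs. natural rain: ~25x
print(acidity_ratio(5.3, 2.1))  # lowest recorded pH: ~1,585x, i.e. well over 1,000x
```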
Nature is not adapted to this much acidity. Wetlands, wildlife, forests, roads, bridges, buildings and soils all degrade (or die out) quickly under highly acidic conditions. This was a major concern in Europe and the eastern United States from the 1970s to the mid-2010s. While not quite back to normal, the acidity of precipitation has improved significantly, according to the National Atmospheric Deposition Program. The University of Illinois and the Illinois State Water Survey are part of the NADP network.
Why have things gotten better? Because the amount of sulfur-laden coal being burned has dropped dramatically in the U.S.
In addition, something called “scrubbers” can be used to remove most of the sulfur from the combustion process.
Other areas of the world are still experiencing acid precipitation. Wherever coal use is still a main form of energy use, this will continue to be an environmental issue. The fact that things are trending for the better in some places shows that the environment can improve — it’s not just a downward spiral.
Duane Friend is the State Master Naturalist and Climate Change Specialist with University of Illinois Extension. | https://www.agrinews-pubs.com/community-contributed/2022/11/11/extension-notebook-rain-and-acid/ |
Created by Buchan’s brand experience team, our new brand identity coheres around a concept of balance.
A balance between past and future, vision and outcome, design and delivery all reflected in the symmetry and typography of BUCHAN.
The white colour palette represents a coming together of our people and studios into one strong and unified company — a spectrum of colour passing through a prism.
The patterns and textures evoke the balance of services we offer and the scales in which we design. | https://www.buchangroup.com/insight/buchan-design-realised/ |
Gulf Winds International, Inc. was founded in April 1996. The idea was to develop a full service, third party logistics company that would focus on warehousing, land transportation, distribution and consolidation.
In 1997, the Houston Trucking division opened with 5 trucks, and by 1998 Gulf Winds had opened its 2nd warehouse facility. By 2012 Gulf Winds opened its 7th warehouse. Today Gulf Winds is operating with over 2.3 million square feet of warehouse space. By any measurement standard, Gulf Winds is a remarkable success story. In just a few years, the company has grown from only four employees with one account to over 500 employees including ILA and contract labor and thousands of accounts with several locations in Houston and Dallas.
Purpose of Position
The Terminal Manager is responsible for the oversight and management of the dispatch operations group in the Memphis Terminal. The position will be responsible for managing applicable personnel within the department, as well as overseeing daily capacity planning, equipment utilization relative to the transportation group, driver retention within the dispatch group, ELD compliance relative to Houston trucking operations, and operational cost, as well as creating processes and efficiencies within the same group.
Primary Job Responsibilities include the following. Other duties may be assigned as deemed appropriate by Management.
- Oversee management of the Dispatch Operations staff in Memphis via the Assistant Operations manager, as well as supporting clerical staff.
- Create and manage efficiency driving metrics, to include chassis utilization, specific desk, lane, truck and regional margins, HOS efficiencies, and company service related metrics.
- Oversee daily, weekly, and monthly operational cost, relative to storage, per diem, leased chassis cost, etc.
- Oversee driver retention goals within and relative to the transportation operations group in Memphis.
- Oversee work load scheduling for the Memphis terminal
Requirements: Knowledge, Skills, and Abilities
- Must possess strong communication & writing skills necessary to interface with internal and external customers, as well as different departments, and varying levels of management, on a daily basis.
- Must be a self-starter, willing to work with minimal supervision, and able to execute tasks and duties at a high level.
- General knowledge of transportation industry, logistics and supply chain.
- The candidate must have excellent organizational skills.
- Familiar with the “owner-operator” model, within the transportation industry.
- Ability to hire, train, and develop employees to grow within the organization.
- Strong interpersonal skills with an understanding and an emphasis on communication, training, interdepartmental interaction, supervision and management
- Perform other related duties as assigned.
- Strong understanding of company margins and driver pay structure.
Minimum Qualifications:
- 5 Years of transportation experience
- 4 years of management experience
- Must be computer literate with MS Word, Excel and Outlook
- Must be dependable and able to work independently
- Excellent written and oral communication skills. | https://gulfwinds.recruiterbox.com/jobs/fk03w3j |
If there’s one thing about which I’m absolutely certain, it’s that it doesn’t pay to be too certain. If I knew all the answers before taking on a task, it probably wouldn’t be a very interesting one. Early in my programming career, I made the decision that I would only stay with a job as long as I was learning new things. Any time I knew everything, it was time to move on.
Twisty Passages, All Different
“You can never step into the same river, for new waters are always flowing past you.” – Heraclitus of Ephesus
Life has a lot of repetition. Sometimes it feels as though you’re dropping a red vase in a “maze of twisty passages, all alike”, exploring the maze for several hours, and ending up back at the red vase. At first, it seems as though no progress has been made at all.
But there is progress. Before returning to that spot, you probably also dropped a few other objects in different sections of the maze. You may be revisiting a location, but the state of the maze – like that of Heraclitus’s river – has changed. You have more information and can make more refined decisions.
It’s All Right to Be Wrong
“The greatest mistake you can make in life is to be continually fearing you will make one.” – Elbert Hubbard
The key to making a tough choice is being willing to change your mind. If you reach a dead end, back up and try another path. This isn’t true only in adventure games; real life has many opportunities to rethink decisions and make better choices. Some choices – taking a particular job or having a baby, for example – of course commit you for a time. And that’s a good thing – You really need to give either of those time to do well; then decide whether your original decision was the right one.
Remember, there are no bad decisions. If it’s a meaningful choice, it’s also a difficult one. And that means that there are reasons for making a particular choice and reasons for doing something else. Don’t beat yourself up over small “mistakes”; learn from them instead. And when it’s time to make a similar decision, you’ll have more information and the chance to make a better choice.
Embrace Complexity
“The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time and still retain the ability to function.” – F. Scott Fitzgerald
To me, oversimplification of a complex issue is a capital mistake… and one we see all the time in news reporting, political analysis, and the corporate boardroom. It’s very understandable – When confronted with a really complex issue, we feel overwhelmed and uncomfortable. Simplifying the decision – usually by focusing on a single aspect of it – makes us feel more in control.
Politicians are often tarred with labels such as “fence-straddler” or “flip-flopper.” We want our representatives to have definite opinions and stick with them. But that isn’t a good reflection of reality. The issues debated in Congress, Parliament, and other political institutions don’t have nice simple answers. That’s because the simple questions get handled all the time by individual workers. Only the hard ones come up for voting.
Currently, the United States suffers from an excess of certainty. There is a line drawn in the sand between the Republican and Democratic representatives, and very few are willing to cross over it. Instead of carefully considering each issue, representatives blindly vote on party lines. Issues such as the bank bail-out, universal health insurance, and others are not at all straightforward. And yet, on many of them, all of the Democrats vote one way, and all of the Republicans against them. That degree of consensus tells me that our representatives are not thinking about the issues. They’re voting the way they’re told to vote. There is no individual judgment, and to me, that means there is no real intelligence being applied to our laws.
That’s an oversimplification in its own right, of course. I’m sure our representatives and their staffs do a tremendous amount of work writing bills and amending them to reflect their constituencies. That’s where the intelligence comes into the process. But the final decision is a vote, and most of the time, that vote doesn’t seem to reflect anything more than a rubber stamp of political party positions.
Analysis Paralysis
We can suffer from too much information. Our brains are designed for simple survival decisions – “If I sleep on the ground, predators may kill me, so I’d better either sleep in the trees or make myself a strong shelter.” We can cope with decisions like these. But modern life is much more complicated. We can spend hours – or hundreds of hours – researching questions on the Web and other resources. It’s very easy to get so much information on a subject that a meaningful decision is too hard to make.
Malcolm Gladwell wrote about this conflict in several of his New Yorker Magazine articles reprinted in his book, “What the Dog Saw.” For example, he talks about intelligence failures in attacks such as 9/11/2001, the 1973 Syrian and Egyptian attacks on Israel, and the 1998 terrorist attack on the US embassy in Nairobi. Each time, there were – at least in hindsight – clear indications that an attack was imminent.
The problem is that there is just too much information. Yes, there were leads suggesting each of these attacks. But in the case of the 1998 attack, for example, “the FBI’s counterterrorism division had sixty-eight thousand outstanding and unassigned leads dating back to 1995.” It isn’t in the least bit surprising that one letter – from an informant who was considered not credible – was ignored. There was just too much information, much of it contradictory, and most of it useless.
Trust Your Instincts
How do we make intelligent decisions when we have too much information, or too little? “How We Decide,” by Jonah Lehrer, studies this question. While people can’t make millions of calculations per second as does a computer, we make surprisingly accurate decisions all the time. That’s because we have a built-in memory and feedback mechanism that recognizes patterns and gives us positive feedback when the patterns look “right”.
A chess grandmaster can glance at the board and immediately pick out four or five moves that have the most potential. Then he’ll work through those possibilities and choose the move that seems most promising. This sort of decision is based on knowledge and experience, but the immediate decision is made by “feel”.
Are your palms sweating as you contemplate a decision? Ears ringing? Arms shaking? Your body and mind are trying to give you feedback that – based on your previous experience – something is wrong. Pay attention to those instincts and you’ll make much better decisions than if you try to exhaustively analyze every question. Then learn from the results so your instincts will improve each time.
The Simple Answer Is…
… that there are no simple answers. We live in a complex world full of difficult and complicated decisions. The best we can do is to try to make reasonable choices, pay attention to our instincts, and learn from our inevitable mistakes.
Life isn’t just an adventure game; you have a lot more freedom of choice. Sometimes you need to break out of the maze and make your own twisty passages. And sometimes you seem to end up right back where you started. But you never step in the same river twice; the experience from your previous decisions helps you to make better ones as you go along. In the end, it all comes down to this simple guide:
1. Make a decision that feels right.
2. Live with it, but also learn from it.
3. Rinse and repeat.
Don’t be afraid of uncertainty. Being uncertain just means you have meaningful choices. And that’s what makes the game (of life) fun… even when you don’t know what your next move should be.
Similar Posts: | http://www.theschoolforheroes.com/questlog/879/the-power-of-uncertainty/ |