(RELEASED) WHO SHOULD WE EAT
A mysterious accident causes your plane to crash onto an uncharted island...and then you resort to cannibalism way too fast.
Who Should We Eat? is a lighthearted social game about plane crash survivors trying to get off an island. The only way off is to build a raft. Players play cards to gather resources for the group – adding food, sanity, and wood for the raft. Unfortunately, food is scarce and people are edible. Trials will be held to choose which survivor must sacrifice his or her life - or sometimes a knife fight will determine who gets eaten. For the survivors, this means a loss of sanity, but it also means their raft need not be as large!
However, those who are eaten come back as ghosts and exact revenge upon the survivors, trying to keep them on the island until their sanity runs out or no one is left to build the raft.
Can the team of survivors get off the island before the team of ghosts stops them?
Contents:
12 Character Cards
Over 100 Food, Sanity, Build, and Ghost Cards
9 Long Straws
1 Short Straw
1 Wooden Conch
1 Board and Markers to track Sanity, Food and Raft Building
Players: 4-10
Ages: 14+
Playtime: 30 min
| https://northmencollectibles.ca/index.php/shop/boardgames/who-should-we-eat-detail |
Two D.C. destinations made the semifinalist list for the country’s best new restaurant as the James Beard Foundation released nominations for its coveted annual culinary awards to be doled out in May.
Thamee at 1320 H St. NE and Rooster & Owl at 2436 14th St. NW both opened last year and join 30 names in contention nationwide for the James Beard Award for best new restaurant for 2020.
In all, more than a dozen D.C.-area restaurants, restaurateurs and chefs are among the semifinalists for a James Beard Award across several categories. That includes Ann Cashion of Johnny’s Half Shell and perennial favorite Vikram Sunderam of Rasika battling it out in the outstanding chef category. Angel Barreto of Anju, Daniela Moreira of Timber Pizza Co. and Paola Velez of Kith and Kin are semifinalists in the rising star chef of the year category. Kwame Onwuachi of Kith and Kin took home the rising star chef award last year.
Jose Andres’ Jaleo and Johnny Monis’ Komi are once again semifinalists…Read the full story from the Washington Business Journal. | https://wtop.com/food-restaurant/2020/02/d-c-region-gets-several-shoutouts-among-2020-james-beard-semifinalists/ |
The Australasian Centre for Corporate Responsibility (ACCR) has commented on its submission to the Safeguard Mechanism reform.
Commenting on the submission, Alex Hillman, Lead Carbon Analyst at the Australasian Centre for Corporate Responsibility (ACCR), said:
“Previous iterations of the Safeguard Mechanism have had little impact because industry has negotiated complicated rules and was allowed to set its own limits.
“Industry cannot realistically set its own limits if we want real emissions reductions.
“Whilst emissions from the rest of the economy have decreased since 2005, Australian industrial emissions have increased by 17%. Industry is not only failing to reduce its emissions - its emissions have been increasing. This is unacceptable.
“Industry analysis has concluded it can reduce its emissions by as much as 88%, so this is the time for a Safeguard Mechanism with teeth; there’s no longer any excuse not to have one.
“For industry to meet its fair share of Australia’s 2030 emissions reduction target, the Government will need to double the ambition for the Safeguard Mechanism, with annual reductions of at least 10 million tonnes of greenhouse gases. If carbon bombs like Browse and Beetaloo proceed, the associated emissions will transfer a greater emissions reduction task to every other Australian project.
“Strict limits must be placed on the use of carbon credits, which should be capped at 5% to prevent industry outsourcing its emissions reductions to the land sector. | https://www.accr.org.au/news/safeguard-mechanism-with-teeth-needed/ |
pressure (COP), and integrated EMG activation of the gluteus medius (Gmed), gluteus maximus (Gmax), tibialis anterior (TA), and peroneus longus (PL) during each reach direction of the SEBT. Results: There was a significant difference in mean outcome measures between the three study groups. When compared to copers and controls, the CAI group demonstrated significantly diminished dynamic stability as quantified by reach distance and COP measures (p<0.05) and less EMG activity of the muscles acting on the ankle and hip (p<0.05). Conclusion: Our findings indicate that individuals with CAI exhibited diminished dynamic stability and decreased EMG activity of ankle and hip musculature during functional testing. Alterations in both proximal and distal muscle activity appear to negatively affect measures of postural control and movement quality, which may lead to prolonged functional impairment and increased recurrence of lower extremity injuries in this population. Hence, implementing functional exercises that target hip and ankle muscles in the rehabilitation of ankle instability might benefit these patients.
LLU Discipline
Physical Therapy
Department
Physical Therapy
School
School of Allied Health Professions
First Advisor
Lohman, Everett B., III
Second Advisor
Bains, Gurinder
Third Advisor
Daher, Noha S.
Degree Name
Doctor of Science (DSc)
Degree Level
D.Sc.
Year Degree Awarded
2017
Date (Title Page)
12-2017
Language
English
Library of Congress/MESH Subject Headings
Ankle -- Abnormalities; Sprains and Strains;
Subject - Local
Ankle Sprains; Musculoskeletal Injuries; Postural Stability
Type
Dissertation
Page Count
123
Digital Format
Digital Publisher
Loma Linda University Libraries
Copyright
Author
Usage Rights
This title appears here courtesy of the author, who has granted Loma Linda University a limited, non-exclusive right to make this publication available to the public. The author retains all other copyrights.
Recommended Citation
Jaber, Hatem, "A Comparison of Neuromuscular Control between Subjects with and without Chronic Ankle Instability" (2017). Loma Linda University Electronic Theses, Dissertations & Projects. 517. | https://scholarsrepository.llu.edu/etd/517/ |
Is 346 prime?
Prime Factors Calculator
Enter a natural number to get its prime factors:
Ex.: 4, 8, 9, 26, 128, etc.
Results:
The number 346 is a composite number because, besides 1 and itself, it can be divided exactly by at least 2 and 173. A composite number is an integer that can be divided without remainder by at least one natural number other than itself and 1.
The factorization or decomposition of 346 is 2 • 173. (When a prime factor repeats, it is written in exponential form; here each prime appears only once.)
The prime factors of 346 are 2 and 173.
The number of prime factors of 346 is 2.
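The check the page performs can be sketched with simple trial division in Python (a generic sketch of the technique, not the site's actual implementation):

```python
def prime_factors(n):
    """Return the prime factorization of n (n >= 2) as a list of primes, via trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:  # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:              # whatever remains after trial division is itself prime
        factors.append(n)
    return factors

print(prime_factors(346))  # [2, 173] -> two prime factors, so 346 is composite
```

A number is prime exactly when this list contains only the number itself.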
Factor tree or prime decomposition for 346
As 346 is a composite number, we can draw its factor tree:
Here you can find the answer to questions like: Is 346 prime? Or: What are the factors of 346? Use our online calculator to find the prime factors of any composite number and to check whether a number is prime or composite. The tool also draws the prime factor tree if the number is factorable and smaller than 16,000.
Other calculators related to prime numbers:
What is a prime number? How do you factorize a number?
Watch the video below to learn more about prime numbers, prime factors, and how to draw a factor tree.
You can also find this video about factorization at mathantics.com
Links:
Other ways people search for this question
- Is 346 a prime number?
- Is 346 prime or composite?
- Is 346 a composite number?
- How to find the prime decomposition of 346?
- how many factors does 346 have? | https://factors-of.com/Is_346_prime%3F |
Joan Baez made her debut appearance at the Newport Folk Festival in 1959. Fifty years later, she returned to that same Rhode Island stage on August 2, marking the festival’s and her 50th anniversaries. She is presently on a worldwide tour in celebration of her 50 years as a performer and in support of her Grammy-nominated CD, Day After Tomorrow.
In the first comprehensive documentary to chronicle the private life and public career of Joan Baez, AMERICAN MASTERS examines her history as a recording artist and performer as well as her remarkable journey as the conscience of a generation in “Joan Baez: How Sweet the Sound” (w.t.), airing Friday, May 31, 2013 at 9pm on WMHT TV. The film coincides with a 2009 DVD/CD release from Razor & Tie, which features the film with bonus content and an audio CD of music from the film. The audio CD contains rare live performances and studio recordings that span Baez’ career.
Following Baez on her 2008/2009 world tour, the filmmakers captured her in performance as well as in intimate conversations with individuals whose lives parallel hers. From a stop in Sarajevo, Bosnia, to revisit the scene of the singer’s courageous trip to that war-torn city in the middle of the 1993 siege, to Nashville, Tennessee, where she joined Steve Earle to talk about their collaboration on the 2008 Grammy-nominated album Day After Tomorrow, the film allows viewers an unprecedented level of access to Baez.
Shot in high definition with a natural, filmic look, Baez is also joined onscreen by David Crosby, Bob Dylan, Roger McGuinn and Reverend Jesse Jackson, among others, to illuminate this extraordinary life. Rich historical footage — Baez’ controversial visit to North Vietnam, where she is seen praying with the residents of Hanoi during the heaviest bombing of the war; Martin Luther King Jr. outside a California prison where he visited Baez to offer his support after she was jailed for staging a protest; Baez at her first Newport Folk Festival in 1959 and as a teenager performing at the historic Club 47 — is woven into the story so viewers can experience scenes from her life that have never been uncovered.
At the heart of the film is Baez’ power as a musician — from her tentative teenage years in the Cambridge, Massachusetts, coffee houses to her emergence onto the world stage and the 50-year career that followed. Joan Baez is a musical force of nature, and this film captures her strength as a performer and the influence she has brought to bear on successive generations of artists. | http://www.wmht.org/blogs/american-masters/american-masters-joan-baez-how-sweet-sound/ |
How to Fix Your Computer Is Low On Memory Warning In Windows 10
Your Computer Is Low On Memory warnings happen when Windows runs out of space to put the data it needs to store when you’re running different applications. This can be either in the RAM modules in your computer, or on the hard disk once the free RAM has been filled up. Low memory problems can also occur when a program doesn’t free up the memory that it no longer needs; this is called memory overuse or a memory leak. If you are struggling with this problem, apply the solutions below.
Post Contents:
Your computer is low on memory
To restore enough memory for programs to work correctly, save your files and then close or restart all open programs.
Your computer has two types of memory, random access memory (RAM) and virtual memory. All programs use main memory (RAM), but when there isn’t enough RAM for the program you’re trying to run, Windows temporarily moves information that would normally be stored in RAM to a file on your hard disk called a paging file. The amount of information temporarily stored in the paging file is also referred to as virtual memory. Using virtual memory, in other words moving information to and from the paging file, frees up enough RAM for programs to run correctly. By default Windows automatically manages virtual memory, but increasing virtual memory is often a good fix for the Windows low memory warning. Let’s see how.
Increase virtual memory in Windows 10
- Press Windows + R, type sysdm.cpl and click OK.
- This command will open the “System Properties” of your computer.
- Once the System Properties window of your computer is opened, go to the Advanced tab.
- And click on the Settings option which is available under the “Performance” section.
- On the “Performance Options” window, go to the Advanced tab and click on the Change button which is located under the “Virtual Memory” section.
- Now you will see the Virtual Memory window on your computer screen.
- Here you have to uncheck the “Automatically manage paging file size for all drives” option first.
- Next, select the system drive, then enter custom values in the “Initial size (MB)” and “Maximum size (MB)” fields, and click Set.
How to calculate the virtual memory size?
To calculate the pagefile size, the initial size is usually one and a half (1.5) x the amount of total system memory, and the maximum size is three (3) x the amount of total system memory. So let’s say you have 4 GB (1 GB = 1,024 MB, so 4 x 1,024 = 4,096 MB) of memory. The initial size would be 1.5 x 4,096 = 6,144 MB and the maximum size would be 3 x 4,096 = 12,288 MB.
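That rule of thumb is easy to script. Here is a minimal sketch; the 1.5x/3x multipliers are this article's guideline, not a Windows requirement:

```python
def pagefile_sizes_mb(ram_gb):
    """Rule-of-thumb pagefile sizes in MB: initial = 1.5 x RAM, maximum = 3 x RAM."""
    ram_mb = ram_gb * 1024
    return int(1.5 * ram_mb), 3 * ram_mb

initial, maximum = pagefile_sizes_mb(4)
print(initial, maximum)  # 6144 12288 for a 4 GB machine
```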
After entering the Initial size (MB) and Maximum size (MB) values, click Set, then OK, and then Apply to save the changes. Windows will prompt you to restart: “You must restart your computer to apply these changes.”
After the restart, you should no longer receive low memory warning messages. This is the best working method and the one to try first; you can also apply the fixes below to prevent the Windows low memory warning error.
Run System Maintenance Troubleshooter
In some cases, if a program is force-closed, or if something isn’t working properly on your Windows 10 system, you might be prompted with the “Your Computer is low on memory” error message. This happens because Windows allocates too much virtual memory to the affected process while your system is trying to fix all the problems. In this case, run the System Maintenance troubleshooter and check.
- Type troubleshoot in the search box and select Troubleshooting.
- Click View all in the left-hand window pane.
- Next, click on System maintenance troubleshooter and follow the prompts.
Repair Corrupted Registry
A corrupted registry entry that consumes high memory can also cause this error. Check for corrupted registry entries and clean or repair them using a free registry optimizer tool such as CCleaner.
- Once you have installed CCleaner, run the program and open its registry cleaner.
- Select Scan for Issue and allow CCleaner to scan, then click Fix Selected Issues.
- When CCleaner asks “Do you want backup changes to the registry?” select Yes.
- Once your backup has been completed, select Fix All Selected Issues.
- After following the above steps, the “Your Computer Is Low On Memory” warning might be fixed
Increase your Physical RAM
If you still face the same warning message and your system keeps running at more than 90% RAM usage, you should consider installing more RAM in your system.
That’s all. Did these tips help fix the Windows 10 low memory warning? Let us know in the comments below. Also, read: | https://wintechlab.com/your-computer-is-low-on-memory/ |
How do you taper a brush stroke in Illustrator?
Hold down the Shift key as you drag out to get a perfect circle. Then click on the bottom anchor point of the circle and drag it down to create a longer tapered shape, like the one you can see below. Switch to the Anchor Point tool (Shift + C); it’s hidden under the Pen tool. Click once on the bottom anchor point that you dragged out.
Why can’t I use the brush tool in Illustrator?
You have no actual brush selected; it’s just set to Basic, which isn’t a brush type (just a weird default). “Basic” is not a brush. … In addition, Illustrator brushes rely on the stroke color, not the fill color.
What brushes do professional painters use?
For oil-based paints, most professionals choose a natural China-bristle (hog hair) paint brush. If you are painting a smooth surface with oil-based paint, a natural White Bristle paint brush is your best choice because it is soft and supple. | https://davidscottlyons.com/photoshop-tutorial/how-do-you-make-a-pointed-brush-in-illustrator.html |
Are you a Business Analyst who worries about how to handle a project that integrates Artificial Intelligence and Machine Learning, or do you wonder whether a Business Analyst has any role at all in that kind of project?
Of course yes! Business analysis has the same impact on an AI/ML project as on any other project. But whether you, as a business analyst, can make that impact on these projects depends purely on your skills and your adaptability.
To put AI/ML in simple words: you're giving the computer the data and tools to study a problem and solve it without human intervention. You're giving the computer the ability to introspect on what it did so it can adapt, evolve, and learn. The nine key steps in the AI/ML process are listed below.
Gathering Data
Data Preparation
Choosing a model
Training
Evaluation
Parameter tuning
Prediction
Test
Deploy
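The steps above can be illustrated with a toy, dependency-free sketch. Everything here is an assumption for illustration: the one-feature dataset is synthetic, and the "model" is a trivial nearest-mean classifier standing in for whatever algorithm a real project would choose.

```python
import random

# Steps 1-2: gather and prepare data -- synthetic (feature, label) pairs
random.seed(0)
data = [(random.gauss(2.0, 0.5), 0) for _ in range(50)] + \
       [(random.gauss(5.0, 0.5), 1) for _ in range(50)]
random.shuffle(data)
train, test = data[:80], data[80:]

# Steps 3-4: choose a model and train it -- here, a nearest-mean classifier
def fit(rows):
    return {label: sum(x for x, y in rows if y == label) /
                   sum(1 for _, y in rows if y == label)
            for label in (0, 1)}

def predict(means, x):
    return min(means, key=lambda label: abs(x - means[label]))

model = fit(train)

# Step 5: evaluate on held-out data
accuracy = sum(predict(model, x) == y for x, y in test) / len(test)
print(f"test accuracy: {accuracy:.2f}")

# Steps 6-9 (parameter tuning, prediction, final test, deployment) would
# iterate on this loop with new data and a production serving environment.
```

Note how steps 1-2 (the data) determine everything downstream: with a badly prepared dataset, no choice of model in steps 3-4 can rescue the evaluation in step 5, which is exactly the point the next paragraph makes.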
The first two steps are the basis of AI/ML. The quality and quantity of the data determine how good, useful, and accurate the prediction will be. Make a mistake here and, no matter how good the development strategy and the algorithm are, the product is not going to be as fruitful as expected. This is where you, as a Business Analyst, can play an important role. The lion’s share of your role will be in these two stages.
Before describing how to play the business analyst role in AI/ML projects, here are some of the challenges faced by many organizations and developers while adopting AI/ML.
Gathering the exact requirements of the project.
Ambiguous feature list.
Unclear product roadmap.
Finding KPIs to generate insights relevant for decision making.
Lack of knowledgeable resources
Choosing the right tech stack.
When we analyze these challenges, what we can infer is that the impediments were not in choosing an algorithm or training the system with the data, but at the first milestone. Over the course of the project, the team may be hindered by some significant technical challenges, but expertise and online guidance will help them find a solution. A mistake at the first milestone, however, will lead to a faulty product and wasted time. That's how important the role of an efficient BA is.
Let’s start breaking down how a BA can make up to be of help in these challenges.
1. Know what is AI/ML
A business analyst need not be tech-savvy, but as in any other project, you need to have an understanding of AI/ML. One of the existing challenges is the gap between skilled data scientists and business leaders who know how to drive the technology, and without a doubt, a knowledgeable BA can fill this gap.
A one-line requirement for an AI/ML project usually sounds like predicting the value, price, or outcome of something in the future. This may sound a little unfamiliar, but once we know how AI/ML works, we will at least know how to approach the requirement elicitation.
2. Know the Domain
Forrester recently reported that almost two-thirds of enterprise technology decision-makers have implemented, are implementing, or are expanding their use of AI. AI/ML has applicability across all industries and domains. Just as domain knowledge is crucial to effective business analysis in other projects, it matters in these projects too: the more thoroughly you know the domain, the more effectively you can contribute to the project's success.
3. Identify the right data repositories
The data might be scattered throughout the organization, or the data we need might not be readily available. Data may not have been collected or labelled yet, or may not be available for machine learning model development because of technological, budgetary, privacy, or security concerns. This varies by project, and in some business contexts the stakeholders will want to see how reliable the system is before investing their funds and time in collecting, labelling, and moving the data. There are many data sets you can find online for common use cases, but the team needs to know what data is available, which will also help them plan the model and the timeline.
4. Formulate a strategic approach
Two major challenges from the list above are an ambiguous feature list and finding KPIs to generate insights relevant for decision making. Analysis reveals that these challenges exist due to the absence of a strategic approach or plan. The stages of the plan are below.
Align Goals to business objectives: Get to know what are the organization’s objectives behind the project and what defines success. Then align the technical/project activities with those objectives
Identify KPIs: Key Performance Indicators separate meaningful information from an abundance of data. To know if we are successful in meeting the objectives there’s a KPI to prove it. Identify those indicators.
Track ROI: Track the results to validate how right you were about the KPIs and to know how accurate the system is.
Conduct Feedback gathering sessions.
An AI/ML product requires continuous improvement based on feedback, through many iterations of testing with the test data. The Business Analyst is responsible for finding the right stakeholders to include in the feedback sessions. Their feedback drives the project in the right direction. Gather the feedback, document it, and pass it to the team to address in the next milestone.
When business analysis is done the right way, the challenges discussed above won't be a big challenge to deal with at all. It is evident that with any new technology, business analysis and the business analyst need to evolve to adapt to its nature and characteristics. That said, AI/ML also requires business analysts to take the best of their skills and apply them in the modern context.
No matter what the project or technology is, business analysis has a role to play. And since it comes in the initial phase, any imperfection in the requirements will lead to frequent rework.
Get in touch with our Business Analysis experts for free insights on why your business should implement Artificial Intelligence in your processes. | https://www.sayonetech.com/blog/-role-business-analyst-age-artificial-intelligence/ |
What is psoriatic arthritis?
Psoriatic arthritis is a form of inflammatory arthritis that can develop in people with psoriasis, a lifelong skin condition that has been diagnosed in over 7 million Americans, according to the National Institutes of Health. Up to 30% of people with psoriasis develop psoriatic arthritis. Both psoriasis and psoriatic arthritis are chronic autoimmune diseases – conditions in which certain cells of the body attack other cells and tissues of the body.
Psoriasis is most commonly seen as raised red patches or skin lesions covered with a silvery white buildup of dead skin cells, called a scale. Scales can occur on any part of the body. Psoriasis is not contagious – you cannot get psoriasis from being near someone with this condition or from touching psoriatic scales.
There are 5 different types of psoriatic arthritis. The types differ by the joints involved, ranging from only affecting the hands or spine areas to a severe deforming type called arthritis mutilans.
What are the symptoms of psoriatic arthritis?
The symptoms of psoriatic arthritis may be gradual and subtle in some patients; in others, they may be sudden and dramatic.
The most common symptoms of psoriatic arthritis (you may not have all of these) are:
Joint symptoms
- Discomfort, stiffness, pain, throbbing, swelling, or tenderness in one or more joints
- Reduced range of motion in joints
- Joint stiffness and fatigue in the morning
- Tenderness, pain, or swelling where tendons and ligaments attach to the bone (enthesitis); example: Achilles' tendonitis
- Inflammation of the eye (such as iritis)
Skin symptoms
- Silver or gray scaly spots on the scalp, elbows, knees, and/or the lower spine
- Inflammation or stiffness in the lower back, wrists, knees or ankles, or swelling in the distal joints (small joints in the fingers and toes closest to the nail), giving these joints a sausage-like appearance
- Pitting (small depressions) of the nails
- Detachment or lifting of fingernails or toenails
Other symptoms (may help a doctor confirm the diagnosis of psoriatic arthritis)
- Positive testing for elevated sedimentation rate (indicates the presence of inflammation)
- Positive testing for elevated C reactive protein (indicates the presence of acute inflammation)
- A negative test for rheumatoid factor and anti-CCP (blood tests that help diagnose certain other forms of arthritis)
- Anemia - a state in which there is a decrease in hemoglobin (protein in the blood that transports oxygen) and red blood cells, which usually causes fatigue, shortness of breath, and a pale appearance
What causes psoriatic arthritis?
The cause of psoriatic arthritis is unknown. Researchers suspect that it develops from a combination of genetic (heredity) and environmental factors. They also think that immune system problems, infection, and physical trauma play a role in determining who will develop the disorder. Psoriasis itself is not an infectious condition.
Recent research has shown that people with psoriatic arthritis have an increased level of tumor necrosis factor (TNF) in their joints and affected skin areas. These increased levels can overwhelm the immune system, making it unable to control the inflammation associated with psoriatic arthritis. | https://my.clevelandclinic.org/health/diseases/13286-psoriatic-arthritis |
The Hyperloop is a concept proposed by Tesla and SpaceX founder Elon Musk in 2013, in which passenger capsules (Pods) float almost silently at velocities close to the speed of sound through a tube at about one thousandth of atmospheric pressure. In this way, heavily used intracontinental routes could be handled faster, more efficiently and in a more environmentally friendly manner than with conventional means of transport.
A journey from Munich to Hamburg, which takes seven hours by train, can be completed in 45 minutes with a Hyperloop Pod. This is faster than by plane, while the emissions of greenhouse gases are a factor of ten lower. It is made possible by extremely low friction caused by the low pressure in the tube and a levitation mechanism that makes the Pod float.
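That 45-minute figure is easy to sanity-check with a back-of-the-envelope calculation; the ~780 km Munich-Hamburg distance and ~1,000 km/h cruise speed below are assumptions for illustration, not figures from the text.

```python
def travel_minutes(distance_km, speed_kmh):
    """Travel time in minutes at a constant cruise speed (ignores acceleration/braking)."""
    return distance_km / speed_kmh * 60

print(round(travel_minutes(780, 1000)))  # ~47 minutes, consistent with the quoted 45
```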
In recent decades, aviation has not made any disruptive progress toward more sustainable technologies, and over short distances (< 1,500 km) it suffers an elementary systemic disadvantage compared to terrestrial means of transport: aircraft must first climb to a certain altitude after take-off and can operate at transport speed for only a fraction of their route. In addition, the carbon footprint of aircraft does not scale as directly with the use of renewable energies as that of electrically powered means of transport.
We are a group of students from Karlsruhe (GER) and we are convinced that the Hyperloop concept could change our way of moving forever! We want to face the technological challenges and pave the way into a new era, the era of zero friction.
- 41 of the 50 most frequently used flight routes worldwide are domestic, carrying more than 200,000,000 passengers each year.
- 100% is the predicted growth rate of worldwide flight volume over the next 20 years, requiring significant improvements to existing infrastructure.
- Airplanes produce 7 times as many greenhouse gases per passenger kilometer as conventional trains.
We strive for a cleaner future! | https://mu-zero.de/index.php |
Adi Shankara
Tags:
Biography
Founder of Advaita Vedanta (non-dualism) lineage and worshiper of the Divine Mother in various forms including Mother Kali.
Adi Shankara brought the various branches of Hinduism together and also organized the Sannyasi order (Hindu monks) into ten main streams.
Teachings
His main teaching is that the Para Brahman is One. The apparent diversity of this universe is an illusion due to Maya. Everything is One (not two, hence Advaita).
View Video
Books & Media
Upadesa Sahasri: A Thousand Teachings
(Paperback)
A major work of Shankara that gives a concise survey of Vedanta, with Devanagari text and an English translation with explanatory notes. Includes Ramatirtha's glossary and an index to slokas. The author has closely followed the commentary of Ramatirtha.
Pro Opinions
Greatest Advaita master of the world. | https://www.gurusfeet.com/guru/adi-shankara |
Factually is a newsletter about fact-checking and accountability journalism, from Poynter’s International Fact-Checking Network & the American Press Institute’s Accountability Project. Sign up here
Hello from wherever you are!
We’re halfway through (virtual) Global Fact 7, which was initially slated to run for three days this week in Oslo, Norway. But, as with so much else, COVID-19 changed that. So the world’s fact-checkers are spending five days talking shop and discussing the future from wherever a stable internet connection can be found.
This year’s conference features over 150 speakers in 17 time zones.
So far we’ve had panels looking at fact-checking during the coronavirus crisis, the relationship between fact-checkers and technology companies, as well as the latest research into the most effective methods of fact-checking.
Here are some themes from the conference so far:
COVID-19– The reason we’re all “here”
The conference began with a celebration of the work of fact-checkers fighting misinformation about the COVID-19 crisis. International Fact-Checking Network Director Baybars Örsek interviewed United Nations Under Secretary General For Communications Melissa Fleming who said COVID-19 has taught the world the importance of quality information.
IFCN Associate Director Cristina Tardáguila spoke with members of the CoronaVirusFacts Alliance about lessons learned so far in the pandemic. Project Coordinator Jules Darmanin said the alliance is confronting a greater number of conspiracy theory fact-checks, which are more difficult to debunk.
Fact-Checkers and tech platforms deepen their relationship
Global Fact’s 2020 State of Fact-Checking report released Monday showed more fact-checking organizations all over the world are becoming for-profit organizations. The report’s findings suggest this may be related to more investment from technology companies like Facebook and Google.
At a panel Tuesday, fact-checkers said tech platforms will need to be more transparent about their processes and data if they want this relationship to survive.
Tijana Cvjetićanin, fact-checking and research coordinator at Bosnian fact-checking organization Istinomjer, said this transparency is essential for fact-checkers to maintain their credibility and independence.
First Draft’s Claire Wardle in a later panel added platforms must do more to share data with researchers and fact-checkers, which would help them find more approaches to checking misinformation.
What do the researchers say?
Wednesday’s academic panels focused on trust in media, the presentation of fact-checks and methods to fact-check health misinformation.
Rasmus Kleis Nielsen from the Reuters Institute for the Study of Journalism said there is broad trust in mainstream news organizations, but still a sizable minority of citizens avoiding the news altogether.
Full Fact Researcher Dora-Olivia Vicol said conspiratorial thinking stems more from an individual’s worldview than a lack of access to accurate information, and that much more research is needed to understand this distrust.
– Harrison Mantas, IFCN
. . . technology
- Google is adding fact-checks to image search results, the company announced this week.
- “Now, when you search on Google Images, you may see a ‘Fact Check’ label under the thumbnail image results,” wrote Harris Cohen, the company’s search product manager. “When you tap one of these results to view the image in a larger format, you’ll see a summary of the fact check that appears on the underlying web page.”
- Twitter added a label to another post by President Donald Trump this week, saying it violated its policy against abusive behavior. In the tweet, Trump threatened “serious force” against any protesters who might try to set up an “autonomous zone” in Washington.
. . . politics
- ProPublica’s Eric Umansky has an excellent rundown this week of how federal agencies have adapted their policies to align with misinformation coming from President Trump.
- No, Rep. Alexandria Ocasio-Cortez (D-N.Y.) did not tweet that businesses should be kept closed until after the 2020 presidential election. Here’s the fact check from Snopes.
- This is another case of a faked screenshot.
. . . science and health
- Misinformation about COVID-19 across Africa is complicating health care systems' efforts to get a handle on the virus, Quartz reported. More than 277,000 cases have now been recorded across the continent, where health systems as well as front-line medical workers are already stretched thin.
- Yomi Kazeem reported broadcasts on WhatsApp about fake cures and remedies were reminiscent of the 2014 Ebola outbreak, when two people died and several were hospitalized in Nigeria after “cures” involving salt baths circulated on social media.
President Trump has said several times that violence at anti-police protests across the United States is the work of antifa, the anti-fascist, leftist activists who confront neo-Nazis at demonstrations. The Washington Post this week wrapped his statements up into one multi-faceted fact-check to show that there is little evidence for his claims.
The Post has made fact-checking Trump’s falsehoods into something of a specialty, and in fact has published a whole book about them. Debunking the antifa claims, however, was challenging. It required looking at arrest records in various cities, speaking to eyewitnesses, reviewing videos and photos from the scenes and talking to expert witnesses about the degree of coordinated antifa activity they were seeing in the violence.
In the end, reporters Meg Kelly and Elyse Samuels concluded, Trump provided no support for the antifa claim, and much of the evidence they found actually contradicted his assertion.
What we liked: The work that went into this fact-check matched the high stakes of the conclusion. In a complex situation like the protests, it might be tempting for people to believe the easy answer that one group is responsible for violence. This fact-check was worth the effort to show the hollowness of the antifa claim. The package also showcases The Post Fact-Checker’s emphasis on video.
– Susan Benkelman, API
- The New York Times traced how false messages about antifa spread across the United States to create a perception of a “threat that never appeared.”
- Trump’s supporters have been sharing a photo of a large outdoor crowd, saying that it was from the president’s rally in Tulsa on June 20. It’s actually a photo of a “Rolling Thunder” event in 2019, wrote FactCheck.org.
- TikTok has signed onto the European Union’s Code of Practice on disinformation. TechCrunch has a good plain-language description of what it means to agree to the code.
- NPR interviewed disinformation researcher Nina Jankowicz about her recent argument in Wired magazine that Facebook groups are “destroying America.”
- The outdoor retailer REI has joined The North Face, Upwork and Patagonia in pulling ads from Facebook, saying it has not done enough to rein in misinformation and hate speech on the platform.
That’s it for this week! Feel free to send feedback and suggestions to [email protected]. And if this newsletter was forwarded to you, or if you’re reading it on the web, you can subscribe here. Thanks for reading. | https://www.poynter.org/fact-checking/2020/globally-distanced-fact-checkers-gather-to-move-the-field-forward/ |
CROSS REFERENCE TO RELATED APPLICATIONS

(Not Applicable)

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

(Not Applicable)

BACKGROUND OF THE INVENTION

1. Technical Field

This invention relates to the field of computer speech recognition and more particularly to a method and system for identifying excess noise in a computer system.

2. Description of the Related Art

Speech recognition, also referred to as speech-to-text, is the technology that enables a computer to transcribe spoken words into computer recognized text equivalents. Speech recognition is the process of converting an acoustic signal, captured by a transductive element, such as a microphone or a telephone, to a set of words. These words can be used for controlling computer functions, data entry, and word processing. The process can be initiated by speaking into a microphone. The microphone can capture the sound waves and can convert them into electrical impulses. Subsequently, a sound card can convert the electrical impulses from an analog acoustic audio signal into a digital audio signal.

Excess noise can adversely affect applications that require clean audio signals to properly function. Speech recognition software expects to “hear” only the speaker's voice and not extraneous noises. Of course, noises exist everywhere, intermittent and continual. Consequently, speech recognition software often attempts to assess the level of background noise at the outset. Having measured the level of background noise, the speech recognition system can subtract the measured noise from the speaker's acoustic signal.

Generally, background noise can include external background noise and internal system noise. Sources of external background noise can include regular home or office noises—conversation, the radio, traffic, telephones, the consumption of snack foods, and the crumpling of paper. In contrast, sources of internal system noise can include the electronic components on the sound card, network interface adapter or the modem, the system power supply, the microphone, the motors in a floppy, hard or CD-ROM drive, the printer engine, the scanner engine, and electrical activity stemming from the use of the keyboard, speakers or mouse. Though both external noise and internal noise can detrimentally affect the operation of a computer audio system, because external noise typically includes sounds within the realm of the human auditory system, only external noise can be easily identified by human users. In contrast, human users cannot aurally identify internal noise. Moreover, because internal noise is inherently unrecognizable to the human user, internal noise in most instances goes undetected by the human user.

In present systems, engineers recognize the multitude of potential sources of internal system noise. In the case of 32 and 64 bit sound cards, for instance, cross-talk can occur between the excess number of components placed on the sound card. Notably, many users of 32 and 64 bit sound cards have experienced problems with reducing internal system noise. Also, engineers note that sound chips permanently built-in on the main circuit board, resulting from space restrictions and cost cutting, often lead to a high level of background noise. Also, on-board chip sets are notorious for picking up electronic noise, particularly in the presence of excess disk activity.

Notwithstanding, where a human user can identify a noise generating internal component of a computer system, the user can remove the noisy component and the corresponding detrimental effect of the noisy component. Alternatively, in recognizing internal noise, a human user can avoid the use of the noisy system in its entirety. In either event, the identification of internal noise and the corresponding remedial action can translate into more productive audio application usage for the user.

At least one present speech recognition system has incorporated rudimentary noise detection. Yet, where included, present noise detection systems measure only a gross signal-to-noise ratio, taking into account the computer system as a whole. Present noise detection systems cannot isolate the source of internal noise. Moreover, present noise detection systems are unable to identify specific computer system component sources of the internal noise, and consequently are unable to recommend a remedy for the identified internal noise. Finally, present systems perform an incomplete analysis resulting in a potentially inaccurate diagnosis of internal noise level. Typically, present systems assess the background noise once, during a setup sequence, and use this measurement throughout future dictation. As a result, the user may be unaware of changes in the background noise level. For example, if in a tested system an internal hard disk drive is a source of internal noise, but remains inactive during noise detection, the noise detection system would incorrectly conclude a “quieter” computer system than the system would conclude were the hard disk drive active during the same test. Thus, there exists a need for a noise detection system capable of exercising each potential source of internal noise in a computer system. Only a thorough noise detection system can properly diagnose existing levels of internal noise in a computer system.

SUMMARY OF THE INVENTION

The invention concerns a method and system for identifying excess noise in a computer system. The invention as taught herein has advantages over all known methods now used to identify excess noise, and provides a novel and nonobvious system, including apparatus and method, for identifying excess noise in a speech recognition system. The method of identifying excess noise in a computer system comprises the steps of recording a silence sample; recording an isolated noise sample while operating a computer system component in isolation from other computer system components; comparing signal characteristics of the silence sample with signal characteristics of the isolated noise sample; and, attributing the isolated noise sample to the isolated computer component when the signal characteristics of the silence sample differ by a preset threshold from the signal characteristics of the isolated noise sample.

The inventive method can further comprise the steps of logging the signal characteristics of the silence sample and the isolated noise sample; reporting excess noise identified in the identifying step; and, suggesting a remedy for the identified excess noise. To provide the user with a facility for the automated serial testing of a plurality of computer system components, the inventive method can also comprise the steps of creating a list of computer system components to be tested for excess noise; and, associating with each component in the list a corresponding method for testing the component for excess noise. Correspondingly, the second recording step can comprise, for each computer system component in the created list of computer system components to be tested for excess noise, second recording an isolated noise sample while operating each computer system component in the created list according to the corresponding method.

To accommodate the step of suggesting a remedy, the inventive method can comprise the steps of: creating a list of computer system components to be tested for excess noise; first associating with each component in the list a corresponding method for testing the component for excess noise; and, second associating with each component in the list a corresponding remedy for excess noise identified in the corresponding component. Once again, the second recording step can comprise, for each computer system component in the created list of computer system components to be tested for excess noise, second recording an isolated noise sample while operating each computer system component in the created list according to the corresponding method. Moreover, the suggesting step can comprise suggesting the corresponding remedy for the identified excess noise in each computer system component in the created list.

DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 shows a typical computer system 1 for use in conjunction with the present invention. The system preferably comprises a computer 3 having a central processing unit (CPU), fixed disk 8A, internal memory device 8B, floppy disk drive 15, and CD-ROM drive 16. The system also includes a microphone 7 operatively connected to the computer system through suitable interface circuitry or “sound board” 13, a keyboard 5, and at least one user interface display unit 2 such as a video data terminal (VDT) operatively connected thereto. The CPU can be comprised of any suitable microprocessor or other electronic processing unit, as is well known to those skilled in the art. An example of such a CPU would include the Pentium or Pentium II brand microprocessor available from Intel Corporation, or any similar microprocessor. Speakers 4, as well as an interface device, such as mouse 6, can also be provided with the system, but are not necessary for operation of the invention as described herein. The various hardware requirements for the computer system as described herein can generally be satisfied by any one of many commercially available high speed multimedia personal computers offered by manufacturers such as International Business Machines (IBM), Compaq, Hewlett Packard, or Apple Computers.
Computer system 1, as shown in FIG. 1, also can include a network interface card 14, operatively connected to the bus (not shown) of computer 3. As shown in the drawing, a communications modem 18 is connected externally to the serial port (not shown) of computer 3. In addition, laser printer 17 is attached to the parallel port (not shown) of computer 3. Finally, scanner 19 can be operatively connected to computer 3 using one of several generally accepted interfaces, for instance through the parallel port, an optional Small Computer Systems Interface port, a Universal Serial Bus port, or other proprietary method. One skilled in the art will recognize, however, that the methods and mechanisms of operatively connecting each peripheral component to the computer 3 can vary from system to system. In many cases, some peripheral components, for instance modem 18, can be operatively connected internally, directly to the system bus. Conversely, some internal components, for instance network interface card 14, can be connected externally, for instance, through the parallel port.
FIG. 2 illustrates a preferred architecture for a speech recognition system in computer 1. As shown in FIG. 2, the system can include an operating system 9, a noise analysis system 10 in accordance with the inventive arrangements, and a speech recognition system 11. A speech enabled application 12 can also be provided. In FIG. 2, the noise analysis system 10, the speech recognition system 11, and the speech enabled application 12 are shown as separate application programs. It should be noted, however, that the invention is not limited in this regard, and these various applications could, of course, be implemented as a single, more complex applications program.
In a preferred embodiment described herein, operating system 9 is one of the Windows family of operating systems, such as Windows NT, Windows 95 or Windows 98 which are available from Microsoft Corporation of Redmond, Wash. However, the system is not limited in this regard, and the invention can also be used with any other type of computer operating system. The system as disclosed herein can be implemented by a programmer, using commercially available development tools for the operating systems described above. As shown in FIG. 2, computer system 1 includes one or more computer memory devices 8, preferably an electronic random access memory 8B and a bulk data storage medium, such as a fixed disk drive 8A.
Audio signals representative of sound received in microphone 7 are processed within computer 1 using conventional computer audio circuitry so as to be made available to operating system 9 in digitized form. The audio signals received by the computer 1 are conventionally provided to the speech recognition system 11 via the computer operating system 9 in order to perform speech recognition functions. As in conventional speech recognition systems, the audio signals are processed by the speech recognition system 11 to identify words spoken by a user into microphone 7. Using noise analysis system 10, the present invention can identify internal system noise stemming from the fixed disk drive 8A, CD-ROM drive 16, floppy disk drive 15, network interface card 14, modem 18, keyboard 5, mouse 6, printer 17, scanner 19, and speakers 4.
FIG. 3 is a flow chart illustrating a process for identifying excess noise in a computer system. The method begins in step 21, following path 20 to step 23. In step 23, the inventive method records an audio sample of external silence during a period of system inactivity. Following path 22 to decision block 25, the method preferably can determine if a database of component tests contains additional components to be tested for internal system noise. Following path 26, if at least one component remains to be tested, the method in step 29 will load from a database of component tests, the next component to be tested and the corresponding test. Following path 28 to step 31, the method will record a noise sample while operating the component under test (CUT) in accordance with the test loaded in step 29. In step 33, the inventive method preferably can compare the signal characteristics of the recorded noise sample with the signal characteristics of the silence sample, recorded in step 23.
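The comparison of a silence sample against an isolated noise sample can be sketched as a simple level check. This is only an illustrative reading of the comparison step, not the patent's actual implementation; the RMS measure, the 3 dB default, and all names here are assumptions:

```python
import math

def rms(samples):
    """Root-mean-square level of a block of PCM samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def noise_exceeds_threshold(silence, isolated_noise, threshold_db=3.0):
    """True when the isolated noise sample is louder than the recorded
    silence sample by more than a preset threshold (here, in decibels)."""
    floor = max(rms(silence), 1e-9)  # guard against pure digital silence
    difference_db = 20 * math.log10(rms(isolated_noise) / floor)
    return difference_db > threshold_db
```

With the 3 dB default, a component sample at twice the silence RMS (about +6 dB) would be attributed as a noise source.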
Subsequently, in step 35, the inventive method preferably can search a database of remedies for a recommended remedy to any internal system noise detected in the CUT. Following path 34 to step 37, the inventive method preferably can log the results of the comparison of step 33 and can notify the user of any detected internal system noise and of any recommended remedy, found in step 35. Returning to decision block 25 along path 36, the process preferably repeats if untested components remain in the component tests database. Otherwise, the process terminates following path 24 to step 27.
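Taken together, the loop the flow chart describes — record silence, exercise each listed component while recording, compare, then look up a remedy and log the result — can be sketched as follows. The function names, table contents, and injected callables are hypothetical, not taken from the patent:

```python
def run_noise_analysis(component_tests, record_sample, compare, log):
    """Serially test each component for internal noise.

    component_tests maps a component name to (exercise, remedy), where
    `exercise` tells record_sample how to operate that component in
    isolation and `remedy` is suggested only if the component proves noisy.
    """
    silence = record_sample(exercise=None)  # baseline: system inactive
    report = {}
    for name, (exercise, remedy) in component_tests.items():
        isolated_noise = record_sample(exercise=exercise)
        noisy = compare(silence, isolated_noise)
        report[name] = {"noisy": noisy, "remedy": remedy if noisy else None}
        log(name, noisy)  # keep a record and notify the user
    return report
```

Here `record_sample` would wrap the sound card's capture API while the named exercise (e.g., repeated disk seeks) runs, and `compare` could be any signal-characteristic check, such as an RMS level comparison against a preset threshold.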
FIG. 4 is a user interface 40 for a system for detecting excess noise in a computer system. The user interface preferably can be a dialog box for interacting with the user. As shown in the drawing, dialog box 40 preferably includes a test field 41, a test instruction text box 42, a test information text box 43, a test progress bar 44, and test control buttons 45. Test field 41 preferably includes a list of component tests 46 contained in a component test database. Each component listed preferably includes a corresponding check box 47 through which a user can select individual components for noise analysis. Finally, each component preferably indicates the status 48 of each test, that is, whether the component failed the test due to the detection of internal system noise, passed the test, or whether the test presently is in progress.
Test instruction text box 42 preferably can display test instructions associated with the selected component under test. In the drawing, for example, test instruction text box 42 shows instructions 49 to be followed by the user in testing the floppy disk drive. Test information text box 43 preferably can show detailed information relevant to the current component under test. In the drawing, for example, test information text box 43 shows information 50 relevant to the testing of the floppy disk drive. In addition, test information text box 43 can show detailed information relating to the results of the testing of the component under test. Specifically, test information text box 43 can suggest remedial measures. As shown in the drawing, test progress bar 44 shows the current relative progress of the current component under test. Finally, test control buttons 45 preferably permit the user to selectively stop the noise analysis using stop button 51. In addition, the user can skip the test for the current component under test by clicking the skip test button 52. Finally, the user can terminate the noise analysis program by clicking the quit button 53.
In sum, the preferred inventive method can measure internal system noise, taking into account the potential internal noise source in the computer system 1. Whereas present noise detection systems cannot isolate the source of internal noise, the inventive method can isolate each source of internal noise. Moreover, the present invention can both identify specific computer system component sources of the internal noise, and can recommend a remedy for the identified internal noise. Hence, the present invention can perform a thorough noise analysis resulting in an accurate diagnosis of internal noise level.
BRIEF DESCRIPTION OF THE DRAWINGS
There are presently shown in the drawings embodiments which are presently preferred, it being understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown.
FIG. 1 is a pictorial representation of a computer system with audio capabilities on which the system of the invention can be used.

FIG. 2 is a block diagram showing a typical high level architecture for the computer system in FIG. 1.

FIG. 3 is a flow chart illustrating a process for identifying excess noise in a computer system.

FIG. 4 is a user interface for an apparatus used to identify excess noise in a computer system.
The following is a description of a common path followed when obtaining a patent, however, there are many options available and part of the process is to find a strategy that suits the invention and meets commercial goals. Typical time periods are given but the procedure for obtaining a patent can be accelerated if it appears pending rights could be infringed.
It is also important to note that an application may be abandoned at any stage. For example, if the invention is no longer commercially important.
Initial interview
An initial meeting should include the inventor and someone who can provide the commercial context for wanting to obtain the patent. The goal of the initial meeting is to enable us to familiarise ourselves with the invention, your needs and any timing issues in order to make recommendations about what to do next and give an indication of the likely costs. If our recommendations include filing a patent application, we will also outline what information we will need in order to prepare it. We are happy to answer questions you may have on any other relevant matter at the same time.
Patent searches
Where time allows, we may recommend conducting a patent search as a first step. A pre-filing patent search is recommended for two main reasons:
- To determine whether other parties have existing patent rights that might affect the ability to commercialise the invention;
- To gain a better idea of what features of an invention can potentially be protected by comparing the invention with existing technology. This helps us to assess the possible scope of any patent and consequently whether it is commercially worthwhile to proceed with a patent application;
- In addition, searching can also provide you with an industry profile of potential competitors.
The cost of each search will depend on the number of relevant patent publications, the time it takes to consider relevant material, and the number of patent publications which have to be studied in detail.
It typically takes about two weeks to conduct a patent search, consider the results and report them to you. If time is pressing, there can still be value in conducting a more limited search.
File a first patent application
Once a decision is made to file a patent application, we will work with you to prepare a patent specification that describes the invention in sufficient detail to enable a skilled reader to replicate the invention and includes carefully crafted statements defining the scope of protection that is being sought. This process may involve one or more interviews with the inventor(s) and requests for information.
In most cases, the patent specification will be filed as a provisional application. An advantage of a provisional application is that if developments are made to the invention in the first 12 months, these can be incorporated into one or more complete applications covering countries of interest.
Only once the application has been filed can you safely disclose the invention, conduct market research, publish details or offer the invention for sale. However, sometimes it is still advisable to keep details confidential to provide more options for extending subsequent filing deadlines. For example, a patent application may be abandoned and refiled if there has been no other disclosure of the invention. This may be appropriate, for example, if there are teething problems with bringing the invention to market or getting it to work.
The filed patent application will receive a filing date and an application number. Marking the invention and related literature with the application number acts as a deterrent to would-be copiers.
Covering countries of interest
An international convention allows patent applicants 12 months from their first filing to file one or more further patent applications covering countries of interest. Accordingly, this is a key time to update your patent strategy and decide where to protect your invention.
A common strategy is to file an application under the Patent Co-operation Treaty (PCT); an international law treaty establishing a single procedure for filing a patent application in many countries. During the international phase of the PCT application it is assessed for patentability by an international examiner and the application can be amended as a result. After the international phase is the national phase, in which applications are filed and examined in individual countries in which protection is required. Filing a PCT application delays the filing of multiple individual applications in many countries until 30 or 31 months from the first filed application for the invention. Not every country is a member of the PCT so you should always check with your IP advisor.
The patent specification accompanying the PCT application will be based on the original provisional application but should include any improvements or additional data that has been generated in the intervening 12 months. After this time, it isn’t usually possible to add any further disclosure.
The examination in the international phase gives an indication of the prospects of success and is important to take into account prior to the following national phase where applications must be filed in individual countries (and significant costs are likely to be incurred).
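The milestones above lend themselves to a quick calculation. Here is a sketch; the 12-month and 30/31-month figures come from the text, but the day-level results are indicative only, since actual deadlines depend on jurisdiction, weekends, and holidays, and any specific case should be checked with an IP advisor:

```python
from datetime import date

def add_months(d, months):
    """Same day-of-month `months` later, clamped to the month's last day."""
    years, month_index = divmod(d.month - 1 + months, 12)
    year, month = d.year + years, month_index + 1
    for day in (d.day, 30, 29, 28):  # clamp e.g. Jan 31 + 1 month -> Feb 28/29
        try:
            return date(year, month, day)
        except ValueError:
            continue

def key_patent_dates(priority_date):
    """Indicative deadlines measured from the first (priority) filing."""
    return {
        "convention/PCT filing due": add_months(priority_date, 12),
        "national phase (30 months)": add_months(priority_date, 30),
        "national phase (31 months)": add_months(priority_date, 31),
    }
```

Note that the usual 20-year term runs from the filing date of the PCT application, not the priority date, so it is not derived from `priority_date` here.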
Examination
After the national phase applications are filed, some countries will start examination automatically, while in other countries the application will not be examined until a formal request is made. There is usually a deadline for requesting examination.
Once examination has commenced, an examiner will make an assessment of whether the invention is novel and inventive in view of similar inventions published before the patent application was filed.
If an examination report issues, we will forward it to you for consideration together with copies of any prior art cited by the examiner. We will also advise how best to respond to the report. Upon receipt of instructions from you we will lodge a response to the examiner’s report. The wording of the claims may require adjustment to distinguish your invention over the prior art. We will also attend to any other objections raised by the examiner at this time. Sometimes, it is necessary to file multiple responses, for example, because the examiner raises new objections. Unfortunately, sometimes it is not possible to overcome all the objections and still obtain protection that is commercially useful.
Acceptance and Grant
Assuming the examiner’s objections can be overcome, the application will be accepted.
Some countries, including Australia and New Zealand, allow a person to oppose the grant of a patent. This occurs in a very small percentage of cases. In other countries, once an application is accepted it will proceed to grant on payment of grant fees.
Legal action against a patent infringer cannot be taken until a patent has been granted although in some countries damages can be obtained for earlier acts.
Term
The usual term of a patent is 20 years from the filing date of the PCT application. We maintain renewal records and remind our clients when renewal fees are due. | https://www.jamesandwells.com/nz/procedure-for-obtaining-a-patent/ |
Situation:
As a rural health care system, we identified that we have limited resources at hand for timely treatment of opioid use disorders.
Challenge:
When patients were identified as having an opioid use disorder, they were being treated for their symptoms, then connected with outpatient or inpatient treatment services. These services were oftentimes located outside of the community and had waiting lists for access. The referral process required time, staffing and resources that were usually not available.
Solution:
Through the identified situation and challenges, we explored opportunities for providing in-house clinical treatment services to directly treat the opioid use disorder and provide support to patients without significant disruption to their everyday life. Through the support of an RCORP-MAT Expansion grant, we built our program, Compass Care at CDP, utilizing a nurse care coordinator as the hub of services. We partnered with a tele-health substance use disorder clinic in our state, Project Recovery, to provide services as well as have one provider in our clinic who prescribes MAT. The nurse care coordinator facilitates all appointments as well as provides resources and support tailored to the needs of each patient.
Results & Successes:
Compass Care at CDP did a soft-launch in summer of 2020 with a full launch in Fall 2020. During this first year, we have served more than 70 individuals suffering from opioid use disorder. We have formed wonderful relationships with others in our community and state who serve this patient population and partnered with them in providing comprehensive patient care.
Next Steps:
Our next phase of work is focused on providing treatment and support for non-opioid substance use disorders with a primary focus on stimulant use disorder. We are collaborating with other stakeholders across the state to create and implement evidence-based practices for stimulant use disorder.
Tips/Suggestions:
It is difficult to pinpoint what went well and what could have gone better. Our program is still fairly new and has been a constant evolution as we learn and grow both our knowledge and the numbers within our program.
One piece to consider as you explore this type of work is asking yourself “what areas will we address within the services we provide and when will we refer patients?” For our program, utilizing a nurse care coordinator, it is not within the individual’s scope to provide advice or guidance on trauma, or mental health related concerns. Oftentimes we find as we create relationships with the patients, they find it a safe space to share their struggles. While we are able to support them in connecting with a licensed professional to provide behavioral health services, we are clear to our patients that we are unable to give formal guidance or advice in these areas. Identifying that boundary for ourselves as well as for our patients has been important for us.
My number one piece of advice is to connect with others who are doing this work, ask questions, collect multiple points of view and then mold it all into what works best for your goals and your community. There are so many resources available and others who share this passion, everyone we have connected with has been more than willing to share lessons learned, processes and protocols.
Connect
Feel free to e-mail me if you have any questions! | https://www.avoidopioidsd.com/coteau-des-prairies/ |
WASHINGTON — American consumers give today’s economy the highest grade in more than 16 years.
The Conference Board said Tuesday that consumers’ assessment of current economic conditions hit the highest level this month since July 2001. The business research group’s overall consumer confidence index, which takes into account Americans’ views of current conditions and their expectations for the next six months, rose to 122.9 in August from 120 in July.
Americans’ spirits have been lifted by a healthy job market. Employers added a robust 209,000 jobs in July, and the unemployment rate has dropped to a 16-year low of 4.3 percent.
The Conference Board found that 34.5 percent of respondents described business conditions as “good” — the highest percentage since January 2001. Similarly, 35.4 percent described jobs as “plentiful” — most since July 2001.
The overall index hit bottom at 25.3 in February 2009 at the depths of the Great Recession before rebounding as the U.S. economy recovered.
Economists pay close attention to the numbers because consumer spending accounts for about 70 percent of U.S. economic activity.
| http://investmoneyuk.com/u-s-consumer-confidence-improves-again-in-august/ |
Consumer confidence rose in August to its highest level since October 2000, building on July’s solid result.
…
August saw that optimism increase among consumers, the Conference Board found, with the percentage of consumers expecting business conditions will get better over the next six months increasing to 24.3 percent from 22.9 percent.
“These historically high confidence levels should continue to support healthy consumer spending in the near-term,” Franco added.
…
Since consumer spending accounts for about 70 percent of U.S. economic activity, economists pay close attention to the number. | https://www.whitehouse.gov/briefings-statements/cnbc-consumer-confidence-pops-august-highest-level-since-october-2000/?utm_source=link |
Old Trafford Wellbeing Centre
As a community hub, our strength lies in connecting people to the resources which support wellbeing in Old Trafford. The centre provides creative opportunities for social interaction and a culture of learning, encouraging individuals to try something new or rediscover an old interest, or take on a new responsibility or challenge. Please see the What's On for more information.
Whatever your age or ability there will be something going on that will interest you – if not have a word and we can help you start it!
Would you like to become a volunteer at Old Trafford Wellbeing Centre?
A volunteer plays a very important role here, complementing the work of staff and enabling us to enrich, improve and extend our range of services. Volunteering is a chance to meet new people and make new friends, learn new skills and build confidence, explore a possible new career direction and contribute to the community.
Opportunities
Social Media Volunteer
Outline of Role and Tasks: We need someone who can work with the Bluesci team to promote and share what's taking place at Old Trafford Wellbeing Centre. This includes regular tweeting, regular Facebook updates, and providing information for upload onto the website.
The skills you need:
- Happy to work on your own initiative
- Good IT and social media skills
- Good communication skills
Hours/Days required: Flexible.
Times: Flexible
In return we will provide:
- Volunteer Expenses
- Training
- Support
IT Support Volunteer
Outline of Role and Tasks: Supporting individuals to become computer literate. Guided by an individual's needs, this may include helping someone to set up an email account, understand how to use an MS Office programme, or search the internet and complete an online form. The role is very much about encouraging individuals to feel confident in using and understanding computer technology.
The skills you need:
- Good IT skills
- Good communication skills
Hours/Days Required: We have vacancies Monday to Thursday.
Times: Flexible
In return we will provide: | https://bluesci.org.uk/oldtrafford/get-involved/ |
Is Germany better than Italy football?
While Germany has won more international championships, Italy is largely dominant in the head-to-head international match-up, having beaten Germany 15 times in 35 games, with 12 draws and 8 defeats.
How many matches has Germany won?
Germany is one of the most successful national teams in international competitions, having won four World Cups (1954, 1974, 1990, 2014), three European Championships (1972, 1980, 1996), and one Confederations Cup (2017).
When did Italy play Germany in the World Cup?
2006
Italy won the tournament, claiming their fourth World Cup title. They defeated France 5–3 in a penalty shoot-out in the final, after extra time had finished in a 1–1 draw…
How many times Spain won against Italy?
Italy–Spain football rivalry
|Next meeting||Italy vs Spain UEFA Nations League (6 October 2021)|
|Statistics|
|Meetings total||38|
|Most wins||Both teams (11)|
|All-time series||Italy: 11 Draw: 16 Spain: 11|
When did Italy turn against Germany?
October 13, 1943
On October 13, 1943, the government of Italy declares war on its former Axis partner Germany and joins the battle on the side of the Allies. With Mussolini deposed from power and the collapse of the fascist government in July, Gen.
Is Spain better than Italy?
In general, Spain is a bit more affordable than Italy, but deals can be found in both countries. Both Italy and Spain have a lot to offer visitors. Spain is also known for its vibrant nightlife and lively festivals that attract crowds, while Italy has world-renowned museums and archaeological sites.
Who wins Italy vs Spain?
Italy prevailed in a shootout, 4-2, after the teams played a 1-1 tie. It will meet England or Denmark, who play Wednesday in the tournament’s second semifinal, for the European Championship on Sunday.
Is Germany more beautiful than France?
Both France and Germany are known for their distinct cultures, unique cities and some stunning landscapes, so you're guaranteed a great holiday whichever you choose. If anything, Germany is very much a nature lover's paradise, and its landscape is arguably more beautiful than France's. | https://draftlessig.org/is-germany-better-than-italy-football/ |
224 pages.
from $26.00
Hardcover ISBN: 9780804721349
Paperback ISBN: 9780804722476
Ebook ISBN: 9780804779265
This timely work shows how and why the dramatic collapse of the Soviet Union was caused in large part by nationalism. Unified in their hostility to the Kremlin's authority, the fifteen constituent Union Republics, including the Russian Republic, declared their sovereignty and began to build state institutions of their own. The book has a dual purpose. The first is to explore the formation of nations within the Soviet Union, the policies of the Soviet Union toward non-Russian peoples, and the ultimate contradictions between those policies and the development of nations. The second, more general, purpose is to show how nations have grown in the twentieth century. The principle of nationality that buried the Soviet Union and destroyed its empire in Eastern Europe continues to shape and reshape the configuration of states and political movements among the new independent countries of the vast East European-Eurasian region.
About the author
"This is a brilliant tour-de-force analysis of the history of ethnopolitics in the tsarist and Soviet empires. Students and scholars alike will welcome the succinct outline of state policies, social processes, and focal events forming the identities of groups and nations. . . . A very important and useful book . . . more durable than other recent publications dealing with the ethnic dimension of the Soviet collapse."
—Slavic Review
"Here is the book of choice if one wants a succinct treatment of nationalism past and present in the former Soviet Union." | https://www.sup.org/books/title/?id=3121 |
One of the goals of a game designer is to create an engaging, challenging, and immersive experience that leaves the player filled with awe, wonder, and excitement. Often, however, that's not what our games actually do; entire genres have fallen out of favor over this, and the issue grows more and more relevant as games are targeted at an increasingly mainstream audience that doesn't want to sit through fifteen hours of gathering materials in an MMORPG by killing the same monster ten thousand times.
Padding can loosely be defined as anything that does not add to the game except to serve as part of the operation of the game. For instance, every time you fight a monster in a dungeon crawler, it likely exists just to provide a source of experience and potentially drain a little health from the player in order to change the game’s state, but doesn’t have any narrative role or provide a meaningful change in the player’s experience. This is the sort of experience which is unlikely to be memorable and will likely cause a loss of interest in certain players.
On the other hand, this is the sort of thing that games really rely on pretty often, and sometimes the padding allows the game to still function within the context of the player’s exploration; in Oblivion the wilderness would become very boring were there to be no foes wandering in it. Padding also serves as a means of practice for both players and characters, allowing players to go back and get a better degree of familiarity with crucial skills or make their characters more powerful so that later confrontations will be less difficult, and to a certain degree it must still be maintained in this role. Outside of this, however, padding is largely useless and detracts from the gaming experience.
There are a couple of ways to reduce padding, however, without removing it wholesale. One of the best is to link it to additional narratives, but this is expensive for the writers and content designers who must then create individual sub-narratives. A passable solution is what I call the "material culture" approach: instead of explicitly creating side-mission narratives, you simply add lore database entries or evidence of a past event in the middle of the environment, letting players engage with their discovery if they choose or blaze past the area with only a cursory look. Dead Space 3 actually did this very well, particularly in its co-op sections. Bethesda Softworks' games often include a sort of tertiary story mode for padding, where everything has a role even if you may not realize it yet. Skyrim generated this dynamically through random quest givers, but their Fallout games and Oblivion, not to mention Morrowind, also drew heavily on the concept of giving every environment its own important narrative. Even Dishonored drew upon this idea, albeit at a much shallower level, as it was not an open-world experience.
Another approach is making nonessential game experiences optional; a lot of games have done this for a long time, but it runs into issues of its own. If padding is removed entirely, much of the time used to slip in character development and setting exploration disappears, and the entire game runs at full throttle. At first glance this may seem like a good thing, but it also means cutting a lot of material from those nonessential moments and weakening them for the players who do choose to play through them, because the narrative devices from these less important scenes must be moved into the more crucial universal experience.
The core thing to remember about padding is that it does have a purpose, but its purpose should not be to add length to a game; the reason Skyrim succeeded so well is because every hour played in it feels purposeful and deliberate, where a similar amount of time spent on Mass Effect just reveals times where Shepard must shoot another alien, again, because he/she needs to get to the next objective and they’re in the way. One of my favorites, Dungeon Siege, is guilty of this to an extreme; with more foes than there are citizens in Rhode Island blocking the hero’s path, many of them supposed to be “natural inhabitants” of the environment rather than the forces explicitly trying to stop them. | http://blog.homoeoteleuton.com/game-design-eliminating-padding/ |
The face of the Guri Dam is 162 meters high and backs up the Río Caroní in southeastern Venezuela, forming a reservoir 175 kilometers long. In the world’s third largest hydroelectric power plant, twenty turbines now produce more than 10,000 megawatts and deliver over 70 percent of all of Venezuela’s electric power. A modernization project is intended to boost by 20 percent the output and the efficiency of this power plant, completed in the 1980s. Corpoelec, the government-owned generating company, is investing 1.3 billion U.S. dollars. Rexroth is delivering the drive and regulation technology to rebuild the inlet gates and is shouldering many engineering tasks. Among the items being supplied are special-design cylinders, valves, power units, and two oil conditioning units. Because some components run underwater, they are made of particularly high-resistance materials such as carbon steel with an Enduroq® coating. The experts at Rexroth are also providing on-site consulting and supervision of the installation. This modernization is to be completed by 2016.
© EDELCA. Electrificación del Caroni C.A.
Further information: | https://www.boschrexroth.com/en/xc/company/press/index2-2125 |
It may be anybody – your children, siblings or friends.
Here are essays of varying lengths on Addiction to help you with the topic in your exam.
These habits can turn into addiction if we don’t guard ourselves on time.
Our company and social environment have a huge impact on our habits and overall personality.
Stay away from such chronic habits to live a healthy and fulfilling life.
Introduction
Addiction is something that can make us lose interest in everything around us and keep us glued to one particular thing.
As Patrick Carnes said, “Addiction is a relationship, a pathological relationship in which obsession replaces people”.
Obsessing over anything is termed as addiction to that particular thing.
Even though people know about the harmful consequences of addiction, they are unable to stop themselves from indulging in the same. People usually develop this habit in their youth and even after trying hard they aren’t able to get rid of it later in life.
Many treatments have been developed to overcome drug addiction. | https://truba174.ru/essay-on-internet-addiction-152.html |
We talk to recent graduates exhibiting in Designing in Turbulent Times, currently on show at the Lethaby Gallery. Here, we consider how – from timelessness to transience – time can contribute to sustainable design.
Tales of the Untamed, Morgane Sha'ban, BA Architecture
In this project, Sha’ban proposes a public space where living trees become permanent infrastructure while the built architecture is designed to perish and nurture the soil. This allows a cyclic maintenance of the environment in which the community can redesign and rebuild, adapting the space over time to their needs. This notion of time, permanence and transience is ingrained in the design through Sha’ban’s exploration of materials with a life cycle that starts and ends in the same place. This project is a response to the high environmental impact of the HS2 development in the Euston area in London.
What inspired your collection?
"Within the urban space there is constant change and transformation. This is inevitable, however the HS2 (the largest rail project in the UK), currently under construction, is harming the environment enormously. As well as ancient woodlands being hugely affected, Euston’s urban trees have been felled causing dramatic loss of its tree canopy. In response this project envisions these threatened trees transplanted as a living permanent infrastructure.
I am fed up with seeing nature's needs, especially trees, coming second to human needs. I wanted this project to develop a balance between environmental needs and social needs."
Gallery
Do you think of yourself as a sustainable designer?
"We have a duty to respond to the issues surrounding us as an opportunity for design. There is a critical urgency for us to act and this is something I will continue to be focusing on in my design practice."
What is the role of designers in society's collective push for a more sustainable way to live?
"Our society believes we are entitled, or superior, to natural systems. I would change everyone’s mentality towards our planet. I deeply wish that we could collectively care for our environment and the limits of our planet.
I believe architecture can allow us to re-imagine ourselves as part of these systems, where we need to evolve our methods of construction to aim for a more sustainable process."
Poise Collection, Desmond Lim, MA Design (Furniture)
Lim’s Poise Collection is an exploration of furniture design for longevity and durability – both physical and emotional. By considering the ageing of materials and using high-quality craftsmanship, the work aims to create long-term attachment between user and object. Incorporating a range of non-recyclable industrial waste materials, Lim’s collection promotes a move away from the use of increasingly precious resources.
How did your project begin?
"A lot of my research is around resources, some becoming increasingly precious and others considered waste that are increasingly abundant. I think there is a lot of potential in turning low-value or discarded materials into high-value pieces of furniture. My work joins together increasingly precious wood with cast stone. I needed the weight of the stone to reinforce the objects' stability, so it made sense that I looked at waste aggregates from the construction industry as material to cast."
What is the role of designers in society's collective push for a more sustainable way to live?
"I think there are many ways a designer can contribute to sustainability. You can make a radical new material and revolutionise a new way of making something. But I think if you can make a beautiful, well-crafted and durable object that is cherished for a long time, then I think that also achieves a very powerful form of sustainability."
Designing in Turbulent Times is at the Lethaby Gallery, 14 September – 27 October. | https://www.arts.ac.uk/colleges/central-saint-martins/stories/designing-with-time |
This is Part 2 of visualizing 311 Call Centre data. In Part 1 I read and transformed the call data using Python's SQLAlchemy library and loaded it to MySQL database.
Tableau offers a native MySQL connector which makes it simple to connect to the existing SQL datasource:
The workflow in Tableau consisted of creating several worksheets that depict the total number of calls, calls by type, department, a by Ward. All of these sheets are then combined into one consolidated dashboard.
To get a look at calls per hour in the 7-day worksheet, a calculated field was constructed to extract just the hour from the datetime field. In Tableau, the formula for extracting the hour from YYYY-MM-DD HH:MM:SS uses Tableau's DATEPART function (modeled on the SQL function of the same name):
DATEPART('hour',[Created Date])
This new column, 'CallHour', is then listed in the 'calltable' data from the original '311windsor' database:
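Outside Tableau, the same hourly bucketing can be sanity-checked in pandas, the workflow from Part 1. This is a minimal sketch: the 'Created Date' column name matches the Tableau formula above, but the sample timestamps are invented.

```python
import pandas as pd

# Tiny stand-in for the 311 call table (sample timestamps are invented).
calls = pd.DataFrame({
    "Created Date": pd.to_datetime([
        "2019-03-01 08:15:00",
        "2019-03-01 08:47:00",
        "2019-03-01 17:05:00",
    ])
})

# Same transformation as the Tableau calculated field DATEPART('hour', ...).
calls["CallHour"] = calls["Created Date"].dt.hour

# Calls per hour, as charted in the 7-day worksheet.
hourly_counts = calls.groupby("CallHour").size()
print(hourly_counts.to_dict())  # {8: 2, 17: 1}
```

Doing the check in pandas rather than SQL keeps it in the same notebook that loads the data, so the Tableau field and the source table can be compared side by side.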
Lastly, the dashboard is uploaded to Tableau Public, a repository for sharing and embedding dashboards. A live, interactive version of that viz is embedded below.
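Because the dashboard reads live from MySQL, keeping it current is just a matter of appending new rows. Here is a minimal pandas sketch of a de-duplicate-then-append step; the "request_id" key column and the sample rows are hypothetical stand-ins, not the actual schema from Part 1.

```python
import pandas as pd

# Rows already loaded into MySQL (stand-in; "request_id" is a hypothetical key).
existing = pd.DataFrame({
    "request_id": [101, 102],
    "Created Date": ["2019-03-01 08:15:00", "2019-03-01 08:47:00"],
})

# A fresh download from the Open Data Portal, overlapping the last extract.
fresh = pd.DataFrame({
    "request_id": [102, 103],
    "Created Date": ["2019-03-02 09:30:00", "2019-03-02 09:30:00"],
})

# Keep only rows whose key is not yet in the database before appending.
new_rows = fresh[~fresh["request_id"].isin(existing["request_id"])]

# With a live SQLAlchemy engine this would be written back with:
# new_rows.to_sql("calltable", engine, if_exists="append", index=False)
print(new_rows["request_id"].tolist())  # [103]
```

Filtering before the append keeps the load idempotent, so re-running the notebook on an overlapping extract cannot duplicate calls in the dashboard.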
As new 311 data is made available on the Open Data Portal, it can be appended to the database with the code in this Jupyter Notebook and the dashboard will update the new rows of information accordingly. | https://datavu.ca/311-data-in-tableau/ |
If you’re suffering from a fracture, it’s not possible to go on with your normal life until your bone recovers completely. You may need to do things differently for a while but all the effort that you put in taking care of your bones will never go to waste. It can take weeks or even months before you can carry on with your life normally.
Recovery from a fracture depends on several factors. The complexity, type, and location of the fracture determine the time it is going to take for the bone to heal completely. For example, a hairline fracture will take less time to recover than a fractured thighbone (femur). Let us share with you a few important tips that will make it easier for you to cope with the problem of a broken bone:
Do Not Stray from the Treatment Plan
During the recovery, you are required to manage your symptoms and get your health back without suffering any further pain. This can take a lot more time than you initially thought. Therefore, it is better to discuss the recovery plan and various treatment options and be aware of the entire process.
In extreme cases, you may never be able to return to your pre-injury body and this should not affect your will to take care of yourself. Have a detailed conversation with your doctor beforehand and talk about the best strategies for your recovery goals and what should be your expectations.
Eat a Healthy Diet
Since your body is on a healing journey, it will require more nutrients than it usually does. Eating right has a lot to do with the pace of your recovery. If you keep your diet unhealthy, there are meager chances that you’ll be back on track any time soon. Your body requires an extra dose of energy and that is only possible through a balanced diet.
You can come in contact with a nutritionist who can devise a diet plan best suited for you according to your needs. Make sure that your diet consists of essential nutrients like iron and potassium, and the intake of alcohol, coffee, and salt is minimum. Quit smoking so that all the vital nutrients reach the damaged bone without any interruption.
Make Adjustments in Your Home
If you have fractured your leg or hip, there are high chances that you are walking with the help of support. In this case, you might not want to use the staircase and move heavy items out of your way. Move to the ground floor and clear all the clutter out of the way to avoid tripping.
Illuminate the house well. Put nightlights at places where you walk after dark. Even if you don’t have a leg injury, you must make certain adjustments within the house to ensure that you can easily carry out the tasks of daily routine. This means that you might have to rearrange the items in the kitchen and adjust your wardrobe to bring convenience in your life.
Opt for Physical Therapy
Your injury doctor is likely to ask you to go for physical therapy or chiropractic care so that the process of healing is facilitated. These therapeutic exercises include a range of motion, stretching, and strengthening exercises that help your bones to heal. They will bring the body movements back to normal and fasten the healing process.
At first, you will find them to be painful, but gradually, the discomfort will go. However, if the pain persists, you must inform your specialist so that they may change the exercises. You can even consult a pain management specialist who can help you in overcoming the discomfort so that you can drive towards your goal.
Keep an Eye on the Problems
As your bone recovers, you should also be well-aware of the possible complications that may occur. If you experience fever, change of color at the affected area, tingling, numbness, swelling, or severe pain, call your doctor immediately. Ignoring these signs can lead to further complications.
If you are experiencing the symptoms after your leg has healed, you need to consult your doctor so that he can look into the matter. You need to be careful and vigilant when it comes to the condition of your bone because once ignored, the problem may worsen and can cost you your health and well-being.
Be Regular on Medications
Following the prescription is essential to a smooth and healthy recovery. Taking medicines regularly along with observing other recovery measures is very important. You can even ask the doctor for pain relievers so that you can cope with the pain and not let the discomfort take a toll on your physical and mental health.
Another important thing is to inform your doctor about the medicines that you’re already taking or plan to take during recovery. Certain medicines can hinder the recovery process so informing your doctor beforehand can save you from compromising on your recovery and health.
Take Good Care of the Cast
As soon as you get a cast on, you need to give extra attention to its cleanliness and keep a check for unusual signs. If you see a change in color, numbness, or swelling, promptly contact a doctor, because these are signs that the cast is too tight. One of the worst things about casts is itching. Don't try to scratch by sticking any object inside; instead, use a hair dryer's cool setting to relieve it.
Do not pick or remove any padding from the edge of the cast as this can irritate the skin. If you are facing any problems, let the doctor make the adjustments for you. While you will go on about with your routine tasks, try your best to keep it away from dirt, sand, lotions, or deodorants.
Conclusion
Your recovery is successful if your bone functions like it did before the injury without causing you any pain. While this may not be possible for everyone, you should do your part to the best of your abilities to reach this point. Don’t stress yourself out and just follow your doctor’s instructions religiously. | https://www.iacquireexpert.com/7-tips-to-take-care-of-a-broken-bone/ |
Since 2004, over 160 professionals from 6 countries in sub-Saharan Africa (Gambia, Ghana, Kenya, Nigeria, Tanzania and Uganda) and from multi-disciplinary backgrounds (such as nutrition, community nurses, media, policy/decision- makers, physicians, health promoters, public health professionals, social scientists, physical activity professionals, etc.) have been trained.
Executive Summary of Recent Courses
The 2008 and 2009 CDC/IUHPE Annual Seminars on Cardiovascular Health Promotion and Chronic Disease Epidemiology have been summarized. Executive Summaries of the courses and parallel meetings are now available:
- Executive Summary of the 5th CDC/IUHPE Annual Seminar on Cardiovascular Health Promotion and Chronic Disease Epidemiology that took place in Bagamoyo, Tanzania on July 20th through 30th, 2008
- Executive Summary of the 6th CDC/IUHPE Annual Seminar on Cardiovascular Health Promotion and Chronic Disease Epidemiology that took place in Entebbe, Uganda on July 8th through 18th, 2009
Lessons learnt and Recommendations
The first four Annual Seminars have been independently evaluated. This evaluation was conducted to share knowledge, experiences, lessons learnt, and good practice examples; to inform future work and better achieve the objectives of the seminars as well as to ensure these are disseminated within the region and more broadly.
- An executive summary of the independent evaluation is available
- The members of the faculty of the CDC/IUHPE Annual Seminars on Cardiovascular Health Promotion and Education have actively addressed the recommendations from the independent evaluation report to inform improved process and delivery of the 6th CDC/IUHPE Annual Seminar. Please refer to the Executive Summary for more details on how the recommendations were addressed.
Abstracts, Progress and Results of team projects developed during the seminars
Each year, the country teams developed projects based on teachings and group work during the training seminars. The projects were developed with the following considerations in mind:
- Make it local: The projects should be adapted to the communities you know best. Be creative in thinking about how to implement the most effective and potentially sustainable interventions.
- Focus on community health development: Our goal is to advance health promotion at the population or community level. Of course, we are also concerned about individuals who already have CVD; however, that is not the focus of this programme.
- Make it feasible: It is very tempting to propose a large, comprehensive project. At this stage, however, it is important to take on something that can be accomplished within current available resources, which should be carefully assessed while projects are being planned.
- Evaluate whatever you do: Cardiovascular disease prevention in Africa is at an early stage and we have much to learn. We must remember that each of these efforts is an important opportunity to learn more and improve our chances of success in the long term.
Project proposals include:
- An abstract;
- A problem statement and a brief review of current knowledge attached to such problems;
- A statement of the specific goals;
- A description of the plan for the intervention and timeline;
- An evaluation plan; and
- A budget.
Examples of what has or is currently being done in the region will soon be available for the following countries: | https://www.iuhpe.org/index.php/en/non-communicable-diseases-ncds/cardiovascular-hp-and-chronic-disease-prevention-in-africa/767-course-outcome |
BOISE – Idaho residents have among the lowest personal incomes in the nation but spend a higher percentage of their money on food, housing and other essentials compared with most other states, according to data released Thursday by the U.S. Bureau of Economic Analysis.
The report marks the first time the government is issuing consumer spending data broken down by state. Formerly, the bureau released consumer spending data at the national level only.
In 2012, Idaho’s per-capita consumer spending was $30,190 – just a few dollars higher than the per-capita spending in Utah and Hawaii. Per-person consumer spending was lower in Nevada, Alabama and Arkansas, and Mississippi came in last at $27,406.
But Idahoans had to spend a much larger percentage of their income than most – just over 43 percent – to cover the basics of food, housing, health care and gasoline or other energy goods. Only Mississippi residents spent a higher portion of their income on those categories, with almost half of their $27,400 per-capita income going to food, housing, gas and health care.
Nationwide, the average person spent about 37.5 percent of their personal income on those categories.
“That’s most of the problem with having a low per-capita income,” said Phil Watson, an associate professor of applied economics at the University of Idaho. “People talk about the purchasing power and the lower cost of living in Idaho. We’ve found that the cost of living is slightly lower, but not nearly enough to make up for the lower personal income.”
If Idaho’s lower cost of living were enough to make up for the state’s low per-capita income, then residents would be spending about the same share of their income on the basic necessities, Watson said.
Wages for Idaho jobs such as service industry and call center positions are slightly lower than those paid for similar jobs in other states. But those lower-paying jobs are largely the only ones available to many Idaho residents, Watson said.
Idaho residents spent about $4,695 per person on health care, according to the report. That was the third-lowest per-capita health care spending in the nation in 2012, after Nevada and Utah. The report shows Idahoans spent about $2,600 per person on food and groceries, about $1,855 per person for gas and $5,735 per person for housing in 2012.
Washingtonians spent more overall than many of their counterparts in the nation in 2012, the latest year for which figures are available. The Evergreen State ranked among the top 10 states for total personal expenditures. | http://www.spokesman.com/stories/2014/aug/09/idahoans-spend-more-of-income-on-essentials-than/ |
The independent living facilities at TigerPlace allow the elderly to age in place in a homelike setting that fosters independence, autonomy, and privacy while providing some level of aid, such as shared meals, housekeeping, and other services. When elderly residents fall victim to conditions with long-term health implications, e.g. urinary tract infections, even immediate intervention by a nursing care provider after the fact may not be timely enough. However, if nursing care providers know of certain behavior patterns, or specifically changes in those patterns, that indicate early signs of illness, then they can intervene or take preventative measures ahead of time. This research provides a framework and method for using passive in-home sensor networks to collect sensor data, Early Illness Alert Algorithms to model and detect signs of early illness, single-dimensional alerts that notify nursing care providers, and clinical feedback on alerts collected from a team of clinical researchers with expertise in gerontology. The feedback collected provides valuable ground truth that is used to analyze and improve the Early Illness Alert Algorithms. Classification accuracy more than doubles through the application of four machine learning classification methods: the Fuzzy Pattern Tree, the Fuzzy K-Nearest Neighbor, the Neural Network, and the Support Vector Machine.
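The four classifiers named above are standard supervised-learning methods. As a minimal, self-contained sketch of the simplest of them, the snippet below hand-rolls a k-nearest-neighbor majority vote over hypothetical alert feature vectors; the feature names and numbers are invented for illustration and are not taken from the thesis.

```python
import math
from collections import Counter

def knn_classify(train, labels, query, k=3):
    """Label a query vector by majority vote among its k nearest neighbors."""
    # Sort training points by Euclidean distance to the query.
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical features per alert: (restlessness score, bathroom visits, pulse change)
train = [(0.2, 1, 0.1), (0.3, 2, 0.0), (0.8, 6, 0.7), (0.9, 5, 0.9)]
labels = ["ignore", "ignore", "intervene", "intervene"]

print(knn_classify(train, labels, (0.85, 5, 0.8)))  # -> "intervene"
```

In the thesis itself, classifiers like these are trained against the nurses' feedback labels; the sketch only shows the mechanism of a distance-based majority vote.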
Degree
M.S.
Thesis Department
Rights
OpenAccess.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License. | https://mospace.umsystem.edu/xmlui/handle/10355/15958 |
It’s almost a year since Facebook Live launched, but already it’s hard to imagine the site without it. Instagram quickly followed suit, and earlier this month YouTube did the same. It’s become easier than ever to broadcast whatever you like, whenever you like – but for as long as people have shared anything of themselves online, they’ve been subject to criticism. From the comments section of online newspapers to the reply function on Twitter, it’s easier than ever to make your voice heard – but what happens when the voices that shout back only have hateful things to say?
Last week I did something and I can’t decide whether it was incredibly brave or incredibly stupid; I went on a twenty-minute date which was livestreamed on the Facebook page of a global media brand I read all the time, all in the name of entertainment. Think First Dates meets Blind Date, but without the Gallic charm of Fred Sirieix or Cilla Black’s warm northern accent to reassure me. Instead, there was a free-for-all comments section, where strangers were encouraged to give their opinions about me and the others participating in the video. I decided to volunteer for the experience because I was bored, first and foremost. Full disclosure – my first and last Valentine came when I was five years old. I’ve never had a boyfriend, and can count on one hand the number of dates I’ve been on. With all my friends in long-term relationships, I decided at the least it would be a funny way to spend a Tuesday night – and there would be free alcohol, which I’ve never been known to turn down.
The experience itself was totally fine. Everyone in the room was nice enough. There was free food, free wine, and I came away from it feeling like I’d at least tried to conquer my crippling social anxiety by throwing myself out of my comfort zone. It was on the bus home, however, that I made the reckless decision to go back and read the comments left on the Facebook video. I think ‘cruel and unnecessary’ is a polite way of describing the things people had to say about me and my physical appearance. I understand, people of the internet. I’m chubby and awkward and not that attractive – you’ve had to experience it for five minutes. Try twenty-four years of being me on for size.
What I didn’t understand was their anger. Why would complete strangers say such vile things about someone they’d never met? Moreover, why were they so willing to do so from their personal Facebook accounts, where I could see their full names, profile photos, schools, places of employment? It was as if they didn’t care at all about being identified. In the past, trolls have always been anonymous, hiding behind avatars and fake names in order to spew bile across the web. I could always dismiss hatred directed at me when it was from someone without a face and a name, but suddenly it had become so much more personal.
So has livestreaming made people more willing to be unpleasant to each other online?
‘I don’t think that livestreaming is the reason that this is the case,’ says Dr. Bernie Hogan, of the Oxford Internet Institute. ‘I think it’s much more to do with algorithmic curation.’
Algorithmic curation is the technology behind websites we use every day – it measures what we interact with online, and adjusts what we see accordingly. It impacts on the adverts we see, what’s displayed on our social media feeds – and crucially, who we see on our social media feeds, which is why you can have a friends list of 500 people, but only seem to see the same 20 on your timeline again and again.
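Engagement-driven curation of this kind can be sketched, very roughly, as sorting by past interaction. The snippet below is a purely hypothetical illustration (the function, names, and weights are invented, not any platform's real ranking logic): posts from the authors a viewer already interacts with most float to the top, which is how a 500-person friends list collapses to the same 20 faces.

```python
def rank_feed(posts, interaction_counts):
    """Sort posts so authors the viewer engages with most appear first."""
    return sorted(
        posts,
        key=lambda post: interaction_counts.get(post["author"], 0),
        reverse=True,
    )

# Invented interaction history: likes/comments the viewer has left per author.
interaction_counts = {"alice": 40, "bob": 2, "carol": 15}
posts = [
    {"author": "bob", "text": "holiday photos"},
    {"author": "alice", "text": "cat picture"},
    {"author": "carol", "text": "news link"},
]

for post in rank_feed(posts, interaction_counts):
    print(post["author"], "-", post["text"])  # alice first, bob last
```

Real feed-ranking systems weigh many more signals, but the feedback loop is the same: what you already engage with is what you are shown more of.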
‘Algorithmic curation fosters something that I call “lowest common denominator culture”,’ he explains. ‘It’s the idea that we go down to the lowest common denominator of our assumed audience.’
So that tends to be humour – be it light-hearted jokes, cat pictures, pop culture references, or – you guessed it – jokes at the expense of others.
‘As we try to appease that assumed audience, and as that audience gets back to us in a negative way, we kind of adjust our behaviour online. The thing is, with algorithmic curation, and Facebook only filtering your audience to be an “in group” and an “out group”, and your “in group” always being the people that agree with you or share similar opinions with you, you become less concerned with the opinions of people outside of your “in group”.’
In other words, if you’re only concerned with the opinions of the people who agree with you, why would you bother hiding your identity? There’s simply no need for fake accounts anymore – our social networks have become so finely tuned, they know what we want to see before we do, and any disruption to this makes us hostile.
‘It’s not that livestreaming is changing how we interact,’ says Dr. Hogan. ‘It’s that we’re doing a bad job of creating technology that allows us to empathise with each other, when those others are different from us.’
Livestreaming isn’t going anywhere – it existed for years before, in the form of websites like Twitch and Periscope, and now Microsoft have announced plans to include one-click livestreaming facilities in their new software. Its integration with social media has made us all broadcasters from the comfort of our own homes, enabling us to connect with old friends and new audiences alike, with unbelievable ease – but, it also intensifies our online interactions. It feels like you’re watching a film or television show with hundreds of other people, and commenting is a huge part of the enjoyment users get from this technology, but it’s all too easy to forget this isn’t film or television – it’s online theatre, and the actors can hear you, loud and clear.
There’s someone else sitting behind the screen, able to think, feel and react to the comments we make. As the barriers become increasingly removed it’s harder to recognise where to draw the line, and people become caught up in a frenzy to say the funniest, most outrageous thing, in order to garner replies, reactions and likes from the audience of their peers. There’s no time to pause and think – everything in our accelerated culture demands instant gratification.
‘It becomes extra obvious that the screen separates and protects us’, says Bernie Hogan of livestreaming. ‘It gives us yet another way in which people can criticise and yet another way in which we can create distinctions between who’s in the audience and who’s out. We reinforce our own egoism, rather than try to seek out and remind ourselves that other people are human, they have feelings and we should care for them.’
After my experience, a lot of people said I deserved what I got from the commenters online. I should have known better. So should we expect it then, the onslaught of negativity and outpouring of emotion? Is this just the price we pay for going online?
I put the question to Dr. Hogan, and he gives me an emphatic no: ‘Anyone who says you have to accept those comments because that’s the way it is doesn’t understand the way they are programmed by the media.’
There’s not just algorithmic curation at play – Facebook isn’t blue just because Mark Zuckerberg’s colourblind. Millions are spent year after year to find out how we interact with the internet, and to manipulate us into spending more time online. Living life online isn’t in itself negative, and the internet has a phenomenal power for good, but the way we’re manipulated by technology can result in a society that exists purely on an in/out crowd dynamic, based on shunning those who don’t fit into a prescribed notion we have of who we want to interact with. Like Mean Girls, but less pink.
Isolation and alienation are always going to occur when we forget the most basic of rules: we’re human. We should all take a break from the screen, long enough to think before we type. Contempt is quick and easy; compassion requires work, thought and consideration.
Follow Hannah on Twitter @goodjobliz
This article originally appeared on The Debrief. | https://graziadaily.co.uk/life/real-life/livestreaming-made-easier-hate-online/ |
Module summary
This module offers a critical analysis of sociological, theological and philosophical accounts of the social and political dimensions of religion and how these relate to different models of public and private life. An exploration of the emergence of distinctively 'private' and 'public' realms of social action and experience is pursued in relation to developments in Christian belief and practice. Following this, the module examines how political discourses concerning the balance between individual and corporate rights on the one hand, and public responsibilities on the other, continue to be grounded in a number of highly specific theological, sociological and philosophical debates surrounding secularism and modernity. Also considered is how religion, politics and society are understood in relation to a number of contexts, including: Islam and modernity; Africa and Pentecostalism; representations of majority and minority religions in the UK; and interfaith dialogue.
Objectives
This module, which is the core module for the MA in Religion, Politics & Society, aims to provide students with an advanced understanding of the contentious and developing role of religion in public life that draws from several distinct traditions of scholarly analysis (e.g. sociology, philosophy, theology). It develops critical and analytical skills in a way that will enrich students' own understandings of the social and political dimensions of religious belief and action, and will equip them for further postgraduate research in an area where Leeds has internationally recognised significance.
Learning outcomes
On completion of this module, students will:
1. Demonstrate in-depth specialist knowledge in the study of religion in its social and political contexts
2. Demonstrate advanced scholarship in several disciplinary approaches to the study of religion: sociological, theological, philosophical, anthropological.
3. Be able to critically and reflectively work alongside others in constructive seminar discussions
4. Develop competent oral presentation skills in presenting summaries of research to the seminar group.
5. Develop skills in independent learning and research, including effective time management, note taking, and use of library or online archives.
6. Demonstrate critical ability and written skills in line with Masters study through written assessed essays, and an ability to improve through reflection on critical feedback from tutors.
Skills outcomes
Capacity to engage in interdisciplinary reflection on religion.
Syllabus
Topics will be structured around two themes: I. Theories and approaches to the study of religion, politics and society; II: Studying religion, politics and society in context.
Representative topics include:
• The modern problem of “religion in public”, some key articulations of a settlement and some possible alternative stories
• Secularism, Tolerance and Blasphemy
• Liberalism and its Critics
• Pentecostalism as political religion in postcolonial Africa
• Islam, Politics and Modernity in Muslim Societies
• Representing Religion: Who Speaks for Religious Communities?
• Interfaith dialogue and the resurgence of religion in British public life
Teaching methods
Due to COVID-19, teaching and assessment activities are being kept under review - see module enrolment pages for information
Delivery type    Number    Length (hours)    Student hours
Seminar          11        2.00              22.00
Tutorial         1         1.00              1.00

Private study hours: 277.00
Total contact hours: 23.00
Total hours (100hr per 10 credits): 300.00
Private study
10 hours per seminar preparation (110 hours)
167 hours essay preparation (inc. essay tutorial).
2 seminars will be student led, in which students present work to the class on their chosen essay topic (5 minutes for each presentation).
Opportunities for Formative Feedback
Students are encouraged to arrange a one-to-one essay support tutorial with either the module tutor or the session tutors, to discuss ideas and receive verbal feedback. Students can also see the tutor during office hours for specific feedback during the course. Written feedback on assessment will be provided, alongside the script and provisional mark, within three weeks of submission. Oral feedback will be given on presentations.
On November 21, 1877, Thomas Edison first recited the nursery rhyme "Mary had a little lamb" into his phonograph machine. The recording was captured onto tinfoil around the edge of a spinning cylinder. Today, many people believe that this event marked the birth of recorded sound. However, it wasn't the first time that the human voice had been recorded. For thousands of years, people had experimented with capturing sounds for later playback.
The knowledge that sound can be recorded goes back to around 3000 B.C. when ancient Egyptians tried storing harp music by cutting it into stone carvings. Historians know very little about this experiment, but there is some indication that the carvings played back the tones of a harp when they were rubbed with a rod.
During the fourth century B.C., Aristotle was one of the first to write about this method for capturing sounds and reproducing them later. He described what is thought to be the earliest known mechanical playback device, which used echoes to recreate the sound.
Several centuries later, a similar device was built by a Greek inventor named Ctesibius of Alexandria. This invention used a water clock to control the playback speed and a series of pipes that produced different sounds as they were played through.
By A.D. 300, musical recordings had spread to China, where people carved poems and songs onto stone walls. The recording would playback when the surface was hit with a stick, producing sounds similar to modern-day xylophones and woodblocks.
In 1456, German inventor Johann Gutenberg designed a tool that used metal disks to print words and etch images into them. His tool was able to store and print up to 36 images on a disk, which inspired others for centuries.
Although it wasn't the first time that sound had been recorded, Thomas Edison's phonograph of 1877 was one of the most important early steps toward modern recording devices. Edison first had the idea for a machine that could record and play back sound while he was working on a device that could send telegraph signals along telegraph lines.
An important part of Edison's machine was the tinfoil phonograph, which consisted of a vibrating membrane that turned sound into vibrations and recorded them onto sheets of tinfoil. The machine didn't have much luck when it came to playing back the sound, but it did show that audio recordings were possible.
Edison's machine was later improved by Alexander Graham Bell, who created the graphophone in 1885. It had similar parts as Edison's phonograph; however, its tinfoil sheets were replaced with wax cylinders and its sound-recording membrane was made out of an inked ribbon that was pulled across the cylinders.
This gave much clearer sound quality, but the phonograph still wasn't a practical listening device because it didn't have any way to amplify the recorded sound. The next major improvement came from Emile Berliner, who invented the gramophone in 1887.
Berliner's machine didn't pull sound vibrations across a recording cylinder or sheet of tinfoil, which made it easier to make louder recordings without wearing down the recording medium. It also allowed multiple recordings to be played back on one disk by using small grooves that were placed closer together during the manufacturing process.
Berliner's machine was the basis for most modern recording devices, including phonographs and cassette recorders. In 1900, he also created a disk that held two records side by side, which made it possible to play back stereophonic recordings.
1920s Record Player
The 1920s record player was an early-twentieth-century invention that revolutionized how people listened to music by bringing recorded music into the home.
The record player is an old technology. The idea of recording sound on cylinders, and later on discs, had been experimented with for years, but it wasn't until Thomas Edison's invention of the phonograph in 1877 that recorded sound really caught on, making people's lives more convenient. Edison continued to improve his invention until he finally got it right, doing things like modifying the stylus and tonearm wiring for better sound quality.
Later, in 1887, Emile Berliner invented another type of record player called the gramophone, which was a huge improvement over Edison's phonograph. In the early 20th century, record players became convenient appliances that people could use to listen to music as they went about their daily business, before the era of the radio and the mp3 player even began.
Record players like this 1920s model were commonly used until after the Second World War when the phonograph was replaced by new technologies like the radio and electric record player.
The ability to play music on a record player means that songs were recorded for the first time. People no longer had to perform music live in front of audiences, which meant that musicians could play around with new types of songs and sounds that couldn't be done on stage.
1970s Record Player
A record player is a device that plays sound recordings. The term "record player" typically refers to turntables, though console units of the era often combined them with other playback methods such as cassette tapes or radios.
Description of the 1970s record player
The first commercial record players were developed by American inventor Emile Berliner in 1887. At first they were sold only in Europe, but they became popular in the US as well.
Description of record player parts
A record player typically consists of these parts:
- Turntables that hold the record and rotate it using a motor,
- A tonearm with three parts: the pivoting arm with an upturned stylus fixed on its back end, the phono cartridge that converts the stylus's vibrations into an electrical signal, and the counterweight that balances the tonearm
- A protective cover for both sides of a vinyl record
- Speakers or headphones so sound can be heard
Parts of 1970s record player
A typical 1970s record player had these parts:
- A combination of a record player, cassette tape recorder and radio receiver.
- Turntable with built-in speakers
- Two headphone jacks for sharing music without disturbing others.
History of the 1970s record player
The 1960s brought new technology to popular music with the invention of the cassette tape and a new method of reproducing music, the 8-track. The 1970s saw technology that allowed for home stereos to reproduce sound in multiple rooms, while still using vinyl records.
Description of 1970s record player use
The main use of a record player is to play music through speakers or headphones. Combination units could also be used to record vinyl playback onto cassette tapes, and for casual listening.
1980s Record Player
For people who love listening to records, vinyl has made a comeback. One type of record player that is popular in the 2010s, but was also common during the 1980s, is the belt-driven turntable. Belt drives were used with record players because they are less expensive than direct drives; however, belt-drive speed can drift as the rubber belt stretches and wears over time.
A record player has a rotating platter that holds the vinyl and an arm with a needle, or stylus. The stylus's vibrations, when in contact with the grooves on the record surface, produce electrical signals. These signals are transmitted to an electromagnetic coil (included in systems with attached speakers), which amplifies them. The amplified electrical signals are then transmitted to a speaker which produces sound from the amplified signal.
In the 1980s, CD players began replacing record players for playing music. However, some people still prefer listening to vinyl records because they provide a more "authentic" music experience and allow listeners to buy albums with better artwork. In addition, record players tend to be more affordable than CD players.
The sound quality of modern vinyl records is often said to be better than that of CDs, which can degrade over time. Some people believe that the "warmth" in the sound produced by vinyl is due to distortions introduced during playback, though this remains a matter of debate.
Vinyl records tend to degrade over time and need to be stored properly in order to maintain their sound quality. A record kept in direct sunlight will deteriorate even faster than it would under normal conditions. Vinyl's limited lifespan and susceptibility to damage mean that records are best kept in a cool, dark place.
In the 1980s, records were sold individually or as part of boxed sets. Today, most records are sold as "singles," which contain one song on each side. Some record players can hold more than one single at a time (without needing to flip the record over).
Records also come in larger sizes, such as 12-inch records and 7-inch records.
One record store owner said that he often gets customers who are surprised by how much the prices of records have increased since their release. However, used vinyl can still be found at low prices if a person looks for it. People who sell their vinyl records often sell them to stores that buy used vinyl.
In addition to playing music, record players can be modified to play old video games. The sound produced by a record player is often similar to the sound from a television's speakers when it is attempting to produce gameplay sound effects. It is common to see people play old NES games at gaming conventions on record players.
In the 1980s, a popular way to listen to records was with a portable record player that weighed only one pound. It had two speakers and an attached handle for easy carrying. These lightweight devices were especially popular among people who carried their music around while traveling or exercising.
DJs in the 1980s began modifying their record players so that they could play several records at once. This technique, called beat matching, is still used by some DJs today to create new songs using previously recorded music. However, many modern DJs refuse to use this technique because it was common for noise to be introduced into the signal when the records were switched.
Record players are capable of playing music at several speeds, including 33⅓ rpm, 45 rpm, 78 rpm, and even the occasional 16⅔ rpm. These record players typically have a switch to change between speeds. DJs often use turntables to play vinyl records at a speed other than 33⅓ rpm.
In the 1980s, record players could only produce clean sound from 45 rpm and 78 rpm records when fitted with an appropriate stylus. If these records were played with the wrong stylus, a loud scratching noise would be produced.
Some record players are designed to work better on specific surfaces, such as thick carpets or wooden floors. For example, a "direct-drive" turntable is one that has the record directly connected to the motor instead of using flexible belts that can slip off easily. Some stores sell record players with adjustable feet that can be attached to increase stability on certain surfaces.
In the 1980s, record players were sold as part of "stereo systems." These devices included a receiver (which was used to amplify the sound) and at least two speakers. Many people also had separate components such as tape decks and equalizers. It was common for these devices to be sold together in one package.
In the 1980s, record players were either "belt drive" or "direct drive." In a belt-drive turntable, the motor is located in a separate part of the device and connected to the spindle by a thin rubber belt. This design creates less vibration in the stylus arm when it is in motion.
Portable record players were popular in the 1990s and early 2000s. The first consumer portable record player was introduced by RCA Records as a reaction to the cassette-tape-based Walkman of 1979, which allowed for personal music on the go.
However, due to major changes that occurred during this time period, such as digital music and compact discs, the viability of portable record players began to decline.
The Rise of Portables Record Players in the 1990s
During the height of their popularity, nearly every major retail store sold portable record players. And while most records were played on larger, more traditional turntables, there was a demand for new, more personal technology; Japanese company Sony met it when it released its first Walkman in 1979.
While the portable record player was created to provide people with their own music, it soon became apparent that it could also be used for other purposes. For example, many companies began using them as promotional items, hoping to strike up a conversation between customer and salesperson. Additionally, some businesses hired out portable record players at trade shows or special events.
Portable record players did not begin to decline until the late 1990s and early 2000s when two major changes occurred: digital music and CDs. Digital music meant that people could carry far more songs in their pockets than ever before. And compact discs took off in popularity at about this time as well. Both of these factors contributed to the decline of the portable record player.
Decline of Portables in the 2000s and 2010s
Portable record players were manufactured until the early 2000s when they were discontinued by most companies. One factor that contributed to their decline was the sudden rise in popularity of MP3 players, which meant more people had access to digital music than ever before.
Another major factor was the rise of CDs, which meant that music came in a smaller package and could be played in a wider variety of ways.
Even though portable record players are no longer in production, there is still a large demand for them among vintage collectors worldwide. Many people find them to be aesthetically appealing and often display them in their homes or business. In fact, it is not uncommon to see a portable record player used as a centerpiece for a home office desk.
Also, some people still use older models of these devices to play music through the speakers built into them. They are even able to hook up modern components such as antennas and Bluetooth players.
2020s Record Player
The 2020s record player is an electro-mechanical device that plays sound recordings by rotating a vinyl or other analog disc. The follow-up to earlier generations of record players, it evolved from designs of the 1960s and 1970s that offered more playback speeds, ran quieter, provided different playback modes, introduced new cartridge technology, corrected warping issues, and increased the fidelity of sound reproduction.
A record player typically consists of these parts: a flat surface upon which the record rests while being played, a turntable that holds the record, an electric motor with a belt or gear drive that turns the turntable at a constant speed, an arm with a pickup to translate the vibration of the stylus into an electrical signal, and some sort of device for amplifying that signal and sending it to a loudspeaker. The record sits on the rotating platter, and the motor keeps it turning at the selected speed while the stylus traces the groove.
One of the major parts that influences a record player's playback speed is the motor. Different models use different types of motors, including direct-drive, synchronous, and fractional-horsepower motors. A direct-drive turntable has a platter attached directly to the motor so that the two rotate at exactly the same speed, producing less vibration than other models. A synchronous motor rotates the platter at a steady speed, regardless of fluctuations in the electric current or other factors that could cause it to vary slightly. A fractional-horsepower motor is often used where space is limited, thanks to its small size, high torque, and accuracy.
Another part that affects sound quality is the cartridge, made up of a stylus, cantilever, and magnet. The stylus is attached to the end of the cantilever, which is suspended in the magnetic field of the permanent magnet. As vibrations move through the vinyl to make sounds, they cause minute movements in this magnetic field, and those movements are transformed into electrical energy by the cartridge.
In addition to standard playback modes, modern record players can play 33 1/3 rpm and 45 rpm records, 78 rpm records, and even 16 2/3 rpm microgroove records through the use of an additional adapter.
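Since playback speed comes up repeatedly above, here is a small illustrative Python sketch (the function name and the chosen radii are assumptions for illustration, not from the text) showing how fast the groove travels under the stylus at each common speed:

```python
# Groove speed illustration: at a fixed rotational speed, the stylus
# traces the groove faster near the outer edge than near the label.
import math

def groove_speed_cm_per_s(rpm: float, radius_cm: float) -> float:
    """Linear speed of the groove under the stylus at a given radius."""
    revs_per_second = rpm / 60.0
    circumference_cm = 2.0 * math.pi * radius_cm
    return revs_per_second * circumference_cm

# The common playback speeds mentioned above.
for rpm in (16 + 2 / 3, 33 + 1 / 3, 45, 78):
    outer = groove_speed_cm_per_s(rpm, 14.5)  # assumed outer radius of a 12" LP
    inner = groove_speed_cm_per_s(rpm, 6.0)   # assumed inner-groove radius
    print(f"{rpm:6.2f} rpm: {outer:5.1f} cm/s outer, {inner:5.1f} cm/s inner")
```

This also hints at why inner-groove fidelity is harder to maintain: at the same rpm, less vinyl passes under the stylus per second near the label.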
Today's record players are designed to handle different media types, play multiple speeds, connect to headphones and speakers, slow playback speed for increased fidelity, and adjust tracking force. They can even solve common problems, including warped records or tracks that skip.
The 2020s record player does more than simply play records. Record players now have Bluetooth capabilities to allow music streaming, wireless-headphone compatibility for private listening, and features that help prevent skipping.
2022's AngelsHorn record player:
Angels Horn Store:
Bring music to life. — AngelsHorn Mission
AngelsHorn offers a variety of record players, each with an elegant, retro style designed to provide a high-quality music experience.
Our record players are decorative, functional, and environmentally friendly, a perfect combination of classic record players and modern audio technology. Choose the AngelsHorn record player to make your life more colorful!
AngelsHorn, Better Products, Better Service, Lower Prices!
Proposals Accepted Anytime. Check with the program officer in the specific field of the proposed research for acceptable submission periods.
Purpose: This Funding Opportunity Announcement (FOA), issued by the National Cancer Institute (NCI), National Institutes of Health (NIH), solicits grant applications that propose exploratory research projects on the initial application of emerging analytical technologies as laboratory or clinical tools. An “emerging technology” is defined as one that has passed the initial developmental stage, but has not yet been evaluated within the context of its intended use. Projects proposed in response to this FOA should have the potential to produce a major impact in a broad area of cancer-relevant research. If successful, these technologies would accelerate research in cancer biology, cancer treatment and diagnosis, cancer prevention, cancer control and epidemiology, and/or cancer health disparities. This FOA solicits R21 applications that have high potential impact and allows for an element of technical risk; preliminary data are not required. All projects must include quantitative milestones (i.e. technical metrics that determine whether the specific aims have been accomplished). Projects proposing to use established technologies where the novelty resides in the biological or clinical question being pursued are not appropriate for this solicitation and will be returned as non-responsive. This funding opportunity is part of a broader NCI-sponsored Innovative Molecular Analysis Technologies (IMAT) Program.
Purpose: This Funding Opportunity Announcement (FOA) issued by the National Cancer Institute (NCI), National Institutes of Health (NIH), solicits grant applications proposing technically innovative feasibility studies focused on early stage development of cancer-relevant technologies. If successful, these technologies would accelerate the research and understanding of basic cancer biology, cancer treatment and diagnosis, cancer prevention, cancer control and epidemiology, and/or cancer health disparities. This FOA solicits R21 applications and is suitable for projects at their inception, conceptual or idea based, where technical feasibility of the proposed technology or methodology has not yet been established. The R21 mechanism requires high potential impact and allows for an element of technical risk; projects proposed in response to this FOA may reflect this level of risk but must have concurrent potential to produce a major impact in a broad area of cancer-relevant research. All projects must include quantitative milestones (i.e. technical metrics that determine whether the specific aims have been accomplished). Projects proposing to use technology that is already established or projects where the novelty resides in the biological or clinical question being pursued are examples of topics not appropriate for this solicitation and will be returned as non-responsive. This funding opportunity is part of a broader NCI-sponsored Innovative Molecular Analysis Technologies (IMAT) Program.
Purpose: This Funding Opportunity Announcement (FOA), issued by the National Cancer Institute (NCI), National Institutes of Health (NIH), solicits grant applications proposing research projects on the advanced development of emerging molecular and cellular analysis technologies through technical/analytical validation in an appropriate cancer-relevant biological system. An “emerging technology” is defined as one that has passed the pilot developmental stage and shows promise, but has not yet been evaluated within the context of its intended use. If successful, these technologies would accelerate research in cancer biology, cancer treatment and diagnosis, cancer prevention, cancer control and epidemiology, and/or cancer health disparities. This FOA solicits R33 applications; this mechanism is suitable for projects where “proof-of-principle” of the proposed technology or methodology has been established and supportive preliminary data are available. Projects proposed to this FOA should reflect the potential to produce a major impact in a broad area of cancer-relevant research. Projects proposing to use established technologies where the novelty resides in the biological or clinical question being pursued are not appropriate for this solicitation and will be returned as non-responsive. This funding opportunity is part of a broader NCI-sponsored Innovative Molecular Analysis Technologies (IMAT) Program.
The National Institutes of Health (NIH) Exploratory/Developmental Grant (R21) funding opportunity supports the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) Investigator-initiated research, also known as unsolicited research, is research funded as a result of an investigator submitting a research grant application to NIH in an investigators area of interest and competency. All investigator-initiated exploratory/developmental applications described in this announcement will be assigned to NIH institutes and centers (ICs) according to standard Public Health Service referral guidelines and specific program interests. Investigators are strongly encouraged to consult the list of participating ICs and special research interests. The Exploratory/Developmental Grant (R21) mechanism is intended to encourage exploratory and developmental research projects by providing support for the early and conceptual stages of these projects. These studies may involve considerable risk but may lead to a breakthrough in a particular area, or to the development of novel techniques, agents, methodologies, models, or applications that could have a major impact on a field of biomedical, behavioral, or clinical research. This funding opportunity announcement will use the NIH Exploratory/Developmental (R21) award mechanism. The total project period for an application submitted in response to this funding opportunity may not exceed 2 years. Direct costs are limited to $275,000 over an R21 2-year period, with no more than $200,000 in direct costs allowed in any single year.
Purpose: DARPA is soliciting innovative proposals for university research centers in the area of integrated photonics engineering. Proposed research should investigate innovative approaches that enable revolutionary advances in science, devices, or systems. Specifically excluded is research that primarily results in evolutionary improvements to the existing state of practice.
Purpose: The Biosensing Program supports innovative, transformative, and insightful investigations of fundamental problems with broad long term impact and applications that require novel use of bio-inspired engineering principles and sophisticated devices to meet the engineering and technology needs of the nation. The program is targeting research in the area of the monitoring, identification, and/or quantification of biological phenomena and will support potential technological breakthroughs that exist at the intersection of engineering, life science, and information technology.
Projects submitted to the Program must advance both engineering and life sciences. Projects in the program may range from single-investigator to multi-investigator collaborative research efforts.
The development of these novel principles and devices will require highly collaborative interactions between engineers, life scientists, and experts in nanotechnology, biomaterials, bioinformatics, and the chemical and physical sciences. The program recognizes the important role of education and workforce development specifically relevant to the multidisciplinary nature of the area of biosensing. Interdisciplinary teams are essential and must be fostered from discovery to application.
Purpose: Innovative basic research in photonics, imaging, and sensing that is very fundamental in science and engineering is needed to lay the foundation for new technologies beyond those that are mature and ready for application in medical diagnostics and therapies.
Developing molecularly specific sensing (molecular photonics), imaging, and monitoring systems with high sensitivity and resolution would be an enormous accomplishment with powerful applications to both biology and medicine. Low cost diagnostics will require novel integration of photonics, molecular biology, and material science. Complex biosensors capable of detecting and discriminating among large classes of biomolecules could be important not only to biology and medicine, but also to environmental sensing and homeland security.
The BISH program supports innovative research of biophotonic, imaging, and sensing technologies for applications in human health.
The BME program supports fundamental, transformative, and discovery research applied to biological systems.
The Pan-American Advanced Studies Institutes (PASI) Program is a jointly supported initiative between the Department of Energy and the National Science Foundation (NSF). Pan-American Advanced Studies Institutes are short courses ranging in length from 10 days to one month, involving lectures, demonstrations, research seminars, and discussions at the advanced graduate, postdoctoral, and junior faculty level. PASIs aim to disseminate advanced scientific and engineering knowledge and stimulate training and cooperation among researchers of the Americas in the mathematical, physical, and biological sciences, the geosciences, the computer and information sciences, and the engineering fields. Proposals in other areas funded by NSF may be considered on an ad hoc basis; in this case, lead investigators must consult with the PASI program before proposal submission. Whenever feasible, an interdisciplinary approach is recommended. The estimated number of awards is 10 to 16. The anticipated funding amount is $1.2 million, pending the availability of funds.
Application deadline: March 23, 2010. No Letter of Intent necessary.
Purpose. The NCRR Shared Instrument Grant (SIG) program encourages applications from groups of NIH-supported investigators to purchase or upgrade a single item of expensive, specialized, commercially available instrumentation or an integrated system that costs at least $100,000. The maximum award is $600,000. Types of instruments supported include confocal and electron microscopes, biomedical imagers, mass spectrometers, DNA sequencers, biosensors, cell sorters, X-ray diffraction systems, and NMR spectrometers among others. Mechanism of Support. This funding opportunity will use the NIH S10 mechanism. Funds Available and Anticipated Number of Awards. The NCRR intends to commit approximately $43 million in FY2011 to fund approximately 125 new awards. Since the cost of the various instruments will vary, it is anticipated that the size of awards will also vary. The total amount awarded and the number of awards will depend on the funds available for the SIG program.
Purpose: The program will fund the 5-year P50 ICMIC grants to support interdisciplinary scientific teams conducting cutting-edge cancer molecular imaging research. ICMIC funding is designed to: (1) support innovative cancer molecular imaging research projects; (2) support unique core facilities; (3) enable the awardees to initiate pilot research in new promising directions; and (4) provide interdisciplinary career development opportunities for investigators new to the field of molecular cancer imaging.
What is Diarrhoea?
Diarrhoea is an ailment in which a person passes loose or watery stools. It is mainly categorized as absolute or relative diarrhoea. Absolute diarrhoea involves more than five loose motions a day, while relative diarrhoea involves an increase in bowel movements per day. During diarrhoea, the body loses electrolytes and fluid. Diarrhoea can affect persons of all ages. If untreated, it can result in severe conditions, like low blood pressure, kidney failure, or seizures.
Causes
Diarrhoea is most often caused by bacterial or viral infection. Other major causes are:
- Anxiety and stress
- Consuming excessive coffee and alcohol
- Chronic ethanol ingestion
- Hormone secreting tumours
- Ischemic bowel disease
- Allergic reactions
- Viral gastroenteritis
- Food poisoning
- Improper medications
- Radiation therapy or surgery
- Digestive disorders
- Inflammatory bowel movements
- Endocrine disorders
Symptoms
A person not only suffers with watery faeces, but also severe pain and cramps in the stomach. Some of the common symptoms of diarrhoea are:
- Abdominal cramps
- Vomiting
- Nausea
- Headache
- Pricking sensation
- Loss of appetite
- Fatigue and lethargy
- Weakness in the body
- Fever
- Dehydration
- Increase in the number of bowel movements
- Blood in stool
- Watery stools
- Bloating
Home Remedies for Diarrhoea
Diarrhoea can be embarrassing, such as when you are attending a wedding ceremony and find yourself frequently visiting the washroom. To get rid of this problem, here are some easy and quick home remedies for you.
1. Yogurt
Yogurt is one of the natural remedies for diarrhoea. It is rich in probiotics (friendly bacteria or live cultures), which forms a protecting layer on the intestines to protect them from harmful bacteria. Have yogurt on a regular basis as it produces lactic acid.
2. Chamomile Tea
For the treatment of various intestinal problems, like diarrhoea, chamomile tea can work great. The antispasmodic property of chamomile helps in relieving diarrhoea. Prepare tea by adding one tsp of chamomile flowers in one cup of water. Put the water to boil and add one teaspoon of peppermint leaves in it. Let it soak for 15 minutes. Consume this herbal tea thrice a day.
You can also use tea bags to prepare chamomile tea.
3. Blueberries
The anti-bacterial and antioxidant properties of blueberries help in curing diarrhoea naturally. Blueberries are enriched with anthocyanosides. You are advised to prepare tea from dried blueberries and drink it. For this, grind some dried blueberries and boil them in one cup of water. The tannins present in blueberries fight against diarrhoea.
You can also chew dried blueberries. It is as effective as tea.
4. Orange Peel Tea
One of the natural remedies for getting relief from digestive problems is orange peel. How do you stop diarrhoea fast with orange peel? Remove the peels of 1-2 oranges. Place them in boiling water. Cover with a lid. Steep for some time. Let it cool down. To make it a little sweet, add some honey or sugar, and drink it.
5. Fenugreek Seeds
Fenugreek seeds are a natural cure for diarrhoea. This remedy suggests grinding one teaspoon of fenugreek seeds and adding it to little water. To get relief in the symptoms of diarrhoea, consume this paste.
You can also dissolve one teaspoon of fenugreek seeds powder in a glass of water. Stir it well and drink.
Note- This remedy is not recommended for small children suffering with acute diarrhoea.
6. Goldenseal
The anti-bacterial, astringent, and anti-inflammatory features of goldenseal make it a natural digestive tonic. Consume it to get rid of diarrhoea.
7. Carrots
Carrots are also effective in reducing symptoms of diarrhoea. You are advised to boil some carrots. Prepare puree from the boiled carrots. Drink half cup of puree on a daily basis. You can also intake one teaspoon of puree after every 15 minutes. The essential oils present in the puree will help to regain proper digestion.
You can also prepare fresh carrot juice and drink it. To sweeten it, you can add a little sugar and a 4-5 drops of lemon juice.
8. Black Tea
Sip a cup of strong black tea regularly to get relief in the symptoms of diarrhoea. The tannins present in black tea helps in battling against the infecting bacteria and reduce inflammation.
Note: Drink it plain without adding anything to it.
9. Apple Cider Vinegar (ACV)
A natural and simple remedy for diarrhoea is apple cider vinegar. A person, suffering from diarrhoea, should blend 1 tsp of ACV in a glass of warm water. Stir it well without adding sugar. Drink it before and in between your meal. For best results, drink this solution thrice a day.
10. B.R.A.T Diet
It actually stands for banana, rice, apple sauce, and toast. Green bananas and rice are good for diarrhoea. Applesauce is rich in pectin, which slows down the food movement through the gastro-intestinal tract. Eating plain toast during diarrhoea helps in absorbing tainted fluids from the stomach.
11. Flour
Flour is also a natural remedy for the treatment of diarrhoea. You can simply add two teaspoons of flour in water. Stir it well. Drink this solution to cure diarrhoea, naturally.
12. Custard Powder
Add one tablespoon of custard powder in a glass of water. Stir it well. In just one gulp, drink this solution. For best results, drink this solution thrice a day.
13. Salt and Sugar Water
To treat diarrhoea naturally, prepare an ORS (Oral Rehydration Salts) solution by adding 6 teaspoons of sugar and ½ teaspoon of salt to one litre of clean water. Stir it well until both ingredients dissolve. Drink this solution.
You can also buy a packaged ORS mixture from the market and drink it by adding in a sufficient amount of water.
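Assuming the common rule-of-thumb base proportions of 6 teaspoons of sugar and half a teaspoon of salt per litre of water (the one-litre base volume is an assumption, not stated above), a small helper can scale the recipe to other batch sizes:

```python
# Scale the home ORS recipe (assumed: 6 tsp sugar + 1/2 tsp salt per
# litre of water) to an arbitrary volume. Illustrative only.
def ors_recipe(litres_of_water: float) -> dict:
    """Return teaspoons of sugar and salt for the given volume of water."""
    return {
        "water_l": litres_of_water,
        "sugar_tsp": 6.0 * litres_of_water,
        "salt_tsp": 0.5 * litres_of_water,
    }

print(ors_recipe(0.5))  # half-litre batch: 3.0 tsp sugar, 0.25 tsp salt
```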
14. Ginger
Ginger is very effective against the cramps developed during diarrhoea or food poisoning. For using ginger as a remedy against diarrhoea, you have to grate a piece of ginger and add a little honey to it. To improve your digestive system, eat this mixture.
You can also make ginger tea to heal the symptoms of diarrhoea. You are advised to boil a cup of water. Add a piece of ginger to it. Let it cool down. Add a small amount of honey in it, after straining.
15. Brown Rice
To reduce the symptoms of diarrhoea, consume brown rice. Brown rice is rich in vitamin B and starch, which makes stool less watery. Add ½ cup of brown rice in 3 cups of water. Boil it for 45 minutes. Once you are done with boiling procedure, eat rice adding a pinch of salt. Then, drink a lot of water. It will accelerate the healing process.
16. Lemon
Lemon is also a very effective against diarrhoea. Squeeze some lemon juice in a glass of water. Add ½ tsp of baking soda and drink it.
You can also prepare tea from dried lemons. Put 16-20 ounces of water to heat. Poke holes in 2-3 dried lemons with a fork. When the water starts boiling, place the lemons in it. Let them steep for five minutes, until they become soft. Soon, the colour of the water will change, and the lime tea will be ready. Add some sweetener to it. Drink it regularly.
17. Cinnamon
Cinnamon is used to relieve diarrhoea. You just have to heat a cup of water. Add one teaspoon of cinnamon powder and a little sugar in it. Stir it well. Allow it to cool down. Drink it in one gulp.
18. Wheatgrass
To get relief in stomach cramps during diarrhoea, wheatgrass can be an effective remedy. To get instant relief, add wheat grass powder in water in the ratio of 2:1. Now, stir it well and drink it. After this, drink a glass of plain water.
19. Cornmeal
Can cornmeal be effective for healing diarrhoea? Yes, cornmeal is a natural remedy against diarrhoea. Add some cornmeal in a hot frying pan. Stir it until its colour changes to light brown. Consume two tablespoons of it. If required, drink a little quantity of water.
20. Guava Juice
For treating diarrhoea, drink a whole glass of guava juice. Guava juice is known as an effective remedy for constipation, diarrhoea, dysentery, etc. For effective results, drink the juice after every 3-4 hours.
You can also boil some leaves of guava in water and have this water.
21. Mango Juice
Who doesn’t love mango? One cannot resist from having a mango. It is very refreshing and sweet in taste. With its healthy properties, it is known for curing diarrhoea, naturally. Extract some mango juice and add one teaspoon of ginger. Drink it to experience good result.
22. Black Pepper
To cure diarrhoea, you just have to boil 2 cups of water. Add 3 teaspoons of black pepper powder and half a teaspoon of kelp (marine algae) to it. Stir well and drink this mixture.
23. Red Radish
To use red radish for the treatment of diarrhoea, blend a handful of chopped red radish. Add a cup of cold milk in it. Mix it well while adding half teaspoon of cornstarch. For quick results, have this mixture thrice a day.
24. Burnt Toast
We usually throw away a burnt toast. But, to treat diarrhoea, it can work effectively. When you suffer with diarrhoea, have a burnt toast. Put two toasts in the toaster to get dark. Consume it plain.
If it tastes bad, simply separate the burnt part and mix it in water. Gulp this water.
25. Pineapple
To stop diarrhoea, you are advised to eat chopped pineapple cubes. It works very effectively. You can also have fresh pineapple juice.
26. Oregano Oil
Oregano oil is effective for several problems, like dandruff, acne, cough, diaper rash, etc. The anti-microbial properties of oregano oil help in treating diarrhoea by killing the chronic diarrhoea-causing bacteria.
27. Charcoal
Charcoal is used to get rid of various diseases related to stomach, like indigestion, gas, diarrhoea, etc. To relieve diarrhoea, you are advised to intake charcoal tablets. Charcoal will work as an absorbent and absorbs the toxins, relieving diarrhoea.
28. Peppermint Tea
Like chamomile tea, peppermint tea also works effectively for healing diarrhoea. You are required to prepare peppermint tea by adding 2-3 leaves of peppermint in boiling water. Add a little honey or sugar. Strain it and drink it slowly.
You can also use peppermint tea bags for preparing tea.
29. Cheese
Everyone’s mouth waters when they hear about cheese. Cheese is used for the treatment of diarrhoea in adults. You have to mix cottage cheese and sour cream together. After mixing, have it 3-4 times a day.
30. Honey
Honey is rich in anti-bacterial properties, which help in curing diarrhoea. You have to add 4 tablespoons of honey to 8 ounces of water. When a child (over one year old; honey is not safe for infants) suffers with diarrhoea, give this solution every half an hour.
For adults, only 1-2 tablespoons of honey is advised.
Note- Excessive quantity of honey causes severe constipation.
31. Potato
Prepare potato soup and consume several bowls of it regularly to avoid diarrhoea. Potato is a starchy food, which helps to restore the nutrients of the body.
You can also have baked potato to relieve the symptoms of diarrhoea.
32. Pomegranate Juice
Pomegranate juice is highly rich in vitamin A, C, and E. The antioxidants present in pomegranate helps in clearing the toxins causing gas, stomach ache, indigestion, etc. Mix one cup of pomegranate juice with one cup of sugarcane juice. Stir it well and consume it four times in a day. You can simply drink pomegranate juice, thrice a day.
You can also use pomegranate juice while preparing rice. Cook some rice. Add salt (as per your taste) and ½ tsp of ginger paste to it. Put some pomegranate juice in rice, which will help to soothe inflamed walls of the intestine.
33. Pumpkin
For the treatment of diarrhoea, prepare tea from pumpkin’s leaves. Put a handful of pumpkin leaves in water. Put the water to boil. Cover it with a lid. Let it steep for 30-35 minutes. Sip a cup of this tea after every two hours.
34. Buttermilk
To treat diarrhoea, drink 3-4 glasses of buttermilk, adding a pinch of salt in it. Add black salt and some roasted cumin seeds in it, for taste.
35. Gooseberry (Amla)
Gooseberry or amla juice is beneficial for curing various diseases. Drink some fresh amla juice, adding lemon juice and a little sugar to it.
36. Black Seed Oil
Black seed is an herbal plant, which helps in curing various health issues, like gas, constipation, asthma, and diarrhoea. This remedy suggests adding one teaspoon of black seed oil in yogurt. Have this mixture 2 times a day to relieve the symptoms of diarrhoea.
37. Grapefruit Seed
Grapefruit seed is a natural remedy to heal diarrhoea. It acts as a natural disinfectant. Its extract is bitter in taste, yet effective. Put five drops of grapefruit seed extract in a glass of water or have 3 capsules regularly.
38. Bran
Bran, the hard outer layer of cereal grain, is helpful to cure diarrhoea. Add one tbsp of unprocessed bran in your diet. It is known for absorbing excess fluids from the body. It can be taken with a variety of foods, according to the requirement.
Homeopathic Remedies for Diarrhoea
It happens sometimes that some health problems are not treated with home remedies. Some people are satisfied with the results, some are not. Homeopathic remedies, which are side-effect free, are always considered best for the treatment of diarrhoea in children. Have a look at some homeopathic remedies to treat diarrhoea.
1. Argentum nitricum: This remedy is used when a person suffers with severe bloating and pain in the groin area.
2. Bryonia: It is used when a person feels irritable and wants to lie down. The person's mouth also becomes dry, and the symptoms worsen in the morning.
3. Arsenicum album: It is used when one feels restless and experiences burning pain in the digestive tract. The symptoms are also accompanied by vomiting.
4. Colocynthis: This remedy is used when a person feels cutting pain in the abdomen and feels comfortable when hard pressure is put on the abdomen.
5. Chamomilla: This remedy is used to relieve hot and green watery stools.
6. Gelsemium: It is used to alleviate symptoms, like weakness, fever, and headache, which are associated with diarrhoea.
7. Phosphorus: This remedy is used when a person becomes weak and feels thirsty. He/she also experiences an empty feeling in the stomach.
8. Pulsatilla: This remedy is used against diarrhoea that occurs after having fatty foods.
9. Ipecacuanha: If diarrhoea is accompanied by nausea, this remedy is suggested.
10. Podophyllum: This remedy is used when there is no pain during diarrhoea, but the abdomen gurgles before the stool passes.
11. Sulphur: This remedy is used when a person suffers with hot watery stools in the morning and the area around the anus becomes red, itchy, and irritated.
Preventive Measures
In order to prevent diarrhoea, follow some necessary measures, which are given below:
- Use anti-bacterial soap every time you wash your hands.
- Drink plenty of liquids, including juices.
- Do not drink water or juices from unhygienic places.
- Cook food at the right temperature.
- Try ORS solution.
- Have small and light meals.
- Use clean and hygiene toilets.
- Avoid sharing utensils, linen, and cutlery.
- Take proper rest.
- Avoid long journeys through car and bus.
- Breast-feed your infants.
Relational aggression involves behavior intended to harm victims’ social status or reputation through acts like manipulation, gossip, exclusion, and blackmail. Most of the research on relational aggression has focused on children and early adolescents, with college students receiving some attention in recent years. A smaller body of work supports the relevance of relational aggression among adults in workplace settings, marital relationships, and assisted-living facilities. While few studies with adults have been integrated into the literature on relational aggression, they provide evidence that these behaviors continue into adulthood. The current study explored relational aggression among women between the ages of 18 and 65 using social information processing theory (SIP; Crick & Dodge, 1994) to examine the pathway from relational victimization to relational aggression. A moderated mediation model tested via structural equation modeling showed that relational victimization predicted relational aggression, that this relationship was partially mediated by hostile attribution bias and anger rumination, and that normative beliefs about relational aggression moderated some of these mediated relationships. Specifically, normative beliefs strengthened the relationships of relational victimization and hostile attribution bias to relational aggression. Invariance testing compared the model across three developmental groups (i.e., emerging, established, and middle adulthood) and supported model invariance. Results highlight the continued relevance of relational aggression for adult women and support the role of anger rumination, hostile attribution bias, and perceived acceptability of relational aggression in the relationship between relational victimization and aggression.
Copyright
Alison M. Poor, 2022
Recommended Citation
Poor, Alison, "Predictors of Relational Aggression in Women Across Adulthood" (2023). Dissertations. 2037.
Sometimes, muscle pain is the result of an injury to the muscle, such as a muscle pull, tear, or rupture. Other times, the tendon becomes strained in a specific injury or as a result of repetitive strain. Examples of repetitive strain to the muscle or tendon include prolonged and/or poor posture, repetitive motions, and improper lifting. Most people do not realize that they have experienced muscle injury until they feel symptoms, and by then the surrounding joints and nerves may have become affected.
Chiropractic care is effective for treating muscle pain because it is comprehensive, addressing not only the muscles but also the joints and their related spinal nerves. Chiropractors determine which muscles, joints, and nerves are involved via examination and also identify any underlying causes. To decrease pain and speed healing, some combination of the following treatments may be recommended: exercise, electrical muscle stimulation, ultrasound, and ice or heat.
Contact our chiropractic office online to discover how chiropractic care can help you find muscle pain relief.
Crisis Lessons From Will Smith’s New Flick, ‘The Slap’
The bizarre and now-infamous incident from Sunday’s Academy Awards in which actor Will Smith slapped comedian and award presenter Chris Rock features some lessons for crisis communicators. Let’s review the new production.
First, a quick plot summary: Rock cracked an offensive joke about Smith’s wife, actor Jada Pinkett Smith, specifically about her medical condition, alopecia, which causes hair loss. Hubby then ran up on stage and slapped the comedian. He returned to his seat, where he yelled profanity-laced comments at Rock such as “keep my wife’s name out of your [blankety-blank] mouth.”
Some thoughts related to crisis communications:
- No warning. Many, though not all, crises arrive with no warning. The slap is an example of how people, organizations and companies must be prepared to respond to the unexpected.
- Scenario planning. One way to do that is to consider not only crisis scenarios related to your industry (e.g., factory fires for manufacturers), but also unlikely ones. While shocking, this scenario wasn’t so unimaginable: The conduct standards of the Academy of Motion Picture Arts and Sciences, presenter of the Oscars, prohibit “physical conduct that is uninvited.”
- Confused information. The episode demonstrated the typical confusion at the onset of a crisis. Many observers thought it might have been staged — part of the show — but Smith’s rant once he sat back down made it clear to most that the dustup was real.
- Ready response. Deciding how to respond to a crisis isn’t easy. At first the reporting was that the producers did nothing because they couldn’t decide on a course of action. Yesterday the academy said in a statement that it had asked Smith (pictured) to leave but he refused. It also said it “could have handled the situation differently.”
- Rights and wrongs. Sometimes the response decision is hard because the morality is complicated. Rock told a joke, reportedly ad-libbed, poking fun at someone’s medical condition, which was wrong, and Smith acted criminally by assaulting him. But many found themselves sympathetic to Smith, who during his acceptance speech said he was defending his family, which drew a standing ovation.
- Serial apologies. During his acceptance speech, Smith apologized to the academy and his fellow nominees, but not to his target, Rock. The next day on Instagram he issued another apology in which he did mention the comedian: “I would like to publicly apologize to you, Chris. I was out of line and I was wrong.” But an important mantra of crisis communications is to get an apology right the first time, so you don’t have to issue a series of mea culpas. You shouldn’t have to be reminded to apologize to the party you’ve directly wronged.
In its statement yesterday, the academy said it had initiated disciplinary proceedings against Smith. He could be suspended from or thrown out of the academy.
Photo Credit: Featureflash Photo Agency/Shutterstock
Dal Makhani
Dal Makhani is one of the most popular lentil dishes in the Indian Subcontinent. The dish is made of whole black lentils (saabut urad), red kidney beans (rajma), and a combination of spices and butter. Traditionally, Dal Makhani is cooked slowly for 6 to 8 hours to extract the best flavors of black lentils; however, cooking the lentils in a pressure cooker can quickly simulate the effects of slow cooking.
In this recipe, whole black lentils and red kidney beans are first cooked in a pressure cooker, which are further cooked with butter, onions, tomatoes, and spices. The perfect balance of butter and spices makes it a deliciously rich and spicy dish. Dal Makhani tastes best when served hot with any Indian bread or rice dish.
Happy Cooking!
Ingredients
Serves 4 to 6
- Whole Black Lentils (Saabut Urad): 3/4 cup (8 oz cup)
- Butter: 2 tablespoons and a 1/2 inch cube
- Garlic Cloves: 8, peeled and finely crushed
- Red Tomato: 1 medium, pureed
- Turmeric Powder: 1/8 teaspoon
- Coriander Powder: 1/2 teaspoon
- Cumin Powder: 1/4 teaspoon
- Water: about 4 cups (8 oz cup)
- Red Kidney Beans (Rajma): 1/4 cup (8 oz cup)
- Onion: 1 medium, finely chopped
- Ginger: 1 and 1/2 inch, peeled and finely crushed
- Thai Chili: 3 or according to taste, cut into half
- Kashmiri Red Chili Powder: 1 teaspoon
- Garam Masala Powder: 1/4 teaspoon
- Salt: 1 and 1/4 teaspoons or according to taste
Method
Step 1
Soak the whole black lentils (saabut urad) and red kidney beans (rajma) overnight.
Step 2
In a pressure cooker, add the soaked whole black lentils and red kidney beans with 4 cups of water and 1 teaspoon salt. Close the pressure cooker lid and set the heat to high. After one whistle, bring the heat down to low and cook for another 7 minutes. Remove the pressure cooker from heat, and let the pressure release on its own for the lentils and kidney beans to cook completely.
Step 3
Heat 2 tablespoons butter in a pan on medium-high. Add chopped onions and fry until the onions turn golden brown. Then, add the crushed ginger and garlic, and continue to fry for another 1 to 2 minutes to let the aroma of ginger and garlic combine with the onions.
Note
- Save the rest of the butter for later use.
Step 4
Add the pureed tomato and 1/4 teaspoon salt and mix. Continue to cook the pureed tomato with the onions, ginger, and garlic (for about 3 minutes) until the tomato turns darker in color.
Step 5
Add the turmeric powder, Kashmiri red chili powder, coriander powder, cumin powder, and garam masala powder. Add a couple of tablespoons of water and mix the spices well with the onion-tomato-ginger-garlic mixture. Continue to cook for 5 to 6 minutes on medium-high until the oil separates from the mixture. Keep stirring the mixture frequently to prevent it from sticking to the bottom of the pan. You will see oil bubbles on top of the spice mixture when it's ready. In this process, you may need to add a few more tablespoons of water to prevent the spices from burning.
Step 6
Now, add the cooked whole black lentils and red kidney beans (along with the water in which they were cooked) and bring to a boil. Taste the mixture for salt, and add more if needed. Bring the heat down to medium-low, cover the pan, and cook for another 15 to 20 minutes, stirring occasionally. In this step, you can add more water if you want a thinner consistency.
Step 7
Add the 1/2 inch cube of butter and mix well. Your Dal Makhani is ready. Serve hot with any Indian bread or rice dish.
Financial support: This work was financially supported by the National Council for Science and Technology (Mexico), the Technologic Institute for Higher Studies of Ecatepec (México), the Research Office of the National Polytechnique Institute (Mexico), and the National University of Mexico, DGAPA projects IN207603 and IN209008.
Keywords: constitutive enzymes, inducible enzymes, initial pH, kinetic modeling, pectinases.
The aim of this work was to describe growth dynamics, substrate depletion and polygalacturonases production by Aspergillus flavipes FP-500 in batch cultures by means of unstructured models. The microorganism was cultivated on several mono-, di- and polysaccharides, and the culture development was then modeled with Monod and Luedeking-Piret equations. The kinetic parameters related to the models (µmax, γx/s, α and β) were obtained by minimizing the quadratic residuals function with a simplex algorithm. An accurate description of the experimental data was attained with the proposed models. Besides, modeling provided significant kinetic information on microbial degradation of complex substrates, such as the correlation between the specific growth rate µmax and the production yield α, suggesting that A. flavipes FP-500 polygalacturonases are actually constitutive, but also that there is a certain degree of inducibility in these enzymatic activities.
Pectin is a complex polysaccharide found in the middle lamella of the plant cell wall. It represents the first plant barrier during microbial attack, and its degradation involves the action of several enzymes (de Vries and Visser, 2001). Fungi of the genus Aspergillus are among the microbial species that can degrade pectin, producing a number of pectinases (Teixeira et al. 2000). The importance of pectin-degrading enzymes lies mainly in their actual and potential uses in a number of industries, such as food processing, the textile industry, the paper and pulp industry, pectic wastewater treatment and animal feed (Jayani et al. 2005; Niture, 2008). In addition, understanding how polygalacturonases production is regulated will contribute insights into the molecular dialogue between host and pathogen during microbial invasion of the plant cell wall (Esquerré-Tugayé et al. 2000; Lang and Dörnenburg, 2000).
Although the specific signal molecule that triggers pectinases synthesis remains unknown, the generally accepted idea is that fungal cells produce a low level of constitutive pectinolytic enzymes, which release a few molecules of mono- or oligosaccharides from a polymer in the plant structure. These small molecules are transported into the microorganism and trigger massive expression of the degrading pectinases (Mach and Zeilinger, 2003). This would mean that several pectinolytic activities are induced by a number of substrates (Prade et al. 1999). Pectinases production is regulated differently among fungal species, depending on the carbon source (Crotti et al. 1998; Wubben et al. 2000; Olsson et al. 2003) and the pH of the medium (de Vries and Visser, 2001; Peñalva and Arst, 2002).
The mathematical modeling of microbial growth and process performance has led to improved design and operation of mycelial fermentations and has improved the ability of scientists to translate laboratory observations into commercial practice. Unstructured modeling is a particular and useful application of mathematical growth analysis that can provide important hints on the constitutiveness and inducibility of metabolite production, such as enzymatic activities, on a quantitative, kinetic basis.
In this context, the present work aimed at describing the dynamics of fungal growth, substrate depletion and polygalacturonases production in batch cultures under different conditions of carbon source and initial pH, using the Monod and Luedeking-Piret unstructured models (Thilakavathi et al. 2007). After obtaining the kinetic parameters, their numerical values were correlated in order to point out the constitutive or inducible nature of the produced polygalacturonases.
Aspergillus flavipes FP-500, was isolated in Mexico from rotten tomatoes. The strain was maintained at 4ºC on PDA agar plates.
The strain was identified by conventional methods, considering its morphological characteristics when grown on different media and by microscopic examination. A comparison with type strains resulted in the identification of the strain as a member of the species flavipes.
A. flavipes FP-500 was grown on basal medium, containing (g L-1): K2HPO4, 2; KH2PO4, 2; and (NH4)2SO4, 5. The medium was sterilized by autoclaving at 121ºC for 20 min. The initial pH of the medium was adjusted with 2 M NaOH or H2SO4. To this basal salt solution, a suitable carbon source was added to attain a final concentration of 10 g L-1.
Spores were collected from 3-day-old agar slants with saline-Tween solution (NaCl, 0.9% and Tween 80, 0.01%). A final concentration of 1 x 10^6 spores/mL was obtained by adjusting with sterile water. Monosaccharide-based media (see below) were supplemented with 0.1% (w/v) of yeast extract.
Production of enzymes under different carbon sources and initial pH
In order to evaluate the effect of carbon source and initial pH on growth and enzyme production, A. flavipes FP-500 was grown on basal medium supplemented with different carbon sources: pectin and polygalacturonic acid, as complex polysaccharides; galacturonic acid, arabinose, rhamnose and xylose, as the main monosaccharide constituents of pectin; glycerol, as a simple substrate not related to pectin structure; and lactose and glucose, the latter considered a universal catabolic repressor. Also, three initial pH values of the culture media were established (3.5, 4.2 and 5.0) in order to evaluate microbial growth and polygalacturonases production under these different conditions. Cultures were carried out in 100 mL of medium and were incubated at 37ºC in a reciprocating shaker at 200 rpm. Samples were withdrawn every 24 hrs until 72 hrs of culture.
Microbial biomass. Cell growth was measured by dry weight and expressed as g L-1.
Polygalacturonase activity. This was measured by determination of the reducing sugars produced from a 1% (w/v) pectin solution after incubation at 45ºC for 20 min at pH 5.0. One unit of polygalacturonase activity was defined as the amount of enzyme that catalyzes the formation of 1 µmol of galacturonic acid under assay conditions (Trejo-Aguilar et al. 1996).
Substrate depletion. Monosaccharide substrate consumption was measured through quantification of reducing sugars using 3,5-dinitrosalicylic acid, with the corresponding sugar as the reference standard (Miller, 1959). Polymer consumption was estimated after acid hydrolysis of polysaccharide-containing samples with sulfuric acid, following color development with the phenol reagent (Dubois et al. 1956).
Culture behavior in time is described with the following equations:

dx/dt = µmax [S / (ks + S)] x (microbial growth)

dS/dt = -(1/Yx/s) dx/dt (substrate depletion)

dp/dt = α dx/dt + β x (polygalacturonases production)

Where

dx/dt biomass accumulation rate in the culture medium (gb L-1 h-1)

dS/dt substrate depletion rate in the culture medium (gs L-1 h-1)

dp/dt exopectinases accumulation rate in the culture medium (U L-1 h-1)

µmax maximal specific growth rate (h-1)

S substrate concentration (gs L-1)

ks Monod saturation constant (gs L-1)

x biomass concentration (gb L-1)

Yx/s biomass on substrate yield coefficient (gb gs-1)

α growth-associated coefficient for pectinases production (U gb-1)

β growth-independent coefficient for pectinases production (U gb-1 h-1)
Monod constants µmax and ks are not simultaneously identifiable in batch processes (Nihtila and Virkkunen, 1977; Chouakri et al. 1994), so the maximal specific growth rate is the only useful parameter obtained.
Kinetic parameters in the Luedeking-Piret model, α and β, indicate the relationship between growth and pectinases production in a fungal culture. The parameter α expresses the enzyme production associated with microorganism growth, so that production is considered growth-associated whenever α ≠ 0. Since pectinases production is needed for the assimilation of complex substrates in order for the microorganism to grow, parameter α is the more meaningful factor in our modeling approach. Besides, the estimation of β resulted in a nil value for every Aspergillus flavipes culture. The foregoing considerations lead to specifying the parameter vector P as:

P = [µmax γx/s α], and the initial conditions for the model equations (x0, S0 and p0) are the experimental concentrations at process time t = 0.
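For readers who want to reproduce the dynamics, the three coupled balances above can be integrated numerically. The sketch below uses a simple forward-Euler scheme; the parameter values and initial conditions are illustrative assumptions for demonstration, not the fitted values reported in Table 1.

```python
# Forward-Euler integration of the Monod / Luedeking-Piret model.
# All parameter values below are illustrative assumptions, not the
# fitted values reported in the paper.

def simulate(mu_max=0.07, ks=0.5, yxs=0.4, alpha=30.0, beta=0.0,
             x0=0.1, s0=10.0, p0=0.0, t_end=72.0, dt=0.01):
    x, s, p, t = x0, s0, p0, 0.0
    while t < t_end:
        dxdt = mu_max * s / (ks + s) * x   # dx/dt: Monod growth
        dsdt = -dxdt / yxs                 # dS/dt: substrate depletion
        dpdt = alpha * dxdt + beta * x     # dp/dt: Luedeking-Piret production
        x += dxdt * dt
        s = max(s + dsdt * dt, 0.0)        # concentration stays non-negative
        p += dpdt * dt
        t += dt
    return x, s, p

# biomass (g/L), substrate (g/L), exopectinase activity (U/L) at 72 h
x, s, p = simulate()
```

With these assumed values the substrate is exhausted before 72 h, growth stops, and the final biomass is fixed by the yield, x ≈ x0 + Yx/s·S0.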
The growth yield (Yx/S) and the identifiable parameters for the Monod (µmax) and Luedeking-Piret (α) models were obtained as reported in a previous work (Aranda-Barradas et al. 2000), with the following quadratic residuals function F as the minimizing criterion:

F(P) = Σ (i = 1 to n) [(xi^exp - xi^mod)² + (Si^exp - Si^mod)² + (pi^exp - pi^mod)²]

Where superscripts exp and mod indicate experimental data and model results, respectively. The differences between experimental and theoretical values were calculated for the n sample points (n = 12).
F (Pest) = min
Meaning that the estimated parameters are chosen in order to produce the minimum residual errors between the model theoretical values and the experimental data. The search of the minimum was carried out with a Nelder-Mead simplex method, given an arbitrary initial vector P0 to start up the algorithm.
The model can be coupled to estimated parameters Pest in order to create synthetic data sets, by adding white noise to numerical results on x, S and p from the model. Each synthetic data set produces a new vector of equivalent estimated parameters. Therefore, calculating a number of equivalent parameters allows the estimation of the mean and the standard deviation for every kinetic parameter (Table 1).
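The synthetic-data idea can be sketched as follows. To keep the example short, Gaussian white noise is added to assumed noise-free endpoint values and the yield Yx/s is re-estimated directly from each noisy set (rather than re-running the full simplex fit, as the paper does); all numbers here are illustrative assumptions.

```python
import random
import statistics

# Perturb model outputs with white noise, re-estimate a parameter from
# each synthetic data set, then summarize with mean and standard deviation.

x0, s0 = 0.1, 10.0          # assumed initial conditions
x_end, s_end = 4.1, 0.2     # assumed noise-free model outputs at 72 h

random.seed(7)
estimates = []
for _ in range(200):        # 200 synthetic data sets
    x_noisy = x_end + random.gauss(0.0, 0.05)            # additive white noise
    s_noisy = max(s_end + random.gauss(0.0, 0.05), 0.0)
    estimates.append((x_noisy - x0) / (s0 - s_noisy))    # endpoint Yx/s estimate

y_mean = statistics.mean(estimates)
y_sd = statistics.stdev(estimates)   # spread used for confidence intervals
```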
Variance estimations in the model
Variance associated to parameters. Confidence intervals of the best estimated parameters are given by

Pi = Pi,est ± t0.975 σi, with mN - q degrees of freedom
Where
σ standard deviation for each kinetic parameter (i = µmax, γx/s or α), obtained from synthetic data
m experimental determinations made in N samples
N samples obtained from each culture
q number of estimated parameters
t0.975 Student’s factor for a confidence level αt = 0.975
Estimated variance to x, S and p. Variance of experimental data obtained after estimation of biomass (x), substrate (S) and exopectinases (p) was estimated by

σu² = [1/(n - q)] Σ (i = 1 to n) (ui^exp - ui^mod)²
where
u x, S or p
Modeling of growth and pectinases production
Kinetic parameters (µmax, γx/s, α) estimated by the Nelder-Mead simplex algorithm (Table 1) proved accurate enough to build a reliable model describing the kinetics of Aspergillus flavipes FP-500 on several carbon sources, as can be seen from the model fitting in Figure 1. A good fit to the experimental data was reached with this mathematical treatment.
Kinetic parameters as a function of pH and substrate
Kinetic parameters for the mathematical model were determined for each experimental condition described before. Monod-type growth on substrates such as xylose, glycerol and polygalacturonic acid resulted in maximal specific growth rates around 0.03 h-1, while on galacturonic acid µmax ≈ 0.07 h-1 and on lactose µmax was about 0.25 h-1. With all the aforementioned substrates the maximal specific growth rate was approximately constant regardless of the initial pH in the culture. Thus, for these substrates, estimated µmax values seem to indicate dependence of growth on the carbon source, but not on the initial pH of the culture. However, estimated µmax for another group of substrates (glucose, rhamnose, pectin) was strongly affected by the initial pH. On the one hand, experiments with rhamnose as a carbon source showed increasing µmax values as the initial pH was raised from 3.5 to 5.0. On the other hand, growth on glucose and pectin presented a significant drop in the specific growth rate µmax when the initial pH in the culture was increased.
Concerning polygalacturonases production represented by the Luedeking-Piret model, estimated β values were equal to zero for all the substrates and initial pH values tested. In contrast, α parameter estimates showed a dependence on both the substrate and the initial pH. The lowest α value was obtained for glycerol (0.004 U/g, pH = 3.5), while high values were obtained for rhamnose (25.6 U/g, pH = 4.2), galacturonic acid (37.89 U/g, pH = 3.5), polygalacturonic acid (27.88 U/g, pH = 5.0) and pectin (30.78 U/g, pH = 3.5). Values obtained for pectin and galacturonic acid were exceptionally high compared to other substrates in media with an initial pH of 3.5. Similarly, for glycerol, the product yield α is relatively high (15.71 U/g for pH = 4.2 and 16.17 U/g for pH = 5.0) although this substrate is not a structural component of pectin.
Polygalacturonases constitutiveness and inducibility from kinetic parameters
After comparing kinetic parameters for different carbon sources and initial pH values, some interesting remarks can be made concerning the relationship between estimated µmax and α (Figure 2). Three major trends were observed:
i) For glucose, plotting µmax vs. α, it could be observed that the estimated µmax values strongly depended on the pH of the medium (Figure 2a). While the α value at pH 3.5 and 4.2 was similar, at pH 5.0 a higher value was obtained. However, this latter value was relatively low compared with those for other substrates.
ii) For substrates such as lactose, galacturonic acid or polygalacturonic acid, estimated µmax was the same at the three initial pH values tested. However, different α values were obtained, depending on the initial pH used for each medium. A graphical representation of the µmax - α parameters shows a linear, nearly vertical relationship (Figure 2b).
iii) With pectin as the carbon source, there is a positive proportional relationship between µmax and α (Figure 2c).
These experimental results could be interpreted in terms of constitutiveness or inducibility of pectinases as a function either of initial pH in the culture media or of the carbon source.
It can be clearly seen in Figure 1 that although the final biomass for growth on glucose and pectin is similar, glucose is not depleted within the process time, contrary to pectin, which is exhausted after 48 hrs. These findings suggest that A. flavipes FP-500 is better adapted to complex polysaccharides than to monosaccharides, since growth kinetics are faster on the former substrates.
It should be observed that µmax varied with the carbon source, with a clear trend to be higher for lactose and pectin than for monosaccharide substrates. It is interesting to notice that this strain was able to grow on lactose as a sole carbon source, which is unusual. Other filamentous fungi showed a low growth rate on this substrate (Pakula et al. 2005), even though this carbon source has been reported as the inducer of cellulases in Hypocrea jecorina (anamorph of Trichoderma reesei) (Seiboth et al. 2004). Growth on lactose is related to the presence of β-galactosidase activity, a key enzyme for lactose utilization by this fungus. Considering that lactose does not occur in the natural environment of fungi, it has been proposed that the function of this β-galactosidase is the hydrolysis of terminal non-reducing β-D-galactose residues in plant cell wall components including hemicelluloses or pectins (Seiboth et al. 2007). Furthermore, in Aspergillus there is no clear picture of lactose utilization; while A. nidulans is able to grow on lactose and galactose, A. niger is not (Seiboth et al. 2007). As can be seen from our results, A. flavipes grows on lactose and is able to produce α- and β-galactosidase. So, it is probable that A. flavipes FP-500 uses a strategy similar to that of A. nidulans that allows growth on lactose. Nevertheless, additional experiments would be necessary to demonstrate this hypothesis.
The highest µmax values were reached with data from a pectin culture, so pectin allows good microbial biomass production in this strain. Besides, this behavior could explain the elevated polygalacturonases production of the fungus when it grew on pectin as a carbon source. A. flavipes FP-500 is a good pectinases producer, whose pectinases production is better than that of other Aspergillus strains, such as A. niger F-1119, which produced barely 10.8 U/ml and 2.1 g/L of biomass on citrus pectin (Shubakov and Elkina, 2002).
The α parameter involved in the Luedeking-Piret model expresses the enzyme production associated with the growth of the microorganism. Moreover, analysis of µmax - α plots can give some insight about constitutiveness, if α remains constant even though µmax changes; or inducibility, if α increases regardless of the behavior of µmax.
For growth on glucose (Figure 2a), no matter what initial pH was adjusted in the culture, low values for α were obtained. The numerical values were not higher than 1.65 U/g, which is considered relatively low and might reflect a basal enzyme activity. Taking into account that glucose is not a constituent of pectin structure, the behavior observed in this µmax - α relationship suggests that there is a constitutive part of the polygalacturonases produced by A. flavipes FP-500, which is expressed under any pH condition.
With other substrates unrelated to pectin structure (glycerol and lactose), α ranged from 0.004 to ~15 U/g. However, α rose to 25 U/g or more for cultures developed on carbon sources related to pectin. Among them, the highest value was attained on galacturonic acid, and it was comparable in magnitude to those observed for polygalacturonic acid and pectin. This seems to indicate a certain degree of substrate inducibility in the enzymatic production.
When a graphical analysis of µmax versus α was performed for pectin-related substrates, different behaviors were observed. On galacturonic acid, the fungus had the same maximal specific growth rate (~0.07 h-1) but different α values for every initial pH tested (Figure 2b). This suggests that on galacturonic acid there is an induction of polygalacturonase activity in this strain depending on initial pH, as has been reported for other Aspergillus species (de Vries et al. 2002). On polygalacturonic acid (Figure 2b) it was clear that the initial pH of the medium also conditioned polygalacturonases activity. It seems that for almost all the substrates, an initial pH of 3.5 increases polygalacturonases activity. Thus, initial pH conditions regulate pectinases activity.
Some similar conclusions have been reached in induction and repression research on cellulases by A. niger (Hanif et al. 2004). However, although our results on kinetic modeling certainly give some insight on polygalacturonases production, constitutiveness and inducibility, there is still room for other confirmatory experiments.
Aspergillus flavipes FP-500 is a pectinases-producing strain. Our results have shown that polygalacturonases production is strongly influenced by the available carbon source, and that enzyme activity is regulated by the initial pH of the culture medium. The main contribution lies in the kinetic characterization of polygalacturonases production by the strain, which allowed us to highlight, to some extent, the constitutive and inducible character of those enzymes:
i) Growth on monomeric substrates unrelated to pectin (glucose, glycerol) resulted in low polygalacturonases production, considered a basal constitutive enzyme activity, which can be modified by a change in the initial pH of the culture medium.
ii) Culture media containing monomeric substrates related to pectin (xylose, rhamnose, arabinose) produced in general a low polygalacturonases yield (α), even though some important increases in enzymatic activity were attained by establishing the appropriate initial pH (3.5 for galacturonic acid and 4.2 for rhamnose) in the culture medium.
iii) Complex substrates such as pectin or polygalacturonic acid induced important polygalacturonase production that was significantly increased at low pH (3.5) for pectin, or at a slightly higher pH (5.0) for polygalacturonic acid.
Even though kinetic parameters are not definitive evidence of enzyme induction, an indirect inference from µmax - α relationship would indicate the conditions for an increase of enzymatic activity, mainly due to substrate induction.
The authors acknowledge Dr. Edgar Salgado for the critical reading of the manuscript.
ARANDA-BARRADAS, Juan S.; DELIA, Marie Line and RIBA, Jean-Pierre. Kinetic study and modeling of the xylitol production using Candida parapsilosis in oxygen-limited culture conditions. Bioprocess and Biosystems Engineering, March 2000, vol. 22, no. 3, p. 219-225. [CrossRef]
CHOUAKRI, N.; FONTEIX, C.; MARC, I. and CORRIOU, J.P. Parameter estimation of a Monod-type model. Part I: Theoretical identifiability and sensitivity analysis. Biotechnology Techniques, October 1994, vol. 8, no. 10, p. 683-688. [CrossRef]
CROTTI, Luciana B.; TERENZI, Héctor F.; JORGE Joao A. and POLIZELI, María de Lourdes. Regulation of pectic enzymes from the exo-1 mutant strain of Neurospora crassa: effects of glucose, galactose and galacturonic acid. Journal of Basic Microbiology, July 1998, vol. 38, no. 3, p. 181-188. [CrossRef]
DE VRIES, Ronald P. and VISSER, Jap. Aspergillus enzymes involved in degradation of plant cell wall polysaccharides. Microbiology and Molecular Biology Reviews, December 2001, vol. 65, no. 4, p. 497-522. [CrossRef]
DE VRIES, Ronald P.; JANSEN, Jenny; AGUILAR, Guillermo; PARENICOVA, Lucile; JOOSTEN, Vivi; WÜLFERT, Florian; BENEN, Jacques A.E. and VISSER, Jaap. Expression profiling of pectinolytic genes from Aspergillus niger. FEBS Letters, September 2002, vol. 530, no.1-3, p. 41-47.
DUBOIS, M.; GILLES, K.A.; HAMILTON, J.K.; REBERS, P.A. and SMITH, F. Colorimetric method for determination of sugars and related substances. Analytical Chemistry, March 1956, vol. 28, no. 3, p. 350-356. [CrossRef]
ESQUERRÉ-TUGAYÉ, Marie-Thérese; BOUDART, Georges and DUMAS, Bernard. Cell wall degrading enzymes, inhibitory proteins, and oligo-saccharides participate in the molecular dialogue between plants and pathogens. Plant Physiology and Biochemistry, January 2000, vol. 38, no. 1-2, p. 157-63. [CrossRef]
HANIF, A.; YASMEEN, A. and RAJOKA, M.I. Induction, production, repression, and de-repression of exoglucanase synthesis in Aspergillus niger. Bioresource Technology, September 2004, vol. 94, no. 3, p. 311-319. [CrossRef]
JAYANI, Ranveer Singh; SAXENA, Shivalika and GUPTA, Reena. Microbial pectinolytic enzymes: A review. Process Biochemistry, September 2005, vol. 40, no. 9, p. 2931-2944. [CrossRef]
LANG, C. and DÖRNENBURG, H. Perspectives in the biological function and the technological application of polygalacturonases. Applied Microbiology and Biotechnology, April 2000, vol. 53, no. 4, p. 366-375. [CrossRef]
MACH, R.L. and ZEILINGER, S. Regulation of gene expression in industrial fungi: Trichoderma. Applied Microbiology and Biotechnology, January 2003, vol. 60, no. 5, p. 515-522. [CrossRef]
MILLER, G.L. Use of dinitrosalisylic acid reagent for determination of reducing sugars. Analytical Chemistry, March 1959, vol. 31, no. 3, p. 426-428. [CrossRef]
NIHTILA, M. and VIRKKUNEN, J. Practical identifiability of growth and substrate consumption models. Biotechnology and Bioengineering, December 1977, vol. 19, p. 1831-1850. [CrossRef]
NITURE, Suryakant. K. Comparative biochemical and structural characterizations of fungal polygalacturonases. Biologia, February 2008, vol. 63, no. 1, p. 1-19. [CrossRef]
OLSSON, Lisbeth; CHRISTENSEN, Tove M.I.E.; HANSEN, Kim P. and PALMQVIST, Eva A. Influence of the carbon source on production of cellulases, hemicellulases and pectinases by Trichoderma reesei Rut C-30. Enzyme & Microbial Technology, October 2003, vol. 33, no. 5, p. 612-619. [CrossRef]
PAKULA, Tiina M.; SALONEN, Katri; UUSITALO, Jaana and PENTTILÄ, Merja. The effect of specific growth rate on protein synthesis and secretion in the filamentous fungus Trichoderma reesei. Microbiology, January 2005, vol. 151, no. 1, p. 135-143. [CrossRef]
Landsbankinn places strong emphasis on protecting the privacy of its customers and parties who communicate with the Bank to safeguard their rights. This Policy contains information on the data the Bank gathers about you, how it is used, how its security is ensured and your rights according to data protection legislation.
This Policy applies to the processing of personal data in the Bank’s entire operation and to all individuals who do business with it, including former, current and prospective customers, parties connected to customers, such as family members, and guarantors or holders of power of attorney. The Policy also applies to persons other than customers, such as individuals who are in communication with the Bank, visit its facilities or website, apply for grants, or participate in events hosted by the Bank.
The aim of the Policy is to provide a comprehensive overview of the personal data the Bank processes and to inform customers, employees and others of the purposes and means by which the Bank collects and handles personal data to ensure compliance with laws and regulations.
The Policy does not apply to the processing of data on legal entities, whether associated entities or subsidiaries of Landsbankinn. The Bank may, however, need to process information about individuals connected to legal entities who are customers, such as beneficial owners, directors of the board, executives, authorised signatories and, as the case may be, employees of the legal entity.
Please note that details about the processing of your personal data may be provided in Landsbankinn’s General Terms and Conditions, special terms and conditions or information provided for certain products or services.
The collection and processing of personal data allows the Bank to provide you, or companies which you work for or are connected with, with requested financial services. The personal data you submit includes:
- Basic information: Name, Icelandic Id. No., address, telephone number, email, name of employer and other basic information, as the case may be on nationality, marital status, spouse, children and connected parties such as legal guardians, holders of power of attorney or guarantors.
- Communication and contract information: All your interaction with the Bank that takes place via email, online chat, in writing, in conversation and on social media. The Bank also processes all information derived from or submitted in relation to any contracts you enter into with the Bank, e.g. for individual products or services.
- Information about identification: Any copies of legally required or electronic identification, including copies of your passport or driver's licence, your preferred means of identification and communication channels. This also includes the time and date of your visits to the Bank’s branches if you chose to register your Id. No. when you visit.
- Financial information: All information about your current and previous business and transaction history, including account balance and type, turnover, origin of funds, transaction statement and information about payment cards, payment history and orders along with information about income, expenses, financial commitments, and assets and liabilities.
- Information gathered through electronic monitoring: Audio and video recordings from surveillance cameras in the Bank’s facilities, ATMs and the recording of telephone conversations.
- Technical information and inferred data about behaviour and use: About the equipment and devices you use to connect to the Bank’s website, online banking and app such as user name, settings, IP number, type, number and settings of smart devices, operating system and browser type, language settings, how you connect to us, the origin and type of actions undertaken.
- Public information: From public registries such as Registers Iceland, the Icelandic Property Registry, the vehicle registry, the Registrar of Enterprises, the Legal Gazette and other public registries.
- Sensitive personal data: on racial or ethnic origin, political affiliation, trade union membership, health information, biometric data. Note that when using biometric data such as your fingerprint or face to log in to Landsbankinn’s app, identification takes place through your phone only and the Bank does not receive copies of your biometric data.
- Other information: The list above is not exhaustive and the Bank may process other personal data depending on the nature of the business relationship or your transactions with the Bank.
In exceptional cases, the Bank may need to gather information classified as special categories of data. In other instances, financial information, e.g. transaction statements for payment cards or use of current accounts, may include sensitive personal data that may indicate certain behaviour. We do not gather sensitive data about you nor do we process such data without clear authorisation and unless absolutely necessary. Should you choose not to supply necessary information it may prevent the Bank from providing the requested service.
Processing personal data of children
The personal data of children may be processed if it is necessary to carry out requested transactions or provide a service, e.g. create a payment account and issue a debit card. The Data Protection Act states that the consent of a guardian is required for children under 13 years of age in relation to the offer of information society services directly to a child. All marketing material, including gifts, notifications and benefits intended for children is sent to guardians and, as the case may be, also their children. You can opt out of such marketing material and gifts at any time through online banking for individuals or the Bank’s Data Rights Portal. The Bank may contact you to ask if your decision has changed.
Landsbankinn processes personal data for clear and stated purposes in accordance with the Data Protection Act, the Bank's rules and this Policy. Processing of personal data may have various purposes, such as:
- To contact you, identify you and ensure the security and reliability of business transactions, through such means as due diligence on customers. The Bank contacts customers through various channels, such as email, notifications in online banking, Landsbankinn’s app, the Bank’s website and social media.
- Carry out requested transactions, provide financial services and advice and respond to enquiries, such as establish and maintain a business relationship, perform payment and credit assessments and determine self-service authorisations, assess credit risk and prevent borrowing from exceeding repayment capacity, analyse financial standing with regard for the Bank’s product and service offering in order to provide advisory service, including on asset management, pension savings or other service, receive applications for and remit pension savings.
- For security and archiving purposes to safeguard the interests of customers, employees and others who have dealings with the Bank, ensure the traceability of transactions through such means as electronic monitoring and investigate issues or prevent money laundering, terrorist financing, fraud and other criminal conduct.
- Develop the Bank’s product and service offering, promote innovation and boost service levels, offer personalised and tailored services, respond to suggestions and complaints and process answers to marketing and/or service questionnaires.
- Develop solutions and reports for the purpose of credit and risk management, such as to measure and monitor credit risk, operational risk, market risk, underwriting risk and for internal treasury purposes.
- Operate and maintain the Bank's websites and online services and improve user experience online, in Landsbankinn’s app and online banking for individuals and corporates and, as the case may be, the Bank’s other web-based solutions.
- Respond to legal requests and ensure cyber and data security by, among other things, analysing, investigating and preventing fraud and other misconduct.
- For marketing and promotional purposes and to provide personalised and tailored services, send messages about benefits and material that may interest you or you have requested. Note that photographs and video recordings are made at conferences, promotions and other events hosted by the Bank and that these may appear publicly on the Bank’s websites, including social media.
- Perform statistical analysis on certain products, services or communication channels, front office or other individual functions in the Bank’s operation. Such analysis is based on non-personally identifiable data, if possible.
Lawfulness of processing of personal data
For the most part, the gathering and other processing of your personal data by the Bank is based on a contract between you and the Bank for specific services and the need to provide the requested financial service, or on legal obligations the Bank is subject to as a regulated entity on the financial market. In certain cases, the Bank will request your informed consent to process personal data. In such cases, you can withdraw your consent at any time, whereupon the processing based on that consent ceases.
Finally, your data may be processed if it is necessary for the purposes of legitimate interests pursued by the Bank, you yourself or a third party. Such processing does not take place if it is clear that your interests outweigh the interest of the Bank or a third party. The following processing operations are based on legitimate interests: processing of basic information from Registers Iceland, determination of benefit programmes for customers and retention of the business history of former customers, classification and monitoring of loans, development and testing of new products and services, for marketing purposes and target group analysis, and for cyber and information security purposes.
Automated decision-making
In certain instances, the Bank creates a personal profile using automated processing of your personal data to assess or anticipate aspects of your finances, such as development of financial standing or probability of default. Calculation of a credit score is an example of profiling. Profiles may also be prepared for marketing and cyber and information security purposes, e.g. to determine which benefit programme suits you best, and by employing pattern analysis in online banking to maximise the safety of your financial information.
Profiling may also be a factor in automated decision-making that relates to you. In automated decision-making your personal data is processed automatically by software to reach a decision without the aid or involvement of human agency. Automated decision-making is used, for instance, to determine the amount of self-service lending in Landsbankinn’s app, based on such factors as your credit score.
Automated decision-making only takes place with your consent, if it is a prerequisite for the conclusion or execution of an agreement between you and the Bank, or if authorised by law. You can submit objections or contest automated decisions by email to [email protected].
The aforementioned personal data in the Bank’s possession is usually gathered directly from you when you enter into a business relationship with the Bank, apply for a certain product or service, or contact the Bank through such channels as email, online chat or by other means.
Information can also be sourced from third parties, including the Bank’s partners such as card issuers, payment service providers and public entities. Unconnected parties may also provide information about you, e.g. local credit information providers, customs and tax authorities and public registries. External parties are not authorised to submit information about you to the Bank unless authorised to do so, for example with your consent or legal authorisation.
The Bank may also need to disclose your personal data to domestic or foreign partners and/or service providers in order to provide you with certain services. The Bank selects its partners and service providers with care and does not disclose personal data unless they comply with the Bank's security requirements. Foreign commercial banks receive information to process and settle international payments. Other entities to whom disclosure may be necessary for the Bank to provide its services include partners for payment transfers and card issuance, claim collection agents, operation and hosting providers, IT system providers, credit bureaus such as Creditinfo, and custodians of financial instruments.
Disclosure may also take place based on your consent, e.g. if you request that the Bank provide fintechs or other entities with your payment information. You can further authorise the Bank to divulge other information, such as your name, email or phone number, to partners for marketing purposes.
In certain cases, the Bank is obligated to divulge personal data to law enforcement authorities, other authorities or regulators, both domestic and foreign, based on legal obligations or international agreements. The Bank is focused on safeguarding the human rights of its customers, including their privacy, and does not divulge more personal data than is necessary at each time, and only on the basis of clear legal authorisation.
The Data Protection Act affords you certain rights, including to information about whether the Bank processes your personal data and how such processing takes place in the Bank’s operation. You can manage your rights through the customer Data Rights Portal on the Bank’s website and use it to request:
- Access to your personal data
You are entitled to confirmation from the Bank as to whether your personal data is processed and, if so, to access this data. You are also entitled to certain minimum information about the arrangement of processing, provided for among other things in this Policy.
- Transfer of personal data
You can request that certain personal data you have given to the Bank be transferred to another specified party, if technically feasible. This only applies to personal data which the Bank has gathered on the basis of your consent or for the performance of a contract and was carried out by automated means.
- Rectification or erasure of personal data
You can at any time request rectification of inaccurate or unreliable personal data concerning you. Under certain circumstances, you are also entitled to have personal data concerning yourself erased.
- Limiting or objecting to processing of personal data
You can at any time object to the processing of personal data, including profiling, for direct marketing purposes and refuse promotional material on benefits, products and services in online banking, Landsbankinn’s app or the Bank’s Data Rights Portal. You can also object to the processing of personal data based on your particular situation. Finally, you can in certain cases request that temporary limitations apply to the processing of your personal data.
The Bank will respond to requests according to the above free of cost unless such requests are unfounded, excessive or if multiple copies of personal data are requested. Individuals must verify their identity when they wish to exercise their rights. For further information on your rights, see the Bank's Data Rights Portal.
You are also entitled to refer disputes over the Bank’s handling of your personal data to Persónuvernd, the Icelandic Data Protection Authority. We hope that you will contact us first with any privacy issues to allow us to help. If you do choose to contact the Data Protection Authority, the email address is [email protected].
No service or software is completely secure. Contact the Bank at the earliest opportunity if you are concerned that your personal data may be in danger or if you think that someone may have acquired your password or other information by emailing [email protected]. You will be notified of any data breaches with the Bank or its processors that affect you, in accordance with law.
Your personal data is retained in a secure environment that safeguards it against unauthorised access, misuse or disclosure. The Bank’s management of information security is certified under information security standard ÍST ISO/IEC 27001:2013. The Bank also has in place an internal information security policy, rules on information security, security processes, and has implemented organisational and technical security measures in accordance with laws and regulations on cyber and information security.
The Bank’s products and services are also designed with regard for security and privacy. The Bank regularly assesses the risk of processing personal data in information systems and software to apply appropriate security measures and ensure, in as much as possible, the privacy of the individuals affected by the processing of personal data in the Bank’s systems and software.
The Bank also promotes active security awareness amongst its employees and publishes educational material on the handling and security of personal data, in accordance with the Data Protection Act. All the Bank’s employees are bound by confidentiality in accordance with the Bank’s rules and laws that apply to financial undertakings.
The Bank’s websites store cookies on your computer or smart device. Cookies are small text files that store information to analyse use of the Bank’s websites and improve user experience. Cookies are also used to tailor websites to your needs, e.g. by boosting the function of a website, saving your settings, processing statistical information, analysing traffic through websites and for marketing purposes.
The Bank’s websites utilise different types of cookies. So-called session cookies are generally deleted when a user leaves the website. Persistent cookies on the other hand are saved to the user’s computer or device and store your actions or selections on the Bank’s websites.
Necessary cookies, such as statistics cookies and functionality cookies, activate functions on the Bank’s websites. They are a prerequisite of use of the Bank’s websites, allowing them to function as intended, and consent is not required as such cookies are based on the Bank’s legitimate interests. Necessary cookies are generally first party session cookies, used by Landsbankinn only.
First party cookies are not a requirement for use of the Bank’s websites. They nevertheless play an important role in the use and functionality of websites as they facilitate use by, for example, auto-completing forms and saving settings. First party cookies only send information about you to Landsbankinn.
Third party cookies are in place because of services Landsbankinn purchases from third parties, e.g. analytic and advertising cookies. Their use allows the Bank to tailor its websites to user needs, more effectively analyse use of websites and prepare marketing material and advertisements tailored to certain target groups by considering, amongst other things:
- Number of visitors, number of visits per visitor, date and time of visit.
- Which pages on the websites are viewed and how frequently.
- Type of files downloaded from the websites.
- Which devices, operating systems and browsers are used during visits.
- Which search words from search engines lead to the websites.
Third party cookies send information about you to another website owned by a third party, such as Google or Facebook. These third parties may also save cookies to your browser and through them gather information about your visits to the Bank’s website and the content you are interested in.
Most browsers have the option of changing settings to prevent cookies. Deleting cookies is also relatively simple. Here is some more information about deleting cookies. A more detailed description of cookies, including the third-party cookies the Bank uses, is available on the Bank’s website. Information about the use of third party cookies is also available on the websites of these third parties.
Generally, the Bank retains your personal data for the duration of the business relationship, as long as required by law or to satisfy the Bank’s legitimate interests. The strict rules and regulations that apply to the Bank’s operation may require different retention times depending on the type or nature of your data.
Audio and visual recordings from phones and security cameras are retained for 90 days and deleted automatically once that period elapses in accordance with the Data Protection Authority’s rules on electronic surveillance. Phone recordings that pertain to securities trading are retained for 5 years in accordance with the Act on Securities Transactions.
The Bank is an entity subject to an obligation of transfer, in accordance with the Act on Public Archives. The obligation to transfer means that the Bank is obliged to retain all records in the Bank’s archive and transfer them to a public archive when they have reached an age of 30 years. The Bank strives not to retain information in personally identifiable form for longer than is necessary and safeguards such information in every respect.
Specific legislation also provides for the obligation to retain certain information, such as accounting records, personal identification and other information required under the Act on Measures against Money Laundering and Terrorist Financing. Audio and visual content gathered from electronic surveillance with security cameras, and audio recordings of telephone conversations, are not retained for longer than 90 days, unless otherwise provided by law.
Landsbankinn hf., Austurstræti 11, 155 Reykjavík, is responsible for ensuring that all processing of your personal data complies with the Data Protection Act and rules and is the controller determining the processing of your personal data.
Landsbankinn's Data Protection Officer is responsible for ensuring that the Bank's activities comply with applicable laws and rules on privacy and data protection. Please direct any queries, complaints or comments relating to the processing and handling of personal data to the Bank’s Data Protection Officer by email to [email protected].
The Bank reserves the right to update this Policy on a regular basis. The Bank will inform you about major changes to the Policy before they become effective upon publication to the Bank’s website, www.landsbankinn.is.
Approved initially on 15 June 2018
Most recently amended on 27 July 2020
The survey is part of a project carried out in several European countries to demonstrate that the majority of European workers are in an unsatisfactory work environment due to lighting.

At the beginning of this year, Repro-light conducted the survey among workers in Germany, Spain, Italy and Austria, with the aim of assessing the lighting conditions of their workplaces.
For the curious, here is the link to the full report: 😜
https://www.repro-light.eu/downloads
More than half of the workers surveyed, men, women and those over the age of 50 alike, expressed dissatisfaction with the lighting of their workplaces.
The survey also asked how much importance these workers attach to the design and style of the lighting: 50% of those under 30 considered the physical aesthetics of the luminaires important to them.
The Repro-light project finally concludes that more than 90% of respondents said that lighting can affect their emotions, 87% that it affects their performance and 92% that it affects their alertness.
As in EN 12464-1, this means ensuring that a specified level of illuminance is achieved at the height of the work station.
(source: Sergio Campos – Sylvania)
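As a rough illustration of what such a requirement means in practice, the sketch below checks a grid of lux measurements against an office-style target. The 500 lx maintained illuminance and 0.6 uniformity figures are the commonly cited EN 12464-1 values for office reading and writing tasks; treat them as assumptions and consult the standard for your actual task area.

```python
def check_illuminance(lux_readings, required_avg=500.0, required_uniformity=0.6):
    """Check a grid of workplane lux measurements against a maintained
    illuminance target and a uniformity target (U0 = E_min / E_avg)."""
    if not lux_readings:
        raise ValueError("need at least one measurement")
    avg = sum(lux_readings) / len(lux_readings)
    uniformity = min(lux_readings) / avg
    return (avg >= required_avg and uniformity >= required_uniformity,
            avg, uniformity)

# Example: a 3x3 measurement grid over one workstation
readings = [520, 540, 510, 495, 530, 515, 505, 525, 500]
ok, avg, u0 = check_illuminance(readings)
```

A grid that averages below the target, or with one dark corner dragging the uniformity down, fails the check even if most points are bright enough.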
These data show how lighting affects the workforce across European countries, and changes are being called for in both industrial spaces and offices. As the survey results show, workers are willing and able to adapt to whatever changes replacing the lighting may require; among their requests are better luminaire design, automation and adjustability to meet their needs. Meeting those requests promotes productivity and general well-being, yielding greater performance and benefit for workers and company alike.
The Repro-light project, as part of the European Commission's Horizon 2020 work programme, will now move on to the next phases of research and design to develop a "Luminaire of the Future" that strives to meet all the needs of users. And LUXES, as a designer and distributor of luminaires, is committed to that change and calls for it.
FIELD OF THE INVENTION
BACKGROUND OF THE INVENTION
SUMMARY OF THE INVENTION
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
Effect of the Invention
This invention relates generally to decoding error-correcting codes (ECC), and more particularly to normalized min-sum decoders for LDPC codes and repeat accumulate codes.
Efficient and reliable data storage and communication requires practical encoding and decoding methods for error-correcting codes. It is known that low density parity check (LDPC) codes with belief propagation (BP) decoding provide performance close to the Shannon limit. In particular, irregular LDPC codes are among the best for many applications. Various irregular LDPC codes have been accepted or are being considered for various communication and storage standards, such as DVB/DAB, wireline ADSL, IEEE 802.11n, and IEEE 802.16. However, it is known that the performance of irregular LDPC decoders is less than optimal.
Although BP decoding for these LDPC codes provides excellent performance, it is too complex for hardware implementation. BP decoding can be simplified by a check node processor with a simple minimum operation, resulting in a min-sum decoding method. While the min-sum decoding method is less complex to implement, it has decreased performance compared to BP decoding. The min-sum decoding method can be improved by linear post-normalization at a check node processor, which is called the normalized min-sum decoding method. Nevertheless, there is still a big gap between the performance of the normalized min-sum decoding method and BP decoding, especially for decoding irregular LDPC codes.
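As a rough sketch of the difference between the two simplified methods, the following check-node update (an illustrative implementation, not the one claimed by this patent) computes the min-sum message for each edge and optionally applies the linear normalization factor:

```python
def check_node_update(incoming, alpha=1.0):
    """Min-sum check-node update for one check node.

    For each edge i, the outgoing LLR is the product of the signs of the
    other incoming LLRs times the minimum of their magnitudes, scaled by
    alpha: alpha = 1.0 gives plain min-sum; alpha < 1 (often around 0.8)
    gives the normalized min-sum variant.
    """
    out = []
    for i in range(len(incoming)):
        others = incoming[:i] + incoming[i + 1:]
        sign = -1.0 if sum(1 for v in others if v < 0) % 2 else 1.0
        magnitude = min(abs(v) for v in others)
        out.append(alpha * sign * magnitude)
    return out
```

For incoming LLRs [2.0, -1.5, 3.0] the plain min-sum messages are [-1.5, 2.0, -1.5]; the normalization factor shrinks each magnitude to partially compensate for min-sum's overestimation relative to BP.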
LDPC Codes
LDPC codes were first described by Gallager in the 1960s. LDPC codes perform remarkably close to the Shannon limit. A binary (N, K) LDPC code, with code length N and dimension K, is defined by a parity check matrix H of (N-K) rows and N columns. Most entries of the matrix H are zeros and only a small number of the entries are ones; hence the matrix H is sparse. Each row of the matrix H represents a check sum, and each column represents a variable, e.g., a bit or symbol. The LDPC codes described by Gallager are regular, i.e., the parity check matrix H has constant-weight rows and columns.
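To make the check-sum view concrete, here is a minimal sketch with a small hand-made parity-check matrix (not an LDPC code from this text): a word is a codeword exactly when every row check sums to zero modulo 2.

```python
# Toy parity-check matrix: each row is a check sum, each column a code bit.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def syndrome(H, c):
    """Mod-2 syndrome of word c; all-zero iff c satisfies every check."""
    return [sum(h * b for h, b in zip(row, c)) % 2 for row in H]

def is_codeword(H, c):
    return all(s == 0 for s in syndrome(H, c))
```

A real LDPC matrix would be far larger and far sparser; the mechanics of the check are the same.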
In 1993, similar iterative methods were shown to perform very well for a new class of codes known as "turbo-codes." The success of turbo-codes was partially responsible for greatly renewed interest in LDPC codes and iterative decoding methods. There has been a considerable amount of recent work to improve the performance of iterative decoding methods for both turbo-codes and LDPC codes, and other related codes such as "turbo product codes" and "repeat-accumulate codes." For example, a special issue of the IEEE Communications Magazine was devoted to this work in August 2003. For an overview, see C. Berrou, "The Ten-Year-Old Turbo Codes are entering into Service," IEEE Communications Magazine, vol. 41, pp. 110-117, August 2003 and T. Richardson and R. Urbanke, "The Renaissance of Gallager's Low-Density Parity Check Codes," IEEE Communications Magazine, vol. 41, pp. 126-131, August 2003.
<math overflow="scroll"><mtable><mtr><mtd><mrow><mrow><mrow><mi>v</mi><mo></mo><mrow><mo>(</mo><mi>x</mi><mo>)</mo></mrow></mrow><mo>=</mo><mrow><munderover><mo>∑</mo><mrow><mi>j</mi><mo>=</mo><mn>1</mn></mrow><msub><mi>d</mi><mrow><mi>v</mi><mo></mo><mstyle><mtext> </mtext></mstyle><mo></mo><mi>max</mi></mrow></msub></munderover><mo></mo><mrow><msub><mi>v</mi><mi>j</mi></msub><mo></mo><msup><mi>x</mi><mrow><mi>j</mi><mo>-</mo><mn>1</mn></mrow></msup></mrow></mrow></mrow><mo>,</mo></mrow></mtd><mtd><mrow><mo>(</mo><mn>1</mn><mo>)</mo></mrow></mtd></mtr><mtr><mtd><mi>and</mi></mtd><mtd><mstyle><mtext> </mtext></mstyle></mtd></mtr><mtr><mtd><mrow><mrow><mrow><mi>c</mi><mo></mo><mrow><mo>(</mo><mi>x</mi><mo>)</mo></mrow></mrow><mo>=</mo><mrow><munderover><mo>∑</mo><mrow><mi>j</mi><mo>=</mo><mn>1</mn></mrow><msub><mi>d</mi><mrow><mi>c</mi><mo></mo><mstyle><mtext> </mtext></mstyle><mo></mo><mi>max</mi></mrow></msub></munderover><mo></mo><mrow><msub><mi>c</mi><mi>j</mi></msub><mo></mo><msup><mi>x</mi><mrow><mi>j</mi><mo>-</mo><mn>1</mn></mrow></msup></mrow></mrow></mrow><mo>,</mo></mrow></mtd><mtd><mrow><mo>(</mo><mn>2</mn><mo>)</mo></mrow></mtd></mtr></mtable></math>
v max
c max
j
j
Regular LDPC codes can be extended to irregular LDPC codes, in which the weight of rows and columns vary. An irregular LDPC code is specified by degree distribution polynomials v(x) and c(x), which define the variable and check node degree distributions, respectively. More specifically, let
where the variables dand dare a maximum variable node degree and a maximum check node degree, respectively, and v(c) represents the fraction of edges emanating from variable (check) nodes of degree j. It has been shown, both theoretically and empirically, that with properly selected degree distributions, irregular LDPC codes outperform regular LDPC codes.
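A minimal sketch of how such degree distribution polynomials can be represented and evaluated; the example coefficients below are illustrative only, not an optimized distribution:

```python
def degree_poly(coeffs, x):
    """Evaluate v(x) = sum_{j>=1} coeffs[j-1] * x**(j-1), where
    coeffs[j-1] is the fraction of edges attached to degree-j nodes."""
    return sum(c * x ** j for j, c in enumerate(coeffs))

# Illustrative variable-degree distribution: half the edges on degree-2
# nodes, half on degree-3 nodes, i.e. v(x) = 0.5x + 0.5x^2.
v_coeffs = [0.0, 0.5, 0.5]
```

Since the coefficients are edge fractions, a valid distribution satisfies v(1) = 1, which is a convenient sanity check on any candidate distribution.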
The regular and irregular LDPC codes can be decoded by hard-decision, soft-decision and hybrid-decision methods. The best soft decision decoding is BP, which gives the best error performance of LDPC codes.
BP Decoding
As shown in FIG. 1 for conventional BP decoding, the check node processor 110 and a bit node processor 120 operate serially while passing reliability messages to each other based on the belief propagation principle, where U_ch 130 is a log-likelihood ratio from the channel. The main difficulty for a practical implementation of a BP decoder arises from the check node processor, in which a "tanh" function requires very high computational complexity.
We denote the set of bits that participate in check m by N(m), and the set of checks in which bit n participates by M(n). We also denote N(m)\n as the set N(m) with bit n excluded, and M(n)\m as the set M(n) with check m excluded.
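These index sets follow directly from the parity-check matrix; a minimal sketch, assuming H is given as a plain 0/1 list of lists:

```python
def neighbor_sets(H):
    """N[m]: bits participating in check m; M[n]: checks in which bit n
    participates, both read straight off the parity-check matrix H."""
    N = {m: [n for n, bit in enumerate(row) if bit]
         for m, row in enumerate(H)}
    M = {n: [m for m, row in enumerate(H) if row[n]]
         for n in range(len(H[0]))}
    return N, M
```

The excluded sets are then just filters: N(m)\n is `[b for b in N[m] if b != n]`, and likewise for M(n)\m.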
We define the following notation associated with the $i$-th iteration:

$U_{ch,n}$: the log-likelihood ratio (LLR) of bit $n$ which is generated by the channel output,

$U_{mn}^{(i)}$: the LLR of bit $n$ which is sent from check node $m$ to bit node $n$,

$V_{mn}^{(i)}$: the LLR of bit $n$ which is sent from bit node $n$ to check node $m$, and

$V_n^{(i)}$: the a posteriori LLR of bit $n$ computed at each iteration.
The conventional BP decoding method includes the following steps:
Initialization
Set $i = 1$ and the maximum number of iterations to $I_{\max}$. For each $m$ and $n$, set $V_{mn}^{(0)} = U_{ch,n}$.
Step 1
Horizontal step, for $1 \leq n \leq N$ and each $m \in M(n)$, process:

$$U_{mn}^{(i)} = \log \frac{1 + \prod_{n' \in N(m)\setminus n} \tanh\!\left(V_{mn'}^{(i-1)}/2\right)}{1 - \prod_{n' \in N(m)\setminus n} \tanh\!\left(V_{mn'}^{(i-1)}/2\right)}. \qquad (3)$$

Vertical step, for $1 \leq n \leq N$ and each $m \in M(n)$, process:

$$V_{mn}^{(i)} = U_{ch,n} + \sum_{m' \in M(n)\setminus m} U_{m'n}^{(i)}, \qquad (4)$$

$$V_n^{(i)} = U_{ch,n} + \sum_{m \in M(n)} U_{mn}^{(i)}. \qquad (5)$$
Step 2
Hard decision and termination criterion test. Generate $\hat{w}^{(i)} = [\hat{w}_n^{(i)}]$, such that $\hat{w}_n^{(i)} = 1$ for $V_n^{(i)} > 0$, and $\hat{w}_n^{(i)} = 0$ otherwise. If $H\hat{w}^{(i)} = 0$, or the maximum number of iterations is reached, then output $\hat{w}^{(i)}$ as the decoded codeword and terminate the decoding iteration; otherwise, set $i = i + 1$ and go to Step 1.
Step 3
Output $\hat{w}^{(i)}$ as the decoded codeword.
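To make the message passing concrete, the following Python sketch (ours, not part of the patent; it assumes NumPy is available) implements Steps 1 to 3 of BP decoding for a toy parity-check matrix. Note it uses the common convention LLR = log P(0)/P(1), so a bit decodes to 1 when its LLR is negative, the mirror image of the hard-decision rule quoted above.

```python
import numpy as np

def bp_decode(H, llr_ch, max_iter=50):
    """Sum-product (BP) decoding per Eqs. (3)-(5).

    H: binary (M x N) parity-check matrix.
    llr_ch: channel LLRs U_ch,n in the log(P(0)/P(1)) convention,
    so a positive LLR favours bit 0 (mirror of the patent's sign rule).
    """
    M, N = H.shape
    V = H * llr_ch[None, :]            # initialization: V_mn^(0) = U_ch,n
    w_hat = np.zeros(N, dtype=int)
    for _ in range(max_iter):
        # Horizontal step (Eq. 3): check-to-bit messages via the tanh rule.
        U = np.zeros((M, N))
        for m in range(M):
            idx = np.flatnonzero(H[m])
            t = np.tanh(V[m, idx] / 2.0)
            for j, n in enumerate(idx):
                prod = np.clip(np.prod(np.delete(t, j)), -0.999999, 0.999999)
                U[m, n] = np.log((1 + prod) / (1 - prod))
        # Vertical step (Eqs. 4-5): bit-to-check messages and posteriors.
        col_sum = U.sum(axis=0)
        V = H * (llr_ch[None, :] + col_sum[None, :] - U)
        V_post = llr_ch + col_sum
        # Step 2: hard decision and syndrome-based termination test.
        w_hat = (V_post < 0).astype(int)
        if not np.any(H @ w_hat % 2):
            break
    return w_hat                       # Step 3: output the decoded codeword

# Toy (6, 3) code; bit 1 of the all-zero codeword arrives weakly flipped.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
llr = np.array([2.1, -0.4, 1.5, 1.9, 0.7, 1.2])
print(bp_decode(H, llr))  # → [0 0 0 0 0 0]
```

On this toy code the single weakly flipped bit is corrected back to the all-zero codeword in one iteration.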
Min-sum Decoding
As shown in FIG. 2, conventional min-sum decoding simplifies conventional BP decoding in the check node processor (210) by approximating the product of tanh functions as a min-sum operation. The updating rule in the check node of min-sum decoding is modified as:

$$U_{mn}^{(i)} = \prod_{n' \in N(m)\setminus n} \operatorname{sgn}\!\left(V_{mn'}^{(i-1)}\right) \times \min_{n' \in N(m)\setminus n} \left|V_{mn'}^{(i-1)}\right|. \qquad (6)$$
Min-sum decoding is practical in hardware because only comparison and addition operations are needed. Nevertheless, conventional min-sum decoding suffers a performance loss compared with BP.
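As a sketch (ours, not the patent's reference implementation), the check node update of Eq. (6) reduces to a sign product and a running minimum over the other incoming messages:

```python
def minsum_check_update(V_in):
    """Check-to-bit messages per Eq. (6): for each outgoing edge, take the
    product of the signs and the minimum magnitude of the OTHER incoming
    bit-to-check messages V_mn'."""
    out = []
    for n in range(len(V_in)):
        others = [v for k, v in enumerate(V_in) if k != n]
        sign = 1.0
        for v in others:
            if v < 0:
                sign = -sign
        out.append(sign * min(abs(v) for v in others))
    return out

# A degree-3 check node: only comparisons and sign flips are needed.
print(minsum_check_update([2.0, -0.5, 1.5]))  # → [-0.5, 1.5, -0.5]
```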
Conventional Normalized Min-Sum Decoding
As shown in FIG. 3, conventional normalized min-sum decoding (300) improves min-sum decoding by normalizing (310) the messages generated by the check node processor (210), where $A$ denotes the normalization factor. The updating rule in the check node of normalized min-sum decoding is as follows:

$$U_{mn}^{(i)} = A \prod_{n' \in N(m)\setminus n} \operatorname{sgn}\!\left(V_{mn'}^{(i-1)}\right) \times \min_{n' \in N(m)\setminus n} \left|V_{mn'}^{(i-1)}\right|. \qquad (7)$$
The normalized min-sum decoding method performs close to conventional BP when decoding regular LDPC codes. Nevertheless, for decoding irregular LDPC codes, which are preferred for many applications, the gap between the performance of conventional normalized min-sum decoding and that of BP is large.
Therefore, it is desirable to improve the normalized min-sum decoding method for all LDPC codes.
In a 2D normalization min-sum decoding method, messages generated by a check node and messages generated by a bit node in min-sum decoding are both normalized. This decoding has significantly improved performance when compared with conventional min-sum and normalization min-sum decoding methods.
At the same time, 2D normalization min-sum decoding performs similarly to the BP decoding method in the waterfall region and better in the error floor region.
Furthermore, the 2D normalization min-sum decoding method requires much less computational complexity than conventional BP decoding. The 2D normalization min-sum decoding can also be extended to 2D offset min-sum decoding.
FIG. 1 is a block diagram of a conventional BP decoder of LDPC codes;

FIG. 2 is a block diagram of a conventional min-sum decoder of LDPC codes;

FIG. 3 is a block diagram of a conventional normalized min-sum decoder of LDPC codes;

FIG. 4 is a block diagram of a 2D normalized min-sum decoder of error-correcting codes according to one embodiment of the invention; and

FIG. 5 is a graph comparing word-error rates (WER) of decoding methods.
In one embodiment of our invention, we provide a 2D-normalized min-sum decoder for error-correcting codes, such as regular and irregular LDPC codes and regular and irregular repeat-accumulate codes.
In conventional normalization min-sum decoding, the belief messages generated by a check processor are post-processed by a normalization operation. Then, these normalized belief messages are operated on by a bit node processor, which is the same as in the conventional BP decoding method.
For an irregular LDPC code, the degrees of bits are not constant. Therefore, probability distributions of belief messages generated from bits with different weights are not the same. It is not reasonable for the check node processor to treat these messages with different degrees equally.
Therefore, in one embodiment of the invention, the messages generated by the bit node processor are normalized as well. In addition, varying normalization factors are used, which are mainly dependent on different weights of bit nodes. Because there are two normalization operations, we call our method 2D normalization min-sum decoding.
Another consideration is to use varying normalization factors, meaning the normalization factors of bit and check nodes can vary across decoding iterations. For example, the normalization factors for the check node processor can vary during a first predetermined number of decoding iterations (e.g., 10), while remaining constant during the remaining iterations. In addition, or alternatively, the normalization factors for the bit node processor can vary during a first predetermined number of decoding iterations (e.g., 10), while remaining constant during the remaining iterations.
In summary, we provide the following procedures to improve the performance of conventional min-sum and normalized min-sum decoding:
We normalize the messages generated by the check node processor, and we normalize the messages generated by the bit node processor. The normalization factors for bit processor are dependent on the weights of different bit nodes, and the normalization factors of check and bit node processors are dependent on the number of decoding iterations.
FIG. 4 shows a 2D normalized min-sum decoder (430) of error-correcting codes according to one embodiment of the invention. Let $H$ be the parity check matrix defining an LDPC code. We denote the set of bits that participate in check $m$ by $N(m)$, and the set of checks in which bit $n$ participates by $M(n)$. We also denote $N(m)\setminus n$ as the set $N(m)$ with bit $n$ excluded, and $M(n)\setminus m$ as the set $M(n)$ with check $m$ excluded. Let $U_{ch,n}$ be the log-likelihood ratio (LLR) of bit $n$, which is derived from the channel output. Let $U_{mn}^{(i)}$ be the LLR of bit $n$, which is sent from check node $m$ to bit node $n$ at the $i$-th decoding iteration. Let $V_{mn}^{(i)}$ be the LLR of bit $n$, which is sent from bit node $n$ to check node $m$.
The normalized check node processor (440) in 2D-normalized min-sum decoding is performed as follows:

$$U_{mn}^{(i)} = A_{dc(m)}^{(i)} \prod_{n' \in N(m)\setminus n} \operatorname{sgn}\!\left(V_{mn'}^{(i-1)}\right) \min_{n' \in N(m)\setminus n} \left|V_{mn'}^{(i-1)}\right|, \qquad (8)$$

where $dc(m)$ denotes the degree of check node $m$ and $A_{dc(m)}^{(i)}$ denotes the normalization factor of check node $m$ at iteration $i$.
The normalized bit node processor (450) in 2D-normalized min-sum decoding is performed as:

$$V_{mn}^{(i)} = U_{ch,n} + B_{dv(n)}^{(i)} \sum_{m' \in M(n)\setminus m} U_{m'n}^{(i)}, \qquad (9)$$

where $dv(n)$ denotes the degree of bit node $n$ and $B_{dv(n)}^{(i)}$ denotes the normalization factor of bit node $n$ at iteration $i$.
Step 1 of 2D-normalized min-sum decoding includes the following substeps:

Horizontal step, for $1 \leq n \leq N$ and each $m \in M(n)$, process:

$$U_{mn}^{(i)} = A_{dc(m)}^{(i)} \prod_{n' \in N(m)\setminus n} \operatorname{sgn}\!\left(V_{mn'}^{(i-1)}\right) \min_{n' \in N(m)\setminus n} \left|V_{mn'}^{(i-1)}\right|. \qquad (10)$$

Vertical step, for $1 \leq n \leq N$ and each $m \in M(n)$, process:

$$V_{mn}^{(i)} = U_{ch,n} + B_{dv(n)}^{(i)} \sum_{m' \in M(n)\setminus m} U_{m'n}^{(i)}, \qquad (11)$$

$$V_n^{(i)} = U_{ch,n} + B_{dv(n)}^{(i)} \sum_{m \in M(n)} U_{mn}^{(i)}. \qquad (12)$$
Because there are two normalization operations, one in a horizontal step and the other in a vertical step, we call our method 2D normalization min-sum decoding.
Steps 2 and 3 are the same as in the conventional normalized min-sum decoding, i.e., Steps 2 and 3 of conventional BP decoding.
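A minimal sketch (ours) of one 2D-normalized update, with hypothetical per-degree factors A and B passed in by the caller; the patent leaves the selection of these factors to the designer:

```python
def check_update_2d(V_in, A):
    """Eq. (10): min-sum check node scaled by A = A^(i)_{dc(m)}, the
    normalization factor chosen for this check's degree dc(m) = len(V_in)."""
    out = []
    for n in range(len(V_in)):
        others = [v for k, v in enumerate(V_in) if k != n]
        sign = 1.0
        for v in others:
            if v < 0:
                sign = -sign
        out.append(A * sign * min(abs(v) for v in others))
    return out

def bit_update_2d(u_ch, U_in, B):
    """Eqs. (11)-(12): bit node with the extrinsic sum scaled by
    B = B^(i)_{dv(n)} for this bit's degree dv(n) = len(U_in).
    Returns the bit-to-check messages and the a posteriori LLR."""
    total = sum(U_in)
    V_out = [u_ch + B * (total - u) for u in U_in]
    V_post = u_ch + B * total
    return V_out, V_post

print(check_update_2d([2.0, -0.5, 1.5], A=0.75))  # → [-0.375, 1.125, -0.375]
print(bit_update_2d(0.25, [1.0, -0.5], B=0.5))    # → ([0.0, 0.75], 0.5)
```

Computing the full sum once and subtracting each message reproduces the leave-one-out sums of Eq. (11) without a per-edge loop, which is the usual hardware-friendly arrangement.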
The 2D-normalized min-sum decoding for irregular LDPC codes can be extended to 2D offset min-sum decoding. In offset min-sum decoding, if a belief message has an absolute value equal to or greater than an offset parameter x, its magnitude is reduced by x; otherwise, the belief message is set to zero.
The main reason to use the offset operation is to reduce correlation between decoding iterations, and to suppress error propagation. As for the conventional offset min-sum decoding, only messages sent from check nodes are reprocessed with offset operations. Nevertheless, in 2D offset min-sum decoding, both messages generated by check and bit nodes are reprocessed with offset operation.
Step 1 of 2D offset min-sum decoding is described below.

Horizontal step, for $1 \leq n \leq N$ and each $m \in M(n)$, process:

$$U_{mn}^{(i)} = \prod_{n' \in N(m)\setminus n} \operatorname{sgn}\!\left(V_{mn'}^{(i-1)}\right) \cdot \max\!\left(\min_{n' \in N(m)\setminus n} \left|V_{mn'}^{(i-1)}\right| - A_{dc(m)}^{(i)},\ 0\right). \qquad (13)$$

Vertical step, for $1 \leq n \leq N$ and each $m \in M(n)$, process:

$$V_{mn}^{(i)} = U_{ch,n} + \operatorname{sgn}\!\left(\sum_{m' \in M(n)\setminus m} U_{m'n}^{(i)}\right) \cdot \max\!\left(\left|\sum_{m' \in M(n)\setminus m} U_{m'n}^{(i)}\right| - B_{dv(n)}^{(i)},\ 0\right), \qquad (14)$$

$$V_n^{(i)} = U_{ch,n} + \operatorname{sgn}\!\left(\sum_{m \in M(n)} U_{mn}^{(i)}\right) \cdot \max\!\left(\left|\sum_{m \in M(n)} U_{mn}^{(i)}\right| - B_{dv(n)}^{(i)},\ 0\right). \qquad (15)$$
Steps 2 and 3 are the same as described above.
The 2D offset min-sum decoding offers a similar performance gain as the 2D-normalized min-sum decoding described above.
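The offset variants of the check and bit node updates (Eqs. (13) to (15)) can be sketched the same way; the function names and test values below are illustrative only:

```python
def check_update_2d_offset(V_in, A_off):
    """Eq. (13): the minimum magnitude is reduced by the offset A_off
    (clamped at zero) instead of being scaled."""
    out = []
    for n in range(len(V_in)):
        others = [v for k, v in enumerate(V_in) if k != n]
        sign = 1.0
        for v in others:
            if v < 0:
                sign = -sign
        out.append(sign * max(min(abs(v) for v in others) - A_off, 0.0))
    return out

def bit_update_2d_offset(u_ch, U_in, B_off):
    """Eqs. (14)-(15): the extrinsic sum is shrunk by B_off, keeping its sign."""
    def shrink(s):
        return (-1.0 if s < 0 else 1.0) * max(abs(s) - B_off, 0.0)
    total = sum(U_in)
    V_out = [u_ch + shrink(total - u) for u in U_in]
    V_post = u_ch + shrink(total)
    return V_out, V_post

print(check_update_2d_offset([2.0, -0.5, 1.5], A_off=0.75))  # small mins clamp to 0
print(bit_update_2d_offset(0.25, [1.0, -0.5], B_off=0.25))
```

Clamping at zero erases weak messages entirely, which is how the offset operation suppresses the error propagation described above.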
It should be understood that in other embodiments of the invention the method is applied to regular and irregular repeat-accumulate codes.
Analysis of the 2D-normalized min-sum decoder indicates better performance, lower complexity, and better decoding-speed trade-offs than prior art decoders.
FIG. 5 compares the word error rates of conventional BP decoding (501), 2D-normalized min-sum decoding (502), conventional normalized min-sum decoding (503), and min-sum decoding (504), for decoding the (16200, 7200) irregular LDPC code with $I_{\max} = 200$. The 2D-normalized min-sum decoding method provides performance comparable to BP decoding, and, interestingly, has a lower error floor than that of BP decoding in the high-SNR region.
Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
Technical field
The invention relates to the field of 3D mesh deformation technology, in particular to a garment deformation method based on the human body's Laplacian deformation.
Background technology
Existing virtual dressing applications mainly include virtual dressing websites, virtual dressing mirrors, and mobile virtual dressing systems. Among them, 3D virtual dressing mirrors are the most common. Typical real-life examples include the Magic Dressing Mirror developed by the Russian company AR DOOR and Active Lab, a fully interactive virtual dressing mirror developed by the Japanese company Digital Fashion.
In terms of strengths, virtual dressing websites are simple to operate and do not depend heavily on equipment; however, simple parameter adjustment cannot reproduce an effect matched to each user's figure, and such websites lack simulation of the model's facial features and the garment's texture, leading to a poor sense of reality and layering. A virtual dressing mirror works as follows: the user's image, body dimensions, and motion information are obtained through a depth camera (such as Kinect), and the body dimension parameters are used to reconstruct a 3D human body model consistent with the user's body shape; meanwhile, the displayed garment models are put onto the human body model, and the user can control the human body model to move and thus check the effect of virtual dressing. The strength of this method is that dressing is completed through simple movements and gestures, allowing users to check their own dressing effects with good real-time performance and interactivity. Its defect lies in that the model rendering effects differ significantly from actual results, and it is often the case that the garment models cannot fit well with human bodies. This raises higher requirements for the deformation of the garment mesh: only a mesh that fits different human bodies and deforms naturally along with the body's motion gives a convincing result. The garment deformation method proposed in this invention achieves this goal by driving garment mesh deformation through human body deformation.
In fields like computer graphics, 3D model deformation is a very important topic. Deforming a 3D model means preserving local details as much as possible while changing the global shape of the model. Local details are intrinsic attributes of a 3D model, so such attributes shall remain unchanged during deformation. The garment deformation methods in current virtual dressing applications are basically driven by skinned-mesh animation. In these methods, animators must rig the garment in advance, which incurs substantial labor and time costs. Moreover, the skinning effect of the garment is achieved directly through the rigging, and inappropriate skinning weights easily produce locally overstretched garment mesh deformation in actual virtual dressing, thus affecting the dressing effect. The Laplacian garment deformation method relied on by this invention avoids this problem well.
Differential coordinates in the Laplacian deformation algorithm are local ones that represent intrinsic attributes of local details of 3D models. According to the Laplacian deformation method, some vertices in the 3D models are chosen to form fixed regions, which are called "deformation handles", while other vertices are utilized to generate deformation regions. By moving the deformation handles, other unchosen vertices will change with the Laplacian system, thus achieving the smooth deformation effect.
In this invention, discretized vertices of the human body are utilized as the deformation handles, and the users drive some joints of the human body model to move and produce the human body deformation, thus driving the garment mesh worn by the human body to deform with the help of the Laplacian system.
Summary of the invention
The invention aims to provide a garment deformation method based on the human body's Laplacian deformation. The simple, efficient and highly real-time method provided by this invention is a critical technology of real-time virtual dressing interaction, which not only solves the manual preprocessing required by the deformation of garment mesh but also overcomes local overstretching of current deformation algorithms; driven by the human body, the garment deforms smoothly and still maintains local features of the garment after the deformation.
A method of garment deformation based on Laplacian deformation of the human body, comprising the following steps:

(1) inputting polygonal mesh models of the human body and the garment;

(2) discretizing the non-homogeneous mesh models of the human body and the garment inputted in Step (1);

(3) clustering all the discretized mesh vertices, to reduce the number of vertices and to form a set of homogeneous discrete vertices;

(4) constructing Laplacian matrices of the human body and the garment;

(5) preprocessing and solving inverse matrices;

(6) editing by using the human body mesh as a control vertex, to drive a real-time smooth deformation of the garment mesh;

(7) mapping the deformed and simplified mesh back to a mesh space of the original resolution to get the deformed human body and garment mesh models.
In the said Step (1), the mesh of 3D human body and garment models inputted are generally non-homogeneously distributed in real life; some parts of the mesh are dense and other parts are sparse. If non-homogeneous mesh is directly applied to the mesh deformation, the deformation effect will be greatly affected. Therefore, the human body and garment models shall be optimized during the preprocessing step to make them homogeneous.
In the said Step (2), the non-homogeneous human body mesh $M_B$ and garment mesh $M_C$ inputted in Step (1) are discretized to retrieve only the vertex information, giving the sets of original vertices $V_B$ and $V_C$. During the discretization, record the distances between all vertices and their topological connections in the original mesh data for use in the mapping of Step (7).
In the said Step (3), the set of discretized human body vertices $V_B$ and the set of discretized garment vertices $V_C$ are voxelized, and the space is decomposed into $n \times n$ voxels, each with a cubic radius of $d$. Taking the set of vertices $V_B$ for the human body mesh as an example, for the $i$-th voxel $V_B^i$ in it, supposing that there are $m$ human body mesh vertices $M_B^i, M_B^{i+1}, \ldots, M_B^{m-1}$ in the space it covers, combine the $m$ mesh vertices in the voxel into a single vertex $V_B^i$.
Further, handle all human body and garment vertices in the same way to get a simplified, discretized, and homogeneous set of vertices $V'_B$ for the human body mesh and a simplified, discretized, and homogeneous set of vertices $V'_C$ for the garment mesh.
Further, based on the original topological connections from Step (2), add edge connections to all vertices in the simplified set of vertices to form new topological connections, which are used for constructing the Laplacian matrix in Step (4).
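A minimal sketch (ours) of the voxel clustering in Step (3), assuming a uniform grid whose cell edge is twice the stated cubic radius d, and merging each cell's vertices into their centroid; the patent does not specify the merge rule, and the centroid is one natural choice:

```python
from collections import defaultdict

def voxel_cluster(vertices, d):
    """Step (3) sketch: bucket vertices into cubic voxels (edge length
    assumed to be 2*d for a 'cubic radius' of d) and merge each voxel's
    vertices into a single representative vertex (their centroid)."""
    cells = defaultdict(list)
    for x, y, z in vertices:
        key = (int(x // (2 * d)), int(y // (2 * d)), int(z // (2 * d)))
        cells[key].append((x, y, z))
    merged = []
    for pts in cells.values():
        n = len(pts)
        merged.append(tuple(sum(p[i] for p in pts) / n for i in range(3)))
    return merged

# Two nearby vertices fall into the same voxel and merge; the far one stays.
pts = [(0.1, 0.1, 0.1), (0.3, 0.2, 0.1), (2.5, 2.5, 2.5)]
print(voxel_cluster(pts, d=0.5))
```

In a full implementation, each merged vertex would also record which original mesh vertices it absorbed, so the mapping back in Step (7) can restore the original resolution.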
In the said Step (4), the Laplacian operator matrices for the human body and garment models are established. Since the human body model and the garment model are two independent models and all vertices in these two models have their own topological connections, these two models are separated from each other in terms of topology. In the meantime, as the deformation of the garment mesh shall be driven by the human body mesh, the discretized sets of vertices of these two models shall be treated as a whole while constructing the Laplacian matrix. The topological information and geometric information of the 3D model shall be considered simultaneously, so a geometric Laplacian matrix $L$ shall be constructed during this step.
Further, the set of vertices from the discretized and simplified human body and garment models is defined as V, which contains n vertices; among them, the Euclidean coordinates of any vertex v_i can be expressed as v_i = [v_ix, v_iy, v_iz]^T ∈ R^3. For the set of vertices, V = [v_1^T, v_2^T, …, v_n^T]^T.
Further, the positions of all vertices in the set of vertices V can be expressed, with a dimension of n, as the vector V. Correspondingly, the Laplacian matrix L is an n × n matrix. Therefore, multiply the Laplacian matrix and the position vector of the vertices of the discretized human body and garment models, namely, F = L × V.
Further, the Laplacian matrix is a sparse matrix, whose way of assigning non-zero element values resembles the adjacency matrix between vertices. Considering the topology information of the set of vertices V, if there is an edge between any two vertices v_i, v_j, the weight between these two vertices is not zero, namely, w_ij ≠ 0, corresponding to the element a_ij = w_ij (i ≠ j) in the Laplacian matrix.
Further, for the element a_ii on the diagonal of the Laplacian matrix, its value refers to the number of vertices that connect to vertex i through an edge.
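As a sketch of this matrix assembly — assuming uniform weights w_ij = 1 and the conventional L = D − A sign choice, which the text leaves open:

```python
def graph_laplacian(n, edges):
    """Dense uniform graph Laplacian L = D - A for n vertices.
    Diagonal entries hold the vertex degree (number of incident edges),
    as described in the text; off-diagonal entries are -1 where an edge
    exists (uniform weights w_ij = 1 are an assumption here)."""
    L = [[0] * n for _ in range(n)]
    for i, j in edges:
        L[i][j] -= 1
        L[j][i] -= 1
        L[i][i] += 1
        L[j][j] += 1
    return L
```

Each row of this matrix sums to zero, which is what makes the product with the vertex positions a differential quantity.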
In the said Step (5), the inverse matrix is preprocessed and solved. The core of Laplacian deformation is to convert the coordinates of the vertex from the Euclidean space to the differential coordinate space. To keep local details of the garment model unchanged, the deformed local differential coordinates shall also be maintained unchanged. The entire deformation process is shown below:

First, calculate the differential coordinates of each vertex falling under the set of vertices V:

Δ = L(V)

Wherein, Δ refers to the differential or Laplacian coordinates of the vertex, corresponding to the three components of the three coordinate axes in the local space of differential coordinates.
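A minimal sketch of Δ = L(V), applied per coordinate axis; the toy three-vertex path graph in the test is illustrative only:

```python
def differential_coords(L, V):
    """Delta = L * V applied per coordinate axis: each vertex's Euclidean
    position (a 3-tuple) is mapped to its differential (Laplacian)
    coordinates under the given Laplacian matrix L."""
    n = len(V)
    return [tuple(sum(L[i][k] * V[k][axis] for k in range(n))
                  for axis in range(3))
            for i in range(n)]
```

For a vertex lying exactly at the centroid-weighted position implied by its neighbours (e.g. the middle of a straight line), the differential coordinate vanishes — which is why preserving Δ preserves local detail.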
Second, move some vertices in the human body model, and see these vertices as the deformation handle to get the new Euclidean coordinates of the vertices on the deformation handle:

v'_i = u_i, i ∈ C

Wherein, C refers to the set of all vertices on the handle; u_i refers to the new position of the i-th vertex on the handle; v'_i represents the new position of the i-th vertex.
Third, based on the differential coordinates and the new positions of the vertices on the handle, the least squares method is utilized to calculate the positions of the other vertices in the set of vertices V:

V' = argmin(‖L(V') − Δ‖² + Σ_{i∈C} ‖v'_i − u_i‖²)

Wherein, V' refers to the new position vector of all vertices.
Fourth, simplify the optimal equation in the third step to transform the optimization problem into the solution of linear equations:

AV' = b

Specifically,

A = [L; F] (L stacked above F), where F_ij = 1 if j = s_i ∈ C, and F_ij = 0 otherwise;

b_k = 0 for k ≤ n, and b_k = x_{s_(k−n)} for n < k ≤ n + m.

The optimization problem to be solved is min ‖Ax − b‖.
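A sketch of stacking the constraint block F under L and building the right-hand side b. Unlike the b described here (zeros in the top block), this sketch keeps the differential coordinates Δ in the top block so that Ax = b matches the least-squares target ‖L(V') − Δ‖ — an illustrative simplification, shown for one coordinate axis:

```python
def build_constrained_system(L, delta, handles):
    """Stack the Laplacian block with one constraint row per handle vertex
    (the F block), and build b = [delta; handle targets].
    `handles` maps vertex index s_i -> target coordinate (one axis).
    Keeping delta (rather than zeros) in the top block is an assumption
    made for illustration."""
    n = len(L)
    A = [row[:] for row in L]
    b = list(delta)
    for s_i, target in handles.items():
        row = [0] * n
        row[s_i] = 1
        A.append(row)
        b.append(target)
    return A, b
```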
Further, the above optimization problem can be expressed as min ‖Ax − b‖. Since A is not a square matrix, the system cannot be directly solved, and the above system can be expressed as:

A^T A x = A^T b

x = (A^T A)^(−1) A^T b

Further, A^T A is a positive definite matrix; it can be decomposed into the product of two triangular matrices:

A^T A = R^T R
Further, the equation system can be transformed into:

R^T R x = A^T b

R^T x̃ = A^T b

R x = x̃

x = R^(−1) x̃

As a result, the solution to the inverse matrix R^(−1) shall be obtained first, and the intermediate variable x̃ is utilized to solve the positions of all vertices in the final set of vertices.
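The factor-once-then-two-triangular-solves scheme can be sketched as follows (a pure-Python Cholesky factorisation, illustrative only):

```python
import math

def cholesky(M):
    """Factor a symmetric positive definite matrix M into R^T R,
    with R upper triangular (the decomposition applied to A^T A)."""
    n = len(M)
    R = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = M[i][j] - sum(R[k][i] * R[k][j] for k in range(i))
            R[i][j] = math.sqrt(s) if i == j else s / R[i][i]
    return R

def solve_spd(M, rhs):
    """Solve M x = rhs: forward substitution with R^T, then back
    substitution with R, mirroring the two triangular systems above."""
    n = len(M)
    R = cholesky(M)
    y = [0.0] * n  # y solves R^T y = rhs
    for i in range(n):
        y[i] = (rhs[i] - sum(R[k][i] * y[k] for k in range(i))) / R[i][i]
    x = [0.0] * n  # x solves R x = y
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(R[i][k] * x[k] for k in range(i + 1, n))) / R[i][i]
    return x
```

In practice the factor R is computed once during preprocessing and only the two cheap triangular solves run per deformation update.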
In the said Step (6), changes are made to the control vertices of the human body mesh through the Laplacian matrix to drive real-time smooth deformation of the garment mesh. Pursuant to the processes described in Step (5), during the virtual dressing, the motion information inputted by the customers is seen as the new positions of the control handle vertices of the human body model; by solving the above least squares problem, the new positions of the deformed human body and garment models can be obtained.
In the said Step (7), the deformed and simplified human body and garment mesh are mapped back to the mesh space of the original resolution based on the recorded distances and topological connections to get the ultimately deformed human body and garment mesh.
In the said Step (3), the simplified set of vertices V'_B for the human body mesh and the simplified set of vertices V'_C for the garment mesh are obtained through voxelization and conglomeration. With the vertices of the human body mesh as an example, in the original non-simplified set of vertices V_B for the human body mesh, for any vertex v_i, m simplified vertices of the human body mesh can be found within the given distance range: v_s^1, v_s^2, …, v_s^m, and the distances between vertex v_i and the m simplified human body mesh vertices can be recorded as d_s^1, d_s^2, …, d_s^m.
Further, the Euclidean coordinates of vertex v_i can be expressed as:

v_i = v_s^1 w(d_s^1) + v_s^2 w(d_s^2) + v_s^3 w(d_s^3) + … + v_s^m w(d_s^m)

Wherein, v_s^j, j = 1, 2, …, m refers to the Euclidean coordinates of the m simplified vertices; w(d_s^j) represents the weighting function with the distance d_s^j as an independent variable.
Further, based on the weighting function and relations between adjacent vertices recorded in Step (3), calculate the new positions of human body mesh vertices of the original resolution in accordance with the new positions of the deformed and simplified human body mesh vertices, recover the original topological connections, and add edge connections to the recovered vertices.
Further, the same method is adopted for the garment mesh vertices, and the deformed mapped human body and garment mesh of the original resolution are finally obtained.
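A sketch of the distance-weighted mapping back to the original resolution. The inverse-distance weighting w(d) = 1/(d + ε) is an assumption — the text only requires w to be a function of the recorded distance:

```python
def map_back(simplified_new, neighbors, eps=1e-8):
    """Recover an original-resolution vertex from the new positions of its
    m nearby simplified vertices, using inverse-distance weights
    w(d) = 1/(d + eps), normalised to sum to 1. `neighbors` holds, per
    original vertex, a list of (simplified index, recorded distance)."""
    out = []
    for pairs in neighbors:
        ws = [1.0 / (d + eps) for _, d in pairs]
        total = sum(ws)
        out.append(tuple(
            sum(w * simplified_new[s][axis] for (s, _), w in zip(pairs, ws)) / total
            for axis in range(3)))
    return out
```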
Unlike current garment mesh deformation methods, the invention provides a garment deformation method based on the human body's Laplacian deformation, which can simulate how the garment deforms along with human body movements during the virtual dressing process, as embodied in:
(1) The human body is seen as the control vertex for Laplacian deformation to drive the deformation of the garment mesh; as long as the Laplacian matrix and the inverse matrix are solved in advance, real-time deformation of the garment mesh is achieved;
(2) During the deformation process of the garment mesh, local features can be preserved well, avoiding overstretching;
(3) The human body deformation can be smoothly transmitted to the garment mesh through the proposed algorithm.
Description of figures
Fig. 1 provides the renderings of two female models with different body shapes who wear T-shirts and trousers for this invention;
Fig. 2 provides the renderings of two male models with different body shapes who wear T-shirts and shorts for this invention.
Detailed description of the invention embodiments
(1) inputting polygonal mesh models of the human body and the garment;
(2) discretizing non-homogeneous mesh models of the human body and the garment inputted in Step (1);
(3) clustering all the discretized mesh vertices, to reduce the number of vertices and to form a set of homogeneous discrete vertices;
(4) constructing Laplacian matrices of the human body and the garment;
(5) preprocessing and solving inverse matrices;
(6) editing by using the human body mesh as a control vertex, to drive a real-time smooth deformation of the garment mesh;
(7) mapping a deformed and simplified mesh back to a mesh space of an original resolution to get deformed human body and garment mesh models.
Next, the technical solution in this invention will be further detailed in conjunction with figures and embodiments.
In the said Step (1), the mesh of 3D human body and garment models inputted are generally non-homogeneously distributed in real life; some parts of the mesh are dense and other parts are sparse. If non-homogeneous mesh is directly applied to the mesh deformation, the deformation effect will be greatly affected. Therefore, the human body and garment models shall be optimized during the preprocessing step to make them homogeneous.
In the said Step (2), the non-homogeneous human body mesh M_B and the garment mesh M_C inputted in Step (1) are discretized to retrieve only vertex information from the inputted mesh information, and to get the sets of original vertices, namely, V_B and V_C. During the discretization, record distances between all vertices and their topological connections among the original mesh data for use in the mapping of Step (7).
In the said Step (3), the set of discretized human body vertices V_B and the set of discretized garment vertices V_C are voxelized, and the space is decomposed into n × n voxels, each with a cubic radius of d. With the set of vertices for the human body mesh as an example, for the i-th voxel V_B^i in it, supposing that there are m human body mesh vertices M_B^i, M_B^(i+1), …, M_B^(m−1) in the space covered by it, combine the m mesh vertices in the voxel V_B^i into a single vertex.
(3-3) Handling all human body and garment vertices in the same way to get a simplified, discretized, and homogeneous set of vertices V'_B for the human body mesh and a simplified, discretized, and homogeneous set of vertices V'_C for the garment mesh.
(3-4) Based on original topological connections from Step (2), adding edge connections to all vertices in the simplified set of vertices for new topological connections, which are used for constructing the Laplacian matrix in Step (4).
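The edge-rebuilding of Step (3-4) can be sketched as follows (illustrative; `mapping` is the original-to-simplified index map produced by the clustering step):

```python
def remap_edges(edges, mapping):
    """Project original topological connections onto the simplified vertex
    set: each original edge (i, j) becomes (mapping[i], mapping[j]);
    self-loops (both endpoints merged into one voxel) collapse away and
    duplicate edges are removed."""
    out = set()
    for i, j in edges:
        a, b = mapping[i], mapping[j]
        if a != b:
            out.add((min(a, b), max(a, b)))
    return sorted(out)
```

The resulting edge set is what the Laplacian matrix of Step (4) is built from.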
In the said Step (4), the Laplacian operator matrices for the human body and garment models are established. Since the human body model and the garment model are two independent models and all vertices in these two models have their own topological connections, these two models are separated from each other in terms of topology. In the meantime, as the deformation of the garment mesh shall be driven by the human body mesh, the discretized sets of vertices of these two models shall be treated as a whole while constructing the Laplacian matrix. The topological information and geometric information of the 3D model shall be considered simultaneously, so a geometric Laplacian matrix L shall be constructed during this step.
(4-1) The set of vertices from the discretized and simplified human body and garment models is defined as V, which contains n vertices; among them, the Euclidean coordinates of any vertex v_i can be expressed as v_i = [v_ix, v_iy, v_iz]^T ∈ R^3. For the set of vertices, V = [v_1^T, v_2^T, …, v_n^T]^T.
(4-2) Further, the positions of all vertices in the set of vertices V can be expressed, with a dimension of n, as the vector V. Correspondingly, the Laplacian matrix L is an n × n matrix. Therefore, multiply the Laplacian matrix and the position vector of the vertices of the discretized human body and garment models, namely, F = L × V.
(4-3) Further, the Laplacian matrix is a sparse matrix, whose way of assigning non-zero element values resembles the adjacency matrix between vertices. Considering the topology information of the set of vertices V, if there is an edge between any two vertices v_i, v_j, the weight between these two vertices is not zero, namely, w_ij ≠ 0, corresponding to the element a_ij = w_ij (i ≠ j) in the Laplacian matrix.
(4-4) Further, for the element a_ii on the diagonal of the Laplacian matrix, its value refers to the number of vertices that connect to vertex i through an edge.
In the said Step (5), the inverse matrix is preprocessed and solved. The core of Laplacian deformation is to convert the coordinates of the vertex from the Euclidean space to the differential coordinate space. To keep local details of the garment model unchanged, the deformed local differential coordinates shall also be maintained unchanged. The entire deformation process is shown below:

(5-1) First, calculate the differential coordinates of each vertex falling under the set of vertices V:

Δ = L(V)

Wherein, Δ refers to the differential coordinates of the vertex.
(5-2) Second, move some vertices in the human body model, and see these vertices as the deformation handle to get the new Euclidean coordinates of the vertices on the deformation handle:

v'_i = u_i, i ∈ C

Wherein, C refers to the set of all vertices on the handle; u_i refers to the new position of the i-th vertex on the handle; v'_i represents the new position of the i-th vertex.
(5-3) Third, based on the differential coordinates and the new positions of the vertices on the handle, the least squares method is utilized to calculate the positions of the other vertices in the set of vertices V:

V' = argmin(‖L(V') − Δ‖² + Σ_{i∈C} ‖v'_i − u_i‖²)

Wherein, V' refers to the new position vector of all vertices.
(5-4) Fourth, simplify the optimal equation in the third step to transform the optimization problem into solving the following linear equations:

AV' = b

Specifically,

A = [L; F] (L stacked above F), where F_ij = 1 if j = s_i ∈ C, and F_ij = 0 otherwise;

b_k = 0 for k ≤ n, and b_k = x_{s_(k−n)} for n < k ≤ n + m.

The optimization problem to be solved is min ‖Ax − b‖.
(5-5) The above optimization problem can be expressed as min ‖Ax − b‖. Since A is not a square matrix, the system cannot be directly solved, and the above system can be expressed as:

A^T A x = A^T b

x = (A^T A)^(−1) A^T b

(5-6) A^T A is a positive definite matrix; it can be decomposed into the product of two triangular matrices:

A^T A = R^T R
(5-7) The equation system can be transformed into:

R^T R x = A^T b

R^T x̃ = A^T b

R x = x̃

x = R^(−1) x̃

As a result, the solution to the inverse matrix R^(−1) shall be obtained first, and the intermediate variable x̃ is utilized to solve the positions of all vertices in the final set of vertices.
In the said Step (6), changes are made to the control vertices of the human body mesh through the Laplacian matrix to drive real-time smooth deformation of the garment mesh. Pursuant to the processes described in Step (5), during the virtual dressing, the motion information inputted by the users is seen as the new positions of the control handle vertices of the human body model; by solving the above least squares problem, the new positions of the deformed human body and garment models can be obtained.
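The real-time property comes from precomputing the solve once and reusing it for every frame of motion input. A toy sketch with a closed-form 2 × 2 normal-matrix inverse (illustrative only — not the patent's solver, which uses the Cholesky factor R):

```python
class LaplacianSolver:
    """Pre-computes (A^T A)^{-1} A^T once for a two-column A (toy case,
    inverted in closed form); each subsequent frame then costs only a
    small matrix-vector product, mirroring how the factorisation is
    reused for repeated handle updates."""
    def __init__(self, A):
        n00 = sum(r[0] * r[0] for r in A)
        n01 = sum(r[0] * r[1] for r in A)
        n11 = sum(r[1] * r[1] for r in A)
        det = n00 * n11 - n01 * n01
        self.inv = ((n11 / det, -n01 / det), (-n01 / det, n00 / det))
        self.A = A

    def solve(self, b):
        # A^T b, then multiply by the cached inverse normal matrix
        atb = (sum(r[0] * v for r, v in zip(self.A, b)),
               sum(r[1] * v for r, v in zip(self.A, b)))
        return tuple(self.inv[i][0] * atb[0] + self.inv[i][1] * atb[1]
                     for i in range(2))
```

Two different right-hand sides (two frames of handle motion) reuse the same precomputation.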
In the said Step (7), the deformed and simplified human body and garment mesh are mapped back to the mesh space of the original resolution based on the recorded distances and topological connections to get the ultimately deformed human body and garment mesh.
(7-1) In the said Step (3), the simplified set of vertices V'_B for the human body mesh and the simplified set of vertices V'_C for the garment mesh are obtained through voxelization and conglomeration. With the vertices of the human body mesh as an example, in the original non-simplified set of vertices V_B for the human body mesh, for any vertex v_i, m simplified vertices of the human body mesh can be found within the given distance range: v_s^1, v_s^2, …, v_s^m, and the distances between vertex v_i and the m simplified human body mesh vertices can be recorded as d_s^1, d_s^2, …, d_s^m.
(7-2) The Euclidean coordinates of vertex v_i can be expressed as:

v_i = v_s^1 w(d_s^1) + v_s^2 w(d_s^2) + v_s^3 w(d_s^3) + … + v_s^m w(d_s^m)

Wherein, v_s^j, j = 1, 2, …, m refers to the Euclidean coordinates of the m simplified vertices; w(d_s^j) represents the weighting function with the distance d_s^j as an independent variable.
(7-3) Based on the weighting function and relations between adjacent vertices recorded in Step (3), calculate the new positions of human body mesh vertices of the original resolution in accordance with the new positions of the deformed and simplified human body mesh vertices, recover the original topological connections, and add edge connections to the recovered vertices.
(7-4) The same method is adopted for the garment mesh vertices, and the deformed mapped human body and garment mesh of the original resolution are finally obtained.
Above are detailed descriptions of this invention, but the embodiments of this invention are not limited to the above ones; other alterations, replacements, combinations, and simplifications made under the guidance of the core idea of this invention shall also be included in the protection scope of this invention.
Selected Answer:
total economic cost
Correct Answer:
total economic cost
Question 2
3 out of 3 points
Flat-screen plasma TVs are selling extremely well. The originators of this technology are earning higher profits. What theory of profit best reflects the performance of the plasma screen makers? Answer
Selected Answer:
innovation theory of profit
Correct Answer:
innovation theory of profit
Question 3
3 out of 3 points
Recently, the American Medical Association changed its recommendations about the frequency of pap-smear exams for women. The new frequency recommendation was designed to address the family histories of the patients. The optimal frequency should be where the marginal benefit of an additional pap-test: Answer
Selected Answer:
equals the marginal cost of the test
Correct Answer:
equals the marginal cost of the test
Question 4
3 out of 3 points
Several executive compensation plans have been employed to motivate managers to make decisions that maximize shareholder wealth. These include: Answer
Selected Answer:
requiring officers to own stock in the company
Correct Answer:
requiring officers to own stock in the company
Question 5
3 out of 3 points
In the shareholder wealth maximization model, the value of a firm's stock is equal to the present value of all expected future ____ discounted at the stockholders' required rate of return. Answer
Selected Answer:
profits (cash flows)
Correct Answer:
profits (cash flows)
Question 6
3 out of 3 points
Which of the following increases (V0), the shareholder wealth maximization model of the firm: V0·(shares outstanding) = Σ_{t=1}^{∞} π_t / (1 + ke)^t + Real Option Value. Answer
Selected Answer:
Decrease the required rate of return (ke).
Correct Answer:
Decrease the required rate of return (ke).
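A quick illustrative computation (not part of the quiz) of the quoted model, truncated to a finite horizon, showing why decreasing ke increases V0:

```python
def shareholder_value(profits, ke, real_option_value=0.0):
    """V0 * (shares outstanding) = sum over t of profit_t / (1 + ke)^t
    + Real Option Value -- the model quoted in the question, truncated
    to a finite horizon of len(profits) periods for computation."""
    pv = sum(p / (1.0 + ke) ** t for t, p in enumerate(profits, start=1))
    return pv + real_option_value
```

With identical profit streams, a lower discount rate ke yields a strictly higher present value — the correct answer above.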
Question 7
3 out of 3 points
Possible goals of Not-For-Profit (NFP) enterprises include all of the following EXCEPT: Answer
Selected Answer:
maximize total costs
Correct Answer:
maximize total costs
Question 8
3 out of 3 points
The primary objective of a for-profit firm is to ___________. Answer
Selected Answer:
maximize shareholder value
Correct Answer:
maximize shareholder value
Question 9
3 out of 3 points
A Real Option Value is:
Answer
Selected Answer:
An opportunity to implement a new cost savings or revenue expansion activity that arises from business plans that the managers adopt.
Correct Answer:
An opportunity to implement a new cost savings or revenue expansion activity that arises from business plans that the managers adopt.
Question 10
0 out of 3 points
Tax payments are an example of ____.
Answer
Selected Answer:
implicit costs
Correct Answer:
explicit costs
Question 11
0 out of 3 points
The standard deviation is appropriate to compare the risk between two investments only if: Answer
Selected Answer:
objective estimates of each possible outcome exist
Correct Answer:
the expected returns from the investments are approximately equal
Question 12
0 out of 3 points
The estimated probability of a value occurring that is greater than one standard deviation from the mean is approximately (assuming a normal distribution): Answer
Selected Answer:
68.26%
Correct Answer:
15.87%
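A quick illustrative check of the two percentages in this question, using the standard normal error function: 68.26% is the mass *within* one standard deviation, while the one-sided tail above one standard deviation is about 15.87%.

```python
import math

def tail_above_one_sigma():
    """P(X > mean + 1 sigma) for a normal distribution, via the error
    function: 0.5 * (1 - erf(1 / sqrt(2))) ~= 0.1587."""
    return 0.5 * (1.0 - math.erf(1.0 / math.sqrt(2.0)))
```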
Question 13
3 out of 3 points
The primary difference(s) between the standard deviation and the coefficient of variation as measures of risk are: Answer
Selected Answer:
the coefficient of variation...
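An illustrative computation (not part of the quiz) of the two measures, showing why the coefficient of variation can compare investments whose expected returns differ while the standard deviation alone cannot:

```python
import math

def std_dev(xs):
    """Population standard deviation of a return series."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def coeff_variation(xs):
    """Risk per unit of expected return: CV = sigma / mean."""
    return std_dev(xs) / (sum(xs) / len(xs))
```

Two investments with identical standard deviations but different mean returns have different CVs — the smaller-mean investment carries more risk per dollar of expected return.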
Slovakia's economic performance largely depends on external demand and is significantly aligned with developments in the external macro environment in Europe and the country's main trading partners, Germany and the Czech Republic. As such, Slovakia's dependence on export-oriented industries presents a key credit risk to the banks' performance, because softening demand for these industries' production will lead to higher unemployment levels and non-performing loans (NPLs).
Moody's expects Slovakia's GDP growth trajectory to decelerate to about 1.7% for 2012, down from 3.1% in 2011 and 4% in 2010, with further downside risks as continued uncertainty hinders business and consumer confidence in the country and the broader euro area.
Despite a recovery in profits in the last two years, Moody's believes that the weakening operating environment will depress banks' profitability, due to several macro and domestic specific factors, such as: (i) a slowdown in lending growth; (ii) a likely increase in loan-loss charges, reversing the lower charges recorded in 2010 and 2011; (iii) the payment of a new bank tax, which will be levied by the government for the first time in 2012; and (iv) pressures on interest margins, more recently driven by increased competition for deposits.
Moody's recognises that system NPLs stabilised in 2011 at 5.9%. However, overall, we expect that the deceleration of economic growth in 2012 will contribute to an increase in the rate of formation of new NPLs as well as higher loan-loss provisions. High credit concentrations in the banks' loan books will likely exacerbate the potential downside risks to asset-quality trends. Additional downside risks include the high proportion of high loan-to-value (LTV) mortgages in the banks' loan portfolios, declining real-estate prices, growing household indebtedness and rising unemployment in the higher income segment.
Despite these negative factors, Slovakian banks' capitalisation has strengthened in recent years as a result of profit retention, thus providing adequate loss-absorption capacity under Moody's scenario analysis, whilst funding and liquidity profiles will likely remain relatively stable, as banks fully fund their loan books through deposits, which reduces their sensitivity to changes in market confidence.
Although the system now faces new risks, as many local banks are owned by Western European banks that are currently under pressure to repatriate capital or potentially to sell weaker subsidiaries, the rating agency recognises that the National Bank of Slovakia has introduced stricter capital rules to protect the capital buffers of local banks, particularly from foreign owners' requests for higher dividend payments, which will partially mitigate these risks.
Moody's also notes that the weakening creditworthiness of the Slovakian government, downgraded earlier this year to A2, and of the main foreign owners of Slovakian banks (on review for downgrade since February 2012) indicate, in our view, a diminishing capacity to provide support to the local banks, in case of need.
The new report, "Banking System Outlook: Slovakia", is now available on www.moodys.com.
Module summary
This module introduces the statistical methodology used in analysing multivariate observations, and its application to real data sets.
Objectives
To introduce the statistical methodology used in analysing multivariate observations, and to understand its application to real data sets.
On completion of this module, students should be able to:
(a) relate joint, marginal and conditional distributions and their properties with particular reference to the multivariate normal distribution;
(b) obtain and use Hotelling's T2 statistic for the one-sample and two-sample problems;
(c) derive, discuss the properties of, and interpret principal components;
(d) use the factor analysis model, and interpret the results of fitting such a model;
(e) derive, discuss the properties of, and interpret decision rules in discriminant analysis;
(f) use a statistical package with real data to facilitate an appropriate analysis and write a report giving and interpreting the results.
Syllabus
In multivariate analysis several variables are measured on each individual in the sample. The multivariate normal distribution now plays the same modelling role that the normal distribution does in univariate theory. Many of the univariate results have multivariate analogues, and the module will look at generalisations of the t-test and confidence intervals.
But a range of new techniques becomes available in the multivariate setting, such as reducing the effective number of variables, as in principal components analysis, and classifying observations to populations, as in discriminant analysis.
Using the computer to carry out these analyses and to look at examples will form an integral part of the course.
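As an illustration of the kind of computer work involved (a sketch only, not course code; it assumes NumPy is available, and the function name and example data are invented), principal components can be obtained by eigen-decomposing the sample covariance matrix:

```python
import numpy as np

def principal_components(X):
    """Principal components of a data matrix X (rows = individuals,
    columns = variables): eigen-decompose the sample covariance matrix
    and sort components by decreasing variance."""
    S = np.cov(X, rowvar=False)               # sample covariance matrix
    variances, loadings = np.linalg.eigh(S)   # symmetric eigendecomposition
    order = np.argsort(variances)[::-1]       # largest variance first
    return variances[order], loadings[:, order]

# Three strongly correlated variables: one component should carry
# almost all of the variance, illustrating dimension reduction.
rng = np.random.default_rng(0)
z = rng.normal(size=(200, 1))
X = np.hstack([z + 0.1 * rng.normal(size=(200, 1)) for _ in range(3)])
variances, loadings = principal_components(X)
explained = variances / variances.sum()       # proportion of variance explained
```

Whether to decompose the covariance or the correlation matrix (a choice discussed in topic 5 below) simply corresponds to standardising the columns of X first.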
Topics covered include:
1. Introduction to multivariate analysis and review of matrix algebra.
2. Multivariate distributions; moments; conditional and marginal distributions; linear combinations.
3. Multivariate normal and Wishart distributions; maximum likelihood estimation.
4. Hotelling's T2 test; likelihood vs. union-intersection approach; simultaneous confidence intervals.
5. Dimension reduction; principal component and factor analysis; covariance vs. correlation matrix; loading interpretation.
6. Discriminant analysis; maximum likelihood and Bayesian discriminant rules; misclassification probabilities and estimation; Fisher's discriminant rule.
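Not part of the module materials, but as a quick illustration of topic 4, the one-sample Hotelling's T2 test is only a few lines of Python (assuming NumPy and SciPy are available; the function and variable names are my own):

```python
import numpy as np
from scipy import stats

def hotelling_one_sample(X, mu0):
    """One-sample Hotelling's T^2 test of H0: E[x] = mu0.
    Returns the T^2 statistic and a p-value from the exact
    F(p, n - p) reference distribution."""
    n, p = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)            # unbiased sample covariance
    d = xbar - mu0
    t2 = n * d @ np.linalg.solve(S, d)     # T^2 = n d' S^{-1} d
    f_stat = (n - p) / (p * (n - 1)) * t2  # equivalent F statistic
    p_value = stats.f.sf(f_stat, p, n - p) # upper-tail F(p, n - p)
    return t2, p_value

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))               # true mean vector is (0, 0)
t2_null, p_null = hotelling_one_sample(X, np.zeros(2))   # H0 true
t2_far, p_far = hotelling_one_sample(X, np.array([5.0, 5.0]))  # H0 false
```

With the hypothesised mean far from the truth the statistic is enormous and the p-value essentially zero, while under the true mean the test behaves like an ordinary F test.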
Teaching methods
|Delivery type|Number|Length (hours)|Student hours|
|Lecture|22|1.00|22.00|
|Practical|1|2.00|2.00|
|Private study hours|||76.00|
|Total contact hours|||24.00|
|Total hours (100hr per 10 credits)|||100.00|
Private study
Studying and revising course material.
Completing assignments and assessments.
Opportunities for formative feedback
Regular problem-solving assignments.
Methods of assessment
Coursework
|Assessment type|Notes|% of formal assessment|
|In-course assessment||20.00|
|Total percentage (assessment coursework)||20.00|
There is no resit available for the coursework component of this module. If the module is failed, the coursework mark will be carried forward and added to the resit exam mark with the same weighting as listed above.
Aluminium in Construction
Aluminium is a product with unique properties, making it a natural partner for the building industry. Thanks to its strength, durability, corrosion resistance and recyclability, it has become an essential product for the building industry, and over the past 50 years its use in building applications has shown continuous and consistent growth.
Aluminium extrusions are commonly used for window frames and other glazed structures ranging from shop fronts to large roof superstructures for shopping centres and stadiums; for roofing, siding and curtain walling; and for cast door handles, window catches, staircases, and heating and air-conditioning systems. Most recently, aluminium has played a significant role in the renovation of historic buildings. The characteristics and properties of aluminium as a material have led to revolutionary and innovative changes in building techniques and in architectural and engineering projects. Aluminium is leading the way into the future of the construction industry.
Skillet Lasagna Bolognese
Prep: 40 min | Cook: 40 min
The flavour of lasagna with meat sauce, in half the time! Perfect for busy weeknights, our Skillet Lasagna Bolognese is super easy to make with the help of the CLASSICO Pasta Sauce.
What do I need?
Servings: 6
1. 1 Tbsp. olive oil
2. 1/2 lb. (225 g) fresh mushrooms, quartered
3. 1 small carrot, finely chopped
4. 1/2 cup chopped onions
5. 12 lasagna noodles, coarsely broken, uncooked
6. 1 jar (650 mL) Classico di Bologna Bolognese Pasta Sauce
7. 2-2/3 cups water
8. 1/3 cup Kraft 100% Parmesan Shredded Cheese, divided
How do I make it?
Step 1
Heat oil in large nonstick skillet on medium heat. Add vegetables; cook 6 to 7 min. or until carrots and onions are softened and mushrooms release most of their liquid, stirring occasionally.
Step 2
Stir in pasta sauce and water. Bring to boil. Stir in noodles; cover. Simmer on medium-low heat 15 to 18 min. or until noodles are tender, stirring occasionally.
Step 3
Add cheese; mix lightly. Remove from heat. Let stand, covered, 3 min. before serving.
Kraft Kitchen Tips!
Substitute: Substitute 3 cups uncooked elbow macaroni for the broken lasagna noodles.
Serving Suggestion: Serve with a fresh green salad tossed with your favourite Kraft Vinaigrette Dressing.
Note: Traditional lasagna can take up to 2 hours to prepare. This skillet version takes less than 1 hour.
Nutrition (per serving; 6 servings, 1-1/4 cups / 300 mL each)
Calories: 300 (0 from fat)
Fat: 8 g
Saturated fat: 2.5 g (13 % DV)
Cholesterol: 10 mg
Sodium: 550 mg (23 % DV)
Carbohydrate: 45 g
Fibre: 5 g
Sugars: 10 g
Protein: 13 g
Vitamin A: 20 % DV
Vitamin C: 30 % DV
Calcium: 10 % DV
Iron: 15 % DV
Nutrition information is estimated based on the ingredients and cooking instructions as described in each recipe and is intended to be used for informational purposes only. Please note that nutrition details may vary based on methods of preparation, origin and freshness of ingredients used.
Tags: Carrot, Easy, Top of Stove, Vegetables, Entrée, Onion, Mushroom, Main Course, Cheese, Pasta/Noodles, Parmesan, Dinner, Lasagna, Dairy, Pasta, Timesaver, Kraft Cheese
Location of City Palace
The palace is located towards the northeast side of central Jaipur and has many courtyards and buildings. The palace complex lies in the heart of Jaipur city, to the northeast of the very centre. The site for the palace was located on the site of a royal hunting lodge on a plain land encircled by a rocky hill range, five miles south of Amber (city).
History of City Palace
Maharaja Sawai Jai Singh II is known to have contracted work for building the outer wall of the city's complex. He shifted his capital from Amber to Jaipur in 1727, owing to water shortages and a growing population. He entrusted the city's architectural design to the chief architect Vidyadhar Bhattacharya, who went on to design the City Palace in accordance with the Vaastushastra texts.
Architecture of City Palace
The City Palace reflects Rajput, Mughal and European architectural styles although the palace was designed to Vaastushastra discourse. These are all completely decorated. The Palace has been designed according to a 'grid style' and has various structures such as, Chandra Mahal, Mubarak Mahal, Diwan-I-Khas and the Govind Dev Ji Temple. The walls and gates are ornately designed to Mughal style, with various murals, lattice and mirrors embellishing them from sides.
Structures of City Palace
The most prominent and most visited structures in the complex are the Chandra Mahal, Mubarak Mahal, Mukut Mahal, Maharani's Palace, Shri Govind Dev Temple and the City Palace Museum.
The three gates of the City Palace are the Rajendra Pol, Tripolia Gate and Atish Pol.
Rajendra Pol: Just outside the City Palace museum one will come across the Rajendra Pol, flanked by two elephants, each carved out of a single piece of marble. This gate leads travellers to the inner courtyard, with its exquisite arches and jali windows. As visitors proceed further, another gateway awaits them: the Singh Pol.
Tripolia Gate: Towards the west of the City Palace lies the Tripolia Gate, a gate with three arches. This was the main entrance to the City Palace and Jantar Mantar. Even today only the Maharaja's family is permitted entrance through this gate.
Atish Pol: The other entry point for the palace is the Atish Pol, also known as the Stable Gate. It is located to the left of the palace. The Jantar Mantar (located within the palace premises) could be accessed through this gate.
Besides these gates, several other structures form an important part of the grandeur of this place. They are as follows:
Mubarak Mahal: Made of white marble, it is also known as the Palace of Welcome. It is a two-storeyed building that can be approached from both Sarhad ki Deorhi and Gainda ki Deorhi. It was built by Maharaja Madho Singh II in 1890. Originally built as a rest house, it was later used as the Mahakma Khas, and presently it displays the royal wardrobe as part of the museum.
Sileh Khana: This was originally the place where classical singers and Kathak dancers practised their art and taught their disciples. It is located in the old Gunjankhana near the Mubarak Mahal, and it houses one of the main attractions of the palace: the antique weaponry of the Rajputs. The armoury comprises guns, glittering daggers, bows and arrows, axes and a collection of shields.
Diwan I Khas: As the name suggests it was mainly used by the royal people and the aristocrats. Laymen were not allowed here. It was built in 1730. It houses a marble gallery and it has two huge silver urns known as Gangajalis. These were used by Maharaja Madho Singh II.
Diwan I Aam: It was built in 1760 and is popularly known as the Hall of Public Audience. In the recent times it has been converted into an art gallery. It has an exclusive collection of rare manuscripts. It is also decorated with splendid semi-precious stones studded ceilings and intricately carved pillars.
Pritam Niwas: The name literally means the house of the beloved. It was built by Jai Singh II. It faces the Jai Niwas garden.
Chandra Mahal: This is probably the most beautiful building in the palace. It has seven storeys and commands a fabulous view of the gardens and Jaipur city. The complex of the palace comprises of museum, an armory and several fine halls.
Maharani Palace
The Maharani's Palace was originally the residence of the royal queens. It has been converted into a museum, where weapons used by the royalty during war campaigns are displayed, including some dating to the 15th century. The ceiling of this chamber has unique frescoes, which are preserved using the jewel dust of semiprecious stones. A particular weapon on display is the scissor-action dagger, which, when thrust into an enemy's body, is said to disembowel the victim on its withdrawal. The other artifacts on display include swords with pistols attached, and the sword presented by Queen Victoria to Maharaja Sawai Ram Singh (1835-80), which is inlaid with rubies and emeralds. The Anand Mahal Sileh Khana (the Maharani's Palace) houses the armoury, which has one of the best collections of weapons in the country. Many of the ceremonial weapons are elegantly engraved and inlaid, belying their grisly purpose.
Govind Dev Ji Temple
Govind Dev Ji temple, dedicated to the Hindu god Lord Krishna, is part of the City Palace complex. It was built in the early 18th century outside the palace walls, set in a garden environment. It has European chandeliers and paintings of Indian art, and the ceiling in the temple is ornamented in gold. Its location gave the Maharaja a direct view of the temple from his Chandra Mahal palace. The aarti (prayer offering) for the deity can be seen by devotees only seven times during the day.
The City Palace is a landmark in Jaipur and is also a very popular tourist hotspot. Apart from the regal architecture, the palace offers a stunning view of the Pink City and also an insight into the rich heritage of a bygone era. The City Palace is a must-see while sightseeing in Jaipur. There are palace buildings from different eras, some dating from the early 20th century. Despite the gradual development, the whole is a striking blend of Rajasthani and Mughal architecture.
Visiting the square both by day and at night is worth your time, because at any time of day the place is just spectacular. At night, the surrounding official buildings are all lit up and bathe the square in a flood of light. You should be careful, though, not to stay after 10:30pm, because otherwise you will be escorted out by soldiers, who come to close up the park for the night.
What I would recommend is watching the raising of the flag at the front of the square at sunrise or sunset. It is a great event: the traffic is halted, and soldiers march up ceremoniously to the square to hoist the flag up or down the pole. You should watch it, though try to arrive early so you can be in front of the crowd. :)
Address: Center of Beijing
Price: Free for square visiting
Opening Hours: During the day.
How to get there: By various buses (no. 1, 4, 10, 22, 52 or 57) or by subway lines 1 and 2.
Extrinsic and intrinsic sensor calibration
Mirzaei, Faraz M. (2013)
Persistent link to this item
http://hdl.handle.net/11299/162506
Title
Extrinsic and intrinsic sensor calibration
Authors
Mirzaei, Faraz M.
Issue Date
2013-12
Type
Thesis or Dissertation
Abstract
Sensor calibration is the process of determining the intrinsic (e.g., focal length) and extrinsic (i.e., position and orientation (pose) with respect to the world, or to another sensor) parameters of a sensor. This task is an essential prerequisite for many applications in robotics, computer vision, and augmented reality. For example, in the field of robotics, in order to fuse measurements from different sensors (e.g., camera, LIDAR, gyroscope, accelerometer, odometer, etc., for the purpose of Simultaneous Localization and Mapping, or SLAM), all the sensors' measurements must be expressed with respect to a common frame of reference, which requires knowing the relative pose of the sensors. In augmented reality, the pose of a sensor (a camera in this case) with respect to the surrounding world, along with its internal parameters (focal length, principal point, and distortion coefficients), has to be known in order to superimpose an object into the scene. When designing calibration procedures, and before selecting a particular estimation algorithm, there exist two main issues of concern that one needs to consider: (i) whether the system is observable, meaning that the sensor's measurements contain sufficient information for estimating all degrees of freedom (d.o.f.) of the unknown calibration parameters; and (ii) given an observable system, whether it is possible to find the globally optimal solution. Addressing these issues is particularly challenging due to the nonlinearity of the sensors' measurement models. Specifically, classical methods for analyzing the observability of linear systems (e.g., the observability Gramian) are not directly applicable to nonlinear systems. Therefore, more advanced tools, such as Lie derivatives, must be employed to investigate these systems' observability. Furthermore, providing a guarantee of optimality for estimators applied to nonlinear systems is very difficult, if not impossible.
This is due to the fact that commonly used (iterative) linearized estimators require initialization and may only converge to a local optimum. Even with accurate initialization, no guarantee can be made regarding the optimality of the solution computed by linearized estimators. In this dissertation, we address some of these challenges for several common sensors, including cameras, 3D LIDARs, gyroscopes, Inertial Measurement Units (IMUs), and odometers. Specifically, in the first part of this dissertation we employ Lie-algebra techniques to study the observability of gyroscope-odometer and IMU-camera calibration systems. In addition, we prove the observability of the 3D LIDAR-camera calibration system by demonstrating that only a finite number of values for the calibration parameters produce a given set of measurements. Moreover, we provide the conditions on the control inputs and measurements under which these systems become observable. In the second part of this dissertation, we present a novel method for mitigating the initialization requirements of iterative estimators for the 3D LIDAR-camera and monocular camera calibration systems. Specifically, for each problem we formulate a nonlinear Least-Squares (LS) cost function whose optimality conditions comprise a system of polynomial equations. We subsequently exploit recent advances in algebraic geometry to analytically solve these multivariate polynomial systems and compute the LS critical points. Finally, the guaranteed LS-optimal solutions are directly found by evaluating the cost function at the critical points without requiring any initialization or iteration.Together, our observability analysis and analytical LS methods provide a framework for accurate and reliable calibration of common sensors in robotics and computer vision.
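The dissertation's multivariate polynomial-system solvers rely on algebraic-geometry machinery, but the core idea — compute every critical point of the least-squares cost analytically, evaluate the cost at each, and keep the best, so that no initialization or iteration is needed — can be sketched in one dimension. The following is an illustrative analogy only, not code from the thesis; the function name is invented, and NumPy's univariate polynomial root finder stands in for the multivariate solvers:

```python
import numpy as np

def global_min_poly_cost(coeffs):
    """Globally minimise a univariate polynomial cost J(x) by solving
    the optimality condition J'(x) = 0 for all critical points and
    evaluating J at each real one: no initial guess, no iteration."""
    J = np.polynomial.Polynomial(coeffs)    # coeffs in increasing degree
    critical = J.deriv().roots()            # every critical point of J
    real = critical[np.isclose(critical.imag, 0)].real
    x_star = min(real, key=J)               # critical point of lowest cost
    return x_star, J(x_star)

# Non-convex quartic with two local minima:
# J(x) = (x^2 - 1)^2 + 0.3 x = 1 + 0.3 x - 2 x^2 + x^4.
# A poorly initialised iterative solver can settle in the shallower
# minimum near x = +1; enumerating all critical points cannot.
x_star, J_star = global_min_poly_cost([1.0, 0.3, -2.0, 0.0, 1.0])
```

Here the global minimum lies near x = -1, and the procedure finds it regardless of any starting point, which is the guarantee the linearized iterative estimators discussed above cannot provide.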
Keywords
Computer Vision
Inertial Navigation
Pose Estimation
Robotics
Sensor Calibration
Appears in collections
Dissertations
Description
University of Minnesota Ph.D. dissertation. December 2013. Major: Computer science. Advisor: Stergios I. Roumeliotis. 1 computer file (PDF); xix, 166 pages, appendices A-B.
Suggested Citation
Mirzaei, Faraz M. (2013). Extrinsic and intrinsic sensor calibration. Retrieved from the University of Minnesota Digital Conservancy, http://hdl.handle.net/11299/162506.
Content distributed via the University of Minnesota's Digital Conservancy may be subject to additional license and use restrictions applied by the depositor.
Iridium is committed to keeping space clean and serving as a leader for other organizations when it comes to being a responsible steward of space.
In 2010, we partnered with Johns Hopkins University and Boeing to implement the Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE), as part of a grant from the National Science Foundation. The Hopkins Applied Physics Laboratory implemented the program through hosted payloads on the Iridium constellation. The unique architecture of the Iridium constellation provides the AMPERE sensors the capability to measure the Earth’s magnetic field, forecast space weather and solar storms, and track many other important observations. This enables high-quality forecasting of space-based solar storms that can disrupt aviation and terrestrial telecom and satellite systems. AMPERE provides data every two to 20 seconds to the ground stations, allowing for analysis within minutes – up to 100 times faster than before the program’s launch! While the first-generation Iridium satellites were not designed for hosted payloads, we were able to accommodate the AMPERE mission using existing sensors. With the transition to our second-generation constellation, we will be able to continue supporting AMPERE with even better input data.
Iridium is keenly aware of the importance of minimizing the risks associated with orbital debris. On February 11, 2009, an abandoned, uncontrolled Russian satellite crashed into Iridium 33, one of our active communication satellites, in an unprecedented space collision. Following this jolting crash, Iridium identified a need for better monitoring of all objects in space, as well as management and mitigation of space debris. In the days, weeks, and months following the collision, Iridium engineers began working with the government, U.S. Air Force, and NASA to build best practices for space operations and satellite end-of-life disposal. Today, we have integrated conjunction awareness and maneuverability capabilities into our operational DNA. Iridium maintains close and constant communication with the primary knowledge leader in the field of space debris, the U.S. Air Force Joint Space Operations Center (JSpOC). Through this partnership, we help to develop content and data for the space catalog, a public resource used to track all space debris. Additionally, our space operations team partners closely with JSpOC, the Joint Functional Component Command for Space (JFCC Space), the Conjunction Assessment Technical Advisory Council (CA TAC), and the Center for Orbital Debris Education and Research (CODER) to monitor and share our space traffic data, as well as help educate and influence other organizations on the importance of space situational awareness.
Iridium supports important sustainability and environmental work and research all around the globe. From climate change monitoring, to carbon footprint reduction, to wildlife protections, our technology enables many solutions designed to help make the world a safer, cleaner place.
The Ocean Cleanup is an ambitious project seeking to rid the world’s oceans of plastic garbage by conducting the largest ocean cleanup in history. To do so, the team has created a system of 600-meter-long floating plastic collectors that include a 3-meter-deep skirt designed to collect plastic pollution. Iridium has joined the project as the preferred satellite communications partner. The Ocean Cleanup system is equipped with two Iridium Pilot® terminals providing the Iridium OpenPort® broadband service, which allows the autonomous system to relay critical systems data back to the team in Rotterdam, including compartment flood detection, position and location information, pictures (like the one to the right), 360-degree video, and system performance information. As The Ocean Cleanup team scales up the project to 60 systems, Iridium and AST will continue to support the mission with terminals and service, and plan to begin providing Iridium CertusSM hardware and services once it becomes available.
Click here to learn more about our involvement in The Ocean Cleanup.
Our commitment to providing unparalleled, reliable pole-to-pole coverage enables researchers to engage in crucial activities, like O-Zone level measurement, polar ocean profiling, weather forecasting and data transmission, wave movement measurement, and Arctic Ocean mapping. We’ve partnered with various research groups and individuals to support this work with donated equipment and airtime on our network.
Click here for more information on our polar research solutions.
Iridium is a proud supporter of wildlife protection efforts around the globe. As wildlife continues to decline at alarming rates around the world, we are committed to collaborating with conservation organizations and scientists to use our technology for monitoring wildlife and fighting against poachers. We have a strong history of partnering with various organizations like Smithsonian National Zoo & Conservation Institute, Zoological Society of London, and Veterans Empowered to Protect African Wildlife (VETPAW) to develop solutions and donate resources that help protect global biodiversity.
Iridium and its partners offer a range of IoT solutions that support climatology research and tracking. We understand the importance of this work in protecting our precious planet, so we are proud to support various independent research groups and individuals with donated equipment and airtime that allows them to conduct this critical work through scientific explorations, especially in remote areas.
Iridium Satellite LLC (“Iridium”) is committed to maintaining the highest standards of business conduct and ethics.
In recognizing that the trade of tin, tantalum, tungsten and gold (“Conflict Minerals” or “3TG”) has been a primary source of funding for armed conflict and human rights violations in the Democratic Republic of Congo (DRC) and the surrounding region, Congress enacted Section 1502 of the Dodd-Frank Act to promote peace and security in the region. That Section provides, in part, that effective January 1, 2013, companies regulated by the U.S. Securities and Exchange Commission (SEC) are required to disclose annually whether the Conflict Minerals (or their derivatives) in their products originated from the DRC region, and to disclose the due diligence conducted on the Conflict Minerals source and chain of custody.
Iridium is committed to complying with the SEC regulations regarding the reporting of the use of certain Conflict Minerals originating in the Democratic Republic of the Congo and adjoining countries into its end user products.
- Establishing a strong supply chain management system to guard against using 3TG minerals mined or processed in the DRC and surrounding countries.
- Identifying and assessing risk in our supply chain.
- Designing and implementing a strategy to respond to identified risks.
- Carrying out an independent third-party audit of the supply chain due diligence.
- Reporting on supply chain due diligence.
At this time, some manufacturers are still unable to provide detailed information as to the smelters, refiners or ultimate sources of the 3TG minerals used in electronic devices, such as Iridium’s subscriber equipment, because of the complex and fragmented supply chains involved.
As part of its commitment to comply with the SEC 3TG minerals disclosure requirements, on an annual basis, Iridium surveys all of its suppliers to identify where 3TG minerals used in the manufacture of Iridium subscriber equipment are mined, smelted or refined.
- Developing a Conflict Minerals due diligence program consistent with OECD guidelines.
- Adopting a policy of responsible sourcing of 3TG minerals and requiring that their suppliers do the same.
- Using, to the extent available, the Electronic Industry Citizenship Coalition (EICC) – Global e-Sustainability Initiative (GeSI) Conflict-Free Smelters Program list of compliant smelters as the source for any 3TG minerals used in the products supplied to Iridium.
Should you have any questions regarding our compliance efforts, please feel free to email David Bensted or the Iridium Supply Chain and we will be happy to assist you.
Court orders new trial for man accusing Key West police of misconduct
A federal appellate court has tossed out a jury verdict that cleared Key West police of misconduct in a case that dates back to October 2013 during Fantasy Fest.
A new trial was ordered in the case of Raymond Berthiaume vs. the now-retired Lt. David Smith of the Key West Police Department and the city of Key West. Berthiaume alleges Smith tried to frame him in a battery case.
At issue, the court ruled, was whether the plaintiff, who is gay, was discriminated against by jurors who weren’t asked whether they harbor bias against gays.
“The district court here asked the jurors multiple questions about any biases or prejudices they might have against law enforcement,” reads the decision by the 11th Circuit U.S. Court of Appeals released Nov. 22. “But the district court refused to ask any questions at all about prejudice on the basis of sexual orientation. Therefore, we have no way to discern whether the jury was biased against [the plaintiff] for that reason.”
Three judges issued the ruling. The city and Smith may ask for the full court to rehear the case.
“Berthiaume noted that homosexuals had only recently begun to gain acceptance in society, and many people still harbor bias or prejudice against homosexuals,” according to the ruling. “Accordingly, Berthiaume contended that in a case such as his, involving both a gay party and gay witnesses, it is necessary for courts to inquire into prospective jurors’ potential biases against homosexuals to ensure a fair trial.”
Berthiaume, of Fort Lauderdale, said Smith knocked him to the ground after he had smacked a Key West street sign in frustration during an argument among his friends about going home. Berthiaume’s ex-partner wanted to stay out and swiped the group’s car keys.
The incident happened during Fantasy Fest 2013, which Berthiaume attended with some friends. Smith booked Berthiaume for domestic battery, but he was never charged.
In January 2015, Berthiaume sued Smith and the city demanding at least $15,000 in damages. The lawsuit accused Smith of trying to frame Berthiaume for a battery by forcing one of his friends to lie about what happened.
But after a three-day trial in May 2016 at U.S. District Court in Key West before Judge James Lawrence King, a jury ruled Smith didn’t violate any of Berthiaume’s rights Oct. 27, 2013, nor did he make a false arrest.
Berthiaume says he required surgeries to repair injuries to his jaw and left wrist caused when Smith knocked him to the ground that night.
He was clad only in a loin cloth and flip-flops during the incident, police said. Berthiaume and his friends say he was wearing boxer shorts.
Smith retired in 2015 after 25 years with the Key West Police Department.
When the topic of income is raised, the issue is conventionally associated with the field of economics. Current research, however, suggests that a correlation exists between income and public health, specifically between minimum wages and smoking addiction. According to a recent study entitled “Minimum Wages and Public Health,” led by Paul Leigh, a professor emeritus in the Department of Public Health Sciences at UC Davis, there is a link between increasing minimum wages and the reduction of smoking prevalence among low-wage and low-skilled workers.
Though Leigh has written several papers on the correlations between wages and public health, it was only two or three years ago that he began looking at minimum wages specifically. After contacting Juan Du, an associate professor of economics at Old Dominion University, the two began a systematic review of various medical literature pertaining to minimum wages through scientific websites, such as PubMed and Web-of-Science.
Leigh mentioned that a very time-consuming aspect of the research was determining which literature was reputable and relevant to the question they were exploring. Some studies simply had the words “minimum wage” in their abstracts but were not looking at the issue specifically. He stated that his team spent almost two years narrowing down the data. Once the set of studies was established, the researchers conducted a meta-analysis to further analyze the data.
According to Leigh, meta-analysis means analyzing numbers from different studies in order to obtain an average estimate that is then examined to determine if it is statistically significant.
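A rough sketch of what that averaging looks like in the common fixed-effect (inverse-variance) approach: each study's estimate is weighted by the inverse of its squared standard error, so more precise studies count for more, and the pooled estimate is then tested against zero. The numbers below are purely hypothetical and are not taken from Leigh and Du's study.

```python
import math

def pooled_estimate(estimates, std_errors):
    """Fixed-effect (inverse-variance) meta-analytic average.

    Each study's estimate is weighted by 1/SE^2, so more precise
    studies count for more. Returns the pooled estimate, its
    standard error, and a z statistic for testing whether the
    pooled effect differs from zero.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    total = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, estimates)) / total
    pooled_se = math.sqrt(1.0 / total)
    return pooled, pooled_se, pooled / pooled_se

# Hypothetical per-study effects of a $1 minimum-wage increase on
# smoking prevalence (percentage-point change) with standard errors.
effects = [-1.8, -1.1, -1.5]
ses = [0.6, 0.4, 0.5]
est, se, z = pooled_estimate(effects, ses)
print(f"pooled = {est:.2f} (SE {se:.2f}, z = {z:.2f})")
```

A |z| above roughly 1.96 corresponds to statistical significance at the conventional 5 percent level; real meta-analyses also test for heterogeneity across studies and may switch to random-effects weighting when the study estimates disagree more than chance would allow.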
In addition to the unexpected number of health measures, Leigh explained that the lack of consensus among the studies on the various measures of health was also surprising. While some studies would argue in favor of an apparent effect, other studies would claim there was none. There was a strong consensus, however, about one health group in particular: smokers.
The estimates in Leigh’s study revealed that a $1 increase in minimum wage was associated with a 1.4 percent reduction in smoking prevalence among affected groups. According to Elisa Tong, an associate professor in the Department of Internal Medicine, smoking is disproportionately represented in people with low socioeconomic status. Although a health provider does not have direct access to a patient’s income, their healthcare coverage can suggest their socioeconomic status.
As those of low socioeconomic status may work minimum wage jobs, this potentially leads to stress, which is a dominant reason why people smoke. Tong further explained that though many people want to quit smoking, the nicotine in tobacco products is highly addictive, and quitting entirely typically takes multiple attempts. In revealing the correlation between an increase in minimum wage and a decrease in smoking prevalence, Tong hopes that this will lead to support for increases in minimum wage and have a broader impact on behavior change.
Despite the lack of a general agreement between studies, there were no consistently harmful effects associated with increasing minimum wage. This led Du to come to the conclusion that the positive effects of raising these wages dominates the negative effects.
Leigh explained that middle income wages have been stagnant for the past 30 to 40 years, though those on the lower end have experienced a decrease. If the minimum wage is increased significantly, the wage structure can be improved so that, by increasing the wages of low income workers, economic pressure will be put on middle income wages that will allow them to drift upward. Therefore, by increasing minimum wages, a significant effect can be produced on improving income inequality.
While minimum wage and the wages of low income people tend to spark economic questions, Leigh believes that people should begin looking at minimum wage as a public health issue. | https://theaggie.org/2019/02/08/more-than-economics/ |
On a rather recurrent basis, foreign policy initiatives are discussed in a way that evokes a sense of intangible ambiguity—as if the agents that make decisions are an entirely separate entity from the rest of human society. For if decision-makers were not drastically affected by the very societal influences that are confronted and subsequently dissected by all peoples on a daily basis, then surely the actor’s policies would have a high degree of consistency—thus creating predictable, rational generalizability that the discipline of international relations direly yearns for in an era no longer characterized by relatively static bipolarity. Of course, this is hardly the case; every decision is coupled with primary and secondary ramifications that are taken into consideration and, perhaps most importantly, one has to bear in mind that mistake-prone, easily-coerced humans make every decision that must then be carried out by other imperfect people. Accordingly, it is vital that foreign policy analysts substantially focus upon decision-making and the motives of the decision-makers; failing to account for the many influences that weigh in on every decision ensures a limited effectiveness of predicting future events, and quite arguably eliminates any hope for generalizability in its entirety. Take the recent developments in Syria for instance: the pitying, Rousseauan, Western bystander may find the vetoes of the UN Resolution by Russia and China to be unfathomable; yet, when one considers the economic and ideological interdependencies coupled with a disheveled opposition that lacks an identity remotely comparable to other oppositions that encapsulated the Arab Spring, analysts can begin to understand how decision-makers are influenced and why taking internal and external influences into consideration is necessary. 
The developments in Syria that have essentially handcuffed the West (beyond the usual sanctions that hinder both sides) only exhibit a small sample of the many factors that determine whether decision-makers enact policy, do nothing, or take the middle ground in precautionary and reactionary situations. As a result, decision-makers constantly take into account all considerations of every decision—ranging from the general welfare of one’s citizens to the partisan impact of their respective decision. Ultimately, foreign policy analysis greatly focuses upon decision-making because understanding why decisions are made is the fundamental piece to the puzzle in regards to predicting patterns and foreseeing beneficial solutions; therefore, until a state is on utopian economic and military terms with all other parties, the decision to do absolutely nothing, or to revert to an outdated state of isolationism during a time in which no state is economically or naturally self-sufficient, is never a feasible option.
In order to fully comprehend why there is a need to focus upon decision-making, one must consider all of the factors and restraints that surround policy-making. One such factor is the political structure of national government. As progressive President Franklin D. Roosevelt once said, “... If you ever sit here, you will learn that you cannot, just by shouting from the housetops, get what you want all the time” (Lash 1976: 124). President Barack Obama came to a similar realization when Congress did not allow him to close Guantanamo after he promised to do so during his charismatic campaign—revealing the power of a system of checks and balances. During the euro crisis, economically-sound Germany was pressured to bailout floundering Greece due to the obligation of being a ‘euro-member’. As evidenced, the constraints of the domestic and foreign political context, “…may often lead to outcomes that, although rational in some sense, are quite different from those produced by economic rationality reflecting instead ‘rationality without optimization’” (Diesing 1962: 2-3 & Simon 1990: 8). However, there are also contrasting political structures that give more power to fewer people. While the recent actions of Mubarak, Gadhafi, and al-Assad, and examples such as the diamond trade in Sierra Leone and state-sponsored terrorism in Lebanese-based Hezbollah that fund crony capitalism and create national security concerns—validating aspects of the ‘new wars’ thesis—top the list amongst undemocratic regimes, Blair’s decision to invade Iraq in 2003 despite poor intelligence and little public approval serves as an example of the leeway given to the party leader in a parliamentary system as well. Even in democratic United States, unethical blunders and incidents such as the Gulf of Tonkin and the numerous war crimes committed in Afghanistan and Iraq reveal that supposed constraints can be circumvented in shady ways. 
By taking into account how political structure (and its loopholes) factors into decision-making, analysts crucially increase their chances of predicting when agents are constrained by structural ties, when agents may circumvent structure, and when decisions may be replicated as a blueprint of generalizability.
In addition to the effects of political structure, social influences also play a key role in uncovering the cognitive subjectivity and overall humanizing characteristics of all decision-makers. As Richard Snyder notes, “… [the decision-maker] enters the government from the larger social system in which he also retains membership. He comes to decision-making as a ‘culture bearer’” (Snyder 1962: 7-8). Along these lines, Alexander Wendt differentiates ‘brute facts’ and ‘social facts’: for example, gravity is an indisputable ‘brute fact’ whereas sovereignty—a fundamental tenet of a successful international system—is socially-constructed and leads to disputes when violated as a manufactured factual conception (Wendt 1992: 399). Acknowledging that ‘facts’ are often times socially and subjectively influenced in the field of international relations is very important if one desires impartial analysis. As a brief example, one may wonder why U.S.-Iranian relations are currently so hostile when it would appear that laissez-faire trading between the two states would be mutually beneficial. Whereas an American may cite the hostage crisis of 1979 or Ahmadinejad’s insistence on eliminating Israel as reasons why sanctions and warnings are not strong enough initiatives to confront Iranian nuclear developments, an Iranian may cite the overhaul of their democratically-elected regime during the 1953 Iranian coup d’état or the United States’ unconditional support of Israel, even when Israel blatantly defies international humanitarian regulations, as examples that signify the United States has and always will mingle in Iranian affairs. Hence, as Valerie Hudson notes in emphasizing that ‘facts’ are culturally influenced, “…a tradition of raw empiricism in political science has contributed both to despair and to unsound methodological assumptions” (Hudson 2002: 7).
Albeit difficult, analysts must attempt to view potential policy from an objective viewpoint; otherwise, analysts will fall into the same trap of subjectivity that generally breeds undesirable misconception and confrontation that encumbers analysts and decision-makers alike.
So it is established that political structure and subjectivity affect any decision-maker’s rationale, but it is very difficult for analysts to discern patterns if agents solely act individualistically and often times have the ability to ignore systemic structures. However, there are some general patterns of decision-makers. For instance, in a series of conducted experiments, evidence suggests that decision-makers consider domestic political factors prior to anything else when making foreign policy decisions (Mintz 1993: 15-29). According to an assistant in the Kennedy administration, the first question was always, “…will it fly on the Hill?” regarding foreign policy proposals (Farnham 2004: 448). Additionally, while Samuel Huntington describes how modern states have, “…become unable to define their national interests, and as a result sub-national commercial interests and transnational and non-national ethnic interests have come to dominate foreign policy,” and this is certainly reflected by international missteps ranging from Kosovo and Rwanda to Afghanistan and Iraq that helped squander the potential for a relatively peaceful and successful post-Cold War era, decisions are also generally based upon the perception of how high the ‘stakes and threat’ are (Nye 1999: 22). In support of the aforementioned claim, President George H.W. Bush, notably promoting involvement in Panama, the Gulf War, and Somalia during his tenure, is quoted as saying, "…using military force makes sense as a policy where the stakes warrant, where and when force can be effective…where its application can be limited in scope and time, and where the potential benefits justify the potential costs and sacrifice" (Bush 1994: 203). 
The premise that President Bush is indirectly alluding to is that the preferred situation in which to intervene for humanitarian causes is when the stakes, defined as the, “salience of the values at issue,” and threat, defined as the, “risk of loss on those issues,” are both low (Astorino-Courtois 2000: 489). To put stakes and threat into context, H.W. Bush’s intervention in Somalia would be classified as a ‘low-stakes-low-threat’ decision because intervention was, “relatively ‘cheap and easy’ and worth it,” whereas Operation Desert Storm in response to the Iraqi invasion of Kuwait would be classified as ‘high-stakes-high threat’ because the importance of U.S. interests was designated as weighty and the risk of casualties and monetary expenditure was clearly significant as well (Haas 1994: 69-70). All in all, domestic political factors coupled with stake and threat assessment are the predominant considerations that decision-makers take into account when choosing to intervene, militarily, economically, or ideologically, on a given issue. Since decision-making dictates the international order, analysts need not forget to consider all domestic and foreign factors.
For the sake of answering the ultimate question as to when ‘doing nothing’ is best, rightful stake and threat diagnosis is to be assumed (in other words, if the diagnosis is wrong due to fallacious intelligence or subjectivity, then the question cannot be adequately assessed). Additionally, it is to be assumed that the decision-maker is at least attempting to do what is in the best interest of its citizens. Bearing these premises in mind, any ‘low-threat’ initiative is one in which ‘doing nothing’ would make little sense; for even in the worst-case scenario, a decision-maker is trying to help their fellow man—without having to risk significant money or man-power—and, although perhaps marginally impactful like Operation Desert Shield in Saudi Arabia arguably was, the decision-maker and the national interest may merely have to ‘go back to the drawing board’ (Astorino-Courtois 2000: 490). While ‘low-threat’ predicaments can certainly be classified differently in the eyes of the given beholder, for argument’s sake, ‘doing nothing’ in a rightfully-diagnosed ‘low-threat’ situation is simply irrational because one cannot ‘lose’. In the case of ‘high-threat’ scenarios, the decision to intervene is impacted by many different factors. For instance, from an American perspective in which bloated military spending has facilitated the crippling of the education, healthcare, and social security opportunities for thousands of Americans over the course of the past decade, a decision-maker should probably err on the side of putting money and man-power into fixing domestic programs rather than impeding domestic revitalization and starting wars. In contrast, if public opinion perceives a threat or cause as imminent to national security or worthy of humanitarian aid and citizens are willing to go to a funded battle or invest in foreign aid, then the decision-maker should rightfully protect its own and/or other citizens.
Some neoconservative hawks or world policemen may construe such sentiments as justifying joint-strikes with Israel or overextending unavailable aid to struggling areas, but again this reveals the power of the perception of the decision-makers. Ultimately though, the underlying point is that, during an age of increasing interdependence, ‘doing nothing’ is never best. Understandably, any state or coalition of states should not and cannot police the world because it undermines the legitimacy of sovereignty and is not economically feasible, and time and again overextension proves to be poorly executed and damaging for everyone involved. However, isolationism is economic suicide and blatant indifference to the suffering of fellow man (when one has the resources to help) is unjustifiable. Obviously deficits and domestic problems make it difficult to garner support for foreign policy initiatives but, at the very least, international organizations provide forums for discussion and negotiation that must be utilized. Rather than launching preemptive military or humanitarian initiatives that have no exit-strategy, the empowerment of international bodies such as the WTO, UN, and IMF reduces the need for overzealous national defense budgets by pooling investment in international budgets for enhanced trade and humanitarian policies. Therefore, as an example, the U.S. would not have to be held solely responsible for allowing genocide to continue in Rwanda during the mid-1990’s because international resources would have helped counterbalance the lack of direct U.S. national interest in the cathartic cause. 
Cases that involve uncooperative, resource-rich states certainly raise questions regarding the effectiveness of international law, but initiating alienating sanctions only ensures collective economic hindrances and reaffirms the ironic rules of the current nationally state-centric international system: namely how only certain states can have nuclear capabilities, receive foreign aid, or strike neighbors without international retribution due to having powerful allies. Nevertheless, ‘doing nothing’ in response to a situation that is imperfect is never justifiable because, at the bare minimum, representatives can negotiate proposals that can potentially improve relationships between multiple actors.
In conclusion, decision-making in foreign policy is a complex phenomenon that is affected by many factors and yields many different results. Consequently, analysts must adapt to the seemingly unpredictable nature of decision-making by accounting for all influences that cross pathways with a humanized decision-maker. At the forefront are factors such as subjectivity, political structure, domestic considerations, and foreign obligations, stakes, and threats surrounding any particular international issue. In the end, ‘doing nothing’ to confront an issue would symbolize the antithesis of what a decision-maker is supposed to do: make decisions that are in the national interest. And since no state’s national interests are eternally satiated, negotiation is always an option when economic or military commitments are counterintuitive or simply not available during economic downturns. Foreign policy analysis focuses so much upon decision-making because their decisions mold the world that we live in and, since no rational actor will choose to ‘do nothing,’ analysts’ ability to foresee events and propose rewarding policy is vital to the reputability of the discipline.
Astorino-Courtois, Allison. (2000). “The Effects of Stakes and Threat on Foreign Policy Decision-Making”, Political Psychology. Vol. 21. 489-510.
Bush, George H.W. Address to the U.S. Military Academy. West Point, New York, 5 January 1993, as excerpted in Haass, R. (1994, p. 203).
Diesing, P. (1962). Reason in Society. Urbana, IL: University of Illinois Press.
Farnham, Barbara. (2004). “Impact of the Political Context on Foreign Policy Decision-Making”, Political Psychology. Vol. 25. 441-463.
Haass, Richard. (1994). Intervention: The use of American military force in the post-Cold War world. Washington, D.C.: Carnegie Endowment for International Peace.
Hudson, Valerie. (2002). “Foreign Policy Decision-Making: A Touchstone for International Relations Theory in the Twenty-First Century”, Foreign Policy Decision-Making (Revisited). 1-20.
Lash, J. P. (1976). Roosevelt and Churchill, 1939-1941: The Partnership that Saved the West. New York: Norton.
Mintz, Alex. (1993). “The Decision to Attack Iraq: A Non-compensatory Theory of Decision-making”, Journal of Conflict Resolution. 37. 595-618.
Nye, Joseph. (1999) "Redefining the National Interest", Foreign Affairs. Vol. 78.
Simon, H. A. (1990). “Invariants of Human Behavior”, Annual Review of Psychology. Vol. 41. 1-19.
Snyder, Richard C., H. W. Bruck, and Burton Sapin, eds. (1962) Foreign Policy Decision-Making: An Approach to the Study of International Politics. New York: Free Press of Glencoe.
Wendt, Alexander. (1992) “Anarchy Is What States Make of It”, International Organization. Vol. 46. 391–425. | http://www.infobarrel.com/Decision-Making_in_Foreign_Policy |
A new NASA-funded solar system creation study claims the solar system is made up of 3,500 “living fossils” and is populated by a “significant number” of planets.
The solar system was originally created by astronomers using images from NASA’s Voyager 1 spacecraft in the 1970s, and is a collection of galaxies, stars, planets and moons that has evolved over time.
The study, titled “Living Fossils of the Solar System,” was published in the journal Nature this week.
The study suggests there are more than 300 billion stars in the solar universe and that the number of planets in the universe is about 20,000.
The researchers also calculated that if the universe were only made up a small percentage of the universe, the universe could contain more than 10,000 planets.
“The solar nebula has evolved over the course of its existence. There are millions of stars in our galaxy, and the star formation rate is very high, so the total number of star systems in the galaxy is quite high,” said study co-author Mark Saperstein, a planetary scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “It’s probably a good idea to go into the solar nebulae to look for planets.”
According to the study, the sun is a rocky dwarf star, a type of planet that orbits the sun, in a galaxy about the size of the Milky Way.
There’s a rocky planet orbiting a rocky star about the same size as the sun.
The researchers created a 3D model of the sun using images taken by Voyager 1 during its orbit around the sun in 1977.
The planet was called Neptune and the planet’s orbit was shown to the model as being tilted toward the sun at the time.
The team used the model to create a model of a solar system that has planets orbiting stars, but with the planets orbiting in a slightly different direction than their orbit on Earth.
The model was created to help scientists understand the evolution of the solar systems, which is an important aspect of understanding our own solar system.
“What the model showed was that the solar structures evolved through time,” Saperstein said. “The models that we created with Voyager 1 and Voyager 2 were based on data from a very small portion of the sky.”
Scientists have previously studied the Sun and its planets and found that the Sun is an extremely young star and that it’s about 200 billion years old.
“We think that the evolution was very rapid,” Saperstein said. “There’s this huge amount of solar system data that we have from Voyager 1 and 2, and with the new models we built with these new instruments, we’re finding the solar formation rate in that data is so high that it was possible to make a model showing that the Earth’s solar system, which had a very young star, has been evolving at a rate that we can actually model.”
The researchers were able to use a large sample of images taken with Voyager 1 to recreate the Solar system in the new model.
They were able for the first time to take images of planets, moons and stars that were previously considered too small to be there.
“With this new model, we were able, with a large dataset, to make the planets, and even the moons and the stars, look like planets,” Saperstein said, adding that the models can be used to simulate the solar evolution of Earth and other planets.
Vision
To be the global leader in advanced sustainable forest and natural resource management solutions that enhance productivity and value.
Mission
Industry Research Programs in Forestry Center (IRPF) increases value to landowners and citizens through continuous genetic improvement of forest trees; creates innovative solutions to enhance forest productivity and value through sustainable management of site resources; leads in conservation and domestication of forest genetic resources for sustainable economic, ecological, and social benefits for present and future generations.
Objectives
- Conduct multidisciplinary research to facilitate optimal biological and economic productivity of local and global forests;
- Provide a pool of talent and resources from industry, University, and governmental partners to address challenges for both local and global forests;
- Develop and foster forest science research scholarship;
- Attract and train outstanding students.
History and Organization
The Center for Industry Research Programs in Forestry (IRPF) was established as an administrative umbrella to house several University/Industry/Agency Cooperatives conducting research in the areas of Tree Improvement (since 1956), Hardwood Silviculture (since 1965, no longer active), Forest Productivity (originally Forest Fertilization, then Forest Nutrition) (since 1969), and Gene Preservation of Tropical Pines (Camcore) (since 1980). These cooperatives also attract high-quality faculty who provide basic and applied research leadership for contributing members. It is one of the oldest still-active UNC-GA Centers at NC State University, and one of the University’s largest industry membership consortia. Currently, fifteen faculty members are affiliated with the Center, and IRPF Cooperatives support a full-time staff of 26.
The Center has close to 100 industry and agency members representing 21 countries and 5 continents.
Cooperative Tree Improvement Program
Located on the North Carolina State University campus in Raleigh, NC, the mission of the Cooperative is to economically increase forest productivity through the genetic manipulation of loblolly pine populations. The North Carolina State University Cooperative Tree Improvement Program (NCSUCTIP) began in 1956 when a group of 11 charter industry members agreed to support research in forest genetics, selection, breeding and testing, and technology transfer for an initial five-year period. This group of visionary industry leaders recognized the need to invest in the long-range regeneration of the forests that were being harvested in the Southeast. Almost 60 years later, the Cooperative is still going strong, providing vital research to the forest industry, forest landowners, and government agencies. Enhanced productivity through breeding, selecting, and deploying superior loblolly pine families is a major goal of the Cooperative.
Brief overviews of our past and current contributions to forestry are:
- Development of innovative, efficient, and cost effective breeding strategies for forest trees
- Integrating tree improvement into silvicultural systems
- Development of optimal selection strategies
- Understanding the genetic and environmental control of growth and wood properties
- Increased seed production efficiency from seed orchards
- Understanding the genetic control and variation in disease resistance and developing deployment strategies
- Integrating biotechnologies and genomics with conventional breeding strategies to enhance forest productivity
Please view treeimprovement.org for more information.
Forest Productivity Cooperative
The Forest Productivity Cooperative (FPC) is an international partnership committed to creating innovative solutions to enhance forest productivity and value through the sustainable management of site resources. The partnership is led by forestry faculty at North Carolina State University, Virginia Polytechnic Institute and State University, and the Universidad de Concepción. Team members have expertise in silviculture, forest nutrition, ecophysiology, soils, plant community ecology, growth and yield modeling, process-based models, remote sensing, spatial analysis and geographic information systems (GIS), and statistics. Partners include the three host universities, forest industry, timber management investment organizations, forestry consultants, governmental agencies, private landowners, and others interested in intensive plantation management. Members own or manage over 24 million acres (10 million hectares) of pine and broadleaved plantations in the southeast US and Latin America, making the FPC one of the world’s largest cooperative silviculture research and education programs.
Our approach includes a mix of applied research, fundamental research, graduate and undergraduate education, technology transfer, continuing education, and consulting. This mix provides a productive environment for addressing questions and immediately incorporating research results into silvicultural practices for cost-effective and environmentally sustainable plantation management.
Please visit forestproductivitycoop.net for more information.
Camcore
Camcore is a non-profit, international tree breeding organization formed by private industry in 1980 and headquartered at North Carolina State University. It primarily serves the private forestry sector to ensure that it has access to a broad genetic base of the best-adapted and productive species for use in plantation forestry programs in the tropics, subtropics and subtemperate regions. It now has 29 active industry members in 11 countries in the Americas and Africa.
Camcore members include a wide range of forestry companies, from well-established companies with long histories and strong research programs, to brand new organizations with greenfield operations in Latin America and Africa. These companies also produce a wide array of final products: kraft pulp, dissolving pulp for chemical and industrial use, business paper, sack paper, tissue paper, sawtimber, plywood, utility poles, etc., for use in their domestic markets as well as export. Collectively, Camcore members plant 180,000 ha of eucalypt and pine plantations each year, with growth rates ranging from 20 to 70 m3/ha/year.
The program works internationally with four tree genera: Pines, Eucalypts, Gmelina and Teak, and with several threatened coniferous species native to the southern US. Camcore differs from other domestic and international tree improvement efforts in that one of its major emphases, in addition to breeding, is the establishment of ex situ conservation plantings of tree species and populations. In addition to the active faculty members, Camcore has a staff of 10 professionals who are University employees.
The staff organizes and guides members in projects in four broad working groups and associated activities.
Please visit camcore.org for more information. | https://research.cnr.ncsu.edu/irpf/ |
A major reason for the dark energy (dE) effect is that it traps the heat energy emitted from the Sun.
But what does this mean for the planet?
And why is dark energy such a major cause of climate warming?
It is a topic I have written about in detail elsewhere, and I hope this article will be a useful refresher.
What is dE?
Dark energy, or dark matter, is a type of energy which has the ability to interact with the matter in the Universe and change its properties.
It is believed to be the fundamental energy of the Universe, and is produced by stars, the Milky Way and black holes.
This type of particle is the energy behind the creation of the first stars and galaxies.
This is a key part of why the Universe is expanding, why the Earth is rotating and why the Sun is rotating.
It has been called the ‘dark matter’ of the universe, because the amount of dark matter we know about is only slightly more than the mass of the Sun (about 1% of the mass).
The amount of the energy being emitted from our Sun, on the other hand, is around 1% of the mass.
This means that if we add 1% more dark energy to the Universe it would cause the Universe to be 6 times more massive.
However, dark energy has been known to interact in other ways with the physical properties of the cosmos.
These interactions can lead to the formation of stars, galaxies and planets, and are therefore key to explaining the origin of the known universe.
What do we know?
Dark matter is made up of a variety of particles that interact with one another in different ways, so it is not easy to pinpoint exactly which ones make up dark energy.
The main ways in which dark energy interacts with the Universe are through the interaction of dark photons.
These are particles that are produced by the decay of some kind of particle.
For example, if you add a heavy isotope of hydrogen to a chemical reaction, then the reaction will produce heavier isotopes of hydrogen.
These heavier isotope hydrogen atoms, called H2O, will interact with electrons to form heavier isotopic hydrogen.
This heavier isotope of hydrogen can interact with other heavier isotopes of H2 to produce heavier and heavier isotopes of H. When these heavier H isotopes interact, they create more and more of the H isotope, leading to more and greater H2 isotope pairs.
The H isotopic pairs are the H2 atoms that have a certain amount of energy (called the ‘charge’), which is the same as the energy of an electron in a standard electron-photon detector.
It means that, when two different electrons interact with each other, they form the H pairs that are heavier than the electron they are interacting with.
So the energy produced by a pair of H isotope pairs is called the H-energy.
This energy is why the H atom can interact to form a heavier atom, a heavier electron or a heavier nucleus.
The amount that a pair interacts with depends on its mass and the way it interacts with light.
For the light-based particles, dark matter is much stronger than the ordinary matter that we know.
When an electron interacts with a heavy H atom, it can emit a large amount of electrons.
These electrons can interact very much more strongly with the heavier atoms, creating heavier H atoms.
This stronger interaction between the electron and the heavier H atom allows the heavier atom to carry more energy.
This can then be used to make heavier H. For this reason, these particles are also called ‘dark’ particles.
The more dark matter there is in the universe the more intense the interaction between them.
This results in more and stronger interactions between the heavier particles, and the resulting H is more powerful than the H atoms can make.
This interaction leads to heavier H, and even heavier dark matter.
This in turn leads to more dark particles, which in turn lead to more H. In turn, the heavier dark particles become even more powerful, and in turn, more dark H and even more H and more H can be produced.
So dark matter acts as a giant magnet for the heavier elements of the Solar System, the Sun, planets and the stars.
In fact, the H is the largest component of the total mass of all the matter and energy in the Solar Systems, and has a mass of roughly 10 billion Earth masses.
What does dark energy mean?
What we mean by dark energy comes down to the fact that dark matter has a large number of properties that make it extremely difficult to see.
For instance, it is extremely difficult for light to pass through it.
The only way light can pass through dark matter particles is if they have a very high mass.
However for light, the particles have to have very high energy to be able to pass. | https://sanjeshsharma.com/tag/dark-energy/ |
Importance of foreign exchange
INTRODUCTION TO THE FOREIGN EXCHANGE MARKET
What Are The Role And Importance Of Agriculture: foreign exchange earnings, contribution to GNP, food security, employment, the ratio of imported food to local produce, national and regional plans for agricultural development, and trade liberalization
Foreign exchange market | economics | Britannica.com
The foreign exchange markets were founded long ago and have gradually taken their present form: exchange rates have floated since March 1973, following the elimination of the 1944 system that fixed the rates of many different currencies to the US dollar.
A foreign exchange rate (also known as FX, or forex) is the rate at which you can exchange one currency for another. It seems obvious that this figure should be the main thing you look at when making an international bank transfer, right? Not so fast. There are other factors involved in currency transfer that dictate exactly how many euros you get for your sterling, or how much yen for your dollar.
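The arithmetic behind this can be sketched in a few lines: the mid-market rate says how many units of the target currency one unit of the source currency buys, and a provider's margin (one of the "other factors" mentioned above) reduces what you actually receive. The rate and margin figures below are made-up illustrative numbers, not real quotes:

```python
def convert(amount, mid_market_rate, provider_margin=0.0):
    """Return the target-currency amount received for `amount` of the
    source currency, after the provider's percentage margin is deducted."""
    effective_rate = mid_market_rate * (1 - provider_margin)
    return amount * effective_rate

# 1,000 GBP at an illustrative mid-market rate of 1.25 USD/GBP:
print(convert(1000, 1.25))          # 1250.0 with no margin
print(convert(1000, 1.25, 0.02))    # ~1225.0 with a 2% margin
```

Even a small margin compounds on large transfers, which is why comparing the effective rate (not just the headline rate) matters.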
Why is Foreign Exchange Reserve important? - Quora
Foreign Exchange - Econlib
Chapter 18. Foreign Exchange. CHAPTER OBJECTIVES: By the end of this chapter, students should be able to: 1. Define foreign exchange and explain its importance. 2. Describe the market for foreign exchange. 3. Explain why countries shouldn’t be proud that it takes many units of foreign currencies to … • Foreign exchange is the trading of …
Importance Of Forex Market In India — Forex Market in India
Foreign exchange market (forex, or FX, market), institution for the exchange of one country’s currency with that of another country. Foreign exchange markets are actually made up of many different markets, because the trade between individual currencies—say, the euro and the U.S. dollar —each constitutes a …
Definition & ~ Banking
Foreign exchange reserve can be defined as deposits of a foreign currency held by the central bank of a country. Here are some of the reasons why it is important for a country to have a good amount of foreign exchange reserves –
The changing reserves
Main Features of the Foreign Exchange Management Act (FEMA)! The Foreign Exchange Management Act (FEMA) was an act passed in the winter session of Parliament in 1999, which replaced Foreign Exchange Regulation Act. This act seeks to make offences related …
What Are The Role And Importance Of Agriculture - Class
Considerations. Foreign exchange rates carry important political implications. Citizens may point to unfavorable exchange rates and trade imbalances as signs that politicians currently in office are mismanaging the economy.
Importance of International Currency Exchange Rates
A third function of the foreign exchange market is to hedge foreign exchange risks. Hedging means the avoidance of a foreign exchange risk. In a free exchange market, when the exchange rate, i.e. the price of one currency in terms of another, changes, there may be a gain or loss to the party concerned.
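The hedging idea described here can be illustrated with a short sketch. All amounts and rates below are invented for illustration: an importer owing euros in three months can either wait and buy at the unknown future spot rate, or lock in a forward rate agreed today.

```python
# Hedging sketch: an importer owes EUR 100,000 in three months.
EXPOSURE_EUR = 100_000

def cost_unhedged(amount_eur, future_spot_usd_per_eur):
    # Pay whatever the spot rate turns out to be at maturity.
    return amount_eur * future_spot_usd_per_eur

def cost_hedged(amount_eur, forward_usd_per_eur):
    # Pay the forward rate agreed today, wherever spot ends up.
    return amount_eur * forward_usd_per_eur

forward = 1.25  # USD per EUR, agreed today (illustrative)

print(cost_hedged(EXPOSURE_EUR, forward))    # 125000.0 in every scenario
print(cost_unhedged(EXPOSURE_EUR, 1.50))     # 150000.0 if the dollar weakens
print(cost_unhedged(EXPOSURE_EUR, 1.00))     # 100000.0 if it strengthens
```

Note that hedging removes the uncertainty rather than guaranteeing the best price: the forward locks in 125,000 USD whether the spot rate ends up above or below it.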
Foreign Exchange Market and its Important Functions
Here are some important things about the foreign exchange market that will help you understand forex trading better: When you exchange dollars for euros, two currencies are involved. For every transaction at the foreign exchange, one currency is exchanged in lieu of the other.
Importance of Foreign money exchange!! – ratealert xyz
foreign exchange market in present and future periods. This asset-market exchange rate is an exponentially weighted average of expected future differentials. This model illustrates the coordinate importance of monetary factors affecting the supply and demand for money and real …
Discuss the importance of the market for loanable funds
In general, understanding the effect of foreign exchange rate movements on offshoring has clear policy importance for both the home (Sweden) and host economies in terms of their overall exchange rate management or choice of exchange rate regime.
Foreign exchange market - Wikipedia
2011/02/25 · Foreign currency exchange is the biggest and fastest-growing market worldwide. These days millions of people are encouraged to invest in international currency exchange with a desire to earn a high return on their investments. Though just jumping in by the …
How Important Is The Foreign Exchange Rate? - TransferWise
The importance of foreign exchange markets has grown with increased global economic activity, trade, and investment, and with technology that makes real-time exchange of information and trading possible.
The Importance of Appropriate Exchange Rate Regimes
An exchange-rate system is the set of rules established by a nation to govern the value of its currency relative to other foreign currencies. The exchange-rate system evolves from the nation's monetary order, which is the set of laws and rules that establishes the monetary framework in which
The Foreign Exchange Market - The Library of Congress
The Importance of Appropriate Exchange Rate Regimes. Remarks. David Dodge - Former Governor (2001 …) And I have focused on the importance of a floating exchange rate regime, which is a key element in promoting good performance, both domestically and globally. These capital controls and foreign exchange interventions are awkward, and over …
Main Features of the Foreign Exchange Management Act (FEMA)
What are the importance of foreign exchange? - Quora
An important lesson of the 2008–2009 financial crisis was that the emerging market economies with high levels of international reserves were better able to withstand the ripple effects of the global meltdown.
All About Forex Brokers and Their Importance
Foreign exchange represents a system with the help of which trading countries settle their international indebtedness and includes all institutions, credit instruments, mechanisms, etc. Foreign exchange is a very important element in foreign trade.
How Foreign Exchange Affects the Economy | Bizfluent
Recognizing the systemic impact of foreign exchange settlement risk, an important element in the infrastructure for the efficient functioning of the Indian foreign exchange market has been the clearing and settlement of inter-bank USD-INR transactions.
Foreign Exchange Student - Top 5 Reasons - High School
Role of RBI in FOREX Market | Exchange Rate | Foreign
In this article, we would talk about the importance of foreign exchange and dive deep into all details about them. Who are Foreign exchange brokers? A Forex broker is a firm which renders all currency traders along with providing the access to various platforms where one can trade and buy or sell foreign currency.
Growing an Economy - Global Center on Cooperative Security
Foreign-exchange reserves (also called forex reserves or FX reserves) are money or other assets held by a central bank or other monetary authority so that it can pay its liabilities if needed, such as the currency issued by the central bank …
2009/09/03 · The importance of foreign exchange is described in brief as under:- 1- Foreign exchange reserves show the financial strength and the stage of development of the economy. 2- The acceptance of currency at a predetermined rate makes international trade easy.
Laurent Fabius, Minister of Foreign Affairs and International Development, and Ségolène Royal, Minister of Ecology, Sustainable Development and Energy, welcome the adoption in Berlin on 12 April of the third volume of the IPCC assessment report devoted to policies to reduce climate change.
Given the acceleration in greenhouse gas emissions, the report confirms the need to act without delay and go further than the policies already under way, as France will do with the future planning act on the energy transition. It describes the various options for enabling global warming to be limited to a maximum of 2ºC in order to contain the effects of climate disruption.
Combating climate change is an opportunity: with due regard for each country’s own choices, the report highlights the possible benefits to our economies of low-carbon policies, for example regarding transport, town planning and buildings’ energy efficiency. It draws attention to the role played by the protection of ecosystems and biodiversity and sustainable forest management in the fight against climate change.
The work of the IPCC and its members’ appeal for international cooperation mark essential progress on preparing the climate agreement which the international community is due to adopt in Paris in December 2015. France is totally mobilized for an ambitious agreement to be concluded on that occasion. It is also determined to contribute to the adoption by the European Union of a set of robust energy-climate standards by 2030, with a 40% reduction in our emissions compared to 1990 [levels]./.
Energy policy – Reply by Mme Ségolène Royal, Minister of Ecology, Sustainable Development and Energy, to a question in the National Assembly
Paris, 9 April 2014
Ladies and gentlemen deputies, the fight against global warming – and therefore the energy transition – is a burning obligation in order not only to curb the accumulation of greenhouse gases and global warming but also to limit the depletion of our natural resources; it is above all a tremendous opportunity, a tremendous challenge which can provide our country not only with considerable potential to create economic activity and jobs but also with wellbeing linked to health issues and to increased spending power, for example through energy saving.
The road map you’re asking me about is clear; it was set by the President at the two environmental conferences and by the Prime Minister in his general policy statement; it’s one of our priorities. It consists in stepping up the pace, in getting the ball rolling in the regions where you’re elected, and above all in preparing the bill on the energy transition, which you’ll have to debate.
Four major projects lie ahead of us.
The first concerns speeding up thermal renovation of buildings, because we must renovate 500,000 buildings before 2017: that’s a tremendous opportunity for the building industry and we’re going to speed up this project. The second concerns renewable energy and energy saving: there too, our major industrial groups and our SMEs are in an especially good position; we can thus increase renewable energy’s share of our energy production. The third project concerns clean mobility, with the roll-out of electric charging points; finally, the fourth concerns the circular economy, where waste is regarded as a raw material bringing added value, which will enable us to make France one of Europe’s leading environmental powers.
Communiqué issued following the Council of Ministers’ meeting (excerpts)
Paris, 9 April 2014
Ratification of the amendment to the Kyoto Protocol of 11 December 1997
The Minister of Foreign Affairs and International Development presented a bill authorizing the ratification of the amendment to the Kyoto Protocol of 11 December 1997.
The Kyoto Protocol, which was adopted in 1997 and came into force in 2005, is the only legally binding instrument to date whose aim is to reduce or limit greenhouse gas emissions in industrialized countries as well as economies in transition. The protocol’s first commitment period, from 2008 to 2012, was extended for the years 2013 to 2020 by means of an amendment adopted in Doha on 8 December 2012.
In 2008, the European Union created a legal framework for the period until 2020 which will enable it to adhere to the target it set itself for the new period. The climate and energy package in fact envisages a 20% reduction from 1990 levels of greenhouse gas emissions in the EU.
While the amendment to the Kyoto Protocol cannot alone halt climate disruption because it covers only 15% of global greenhouse gas emissions – only certain developed states having again committed themselves to this framework –, the new period it opens is essential because it enables the transition to be ensured until an agreement on the climate is adopted in Paris in December 2015, to come into force in 2020.
In view of the urgent need to act to limit the average global temperature rise to 2ºC above pre-industrial levels, the 195 parties to the United Nations Framework Convention on Climate Change are already working to draw up this future agreement, which will have to be ambitious and universal. (…)./. | https://pk.ambafrance.org/Ministers-welcome-IPCC-report-s |
On Wednesday, the U.S. Fifth Circuit Court of Appeals ruled that Texas’ voter ID law, thought to be the strictest such law in the country, was discriminatory and a violation of the Voting Rights Act, making this the third consecutive judicial decision against the law. In the 9-6 ruling, although the majority did not find that the law was intended to discriminate against certain voters, it held that the law had the effect of doing so.
It has been estimated that Texas' strict voter ID law could prevent 600,000 otherwise-eligible voters from casting ballots this November, and critics believe it was enacted to make voting more difficult for minority groups and lower-income voters.
The Fifth Circuit Court of Appeals found that Texas SB 14 has a racially discriminatory effect in violation of Section 2 of the VRA because the law disproportionately diminishes African American and Latino opportunities to participate in elections.
The District Court is now required to create a short-term solution that can be implemented in Texas for the November Presidential election, writing, “…we rely on equitable principles in concluding that the district court should first focus on fashioning interim relief for the discriminatory effect violation in the months leading up to the November 2016 general election.” NYDLC applauds this decision and hopes that it sets a precedent for many similar ones to come.
NYDLC applauds this decision, which recognizes what so many voting rights advocates and experts have known for years: intentionally or not, these laws make it harder for certain groups of perfectly eligible American voters to exercise their fundamental right to the franchise. If states intend to raise new procedural hurdles to voting, they must fully guarantee that the rights of their citizens, especially those who have historically been marginalized, are not infringed, before these laws are permitted to take effect.
You can read the Brennan Center for Justice's background analysis of the case here.
You can read the entire Fifth Circuit Court of Appeals En Banc decision here.
NYDLC Statement on the Tragic Death of Seth Rich, DNC Voter Expansion Data Director
For Immediate Release, Tuesday, July 12, 2016
Contact: (866) NYDLC-01; [email protected]
NYDLC mourns the loss of Seth Rich, DNC’s Voter Expansion Data Director and a dedicated public servant who fought to ensure voting rights for millions of Americans. Rich, 27, "wanted to make a difference" and had a bright future in Democratic politics. He was killed in a senseless act of gun violence early Sunday morning in Washington D.C.
Seth worked tirelessly to protect the right to vote and supported voter protection programs around the country. Several NYDLC members who had worked with or crossed paths with Seth during past election monitoring efforts were shocked by the news and dismayed by the loss of such a patriotic young American who was committed to making it easier for eligible voters to cast a ballot.
NYDLC extends its deepest sympathies to Seth’s family, friends, and colleagues for this tragic loss. As an organization whose mission wholly aligns with Seth's work, NYDLC aspires to continue its voter protection efforts with the tremendous vigor and devotion that Seth demonstrated during his all-too-short life.
DNC Chair Debbie Wasserman Schultz released a statement which can be found here. Contributions can be made to Seth's childhood camp, Camp Ramah in Wisconsin or Beth El Synagogue. Police are asking anyone with information related to Seth's death to call 202-727-9099 or send a text message to 50411.
***
NYDLC and Civil Rights Advocates Call on Congress to Restore VRA
For Immediate Release: Thursday, June 23, 2016
Contact: (866) NYDLC-01; [email protected]
During the Shelby County Week of Action, the New York Democratic Lawyers Council (NYDLC) joins civil rights advocates from across the country to call upon Congress to restore the Voting Rights Act (VRA), in order to ensure that all Americans are able to exercise their voting rights without discrimination or difficulty.
Saturday, June 25th marks three years since the Supreme Court, in its 5-4 Shelby County v. Holder decision, gutted key provisions of the VRA, rendering some of its core protections inoperative. In doing so, SCOTUS invalidated the coverage formula that required certain state and local governments with a history of discrimination to obtain federal approval before changes to voting laws could take effect.
As a result, previously active voters have been disenfranchised by restrictive new laws enacted in 17 states, while voting strength of people of color and language minorities has been diluted. Many jurisdictions have taken steps to make it harder to register or cast a ballot, have enacted strict voter ID requirements that disproportionately impact minorities and lower-income voters, and have decreased the time allotted to vote or drastically reduced the number of poll sites.
By refusing to restore the VRA, the GOP-controlled Congress has been complicit in the largest rollback of voting rights protections in 50 years. Abdicating their responsibility to address this matter ahead of the 2016 Presidential election, Republican leadership in Congress has refused to consider two bills with bipartisan support that would breathe new life into the VRA, and prevent discriminatory election law changes from suppressing the fundamental rights of eligible Americans.
NYDLC Co-Chair John Nonna said, "The Supreme Court's decision in Shelby opened the floodgates to voter suppression law in formerly covered jurisdictions. We must fight to restore [Federal] pre-clearance under criteria that would apply to all states seeking to suppress the vote through harsh legislation."
NYDLC Co-Chair Carol Schrager said, "The world looks to America to lead on expanding Democracy and protecting civil rights. Our election laws and processes need to make it easier for eligible voters to make their voices heard, not harder. VRA protections are critical toward this end."
NYDLC Executive Director Jarret Berg said, "Apart from the sorely needed federal oversight safeguard the VRA provides, the Act represents the culmination of more than a century of progress made against systemic racial discrimination and the strengthening of individual civil rights. Congress needs to come together to advance federal legislation to protect voting rights in the states and fix what the conservative wing of the Supreme Court carelessly broke."
Theo Harris, Co-Chair of NYDLC’s Minority Voting Rights Committee explained, “It is with great trepidation that we enter the summer stretch of the first presidential election in 50 years to be conducted without the full protections of the Voting Rights Act in effect. To compensate for the loss of this critical tool, we must come together and organize our communities to protect the vote for all Americans.”
The New York Democratic Lawyers Council (NYDLC) is a coalition of attorneys, law students, and voting rights advocates who share the common goal of protecting each citizen’s right to vote. NYDLC attorneys dedicate their expertise to help ensure that all eligible persons can register to vote easily; all registered voters are able to vote conveniently, fairly, and without intimidation; and, that all votes are counted accurately by transparent and reliable voting systems.
You can join NYDLC as a 2016 member to help NYDLC protect the rights of voters in New York and key battleground states, or make a contribution to support NYDLC's work.
Read more (http://bit.ly/28KN0Ht) about the US Supreme Court’s decision and its devastating impact.
* * *
NYDLC and Cardozo Law Dems Testify on Pro-Voter Reforms at NYC Campaign Finance Board
For Immediate Release: Wednesday, May 18, 2016
Contact: (866) NYDLC-01; [email protected]
NYDLC Executive Director Jarret Berg and Elizabeth Robins, President of the Cardozo Law School Democrats (CLD) presented joint testimony yesterday to the NYC Campaign Finance Board’s Voter Assistance Advisory Committee (VAAC) about much-needed reforms to reduce barriers to voter participation. The Committee sought input on voter-related issues arising out of the April 19th New York Presidential Primary, as well as the efforts of the NYC Votes Albany Voter Day and #VoteBetterNY coalition's push for modernization (Read the full NYDLC-CLD testimony by clicking here).
NYDLC and CLD proposed several commonsense pro-voter reforms that, if enacted, would reduce barriers to participation so that all eligible New Yorkers can exercise their right to vote. “New York’s outdated registration and administration systems need to be redesigned for the modern age. As one state after another embraces modernization best-practices, New York is being left in the dust and our citizens are paying the price” said NYDLC Executive Director Jarret Berg, in regard to Tuesday's hearing.
Reform proposals included online voter registration, registration status tracking for electronic and paper systems, automatic voter registration, registration portability, early voting, a registration "Golden week" to cure registration defects, and no-fault absentee (universal vote-by-mail) voting. "New York law should provide a convenient voter registration process with safeguards to protect against errors and omissions, so that no student who registers to vote is disenfranchised", said CLD President Elizabeth Robins after the hearing.
The NYC Campaign Finance Board’s VAAC holds annual hearings to give attention to challenges that New Yorkers experience at the polls. This year, attendees raised questions about New York's closed primary system and the highly-publicized reports of mass purges to the voter rolls in Brooklyn and other parts of the city. “Administrators and policymakers of all stripes need to internalize the ethos that voting is a core civil right. Due process is a safeguard, not an inconvenience,” Berg said.
The hearing also highlighted the experiences of those who advocated for election reform in Albany during Voter Day, an event NYDLC promoted and publicized widely across its network, and the success of pro-voter initiatives like Student Voter Registration Day and social media campaigns like #VoteBetterNY.
NYDLC and CLD also offered feedback on the work of NYC Votes, as detailed in the 2015-2016 Voter Assistance Annual Report. Testimony included commentary on a “CFB case study” detailed in the report that recommended changes to the Board of Elections’ existing practice of purging so-called inactive voters, which may be partly to blame for mass purges that occurred prior to the New York Presidential Primary. Read the full NYDLC-CLD testimony by clicking here.
NYDLC ENDORSES THE 90 FOR 90 PROJECT
FOR IMMEDIATE RELEASE: Wednesday, Mar. 21, 2016
Contact: Jarret Berg, Esq., 866-693-5201
NYDLC ENDORSES 90 FOR 90, A VIRGINIA VOTER REGISTRATION PROJECT IN RECOGNITION OF HON. WILLIAM “FERGIE” REID
NYDLC is proud to announce its endorsement of 90 for 90, an admirable and civic-minded Virginia voter registration and education project that aligns with NYDLC’s efforts to make voter registration easy and convenient for eligible voters, and to reduce the hurdle that the registration process poses to participation in elections across the United States.
The 90 for 90 project honors the 90th birthday of William Ferguson (“Fergie”) Reid, a Virginia doctor and civil rights activist who has advocated for voting rights since the dawn of the Civil Rights movement. Fergie formed the Richmond Crusade for Voters in 1956, where he worked to register and mobilize eligible black voters in a time and place when doing so was especially dangerous. In 1968, Fergie became the first black member of the Virginia General Assembly, where he served three terms. Fergie continues his voting rights advocacy to this day.
90 for 90 set an ambitious 2015-2016 goal of 250,000 new Virginia registrants by encouraging advocates to register 90 people per precinct. To date, 90 for 90 boasts that Virginia has registered over 90% of the persons needed to meet this target.
Voter ID and other newly enacted restrictive laws are challenging Democrats to do what it takes to help people comply with often difficult requirements. NYDLC is ready to take on this challenge and will continue to support 90 for 90 and other heroic programs in New York and throughout the country.
Discussing his career standing up for the civil rights of minorities Hon. William Fergie Reid has said: “The voters have to learn that they are in charge. . . . You have to do it with votes and the voters are the people and we have to motivate the people because they are the power but they don't realize that the power exists within them. ”
“Voter registration is an obstacle to participation in our democracy, and the resulting low turnout threatens the ability of our democracy to be responsive to the needs of ‘we the people’,” Jarret Berg, Executive Director of the NYDLC said. "Initiatives like the 90 for 90 project, and the allies and organizations that are working tirelessly to make it successful, strengthen our society. Each newly registered voter is newly empowered to be the change.”
Reflecting on Fergie's career in service to the community Francesca Leigh-Davis, Co-Chair of the Richmond Crusade for Voters, Registration Committee has said: “We need to get back to those old grassroots of organizing, knocking on doors, getting our neighbors to the polls, as well as voicing our opinions and voicing our civil rights. . . . You have to vote, you need to be registered, and you need to know what you are voting for.”
Discussing the importance of the franchise, Hazel Reid O'Leary, former United States Secretary of Energy (1993-1997) has said: “The vote is our most mighty tool. Please use it, in memory of those who were denied the vote.”
###
National Voter Registration Day Info
In honor of National Voter Registration Day on September 22nd, as proclaimed by President Obama, we compiled a quick guide to assist you with creating voter registration and education initiatives. Please click below to find voter registration resources and more info on National Voter Registration Day!
NYDLC Member Lance Polivy On The Radio
NYDLC's Lance Polivy discussed voting rights issues as a guest on WBAI Radio's "Law of the Land" program hosted by constitutional law professor Gloria Browne-Marshall.
NYDLC Member On Voter Turnout Reform
Electeds Kallos and Walker Discuss Voting Improvements
On Thursday, March 26th, members of the New York Democratic Lawyers Council had the pleasure of speaking with two legislators, both of whom share the Council’s commitment to protecting and expanding the right to vote. | https://www.nydlc.org/news?page=3 |
Language: English, Chinese, Malay, Tamil.
Currency: Singapore Dollar (SGD)
International Dialing Code: +65
About Singapore
Singapore is a tiny island state. It shares maritime borders with Malaysia to the north and Indonesia to the south.
It is Southeast Asia’s most modern city. The city blends Malay, Chinese, Arab, Indian, and English cultures. Its unique ethnic tapestry affords visitors a wide array of sightseeing and culinary opportunities from which to choose.
Singapore is clean, safe, and considered one of the world’s top smart cities. But Singapore is not only new, modern buildings and structures; it is also a place of tradition and Asian culture, and there are many historical and cultural pockets around the city where you will be taken back in time.
The city has many green areas and anywhere in the city, you will find parks and green areas within walking distance.
There is one religion above all in Singapore and that is food. Singaporeans live to eat and it’s a heaven for foodies.
In 2020, Singapore’s hawker culture was officially added to the UNESCO Representative List of the Intangible Cultural Heritage of Humanity. That is how important food is to Singaporeans. A visit to one of the many hawker centres around the city is a must.
In contrast, travellers can also dine at many restaurants owned by some of the world’s top chefs. Singapore is truly a food heaven.
Singapore Airlines and Singapore's Changi Airport are both recognised as being among the best in the world, which helps make this small island state a popular stopover for travellers in Southeast Asia. If transiting in Singapore, it is highly recommended to stay for at least 2-3 days. Despite its small size, there are plenty of things to experience in Singapore.
Tours & Hotels
Recommended Tours in Singapore
Gardens by the Bay
A visit to Gardens by the Bay is one of the top attractions while in Singapore. The gardens feature beautiful plants from around the world. There are both indoor cooled conservatories and outdoor areas, including the Skyway, with spectacular views from 22 m above ground. Beyond the flora, you will also find the Gardens’ iconic structures.
Sunset Drinks at Marina Bay Sands
Marina Bay Sands is one of the most famous buildings in the world. Enjoy the sunset and the view over Marina Bay from the top deck shaped like a ship.
Botanical Garden
The Singapore Botanic Gardens is a 162-year-old tropical garden. The garden is a UNESCO World Heritage Site. Inside the garden, you will also find the National Orchid Garden with over 1000 species and more than 2000 hybrids.
River Cruise
One of the best ways to experience the city at night is from the riverside. Take a cruise at night, when the city is at its most beautiful. Pass by Clarke Quay, Boat Quay and Merlion Park, and get a good view of Marina Bay Sands and the business districts.
S.E.A. Aquarium – Sentosa
Enjoy more than 100,000 marine animals of over 800 species at the amazing aquarium located at Sentosa Island.
Universal Studios
Southeast Asia’s first and only Universal Studios theme park, featuring 24 rides, shows and attractions in seven themed zones.
Kampong Glam & Chinatown – Heritage Walk
Get to know the heritage and history of Singapore when you visit Chinatown and the old Arab trading quarter, Kampong Glam.
Night Safari
The Night Safari is the world’s first nocturnal zoo and is one of the most popular tourist attractions in Singapore. Among the many animals to experience are the White African Lion and Malayan Tiger.
Jurong Bird Park
Jurong Bird Park offers a haven for close to 3500 colourful birds across 400 species. The park is famed for its large and immersive walk-in aviaries covering different zones such as African Treetops, Asian Wings and Penguin Coast.
Singapore Zoo
Singapore Zoo is a haven for wondrous wildlife and a must-visit for animal lovers. The zoo is recognised as one of the best rainforest zoos in the world. Singapore Zoo is home to over 2,800 animals including white tigers. The zoo has won a trove of international and local awards.
River Wonders
River Wonders is a river-themed zoo and aquarium located in Singapore. The different themed zones are built around the mighty river systems of the world.
Recommended Hotels in Singapore
Pan Pacific
Renowned for its excellent location, the hotel is within easy walking distance to the city’s most iconic sights. This dynamic destination is all yours to explore – from the bustling streets of Orchard Road to the scenic Botanic Gardens.
York Hotel
The hotel is strategically located near many top attractions but still sits in a tranquil location on Mount Elizabeth, and within walking distance to Orchard Road, a vibrant shopping, dining, and entertainment district.
York Hotel accommodates bedding for four, making it a preferred choice for families.
Raffles Hotel
Raffles Hotel is one of the most iconic hotels in the world. Opened in 1887, Raffles Singapore is one of the few remaining great 19th century hotels in the world. Enjoy an original “Singapore Sling”, first created at the hotel in 1915.
PARKROYAL on Beach Road
Nestled by Singapore’s city centre, PARKROYAL on Beach Road lets you relax in comfort. With hidden gems nearby, you are close to attractions along Bugis, Chinatown, Kampong Glam and the business hubs.
PARKROYAL on Kitchener Road
Stay at one of the best hotels in Little India. Staying at this family-friendly hotel will also give you interesting cultural experiences around the neighbourhood.
Hotel Boss
The hotel has a very good location, close to an MRT station and 3 km from both the National Gallery Singapore and the waterfront Gardens by the Bay. | https://rustic-travel.com/destination/singapore/
Groundwater recharge of perched aquifers in the Cuvelai-Etosha basin, Namibia
Hamutoko, Josefina Tulimevava
URI: http://hdl.handle.net/11070/2227
Date: 2018
Abstract:
The United Nations predicted that by 2025, 1.8 billion people will be living in countries with absolute water scarcity and two-thirds of the world population could be under water stress conditions. In semi-arid regions, most communities depend on groundwater as the source of drinking water, and thus, with changes in global climatic conditions and increases in population, groundwater resources face challenges of both over-exploitation and contamination. Therefore, there is an urgent need to improve the understanding of existing groundwater resources in terms of aquifer distributions and interactions and the processes that control groundwater dynamics, recharge and chemistry, for an effective strategy to reduce the pressure on the hydrologic system. The main aim of this PhD is to contribute to knowledge about shallow groundwater in semi-arid environments by estimating groundwater recharge of perched aquifers in the Cuvelai-Etosha Basin (CEB). Four specific objectives were examined in this study: first, the study characterizes the groundwater chemistry and isotopic composition of oxygen (δ18O) and hydrogen (δ2H) in order to understand mechanisms of groundwater dynamics and the quality of groundwater in perched aquifers; secondly, it analyses the spatial and temporal variations of hydrochemical data and isotopic compositions of hand-dug wells in the CEB, with particular focus on water origin and recharge processes; thirdly, the study evaluates the relationship between the shallow (perched) aquifers and the deeper-seated aquifers; and finally, it develops a conceptual model for the perched aquifers. The methods employed in this research are based on isotopic and hydrochemical data to understand groundwater recharge mechanisms. Integrated isotopic and hydrochemical tracers, along with standard hydrological data, are used to understand complex dryland hydrological processes on different spatial and temporal scales.
Different spatial and temporal scales are particularly important for arid environments due to the high heterogeneity associated with these environments. Therefore, in this study, water samples were collected from rain collectors, hand-dug wells and boreholes and analysed for major ions and stable isotopes (18O and 2H) over three years (2014-2017), in a total of 12 sampling campaigns. Chemical analyses were performed at the Analytical Laboratory Services in Windhoek, Namibia, and at the hydrochemistry laboratory of BGR in Hanover, Germany, using titration, ion chromatography and ICP-OES. The reliability of the analyses was checked by an ion charge balance error on all samples. Stable isotopes were measured at the University of Namibia (UNAM) and BGR laboratories using an off-axis integrated cavity output spectroscope (OA-ICOS, Los Gatos DLT-100) and a cavity ring-down spectrometer (CRDS, model L2120-i, Picarro Inc.), respectively. Results show that the groundwater chemistry of perched aquifers is controlled mainly by strong evaporation, dissolution of carbonate minerals (calcite and dolomite) and evaporitic minerals (gypsum and halite), silicate weathering and cation exchange. Stable isotope compositions suggest that deep groundwater is recharged by high-intensity/large rainfall events, whereas the shallow wells can be recharged by less intense/small rainfall events. Water in deep wells reflects a mixture of water influenced by evaporation during or before infiltration and water that infiltrated through fast preferential pathways, whereas shallow wells are strongly influenced by evaporation. The mean parent isotopic composition for shallow wells in the ephemeral river is -7.8 for δ18O and -51.8 for δ2H; for deep wells in the pans and depressions it is -8.7 and -58.2 for δ18O and δ2H, and -8.6 and -57.5 for δ18O and δ2H for wells in the Omusati region. Hydrochemical and isotopic data reflect spatial variability between samples from the Omusati and Ohangwena regions.
The spatial heterogeneity, as shown by TDS, can be attributed to lithological, climatic and anthropogenic factors. Furthermore, temporal variations indicate the timing of groundwater recharge. The results also imply interaction between the perched aquifer and the regional aquifer in the pans and depressions, while in the ephemeral river no relation could be established. Higher recharge rates are estimated for the pans and depressions (7.3% to 25.5%) than for the ephemeral river (7.9% to 17.8%). Therefore, it is recommended that groundwater management practices be designed taking into account differences in perched aquifer characteristics, for example by designing abstraction infrastructures that include treatment for natural contaminants (i.e., fluoride and TDS). Contaminants from anthropogenic sources in the wells, on the other hand, can be reduced or prevented by introducing protection zones. Education on basic water usage and protection will also be an advantage. Furthermore, it could be shown that it is indeed essential to unravel the hydrogeological complexities of heterogeneous perched aquifers using isotopic and hydrochemical tracers at different spatial and temporal scales, and thus more research is needed in this regard.
Description:
A dissertation submitted in fulfillment of the requirements for the Degree of Doctor of Philosophy in Science (Geology)
Files in this item: hamutoko2018.pdf (6.667 MB) | http://repository.unam.na/handle/11070/2227
This project assessed the technical feasibility of organic matter (COD) removal from livestock effluents by electrocoagulation. An experimental design with two factors at three levels was used to evaluate the effect of the variables distance between electrodes and pH, using aluminium sacrificial electrodes. Maximum removal (90.16%) was obtained at pH 7 and a 2.0 cm distance between electrodes. This study demonstrated the technical feasibility of electrocoagulation (EC) for the removal of organic matter, as COD, present in wastewater from the livestock industry.
Keywords: Livestock effluents, Electrocoagulation, Chemical Oxygen Demand (COD).
Introduction
Colombian livestock is a very important sector and a main axis of the economy of the Caribbean region, especially in the department of Cordoba, due to geographical conditions characteristic of the area. In the livestock industry, the wastewater generated by cattle bathing represents a great threat to the environment; it contains traces of recalcitrant pesticides, toxic compounds and high levels of organic matter. Electrocoagulation (EC) may be used as an alternative wastewater treatment system because it is inexpensive, the equipment used is simple and easy to operate compared to conventional methods (chemical coagulation), no chemical substances are added, and it produces larger and more stable flocs than those formed by chemical coagulation [3, 4]. Electrocoagulation is receiving increasing acceptance by industry in view of these advantages. The method is based on the anodic dissolution of metallic aluminium and the formation of aluminium ions in the vicinity of the anode, these ions being immediately converted to the corresponding hydroxides. The hydroxide, in the process of coagulation and flocculation, is colloidally dispersed and has highly adsorptive and adherent qualities. When a current is passed, anodic dissolution of Al takes place according to Eq. (1).
Simultaneously, water is reduced at the cathode to hydrogen gas and hydroxyl ions (OH-) (Eq. (2)).
Thus, electrocoagulation introduces metal cations in situ, electrochemically, using sacrificial anodes. Al3+ hydrolyzes in water, forming the corresponding hydroxides; Eqs. (3)-(5) illustrate this in the case of an aluminium anode. The hydroxides neutralize the charge on the organic matter, which is responsible for its colloidal stability.
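Eqs. (1)-(5) referenced above were lost from this copy of the text. As a reconstruction, the standard reactions for aluminium electrocoagulation that the passage describes are:

```latex
\begin{align*}
\mathrm{Al_{(s)}} &\rightarrow \mathrm{Al^{3+}_{(aq)}} + 3e^- \tag{1}\\
\mathrm{2\,H_2O} + 2e^- &\rightarrow \mathrm{H_{2(g)}} + 2\,\mathrm{OH^-} \tag{2}\\
\mathrm{Al^{3+}} + \mathrm{H_2O} &\rightarrow \mathrm{Al(OH)^{2+}} + \mathrm{H^+} \tag{3}\\
\mathrm{Al(OH)^{2+}} + \mathrm{H_2O} &\rightarrow \mathrm{Al(OH)_2^{+}} + \mathrm{H^+} \tag{4}\\
\mathrm{Al(OH)_2^{+}} + \mathrm{H_2O} &\rightarrow \mathrm{Al(OH)_{3(s)}} + \mathrm{H^+} \tag{5}
\end{align*}
```

Eqs. (1) and (2) are the anodic dissolution and cathodic water reduction stated in the text; Eqs. (3)-(5) are the usual stepwise hydrolysis of Al3+ to aluminium hydroxide reported in the electrocoagulation literature, and are given here as the presumed content of the missing equations.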
The main objective of this study was to evaluate the removal of organic matter (COD) in livestock effluents by electrocoagulation, controlling pH and distance variables between electrodes.
On this basis, it was expected that using an Al anode would improve the electrocoagulation process by enhancing the rate of mass transfer of Al3+ from the anode surface to the bulk solution, so that the process could be applied in continuous flow. This would reduce concentration polarization and, hence, the passivation tendency of the anode, which adversely affects the electrocoagulation process.
Materials and methods
Sample collection and preservation
The wastewater sample under study was taken from a cattle farm in the town of Monteria, Cordoba, Colombia, on a cattle-bathing day. Effluents were collected in plastic tanks in amounts sufficient for processing in the EC system. pH and COD analyses were performed before and after treatment, based on the Standard Methods 4500-H+ B and 5220 D procedures. The initial pH and COD were 6.5 units and 680 mg L-1.
Electrocoagulation cell
Fig. 1 shows the experimental scheme of the EC process.
A volume of 4.0 L was used for each test sample. Six iron electrodes in the reactor were used as cathodes, and three aluminium anodes with effective dimensions of 18 × 2.5 × 0.5 cm were connected to a PHYWE DC power supply (0-50 V). The submerged surface of every electrode was 25 cm2.
Experimental design
Statgraphics Centurion 15.2.06 software was used for the statistical design of experiments and data analysis. The two most important operating variables, initial wastewater pH (x1) and distance between electrodes (x2), were optimized. Their ranges and levels are shown in Table 1.
The levels of every factor were evaluated in triplicate, for a total of 27 runs. The percentage of COD removal was established as the response variable. Time and potential were fixed at 30 min and 50 V, and aluminium was used as the sacrificial electrode. The fixed variable settings were selected based on tests conducted by Mestra and Pineda [6].
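The 3 × 3 factorial layout in triplicate can be enumerated directly. In this sketch, the pH levels (4, 7, 8) and the extreme distances (2 cm and 5 cm) are taken from the Results section; the 3.5 cm middle level is an assumption, since Table 1 is not reproduced here.

```python
from itertools import product

ph_levels = [4, 7, 8]                 # from the Results section
distance_levels_cm = [2.0, 3.5, 5.0]  # 3.5 cm midpoint is an assumed level
replicates = 3

# One run per (pH, distance, replicate) combination
runs = [(ph, d, rep)
        for ph, d in product(ph_levels, distance_levels_cm)
        for rep in range(1, replicates + 1)]

print(len(runs))  # 27, matching the 27 runs reported
```

Enumerating the design this way also gives a natural run sheet: each tuple identifies the factor settings and replicate number for one batch experiment.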
Statistical analysis
The data were subjected to ANOVA, and mean comparisons were performed when needed using Tukey's test. Response Surface Methodology (RSM) was applied to evaluate the simple and combined effects of the independent parameters on removal and to optimize the operating conditions. Statgraphics Centurion 15.2.06 statistical software was used for all analyses. A significance level of 0.05 was selected.
Results and discussion
COD removal function of pH and distance
The removal of COD in function of pH and distance between electrodes is shown in Fig. 2.
Removal values are higher than 80% at initial pH of 4 and 7, independently of distance. However, an initial pH of 8 with a 5 cm distance shows 67.21% removal (Fig. 2c). It is observed that decreasing the distance between electrodes increases COD removal, reaching maximum values (> 80%) at a distance of 2 cm for the different pH values. This is possibly due to the electrostatic field formed during the electrocoagulation process, which depends on the distance between electrodes [7-8] and drives the production of metal ions at the anode (sacrificial electrode). Their function is to destabilize the charges on the contaminant particles present in the water, neutralizing the forces that keep the particles in suspension, allowing the formation of aggregates of contaminants and initiating the coagulation process in less time, with higher removal.
Fig. 2b shows that, at pH 7, COD removal decreases with distance, with no statistically significant difference (p > 0.05), unlike pH 4 and 8, which show a statistically significant difference (p < 0.05). This can be attributed to the behaviour at pH < 7, where the hydroxides formed are not stable enough to react with the aluminium cation, hindering coagulant formation [10-12]. A pH near 7 facilitates the generation of hydroxyl radicals and, in turn, the formation of agglomerates, which are ultimately removed from the solution. At pH values > 7, the oxidative potential decreases, slowing the reactions involved in the process and therefore reducing pollutant removal.
Optimization of the treatment conditions
The application of RSM, based on parameter estimation, generates a second-order regression model relating the COD removal percentage (y) to the independent variables studied (Eq. 6).
The coefficient of determination (r2) was 0.843, which implies that 84.3% of the variation in COD removal is explained by the independent variables, and 15.7% of the variation cannot be explained by the model. According to Montgomery [15], an r2 of at least 75% is considered satisfactory for proceeding with the methodology. The model generates the optimum values for maximum COD removal as a function of pH and distance. The response surface calculated on the basis of the model (Fig. 3) allows visualizing the behaviour of the response variable and clearly indicates the combination of factor levels that leads to a maximum value.
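Eq. (6) itself is not reproduced in this copy of the text, so the sketch below fits a generic second-order model of the same form, y = b0 + b1·x1 + b2·x2 + b12·x1·x2 + b11·x1² + b22·x2², by ordinary least squares. The data and coefficients are synthetic illustrations, not the paper's fitted values.

```python
import numpy as np

def design_matrix(x1, x2):
    # Columns: 1, x1, x2, x1*x2, x1^2, x2^2 (second-order RSM model)
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

rng = np.random.default_rng(0)
x1 = rng.uniform(4.0, 8.0, 27)   # pH (illustrative range)
x2 = rng.uniform(2.0, 5.0, 27)   # electrode distance, cm (illustrative range)

# Synthetic "true" coefficients, used only to generate example responses
true_b = np.array([40.0, 12.0, -3.0, 0.5, -0.8, -0.2])
y = design_matrix(x1, x2) @ true_b + rng.normal(0.0, 0.5, 27)

# Least-squares fit and coefficient of determination
b_hat, *_ = np.linalg.lstsq(design_matrix(x1, x2), y, rcond=None)
residuals = y - design_matrix(x1, x2) @ b_hat
r2 = 1.0 - residuals.var() / y.var()
```

With a fitted surface in hand, the optimum operating point is found by maximizing the quadratic over the experimental region, which is what the response-surface plot in Fig. 3 visualizes.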
In this study, it is observed that the best results are in the orange region, where factors interaction leads to results between 92.5 and 95 %.
Significant differences were not observed (p < 0.05) when the response was compared to the corresponding experimental value, which confirms that RSM can be used to optimize the process parameters (Table 2).
Additionally, it is observed that the results of the second order regression model present significant correlations with the experimentally obtained results (r=0.996, p=0.01, n=27; Fig. 4).
The variance analysis is shown in Table 3.
It is observed that the independent variables are not statistically significant (p > 0.05). However, the distance factor has a p-value close to 0.05, which may reveal a removal difference associated with this factor.
To assess the adequacy of the developed model, the differences between the experimental and predicted responses (the residuals) are analyzed graphically. Residuals are the variation left unexplained by the model; if the model predicts accurately, they will follow a normal distribution, the normal probability plot of residuals will show a linear trend, and the plot of residuals versus predicted values will display a random pattern around zero. Fig. 5a shows the normal probability plot of residuals for the model optimizing COD removal from livestock-industry effluents with the Al anode electrode; its linear trend meets the criterion of statistical normality.
Fig. 5b shows the plot of residuals versus predicted values for COD removal, exhibiting a random pattern of residuals around zero, which is consistent with a normal distribution.
The electrical energy consumption was calculated as E (kWh) = U × I × t / 1000, where U is the cell voltage (V), I is the current (A), and t is the EC time (h). Large-scale implementation of an EC unit depends mainly on the cost of the treatment process, so an economic evaluation was made at the optimum operating conditions. The energy supplied to the wastewater treatment by EC was found to be 0.03 kWh; given the kWh price in Colombia ($349.7; US$0.13), this is $4.37 for the treated volume of solution (0.004 m3), i.e., 1092.8 $/m3 (0.39 US$/m3). Relating the Al consumption per test (0.0643 g/L, or 64.25 g/m3) to the Al material price ($4313.7/kg) gives $277.2/m3 (US$0.11/m3). These results illustrate the economic feasibility of the proposed treatment for on-field implementation, i.e., in wastewater treatment plants (WWTPs). pH and temperature should be taken into account when developing improvements to the electrocoagulation process. It has been determined in some cases that greater removal of a contaminant occurs within a specific pH range, which can even be rather wide. In general, the literature indicates that the best removals are obtained at pH values close to 7, which improves the electrocoagulation process.
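The cost arithmetic above can be reproduced in a few lines. Note that the current I is not reported directly in the text; it is back-calculated here from the stated 0.03 kWh per 4.0 L batch, so it should be read as an implied value, not a measured one.

```python
# Operating-cost sketch for the EC batch described in the text.
U = 50.0      # cell voltage, V
t = 0.5       # treatment time, h (30 min)
E_kwh = 0.03  # reported energy per 4 L batch, kWh

# Implied current from E = U * I * t / 1000  ->  I = 1000 * E / (U * t)
I = 1000.0 * E_kwh / (U * t)

# Aluminium consumption cost per cubic metre, reproducing the paper's figures
al_dose_g_per_m3 = 64.25        # from 0.0643 g/L
al_price_cop_per_kg = 4313.7    # Colombian pesos per kg
al_cost_cop_per_m3 = al_dose_g_per_m3 / 1000.0 * al_price_cop_per_kg

print(round(I, 2), round(al_cost_cop_per_m3, 1))  # 1.2 277.2
```

The aluminium figure matches the $277.2/m3 reported in the text; the implied current of about 1.2 A follows from the reported energy, voltage and time.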
The pre-treatment temperature was 28 °C, and the post-treatment temperature increased by 2 °C. Chen indicates that the system temperature must be lower than or equal to 60 °C to achieve greater removal.
Conclusions
Electrocoagulation (EC) is a technically viable alternative for the treatment of livestock bath effluents and the removal of organic matter as COD. The results showed that, for a distance between electrodes of 2 cm, the COD removal percentage was significantly higher. The applied electrochemical treatment allowed a COD removal of 90.16% under optimal electrocoagulation conditions: pH 7.0, distance 2 cm, voltage 50 V, time 30 min. These results indicate that the electrochemical treatment process is an effective treatment method in terms of removal efficiency, with reasonable operating costs.
References
1. Lombana J, Martinez D, Valverde M, et al. Characterization of the livestock sector of the Colombian Caribbean. Editorial Universidad del Norte; 2012.
2. Barrios J, Yepez J. Evaluation model of solar photocatalytic degradation kinetics in a CPC reactor of a pesticide used in cattle bathing. Graduation Project: Universidad de Cartagena, Facultad de Ingenieria, Cartagena, Colombia; 2010.
3. Rios GB, Almerava F, Herrera MT. Port Electrochim Acta. 2005;23:17.
4. Holt PK, Barton GW, Mitchell CA. Chemosphere. 2005;59:355.
5. APHA. Standard Methods for the Examination of Water and Wastewater. 21st ed. Washington, DC: American Public Health Association, American Water Works Association, Water Pollution Control Federation; 2005.
6. Mestra D, Pineda R. Electrocoagulacion como alternativa para el tratamiento de aguas residuales del bano de ganado bovino en el departamento de Cordoba. Universidad de Cordoba, Colombia; 2015.
7. Holt P, Barton G, Mitchell C. Electrocoagulation as a wastewater treatment. The Third Annual Australian Environmental Engineering Research Event; 1999.
8. Daneshvar N, Sorkhabi HA, Kasiri MB. J Hazard Mater. 2004;112:55.
9. Chen G. J Environ Eng. 2000;126:858.
10. Chen G. Sep Purif Technol. 2004;38:11.
11. Kumar P, Chaudhari S, Khilar K, et al. Chemosphere. 2004;55:1245.
12. Bayramoglu M, Kobya M, Can O, et al. Sep Purif Rev. 2004;37:117.
13. Heidmann I, Calmano W. J Hazard Mater. 2008;152:934.
14. Linares I, Martinez V, Barrera C, et al. Avances Ciencias Ingenieria. 2011;2:21.
15. Montgomery DC. Design and analysis of experiments. Nebraska: John Wiley & Sons; 2000.
16. Sarabia LA, Ortiz MC. Response surface methodology. In: Brown SD, Tauler R, Walczak B, editors. Comprehensive chemometrics. Oxford: Elsevier; 2009. p. 345-390.
17. Thirugnanasambandham K, Sivakumar V, Maran JJP. J Taiwan Inst Chem Eng. 2015;46:160.
18. Chen G. Sep Purif Technol. 2004;38:11.
19. Chen G, Chen X, Yue PL. J Environ Eng. 2000;858.
Acknowledgements
The authors wish to thank the Water, Applied and Environmental Chemistry Group, Laboratory of Toxicology and Environmental Management, University of Cordoba, Monteria-Colombia. | http://www.scielo.mec.pt/scielo.php?script=sci_arttext&pid=S0872-19042016000400004&lng=en&tlng=en |
The Early Childhood Center believes that children are active and curious by nature and will become engaged learners in the realm they know best – the hands-on world of play. We believe there is a critical link between a child’s early experiences, the development of a love for learning, and their later success in life. We strongly affirm that children, families, and society all benefit from enriched and high-quality early childhood programs.
That is why we offer a program that provides a child with the ability to learn through play with many opportunities for socialization and intellectual growth. We feel it is important to provide activities for social growth, positive self-image and the development of school readiness skills, which can lead to future academic success.
Art, music, stories and physical play are important pathways for our curriculum. We provide developmentally appropriate activities, which are geared to each child’s individual abilities and their unique learning styles and needs. We strive to create a classroom environment that feels safe and comforting and that will promote self-discovery and exploration. The staff is here to stimulate, support and nurture each child as they grow physically, intellectually, emotionally and socially.
We recognize that a unique family is at the center of each child’s life and that parents and caregivers are the child’s first and foremost teachers. We strive to work with parents to find the best way to advance their child’s emerging potential. We want to help children value their own and their families’ uniqueness and to appreciate and celebrate the differences among other people. We aspire to have children leave our school with the skills and disposition to be enthusiastic lifelong learners and positive, resourceful members of a community. | https://gablespreschool.org/our-philosophy/
Security & Privacy: Current cover and Risk Management Services
Technological advancement has enabled greater working flexibility and increased methods of communications. However, new technology brings about new risks with one of the most commonly reported thefts being that of personal data.
As detailed within this document Zurich Municipal already provides cover for the key concerns related to information risk and public sector information. | https://newsandviews.zurich.co.uk/download/security-privacy-current-cover-and-risk-management-services/ |
Abstract. Land management practices can reduce the environmental impact of agricultural land use and production, improve productivity, and transform cropland into carbon sinks. We applied the global vegetation model LPJmL5.0-tillage-cc with a modified representation of cover crop practices. We assessed simulated responses to cover crop practices on agroecosystem components, in comparison to bare soil fallow between two consecutive primary crops’ growing seasons, on global cropland for a simulation period of 50 years. With cover crops and tillage, we obtained annual global median soil carbon sequestration rates of 0.52 and 0.48 t C ha−1 yr−1 for the first and last decades of the simulation period, respectively. We found that cover crops with tillage reduced annual nitrogen leaching rates from cropland soils by a median of 39 % and 54 %, but also reduced the productivity of the following main crop by an average of 1.6 % and 2 % for the two analyzed decades. The largest reductions in productivity were found for rice; yields were modestly lowered for maize and wheat, whereas soybean yields showed an almost uniformly positive response to cover crop practices during fallow periods.
Further, the results suggest that no-tillage is a suitable complementary practice to cover crops, enhancing their environmental benefits and reducing potential trade-offs with main crop productivity through their impacts on soil nitrogen and water dynamics. For cover crops applied in conjunction with no-tillage across the mapped Conservation Agriculture cropland area for the period 1974–2010, we estimated a cumulative net soil carbon accumulation of 1.4 Pg C, an annual median reduction of soil nitrogen leaching of 57 %, and mostly enhanced yields of the following main crop.
The spatial heterogeneity of the simulated impacts of cover crops on the variables assessed here was related to the time elapsed since the introduction of the management practice, as well as to the environmental and agronomic conditions of the cropland. This study supports the findings of other studies, highlighting the substantial potential contribution of cover crop practices to the sustainable development of arable production.
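The headline figures above can be sanity-checked with simple unit arithmetic. The sketch below is a hypothetical back-of-envelope check: the rate, area, and period used are illustrative assumptions chosen to be of the same order as the reported values, not numbers taken from the model output.

```python
def cumulative_soil_carbon(rate_t_per_ha_yr, area_mha, years):
    """Cumulative accumulation in Pg C (1 Pg = 1e9 t; 1 Mha = 1e6 ha)."""
    return rate_t_per_ha_yr * area_mha * 1e6 * years / 1e9

# e.g. an assumed ~0.5 t C/ha/yr over ~100 Mha of cropland for ~28 years
# gives an accumulation on the order of the 1.4 Pg C reported above
print(cumulative_soil_carbon(0.5, 100, 28))
```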
Vera Porwollik et al.
Status: final response (author comments only)
- RC1: 'Comment on bg-2021-215', Anonymous Referee #1, 12 Sep 2021
- RC2: 'Comment on bg-2021-215', Anonymous Referee #2, 06 Oct 2021
Model code and software
LPJmL5.0-tillage-cc model source code, management simulation outputs, and R script for post-processing data (1.0.1) https://zenodo.org/record/5178070
A Growing Investigation
In addition to us each growing our own plants (click here) under normal conditions (planted in soil, kept in sunlight and watered), we also decided to investigate whether each of these conditions - soil, sunlight and water - is actually important to plant growth at all. To test this, we planted some broad bean seeds under different conditions. One was planted in sand instead of soil, one was planted in soil but given no water, another was planted in soil and watered but kept in complete darkness, another seed was just put in a jar of water, and finally one was planted in soil, left in the sun but was fed with coke rather than water. Which one do you think will be the most successful? Do you think any will grow at all?
Click here to see how they are doing. | https://www.st-austins.co.uk/a-growing-investigation/ |
Outbreaks of the disease without an obvious association with animal or laboratory exposure may indicate bioterrorism. The organism was known to have been weaponized before the Biological Weapons Convention went into effect, and is listed as an overlap agent in the Select Agent Program and as a Category B organism on the CDC Bioterrorism Agents-Disease list.
Preventive actions need to be directed against people with exposure to animals and ticks in the wild, to meat processing, and in laboratories (especially working with sheep). Unpasteurized dairy products are also a risk factor.
A commercial human Q fever vaccine (Q-Vax) is manufactured in Australia but is not available in the United States; it is available on an investigational basis from the US Army Medical Research Institute of Infectious Diseases (USAMRIID) at Fort Detrick, Maryland.
Postexposure prophylaxis for 5 days by using tetracycline or doxycycline is effective if initiated within 8-12 days of exposure. Treatment with tetracycline during the incubation period may delay but not prevent the onset of symptoms.
The disease has both an acute and a chronic form. In the acute form, however, there are different major presentations of pneumonia versus gastrointestinal disease.
The overall presentation is "flulike", with fever of sudden onset, headache and muscle pain. Respiratory involvement is common, with pneumonia in most U.S. cases, while hepatitis is more common in Europe. Nausea is not uncommon, along with right upper quadrant pain.
Cardiovascular involvement correlates to the height of the fever. Myocarditis and pericarditis are seen in the acute form.
Several forms of rash may be seen, along with meningitis and encephalitis.
Canadian researchers hypothesized that the route of infection, known to include inhalation, tick bite, and contaminated food, could determine the form of the clinical disease. Mice in one group were given C. burnetii by a nasal route, and another group by intraperitoneal injection. While both groups developed pneumonia, the group that inhaled the pathogen had a statistically higher level of airway changes. "It was concluded that the route of infection is one determinant of the manifestations of acute Q fever."
Chronic disease usually presents as endocarditis, especially of abnormal (e.g., aneurysm) or prosthetic heart valves; endocarditis usually coexists with chronic hepatitis. It may also present with bone disease such as septic arthritis or osteomyelitis, especially when there are risk factors such as prosthetic joints.
Immunologic testing is definitive, although some general findings may be suggestive. In most laboratories, the indirect immunofluorescence assay (IFA) is the most dependable and widely used method. Coxiella burnetii may also be identified in infected tissues by using immunohistochemical staining and DNA detection methods.
It can be grown in conventional cell cultures, embryonated chicken yolk sacs, or laboratory animals. Culture is not routinely done, but can be useful in isolating the organism from contaminated tissue samples, or to obtain phase I antigens. Inoculation of laboratory animals (guinea pig, mouse, hamster) is helpful in cases requiring isolation from tissues contaminated with various microorganisms or in order to obtain the phase I antigen.
Confirming a diagnosis of Q fever requires immunologic testing to detect the presence of antibodies to Coxiella burnetii antigens. Refinements of this testing can determine if a case is acute or chronic.
In most laboratories, the indirect immunofluorescence assay (IFA) is the most dependable and widely used method. PCR and ELISA also are used. PCR offers the advantage of being useful in herd screening, and also allowing heat inactivation of the organism to protect laboratory workers.
Recent studies have shown that greater accuracy in the diagnosis of Q fever can be achieved by looking at specific levels of classes of antibodies other than IgG, namely IgA and IgM. Combined detection of IgM and IgA in addition to IgG improves the specificity of the assays and provides better accuracy in diagnosis. IgM levels are helpful in the determination of a recent infection. In acute Q fever, patients will have IgG antibodies to phase II and IgM antibodies to phases I and II. Increased IgG and IgA antibodies to phase I are often indicative of Q fever endocarditis.
Coxiella burnetii exists in two antigenic phases called phase I and phase II. This antigenic difference is important in diagnosis.
Chronic Q fever: Phase I antibodies predominate; it takes longer for them to appear.
The continued presence of Phase I suggests continuing exposure. Both types can persist for months or years. To confirm chronic disease, high Phase I levels are detectable along with general markers of an inflammatory process.
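The interpretation rules above (phase II IgG together with IgM for acute disease; elevated phase I IgG and IgA for chronic disease) can be sketched as a simple decision function. This is an illustrative simplification with hypothetical boolean inputs; real assays use laboratory-specific titer cut-offs and require clinical context.

```python
def interpret_q_fever_serology(igg_phase2, igm_phase1, igm_phase2,
                               igg_phase1, iga_phase1):
    """Rough interpretation from boolean antibody findings (illustrative only)."""
    # Acute: IgG antibodies to phase II plus IgM antibodies to phase I and/or II
    if igg_phase2 and (igm_phase1 or igm_phase2):
        return "consistent with acute Q fever"
    # Chronic (e.g. endocarditis): increased IgG and IgA antibodies to phase I
    if igg_phase1 and iga_phase1:
        return "suggestive of chronic Q fever (e.g. endocarditis)"
    return "indeterminate; repeat or paired sera advised"
```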
Different approaches are needed for the acute and chronic forms of the disease.
Doxycycline, especially when started within the first 3 days and continued twice daily for 15-21 days, is a common regimen. Fluoroquinolone antibiotics also have demonstrated efficacy. There may be a relapse, which calls for another course of antibiotics.
Chronic Q fever endocarditis is much more difficult to treat effectively and often requires the use of multiple drugs.
One regimen is doxycycline in combination with hydroxychloroquine for 1.5 to 3 years; this therapy leads to fewer relapses, but requires routine eye exams to detect accumulation of chloroquine.
Surgery to remove damaged valves may be required for some cases of C. burnetii endocarditis.
Acute Q fever is generally a self-limited disease (in 38% of cases); more than one half of patients are asymptomatic, and only 2-4% require hospitalization. The mortality rate for symptomatic patients is less than 1%.
Chronic Q fever, which is practically synonymous with Q fever endocarditis, is more difficult to treat than acute Q fever. Mortality is almost universal if untreated, but the mortality rate is less than 10% with appropriate treatment.
In 1983, 415 confirmed cases took place in Switzerland, 3 weeks after 12 flocks of sheep, totaling 850 to 900 animals, came into a populated valley from mountain pastures. Of the human population in the villages along the road that the sheep followed, 21.1 percent contracted the disease, while only 2.9 percent were infected in the villages away from the road.
Marrie TJ, Stein A, Janigan D, Raoult D (1996). "Route of infection determines the clinical manifestations of acute Q fever." J Infect Dis 173(2): 484–7.
This page was last modified 10:11, 31 May 2009. | http://en.citizendium.org/wiki/Q_fever |
The award is given in recognition of designers who have ‘achieved sustained excellence in aesthetic and efficient design for industry’ and is presented by the RSA (Royal Society for the encouragement of Arts, Manufacturers and Commerce).
Callum said: “To gain such recognition from fellow members of the design community is an enormous privilege for me, particularly from a faculty that is not directly involved in the car business. This distinction is a testament to the value of car design and the role that car designers play in the motoring industry.
“Design is a major driving force in the creation of the history of modern motoring and it is an honour for me, and my fellow artists and designers in the industry to have this acknowledged.”
Presenting the diploma at the RSA last Thursday, RSA chairman Gerry Acher said: “As the man responsible for the stunning appearance of the new XK sports car, members of the Faculty of Royal Designers for Industry felt that Ian Callum’s contribution towards the new direction of Jaguar cars should be recognised.”
Ian Callum joined Jaguar from Aston Martin in 1999 and helped in launching the X-Type Estate, the new XJ saloon, S-Type refresh and headed the design team towards the R-Coupe, RD-6, the Advanced Lightweight Coupe and other advanced design concept cars.
His most recent design is the new XK sports car. Design features such as the distinctive oval grille opening, prominent bonnet power-bulge and practicality-enhancing rear liftback all echo the classic E-type -- and many argue that it also echoes the car many see as his finest work, the Aston Martin DB7.
This is not the first time Ian has been recognised by the RSA; in 1975 he received first prize for the Annual RSA college bursary for Industrial Design, and he also received a “commendation” for Furniture Design the following year.
This is the second award for Ian Callum this year. He recently received the Jim Clark Memorial Award, given by the Association of Scottish Motoring Writers (ASMW) to a Scot or Scots who have achieved excellence in the field of motoring.
The Winslet connection
Callum's inspiration for the XK's design was the body of actor Kate Winslet, who's been reported as saying that she was very flattered but that some of the car's details should have been changed.
"The headlights are too small. They will have to go. And it needs a bar under the dashboard with pink and blue neon lights, umbrellas and pineapples," she said.
"And wings, like Chitty Chitty Bang Bang. And inflatables, so it can go in water. I absolutely think I should get a free car."
Callum was reported as saying: "Kate Winslet is my ideal woman. She is naturally a very shapely woman, very British with an underlying integrity and ability. Like a car, she has got substance, she is not just a pretty face," he said.
"So I designed the new XK body with her in mind. The interesting thing is that so many woman find sensual cars more appealing as well."
He was responsible for the X300, X100, X200, X400 and X600.
He's sorely missed....
*no doubt Zod will be along now telling us how he worships Chris Bangle and how anything different and controversial (even if it's hideous) is always a move in the right direction *
"So I designed the new XK body with her in mind. The interesting thing is that so many woman find sensual cars more appealing as well."
Absolute drivel.
I have read the news stories on Pistonheads for a couple of years now, and checked the forums with interest without ever being a registered member; just enjoying the banter without feeling the need to contribute.
Until now.
Would you ever get Issigonis uttering this nonsense? David Brown of Aston Martin perhaps? Sir William Lyons.........?
How exactly was the design influenced by Ms Winslet? Where is the correlation between any of the features of the XK and the 'shapely woman'?
"So I designed the new XK body with her in mind?" What?!! What? What exactly does this mean? You were jacking off whilst sketching out another Aston Martin rip off?
You were thinking of Kate Winslet then. Is she your typical XK customer? If not, perhaps you could consider something else whilst designing (seemingly single-handedly) the XK......perhaps thinking of Jaguar's pedigree, the famous Brown's Lane factory, the heritage associated with building a British car in the Coventry, the heart of the UK's manufacturing industry? Kate 'kin Winslet??!!?! Shame on you.
Why spout this sh1t when all people want to hear is the truth? Something along the lines of 'Whilst recognising Jaguar's heritage and loyal customer base, I felt I needed to lead my team in a new direction to take the Jaguar brand into the 21st Century, producing high performance, high-class sportscars for the British market and beyond...'
Kate Winslet? Kate Winslet?!? WTF!
Agreed. Helfett's the man. And a genuinely nice guy to boot. | https://www.pistonheads.com/gassing/topic.asp?t=330994 |
Company:
autoXpert Group
Job Type:
Engineering
Location:
Lebanon
Date Posted:
Nov 25, 2022
Salary:
Unspecified
Employee Type:
Full-Time Employee
Gender:
Both
Description
SUMMARY
The main purpose of the Sales & Application Engineer is to promote the company’s business, manage customers’ needs, and provide the right PowerGen product to customers after a thorough technical study of the business requirements. The role also explores the market for opportunities within segments to ensure growth and expansion of the company’s business and to achieve targets.
MAJOR RESPONSIBILITIES
• Provides world class service through building and maintaining positive Customer and employee relationships; operating at optimum capacity, setting and achieving PowerGen services and operational objectives.
• Develops and use innovative methods and work techniques with adequate guidance to deliver to customers proper products that suit their need.
• Active involvement with customers outside the office premises and carry out defined complex tasks to achieve the target.
• Applies engineering experience, technical skills and continue to develop learning in creating ultimate solutions for customers.
• Installation Quality Assurance (IQA): Has a thorough understanding of the installation review process, including the tools and processes associated with the various applications and industries.
• Cross-Functional Design Review: Can conduct cross-functional projects design review with independency and minimum assistance.
• Applications and Products Validation: Understands the measures of data quality applied for different applications and products, methods of analysis and available tools. Plans the workflow required to schedule, install, setup, calibrate and operate the equipment associated with the project/applications.
• Project Management: Obtains input and negotiates with subject matter experts and consultants. Can apply the principles, techniques, and procedures of project management, and is able to use the tools appropriately as part of the work with limited assistance.
• Manage Customer Relationships: Understands the critical components of enhancing productivity (schedules, appointments, call plans, feedback, etc.). Understands the attitudes and behaviors needed to establish strong relationships and build trust. Understands the need to respond to customer needs within an appropriate time frame.
SKILLS
• University degree in mechanical engineering with strong knowledge of electronics and testing equipment.
• A Minimum of 5 years of mechanical engineering experience with market knowledge in similar industry.
• Strong Solution provider with problem solver skills and prompt decision making.
• Proven customers management skills and ability to manage high volume of requests at one time.
• Excellent communication skills and good command of English and Arabic languages to convey the needs of the customers. | https://hirelebanese.com/jobdetails.aspx?id=220936 |
FIELD OF THE INVENTION
The present invention relates to an improved light modulating material and method of manufacturing the same for thermo-optic and electro-optic display devices.
BACKGROUND OF THE INVENTION
Liquid crystals have been used in the past in a wide variety of electro-optic and thermo-optic display applications. These include, in particular, electro-optic light modulating applications which require compact, energy-efficient, voltage-controlled light, such as watch and calculator displays. The electro-optic devices utilize the dielectric alignment effect in nematic, cholesteric and smectic phases of the liquid crystal, in which, by virtue of dielectric anisotropy, the average molecular long axis of the liquid crystal takes up a given orientation in an applied electric field. Thermo-optic devices accomplish the orientation or simple melting to the isotropic state via a temperature change.
The processes conventionally used for incorporating liquid crystals into a practical display form are generally complex and demanding. Display products are normally produced by sandwiching the liquid crystal material between two sheets of glass having electrically conductive coatings and then sealing the entire peripheral edge of the sandwich structure.
Conventional manufacturing makes it difficult to produce displays of large size, or having unusual shapes. In an attempt to expand the size and utility of liquid crystal displays, many methods have been suggested for coating liquid crystal material with various polymers to simplify their handling and generally allow for larger sheet construction of display or light modulating materials.
U.S. Pat. No. 4,435,047, for example, describes water emulsion methods both for encapsulating nematic liquid crystal material and for making a liquid crystal device using such encapsulated liquid crystal materials. However, there are a number of inherent difficulties one encounters when working with water emulsion systems. These include difficulty in obtaining and holding a uniform droplet size in the emulsion, poor spreading on plastic, and inability to dissolve and carry important additives in the system such as dyes, plasticizers, or electrical property modifiers.
More recently, a simplified approach was disclosed in "Field Controlled Light Scattering From Nematic Microdroplets", Doane et al. In this approach, microdroplets of a liquid crystal material were spontaneously formed in a solid epoxy polymer at the time of its polymerization. The cured polymer matrix containing these microdroplets was sandwiched between two layers of glass containing a conductive coating. This approach has simplified the manufacture of displays over processes using free liquid crystals or encapsulated liquid crystals. However, conventional curing of polymers such as an epoxy causes difficulties in coating and laminating in a continuous process. The materials are very low in viscosity during the coating step and cannot be laminated while soft due to leakage of monomer out of the edges of the laminate.
Light modulating materials containing microdroplets of liquid crystal material within a thermoplastic matrix have also been proposed. Such materials suffer a number of drawbacks in commercial application including limited temperature range, fatigue, slow switching times, and limited durability.
SUMMARY OF THE INVENTION
The invention is directed to improved, durable light modulating materials which are capable of rapid, reversible switching between a substantially translucent light scattering or diffusing state and a substantially clear or transparent state, without noticeable fatigue, when subjected to thermal cycling, a magnetic field or preferably an electrical field. The invention also is directed to methods of manufacturing such materials and devices employing such materials.
In one aspect of the invention, a liquid crystal phase is substantially uniformly dispersed within a polymer matrix which comprises a preferably transparent acrylic resin containing active hydrogen groups, such as hydroxy-functional acrylic resins and carboxy-functional acrylic resins, and a suitable cross-linking agent. The liquid crystal material preferably comprises a nematic type material, such as a cyanobiphenyl or a cyanoterphenyl, or a mixture of a nematic type material and chiral mesogenic material. Preferably, the acrylic resin and the liquid crystal material have closely matching indices of refraction so that the light modulating material may appear substantially clear or transparent under certain conditions of use. Preferably, the liquid crystal phase forms spontaneously upon evaporation or cooling of a homogeneous solution comprising the liquid crystal material and the acrylic polymer.
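The index-matching condition described above can be expressed as a simple tolerance check. In the sketch below, comparing the polymer index against the liquid crystal's ordinary refractive index (the index seen by light traveling along the field direction in the aligned state), and the 0.01 tolerance, are illustrative assumptions, not values taken from the specification.

```python
def index_matched(n_polymer, n_lc_ordinary, tol=0.01):
    """True when the matrix and aligned liquid crystal indices are close
    enough for the film to appear substantially transparent when 'on'."""
    return abs(n_polymer - n_lc_ordinary) <= tol

# a closely matched pair scatters little light in the field-on state
print(index_matched(1.52, 1.523))  # True
print(index_matched(1.52, 1.60))   # False: the mismatch would scatter light
```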
In a preferred aspect of the invention, an acrylic resin containing active hydroxy-functional groups is reacted with a diisocyanate containing material to form a urethane-acrylic copolymer which becomes part of the polymer matrix. This reaction primarily occurs after formation of the light modulating material, e.g., after application of the light modulating material to a conductive medium and evaporation of the solvent, and causes the light modulating material to take on many of the durability and other advantages of a thermoset acrylic polymer. Because the reaction primarily occurs after formation of the material, the processing difficulties normally associated with a thermoset resin are not encountered in fabricating the light modulating material of the invention.
In another preferred aspect of the invention, organometallic compounds, preferably titanate or zirconate materials, are incorporated into the light modulating material in order to reduce the turn-on time of the liquid crystal phase and/or reduce the voltage required to achieve substantial transparency.
In preparing the preferred light modulating material of the invention, the hydroxy-functional acrylic resin is normally dissolved in a solvent. The liquid crystal material, along with the diisocyanate containing material, dyes and other additives, are incorporated into the solvent solution to form a homogeneous solution. The solution is then applied to a surface, such as a conductive surface, by coating or casting techniques. Upon evaporation of the solvent, a solid, handleable film is produced which may immediately be used to fabricate a thermo-optic or electro-optic display device. However, over a time period of up to a few days, the hydroxy-functional groups on the acrylic resin continue to react with the diisocyanate to form a urethane-acrylic copolymer which transforms the polymer matrix into a cross-linked material with improved durability and a higher maximum operating temperature. These preferred cross-linked light modulating materials exhibit rapid turn-off time and no noticeable fatigue. Fatigue is the tendency of many light modulating materials to lose their ability to completely revert to their normally off state (the translucent state for most materials) after an electric field has been applied for a prolonged period of time or has been switched off and on a very large number of times.
The liquid crystal material is present in a phase formed within a polymer matrix. Preferably, the liquid crystal material is present in sufficient concentration to form apparently interconnected networks randomly distributed throughout the polymer matrix (see FIG. 1). These networks are believed to comprise a multiplicity of domains having locally oriented optic axes which, in aggregate, are normally randomly oriented and scatter light, thereby giving the polymeric film a substantially opaque or translucent appearance. Alternatively, the liquid crystal phase may be present in lower concentration in the form of discrete domains or microdroplets within the polymer matrix, the optic axes of which are normally randomly oriented and scatter light.
Upon application of an electric field, the optic axes of the liquid crystal domains become aligned, and under a suitable choice of indices of refraction of the materials, the film will appear substantially clear or transparent. Upon removal of the electric field, the liquid crystals return to their original random alignment. This behavior of the material is useful in the fabrication of light-controlling devices.
By properly adjusting the formulation of the light modulating material, the liquid crystal domains will return immediately to random alignment after removal of the electric potential. Alternatively, the formulation can be adjusted to achieve a memory state in which the axes of the liquid crystal domains will remain in alignment for a period of time after the electrical potential is removed. The memory state is an at times desirable, completely "on" state, differing from the generally undesirable "fatigue" situation mentioned previously in which the material stays in a partial "on" state after being switched off.
With the composition of the invention, phase separation normally occurs spontaneously as the solvent evaporates. The time for phase separation can be as short as a few seconds. After evaporation of the solvent, the polymer matrix is rigid enough that the coated material can immediately be laminated to a second conductive film or sheet.
In another aspect of the invention, dyes may be added to the liquid crystal material, becoming part of the liquid crystal phase when the liquid crystal material separates. This produces a colored opaque or translucent state in the material, and enables the display device to change between a colored state and a substantially transparent one.
With the foregoing in mind, a principal advantage of the invention is that it provides a simple, economical, efficient method of incorporating liquid crystal material into a polymer matrix to provide an improved light modulating material for display devices.
Another principal advantage of the invention is that it provides an improved light modulating material which can be easily applied to a surface using coating or casting techniques and which hardens and copolymerizes subsequent to film formation, causing the material to take on many of the durability and other advantages of thermoset polymers, such as retention of optical properties after repeated thermal or electrical cycling.
Another principal advantage of the invention is that it provides an electro-optic and thermo-optic display material which responds quickly (i.e., changes quickly between opaque and transparent) when an electric field is switched on and off, or when a temperature change is induced. Moreover, the light modulating material of the invention exhibits no noticeable fatigue or degradation of optical properties after extended operation.
A further advantage of the invention is the provision of an electro-optic and thermo-optic display material in which the display device can change between a colored and a substantially transparent state.
Another advantage is that the light modulating material of the invention is operable at lower voltages than known thermoplastic-based light modulating materials.
The foregoing and other features and advantages of the invention will appear in the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 includes photomicrographs of a typical light modulating material made in accordance with the invention showing an interconnected liquid crystal phase within a polymer matrix. FIG. 1A is a photomicrograph of the surface of a light modulating material of the invention containing urethane-acrylic copolymer, made in substantially the same manner as the material of Example 5B.
FIG. 1B is a photomicrograph of a vertical freeze fracture cross-section of a light modulating material of the invention containing urethane-acrylic copolymer, made in substantially the same manner as the material of Example 5B.
FIG. 1C is a photomicrograph of a diagonal freeze fracture cross-section of a light modulating material of the invention containing urethane-acrylic copolymer, made in substantially the same manner as the material of Example 5B.
FIG. 2(a and b) includes graphs showing the switching behavior of the light modulating materials prepared in Example 5 and illustrates the improved switching time resulting from the use of organometallic complexes.
FIG. 3(a and b) shows plots of transmission vs. voltage for the light modulating materials of Example 5 and illustrates the lower voltages required to effect complete or partial "turn-on" when organometallic compounds are incorporated.
FIG. 4(a and b) includes graphs showing the switching behavior of the light modulating materials prepared in Example 7 and illustrates the improved switching time resulting from the use of a crosslinking agent.
DETAILED DESCRIPTION OF THE PRESENTLY PREFERRED EMBODIMENTS
In preparing the light modulating material of the invention, the polymer or polymers which form the polymer matrix and the liquid crystal material, together with any other components, are normally dissolved in a compatible solvent to form a homogeneous solution. The solution is then applied to a surface, e.g., to an electrically conductive surface, using a conventional casting or coating method. Phase separation between the polymer and the liquid crystal material occurs subsequent to application, as the solvent is evaporated. This phase separation results in the formation of an interconnected or discrete liquid crystal phase within a polymer matrix, as shown in FIGS. 1B and 1C. As the solvent evaporates, the polymer hardens to form a solid matrix for the liquid crystal phase.
Alternatively, the polymer may be heated to a soft or molten state to which the liquid crystal and other components are added to form a homogeneous solution. In this case, phase separation occurs as the solution is cooled.
In preferred embodiments of the invention, the polymer materials used to form the homogeneous solution include a cross-linking acrylic resin or resins containing active, i.e., reactive, hydrogen groups, such as hydroxy-functional acrylic resins and carboxy-functional acrylic resins. Hydroxy-functional acrylic resins are preferred.
Suitable hydroxy-functional, cross-linkable acrylic resins include Rohm and Haas 608X, Johnson Wax CDX 587, and Johnson Wax 800B. The preferred hydroxy-functional acrylic resin is Johnson Wax 800B. Suitable carboxy-functional, cross-linkable acrylic resins include Johnson Wax SCX-815B and SCX-817B. Preferably, the acrylic resin has an index of refraction which closely matches the index of refraction of the aligned liquid crystal material so that the resulting light modulating material or film will appear clear or transparent when the optic axes of the liquid crystal phase are aligned.
In certain preferred embodiments, the homogeneous solution may also include an additional thermoplastic resin, such as Rohm and Haas B44 or A30, both of which are conventional, nonreactive thermoplastic acrylic resins. Such resins also should have a suitable index of refraction for matching with the liquid crystal material. When such resins are employed, the weight ratio between the active hydrogen containing acrylic resin and the additional thermoplastic material will normally range from between 10:1 to 1:1.
In the preferred embodiments of the invention which employ hydroxy-functional acrylic resins, a diisocyanate containing compound or material will normally be incorporated into the initial homogeneous solution. This diisocyanate containing material reacts slowly with the active hydrogen groups on the acrylic resin, over the course of a few hours to a few days or so and normally with the aid of a suitable catalyst, to form a urethane-acrylic copolymer. This copolymerization process, which principally occurs after evaporation of the solvent, causes the light modulating material to take on many of the durability and other advantages of a thermoset acrylic polymer, while retaining the continuous casting advantages associated with the use of a thermoplastic resin binder.
Diisocyanate materials suitable for this purpose include both aromatic and aliphatic diisocyanates, such as toluene diisocyanate and hexamethylene diisocyanate. The preferred diisocyanate material is hexamethylene diisocyanate.
Preferably, the molar ratio of the diisocyanate material to the hydroxy-functional acrylic resin is between 0.2:1 and 1:1.
All of the reactive sites on the acrylic backbone need not be reacted with diisocyanate. Normally, the resulting polymer matrix will contain a mixture of unreacted hydroxy-functional acrylic resin and urethane-acrylic copolymer. The ratio of these polymers varies with both the reaction conditions and the starting ratio of the diisocyanate material to the reactive acrylic resin.
Suitable catalysts for the isocyanate-active hydrogen reaction include dibutyl tin dilaurate and zinc octoate. Normally, the catalyst is present in a concentration of about 0.001 to 0.01% with respect to the reactive polymer.
In embodiments of the invention which employ carboxy-functional acrylic resins, an epoxide containing compound or material will normally be incorporated into the initial homogeneous solution. This epoxide material reacts slowly with the active hydrogen groups on the carboxy-functional acrylic resin, over the course of a few hours to a few days or so and normally with the aid of a suitable catalyst, to form an epoxy-acrylic copolymer. This copolymerization process, like the previously described reaction between hydroxy-functional acrylic resins and diisocyanates, principally occurs after evaporation of the solvent and causes the light modulating material to take on many of the durability and other advantages of a thermoset acrylic polymer, while retaining the continuous casting advantages of a thermoplastic resin binder.
Again, it is not necessary that all of the reactive sites on the acrylic backbone react with epoxide. Depending on the reaction conditions and the starting ratio of the epoxide material to the carboxy-functional acrylic resin, the resulting polymer matrix will contain varying amounts of both the carboxy-functional acrylic resin and the epoxy-acrylic copolymer.
The liquid crystal material can be a ferro-electric, cholesteric, smectic or nematic material, with nematic materials being most preferred. Suitable liquid crystal materials include cyanobiphenyls, cyanoterphenyls, cyanophenylcyclohexanes, phenylpyrimidines, cyclohexophenyl pyrimidines, alkylbenzoates, cyanobenzoates, and mixtures of the foregoing. Specific examples of liquid crystal materials are S2, E7, K24, and TM74A, all manufactured and sold by BDH Chemicals, Limited. Other examples include ROTN 132, 3010, S3033/1293, 3910, 3912, 403 and 607, all manufactured and sold by Hoffman La Roche Chemical Company. Also included are ZLI 1263, 1222, and 1905, manufactured and sold by E. Merck Chemical Company. The most preferred liquid crystal materials are E7 and ROTN 132.
In a highly preferred embodiment of the invention, the liquid crystal material will include a mixture of nematic compounds with a minor amount of chiral mesogenic compounds, for example cholesteric esters. A liquid crystal mixture including between 90.0 and 99.5% by weight of nematic compounds and between 10.0 and 0.5% by weight chiral mesogenic compounds results in a faster switching time when a field is removed or turned off. The preferred chiral mesogenic compounds for use in such mixtures include cholesterol benzoate and chiral pentyl cyanobiphenyl.
Normally, the liquid crystal material is present in a weight ratio of about 1:5 to 1:0.5 with respect to the polymeric materials, including any diisocyanate or epoxide containing material, with a weight ratio of about 1:1 to 2:1 being preferred.
Dichroic or other dyes may also be added to the solution. When a dye is used, the dye will preferably be substantially separated into the liquid crystal phase resulting in the light modulating material normally having a colored opaque appearance, which can be changed to a transparent or clear appearance by application of an electrical potential or temperature change as described above. Examples of suitable dichroic dyes include anthraquinones D5 or D35 from BDH Chemicals, Ltd., and S3026 from Hoffman La Roche Chemical Company. Other dyes which are normally not classified as dichroic dyes, such as Rhodamine 6G or Sudan III from Eastman Kodak Company, also function well in the system.
Other additives in an amount up to about 10% by weight of the liquid crystals can be added to the initial polymer solution. These additives may be dispersants, surfactants, or other aids which improve the contrast, appearance or performance of the resulting light modulating material. By proper selection of the liquid crystal materials and the associated additives, the liquid crystal phase can either return to its random orientation immediately after removal of the electric field or, alternatively, memory can be built into the material, in which case the liquid crystal phase will retain its orientation for a period of time after removal of the electric field and before returning to random alignment. The addition of materials such as surfactants or dyes to the solvent solution can change the switching time of the light modulating material.
In a highly preferred embodiment of the invention, an organometallic compound, preferably a titanate or zirconate compound, is added to the initial solution to reduce the time required to orient the liquid crystal phase, normally by an order of magnitude such as from about 250 milliseconds to about 10 milliseconds or less. Alternatively, an organometallic compound may be used to reduce the voltage required to achieve substantial transparency. During phase separation, it is believed that these compounds become part of the polymer matrix.
Suitable titanate compounds include neopentyl (diallyl) oxy, tri (dodecyl) benzene-sulfonyl titanate and neopentyl (diallyl) oxy, tri (N-ethylenediamino) ethyl titanate. Suitable zirconate compounds include neopentyl (diallyl) oxy, tri (dodecyl) benzene-sulfonyl zirconate and neopentyl (diallyl) oxy, tri (N-ethylenediamino) ethyl zirconate. The preferred organometallic compounds are mixtures of titanates or zirconates. The amount of organometallic compounds required varies with the degree of improvement sought in the switching time. Preferably, the weight ratio of the organometallic compound to the liquid crystal material is between 0.002:1 and 0.05:1.
Normally, in preparing the homogeneous solution, the polymer is first dissolved in a solvent suitable for the polymer. It is preferred that the solvent be one that will evaporate at or near ambient temperatures. Solvents that can be used include cyclohexanone, toluene, ethyl acetate, and chloroform. After the polymer has dissolved, the liquid crystal material is then added to the solvent solution, along with dyes or other additives, as desired, in order to form a normally clear, homogeneous solution. No special mixing conditions are normally required.
The homogeneous solution containing the components of the light modulating material can be applied by roller coating, casting, brushing, or the like, to a suitable surface, such as an endless belt, a plastic film, or a suitably prepared electrically conductive surface. The electrically conductive surface may be any conductive material commonly used in electro-optic display devices. A common conductive material is a film of aluminum or indium tin oxide applied to a base of a polyester film, a glass plate, or the like.
Normally, the homogeneous solution is applied to the surface as a thin film having a thickness between about 1 to 4 mils. After application, the solvent is preferably evaporated at or near ambient temperature to form a solid film which generally has a thickness in the range of 0.3 to 3.0 mils. However, heat may be applied to the film to aid in the evaporation process.
During the evaporation of the solvent, or soon thereafter, the film will normally turn from a clear solution into a cloudy or opaque film. This indicates that phase separation has taken place between the liquid crystal material and the polymer matrix and that the liquid crystal phase has formed. This occurs spontaneously as the solvent evaporates, and the time for phase separation is normally less than a minute and can be as short as a few seconds. The opaque or cloudy appearance of the polymeric film results from the random orientation of domains within the liquid crystal phase.
After evaporation of the solvent, the polymer matrix is rigid enough that the film can immediately be laminated to another material, such as a second conductive film or sheet and/or used to form a display device. When a diisocyanate or epoxide compound is present in the homogeneous solution, it remains substantially in the polymer matrix portion of the film and, over a period of about a few hours to a few days, reacts with the active groups on the acrylic resin to form a cross-linked acrylic-urethane or acrylic-epoxy copolymer. This causes the light modulating material to take on many of the durability advantages of a thermoset acrylic polymer.
By applying an electric potential, generally in the range of 10 to 200 volts, across portions of the film, the optic axes of the liquid crystal domains become aligned, causing all or portions of the film subject to the electric potential to become substantially clear or transparent. The effectiveness of this phenomenon depends to a large extent on the indices of refraction of the liquid crystal materials and of the polymer matrix. Optimum levels of transparency are achieved when the index of refraction of the liquid crystal material is closely matched to the index of refraction of the polymer matrix, a match that is usually found by trial and error. The level of obtainable transparency decreases as the disparity between the indices of refraction increases.
An electric potential can be applied to the light modulating material using an electric stylus, print element, or ion source. When a stylus is used, for example, letters or words can be formed on the light modulating layer. Alternatively, the light modulating material can be sandwiched between two electrically conductive layers. When two electrically conductive layers are used, an electric potential can be programmed to be passed across certain portions or areas of the light modulating material to create the desired message or effect.
The light modulating material of the invention can be used in many electro-optic display applications, such as signs, electro-optic windows, clocks, mirrors and the like.
The light modulating material of the invention can also be imaged thermally by such means as a heated stylus, laser, or elevated ambient temperature, and is therefore useful in temperature indicating devices, thermographic applications, and the like.
The following examples illustrate the preparation of the light modulating material of the invention.
EXAMPLE 1
The following materials were mixed and then cast on the conductive side of indium tin oxide coated polyester at a wet thickness of 3.0 mils.
1.15 g--(35% in toluene) Rohm and Haas, 608X, hydroxy-functional acrylic resin.
0.80 g--BDH Chemical Ltd., E7 liquid crystal mix.
0.64 g--(20% in toluene) Mobay Chemical Co., N75, hexamethylene diisocyanate.
0.01 g--(0.1% in toluene) M and T Chemical Co., dibutyl tin dilaurate.
The toluene was allowed to evaporate for 10 minutes at room temperature to form a translucent layer. A second piece of ITO coated Mylar was then laminated to the translucent layer with a hot (120° F.) roll. The laminate turned clear when subjected to a field of about 35 volts A.C. (VAC).
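As a cross-check on the formulation above, the solution weights can be converted to a solids basis and compared with the preferred liquid-crystal-to-polymer weight ratio of about 1:1 to 2:1 stated earlier. A minimal sketch (the component names and concentrations are taken from Example 1; the solids arithmetic is an illustration, not part of the patent):

```python
# Solids-basis weight ratio for the Example 1 formulation.
# Each entry: (solution weight in grams, solids fraction).
components = {
    "608X acrylic resin": (1.15, 0.35),   # 35% in toluene
    "N75 diisocyanate":   (0.64, 0.20),   # 20% in toluene
    "E7 liquid crystal":  (0.80, 1.00),   # added neat
    "tin catalyst":       (0.01, 0.001),  # 0.1% in toluene, negligible
}

def solids(name):
    """Return the dry (solids) weight contributed by a component."""
    weight, fraction = components[name]
    return weight * fraction

polymer = solids("608X acrylic resin") + solids("N75 diisocyanate")
liquid_crystal = solids("E7 liquid crystal")
ratio = liquid_crystal / polymer

print(f"polymer solids:  {polymer:.3f} g")
print(f"liquid crystal:  {liquid_crystal:.2f} g")
print(f"LC : polymer = {ratio:.2f} : 1")
```

On a solids basis the mixture works out to roughly 1.5 parts liquid crystal per part polymer (resin plus diisocyanate), inside the preferred 1:1 to 2:1 range.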
EXAMPLE 2
The following materials were mixed, coated and laminated as in Example 1.
4.76 g--(35% in toluene) Rohm and Haas, B44, thermoplastic acrylic resin.
4.36 g--(35% in toluene) Johnson Wax, 800B, hydroxy-functional acrylic resin.
4.00 g--Hoffman La Roche, ROTN 570, nematic liquid crystal mix.
0.63 g--(20% in toluene) Mobay Chemical Company, N75, hexamethylene diisocyanate resin.
0.43 g--(1% in toluene) Kenrich Petroleum Co., LICA 44 neoalkoxy titanate.
0.22 g--(1% in toluene) Kenrich Petroleum Co., LICA 09 neoalkoxy titanate.
0.48 g--(0.1% in toluene) Aldrich Chemical Co. di-butyl tin dilaurate.
1.20 g--(1.0% in toluene) BDH Chemical Ltd. CB15 chiral mesogenic liquid crystal.
The laminate turned clear when subjected to a field of about 60 VAC.
EXAMPLE 3
The following materials were mixed and allowed to stand for 30 minutes. The solution was then filtered, cast and laminated as in Example 1.
3.90 g--(35% in toluene) Johnson Wax, 800B, hydroxy-functional acrylic resin.
3.00 g--BDH Chemicals, Ltd., E7 liquid crystal mix.
0.58 g--(20% in toluene) Mobay Chemical Co., N75, hexamethylene diisocyanate resin.
0.30 g--(1% in toluene) Kenrich Petroleum Co., LZ 44, neoalkoxy zirconate.
0.15 g--(1% in toluene) Kenrich Petroleum Co., LZ 09, neoalkoxy zirconate.
0.40 g--(0.1% in toluene) Aldrich Chemical Co., di-butyl tin dilaurate.
The finished laminate turned on (became transparent) at 38 VAC with a turn on time which was substantially faster than the same material without the zirconate additives.
EXAMPLE 4
The following materials were mixed, cast as a 2 mil wet film, dried and laminated as in the previous examples.
3.96 g--(35% in ethyl acetate) Johnson Wax, 800B hydroxy-functional acrylic resin.
0.29 g--(20% in ethyl acetate) Mobay Chemical Co., N75, hexamethylene diisocyanate resin.
3.00 g--ROTN 132, Hoffman La Roche liquid crystal.
0.30 g--(1% in toluene) Kenrich Petroleum, LICA 44, neoalkoxy titanate.
0.15 g--(1% in ethyl acetate) Kenrich Petroleum, LICA 09, neoalkoxy titanate.
0.45 g--(10% in toluene) Troy Chemical Co., colloidisperse.
The material was allowed to stand for 24 hours at room temperature and then it was switched on at about 28 VAC. This is approximately 1.2V per micron of film thickness.
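The quoted field strength of about 1.2 V per micron, together with the 28 VAC switching voltage, implies a dry-film thickness that is not stated directly in the example but can be back-calculated. A sketch (the inferred thickness is an estimate, not a measured value from the patent):

```python
# Back-calculate the implied dry film thickness of Example 4
# from the switching voltage and the quoted field strength.
MICRONS_PER_MIL = 25.4

switch_voltage = 28.0   # VAC, as measured
field_strength = 1.2    # V per micron, as quoted

thickness_um = switch_voltage / field_strength
thickness_mil = thickness_um / MICRONS_PER_MIL

print(f"implied dry thickness: {thickness_um:.1f} um ({thickness_mil:.2f} mil)")
```

The implied thickness of roughly 23 µm (about 0.9 mil) is consistent with the 0.3 to 3.0 mil dried-film range given earlier for these coatings.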
EXAMPLE 5
5A
The following ingredients were mixed, cast and laminated as in Example 4.
3.00 g--ROTN 570 liquid crystal from Hoffman La Roche Chemical Co.
2.80 g--(50% in toluene) Johnson Wax, 800B, hydroxy-functional acrylic resin.
2.90 g--(20% in toluene) Mobay Chemical Co., N75, hexamethylene diisocyanate.
0.43 g--(0.1% in toluene) di-butyl tin dilaurate.
0.8 g--toluene.
The samples were allowed to stand for 24 hours prior to testing. The results of a switching time test of this material using 40 VAC, 100 Hertz, held on for one second, are shown in FIG. 2. The slanted ramp indicates a slow switching from the translucent state to the transparent state.
The results of another test during which transmission was measured as a function of voltage are shown in FIG. 3. The solid line indicates transmission as a function of voltage as the voltage is being increased. The broken line indicates hysteresis (i.e., higher transmission at each voltage level) as the voltage is being lowered.
5B
The following ingredients were mixed, cast and laminated as in Example 5A:
3.00 g--ROTN 570 liquid crystal.
2.80 g--50% 800B in toluene.
2.80 g--20% N75 in toluene.
0.90 g--1% KS100 organometallic titanate complex in toluene (Kenrich Petroleum)
0.43 g--0.1% dibutyl tin dilaurate in toluene.
0.90 g--toluene.
The results of a switching time test of this material using 40 VAC, 100 Hertz, for one second, also are shown in FIG. 2. This example is illustrative of the effect which the organometallic complexes can have on improving the switching time of an electro-optic device. The absence of a slanted ramp indicates that the light modulating material of Example 5B switches quickly and thoroughly from the translucent state to the transparent state.
The graphs of FIG. 2 also illustrate the rapid turn off time obtainable with the light modulating materials of the invention and that the organometallic complexes employed in certain preferred embodiments, as in Example 5B, do not adversely affect the turn-off time.
FIG. 3 also shows transmission versus voltage for this material. The presence of titanate causes a significant lowering of the voltage required to achieve any given percentage of transmission, up to and including complete "turn-on". This is true whether the voltage is being raised or lowered.
EXAMPLE 6
The following materials were mixed, cast and laminated as in Example 1:
3.00 g--BDH Chemicals Ltd., E7, liquid crystal mixture.
2.50 g--(50% in toluene) Johnson Wax, 815B, carboxy functional acrylic polymer.
0.71 g--(35% in ethyl acetate) Shell Chemical Co., EPON 1004, epoxy crosslinker
0.43 g--toluene
The sample was allowed to stand for 24 hours at room temperature. The 1 mil thick sample could be turned on at about 50 VAC.
EXAMPLE 7
Two variations of a basic hydroxy-functional acrylic and nematic liquid crystal film were prepared as described in the previous examples.
7A
3.00 g--BDH Chemicals, Ltd., nematic liquid crystal
3.00 g--Johnson Wax, 800B, hydroxy-functional acrylic resin (50% in toluene)
0.43 g--Aldrich Chemical Co., dibutyl tin dilaurate (0.1% in toluene)
1.86 g--toluene
7B
3.00 g--BDH Chemicals, Ltd., nematic liquid crystal
2.77 g--Johnson Wax, 800B, hydroxy-functional acrylic resin (50% in toluene)
0.58 g--Mobay Chemical Co., N75, hexamethylene diisocyanate (20% in toluene)
0.43 g--Aldrich Chemical Co., dibutyl tin dilaurate (0.1% in toluene)
1.51 g--toluene
The results of a switching time test of these materials at 40 VAC, 100 Hertz, for an "on" time of 1 second, are shown in FIG. 4. The slow turn-off time for 7A is typical of polymers which are not or cannot be crosslinked with a crosslinking agent such as a diisocyanate.
Authors: Hongchang Wang, Tunhe Zhou, Yogesh Kashyap, Kawal Sawhney (all Diamond Light Source)
Co-authored by industrial partner: No
Type:
Conference Paper
Conference: SPIE Optical Engineering + Applications, 2017
Peer Reviewed: No
State:
Published (Approved)
Published: September 2017
Diamond Proposal Number(s): 14242
Abstract: For modern synchrotron light sources, the push toward diffraction-limited and coherence-preserved beams demands accurate metrology on X-ray optics. Moreover, it is important to perform in-situ characterization and optimization of X-ray mirrors since their ultimate performance is critically dependent on the working conditions. Therefore, it is highly desirable to develop a portable metrology device, which can be easily implemented on a range of beamlines for in-situ metrology. An X-ray speckle-based portable device for in-situ metrology of synchrotron X-ray mirrors has been developed at Diamond Light Source. Ultra-high angular sensitivity is achieved by scanning the speckle generator in the X-ray beam. In addition to the compact setup and ease of implementation, a user-friendly graphical user interface has been developed to ensure that characterization and alignment of X-ray mirrors is simple and fast. The functionality and feasibility of this device is presented with representative examples.
Subject Areas:
Technique Development,
Physics
Instruments: B16-Test Beamline
2020 Agricultural Export Yearbook
The 2020 U.S. Agricultural Export Yearbook provides a statistical summary of U.S. agricultural commodity exports to the world. This summary lists only the United States’ primary trading partners. The Yearbook is produced by the U.S. Department of Agriculture’s (USDA) Foreign Agricultural Service (FAS) using trade data published by the U.S. Census Bureau of the U.S. Department of Commerce. Foreign country export data was sourced from the reporting countries’ national statistical agencies as reported through Trade Data Monitor (TDM).
The 2020 U.S. Agricultural Export Yearbook consists of two sections: (1) top U.S. commodity exports and (2) top destinations for U.S. exports. The Yearbook utilizes FAS Product Groups that can be found at FAS’ Global Agricultural Trade System (GATS) located at https://apps.fas.usda.gov/gats/. The product groups are defined using the Harmonized Tariff Schedule (HTS) at the 10-digit level and aggregated into classifications that include the primary commodity and its derivatives. FAS has titled these product groups as “BICO (HS-10).” BICO is an FAS designation that stands for Bulk, Intermediate & Consumer Oriented goods. The bulk commodity groups, such as corn, wheat, and rice, are aggregations of very few HTS codes. For example, the soybeans product group includes only 2 HTS codes; and cotton includes only 5 codes; while the Beef and Beef Product category incorporates 26 HTS lines; Dairy Products includes 46 HTS lines; and Fresh Vegetables includes 70 HTS codes.
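The aggregation of HS-10 lines into BICO product groups described above amounts to a code-to-group lookup followed by a sum of export values. A minimal sketch of that idea (the HTS codes and dollar values below are hypothetical placeholders, not figures drawn from GATS):

```python
# Sketch of aggregating HS-10 export lines into BICO-style product groups.
# The codes and dollar values are hypothetical placeholders.
hts_to_group = {
    "1201900010": "Soybeans",
    "1201900095": "Soybeans",
    "5201000500": "Cotton",
    "0201100510": "Beef & Beef Products",
}

export_lines = [
    ("1201900010", 14_000_000),
    ("1201900095", 6_500_000),
    ("5201000500", 3_200_000),
    ("0201100510", 7_700_000),
]

# Sum export values per product group.
totals = {}
for code, value in export_lines:
    group = hts_to_group[code]
    totals[group] = totals.get(group, 0) + value

for group, value in sorted(totals.items()):
    print(f"{group}: ${value:,}")
```

The real product groups bundle anywhere from 2 HTS codes (soybeans) to 70 (fresh vegetables) under a single heading in exactly this fashion.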
The 16 product groups or commodity aggregations, displayed in the Yearbook, are based on the United States’ largest export categories. Ethanol is not considered an agricultural product according to the USDA definition of agriculture, and its export value is not included in the total value of U.S. agricultural exports ($145.7 billion).1 However, a Yearbook page on ethanol has been included in this publication because of the large value of its exports and its importance to the agricultural community and rural America. The top 15 export product groups (not including ethanol) account for nearly 72 percent of total U.S. agricultural products exported in 2020.
The country Yearbook pages include the United States’ top 15 export destinations as well as a page for the United Kingdom (UK). The European Union (EU27+UK), a customs union comprised of 28 member states, is included as a single trading partner. The only exception is that the UK has its own yearbook page given the importance of its withdrawal from the EU-28. The top 14 export markets represent 80 percent of total U.S. agricultural exports in 2020.
Commodities
- Export Overview
- Beef & Beef Products
- Corn
- Cotton
- Dairy Products
- Ethanol
- Fresh Fruits & Vegetables
- Pork & Pork Products
- Poultry Meat & Products
- Prepared Food
- Rice
- Soybeans
- Tree Nuts
- Wheat
Countries
- Country Overview
- Canada
- China
- Colombia
- Egypt
- EU27+UK
- Hong Kong
- Indonesia
- Japan
- Mexico
- Philippines
- South Korea
- Taiwan
- Thailand
- United Kingdom
- Vietnam
1 The USDA changed its definition of “agricultural products” in March 2021 to conform to World Trade Organization standards. The new definition includes ethanol. | https://www.fas.usda.gov/data/2020-agricultural-export-yearbook |
Rotate 3-d coordinate system.
Category
Calling Sequence
rot_3d, axis, x1, y1, z1, ang, x2, y2, z2
Inputs
axis=Axis number to rotate about: 1=X, 2=Y, 3=Z. in
x1, y1, z1 = arrays of original x,y,z vector comp. in
ang = rotation angle in radians. in
Keyword Parameters
/DEGREES means angle is in degrees, else radians.
Outputs
x2, y2, z2 = arrays of new x,y,z vector components. out
Common Blocks
Notes
Note: Right-hand rule is used: Point thumb along +axis.
Fingers curl in vector rotation direction (for +ang).
This is for coordinate system rotation. To rotate the
vectors in a fixed coord. system use the left hand rule.
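The sign convention in the notes above can be transcribed into a short routine (a Python sketch of the documented behavior, not the IDL source itself): a coordinate-system rotation by +ang about an axis resolves a fixed vector into the rotated frame, which is the inverse of rotating the vector itself.

```python
import math

def rot_3d(axis, x1, y1, z1, ang, degrees=False):
    """Rotate the coordinate system by +ang about the given axis
    (1=X, 2=Y, 3=Z) and return the vector's components in the new frame."""
    if degrees:
        ang = math.radians(ang)
    c, s = math.cos(ang), math.sin(ang)
    if axis == 1:      # frame rotated about X
        return x1, y1 * c + z1 * s, -y1 * s + z1 * c
    if axis == 2:      # frame rotated about Y
        return x1 * c - z1 * s, y1, x1 * s + z1 * c
    if axis == 3:      # frame rotated about Z
        return x1 * c + y1 * s, -x1 * s + y1 * c, z1
    raise ValueError("axis must be 1, 2, or 3")

# A unit vector along +X, seen from a frame rotated 90 degrees about +Z,
# points along the new frame's -Y axis.
print(tuple(round(v, 6) for v in rot_3d(3, 1.0, 0.0, 0.0, 90, degrees=True)))
# -> (0.0, -1.0, 0.0)
```

Each branch applies the transpose of the corresponding vector-rotation matrix, which is what distinguishes a coordinate-system rotation from rotating the vector in a fixed frame.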
Modification History
R. Sterner. 28 Jan, 1987.
6 May, 1988 --- modified to work with any shape arrays.
R. Sterner, 6 Nov, 1989 --- converted to SUN.
RES 13 Feb, 1991 --- added /degrees.
Johns Hopkins University Applied Physics Laboratory.
Copyright (C) 1987, Johns Hopkins University/Applied Physics Laboratory
This software may be used, copied, or redistributed as long as it is not
sold and this copyright notice is reproduced on each copy made. This
routine is provided as is without any express or implied warranties
whatsoever. Other limitations apply as described in the file disclaimer.txt. | https://www.harrisgeospatial.com/docs/rot_3d.html |
Abstract
Antonio Gramsci’s interpretation and analysis of “hegemony,” its mechanisms, causes and consequences for the Left, is fundamentally an attempt to grapple with how culture and the “common sense of the epoch” (Miliband, 1990) grow out of class society and impose their ontological structure on even those whose interests it opposes. Given the continued existence and deepening of class divisions in the 21st century, an understanding of Gramsci’s work may be even more of a critical project for the Left now than when it was first written. The terrain on which political battles are conducted may have shifted in a multitude of ways, not the least of which being the influence of counter-hegemonic movements outside of traditional class struggle, but much of the operative systems of both domination and resistance remain similar. In first outlining an interpretation of Gramsci’s thinking on the question of hegemony in relation to political praxis, and then investigating the case of Greece in the post-2008 reality, this paper demonstrates that the failure of the Syriza party to resist EU-imposed austerity can be used as an example of hegemony reasserting itself over a Left project. Seen in this way, the experience of Greece contains important lessons about the necessity for the forces of the Left to build a new hegemony so as to supersede the currently dominant neoliberal discourse.
When reading Antonio Gramsci, it is important to keep in mind that, unlike many future thinkers whom his theories of hegemony and the cultural aspects of class society would influence, he was both a scholar and a leader of a political movement. In fact, given that his major writings were produced only after he had been imprisoned by the Mussolini regime, it is likely that he would have seen his primary role as a leader in the Italian Communist Party rather than a Marxist scholar per se. As such, Gramsci’s writings and overall analytical framework must be considered not in the realm of mere theorizing but, rather, as he himself described Machiavelli’s writings, “the style of a man of action, a man who wants to encourage action” (Gramsci, p. 141).
His writings primarily concern both the question of why revolution in production along Marxian lines had not occurred despite increasing class polarization and the more critical question of what, given this, ought to be the course of action for Marxist political movements. In this focus, political tactics—and the ideology underpinning these tactics—Gramsci builds more of an apparatus for analysis around questions that thinkers such as Rosa Luxemburg were only beginning to imply in their writings.
Gramsci’s writings and overall analytical framework must be considered not in the realm of mere theorizing but, rather, as he himself described Machiavelli’s writings, “the style of a man of action, a man who wants to encourage action.”
Namely, Gramsci’s interpretation and analysis of “hegemony,” its mechanisms, causes and consequences for the Left, is fundamentally an attempt to grapple with how culture and the “common sense of the epoch” (Miliband, 1990) grow out of class society and impose their ontological structure on even those whose interests they fundamentally oppose. Given the continued existence and deepening of class divisions in the world of the 21st century, an understanding of Gramsci’s work may be even more of a critical project for the Left now than when it was first written.
The terrain on which political battles are conducted may have shifted in a multitude of ways, not the least of which being the influence of counter-hegemonic movements outside of traditional class struggle, but much of the operative systems of both domination and resistance remain similar. In first outlining an interpretation of Gramsci’s thinking on the question of hegemony in relation to political praxis, and then investigating the case of Greece in the post-2008 reality, this paper will seek to demonstrate that the failure of the Syriza party to resist EU-imposed austerity can be seen as an example of hegemony reasserting itself over a Left project. Seen in this way, the experience of Greece contains important lessons about the necessity for the forces of the Left to build a new hegemony so as to supersede the currently dominant neoliberal discourse.
The Stuff of Thought
Though the term “hegemony” is frequently used in political discussions of all types, it is important to grasp the sense in which Gramsci uses the term to know why it is so central to his thought. In a basic sense, hegemony is the position of dominance of a particular class over the whole society at a given moment. In the era of feudalism, for example, the hegemonic class was that of the lords and royals, whose interests reigned predominant over those of the economic subaltern, in this case the serfs. This notion of social preeminence by class is, of course, the essence of Marx’s theory of history, that a new era is only truly begun when one class displaces as the prime beneficiary of the prevailing system of production.
The elaboration that Gramsci puts on this, building from Marx’s work in The German Ideology, is to identify two, interlinked but distinctive, mechanisms by which dominant classes retain their hegemonic positions. Marx saw bourgeois dominance under capitalism as being primarily maintained by the threat of starvation and the brute repression of state and private armies against workers’ occasional revolts. This can best be summed up by the phrase “hegemony by control,” and Gramsci certainly identifies it as crucial in his reference to “state coercive power which “legally” enforces discipline” (Gramsci et al., p.12). However, there is a second method, best termed “hegemony by consent,” which is explained as “the “spontaneous” consent given by the great masses of the population to the general direction imposed on social life by the dominant fundamental group” (Gramsci et al., p.12).
This method is more invisible and less directly coercive, but nevertheless plays perhaps a more crucial role in maintaining existing class relations. By making alternatives appear dubious, risky or simply impossible, the hegemonic apparatus of the dominant group closes off the ability of the subaltern to even think of or articulate theories of opposition, instead resigning themselves to the current order. Subsequent thinkers in the tradition of Gramsci, such as Ralph Miliband (1990), have further broken down the "hegemony by consent" question into two subcategories, which could be termed "strong" and "weak" hegemony. The former occurs when subordinate classes "interiorize the values and norms which dominant classes themselves have adopted and believe to be right and proper"; the latter consists of the proposition that, no matter the opinion of the subordinate class of the current order, "any alternative would be catastrophically worse."
The shift from the “strong” to the “weak” form of hegemony, which it could be argued is seen in the movement from post-war Keynesian thought to post-Thatcher neoliberalism as the dominant fraction within the capitalist class, does not make hegemony any less of a force to be reckoned with. It does, however, open up greater potentialities for the Left acting in a counter-hegemonic way, as the concrete benefits of bourgeois hegemony are shared by an ever-shrinking subset of society.
An important role in the process of hegemonic formation is played by intellectuals, which Gramsci identifies as a particular social strata with a particular social function. Though agreeing with Marx's thoughts on the nature of ideology, in particular its inseparability from economic forces and class position, Gramsci sees intellectuals as articulating for a particular class "homogeneity and an awareness of its own function" (Gramsci, Hoare & Nowell-Smith, p. 5). This did not mean that he believed non-intellectuals were fundamentally unable to grasp their own interests in relation to class dynamics, stating that "all men (sic.) are intellectuals" (Gramsci et. al., p. 9), but that only some were appointed in society as having the function of justifying these dynamics through recourse to supposed higher principles. Intellectuals are the means by which dominant economic classes both come to an articulation of their collective values and principles in the social and political spheres and recast these interests as being "common sense" or the "collective interest" of a society as a whole. Crucially, Gramsci includes in the category of "intellectual" not just philosophers or other academics that would commonly be thought of as such, but also "ecclesiastics," which he terms a "category of intellectuals organically bound to the landed aristocracy" (Gramsci et. al., p.7).
In the modern day, by this same term, we ought to think of the culture industry, journalists, and other social actors which mediate between class layers as belonging to this category of “intellectuals” as well. Capitalism is distinct from previous historical epochs in this way, as “previous ruling classes were essentially conservative in the sense that they did not tend to construct organic passages from other classes into their own” (Gramsci et. al., p.260). This is both in the sense of a relative level of class fluidity in capitalism as compared to feudalism, and in the sense that capitalism must be seen to be “good” for a wider swath of the society than previous epochs in order to sustain itself. In other words, though capitalist societies, to varying degrees and at various levels of development, certainly still include a large amount of hegemony by control, particularly in the developed industrial and post-industrial periods, there is a relative shift to the hegemony of consent.
Though individuals and groups involved in reproducing such hegemony do not serve a direct economic production function, and in fact detract from an environment of maximum productive capacity in the purely classical economics sense, they are “justified by the political necessities of the dominant fundamental group” (Gramsci et. al., p. 13). As such, their existence and proliferation as allied with the dominant class represent a kind of long term investment, which may reduce profits in the short term, whilst reducing the chances of revolt and thereby the destruction of the entire profit system over the long term.
A similar point can be made about the various concessions in the forms of the “high wages” that Gramsci sees as being part of the system of “Fordism” which was slowly coming into existence in Italy during this time, or in terms of various welfare state measures. In both cases, Gramsci did not oppose such measures or the struggle for them as such, seeing that “abstention is linked with the formula “so much the worse, so much the better” (Gramsci, p. 155), but nevertheless looked at them skeptically and with an eye to how they could be made revolutionary. In this sense, he shares with Rosa Luxemburg a similar set of thoughts on the relationship between reform and revolution, seeing the former as both necessary to strengthen the “indissoluble tie” (Luxemburg, 1900) between workers and Marxist parties and containing the potential sparks of later revolution, whilst nevertheless not being inherently revolutionary in itself (though he adds an additional analysis of tactical political positions to this, which will be elaborated on momentarily).
The question of obtaining hegemony, then, would seem to rest just as much with having a strata of intellectuals able to formulate and articulate both a form of class consciousness amongst the proletariat and a sense in which the proletariat “think as members of a class which aims at leading the peasants and the intellectuals” (Gramsci, p. 36). In other words, it is not enough to simply have a sense of one’s own place in the system of production, but also of conceiving what a world with proletarian hegemony looks like, and how it can lead and benefit a wider range of social strata.
From this conundrum, it is no surprise that a good portion of Gramsci’s writings are dedicated to the issues of both education and political action in the “intellectual” realm. Referencing the figure of the Prince from the work of Machiavelli, he states that a modern form of analogous social and political leadership cannot be vested in one person, but rather in “an organism; a complex element of society in which the cementing of a collective will, recognized and partially asserted in action, has already begun” (Gramsci, p. 137). He sees this “organism” as the political party, which has the function of both organizing individuals into a coherent political formation, and of creating independent intellectuals of the working class through a system of political education and leadership. The political party is particularly important for development of a political, as opposed to strictly economic, class consciousness because political parties are where individuals “become agents of more general activities of a national and international character” (Gramsci et. al., p. 15).
In other words, an individual worker may be able to struggle for what Luxemburg would term “merely economic” demands through institutions other than a political party (a trade union, for instance), but it is only in a political party where she becomes a member of a class capable of obtaining hegemony. This is both for the reason that political parties are linked to the struggle for the control of state power1, which is one of, though not the sole, key mechanisms for the enforcement of hegemony and because political parties are where individuals begin to experience themselves as actors capable of creating political change. In this sense, Gramsci’s definition of a “political party” is not limited to the colloquial sense of a group participating in an electoral contest. Social movements of various kinds could also be termed “political parties” in that they are organized for fundamentally political purposes aimed at contesting existing conditions.
In either case, there is a necessity which Gramsci identifies both for converting certain members of the traditional intellectual strata, philosophers, university professors and such, to the proletarian cause, and for developing a strata of "organic" intellectuals from within the proletariat themselves. The statement that a party ought to be "devoted to the question of intellectual and moral reform" (Gramsci, p.139) merely states the wider truth that the hegemony of consent is to a large degree founded upon internalized intellectual and moral beliefs about the operations of society. The extent to which any project challenging the existing hegemony can hope to be successful is, in the first instance, conditioned on whether it successfully challenges these internalized assumptions.
Unlike thinkers in the Frankfurt School, Gramsci retains an optimism about the ability, through both material and intellectual struggle, for the working class to break out of the trap of bourgeois hegemony and to build and assert its own hegemonic position. From seeing the actions of the workers in Turin, both to bring the factories they worked in under collective control, and their attempts to articulate a wider political programme drawing in those groups, such as peasants and intellectuals, outside of the strict "proletariat," Gramsci saw great potential in the ability of workers to conceive of a counter-hegemony that was truly such, not merely opposed to the status quo on the terms of it being bad for their sectional class interests. Rather, it spoke to a struggle that, at least potentially, embraced a wider set of social strata in a collective project of improvement, for the betterment of all. In the modern context, this could be thought of in terms of Marxist engagement with social movements which may not themselves be class-based as such.
For instance, movements for the rights of migrant farm workers or various feminist currents may not necessarily be proletarian in nature, but they are counter-hegemonic in that they challenge status quos of White supremacy and misogyny which work to maintain current class relations. This is both for the reason that, as Gramsci alludes to in his discussions about stereotypes of Southern Italians amongst Northern workers, such identity markers are often used by dominant classes to divide workers between themselves, and because, as a factual matter, "women and immigrants are generally situated in the unprotected [labour] market" (Laclau & Mouffe, p. 72). For this reason, as a matter of political necessity2, socialists must seek to engage with and build into their counter-hegemony the experiences and insights of these movements.
Furthermore, the continued existence of political and economic struggle in and of itself means that, contrary to the completely controlled system of late capitalism as seen by Adorno and Horkheimer, there are still moments where real alternatives come into view. At the very least, there are moments of negation of the present reality which speak to potential alternatives even if they may not be fully articulated. As Miliband states, “hegemony is not something that can ever be taken to be finally and irreversibly won” (1990), and thereby all points of hegemonic discourse, not just those of economic and class struggle, remain contested political terrain.
It is in no sense accidental or merely for rhetorical effect that Gramsci often uses the language of military tactics to talk about political struggles. Both because of his living in an era of often violent confrontation between various political forces and because of his keen study of Italian and broader European history, he was able to see that the lines drawn between the world of politics and that of war were often blurry if not invisible. For Gramsci, concrete political praxis consists of two basic elements, the war of position and the war of maneuver, which are related but still distinct. The former is the grand struggle between classes for hegemony, the essential social conflict that Marx identified as the driving feature of human history. Gramsci states that such a war “once won is decisive definitively” (Gramsci et. al., 239), meaning that a new class has overthrown the old in the dominant position of society.
This is similar to the conception put forward by Marx and elaborated upon by Luxemburg of "revolution" as consisting of a fundamental change in the system of production, not necessarily as the result of a single insurrectionary act. Throughout his writings, Gramsci uses the term "passive revolution" to describe a similar phenomenon surrounding the Italian Risorgimento and the economic and social changes it brought to the Peninsula. When describing the innovations of Fordist production methods in Italian industry, Gramsci writes of "hegemony born in a factory" (p. 285), indicating that some wars of position may be won through methods not commonly seen as "political" in nature. By contrast, the war of maneuver consists of definite, concrete political actions taken in the service of some goal. Examples of this could include mass strikes, participation in elections, distribution of propaganda, or a whole variety of other activities. The critical point, however, is that, even though wars of maneuver are ultimately done in the service of advancing a relative standing in the overall war of position, it "subsists so long as it is a question of winning positions which are not decisive" (Gramsci et. al., p. 239). In other words, it is wholly possible, and indeed often the case, that the proletarian movement can win a particular war of maneuver (a strike for higher wages, for instance), whilst not winning an overall positional victory.
At the same time, it is also true that victories of maneuver do advance the relative position of the class benefitting from them. Though they do not in and of themselves constitute the gaining of social hegemony, the ultimate prize in the contest of politics, they do function to build counter-hegemonic structures and consciousness, as well as to reinforce the sense of the subaltern group as being capable of taking over a dominant position. Advances of maneuver in the absence of a sufficiently sustainable position, however, can be quickly reversed or even rolled-back. For this reason, it is important to not confuse victory in one “war” for that of another, and to always be conscious of the relative balance of social forces at play. Furthermore, given the dynamic nature of capitalism as an economic system, “the identities of the opponents, far from being fixed from the beginning, constantly changes in the process” (Laclau & Mouffe, p. 60).
The proletariat therefore have a much harder task before them in attaining a hegemonic position than the bourgeoisie did in its victory over the feudal ruling class, given the static nature of the latter. An example of this can be seen in the shift on the Left from a critique of the welfare state in the period leading up to 1979, to a defense of the same in response to neoliberal attacks. The terrain of politics under capitalism, therefore, is one which is ever-shifting and contingent on a wide variety of social and economic factors outside of the simple political form of the state. This means that any philosophy advanced by a counter-hegemonic project must fundamentally be a responsive, dynamic one.
Gramsci was adamant in his discussion of philosophy that “it is not just the ideas that require to be confronted, but the social forces behind them” (Gramsci et. al., p. 321). This is of a piece with Marx’s statement that “not criticism, but revolution is the driving force of history” (1972). In order to deal “theoretically” with questions, it is necessary to deal with them in a practical manner as well, to articulate a form of social struggle which advances wars of maneuver and position and to actually execute them. These forms need not necessarily be acts of open rebellion in all cases, but should take into account the concrete needs of people in the immediate moment by way of connecting them to a longer-term struggle for social position. Political thought, in the simple act of thinking and acting politically, “transforms men (sic.), makes them different from what they were before” (Gramsci, p. 182), meaning that our selves are not static either, and can escape from the hegemony of the society they are born into.
The Case of Greece
The experience of Greece since the financial crisis of 2008 represents a particularly acute, and therefore instructive, example of where wars of maneuver run head-on into the realities of existing hegemonic economic and political structures. Through a combination of a collapse in its banking system and the subsequent imposition of harsh austerity measures by the European Union and the International Monetary Fund, Greece has been in a state of constant, externally imposed social crisis since 2008. Unemployment has hovered around the 25% range, with youth unemployment around 50%, and extreme poverty and social deprivation increasing year-by-year. Protests and marches have raged in the streets and squares of the country since, often accompanied by violence from state police and openly fascist parties such as Golden Dawn. Though it may be the starkest example, the shift from the hegemony of consent to that of control in the post-2008 era is hardly unique to Greece. Rather, it should be seen as in continuity with a broader shift of state functions away from social welfare and towards policing and other methods of social control in the developed world since the end of the 1970s.
The coming to power of the Coalition of the Radical Left (commonly referred to as Syriza, after its Greek acronym) in January of 2015 represented, for many, both the best hope for ending the ruling hegemony of neoliberal ideas in Europe and a chance to stop the social catastrophe the country had become. Despite being elected on a mandate to end austerity, and despite a referendum result in June of 2015 rejecting a new bailout package, the government ultimately capitulated to creditors in July of the same year, and has retained power to the present day whilst implementing new sets of measures demanded by creditors. This is all while protests continue to grow, the neo-Nazi right continues to gain in strength, and many of the previous allies of the government denounce it. The experience has left many who had initially supported Syriza to ask what had happened, that a moment that began with such promise would end in such a stark defeat.
Examined through a Gramscian lens, the true problem of Syriza’s time in power as a potentially revolutionary force, and the seed of its ultimate defeat, lay in its inability or unwillingness to harness its victory in a war of maneuver to take definitive steps in the war of position. Undoubtedly, the victory of Syriza opened at least the possibilities for a further left project to emerge. But, just as an insurrectionary moment in itself is no sure sign of revolution, so too is electing a “radical left” government no sure sign that one will actually emerge.
Syriza did cultivate a network of support amongst various so-called “solidarity projects” that emerged in the wake of the EU austerity measures, many of which endorsed alternative, anti-capitalist economic visions (Rakopoulous, 2014). However, it did not fundamentally orient its electoral platform towards them, mainly promising to grant them legal operational space and perhaps provide some government funding to them. In the event, it did provide the first of these things, but not the second, and has imposed a variety of further austerity measures which threaten the existence of these projects through regressive taxation and privatization. This unwillingness to place the solidarity projects at the centre of their economic and political recovery strategy is demonstrative of a general confusion around the party’s political strategy and the basis of its popular mandate.
It is important to remember that, during the January 2015 election, Syriza won on a promise not of a radical restructuring of Greek social and economic life, but rather an end to austerity and a return to pre-2008 normality, without a consideration of whether breaking with the European Union would be necessary for even these modest goals to be achievable. The party's electoral document, the Thessaloniki Program, was mainly made up of Keynesian welfare state measures, promises to tackle endemic corruption in the Greek economy, and social liberalism on issues such as same-sex marriage and immigration law. Furthermore, the party succeeded primarily by appealing to populist anti-elite sentiment, but articulated this more in terms of national sovereignty and social dignity rather than anti-capitalist sentiment or class consciousness as such. This is further confirmed by the fact that the party entered into coalition with the right-nationalist, anti-austerity Independent Greeks upon its electoral victory. Gramsci describes this narrow anti-elitism, not linked to an overall framework of historical analysis, as not "evidence of class consciousness – merely the first glimmer of such consciousness, in other words, merely as the basic negative, polemical attitude" (Gramsci et. al., p. 273).
In other words, Syriza was able to capitalize on popular anger, but it was either unable or simply did not care to engage in a process of political education as to the underlying sources of that anger, which may have opened more radical possibilities. Though the emphasis on sovereignty and alliances with nationalist elements could be seen as an attempt to "nationalize" the class character of the proletariat3 and put it into a position of being able to lead on behalf of a broad mass movement, this analysis obscures the essentially reformist nature of the critique of austerity put forward by Syriza. Other cases where "the popular mass identity was other and broader than class identity" (Laclau & Mouffe, p. 52) in the political programme of a socialist party have tended to be anti-colonial or otherwise anti-imperialist struggles marked by mass social mobilization and advanced almost exclusively by non-electoral means4. While Syriza and other elements of the Greek Left occasionally did make reference to the notion of Greece being a "debt colony" of the EU, the belief that this situation could be resolved by purely electoral means testifies to the largely rhetorical nature of such a claim, in terms of its political meaning. In this sense, they would have done well to heed Gramsci's warning to "not ape the methods of the ruling classes, or one will fall into easy ambushes" (p.232).
Even the “No” vote on the initial June 2015 bailout deal was proposed by the Syriza government in terms of a negotiating strategy to secure a better agreement by a show of political force, rather than an attempt to definitively break with the European Union and the wider neoliberal framework. Though it is true that some smaller elements which backed a “No” vote, such as the Antarsya party, did represent the vote in terms of a radical break measure, the overwhelming message coming from the “No” camp was about gaining negotiating leverage, and one must assume that is what the majority of voters had in mind as well, especially given contemporary polling showing a solid majority of the country in favour of staying within the EU and Eurozone5.
The retrospective attempt to reframe the "No" vote in terms of a "mass movement" wherein the government's "job was to follow that mass movement, not to decide if there was an alternative" (Ioakimoglou & Souvlis, 2016) overestimates the degree to which the Greek populace had taken on the belief that a world beyond the hegemony of the EU was, indeed, possible. This is not to claim, as some do, that the government had "no alternative" but to impose further austerity, but rather that the moment of that alternative being closed came far before June 2015.
Syriza should, therefore, be faulted for rejecting "in advance any thought about a rupture with the Eurozone" (Sotiris, 2016) and thereby not laying any of the necessary social and political groundwork that would have been needed to ensure popular support through a no doubt deeply painful period of economic transition which would have followed a rupture. By not proposing a true alternative to the politics of permanent austerity, Syriza was forced to capitulate, and many of the illusions about the all-powerful nature of elected governments, even those of the self-described radical left, crumbled away for a new generation, as they had for so many in the past.
The Lessons of Defeat
What, then, is the lesson to be learned for the Left from Syriza's defeat in victory? Fundamentally, it is not to confuse victory in a war of maneuver with victory in a war of position. The currently dominant hegemony may be "weak" and increasingly reliant on a method of naked control over attempts at consent, but it is hegemonic nevertheless. It is therefore deeply naïve to believe that the simple statement of a "No" constitutes a fully-formed counter-hegemonic process, especially without meaningful linkages to outside social movements and a commitment to intellectual and moral reform. Without a positive, counter-hegemonic vision that is not simply a negation of the status quo or a wish for a more "civilized" version of it, there is no formation of the kind of body politic able to break free of the currently dominant modes of thought for the long process of true revolution.
The great failure of Syriza's leaders was both in their underestimation of the forces arrayed against them in the European Union and the wider transnational capitalist class and in their inability to describe to Greek society an alternative vision. They were constrained in this, arguably, by their own political timidity, but also by isolation from sympathetic movements throughout Europe and internationally, as well as by the often-contradictory mandate given to them by the public. These constraints, too, were reinforced by the belief in electoral politics above all else as a method for achieving definitive change, as opposed to merely being an avenue by which to open the possibility of such change. Syriza may be a political party in the common definition, but it remained, even at its most radical points, far from fulfilling the functions of a "political party" in Gramsci's sense of the term. Above all, it was a mere negation of the existing social hegemony, constructed on the belief that it would not be necessary to build a wholly new one.
As Ralph Miliband wrote over 25 years ago, the task of socialism is encompassed in the “affirmation that an entirely different social order, based on radically different foundations, is not only desirable, but possible” (1990). Syriza was able to demonstrate the first of those propositions, though it hardly needed to given the state of the Greek political economy by January of 2015, but it proved either unwilling or unable to demonstrate the latter. It is in that next step, from the realm of pure theory to a “philosophy of praxis” (Gramsci et. al, p. 248), that the Left must move if it is to avoid the next Greece having the same outcome as the last one.
References
Sotiris, P. (2016, February 10). The Dream That Became a Nightmare. Retrieved October 15, 2016, from https://www.jacobinmag.com/2016/02/greece-syriza-alexis-tsipras-varoufakis-austerity-farmer-blockade-protests/
Endnotes
1. Though one should be careful not to reduce this to a conflation of state and government, as Gramsci states that "State = political society + civil society, in other words hegemony protected by the armour of coercion" (Gramsci et. al., p.263)
2. This is, of course, not to downplay the importance of these movements on their own terms. Rather, it is simply to point out the wholly necessary process of reaching beyond a narrow basis of interest when seeking hegemony.
3. Something which Gramsci identifies as occasionally necessary; see page 241 of Selections from the Prison Notebooks
4. Examples of this would include national liberation movements in Vietnam, South Africa and Algeria
Be perfectly on time for all your appointments, within and outside your organisation. Do not be late to events, except if absolutely unavoidable. This will help you to set an example of timeliness in your organisation, with your actions.
Be confident
Be confident about your mission and vision for your organisation, and have faith in your idea and your business model. Trust your team. Let your actions and speech reflect your confidence.
Be open
It is crucial for you to be open and receptive all the time. Be ready to listen to the suggestions and complaints of your employees, and make use of them.
Be receptive to new ideas, and willing to spend time thinking about them. Ensure that you do not get stuck in the rut of doing the same old thing over and over, and not thinking outside the box.
Be adaptive
Today’s business environment is highly volatile, and changes are an everyday affair. Hence, it is crucial for you, as a leader, to be adaptive to changes. It is your duty to make changes in your personal and professional life to suit changes in the business environment.
Be communicative
As the leader of your organisation, it is important that you be communicative. Be ready to talk to people within and outside your organisation. Being communicative is the key to networking, building lasting relationships with your employees, clients and vendors, and exchanging ideas.
Be helpful
Be helpful. Be kind. Be ready to reach out to people who need your guidance and your help within and outside your firm.
Be consistent
Be consistent in your words and actions. Stick to your words – both personally and professionally. Once you have made a commitment or a promise, do not turn away from it.
Be ethical
If you want to be respected, you should be ethical, fair and just in your role. Ensure that you follow ethical business practices and that you do not cheat in any way or indulge in illegal or shady activities.
Be humble
If you want to be loved and supported as a leader, it is important that you be humble and friendly. Mingle with your employees, and be a friend and a mentor to them. Do not walk around with a ‘chip on your shoulder.’
Be accepting
In spite of all the planning and attention to detail that you or your employees put into tasks, mistakes can happen. Be ready to accept and forgive mistakes – your own and those of others – and attempt to rectify them.
Learn from mistakes, and ensure that they are minimised over time.
Be decisive
In order to get the most out of each business opportunity, it is crucial that decisions be taken fast – even those which look very difficult. Do not keep your employees guessing. Take balanced decisions fast and on time, after a rational thought process. | https://www.supportbiz.com/articles/managing-growth/how-be-wonderful-leader.html |
The Conference Board Employment Trends Index sharply increased in February, after increasing in January. The index now stands at 107.74, up from 106.50 (a downward revision) in January. The change represents a 5.6 percent gain in the ETI compared to a year ago.
“The Employment Trends Index accelerated further in February, suggesting that strong job growth is likely to continue in the coming months,” said Gad Levanon, chief economist, North America, at The Conference Board. “The six-month growth rate of the index is the highest since 2014. The stable unemployment rate in recent months is a statistical illusion. The labor market is tightening and with such strong job growth, further declines in the unemployment rate are all but guaranteed.”
February’s increase in the ETI was fueled by positive contributions from six out of the eight components. From the largest positive contributor to the smallest, these were: Percentage of Respondents Who Say They Find “Jobs Hard to Get,” Industrial Production, Initial Claims for Unemployment Insurance, Real Manufacturing and Trade Sales, Number of Employees Hired by the Temporary-Help Industry, and Job Openings. | https://www.mdm.com/news/strategy-research/research/conference-board-employment-trends-index-increases-in-february/ |
Undocumented immigrant at center of police immigration policy is released
An immigration hearing will be scheduled for 38-year-old Jose De la Cruz, who will then apply for permanent resident status, his attorney Marc Christopher said.
De la Cruz had no valid immigration status at the time of his arrest Sept. 23, according to Christopher.
He has been in the U.S. 20 years and was consulting with an attorney on how to apply for legal residency at the time of his arrest, Christopher said.
De la Cruz was in a parked car in front of his south side home with his family when he was approached by immigration agents.
When the family refused to open the doors, demanding to see a judicial warrant, the agents flagged down Milwaukee police officers, who assisted in his arrest.
While officers were there, they requested a warrant for a probation violation from the state Department of Corrections, but the violation has never been explained. De la Cruz was convicted in February on a 2017 misdemeanor charge of carrying a concealed weapon after pleading no contest.
At the time of the 2017 arrest, De la Cruz was driving a family vehicle when police pulled him over for an equipment violation and found a handgun in the console, Christopher said Wednesday.
The weapon belonged to De la Cruz's wife, a concealed carry permit holder, and he did not know it was there, Christopher said.
"If the judge had made a determination (De la Cruz) was dangerous or likely to re-offend he would not have been given bond," Christopher said.
Christopher said it's common practice for Immigration and Customs Enforcement to comb through court records for convictions and believes that's how his client came to the agency's attention.
De la Cruz and his family appeared at a news conference Wednesday by immigrants rights group Voces de La Frontera, where Voces Executive Director Christine Neumann-Ortiz said his case shows why Milwaukee police need to revise their standard operating procedure covering immigration enforcement.
The De la Cruz family "became a public voice for why we need a Milwaukee Police Department policy of non-collaboration with ICE," Neumann-Ortiz said.
Voces wants police to adopt a policy in which a judicial warrant would be required for officers to assist in ICE operations.
The Fire and Police Commission's Policies and Standards Committee is scheduled to take up the matter Thursday.
In September, Milwaukee Mayor Tom Barrett asked the commission to review the circumstances surrounding De la Cruz's arrest, but it has yet to take action on the request.
Article 33 of the Employment Rights (NI) Order 1996 (statement of particulars of employment) requires an employer to state an employee’s place or places of work. These details are often contained in a contract by way of a mobility clause which reserves the right of the employer to change the place of work. When drafting or exercising a mobility clause there are certain considerations to be taken into account.
If the contract is being used to cover the requirements of Article 33 then at minimum it needs a reference to a place /places of work. If there is no mobility clause a court will imply a minimum term, if it is essential to make the contract work, allowing the employer to ask an employee to work within a reasonable daily distance of their home. The case of Courtaulds Northern Spinning Ltd v Sibson and Transport & General Worker’s Union IRLR 305 confirmed that the implied right did not have to be reasonable or for a genuine operational reason, just that it was needed to give the contract business efficacy. In order to decide this the court will look at the following:
Some arguments may be avoided by having a clause dealing with mobility and a clause can potentially extend the employer’s rights beyond what will be implied by a court.
Although the terms of a clause itself need not be reasonable the implementation of it may be limited by other implied terms. In United Bank Limited v Akhtar IRLR 507 the Employment Appeal Tribunal said that in the implementation of an express term there were three implied terms to abide by – reasonable notice of transfer, not to undermine trust and confidence and not to exercise the discretion around providing relocation expenses in a way that makes it impossible for the employee to move. Although Mr Akhtar’s contract had a mobility clause he was only given a few days’ notice to move 125 miles with no expenses provided. The EAT held that he had been constructively dismissed.
A relocation due to a downturn or cessation in work may also constitute a redundancy. In these circumstances the mobility clause will be taken into account, but an added factor to be considered is whether it is reasonable to ask the employee to relocate. Depending on the nature and extent of the relocation an employer may be able to sustain the argument that dismissal in these circumstances is for the employee’s refusal to relocate rather than redundancy.
Where an employer needs to relocate a number of employees and the employees refuse, the employer could consider dismissal and re-engagement. If the relocation involves more than 20 employees the employer should be mindful of the collective consultation requirements.
Although a mobility clause does not have to be reasonable to be enforceable employers should proceed with caution. It is much easier to seek agreement for any move than to try and impose it on an unwilling employee. The Akthar case makes it clear that employers should consider the impact of the clause and to try and mitigate that impact if possible. Employers should also consider alternatives – homeworking, ‘hot-desking’ or other flexible arrangements.
For any queries in relation to this article or for further advice on employment law, please contact Kiera Lee, Director and Head of Employment at Mills Selig.
| http://millsselig.com/knowledge-detail.php?s=does-an-employment-contract-need-a-mobility-clause
Limited quantity, low quality, degradation, and contamination are the major challenges in forensic DNA analyses. Current methods of analysis of short-tandem-repeat (STR) markers and single nucleotide polymorphisms (SNPs) require PCR amplification using target-specific primers. However, trace DNA often fails to amplify in PCR-based methods due to inhibition or degradation of priming sites, resulting in allelic dropout. Though various PCR optimization protocols have been developed, analysis of low quality and sparse forensic DNA samples is still challenging. Universal adapter ligation based massively parallel sequencing (MPS) methods have overcome the PCR amplification problems. Enrichment of target DNA regions is necessary for cost-effective MPS analysis. Hybridization capture methods have been developed to enrich whole mitochondrial DNA and SNPs for forensic analyses, yet the cost of baits prohibits wider adoption. Hence, I propose to develop methods to generate inexpensive non-PCR-based target enrichment baits for MPS analysis of DNA from difficult forensic samples. This proposal aims to generate baits for forensic markers and optimize hybridization capture methods by three objectives. First, demonstrate feasibility of an in-house method to generate non-PCR-based target enrichment baits for a small forensic panel consisting of ~2000 forensically relevant SNPs and STRs (2k panel). Second, develop hybridization capture target enrichment methods for forensic samples using the 2k panel. This will involve targeted sequencing of hair specimens from five volunteers, testing various input DNA and bait quantities and hybridization capture conditions, and comparing the resultant genotypes with microarray-generated genotypes. Finally, develop a 100,000 SNP/STR panel to improve the discriminatory power for sample mixture and kinship analysis, for missing person identification and for gene-genealogy searches.
The 100k panel will include SNPs used in forensic DNA phenotyping, SNPs informative of ancestry, externally visible characteristics, and individual identification. This panel will be tested for mixture deconvolution using mock mixtures of pre-genotyped control samples. The economic benefits of the in-house generation of baits for hybridization capture will be evaluated. Achievement of the proposed objectives will result in the development of methods and reagents valuable for the forensic community. Methods developed by and results of this project will be published in peer-reviewed journals and a detailed report will be submitted to NIJ for a broader audience. Protocols, design files, and bait panels will be made available for interested persons. | https://nij.ojp.gov/funding/awards/2020-r2-cx-0029 |
This application claims priority from provisional application Ser. No. 60/616,568, filed Oct. 6, 2004, the disclosure of which is incorporated by reference in its entirety.
The present invention is generally directed to animal repellant systems and is more specifically directed to a controller based system that is capable of exploiting several different techniques to deter certain animals from entering particular areas.
By nature animals roam from one location to another seeking food, water, and shelter. As a result it is inevitable that at times they encroach upon people's property. In some instances an animal's presence can be beneficial, yet there are other situations in which an animal's presence is destructive, burdensome, dangerous, annoying or otherwise undesirable. Skunks, bear, deer, birds, woodchucks, rabbits, dogs, cats, cows, horses, and many other animals may from time to time roam onto and undesirably intrude upon one's property.
History provides an example of such undesirable animal intrusions in the case of deer. Nationally, whitetail deer population estimates range from 20 million to 33 million, which represents a larger deer population than that which existed when Christopher Columbus arrived five centuries ago, according to a report in The Wall Street Journal, Dec. 1, 2004. Deer can be destructive and the damage they inflict on property is getting worse every year as their population grows. By nature, deer live on the edge of the forest where they can graze on plants, flowers and small trees while using the woods for cover. This makes residential backyards and commercial nurseries particularly inviting to deer. In fact, deer can eat as much as 3,000 pounds of plant matter a year or approximately 2,000,000 leaves. It is estimated that they cause more than $1 billion in residential property damage annually. Such grazing by deer can cause landscape damage, thus reducing the attractiveness of the property to potential buyers.
Many types of devices and methods have been and are presently being used to discourage animals, such as deer, from causing damage to landscaping material, such as perimeter fencing, which may or may not be electrified, as well as the covering of shrubs with some type of netting. These arrangements are time consuming and impair the aesthetics of the property to be protected. A keen sense of hearing and ability of animals, such as deer, to triangulate the exact origin of sounds is the animal's main defense against predators and danger in general. However, audio frequency emission systems for repelling deer presently in use are difficult to install and generally operate continuously thereby allowing deer to become accustomed to the constant audio output and thereby making the devices ineffective.
There is a need to provide a method or system for repelling animals that is convenient, is highly efficient and does not pose a physical risk to wildlife, pets and human beings. Prior art methods and systems for addressing these needs were too expensive, inhumane, ineffective or a combination of all of these. Based on the foregoing, it is the general object of the present invention to improve upon or overcome the problems and drawbacks of the prior art.
According to one aspect of the present invention, an animal repellant system is provided which includes a triggering means for detecting the presence of animals in a particular area and generating signals indicative thereof. The animal repellant system can include a controller in communication with the triggering means to receive the signals generated therefrom and to issue command signals responsive thereto. The animal repellant system also includes deterrent means for effectuating a repellant component of the animal repellant system in response to the command signals, thereby dissuading animals from entering into the particular area.
Another aspect of the current invention relates to an animal repellant system wherein the triggering means includes but is not limited to motion detectors, interrupted beam photo sensors, photo cells including photo cells with day and night program changes, pressure switches, and input from bi-directional data links. The triggering means can generate command signals or be connected to a controller which may be programmed to vary command signals according to non-periodic time and frequency patterns. In the preferred embodiment of the present invention, the controller produces the command signals.
In the preferred embodiment of the present invention the deterrent means includes a power amplifier and audio driver that generates sound in response to commands issued from the controller. The sound generated by the controller can be in the ultrasonic range, the sonic range or a combination thereof. Piezo electric audio drivers are preferred for effectuating sound wave type repellant components. When using piezo-electric audio drivers it is preferable that the wave signals be square waves, which contain a fundamental frequency plus harmonic frequencies. Certain audio drivers, which are constructed to increase the distortion of the audio output beyond that which is inherent in the square wave that is fed to the audio driver, can also be used in the present invention. Although piezo-electric audio drivers are preferred, the present invention is not limited in this regard as Terfenol (ETREMA TERFENOL-D is a registered trademark of Edge Technologies, Inc., Registration No. 1512330), dynamic, ribbon, electrostatic, and plasma audio drivers may also be used. While audio drivers have been described, the present invention is not limited in this regard as other deterrent means such as strobe lights, sprinklers, alarm systems and spot lights can also form a repellant component of the present invention.
When audio drivers are used, pattern variations in audio output can be governed by the controller through generation and transmission of command signals and wave signals. The command signals can include wave signals such as but not limited to square waves, saw-tooth waves and sine waves. The controller can cause a number of changes in the output of one or more repellant components using a preprogrammed protocol. For example, frequency, duration, number and direction of audio drivers actuated, selection of different frequency patterns, cancellation of ambient noise, total time interval in which command signals are generated, cadence type output defining the order or pattern in which sound is made, range and decibel level of the command signals and time interval where no frequency output occurs, can be changed. The preprogrammed protocol can also vary changes in the command signals over a random series of steps. In addition, the preselected time interval can be varied depending upon the signals generated from the triggering means.
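The randomized protocol described above can be pictured as a small generator of (frequency, duration) command steps. The following Python sketch is purely illustrative — the function name, frequency and duration ranges, and use of a seeded pseudo-random generator are assumptions for illustration, not details taken from the patent.

```python
import random

def command_pattern(steps, freq_range=(15_000, 25_000), dur_range=(0.5, 3.0), seed=None):
    """Return a list of (frequency_hz, duration_s) command steps with no
    fixed period. A hedged sketch: real firmware would feed these values
    to an amplifier/driver rather than simply returning them."""
    rng = random.Random(seed)
    pattern = []
    last_freq = None
    for _ in range(steps):
        freq = rng.randint(*freq_range)
        # Never repeat the same frequency twice in a row, so animals
        # cannot habituate to a constant tone.
        while freq == last_freq:
            freq = rng.randint(*freq_range)
        dur = round(rng.uniform(*dur_range), 2)
        pattern.append((freq, dur))
        last_freq = freq
    return pattern
```

Seeding makes a run reproducible for testing, while an unseeded deployment would drift through a non-periodic sequence of tones.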
A further aspect of the current invention relates to an arrangement of animal repellant systems comprising at least two animal repellant systems that are connected by a bidirectional communication network linking the controller in each of the animal repellant systems to at least one other animal repellant system. The arrangement can include at least two animal repellant systems controlled by a central controller or by a controller contained in each animal repellant system which works cooperatively with other controllers in a bidirectional communication network. The bi-directional communication network can be in the form of dedicated communication wiring, power wires used for communication and wireless radio frequency transceivers for communication purposes. In one embodiment of the present invention, wherein the power wires are used for bi-directional communication, measures well known to those skilled in the relevant art of communication over power wires are implemented to reduce noise levels and attenuation at operating frequencies. The animal repellant system can also communicate using standard protocols such as phone, cable or the internet.
FIG. 1 is a perspective view of an animal repellant system in accordance with the teachings of the present invention.
FIG. 2 is a perspective view of a bottom portion of the animal repellent system illustrated in FIG. 1, showing detail regarding the bottom of the controller.
FIG. 3 is a schematic view of the animal repellant system of the present invention.
FIG. 4 is a schematic view of a projection cone for providing square wave distortion in practicing the present invention.
FIG. 5 is a diagrammatic view showing a repellant system operatively connected to a microphone to detect and cancel ambient noise.
FIG. 6 is a block diagram showing a plurality of animal repellant systems connected in a serial pattern with individual controllers housed within each animal repellant system.
FIG. 7 is a block diagram similar to that shown in FIG. 6 but showing a network of animal repellant systems controlled by a central controller.
FIG. 8 is a top view of an arrangement of animal repellent systems, of the present invention, situated on a parcel of land, and further illustrating overlapping ranges of the repellent systems.
FIG. 9a is a schematic diagram illustrating communication links between the controller and the motion detectors of the animal repellant system, using separate wires for bi-directional communication and power transmission.
FIG. 9b is a schematic diagram illustrating communication links between the controller and the motion detectors of the animal repellant system, using the power wires for both bi-directional communication and power transmission.
FIG. 9c is a schematic diagram illustrating communication links between the controller and the motion detectors of the animal repellant system, using separate wires for power transmission and a wireless link for bi-directional communication.
FIG. 1 illustrates an animal repellant system 10, including a control module 12 housing a controller positioned therein. The control module 12 includes four ports 14 each housing a motion detector 16 for detecting a presence of animals in a particular area and for generating signals indicative thereof. The four motion detectors 16 are positioned in an array defining an approximately common plane. Each motion detector 16 is substantially evenly spaced around a first perimeter of the animal repellent system 10 and positioned about 90 degrees from another motion detector. FIG. 1 further shows an animal repellant system 10 including eight audio drivers 18 arrayed uniformly around a second perimeter of the animal repellant system 10 for multidirectional broadcasting a repellant component consisting of sound waves. While sound waves have been described above, the present invention is not limited in this regard as sprinklers, spotlights, scent dispensers, and strobe lights can also be used. Although the motion detectors 16 are shown on the approximately common plane, the invention is not limited in this regard, as the motion detectors 16 can be positioned on more than one plane and different planes. While four motion detectors 16 are shown evenly spaced and positioned about 90 degrees apart, the present invention is not limited in this regard as at least one motion detector can be used and motion detectors can be non-evenly spaced around the animal repellant system 10.
Again referring to FIG. 1, an attachment clip 20 projects outwardly from a bottom 26 of the control module 12. The attachment clip includes rotational hinging grooves 22 and an attachment slot 24 which are adapted to rotatably mate complementarily with a support post and bracket (not shown) which has corresponding mated hinge guides that allow the animal repellant system 10 to be rotated at least 90 degrees in either direction. While the attachment clip 20 has been shown, the invention is not limited in this regard as other mechanisms for rotatably mounting the animal repellant system 10 to the support post can be employed.
FIG. 2 illustrates a first data port 30 and a second port 32 positioned on the bottom 26 of the control module 12. The first data port 30 and the second port 32 are protected by flexible boots (not shown) positioned therein. The first data port 30 houses connections 34 for phone lines, the internet, and USB lines, for field maintenance and reprogramming of the controller. The second port 32 houses dip switches 36 for controlling operational modes of the animal repellant system 10. Although the flexible boots are shown for protection of the first data port 30 and second port 32, the invention is not limited in this regard as a cover positioned over the first data port and the second port, such that a tool would be required to remove the cover, can also be used. While the data port 30 and the second port 32 are shown, the present invention is not limited in this regard as radio frequency, internet, and other communication devices can be used.
FIG. 3 shows the animal repellent system 10, including the controller 40 and four motion detectors 42 for detecting the presence of an animal or other moving heat source, in the particular area. The motion detectors 42 include a built-in time delay circuit which delays transmission of a motion signal 44 until a predetermined time elapses. Motion signals 44 are continuously or intermittently relayed to the controller 40. Although the above mentioned embodiment of the present invention discloses the motion detector 42, the current invention is not limited in this regard as interrupted beam photo sensors, photo cells including photo cells with day and night program changes, pressure switches, and input from bi-directional data links, can also be used. While the motion detector 42 is shown with the time delay, the controller 40 can also be used to provide the time delay function.
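The built-in time delay can be modeled in software. The Python sketch below is one plausible reading — that the detector reports motion only after it has persisted for a predetermined time — and every name in it is hypothetical, not taken from the patent.

```python
class DelayedTrigger:
    """Model of a motion detector whose output is suppressed until
    motion has been continuously present for `delay` seconds."""

    def __init__(self, delay):
        self.delay = delay
        self._motion_since = None  # timestamp when motion first appeared

    def update(self, motion_detected, now):
        """Feed one sample at time `now` (seconds). Returns True only
        once motion has persisted for at least `delay` seconds."""
        if not motion_detected:
            self._motion_since = None
            return False
        if self._motion_since is None:
            self._motion_since = now
        return (now - self._motion_since) >= self.delay
```

A controller polling such an object would only issue command signals once `update` returns True, filtering out momentary triggers.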
FIG. 3 also shows the audio driver 18 connected to a control relay 54 and a power amplifier 50 for output of sound waves. The controller 40 generates wave signals 48 and command signals 46. The controller 40 includes a protocol 58 for varying the command signals 46 according to a pattern. The wave signals 48 are transmitted to a power amplifier 50 for generation of amplified wave signals 52 of high wattage output and transmission of the amplified wave signals 52 to the control relay 54. The control relay 54 receives the amplified wave signals 52 and transmits the amplified wave signals 52 to the audio driver 18 for broadcasting a repellant component consisting of a pattern of varied frequency ultrasonic sound waves 57. Although the animal repellant system 10 described above is operative primarily in the ultrasonic frequency range, the animal repellant system can also operate in the sonic range or in a combination of sonic and ultrasonic ranges.
FIG. 3 also shows strobe lights 70 built into the animal repellant system 10 and connected to a strobe light driver 68. A scent dispensing module 74, connected to a scent dispenser driver 72, is also schematically illustrated. The command signals 46 are transmitted to the strobe light driver 68 for activation of the strobe lights 70, thereby effectuating the repellant component consisting of high intensity light flashes. Command signals 46 are also transmitted to the scent dispenser driver 72 for activation of the scent dispensing module 74, thereby effectuating the repellant component consisting of at least one scent. While strobe lights 70 and scent dispensing modules 74 are shown, the current invention is not limited in this regard, as other repellant components including but not limited to a sprinkler system for effectuating a repellant component consisting of water, an audible range sound delivery system, such as an alarm system, for effectuating a repellant component consisting of audible sound and spot lights for effectuating a repellant component consisting of light, can also be used. While the strobe lights 70 are shown built into the animal repellant system 10, the present invention is not limited in this regard as the strobe lights can be separate from the animal repellant system.
FIG. 3 also illustrates a sensor 64 for determining whether power supply is available to the animal repellant system 10 and annunciating a status of the power supply. The sensor 64 is shown with back-up battery power 62. The animal repellant system 10 is shown connected to a 24 volt alternating current power supply 60. The connections 34 for phone lines, the internet, and USB lines, for field maintenance and reprogramming of the controller 40, are also shown. The animal repellant system 10 also includes the dip switches 36 for controlling operational modes of the animal repellant system 10. Although the sensor 64 is shown for determining whether power is available to the animal repellant system 10, the present invention is not limited in this regard as other sensors can also be used to determine and annunciate other parameters of the animal repellant system. While the 24 volt alternating current power supply 60 is shown, the present invention is not limited in this regard as other power supplies can also be used, including but not limited to, solar and battery power.
FIG. 4 illustrates a piezo electric audio driver 80 emitting square wave signals 86 characterized by at least one frequency and wave length for use with the animal repellant system 10. The piezo electric audio driver 80 includes a projection cone 82 which broadcasts the square wave signals 86. The projection cone 82 has a throat section 84 which produces inter-modulation and harmonic distortion, caused by a non-linear compression of air in the throat 84 of the projection cone 82. While the square wave 86 is shown, other wave forms such as saw-tooth and sine waves can also be used. Although the piezo electric audio drivers 80 have been described, the present invention is not limited in this regard, in that other audio drivers can be used including but not limited to, Terfenol (ETREMA TERFENOL-D is a registered trademark of Edge Technologies, Inc., Registration No. 1512330), dynamic, ribbon, electrostatic, and plasma audio drivers or a combination thereof.
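The harmonic content that makes a square drive signal richer than a pure sine follows from its Fourier series: an ideal square wave at fundamental frequency f0 is (4/π)·Σ sin(2πnf0t)/n summed over odd n only. The Python sketch below synthesizes that partial sum; it is an illustration of the standard series, not circuitry from the patent.

```python
import math

def square_wave_partial(t, f0, n_harmonics):
    """Approximate a unit square wave at time t by summing the first
    n_harmonics terms of its Fourier series. Only odd multiples of the
    fundamental appear -- this is the extra high-frequency content a
    square drive signal carries beyond a pure sine tone."""
    s = 0.0
    for k in range(n_harmonics):
        n = 2 * k + 1  # odd harmonic numbers 1, 3, 5, ...
        s += math.sin(2 * math.pi * n * f0 * t) / n
    return 4.0 / math.pi * s
```

With many terms the sum approaches +1 on the first half-cycle and −1 on the second, and each added odd harmonic sharpens the edges that the driver then distorts further.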
FIG. 5 illustrates a microphone 98 coupled to the audio driver 18. The microphone 98 detects ambient noise 102 and converts the ambient noise into noise wave signals 94. The microphone 98 transmits the noise wave signals 94 through the communication link 100 to the controller 40. The noise wave signals 94 are analyzed by the controller 40, wherein the controller generates canceling wave signals 96 which are exactly opposite to and 180 degrees out of phase with the noise wave signals 94. The audio driver 18 emits the canceling wave signals 96 to create a null-zone 104 wherein no ambient noise is present. While the microphone 98 is shown coupled to the audio driver 18, the present invention is not limited in this regard as the microphone may be positioned remotely.
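For a sampled waveform, the 180-degree phase inversion described above is simply sign inversion: adding the inverted copy to the original ideally leaves silence. The Python sketch below shows that identity in its simplest form; the names are hypothetical, and a real active-cancellation system would also have to compensate for latency and the acoustic path, which this ideal model ignores.

```python
import math

def cancelling_signal(samples):
    """Return the 180-degree out-of-phase copy of a sampled waveform.
    For discrete samples, a half-cycle phase shift of every component
    is equivalent to negating each sample."""
    return [-s for s in samples]

# Ideal-case sketch: sample a 440 Hz "ambient noise" tone at 8 kHz,
# emit its inverse, and check that the superposition is silence.
noise = [math.sin(2 * math.pi * 440 * k / 8000) for k in range(64)]
anti = cancelling_signal(noise)
residual = [a + b for a, b in zip(noise, anti)]
```

In practice the residual is only near zero inside the null-zone where the two acoustic paths line up.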
FIG. 6 shows at least two animal repellant systems 10 connected in series by links 114 such that the animal repellant systems activate simultaneously upon receipt of the motion signal 44. Each link 114 is illustrative of both a bi-directional communication link and a power wire. While simultaneous activation has been described, the present invention is not limited in this regard as independent activation of the animal repellant systems 10 is also possible.
FIG. 7 illustrates a network 122 of animal repellant systems 110 controlled by a central controller 130 connected by links 126 to each animal repellant system in the network. The central controller 130 activates each animal repellant system 110 simultaneously. The central controller 130 also simultaneously activates sprinklers 132, spotlights 134, scent dispensers 136, strobe lights 138, and an alarm system 139 through links 128. Although the central controller 130 is shown to activate the animal repellant systems 110, sprinklers 132, spotlights 134, scent dispensers 136, strobe lights 138, and alarm system 139 simultaneously, the present invention is not limited in this regard, in that the controller 130 can provide other activation sequences. In other embodiments of the present invention, a control module of a home security system can be interfaced with the central controller 130.
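The simultaneous activation behaviour described for the central controller can be sketched as a simple fan-out, where one motion signal activates every linked device at once. The class and device names are hypothetical; the patent does not specify an implementation.

```python
class Device:
    """A linked peripheral (repeller, sprinkler, strobe, etc.)."""
    def __init__(self, name):
        self.name = name
        self.active = False

    def activate(self):
        self.active = True

class CentralController:
    """Fans a single motion signal out to every linked device,
    mirroring the simultaneous activation described in FIG. 7."""
    def __init__(self):
        self.devices = []

    def link(self, device):
        self.devices.append(device)

    def on_motion_signal(self):
        for device in self.devices:
            device.activate()

controller = CentralController()
units = [Device(n) for n in ("repeller-1", "sprinkler", "strobe")]
for unit in units:
    controller.link(unit)
controller.on_motion_signal()
```

Other activation sequences, as the paragraph notes, would simply replace the loop in `on_motion_signal` with staggered or conditional logic.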
Referring to FIG. 8, six animal repellant systems 10 are positioned around a residential structure 160, affixed to a parcel of land 170. The animal repellant systems 10 have an effectiveness range 185 of up to 360 degrees. The animal repellant systems 10 are shown positioned such that the effectiveness ranges 185 overlap. The effectiveness ranges 185 are further defined by a radius extending outwardly from the animal repellant systems 10 by a distance at which the repellant component diminishes by less than half. The animal repellant system 10 illustrates uses of ultrasonic waves 150 as the repellant component wherein the ultrasonic waves deflect off the structure 160 and shrubs 190. Deflection of ultrasonic waves 150, broadcast at a frequency from the animal repellant system 10, causes interference patterns which change when the frequency of the ultrasonic wave changes. While animal repellant systems 10 are shown positioned around the residential structure 160, the present invention is not limited in this regard as the animal repellant systems can be positioned on and around residential structures and other structures and locations.
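The half-strength radius mentioned above can be illustrated under a free-field inverse-square assumption (the patent does not specify an attenuation model): intensity falls to half its reference value at sqrt(2) times the reference distance.

```python
def intensity(r, reference_distance=1.0):
    """Relative intensity at distance r, assuming inverse-square
    spreading normalized to 1.0 at the reference distance."""
    return (reference_distance / r) ** 2

def half_strength_radius(reference_distance=1.0):
    """Distance at which intensity has fallen to half its value at
    the reference distance, under the inverse-square assumption."""
    return reference_distance * 2 ** 0.5
```

Reflections off structures and shrubs, as the paragraph notes, create interference patterns that this simple free-field model ignores.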
FIG. 9a shows the power supply 60 providing power through a power wire 230 to the controller 40 and three motion detectors 42. A separate communication wire 240 connects the controller 40 with the motion detectors 42, providing bi-directional communication therebetween. The bi-directional communication wire 240 is shown arranged in tandem with the power wires 230. FIG. 9b illustrates use of power wires 250 for transmission of power from the power supply 60 to the controller 40 and three motion detectors 42. In FIG. 9b, the power wires 250 are also used for bi-directional communication between the controller 40 and three motion detectors 42. FIG. 9c illustrates use of a wireless system 270 including transceivers 280 for generation of a wireless radio frequency link for bi-directional communication between the controller 40 and the three motion detectors 42. FIG. 9c illustrates use of a separate power wire 260 for transmission of power to the controller 40 and three motion detectors 42. While FIGS. 9a, 9b and 9c illustrate bi-directional communication between the controller 40 and the motion detectors 42, the present invention is not limited in this regard as bi-directional communication using the separate wire, the power wire and the wireless system can also be used for bi-directional communication between other components of the animal repellant system 10 including but not limited to the power amplifier 60, the audio driver 18, the scent dispersing drivers 68 and the strobe light drivers 68.
Although the present invention has been disclosed and described with reference to certain embodiments thereof, it should be noted that other variations and modifications may be made, and it is intended that the following claims cover the variations and modifications within the true spirit of the invention.
End of the "see one, do one, teach one" era: the next generation of invasive bedside procedural instruction.
Traditionally, an apprenticeship model has been used for the instruction of invasive bedside procedures. Because this approach is subject to nonuniform application, a new model was established to determine the impact of a standardized curriculum on medical students' and residents' medical knowledge and technical skills. A procedural instruction curriculum for medical students and residents was developed, and a pilot program with the curriculum was incorporated into an internal medicine residency program. Five common procedures in osteopathic and allopathic internal medicine training programs were included: central venous catheterization, knee arthrocentesis, lumbar puncture, paracentesis, and thoracentesis. An initial assessment of participants' baseline knowledge and skills was obtained. Teaching methods included video instruction; discussion of key concepts; faculty-led, hands-on, simulation-based instruction; and individual deliberate practice. Postinstruction knowledge and skills were evaluated, respectively, through a written test and a quantified assessment (ie, checklist) using direct observation. Participants were asked to provide written feedback at the conclusion of each instructional module. A total of 60 participants, all in allopathic medicine, underwent the training component. Fifty-two participants were internal medicine residents (including 2 from an outside program); 4 were trainees in a combined internal medicine-pediatrics residency; and 4 were medical students (1 from an outside program). Participants demonstrated a statistically significant improvement (P<.001) in medical knowledge, as evidenced by preinstruction vs postinstruction test scores. Comparison of initial baseline procedural checklist scores with postinstruction checklist scores, during participants' performance on the first live patient, also showed statistically significant improvement (P<.001).
A simulation-based, standardized curriculum in invasive bedside procedural instruction significantly improved the medical knowledge and technical skills of novice physicians.
The balancing of rights and freedoms is one of the great innovations of the EU General Data Protection Regulation (GDPR) but one of the great challenges as well. The Charter of Fundamental Rights of the European Union recognizes fifty political, social and economic rights for the people of Europe. Recital 4 of the GDPR states that the right to the protection of personal data should be balanced against other fundamental rights, and the Court of Justice of the European Union ("CJEU") has stated that national authorities and courts should consider a fuller range of rights.

Past IAF publications have suggested processes that balance the numerous risks and benefits to people associated with a decision to process or not process personal data, and this balancing should weigh the risks and benefits against each other based on the likelihood of each of the risks taking place and the magnitude of each particular risk. The same risk management approach should be used when a decision to transfer or not transfer data to a third-party jurisdiction takes place.

The decision by the CJEU in Schrems II and the EDPB draft Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data (Draft EDPB Guidance) provide an opportunity for the IAF to look at this balancing in detail as it relates to the full range of fundamental rights and freedoms. Specifically, the IAF looked at the transfer of human resources data from the EU to a third country and the risk to employees if such data were not transferred. More specifically, the IAF looked at the risk to the right to protection of personal data, Article 8 of the Charter of Fundamental Rights of the European Union, and to the freedom to choose an occupation and the right to engage in work, Article 15.
The team, Nancy Lowrance and Lynn Goldstein, interviewed a group of companies about their human resource processes and how they would be impacted if transfers from the EU were limited. They then conducted a policy analysis where numerous individual rights and freedoms were balanced. That research may be found in “Addressing Human Resources Data Flows in Light of European Data Protection Board Recommendations.”
While the research paper focused on Schrems II and the Draft EDPB Guidance, the same analysis is relevant to other national and draft provincial laws that require adequacy. This analysis also begins to illuminate that policy discussants need a new vocabulary for framing the balancing of rights and freedoms. For example, the term proportionality suggests a balancing of two factors against each other. Issues, such as vaccination passports, will involve numerous fundamental rights and freedoms across many different stakeholders, and the term proportionality will not work for that multi-factor analysis. A future IAF paper will explore that issue.
Please let us know what you think by emailing Martin Abrams at [email protected].
The purpose of this study is to evaluate investigational tapentadol extended release (ER) when compared to placebo. Placebo is a dummy pill with no active drug, but looks like the real drug. Tapentadol is an opioid (narcotic) drug. Tapentadol immediate release (IR) has already been approved in doses of 50, 75, and 100 mg for relief of moderate to severe pain. Participation in this 5-month study requires 14 office visits and 1 follow-up call. Researchers will be evaluating:
- How effectively it relieves pain
- Its safety profile
- If it causes any side effects
You may qualify for this study if:
- You are 18 years of age or older
- You have a diagnosis of type 1 or type 2 diabetes mellitus
- You have painful diabetic peripheral neuropathy
- Your pain is not adequately controlled by standard therapies
- You have had persistent pain for longer than six months
Qualified participants:
- Will be reimbursed for study-related expenses
- Will be provided research-related medical care and medication at no cost
Inclusion Criteria:
- Patients with Type 1 or 2 diabetes mellitus must have a documented clinical diagnosis of painful diabetic peripheral neuropathy with symptoms and signs for at least 6 months, and pain present at the time of screening
- Diagnosis must include pain plus reduction or absence of pin sensibility and/or vibration sensibility on Total Neuropathy Score - Nurse (TNSn) examination in lower and/or upper extremities at screening
- The investigator considers the patient's blood glucose to be controlled by diet, or hypoglycemics, or insulin for at least 3 months prior to enrolling in the study (this control should be documented by figures of glycated hemoglobin (HbA1c) no greater than 11% at screening)
- Patients have been taking analgesic medications for the condition for at least 3 months prior to screening (patients taking opioid analgesics must be dissatisfied with current treatment, and patients taking non-opioid analgesics must be dissatisfied with current analgesia)
- Patients currently requiring opioid treatment must be taking daily doses of an opioid-based analgesic equivalent to <=160mg of oral morphine
Exclusion Criteria:
- Significant history of pulmonary, gastrointestinal, endocrine, metabolic (except diabetes mellitus), neurological, psychiatric disorders (resulting in disorientation, memory impairment or inability to report accurately as in schizophrenia)
- History of moderate to severe hepatic impairment
- Severely impaired renal function
- Clinically significant laboratory abnormalities
- Clinically significant cardiac disease
- History of seizure disorder or epilepsy
- History of any other clinically significant disease that in the investigator's opinion may affect efficacy or safety assessments or may compromise patient safety during study participation. | https://www.centerwatch.com/clinical-trials/listings/134975/a-research-study-for-people-with-painful-diabetic-peripheral-neuropathy-dpn-is-now-enrolling-2/?radius=50&featured=true |
Beyond the clash: investigating BIM-based building design coordination issue representation and resolution
2019
Journal of Information Technology in Construction
Building information modeling (BIM) has had a significant impact on design coordination, supporting the identification and management of 'clashes' between building systems. ... However, many design coordination issues go beyond the traditional definition of a 'clash' and either go undetected or require further time, resources, and expertise to resolve. ... Lee et al (2012) have investigated the design coordination issues beyond a typical 'clash' that is identified using state of the art BIM tools. ...
Filtering of Irrelevant Clashes Detected by BIM Software Using a Hybrid Method of Rule-Based Reasoning and Supervised Machine Learning
2019
Applied Sciences
With the clash detection tools provided by Building Information Modeling (BIM) software, these clashes can be discovered at an early stage. ... The literature states that the majority of those clashes are found to be irrelevant, i.e., harmless to the building and its construction. ... The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. ... doi:10.3390/app9245324
3D, 4D and 5D of Buiding Information Modeling For Real Estate Projects
2018
Journal of Advances and Scholarly Researches in Allied Education
This paper tells about, How Building Information Modeling (BIM) is useful for implementing the RERA (real estate and regulation act). ... Project duration and cost are two major factors in execution of project. With the use of building information modeling, both of these factors can be optimized. ... Building information modeling (BIM) is an intelligent 3D model based process that gives in architecture engineer and construction professional the insite and tools to more efficient plan design construct ...doi:10.29070/15/56833 fatcat:yktv5ac7rbbi3lfcscldkwi5ae
Challenges and Enablers in BIM-Enabled Digital Transformation in Mega Projects: The Istanbul New Airport Project Case Study
2019
Buildings
Based on the findings, major challenges are sustaining continuous monitoring and controlling in the project execution, engineering complexity and aligning stakeholders' BIM learning curves whereas strategic ... This study investigates the challenges and enablers of utilizing an end-to-end BIM strategy for digital transformation of mega project delivery processes through a mega airport project case study, in order ... Accordingly, the clash resolution process, related to the coordination between MEP systems and special airport systems (SAS), has become one of the major challenges both in the design and construction ... doi:10.3390/buildings9050115
Sketching the Relation of Sound Technology with Emotions: An Experimental Amalgamation of Two Distinctive Facets in Road Movie
2020
VOLUME-8 ISSUE-10, AUGUST 2019, REGULAR ISSUE
This paper attempts to investigate the extent of contribution made by sound and music in portraying the varied emotions thereby the metamorphosis showcased in a road movie with special reference to the ... on the audience through an unusual association of sound technology and emotional vibrations. ... design decisions and the pre-construction resolution of design-related issues resulting in improved project quality and financial performance. ... doi:10.35940/ijitee.g5293.059720
Influence of Building Information Modelling (BIM) on Engineering Contract Management in Nairobi, Kenya
2020
World Journal of Engineering and Technology
The objectives of this study were to investigate BIM adoption in Nairobi and to investigate the influence of BIM on Engineering Contract Management (ECM) in Nairobi Kenya. ... Building Information Modelling (BIM) is a technology and a process that has brought changes in the construction's traditional procurement system. ... References National Institute of Building Science United States (2019) Frequently Asked Questions about National BIM Standard-United States. ... doi:10.4236/wjet.2020.83026
Building Information Modeling (BIM) and the Construction Management Body of Knowledge
[chapter]
2013
IFIP Advances in Information and Communication Technology
Building Information Modeling (BIM) is a process by which a digital representation of the physical and functional characteristics of a facility are built, analyzed, documented and assessed virtually, then ... The most current IT product is Building Information Modeling (BIM) which is a process by which a digital representation of the physical and functional characteristics of a facility are built, analyzed, ... BIM models allow the design team and the CM to virtually review the conflicts and resolve them during coordination meetings. ... doi:10.1007/978-3-642-41501-2_61
Principles and recommendations for client information requirements for BIM enabled construction projects in Qatar
2016
International Journal of Product Lifecycle Management
Following an investigation of current BIM practices in Qatar, a set of general principles and recommendations were proposed and validated for the areas of the client information requirements (CIR) - an ... Employer's information requirements (EIR) is one of the key early documents in projects using building information modelling (BIM). ... Acknowledgements The work described in this publication was part of the research project funded by the National Priority Research Program NPRP No. 6-604-2-253. ... doi:10.1504/ijplm.2016.10001531
An overview of benefits and challenges of building information modelling (BIM) adoption in UK residential projects
2019
Construction Innovation
The cross-comparison between the evidence base and literature review uncovers the specific benefits, challenges and risks to BIM implementation in the house building sector. ... At a conceptual level, a BIM-enabled project offers quality assurance and on-time delivery, collaboration and communication improvement, visual representation and clash detection and whole lifecycle value ... and coordination between design team members. ... doi:10.1108/ci-04-2017-0030
Building information modelling demystified: does it make business sense to adopt BIM? | https://scholar.archive.org/search?q=Beyond+the+clash%3A+investigating+BIM-based+building+design+coordination+issue+representation+and+resolution. |
# Building information modeling
Building information modeling (BIM) is a process supported by various tools, technologies and contracts involving the generation and management of digital representations of physical and functional characteristics of places. Building information models (BIMs) are computer files (often but not always in proprietary formats and containing proprietary data) which can be extracted, exchanged or networked to support decision-making regarding a built asset. BIM software is used by individuals, businesses and government agencies who plan, design, construct, operate and maintain buildings and diverse physical infrastructures, such as water, refuse, electricity, gas, communication utilities, roads, railways, bridges, ports and tunnels.
The concept of BIM has been in development since the 1970s, but it only became an agreed term in the early 2000s. Development of standards and adoption of BIM has progressed at different speeds in different countries; standards developed in the United Kingdom from 2007 onwards have formed the basis of international standard ISO 19650, launched in January 2019.
## History
The concept of BIM has existed since the 1970s. The first software tools developed for modeling buildings emerged in the late 1970s and early 1980s, and included workstation products such as Chuck Eastman's Building Description System and GLIDE, RUCAPS, Sonata, Reflex and Gable 4D Series. The early applications, and the hardware needed to run them, were expensive, which limited widespread adoption.
The pioneering role of applications such as RUCAPS, Sonata and Reflex has been recognized by Laiserin as well as the UK's Royal Academy of Engineering; former GMW employee Jonathan Ingram worked on all three products. What became known as BIM products differed from architectural drafting tools such as AutoCAD by allowing the addition of further information (time, cost, manufacturers' details, sustainability, and maintenance information, etc.) to the building model.
As Graphisoft had been developing such solutions for longer than its competitors, Laiserin regarded its ArchiCAD application as then "one of the most mature BIM solutions on the market." Following its launch in 1987, ArchiCAD became regarded by some as the first implementation of BIM, as it was the first CAD product on a personal computer able to create both 2D and 3D geometry, as well as the first commercial BIM product for personal computers. However, ArchiCAD founder Gábor Bojár has acknowledged to Jonathan Ingram in an open letter, that Sonata "was more advanced in 1986 than ArchiCAD at that time", adding that it "surpassed already the matured definition of 'BIM' specified only about one and a half decade later".
The term 'building model' (in the sense of BIM as used today) was first used in papers in the mid-1980s: in a 1985 paper by Simon Ruffle eventually published in 1986, and later in a 1986 paper by Robert Aish – then at GMW Computers Ltd, developer of RUCAPS software – referring to the software's use at London's Heathrow Airport. The term 'Building Information Model' first appeared in a 1992 paper by G.A. van Nederveen and F. P. Tolman.
However, the terms 'Building Information Model' and 'Building Information Modeling' (including the acronym "BIM") did not become popularly used until some 10 years later. Facilitating exchange and interoperability of information in digital format had variously been described with differing terminology: by Graphisoft as "Virtual Building" or "Single Building Model", by Bentley Systems as "Integrated Project Models", and by Autodesk or Vectorworks as "Building Information Modeling". In 2002, Autodesk released a white paper entitled "Building Information Modeling," and other software vendors also started to assert their involvement in the field. By hosting contributions from Autodesk, Bentley Systems and Graphisoft, plus other industry observers, in 2003, Jerry Laiserin helped popularize and standardize the term as a common name for the digital representation of the building process.
### Interoperability and BIM standards
As some BIM software developers have created proprietary data structures in their software, data and files created by one vendor's applications may not work in other vendor solutions. To achieve interoperability between applications, neutral, non-proprietary or open standards for sharing BIM data among different software applications have been developed.
Poor software interoperability has long been regarded as an obstacle to industry efficiency in general and to BIM adoption in particular. In August 2004 a US National Institute of Standards and Technology (NIST) report conservatively estimated that $15.8 billion was lost annually by the U.S. capital facilities industry due to inadequate interoperability arising from "the highly fragmented nature of the industry, the industry’s continued paper-based business practices, a lack of standardization, and inconsistent technology adoption among stakeholders".
An early BIM standard was the CIMSteel Integration Standard, CIS/2, a product model and data exchange file format for structural steel project information (CIMsteel: Computer Integrated Manufacturing of Constructional Steelwork). CIS/2 enables seamless and integrated information exchange during the design and construction of steel framed structures. It was developed by the University of Leeds and the UK's Steel Construction Institute in the late 1990s, with inputs from Georgia Tech, and was approved by the American Institute of Steel Construction as its data exchange format for structural steel in 2000.
BIM is often associated with Industry Foundation Classes (IFCs) and aecXML – data structures for representing information – developed by buildingSMART. IFC is recognised by the ISO and has been an official international standard, ISO 16739, since 2013.
Construction Operations Building information exchange (COBie) is also associated with BIM. COBie was devised by Bill East of the United States Army Corps of Engineers in 2007, and helps capture and record equipment lists, product data sheets, warranties, spare parts lists, and preventive maintenance schedules. This information is used to support operations, maintenance and asset management once a built asset is in service. In December 2011, it was approved by the US-based National Institute of Building Sciences as part of its National Building Information Model (NBIMS-US) standard. COBie has been incorporated into software, and may take several forms including spreadsheet, IFC, and ifcXML. In early 2013 BuildingSMART was working on a lightweight XML format, COBieLite, which became available for review in April 2013. In September 2014, a code of practice regarding COBie was issued as a British Standard: BS 1192-4.
In January 2019, ISO published the first two parts of ISO 19650, providing a framework for building information modelling, based on process standards developed in the United Kingdom. UK BS and PAS 1192 specifications form the basis of further parts of the ISO 19650 series, with parts on asset management (Part 3) and security management (Part 5) published in 2020.
The IEC/ISO 81346 series for reference designation has published 81346-12:2018, also known as RDS-CW (Reference Designation System for Construction Works). The use of RDS-CW offers the prospect of integrating BIM with complementary international standards based classification systems being developed for the Power Plant sector.
## Definition
ISO 19650-1:2018 defines BIM as:

> Use of a shared digital representation of a built asset to facilitate design, construction and operation processes to form a reliable basis for decisions.
The US National Building Information Model Standard Project Committee has the following definition:

> Building Information Modeling (BIM) is a digital representation of physical and functional characteristics of a facility. A BIM is a shared knowledge resource for information about a facility forming a reliable basis for decisions during its life-cycle; defined as existing from earliest conception to demolition.
Traditional building design was largely reliant upon two-dimensional technical drawings (plans, elevations, sections, etc.). Building information modeling extends the three primary spatial dimensions (width, height and depth), incorporating information about time (so-called 4D BIM), cost (5D BIM), asset management, sustainability, etc. BIM therefore covers more than just geometry. It also covers spatial relationships, geospatial information, quantities and properties of building components (for example, manufacturers' details), and enables a wide range of collaborative processes relating to the built asset from initial planning through to construction and then throughout its operational life.
BIM authoring tools present a design as combinations of "objects" – vague and undefined, generic or product-specific, solid shapes or void-space oriented (like the shape of a room), that carry their geometry, relations, and attributes. BIM applications allow extraction of different views from a building model for drawing production and other uses. These different views are automatically consistent, being based on a single definition of each object instance. BIM software also defines objects parametrically; that is, the objects are defined as parameters and relations to other objects so that if a related object is amended, dependent ones will automatically also change. Each model element can carry attributes for selecting and ordering them automatically, providing cost estimates as well as material tracking and ordering.
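The parametric behaviour described above can be sketched in a few lines: a wall defined by two grid points recomputes its length and cost automatically when a related object is amended. The classes and rates are invented for illustration and do not correspond to any particular BIM product.

```python
class GridPoint:
    """A shared reference object that other elements depend on."""
    def __init__(self, x):
        self.x = x

class Wall:
    """A wall defined parametrically by two grid points: amend a
    point and the wall's derived quantities follow automatically."""
    def __init__(self, start, end, cost_per_metre):
        self.start = start
        self.end = end
        self.cost_per_metre = cost_per_metre

    @property
    def length(self):
        # Derived, never stored: always consistent with the points.
        return abs(self.end.x - self.start.x)

    @property
    def cost(self):
        # Attribute-driven quantity take-off for cost estimating.
        return self.length * self.cost_per_metre

a, b = GridPoint(0.0), GridPoint(5.0)
wall = Wall(a, b, cost_per_metre=120.0)
b.x = 8.0  # amend the related object; the wall updates itself
```

Because `length` and `cost` are computed rather than stored, every "view" of the wall stays consistent with the single definition of its geometry, which is the point of the parametric approach.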
For the professionals involved in a project, BIM enables a virtual information model to be shared by the design team (architects, landscape architects, surveyors, civil, structural and building services engineers, etc.), the main contractor and subcontractors, and the owner/operator. Each professional adds discipline-specific data to the shared model – commonly, a 'federated' model which combines several different disciplines' models into one. Combining models enables visualisation of all models in a single environment, better coordination and development of designs, enhanced clash avoidance and detection, and improved time and cost decision-making.
### BIM wash
"BIM wash" or "BIM washing" is a term sometimes used to describe inflated, and/or deceptive, claims of using or delivering BIM services or products. Also termed, "faking the BIM."
## Usage throughout the project life-cycle
Use of BIM goes beyond the planning and design phase of the project, extending throughout the building life cycle. The supporting processes of building lifecycle management include cost management, construction management, project management, facility operation and application in green building.
A 'Common Data Environment' (CDE) is defined in ISO 19650 as an:

> agreed source of information for any given project or asset, for collecting, managing and disseminating each information container through a managed process.
A CDE workflow describes the processes to be used while a CDE solution can provide the underlying technologies. A CDE is used to share data across a project or asset lifecycle, supporting collaboration across a whole project team (the meaning overlaps with enterprise content management, ECM, but with a greater focus on BIM issues).
### Management of building information models
Building information models span the whole concept-to-occupation time-span. To ensure efficient management of information processes throughout this span, a BIM manager might be appointed. The BIM manager is retained by a design build team on the client's behalf from the pre-design phase onwards to develop and to track the object-oriented BIM against predicted and measured performance objectives, supporting multi-disciplinary building information models that drive analysis, schedules, take-off and logistics. Companies are also now considering developing BIMs in various levels of detail, since depending on the application of BIM, more or less detail is needed, and there is varying modeling effort associated with generating building information models at different levels of detail.
### BIM in construction management
Participants in the building process are constantly challenged to deliver successful projects despite tight budgets, limited staffing, accelerated schedules, and limited or conflicting information. Designs from the major disciplines, such as the architectural, structural and MEP designs, must be well coordinated, since two components cannot occupy the same space at the same time. BIM additionally aids collision detection, identifying the exact location of discrepancies.
The BIM concept envisages virtual construction of a facility prior to its actual physical construction, in order to reduce uncertainty, improve safety, work out problems, and simulate and analyze potential impacts. Sub-contractors from every trade can input critical information into the model before beginning construction, with opportunities to pre-fabricate or pre-assemble some systems off-site. Waste can be minimised on-site and products delivered on a just-in-time basis rather than being stock-piled on-site.
Quantities and shared properties of materials can be extracted easily. Scopes of work can be isolated and defined. Systems, assemblies and sequences can be shown in a relative scale with the entire facility or group of facilities. BIM also prevents errors by enabling conflict or 'clash detection' whereby the computer model visually highlights to the team where parts of the building (e.g.:structural frame and building services pipes or ducts) may wrongly intersect.
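The 'clash detection' described above can be illustrated with a minimal sketch: each element is reduced to an axis-aligned bounding box, and two elements clash when their boxes overlap on all three axes. The element names and dimensions below are invented for illustration; production tools test full geometry and use spatial indexing rather than this simple pairwise check.

```python
from dataclasses import dataclass

@dataclass
class Element:
    """A building element reduced to its axis-aligned bounding box."""
    name: str
    min_pt: tuple  # (x, y, z) lower corner, metres
    max_pt: tuple  # (x, y, z) upper corner, metres

def boxes_intersect(a: Element, b: Element) -> bool:
    """Two boxes clash if their intervals overlap on every axis."""
    return all(a.min_pt[i] < b.max_pt[i] and b.min_pt[i] < a.max_pt[i]
               for i in range(3))

def find_clashes(elements):
    """Pairwise check; real tools use spatial indexing for large models."""
    clashes = []
    for i, a in enumerate(elements):
        for b in elements[i + 1:]:
            if boxes_intersect(a, b):
                clashes.append((a.name, b.name))
    return clashes

model = [
    Element("steel beam", (0, 0, 3.0), (6, 0.3, 3.4)),
    Element("supply duct", (2, 0.1, 3.2), (2.4, 0.25, 5.0)),  # runs up through the beam
    Element("column", (8, 0, 0), (8.4, 0.4, 3.4)),
]
print(find_clashes(model))  # → [('steel beam', 'supply duct')]
```

The duct and the beam occupy the same volume and are reported; the column is clear of both.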
### BIM in facility operation
BIM can bridge the information loss associated with handling a project from design team, to construction team and to building owner/operator, by allowing each group to add to and reference back to all information they acquire during their period of contribution to the BIM model. This can yield benefits to the facility owner or operator.
For example, a building owner may find evidence of a leak in the building. Rather than exploring the physical building, the owner may turn to the model and see that a water valve is located in the suspect location. The model can also hold the specific valve size, manufacturer, part number, and any other information ever researched in the past, given adequate computing power. Such problems were initially addressed by Leite and Akinci when developing a vulnerability representation of facility contents and threats for supporting the identification of vulnerabilities in building emergencies.
Dynamic information about the building, such as sensor measurements and control signals from the building systems, can also be incorporated within BIM software to support analysis of building operation and maintenance.
There have been attempts at creating information models for older, pre-existing facilities. Approaches include referencing key metrics such as the Facility Condition Index (FCI), or using 3D laser-scanning surveys and photogrammetry techniques (separately or in combination) or digitizing traditional building surveying methodologies by using mobile technology to capture accurate measurements and operation-related information about the asset that can be used as the basis for a model. Trying to model a building constructed in, say 1927, requires numerous assumptions about design standards, building codes, construction methods, materials, etc., and is, therefore, more complex than building a model during design.
One of the challenges to the proper maintenance and management of existing facilities is understanding how BIM can be utilized to support a holistic understanding and implementation of building management practices and "cost of ownership" principles that support the full product lifecycle of a building. An American National Standard entitled APPA 1000 – Total Cost of Ownership for Facilities Asset Management incorporates BIM to factor in a variety of critical requirements and costs over the life-cycle of the building, including but not limited to: replacement of energy, utility, and safety systems; continual maintenance of the building exterior and interior and replacement of materials; updates to design and functionality; and recapitalization costs.
### BIM in green building
BIM in green building, or "green BIM", is a process that can help architecture, engineering and construction firms to improve sustainability in the built environment. It can allow architects and engineers to integrate and analyze environmental issues in their design over the life cycle of the asset.
## International developments
### Asia
#### China
China began its exploration of informatisation in 2001. The Ministry of Construction announced BIM as the key application technology of informatisation in "Ten new technologies of construction industry" (by 2010). The Ministry of Science and Technology (MOST) clearly designated BIM technology as a national key research and application project in the "12th Five-Year" Science and Technology Development Planning. The year 2011 was therefore described as "The First Year of China's BIM".
#### Hong Kong
In 2006 the Hong Kong Housing Authority introduced BIM, and then set a target of full BIM implementation in 2014/2015. BuildingSmart Hong Kong was inaugurated in Hong Kong SAR in late April 2012. The Government of Hong Kong has mandated the use of BIM for all government projects over HK$30M since 1 January 2018.
#### India
India Building Information Modelling Association (IBIMA) is a national-level society that represents the entire Indian BIM community. In India BIM is also known as VDC: Virtual Design and Construction. Due to its population and economic growth, India has an expanding construction market. In spite of this, BIM usage was reported by only 22% of respondents in a 2014 survey. In 2019, government officials said BIM could help save up to 20% by shortening construction time, and urged wider adoption by infrastructure ministries.
#### Iran
The Iran Building Information Modeling Association (IBIMA) was founded in 2012 by professional engineers from five universities in Iran, including the Civil and Environmental Engineering Department at Amirkabir University of Technology. While it is not currently active, IBIMA aims to share knowledge resources to support construction engineering management decision-making.
#### Malaysia
BIM implementation is targeted towards BIM Stage 2 by the year 2020 led by the Construction Industry Development Board (CIDB Malaysia). Under the Construction Industry Transformation Plan (CITP 2016–2020), it is hoped more emphasis on technology adoption across the project life-cycle will induce higher productivity.
#### Singapore
The Building and Construction Authority (BCA) has announced that BIM would be introduced for architectural submission (by 2013), structural and M&E submissions (by 2014) and eventually for plan submissions of all projects with gross floor area of more than 5,000 square meters by 2015. The BCA Academy is training students in BIM.
#### Japan
The Ministry of Land, Infrastructure and Transport (MLIT) announced the "Start of BIM pilot project in government building and repairs" (by 2010). The Japan Institute of Architects (JIA) released BIM guidelines (by 2012), which set out the agenda and the expected effect of BIM for architects. MLIT announced that "BIM will be mandated for all of its public works from the fiscal year of 2023, except those having particular reasons". Works subject to the WTO Government Procurement Agreement must comply with the published ISO standards related to BIM, such as the ISO 19650 series, as determined by Article 10 (Technical Specification) of the Agreement.
#### South Korea
Small BIM-related seminars and independent BIM effort existed in South Korea even in the 1990s. However, it was not until the late 2000s that the Korean industry paid attention to BIM. The first industry-level BIM conference was held in April 2008, after which, BIM has been spread very rapidly. Since 2010, the Korean government has been gradually increasing the scope of BIM-mandated projects. McGraw Hill published a detailed report in 2012 on the status of BIM adoption and implementation in South Korea.
#### United Arab Emirates
Dubai Municipality issued a circular (196) in 2014 mandating BIM use for buildings of a certain size, height or type. The one-page circular initiated strong interest in BIM, and the market responded in preparation for more guidelines and direction. In 2015 the Municipality issued another circular (207), titled 'Regarding the expansion of applying the (BIM) on buildings and facilities in the emirate of Dubai', which made BIM mandatory on more projects by reducing the minimum size and height requirements for projects requiring BIM. This second circular drove BIM adoption further, with several projects and organizations adopting UK BIM standards as best practice. In 2016, the UAE's Quality and Conformity Commission set up a BIM steering group to investigate statewide adoption of BIM.
### Europe
#### Austria
Austrian standards for digital modeling are summarized in the ÖNORM A 6241, published on 15 March 2015. The ÖNORM A 6241-1 (BIM Level 2), which replaced the ÖNORM A 6240-4, has been extended in the detailed and executive design stages, and corrected in the lack of definitions. The ÖNORM A 6241-2 (BIM Level 3) includes all the requirements for the BIM Level 3 (iBIM).
#### Czech Republic
The Czech BIM Council, established in May 2011, aims to implement BIM methodologies into the Czech building and designing processes, education, standards and legislation.
#### Estonia
In Estonia, a digital construction cluster (Digitaalehituse Klaster) was formed in 2015 to develop BIM solutions for the whole construction life-cycle. The cluster's strategic objective is to develop an innovative digital construction environment, as well as VDC new product development, a Grid and an e-construction portal, to increase the international competitiveness and sales of Estonian businesses in the construction field. The cluster is co-funded equally by European Structural and Investment Funds through Enterprise Estonia and by the members of the cluster, with a total budget of 600,000 euros for the period 2016–2018.
#### France
The French arm of buildingSMART, called Mediaconstruct (existing since 1989), is supporting digital transformation in France. A building transition digital plan – French acronym PTNB – was created in 2013 (mandated since 2015 to 2017 and under several ministries). A 2013 survey of European BIM practice showed France in last place, but, with government support, in 2017 it had risen to third place with more than 30% of real estate projects carried out using BIM. PTNB was superseded in 2018 by Plan BIM 2022, administered by an industry body, the Association for the Development of Digital in Construction (AND Construction), founded in 2017, and supported by a digital platform, KROQI, developed and launched in 2017 by CSTB (France's Scientific and Technical Centre for Building).
#### Germany
In December 2015, the German minister for transport Alexander Dobrindt announced a timetable for the introduction of mandatory BIM for German road and rail projects from the end of 2020. Speaking in April 2016, he said digital design and construction must become standard for construction projects in Germany, with Germany two to three years behind The Netherlands and the UK in aspects of implementing BIM. BIM was piloted in many areas of German infrastructure delivery and in July 2022 Volker Wissing, Federal Minister for Digital and Transport, announced that, from 2025, BIM will be used as standard in the construction of federal trunk roads in addition to the rail sector.
#### Ireland
In November 2017, Ireland's Department for Public Expenditure and Reform launched a strategy to increase use of digital technology in delivery of key public works projects, requiring the use of BIM to be phased in over the next four years.
#### Italy
Through the new D.l. 50 of April 2016, Italy incorporated into its own legislation several European directives, including 2014/24/EU on public procurement. The decree states among the main goals of public procurement the "rationalization of designing activities and of all connected verification processes, through the progressive adoption of digital methods and electronic instruments such as Building and Infrastructure Information Modelling". A norm in eight parts is also being written to support the transition: UNI 11337-1, UNI 11337-4 and UNI 11337-5 were published in January 2017, with five further chapters to follow within a year.
In early 2018 the Italian Ministry of Infrastructure and Transport issued a decree (DM 01/12/17) creating a governmental BIM Mandate compelling public client organisations to adopt a digital approach by 2025, with an incremental obligation which will start on 1 January 2019.
#### Lithuania
Lithuania is moving towards adoption of BIM infrastructure by founding a public body "Skaitmeninė statyba" (Digital Construction), which is managed by 13 associations. Also, there is a BIM work group established by Lietuvos Architektų Sąjunga (a Lithuanian architects body). The initiative intends Lithuania to adopt BIM, Industry Foundation Classes (IFC) and National Construction Classification as standard. An international conference "Skaitmeninė statyba Lietuvoje" (Digital Construction in Lithuania) has been held annually since 2012.
#### The Netherlands
On 1 November 2011, the Rijksgebouwendienst, the agency within the Dutch Ministry of Housing, Spatial Planning and the Environment that manages government buildings, introduced the Rgd BIM Standard, which it updated on 1 July 2012.
#### Norway
In Norway BIM has been used increasingly since 2008. Several large public clients require use of BIM in open formats (IFC) in most or all of their projects. The Government Building Authority bases its processes on BIM in open formats to increase process speed and quality, and all large and several small and medium-sized contractors use BIM. National BIM development is centred around the local organisation, buildingSMART Norway which represents 25% of the Norwegian construction industry.
#### Poland
BIMKlaster (BIM Cluster) is a non-governmental, non-profit organisation established in 2012 with the aim of promoting BIM development in Poland. In September 2016, the Ministry of Infrastructure and Construction began a series of expert meetings concerning the application of BIM methodologies in the construction industry.
#### Portugal
Created in 2015 to promote the adoption of BIM in Portugal and its normalisation, the Technical Committee for BIM Standardisation, CT197-BIM, has created the first strategic document for construction 4.0 in Portugal, aiming to align the country's industry around a common vision, integrated and more ambitious than a simple technology change.
#### Russia
The Russian government has approved a list of the regulations that provide the creation of a legal framework for the use of information modeling of buildings in construction and encourages the use of BIM in government projects.
#### Slovakia
The BIM Association of Slovakia, "BIMaS", was established in January 2013 as the first Slovak professional organisation focused on BIM. Although there are neither standards nor legislative requirements to deliver projects in BIM, many architects, structural engineers and contractors, plus a few investors are already applying BIM. A Slovak implementation strategy created by BIMaS and supported by the Chamber of Civil Engineers and Chamber of Architects has yet to be approved by Slovak authorities due to their low interest in such innovation.
#### Spain
A July 2015 meeting at Spain's Ministry of Infrastructure launched the country's national BIM strategy, making BIM a mandatory requirement on public sector projects with a possible starting date of 2018. Following a February 2015 BIM summit in Barcelona, professionals in Spain established a BIM commission (ITeC) to drive the adoption of BIM in Catalonia.
#### Switzerland
BIM awareness was raised among a broader community of engineers and architects from 2009 through the initiative of buildingSMART Switzerland, and then in 2013 through the open competition for Basel's Felix Platter Hospital, for which a BIM coordinator was sought. BIM has also been the subject of events held by the Swiss Society of Engineers and Architects (SIA).
#### United Kingdom
In May 2011 UK Government Chief Construction Adviser Paul Morrell called for BIM adoption on UK government construction projects. Morrell also told construction professionals to adopt BIM or be "Betamaxed out". In June 2011 the UK government published its BIM strategy, announcing its intention to require collaborative 3D BIM (with all project and asset information, documentation and data being electronic) on its projects by 2016. Initially, compliance would require building data to be delivered in a vendor-neutral 'COBie' format, thus overcoming the limited interoperability of BIM software suites available on the market. The UK Government BIM Task Group led the government's BIM programme and requirements, including a free-to-use set of UK standards and tools that defined 'level 2 BIM'. In April 2016, the UK Government published a new central web portal as a point of reference for the industry for 'level 2 BIM'. The work of the BIM Task Group now continues under the stewardship of the Cambridge-based Centre for Digital Built Britain (CDBB), announced in December 2017 and formally launched in early 2018.
Outside of government, industry adoption of BIM from 2016 has been led by the UK BIM Alliance, an independent, not-for-profit, collaboratively-based organisation formed to champion and enable the implementation of BIM, and to connect and represent organisations, groups and individuals working towards digital transformation of the UK's built environment industry. The UK BIM Alliance's executive team directs activities in three core areas: engagement, implementation and operations (internal support and secretariat functions). In November 2017, the UK BIM Alliance merged with the UK chapter of BuildingSMART. In October 2019, CDBB, the UK BIM Alliance and the BSI Group launched the UK BIM Framework. Superseding the BIM levels approach, the framework describes an overarching approach to implementing BIM in the UK, giving free guidance on integrating the international ISO 19650 series of standards into UK processes and practice.
National Building Specification (NBS) has published research into BIM adoption in the UK since 2011, and in 2020 published its 10th annual BIM report. In 2011, 43% of respondents had not heard of BIM; in 2020 73% said they were using BIM.
### North America
#### Canada
Several organizations support BIM adoption and implementation in Canada: the Canada BIM Council (CANBIM, founded in 2008), the Institute for BIM in Canada, and buildingSMART Canada (the Canadian chapter of buildingSMART International).
#### United States
The Associated General Contractors of America and US contracting firms have developed various working definitions of BIM.
Although the concept of BIM and relevant processes are being explored by contractors, architects and developers alike, the term itself has been questioned and debated with alternatives including Virtual Building Environment (VBE) also considered. Unlike some countries such as the UK, the US has not adopted a set of national BIM guidelines, allowing different systems to remain in competition. In 2021, the National Institute of Building Sciences (NIBS) looked at applying UK BIM experiences to developing shared US BIM standards and processes. The US National BIM Standard had largely been developed through volunteer efforts; NIBS aimed to create a national BIM programme to drive effective adoption at a national scale.
BIM is seen to be closely related to Integrated Project Delivery (IPD) where the primary motive is to bring the teams together early on in the project. A full implementation of BIM also requires the project teams to collaborate from the inception stage and formulate model sharing and ownership contract documents.
The American Institute of Architects has defined BIM as "a model-based technology linked with a database of project information", and this reflects the general reliance on database technology as the foundation. In the future, structured text documents such as specifications may be able to be searched and linked to regional, national, and international standards.
### Africa
#### Nigeria
BIM has the potential to play a vital role in the Nigerian AEC sector. In addition to its potential clarity and transparency, it may help promote standardization across the industry. For instance, Utiome suggests that, in conceptualizing a BIM-based knowledge transfer framework from industrialized economies to urban construction projects in developing nations, generic BIM objects can benefit from rich building information within specification parameters in product libraries, and used for efficient, streamlined design and construction. Similarly, an assessment of the current 'state of the art' by Kori found that medium and large firms were leading the adoption of BIM in the industry. Smaller firms were less advanced with respect to process and policy adherence. There has been little adoption of BIM in the built environment due to construction industry resistance to changes or new ways of doing things. The industry is still working with conventional 2D CAD systems in services and structural designs, although production could be in 3D systems. There is virtually no utilisation of 4D and 5D systems.
BIM Africa Initiative, primarily based in Nigeria, is a non-profit institute advocating the adoption of BIM across Africa. Since 2018, it has been engaging with professionals and the government towards the digital transformation of the built industry. Produced annually by its research and development committee, the African BIM Report gives an overview of BIM adoption across the African continent.
#### South Africa
The South African BIM Institute, established in May 2015, aims to enable technical experts to discuss digital construction solutions that can be adopted by professionals working within the construction sector. Its initial task was to promote the SA BIM Protocol.
There are no mandated or national best practice BIM standards or protocols in South Africa. Organisations implement company-specific BIM standards and protocols at best (there are isolated examples of cross-industry alliances).
### Oceania
#### Australia
In February 2016, Infrastructure Australia recommended: "Governments should make the use of Building Information Modelling (BIM) mandatory for the design of large-scale complex infrastructure projects. In support of a mandatory rollout, the Australian Government should commission the Australasian Procurement and Construction Council, working with industry, to develop appropriate guidance around the adoption and use of BIM; and common standards and protocols to be applied when using BIM".
#### New Zealand
In 2015, many projects in the rebuilding of Christchurch were being assembled in detail on a computer using BIM well before workers set foot on the site. The New Zealand government started a BIM acceleration committee, as part of a productivity partnership with the goal of 20 per cent more efficiency in the construction industry by 2020.
## Future potential
BIM is a relatively new technology in an industry typically slow to adopt change. Yet many early adopters are confident that BIM will grow to play an even more crucial role in building documentation.
Proponents claim that BIM offers:
- Improved visualization
- Improved productivity due to easy retrieval of information
- Increased coordination of construction documents
- Embedding and linking of vital information such as vendors for specific materials, location of details, and quantities required for estimation and tendering
- Increased speed of delivery
- Reduced costs
BIM also contains most of the data needed for building performance analysis. The building properties in BIM can be used to automatically create the input file for building performance simulation and save a significant amount of time and effort. Moreover, automation of this process reduces errors and mismatches in the building performance simulation process.
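As a rough illustration of that automation, the sketch below pulls a few thermal properties out of a (hypothetical) model and serialises them into a simple key=value input file. The zone names, property names and file format are invented for illustration and do not correspond to any real simulation engine.

```python
def simulation_input(zones):
    """Serialise zone properties extracted from a building model into a
    simple key=value input file for a hypothetical simulation engine."""
    lines = []
    for zone in zones:
        lines.append(f"[zone {zone['name']}]")
        lines.append(f"floor_area_m2={zone['floor_area_m2']}")
        lines.append(f"window_area_m2={zone['window_area_m2']}")
        lines.append(f"wall_u_value={zone['wall_u_value']}")
    return "\n".join(lines)

# Example: one office zone, as it might be read out of a model.
zones = [{"name": "office-01", "floor_area_m2": 42.0,
          "window_area_m2": 6.5, "wall_u_value": 0.28}]
print(simulation_input(zones))
```

The point is not the format itself but that the properties are taken from the model once, rather than re-entered by hand for each simulation run.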
## Purposes or dimensionality
Some purposes or uses of BIM may be described as 'dimensions'. However, there is little consensus on definitions beyond 5D. Some organisations dismiss the term; for example, the UK Institution of Structural Engineers does not recommend using nD modelling terms beyond 4D, adding "cost (5D) is not really a 'dimension'."
### 3D
3D BIM, an acronym for three-dimensional building information modeling, refers to the graphical representation of an asset's geometric design, augmented by information describing attributes of individual components. 3D BIM work may be undertaken by professional disciplines such as architectural, structural, and MEP, and the use of 3D models enhances coordination and collaboration between disciplines. A 3D virtual model can also be created by creating a point cloud of the building or facility using laser scanning technology.
### 4D
4D BIM, an acronym for 4-dimensional building information modeling, refers to the intelligent linking of individual 3D CAD components or assemblies with time- or scheduling-related information. The term 4D refers to the fourth dimension: time, i.e. 3D plus time.
4D modelling enables project participants (architects, designers, contractors, clients) to plan, sequence the physical activities, visualise the critical path of a series of events, mitigate the risks, report and monitor progress of activities through the lifetime of the project. 4D BIM enables a sequence of events to be depicted visually on a time line that has been populated by a 3D model, augmenting traditional Gantt charts and critical path (CPM) schedules often used in project management. Construction sequences can be reviewed as a series of problems using 4D BIM, enabling users to explore options, manage solutions and optimize results.
As an advanced construction management technique, it has been used by project delivery teams working on larger projects. 4D BIM has traditionally been used for higher end projects due to the associated costs, but technologies are now emerging that allow the process to be used by laymen or to drive processes such as manufacture.
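The element-to-schedule linking that 4D BIM relies on can be sketched as a simple mapping: each model element points at the task that installs it, and the set of elements visible on a given date follows from the task start dates. The element and task names below are invented; a real 4D tool would read both from the model and the planning software rather than hard-coding them.

```python
from datetime import date

# Schedule tasks: name -> (start, end).
schedule = {
    "foundation": (date(2024, 3, 1), date(2024, 3, 20)),
    "frame":      (date(2024, 3, 21), date(2024, 5, 10)),
    "roof":       (date(2024, 5, 11), date(2024, 6, 1)),
}

# Each model element is linked to the task that installs it.
element_to_task = {"footing-A": "foundation", "beam-12": "frame", "truss-3": "roof"}

def visible_elements(on: date):
    """Elements whose installing task has started by the given date —
    the set a 4D viewer would render for that day."""
    return sorted(e for e, t in element_to_task.items()
                  if schedule[t][0] <= on)

print(visible_elements(date(2024, 4, 1)))  # → ['beam-12', 'footing-A']
```

Stepping the date forward replays the construction sequence, which is exactly what a 4D timeline visualisation does over a full model.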
### 5D
5D BIM, an acronym for 5-dimensional building information modeling, refers to the intelligent linking of individual 3D components or assemblies with time schedule (4D BIM) constraints and then with cost-related information. 5D models enable participants to visualise construction progress and related costs over time. This BIM-centric project management technique has the potential to improve the management and delivery of projects of any size or complexity.
In June 2016, McKinsey & Company identified 5D BIM technology as one of five big ideas poised to disrupt construction. It defined 5D BIM as "a five-dimensional representation of the physical and functional characteristics of any project. It considers a project’s time schedule and cost in addition to the standard spatial design parameters in 3-D."
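A minimal sketch of the 5D idea, assuming each scheduled task carries a budget that accrues linearly over the task's duration (the task names, days and figures are invented for illustration):

```python
# Each task: (name, start_day, end_day, budget).
tasks = [
    ("excavation", 0, 10, 50_000.0),
    ("structure", 10, 40, 300_000.0),
    ("fit-out",   40, 60, 120_000.0),
]

def cost_to_date(day: int) -> float:
    """Planned cumulative spend at a given project day, assuming each
    task's budget accrues linearly between its start and end."""
    total = 0.0
    for _, start, end, budget in tasks:
        if day >= end:            # task finished: full budget spent
            total += budget
        elif day > start:         # task underway: pro-rata spend
            total += budget * (day - start) / (end - start)
    return total

print(cost_to_date(25))  # → 200000.0 (excavation done, structure half complete)
```

Plotting this curve against the 4D sequence is the basis of the time-cost views that 5D tools provide.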
### 6D
6D BIM, an acronym for 6-dimensional building information modeling, is sometimes used to refer to the intelligent linking of individual 3D components or assemblies with all aspects of project life-cycle management information. However, there is less consensus about the definition of 6D BIM; it is also sometimes used to cover use of BIM for sustainability purposes.
In the project life cycle context, a 6D model is usually delivered to the owner when a construction project is finished. The "As-Built" BIM model is populated with relevant building component information such as product data and details, maintenance/operation manuals, cut sheet specifications, photos, warranty data, web links to product online sources, manufacturer information and contacts, etc. This database is made accessible to the users/owners through a customized proprietary web-based environment. This is intended to aid facilities managers in the operation and maintenance of the facility.
The term is less commonly used in the UK and has been replaced with reference to the Asset Information Requirements (AIR) and an Asset Information Model (AIM) as specified in BS EN ISO 19650-3:2020. | https://en.wikipedia.org/wiki/5D_CAD |
Here you will find the registration forms for a first appointment at our practice. Please answer the questions completely and send the forms back to us (online, by email, fax or post). We will reply as soon as possible. Please also note that at our practice in Munich we may only treat privately insured and self-paying patients.
Private registration (PDF)
Our core values
Guiding principles of the practice community for child and adolescent psychiatry and psychotherapy, Dr. Alfred, Klaus-Werner Heuschen.
Children are our most valuable asset, and in our daily work we feel deeply committed to this principle. Within our society, children are assessed against growing performance standards from a very early age. It is therefore important to support children who have temporary or permanent difficulties in their psychological development and/or problems in meeting learning expectations. In our work we take into account all the relevant interactions of children as part of the (family) system. We fully respect the living circumstances of our patients and their families, their ethnic origin, their religious affiliation and their social status. A main focus of our work is the diagnosis and treatment of attention problems (ADHD) and of all disabilities related to this disorder. This includes reading and writing disorders, dyscalculia, and delays in motor skills or speech development, as well as emotional disorders with depressive, anxiety, autistic and obsessive-compulsive features, and psychotic and social problems. Within our practice we treat the complete spectrum of child and adolescent psychiatric disorders.
Furthermore, our medical doctors are psychiatric consultants to the “Klinikum Dritter Orden” and provide psychiatric care for a hostel for young people with psychiatric conditions in Dachau. We work in compliance with the guidelines and recommendations of the professional associations and with legal regulations. We use materials and resources economically. We are committed to the high ethical values of medical conduct, which historically go back to antiquity. The nucleus of our practice is our information counter, the liaison for all questions, applications, general information and scheduling. Our conduct towards patients and the communication among our employees are guided by respect and a friendly environment. The entire know-how of our large, multi-professional team (specialist doctors, doctors, psychologists, social educators, occupational therapists, therapists for special needs) works together for our patients and their families.
In accordance with the applicable regulations, we advise our patients (according to age) and their parents about necessary medical treatments, possible options and alternative treatment methods. These include therapies offered on our practice premises (individual therapeutic treatment, socio-educational family diagnosis within the home environment, group therapy for children of elementary-school and adolescent age, and parent training, also offered specifically for Turkish-speaking families), psychotherapeutic treatment or support via youth-welfare networks, as well as treatment with food supplements or medication. When necessary, we can write certificates, expert opinions or extensive reports. Over many years we have built up an extensive network of experts, including doctors and therapists, as well as close contacts to self-help groups, schools and school psychologists, educational institutions, clinics and specialized institutions for the treatment of young people with special needs. We can only fulfill these expectations because our doctors and all colleagues involved in diagnostic and secretarial work regularly attend continuing-education courses.
In cooperation with the ADHS-Zentrum München and other institutions, we organize workshops, seminars and symposiums on our premises.
Furthermore, we are authorized as a teaching practice in Munich for child and adolescent psychotherapists. We strongly believe in providing a learning environment for additional training in this field, in order to improve the capacity for the treatment of children and adolescents. To help fulfill this goal, we are engaged on a regional, national and international scale: we participate in committees and research projects that help improve patient care (e.g. the leadership of the ADHS competence network Munich/Oberbayern, the advisory council of the central ADHD network, the EFAK study for early diagnosis of ADHD, and research on food supplements). In summary: we regard ourselves as diagnosticians, consultants and therapists working in the interest of the best possible development of our patients.
Service + Quality-management
Our Service
- Appointments on short notice
- Emergency appointments in coordination with the pediatrician
- Available during the entire year
- Highly qualified multi-professional team (2 specialist doctors, 2 assistant doctors, 2 senior psychologists, over 10 psychologists involved in testing and group therapy)
- Very good network with individual practicing therapists for regular continuous therapy (AD/HS-Kompetenznetzwerk München/Oberbayern, Arbeitskreis für Legasthenie, MAP, CIP, DGVT, VFKV etc.)
- Group therapies such as the Marburger concentration training, parent training and AD/HD information evenings, also offered specifically for Turkish families
- Consultants to the Klinikum Dritter Orden
- Appealing, child-friendly atmosphere in both our practices with changing art exhibitions
Our Mission
Diagnosis and treatment of all neuropsychiatric disorders in childhood and adolescence, for ex. emotional disorders (depression, anxiety, compulsions), psychoses, ADHD (see www.adhs-muenchen.net), dyslexia, dyscalculia, speech disorders, developmental and perception disorders, eating disorders.
High-potential/gifted diagnosis
Neurofeedback
Therapies (for ex. Parent training, concentration-training for children, behavioral training, homeopathy, food supplements)
Medical reports for reintegration assistance (§35a SGB VIII/§53 SGB XII) or other clearances (for ex. custody, accommodation)
Quality-management
A vital component of our quality-management policy is regular feedback from our patients. Our goal is to improve our services continuously. Therefore, twice a year for a period of one week, we ask our patients to grade the services provided by our team.
Our last survey was conducted between the 2nd and 8th of August 2009. 228 patients participated in the survey, helping us to recognize our strengths and weaknesses and to improve.
Our grading system used the following scale: (1 = excellent, 6 = failure)
- The initial contact with our practice by telephone was graded 2.1; the first personal contact with our secretarial team was graded 1.8
- The friendliness and competence of our secretarial team were graded 1.8 and 1.9 respectively
- The professional support by our doctors and psychologists was graded 1.5
- Satisfaction with our diagnostic work and treatment was graded 2.2
- General satisfaction with our practice structure was graded 1.9
We are very pleased with this positive feedback and want to thank all our patients who participated in the survey.
We kindly ask for your continued support and critical evaluation in the future. Your sincere opinion gives us strength!
Neurofeedback
Operating manual for the brain
Creative potential
Neurofeedback is based upon the knowledge that every person can learn to steer their own brain activity. The interplay of strain (activation) and relaxation is very important in daily life. Whether learning with concentration, steering vigilance or reaching a conscious state of relaxation during test situations, the path to this goal is different for every individual. Creativity is a must!
Every patient is supported in finding his or her individual image corresponding to these states of mind. This can be achieved through the choice of positive images (submarine, jet, sun, etc.), which can be steered with the force of the mind.
High Tech, which activates your mind and provides a joyful experience
On the basis of behavioral therapy's well-known and proven "operant conditioning", the patient receives an instant reward for reaching the desired mental state.
The self-regulation process should lead to a change in brain currents and influence perception and behavior. This therapy involves higher costs and more personnel; however, it has no side effects and can be applied at any age.
Even if the degree of effectiveness is not as high as with medication, this form of therapy has its merits.
When applied in combination with medication, the medication dose can often be reduced.
At present, neurofeedback therapy is not yet part of the therapy catalogue of the statutory medical insurance. However, every patient should apply to their medical insurance company for reimbursement of costs. Private medical insurance companies normally reimburse the costs after a treatment plan has been submitted in advance.
Knowledge provides the tools
This form of treatment has been tested extensively over the past 10 years, and its effectiveness has been proven. Especially with ADHD patients (concentration disorders, problems with impulse control), difficulties with mood regulation, test anxiety and insomnia, as well as headaches, can be relieved quickly and lastingly!
Neurofeedback therapy is applied in our practice by experienced therapists and is reinforced in every session. Together with the patient, we decide how neurofeedback can be integrated into the overall treatment. If necessary, we can provide individual treatment with individual-, group- and family-therapy elements, including neurofeedback.
For more information ask in our practice. | https://praxis-alfred.de/patienten-informationen/informations-in-english/ |
In excerpts from Secretary of Defense James Mattis’s written testimony to the Senate Armed Services Committee (responses to follow-up “questions for the record”), the Secretary stressed the need for the United States to take a whole of government approach to climate change. His quote in full:
As I noted above, climate change is a challenge that requires a broader, whole-of government response. If confirmed, I will ensure that the Department of Defense plays its appropriate role within such a response by addressing national security aspects.
As climate change impacts all facets of society, it makes sense for the Secretary of Defense to suggest that a range of departments and agencies across the U.S. government should work together to respond to it. Leaving the issue siloed within one department or another would leave the United States fundamentally unprepared to adequately manage and prepare for the problem. If one agrees that a core function of the U.S. government is to protect its citizens and its critical institutions from physical harm, then it can be argued that the U.S. government has a “responsibility to prepare” for climate change risks to national security.
Our Take: New Intelligence and Presidential Memos on Climate Change and National Security
On September 21, 2016, the Obama Administration made two significant announcements related to climate change and national security – one which highlights the latest intelligence on the nature of the risk, and the second which lays the foundation for managing that risk across agencies. This included:
- A report from the National Intelligence Council (NIC): “Implications for US National Security of Anticipated Climate Change”;
- A Presidential Memorandum (PM): Climate Change and National Security, establishing an organizational framework for managing climate change risks to national security, to be run by the National Security Advisor and the Director of the Office of Science and Technology Policy (OSTP).
These releases reflect both the reality of this accelerating risk, as identified by many in the bipartisan national security community to date, and practical next steps recommended by the Climate and Security Advisory Group. | https://climateandsecurity.org/tag/presidential-memorandum/
Paving slabs are available in a range of shapes, sizes and designs. In this case we used 440x440x40mm slabs in a sand colour, spaced 50mm apart.
Laying a straight path using slabs is relatively easy – all you do is measure the total length to be covered (and the total width if you are going to lay more than a single row), add the spacing between each slab, and calculate the number of slabs you will need.
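The straight-path arithmetic can be sketched in a few lines (a minimal example in Python; the 6m run length is just an assumed figure, and the slab and gap sizes are the ones used in this project):

```python
import math

SLAB = 440  # mm, edge length of one slab
GAP = 50    # mm, spacing between slabs

def slabs_for_straight_run(length_mm):
    """Smallest number of slabs (with gaps between them) covering length_mm."""
    # n slabs span n*SLAB + (n-1)*GAP mm, so find the smallest n with
    # n*(SLAB + GAP) - GAP >= length_mm.
    return math.ceil((length_mm + GAP) / (SLAB + GAP))

print(slabs_for_straight_run(6000))  # a 6 m straight run needs 13 slabs
```

Thirteen slabs span 13 x 440 + 12 x 50 = 6320mm, just over the 6m run, whereas twelve would fall short at 5830mm.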
If, however, you wish to build a curve into the path then it becomes a little more complicated – but not overly so.
This is how you calculate the number of slabs required for, let’s say, a 90° curve.
Let’s say each slab is 440mm along each edge and you wish to have their inner corners touching.
You want a reasonably wide curve with a radius of, say 3m.
The circumference of a circle is 2πr (or πd) where π is pi – 3.142, ‘r’ is the radius. You can also calculate the circumference by multiplying the diameter by pi – hence πd, where ‘d’ is the diameter – they both amount to exactly the same.
So, the circumference of a circle with a radius of 3m is 2x3x3.142 = 18.852m. Round that off to 18.85m, then divide by 4 – because you want only ¼ of the circle. That comes to 4.713m, or 4713mm. Divide that by 440 to get 10.7 slabs.
What you want, then, is to use 10 slabs, giving a total ¼-circle length of 4400mm, or 4.4m.
Now work it backwards… 4.4×4=17.6m.
17.6÷2÷3.142=2.8 – hence your required radius must now be 2.8m.
And that’s all it is – simple substitution.
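The substitution above can be checked in code (a small Python sketch using the 440mm slab and the 3m, 90° curve from the worked example):

```python
import math

SLAB = 440  # mm, edge length of one slab

def curve_slabs(radius_mm, sweep_deg=90):
    """Whole slabs fitting along the arc, plus the corrected (worked-back) radius."""
    arc = 2 * math.pi * radius_mm * sweep_deg / 360   # arc length in mm
    n = int(arc // SLAB)                              # round down to whole slabs
    # Work it backwards: the radius whose arc is exactly n slab lengths.
    corrected_radius = n * SLAB * 360 / (sweep_deg * 2 * math.pi)
    return n, corrected_radius

n, r = curve_slabs(3000)   # 3 m radius, quarter circle
print(n, round(r))         # 10 slabs; corrected radius ~2801 mm (about 2.8 m)
```

This reproduces the article's figures: a 3m quarter circle fits 10 whole slabs, and working the arithmetic backwards gives the adjusted radius of roughly 2.8m.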
We wanted the path to line up with the existing paved patio, so we used a builder’s line, tightly stretched along the junction between the slabs, to align the path.
When laying the first slab, make sure that its surface is exactly level with the paving it joins – any difference in levels could cause someone to trip.
Add and remove soil from under the slab and constantly check that the slab is level on all axes. Once you have it level, lift it carefully and add soil to any areas where the underside of the slab might be ‘hanging in midair’ (it is important that the base of each slab is fully supported, to avoid cracking when a person steps on it).
This is how the slab is aligned with the line – note that it does not touch the line (if it does, the line will be pushed to one side and your path will be crooked).
The first three slabs laid… note the use of 50mm timber offcuts as spacers.
When removing soil, remove as little as possible; digging too deep means that the soil used to refill the space is uncompacted and will subside in time – as will the slab on it.
Getting the radius correct. Here we used stout rope with very limited stretch, but if you do not have any handy, use a length of chain. Avoid using string… in pulling it taut you can stretch it, and your perfect curve will end up ovoid – something to avoid, pardon the pun. To mark the curve, push your pointer into the soil at the last corner of the straight section, walk back at 90° to the straight section for the length of the radius, and push your pivot into the soil there. Now go back to your pointer and, keeping the rope taut, drag the point through the soil to where you want the curve to end. You should now have a perfect curve scored in the soil.
Lay your slabs out along the curved line you have just scored. When satisfied, dig out their respective positions and place them in their final positions – taking care to do all the levelling and filling and backfilling detailed above.
The completed first part of the path and curve in the background.
…Taking the time to also transplant some of the lawn-grass runners you dug out, to give the gaps an early start in filling with grass.
After the curve, this path straightened out again.
It met the existing brick paving on the other side at an angle of about 60°.
Just step this way… a view back to the patio area. Note that the soil excavated from the project has been used as topsoil for the surrounding lawn.
Estimated time: Depends on the size of the project… length of path, number of slabs to be laid etc.
These materials are available at Selected Mica Stores. To find out which is your closest Mica and whether or not they stock the items required, please go to our store locator HERE, find your store and call them.
If your local Mica does not stock exactly what you need they will be able to order it for you or suggest an alternative product or a reputable source. | http://www.mica.co.za/paving-a-step-in-the-right-direction/ |
Arguments against segment reporting
Consider the recent analyses of the effects of inflation on corporate profits. It has been stated that failure to disclose the effects of inflation, among other things, may be contributing to a misallocation of resources toward industries or groups of firms showing illusory profits.
To the extent that reporting requirements alter investor perceptions of relative rewards and risks, investors will shift toward more desirable investment opportunities. In general, this shift may be reflected in the manner in which new capital is allocated among firms. Investment and Credit Decisions: It is widely recognised by researchers in accounting and finance, accountants and accounting bodies that segment information has great usefulness in investment and credit decisions.
It is argued that segment information enables financial statement users to better analyse the uncertainties surrounding the timing and amount of expected cash flows, and therefore the risks, related to an investment in, or a loan to, an enterprise that operates in different industries and markets.
Since the progress and prospects of a diversified enterprise are composites of the progress and prospects of its several parts, financial statement users regard financial information on a less-than-total-enterprise basis as also important.
Each line of business is affected not only by general economic conditions but also by special industry factors such as volume, price and raw-material cost trends.
These, in turn, are combined into a consolidated projection. The same considerations apply to foreign operations by important geographic areas, even though the product line is not diversified.
Segment financial data, thus, are central to the analytical process. Profits are the source of funds for paying the interest and principal of loans. A banker is interested in segment information for short-term loans, to disclose areas of weakness such as unprofitable products or markets that absorb rather than produce funds for meeting debts.
It should be noted, however, that bankers have the power to demand more information from a client than the investors.
A number of studies have been conducted which conclude that financial statement users regard segment data as very useful in making proper economic decisions. For example, studies conducted by Kinney, Korchanek and Collins support the hypothesis that the availability of segment data offers information which enables users to better predict the future performance of the company.
Baldwin found that security analysts are able to make more accurate earnings projections with access to segmented data, and therefore concluded that segmented or line-of-business reporting would benefit users. Equilibrium in Share Prices: Segment disclosures would tend to adjust the prices of company shares according to the information released. Horwitz and Kolodny examined the influence of segment data on company share prices.
They took into account both changes in risk and changes in expected return resulting from segment profit disclosure. Their results support the no-information hypothesis.
Simonds and Collins do not agree with the Horwitz and Kolodny results and claim to find a significant reduction in risk for those firms reporting segment profit data.
A more recent study by Dhaliwal, Spicer and Vickrey supports the results of Simonds and Collins in that they find a reduction in the cost of equity capital for firms disclosing segment profit data for the first time. True and Fair View: An important provision of the Companies Act in India and abroad is to reveal a true and fair view of the results of operations and the financial position. Segment disclosures may well be required by the true and fair criterion established in the Companies Act. This has encouraged provisions for the disclosure of segmented information in the legislation of certain countries of the world, such as the USA and Canada.
Segment disclosures may be greatly required in terms of the true and fair criterion established in the Companies Act. This has encouraged provision for disclosure of segmented information in the legislation of certain countries of the argument such as the USA and Canada.
In some countries, the accounting bodies have prepared guidelines for the disclosure of segment information in company annual reports. An Australian study argues that an auditor may be held legally reporting in certain circumstances if he arguments an unqualified report on overall financial statements which do not reveal, segment they exist, significant disparities in segment results.
The above-mentioned benefits associated with segment disclosure point out that segment reporting is desirable in the published annual reports of diversified companies, to reveal the true and fair results of their business activities and to help investors make proper investment decisions. Nor is the spur to efficiency that comes from making managers account to stockholders capable of evaluation, either at the level of the enterprise or of the economy.
It is difficult to imagine a highly developed economy without the financial information that it now generates and, for the most part, consumes; yet it is also impossible to place a value on that information.
Arguments against disclosure of information about segments of a diversified enterprise generally emphasize practical difficulties. The opponents acknowledge the importance of segment reporting for investors, but point out two basic problems. Some arguments advanced against segment reporting may be listed as follows: Investment by investors and creditors is made in a company and not in its individual segments. Therefore, it is argued, investors require information for the company as a whole in order to make proper decisions.
In a study it was found that the majority of the companies did not believe that segment information was relevant to investors’ decisions. However, although investors invest in a company, a company is made up of its different segments, and segment information is very useful in making a better analysis of the risk-return characteristics of the investment.
Therefore, better predictions of both risk and future performance may be made from disaggregated data. Information about the segments of a business is also useful to an investor seeking a desired balance in his portfolio.
If such information is lacking, an investor may unknowingly maintain too large a commitment in some one field of industry, or he may pass up investment opportunities because he fails to understand and evaluate them correctly in the light of his own objectives.
Segment information might be misleading to the investors and other external users who read it. Operating data by segments are developed for internal management use, and often arbitrary judgments are made by management in developing such segment data. Although the nature and limitations of segment data are known to internal management users, external users have difficulty in understanding them and using them in investment decisions.
The limitations of segment data are inherent in the nature of accounting as a means of communicating information about a business enterprise. This is true in the communication of information at the company level also. Accounting is handicapped in disclosing all the information that is necessary for making decisions.
Similarly, a segment whose products are in the developing stage may compare unfavourably with another segment whose products are well developed. The products in the developing stage may be as essential to the company as the developed products, and sometimes developing products need to be pushed at the expense of more developed, profitable products. However, accounting is unable to communicate such information clearly; consequently, investors and creditors, not being aware of the limitations of accounting, may arrive at wrong conclusions in investment decision-making.
However, it is impracticable to cater for careless users of financial statements; they could misuse or ignore any information, aggregated or disaggregated, that is presented.
Besides, this criticism underestimates the ability of capital market participants to interpret, correctly and without bias, the information made available to them. It is true that it is difficult, indeed impossible, to know precisely the capacity of individual users to analyse information. Nonetheless, when users are considered as a group, there is substantial empirical evidence to support the hypothesis that they are very sophisticated in their ability to analyse and interpret information.
Segment data are also criticised on the ground that they cannot be prepared with sufficient reliability, and that it is beyond the scope of external financial reporting to provide such analytical or interpretive data. Information may be unreliable because it has one or both kinds of bias. The measurement method may be biased, so that the resulting measurement fails to represent what it purports to represent.
Alternatively, or additionally, the measurer, through lack of skill or lack of integrity, or both, may misapply the measurement method chosen. In other words, there may be bias, not necessarily intended, on the part of the measurer. However, the question of reliability is not applicable to segment reporting alone; it can be applied to the overall financial reporting framework. Also, it is not reliability in the absolute sense that is important. The main criterion is whether users are, in totality, better off or worse off if segment information is developed with possible accuracy and supplied to them. The Accounting Principles Board states:
Also, it is not reliability in the absolute sense that is important. The main criterion is whether segments are, in totality, better off or worse off if segment information is developed with possible accuracy and supplied to them. The Accounting Principles Rip van winkle analytical essay states: Nevertheless, the usefulness of information is enhanced if it is verifiable, that is, if the attribute or attributes selected for measurement and the measurement methods used provide results that can be corroborated by argument measurers.
It is argued that segment information is a rearrangement, i.e. a disaggregation, of information already collected; therefore, the information required in a segment reporting proposal does not go beyond or enlarge the boundaries of accounting. A reporting company has to incur costs in developing, preparing and providing segment information to external users, which may be too high. Also, a company has to incur competitive costs, i.e. the cost of revealing useful information to its competitors.
Horwitz and Kolodny advise: To reach this conclusion, we require a method of converting the affected security price change into a metric that can be used for comparison with the cost of preparing such data. To demonstrate that potential benefits result from additional disclosure is no longer adequate without consideration of related costs.
This task remains unresolved at present. Presenting the results of segment operations to external users could lead to competitive damage.
Confidential information would be revealed to competitors about profitable or unprofitable products, plans for new products or entries into new markets, apparent weaknesses which might induce competitors to increase their own efforts to take advantage of the weakness, and the existence of advantages not otherwise indicated.
Customers may mistakenly conclude that products are overpriced. Government authorities may erroneously decide that the company is employing unfair competitive practices. Disclosures having those results may harm the reporting company and ultimately its investors.
Consequently, there may be a negative impact on corporate innovation and competition. The prospective returns to innovative activity may be reduced, with the consequence that there is less innovation, an activity which is important to economic growth and the advancement of living standards.
However, there is some doubt about how individual companies would be affected by segment disclosures. In some quarters, there is a feeling that the problem of competitive damage can be exaggerated. The International Accounting Standards Committee observes: For this reason, some consider it appropriate to allow the withholding of certain segment information where disclosure is deemed to be detrimental to the enterprise.
Others believe that this requirement is no more onerous to a diversified enterprise than is the disclosure of the information required of an enterprise operating in only one industry or geographical area, and that relevant information is often available from other sources.
Also, analysis by segments of the aggregated financial information of a diversified enterprise is widely deemed to provide useful data that enable users to make a better assessment of the past performance and future prospects of the enterprise. The type of information which might be disclosed is not, in our opinion, likely in most cases to be sufficiently detailed to cause commercial problems.
Companies often have more useful intelligence on competitors than segment data reveal. It is also said that competitors generally already know a great deal about each other.
In many cases, competitors are an excellent source for obtaining withheld and confidential operating data about business enterprises.
If competitors seem to possess all the information, the owners and investors would be the only parties uninformed about data regarding the various segments in which the company is engaged.
Besides, segment information is basically meant to permit external users to make a better assessment of the past performance and future prospects of an enterprise operating in more than one industry. From the viewpoint of the total economy, a loss due to disclosure incurred by one company would be a gain for another company.
If all diversified companies are required to disclose segment information, few of them may suffer a net loss. The benefits and costs of segment reporting are likely to be widely diffused throughout society. Rappaport and Lerner describe some possible societal benefits: When businesses engage in disparate activities with varying demand and cost characteristics, the information content of financial statements is likely to be enhanced when the results of each activity are separately reported.
The business community as a whole therefore benefits from more useful information on two counts. First, business can initiate activities and expand into new segments with less risk and, therefore, at a lower cost than might otherwise be possible. Second, the rate of return on investments will tend to be higher because fewer false starts or errors of total ignorance are likely. Marginal revenues from producing one additional unit of output will be driven nearer to the marginal costs of producing that unit, a process which is instrumental in generating favourable economic conditions.
An important question is whether any unfair costs or losses will accrue to reporting companies and shareholders or external users.
Segment Reporting: Concept, Need and Difficulties
This question has not been investigated empirically so far, and future researchers should examine it and report the findings. Thus, users of financial statements need segment information to assess the profitability and risks of a diversified enterprise, which may not be determinable from the aggregated data.
Diversified companies present a difficult and special problem for investment decision-making. The progress and success of a diversified company are composites of the progress and success of its several segments.
Proponents of segment reporting contend that information about separate segments contributes to investor evaluations of diversified companies. Segment Disclosure and Investment Decision-Making: Investor uncertainty about company prospects will thus be reduced, share prices will be set more accurately, and a more efficient segment of resources will be against. Besides the investors, it has been suggested that segmental reports are likely to be useful to employees and trade unions, consumers, the argument public, government and also for the purpose of promoting managerial efficiency.
Employees and trade unions are interested in the performance and prospects of the firm from the standpoint of wage negotiations and job security; hence, segmental reports may be just as relevant to them as to shareholders. There is also a need for information on segmental performance so that policy decisions by management to develop or curtail particular activities can be verified and understood.
Lack of information, on the other hand, may lead to mistrust and labour relations problems. The interests of consumers and the public may also be promoted by segmental disclosure, in the sense that corporate responsibility, in terms of the removal of price discrimination, could be encouraged by the disclosure of profits by segment. Consumers may also benefit from the increased competition that may result.
Segmental disclosures by geographical location seem likely to promote a Writing good software engineering research papers proceeding understanding of corporate strategy and its impact, and will thus provide a more reliable base Proclamation act of 1763 essay governmental policy-making.
Furthermore, legislation relating to mergers, acquisitions and competition policy seems likely to be more effective if based on more comprehensive information.
Difficulties in Segment Reporting: The difficulties involved in segment reporting relate to implementation rather than to its concept and theory. Some difficulties are listed as follows: Base or Bases of Segmentation: How a diversified company should be fractionalized for reporting purposes is a problem in segment reporting.
A diversified company may be divided for segment reporting purposes in terms of organisational division, industry, market, customer, product, etc. Each base of segmentation may create segments that differ significantly in profitability, growth and risk, and each implies a different basis for identifying segments.
Moreover, more than one form of diversification may be present in the same company. Unless the base or bases selected actually represent the company and the way it operates, and unless they reflect the differences within the company regarding rate of profit, degree of risk and potential for growth, reports of operating data by segments are unlikely to be of any real use.
Allocation of Common Costs: In a business enterprise producing more than one product or engaged in different activities, there are likely to be costs which are common to two or more of the segments. | http://oderio.altervista.org/wp-content/themes/twentysixteen/css/10-2010.arguments-against_8646.php
The VBA’s Proactive Inspections Program has identified sites where excavation work has been completed without the required protection measures in place to protect adjoining properties from the effects of unstable site cuts, such as retaining walls/site batters.
In many instances, these situations are identified a few months into the building process. Remember: where protection of adjoining properties is necessary, the work should not be left until the end of the project, but attended to before the main construction starts.
Builders need to recognise that because building work can severely impact adjoining properties, it is crucial to follow the protection work process set out in Forms 3 & 4, within the timeframe specified. The owner/builder should also ensure appropriate insurance cover and a survey of adjoining properties are in place prior to the protection work.
Building Inspectors and Building Surveyors are reminded that when inspecting sites at the mandatory notification stages, protection work measures must be in place as per the building permit documentation and in accordance with the Form 3 & 4 procedures and timeframes. Where departures or issues are identified, appropriate enforcement actions must be taken.
What can a Practitioner do?
Follow the protection work procedures and methods specified through the building permit process and Form 3 & 4 served on the adjoining properties.
Site excavation on the side boundary approximately 600mm high without a retaining wall being installed, many weeks after the site excavation was completed and construction of the house was well under way.
I would like to speak to someone regarding protection insurance for a property above mine in an apartment block. I am demolishing a wall inside my property.
I have a building permit issued for the works along with a structural engineer’s drawings and computations. | https://www.buildsafe.com.au/protection-of-adjoining-properties/ |
The invention discloses a method for decorating metal glaze on a ceramic glaze surface based on a rubbing technology and belongs to the field of inorganic nonmetallic materials. According to the technical scheme, the method includes the steps of: a, manufacturing rubbing pieces used for covering different color blocks of a ceramic blank according to the surface shape and color of a designed ceramic glaze layer; b, using the rubbing pieces to cover the corresponding color blocks of the ceramic blank; c, removing the rubbing piece from a color block before spraying the metal glaze onto that block, re-covering the sprayed block with its rubbing piece after spraying, and continuing to spray the metal glaze onto the other color blocks; d, firing the sprayed ceramic to obtain a product. The method reduces steps and improves spraying efficiency, and the technique is simpler. The rubbing method can simplify the preparation technique for decorating a ceramic tile, and the product pattern is more precise; the method can be used for mass production. | 
Where did Hinduism and Buddhism originate?
Northern India
Hinduism and Buddhism originated in Northern India, but later expanded throughout Asia around 500 BCE.
Did Buddhism originate from Hinduism?
Buddhism is an offshoot of Hinduism: its founder, Siddhartha Gautama, started out as a Hindu. Buddhists believe it was through meditation that Gautama reached true enlightenment.
Is Hinduism older than Buddhism?
Buddhism was founded by the Indian prince Siddhartha Gautama in approximately 566 BCE (Before Common Era), about 2,500 years ago. Hinduism, however, is the oldest of the four main religions.
What does Hindu mean in Persian?
Yes, it is a Persian word meaning defeated or slave. A Persian dictionary titled Lughet-e-Kishwari, published in Lucknow in 1964, gives the meaning of the word Hindu as “chor [thief], dakoo [dacoit], raahzan [waylayer], and ghulam [slave].” The writer claims the name was imposed on Indians by Brahmins.
Buddhism, in fact, arose out of Hinduism, and both believe in reincarnation, karma and that a life of devotion and honor is a path to salvation and enlightenment.
Where did the religion of Hinduism come from?
Hinduism originated in India more than 5,000 years ago. It likely came from the northern part of the country, where the Indus River is located. Persians in the area referred to it as the Hindu River, and the name was also applied to the religion.
Who was the first person to use the term Hindu?
The first known use of the term Hindu is from the 6th century BCE, used by the Persians. Originally, then, Hinduism was mostly a cultural and geographic label, and only later was it applied to describe the religious practices of the Hindus.
What was the history of Hinduism before 2000 BCE?
Although the early history of Hinduism is difficult to date with certainty, the following list presents a rough chronology.
- Before 2000 BCE: The Indus Valley Civilisation
- 1500–500 BCE: The Vedic Period
- 500 BCE–500 CE: The Epic, Puranic and Classical Age
- 500 CE–1500 CE: Medieval Period
When was Hinduism introduced into the English language?
The term Hinduism, then spelled Hindooism, was introduced into the English language in the 18th century to denote the religious, philosophical, and cultural traditions native to India. | https://holidaymountainmusic.com/where-did-hinduism-buddhism-originated/ |
A coalition of youth groups in Accra has demanded an ambitious action plan to mitigate climate change in the country.
They also stressed the need for stakeholders to stimulate action among policy makers and development partners to take pragmatic measures to avoid the worst impact of climate change for the common good of the world.
The youth groups who embarked on a peaceful march in Accra are made up of Abibiman Foundation, Young Reporters for the Environment, Ghana, Action Aid, Ghana, Green Way International, and students of the Kinbu Senior High School.
They presented a petition addressed to the Minister of Environment, Science, Technology and Innovation, Mr Kwabena Frimpong Boateng, through the Accra Metropolitan Assembly; it was received by Mrs Levina Korama Owusu, the Acting Director of the Ministry.
The petitioners demanded: “We want a clean environment devoid of carbon pollution and Government must put in place measures and policies to reduce emissions, where possible.
Political and private sector leaders should ensure we have enough natural landscape to absorb and store climate emissions.
“Our natural resources must be protected for their numerous benefits and not haphazardly exploited in a manner that destroys our socio-ecological integrity for a harmonious environment for all.”
Mr Kenneth Nana Amoateng, the Chief Executive Officer of the Abibiman Foundation, in an interview with the Ghana News Agency called on government to address the environmental challenges posed by desertification and climate change, such as indoor kitchen pollution that kills women and children, increased erosion, flooding and deforestation.
He urged national leaders to put measures in place to stop all activities that contributed to climate change and promoted sustainable alternatives.
Ms Ellen Lindspy Awuku, the National Coordinator of the Young Reporters for the Environment, Ghana, said: “We are concerned about the unfriendly living conditions caused by climate change, since we the youth are the vulnerable ones.”
She said climate actions must be prioritised and captured in budgets so as to provide funds to implement activities to prevent the impact of climate change in communities.
Mr Mohammed Adjei Sowah, the Accra Metropolitan Chief Executive, said the Assembly considered the development of the youth important and as such had introduced various initiatives, including the C40 Climate Change Resilience Strategies, to enhance their living conditions.
He said the launch of the greening project in Accra, the provision of household bins and the promotion of waste segregation in schools were some of the on-going initiatives put in place to improve environmental conditions in the city.
Mr Sowah said with the initiatives in place it was now the responsibility of the agencies under the Ministry of Environment, Science, Technology and Innovation to ensure their sustenance. | https://newsghana.com.gh/accra-youth-groups-demand-action-plan-on-climate-change/ |
Diversity is the differences among people that make us unique and shape our experiences. These differences include race, ethnicity, size, age, ability, gender, sex, socio-economic status, sexual orientation, veteran status, gender identity and expression, birth order, spiritual practices, personality, thought process, research interests; the list is almost endless. The Office of Diversity, Equity, and Inclusion aims to create and maintain an accessible, inclusive and supportive community for all who learn, work and teach in the College of Food, Agricultural, and Environmental Sciences (CFAES) and OSU Extension at The Ohio State University.
One of the primary missions of the Office of Diversity, Equity, and Inclusion is to serve as a resource to faculty, staff and graduate students within the departments of CFAES that will help them to:
- Create a college-wide culture that is welcoming and inclusive of all faculty, staff and students and the diversity that we all bring.
- Learn more about working across differences.
- Develop skills to help them recognize and address implicit bias.
- Provide them resources for understanding people from different groups.
With all that diversity encompasses, we are diverse as a college; yet at the same time, there are areas of diversity where we are underrepresented. Another goal of our office is to increase the numbers of underrepresented populations. However, increasing numbers is meaningless if we have an environment where faculty, staff, and students do not feel like they belong. If the environment is not inclusive, CFAES will not be a place where students, faculty, and staff want to be. Our college is on a journey to create the best place to work, learn and grow.
Explore our website to learn more about the programs, events, and resources that we maintain to support our mission. | https://equityandinclusion.cfaes.ohio-state.edu/about-us/our-mission |
The Khemetic Community Speaks Out!
Brother Tunde Ra
Preserve The Ancient Sacred Egyptian Treasures!
It was not your ordinary press conference. The visual richness was striking, as the Khemetic community began with an ancestor veneration ritual. The speaker chanted ancient Egyptian sacred incantations and bade the audience to call out names of honored ancestors who had danced to that mysterious land beyond this world. Then he read out a long list of scholars and cultural institutions that have served the black community and sang their praises. It reminded me of the ancient Yoruba Igun Igun ritual, which testifies to the cultural unity of Africa celebrated by Dr. Cheikh Anta Diop.
Displaying the Sacred Ankh
Avatars of The Khemetic Community
Queen Afua’s Dance
Sacred Movements!
The first speaker to follow the MC was Queen Afua, a natural healer and writer who has practiced the ancient Egyptian healing arts for more than a quarter century. She has written books addressed to African peoples on ancient Egyptian culture and sacred beliefs. After anointing the audience with warm salutations, she performed a sacred dance for us. She was followed by Tunde Ra, who gave the official position of the Khemetic community regarding the present uprising and the responsibility of all parties to protect the ancient treasures of Egypt. He pointed out that while they hold special meaning to them, these treasures are the heritage of all mankind.
Then the keynote speaker, Baba Samahj Se Ptah, took the podium and read an erudite letter he had written to the Curator of Egyptian Antiquities at the world famous Metropolitan Museum of Art.
Samahj Se Ptah
Mother and Son Listen to The Father’s Words
The letter was spurred by the decision of the MMA to return some ancient art treasures to Egypt. “I read in the World News section of the Wall Street Journal the intent of the Metropolitan Museum of Art to return nineteen antique artifacts to the United Arab Republic of Egypt. This sets an awful precedent. The Western world has been at the forefront of preserving artifacts from ancient antiquity. With regard to Egyptian antiquities in particular, “he writes,” had it not been for the Frenchman Augustus Mariette, the founder of the Cairo Museum, the ancient artifacts of Egypt would have been plundered and sold to private collectors by illiterate grave robbers whose very culture encourages the collection of booty (treasure). The internecine religious conflict in the Middle East does not bode well for the preservation of ancient artifacts fashioned by “Infidels.”
Although Samahj does not mention them by name, it is obvious that it is the Muslim Arabs he is referring to. His comments demonstrate a sophisticated knowledge of history and religion that is often absent in Afro-American nationalists, many of whom have been influenced by the Nation of Islam. Others identified with Orthodox Islam as a way of rejecting their western Christian heritage – especially high profile jazz musicians like Ahmad Jamal, Yusef Lateef, Sahib Shihab, et al – which they rightly viewed as the religion of their slave masters.
Yet blacks who became Muslims failed to see that what was true of white Christians is also true of white Arabs, Turks, Persians, et al. However the justification for the enslavement of Africans by the Arabs or the Europeans – Christian or Jew – was the polytheistic nature of African religions: Whether Egyptian or Yoruba! Just as monotheism is characteristic of Semitic cultures, polytheism is characteristic of African cultures.
The danger posed by militant Muslims to the artifacts produced by “Pagan” cultures was recently demonstrated by the Taliban in Afghanistan, who dynamited the colossal Buddhist statues in that country. Baba Samahj recognized this danger: “The fate of the Buddha in Afghanistan is a harbinger of what awaits the Pyramids of Giza as well as the Temples, Oblisks, Stellas and sculpted images of ancient Kings, Queens and other ancient African Nobles of the Nile.” He went on to declare: “The Arabs are the last of the rapists of that great civilization of the Nile…they drove out the Romans from that holy Land and have squatted there ever since plundering, desecrating and stealing until curbed by western man. THE WEST OWES THEM NOTHING!”
Baba Samahj went on to admonish the MMA to “please remind Mr. Hawass of the Arab Republic of Egypt that he has no claim to these artifacts; that his religion prohibits the fashioning of beauty and artistry which his nomadic ancestors were incapable of producing anyway. The west must not allow the proclivity for violent solutions practiced by fundamentalist believers to intimidate or coerce you into giving up treasures which your ancestors have risked their very lives crawling on their bellies with pick and shovel through rat and bat dung to bring to the light of day.”
This statement is easily the most interesting that I have heard on the question of preserving ancient Egyptian treasures, and after news reached the outside world that these priceless treasures were being desecrated and pillaged. However Mr. Samahj was not finished; he also had a word to say about the historical role of westerners in the pillaging of Egyptian art treasures. “I never in my wildest dreams guessed that I’d be defending western man. Many of whom were also plunderers of African ancestral treasures with fancy titles such as anthropologist, Archeologist and Egyptologist.” But then in a magnanimous gesture he said: “Yet in all fairness we ascendants of those Africans of the Nile must be grateful to those honest westerners who wrestled with nomadic thieves to preserve ancestral crafts in museums where we people of African ascent in particular, and humanity in general can go to receive inspiration.”
Supernova…..
Son Of Samahj and Afua
Baba Samahj Se Ptah was followed by his son, who went to great lengths to try to disprove long established facts about ancient Egypt – such as the fact that the Egyptians were polytheistic. This is a curious argument, since all indigenous African religions are polytheistic – it is what distinguishes them from the Semites. We are told to ignore the voluminous scholarship of well trained Egyptologists, when in fact these folk know what they know about Egypt because of these scholars…whose contribution the father had already admitted. The son’s argument was unconvincing. And he never tells us why we should believe him, except that he claims a direct racial connection to the ancient Egyptians. It was a model of spurious argument. Before he was done he even professed to be a “follower of Christ,” although he denied being a Christian.
One of the most curious contradictions that ran through their discourse is the attraction/repulsion theme. Islam and the Arabs are denounced as racists who desecrated African shrines on the one hand, and extolled as brothers on the other. Baba Samahj pointed out that the destruction of the great colossi in Afghanistan was committed by Muslim fanatics; then they praised “Allah.” Yet Islamic theology cannot be reconciled with the beliefs of ancient Egypt.
Ali Torian
The future Of the Khemetic community
Tunde Ra
Convener Of The Conference
Like all spiritual belief systems or sacred theology, the Khemetic community’s interpretation of ancient Egyptian religious beliefs may or may not accord with the historical record. But that is of no matter here, because the beliefs of Christians, Jews and Muslims are equally ahistorical. Which is to say that their sacred narratives do not meet the standard of evidence established by professional historians, who are best equipped to teach us about what actually happened in the past. Those who deny this do so because they are ignorant of the modern scientific method of resurrecting history.
It is easy enough to silence these critics simply by challenging them to explain their method and to present a body of scholarship based on that methodology, then compare it to the vast scholarship of the traditional academic Egyptologist. But this is a superfluous exercise because they are about the business of making myths, not teaching history, and the only test that matters for believers in any theology is whether it serves their needs. And the theology – which simply means “God talk” – constructed by the Khemetic community serves their purposes well.
Although many who adhere to Khemetic theology may not be aware of this fact, they belong to a tradition of African centered Black Nationalism that goes back a couple of hundred years in the US. It began in the 18th century, and by the middle of the 19th century it was the dominant ideology of the black community. However, the place of Egypt in African American nationalist ideology has a curious history. Whereas twentieth century nationalism is Egyptocentric, the leading nationalist thinkers of the 19th century, the founding fathers of contemporary Black Nationalist thought, despised ancient Egypt and considered it a wicked place.
This is because most serious nationalist thinkers were Christian clergymen who were deeply knowledgeable of biblical texts, and to a people enslaved in America the US president was a modern Pharaoh and the US a reincarnation of the land of Egypt, the house of bondage. Hence they viewed the ancient Egyptians as a nation of godless idol worshippers who enslaved the Jews…whom their Bible said were “God’s chosen people.” Men like Bishop Alexander Crummell, who went and lived in Liberia after earning a degree in philosophy from England’s Cambridge University, or the erudite Presbyterian clergyman Edward Wilmot Blyden, who hailed from the Danish Virgin Islands – which was also the native country of the early twentieth century black radical intellectual Hubert Harrison.
The point here is that the nationalist ideology embraced by contemporary black Americans is diametrically opposed to that of their ideological ancestors in the 19th century on the question of Egypt. It is important to understand this because it provides a historical perspective from which we can better understand the function of Black Nationalist ideology today. If one listens carefully to the arguments of the Khemetic community, it is clear that they are motivated by a desire to create a classical antiquity for black folk that rivals, if not surpasses, the Greco/Roman civilization of Europe.
The impetus for this, however, is the same as that which motivated the “African Redemptionists” of the 19th century. These men – and the major thinkers were all men – sought to identify a “golden age” of black civilization to counter the racist propaganda of whites who claimed that black people were incapable of building a great civilization. This claim was no picayune matter; it was the basis for their justification of the enslavement of black people, and it allowed them to engage in the most barbaric practices against us and yet claim that “all men are endowed by their creator with the inalienable right to life, liberty and the pursuit of happiness.”
Historically, the rise of Black Nationalism was a response to the aggressions of white nationalism, and whether or not it is apparent to those who practice African cultural nationalism today, this remains their fundamental motivation. In the end, as I listened to Baba Samahj’s son recount the good works the Khemetic temple is performing, their work sounded just like the work of the Christian church and the Muslim mosque in the US: uplifting the race! It is called salvation among the Christians. And the approach the Khemetic community is using to accomplish its mission is 100% American!
| |
A Justin man has been recognized for his quick action in January in helping to stop the car of a woman who was having a seizure while driving on Interstate 35E, the Congressional Medal of Honor Foundation announced Wednesday.
Bryan Jacobs is among 20 finalists for the Citizen Service Above Self Honors. The awards recognize Americans "who become extraordinary through their indomitable courage and selflessness," according to a foundation press release.
Jacobs was among 10 finalists cited for performing a single act of extraordinary heroism.
On Jan. 11, Jacobs saw the woman's car strike the center median of the highway and continue to swerve, according to the foundation.
He pulled next to the woman's car and saw that she was slumped over the wheel, a foundation release said.
Jacobs then pulled his car in front of hers and he was able to use his bumper to slow her car to a stop. He then rendered aid to the woman, the foundation said.
"Bryan Jacobs' selflessness and bravery in saving the woman's life and preventing more accidents from occurring continues to be an inspiration," the foundation said in citing his efforts.
Ten other finalists were chosen across the nation for "their willingness to sacrifice for others through a prolonged series of selfless acts," according to the foundation.
The Congressional Medal of Honor Foundation is a nonprofit organization that attempts to perpetuate the Medal of Honor’s legacy of courage, sacrifice and patriotism.
For more information, visit the foundation's Web site. | https://www.star-telegram.com/entertainment/living/family/moms/article3825102.html
Irritable bowel syndrome, or IBS, is a common condition that affects between 25 and 55 million Americans, the majority of whom are women. The condition most often occurs in people in their late teens to early 40s.
In essence, the condition is a combination of abdominal discomfort or pain and altered bowel habits: either altered frequency (diarrhea or constipation) or altered stool form (thin, hard, or soft and liquid).
IBS is not a life-threatening condition and it does not make a person more likely to develop other colon conditions, such as ulcerative colitis, Crohn's disease, or colon cancer, or any diseases of the heart or nerves. Yet IBS can be a chronic problem that can significantly impair quality of life in those that have it. For example, people with IBS miss work three times more than people without IBS and the condition is associated with absenteeism from school, decreased participation in activities of daily living, alterations of one's work setting (shifting to working at home, changing hours), or giving up work altogether.
What Are the Symptoms of IBS?
Among the symptoms associated with IBS are:
- Diarrhea (often described as violent episodes of diarrhea).
- Constipation.
- Constipation alternating with diarrhea.
- Abdominal pains or cramps, usually in the lower half of the abdomen that are aggravated by meals and relieved by having a bowel movement. Often the person has more frequent bowel movements when they have pain and the stools are looser.
- Excess gas or bloating.
- Harder or looser stools than normal (rabbit like pellets or flat ribbon stools).
- Visible abdominal distension.
Some people with IBS have other symptoms not related to their digestive tract, such as urinary symptoms or sexual problems.
Symptoms of IBS tend to worsen with stress.
People with IBS have traditionally been described as having "constipation-predominant," "diarrhea-predominant," or an alternating pattern of constipation and diarrhea. Each type represents about a third of the overall IBS population.
What Causes IBS?
Two hundred years after the condition was first described, experts still don't completely understand what causes IBS symptoms.
Many experts think that it is a problem of bowel motility -- the muscles in the bowels don't contract normally -- affecting the movement of stool. But some studies don't show that the poor bowel motility correlates with symptoms. Also, drugs that alter motility don't seem to benefit most people with IBS.
Newer studies suggest that in IBS, the colon is hypersensitive, overreacting to mild stimulation by going into spasms. Instead of slow, rhythmic muscle contractions, the bowel muscles spasm. That can either cause diarrhea or constipation.
Another theory suggests that a number of substances that regulate the transmission of nerve signals between the brain and GI tract may be involved. These include serotonin, gastrin, motilin, and others.
Some have also suggested that there is a hormonal component to the condition, as it occurs in women much more frequently than in men. So far, studies have not borne this out.
A number of factors can "trigger" IBS, including certain foods, medicines, the presence of gas or stool, and emotional stress.
How Is IBS Diagnosed?
The diagnosis of IBS relies on the recognition of the symptoms as well as an extensive evaluation to rule out other causes. There are no specific lab tests that can be done to diagnose IBS. Therefore, your health care provider may run some tests to rule out other conditions such as:
- Food allergies or intolerances, such as lactose intolerance and poor dietary habits.
- Medications such as high blood pressure drugs, iron, and certain antacids.
- Infection.
- Enzyme deficiencies where the pancreas isn't secreting enough enzymes to properly digest or break down food.
- Inflammatory bowel diseases like ulcerative colitis or Crohn's disease.
The clinical diagnosis of IBS can be made by your doctor after a thorough history and exam and once other metabolic or structural conditions have been eliminated as a cause. Your health care provider may perform one or more of the following tests for further evaluation:
- Flexible sigmoidoscopy or colonoscopy to look for signs of intestinal obstruction or inflammation.
- Upper endoscopy if heartburn or indigestion is present.
- X-rays.
- Blood testing to look for anemia (deficiency in red blood cells), thyroid problems, and signs of infection.
- Stool testing for blood or infections.
- Testing for lactose intolerance or gluten allergy (celiac disease).
- Specific testing to look for bowel motility problems.
How Is IBS Treated?
Treatment of IBS involves a collaborative effort between the doctor and the patient to manage symptoms and may consist of lifestyle changes and drug treatments.
Nearly all people with IBS can be helped, but no one treatment works for everyone. Usually, with a few basic changes in diet and activities, IBS will improve over time.
Here are some steps you can take to help reduce symptoms of IBS:
- Avoid caffeine (found in coffee, teas, and sodas).
- Increase fiber in your diet (found in fruits, vegetables, grains, and nuts).
- Drink at least three to four glasses of water per day.
- Don't smoke.
- Learn to relax, either by getting more exercise or by reducing stress in your life.
Try limiting the amount of milk and cheese you consume. Eat smaller meals more often or eat smaller portions. However, if you have IBS and are concerned about your calcium intake, you can try other sources of calcium. These sources include broccoli, spinach, turnip greens, tofu, yogurt, sardines, and salmon with bones, calcium-fortified orange juice and breads, calcium supplements, and some antacid tablets.
Keep a record of the foods you eat so you can figure out which foods bring on bouts of IBS. Common food "triggers" of IBS are red peppers, green onions, red wine, wheat, and cow's milk. | http://ocgastroenterologist.com/irritable-bowel-syndrome |