THE TOP TEN WEBSITES THAT DISSEMINATE MISINFORMATION ABOUT VITAMIN C
Research by Virginia Kraus, MD. Collaborators on the study include Janet Huebner, Thomas Stabler, Charlene Flahiff, Loria Setton, Christian Fink and Amy Clark, all of Duke.
Press Release
Thu Aug 12 09:41:21 EDT 2004
Report entitled: MEDLINE PLUS Update, Vitamin C, Date: 1/18/2003 by: Steven Angelo, M.D., Assistant Professor of Medicine, Yale School of Medicine, New Haven, CT. Review provided by VeriMed Healthcare Network.
Misinformation:
Falsely claims scurvy, a deficiency of vitamin C, is rare in the USA. "A deficiency of vitamin C causes the disease scurvy, which is rare in the United States."
Falsely claims excessive doses of vitamin C "can lead to toxicity."
Falsely claims vitamin C toxicity can produce kidney stones.
Falsely claims high-dose vitamin C impairs absorption of vitamin B12. Many studies have disproven this claim. [Am J Clin Nutr. 1981 Jul;34(7):1356-61; Am J Clin Nutr. 1976 Jun;29(6):645; Scott Med J. 1982 Jul;27(3):240-3]
Accurately claims the "Recommended dietary allowances (RDAs) are defined as the levels of intake of essential nutrients that, on the basis of scientific knowledge, the Food and Nutrition Board judges to be adequate to meet the known nutrient needs of practically all healthy persons," but fails to inform the public that the current RDA meets the vitamin C needs of few if any Americans.
Mistakenly advises that the best way to get the daily requirement of essential vitamins is to eat a balanced diet that contains a variety of foods from the food guide pyramid. The food pyramid has been re-done twice since the RDA for vitamin C was established. The most common plant foods consumed by Americans (in order - iceberg lettuce, tomatoes, French fries, orange juice and onions) provide little vitamin C.
http://www.niddk.nih.gov/welcome/releases/4_15_96.htm
Report entitled: 200 Milligrams Daily of Vitamin C is Appropriate, April 15, 1996 press release
Misinformation:
Mistakenly claims 200 milligrams is all that is needed by healthy individuals.
Advocates the consumption of five servings of fruits and vegetables to obtain 200 mg of vitamin C. But the National Cancer Institute now advocates 9 servings of fruits and vegetables based upon the fact that 5 servings did not lower the risk for heart disease or cancer.
Misleads consumers into thinking high-dose vitamin C causes kidney stones.
Mistakenly claims 200 milligrams of vitamin C saturates the blood plasma. The saturation studies for vitamin C were performed 12 hours after oral ingestion of vitamin C without calculating for the half life of the vitamin. The half life (when half of the dose is gone) for vitamin C is 30 minutes, so measuring blood plasma saturation 12 hours (or 24 half lives) following oral ingestion, is a patently bogus methodology. See S. Hickey, H. Roberts, Ascorbate: The Science of Vitamin C, available at www.lulu.com/ascorbate.
Report entitled: NIH Research Shows 100 to 200 Mg of Vitamin C Daily May Benefit Healthy Adults, April 20, 1999
Misinformation:
Indicates "The RDA is based on the amount of vitamin C needed to prevent scurvy, a potentially fatal disease marked by fatigue and bleeding," but fails to mention amounts of vitamin C needed for optimal health.
Errantly states "Our work reinforces the health message that healthy people should be eating five servings of a variety of fruits and vegetables every day. You'll get adequate vitamin C and you have the potential benefit of preventing disease, especially certain cancers." The National Cancer Institute concedes that 5 servings of fruits and vegetables a day has not reduced the risk of cancer and heart disease and now advocates nine servings a day.
Without substantiation, makes the false claim that "healthy people are better off eating fruits and vegetables rather than relying on supplements because absorption of the vitamin in supplements varies widely, depending on manufacturing methods and the dose taken."
Makes the false claim that "at 1,000 mg, some volunteers showed high levels of oxalate and uric acid in their urine, which might lead to kidney stones." The levels of oxalate in studies were only marginally higher, and other studies dispel the idea that vitamin C pills increase the risk for kidney stones.
Makes the inaccurate claim that "at 200 mg, blood plasma had more than 80 percent maximal concentration of vitamin C, and tissues were completely saturated." Recent studies performed by National Institutes of Health researchers themselves indicate blood plasma concentrations of vitamin C can increase three times beyond the mythical "saturation point."
Report entitled: Twenty-Five Ways to Spot Quacks and Vitamin Pushers
By Stephen Barrett, M.D. and Victor Herbert, M.D., J.D.
Misinformation:
Falsely maintains the diet provides all the nutrients necessary to maintain health.
Mistakenly claims that health quacks claim that the Recommended Dietary Allowances (RDAs) Have Been Set Too Low because then "you are more likely to buy supplements."
Claims anyone who recommends dietary supplements are "beneficial for everyone" is a health quack.
Falsely claims "no normal person following the U.S. Dietary Guidelines is in any danger of vitamin deficiency."
Misinformation:
Mistakenly claims doses "greater than 1000 mg daily are not recommended" because vitamin C increases iron absorption and high levels of iron may be associated with an increased risk of cardiac disease. "Another concern about consuming high doses of any antioxidant is that under certain conditions, antioxidants can have the opposite effect (ie, can become a pro-oxidant) and perhaps damage cells and DNA." Vitamin C supplements do not induce iron overload and do not cause DNA damage in humans. [Science. 2001 Sep 14;293 (5537):1993-5; Int J Vitam Nutr Res. 1999 Mar;69(2):67-82]
Report entitled: Mayo Clinic, Ascorbic Acid (Vitamin C) DR202071; May 01, 1995
By Micromedex Inc.
Misinformation:
Claims that the "increased need for vitamin C should be determined by your health care professional." There are few if any health professionals who understand how to estimate the body's increased need for vitamin C.
Mistakenly claims that "the RDAs for a given nutrient may vary depending on a person's age, sex, and physical condition (e.g., pregnancy)." The RDA is calculated for healthy persons only and is not adjusted for age or physical condition.
Blood problems-High doses of vitamin C may cause certain blood problems
Misleads consumers into believing high-dose vitamin C is potentially harmful for diabetics because it "may interfere with tests for sugar in the urine." High-dose vitamin C may reduce blood sugar levels, which certainly does alter tests, but in a beneficial manner.
Continues to spread the false claim that "high doses of vitamin C may induce kidney stones." [J Am Society Nephrology 1999 Apr;10(4):840-5; Nutrition Reviews 1999 Mar;57(3):71-7; Clin Chem Laboratory Medicine. 1998 Mar;36(3):143-7; Annals Nutrition Metabolism. 1997;41(5):269-82]
Report entitled: Vitamin C Capable Of Damaging DNA
Misinformation:
Continues to post a Reuters Health report dated June 14, 2001, based upon a study published in Science magazine which erroneously claimed that high-dose vitamin C damages DNA and could cause cancer. [Science 2001;292:2083-2086] This report was rebutted in a later issue of Science magazine. [Science. 2001 Sep 14;293 (5537):1993-5]
Report entitled: Vitamin C Worsens Knee Osteoarthritis in Animal Study
Researchers Say Dietary Intake Should Not Be Above the RDA Recommendation
By Jennifer Warner, WebMD Medical News. Reviewed By Brunilda Nazario, MD
Misinformation
Published the results of an animal study conducted at Duke University which concluded that high-dose vitamin C may induce osteoarthritis, without checking the validity of this report or publishing any contrary opinions. [Arthritis Rheumatism June 2004] In fact, a large, long-term human study published 8 years earlier showed that humans who consume the most vitamin C are three times less likely to develop osteoarthritis. [Arthritis Rheumatism 1996 April;39:648-56] WebMD makes the false claim that "this new study shows prolonged use of vitamin C supplements may aggravate osteoarthritis," when in fact this has never been demonstrated in humans.
Report entitled: High Doses Of Vitamin C May Be Harmful
Misinformation:
Continues to post a bogus Reuters News story based on a report in Nature Magazine [Nature, April 9, 1998;392:559], based on a test-tube study, that high-dose vitamin C may damage DNA. At least five human studies disprove this idea. [Science. 2001 Sep 14;293(5537):1993-5]
Report entitled: Additional Information about Products Approved in Testing
Misinformation:
States: "Vitamin C supplementation should not exceed 2,000 mg/day. Dosages above this established upper limit may cause diarrhea and intestinal gas." There is no study which shows that oral vitamin C above 2,000 milligrams produces intestinal gas or diarrhea. It is known that there is a variable dosage at which diarrhea occurs, called the bowel tolerance point. Diarrhea may not occur in some people who take more than 20,000 milligrams of oral vitamin C.
Bill Sardi authored and Owen Fonorow contributed to this report.
| http://www.vitamincfoundation.org/topten/ |
Our goal today is to make a checkbox that has two states, checked and unchecked. Each state will be styled with CSS, we will choose when to apply one style or the other after we have checked with JQuery if the checkbox is checked or not.
The CSS I am going to use for this tutorial is very simple. The first thing I want to do is hide the default look of the checkboxes.
Next I want to have a class that I can apply to all the checkboxes regardless of their state. I want all the checkboxes to have white color font and corner radius and paddings of 10 pixels.
I want to keep the styling simple for both checked and unchecked boxes, but you should be able to take it as far as you want easily. The unchecked boxes will be black and the checked ones red.
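Something along these lines would do it (the class names styled, unchecked and checked are just placeholders, use whatever you like):

/* hide the native checkbox rendering; we style the wrapping span instead */
input[type="checkbox"] {
    display: none;
}

/* shared look for every styled checkbox, whatever its state */
.styled {
    color: #fff;          /* white font */
    border-radius: 10px;  /* rounded corners */
    padding: 10px;
    cursor: pointer;
}

/* state-specific colors: unchecked = black, checked = red */
.unchecked { background-color: #000; }
.checked   { background-color: #c00; }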
We can't style checkboxes directly, which is why we set their display attribute to none, but we can wrap them in other elements and add labels to them.
In order for this script to work we will wrap our input checkbox tags in spans, and we will add the styles we made above to the spans instead of adding them to the input tags.
Here is the code you will need for each checkbox you want to have in your form, you can have as many as you want.
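A minimal version of that markup, with placeholder class and name attributes, is a span wrapping each (hidden) input:

<span class="styled unchecked">
    Remember me
    <input type="checkbox" name="remember" value="1">
</span>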
Since we hid our checkboxes we can't click on them to check or uncheck them, BUT we can use JQuery to change their state when we click on their parent span tag.
This following line will return the state of the box, true if it is checked and false if not.
The "this" selector will refer to the span tag inside the click function, you will see how this works in a moment.
If the box is checked we will remove the class "checked" and add the class "unchecked", we will do the opposite for unchecked boxes.
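Putting it together, a sketch of the script (again using the placeholder class names from above, so the exact code may differ from the original post) looks like this:

$(document).ready(function () {
    // clicking the span toggles the hidden checkbox and swaps the classes
    $('span.styled').click(function () {
        var checkbox = $(this).find('input[type="checkbox"]');
        // this line returns the state of the box: true if checked, false if not
        var isChecked = checkbox.is(':checked');
        if (isChecked) {
            $(this).removeClass('checked').addClass('unchecked');
        } else {
            $(this).removeClass('unchecked').addClass('checked');
        }
        // keep the underlying input in sync so the form still submits correctly
        checkbox.prop('checked', !isChecked);
    });
});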
That’s it for this tutorial, enjoy! | http://webhole.net/2010/02/06/how-to-style-checkboxes/ |
The length of a rectangle of x units is increased by 10%, and its width of y units is increased by 15%. What is the ratio of the area of the old rectangle to the area of the new rectangle?
Correct answer:
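One way to compute it: the old area is x·y square units, and the new area is (1.10x)·(1.15y) = 1.265·x·y square units. So the ratio of the old area to the new area is xy : 1.265xy = 1 : 1.265 = 200 : 253 (about 0.79).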
Related math problems and questions:
- Percent change
If the length of a rectangle is increased by 25% and the width is decreased by 10%, the area of the rectangle is larger than the area of the original rectangle by what percent?
- Plot
The length of the rectangle is 8, smaller than three times the width. If we increase the width by 5% of the length and the length is reduced by 14% of the width, the circumference of the rectangle will be increased by 30 m. What are the dimensions of the
- Hop-garden
The length of the rectangular hop garden Mr. Smith increased by 25% and its width by 30%. What is the percentage change in area of hop garden?
- Property
The length of the rectangle-shaped property is 8 meters less than three times the width. If we increase the width by 5% of the length and reduce the length by 14% of the width, it will increase the property perimeter by 13 meters. How much will the property cost
- Parcel
Both dimensions of the rectangular parcel were increased by 31%. By how many % has increased its acreage?
- The room
The room has a cuboid shape with dimensions: length 50 m, width 60 dm and height 300 cm. Calculate how much it will cost to paint this room (the floor is not painted) if the window and door area is 15% of the total area and 1 m2 costs 15 euro.
- Square to rectangle
What is the ratio of the area of a square of side x to the area of a rectangle of width 2 x and length 3
- Three shapes
1/5 of a circle is shaded. The ratio of the area of the square to the sum of the area of the rectangle and that of the circle is 1:2. 60% of the square is shaded and 1/3 of the rectangle is shaded. What is the ratio of the area of the circle to that of the rectangle?
- Hall
A rectangular hall will be paved with square tiles with a side length of 15 cm. The hall has a length of 18 meters and a width of 3 m. How many tiles need to be bought if 2 percent of the amount is damaged during the work?
- The TV
The TV costs CZK 9,999. First, its price was reduced by 15%, then it was increased by 15%. How much does the television cost now?
- Rectangle
The width of the rectangle is 65% of its length. Perimeter of the rectangle is 132 cm. Determine the dimensions of the rectangle.
- The hall
The hall had a rectangular ground plan one dimension 20 m longer than the other. After rebuilding the length of the hall declined by 5 m and the width has increased by 10 m. Floor area increased by 300 m2. What were the original dimensions of the hall?
- Percentage and rectangle
By about what percentage do the perimeter and area of a rectangle increase if both sides, 12 cm and 10 cm long, are increased by 20%?
- Square - increased perimeter
How many times larger is the perimeter of a square whose sides are increased by 150%? If the perimeter of the square doubles, by what percent does the area of the square increase?
- Rectangle
Find the dimensions of the rectangle, whose perimeter is 108 cm and the length is 25% larger than the width.
- Energy saving
Three different, independent inventions were released, saving 20%, 24% and 15% of energy. Some considered that with the use of these inventions, the total savings would be 20% + 24% + 15% = 59%. Is this true? How much percent of energy will all thre
- 15% of
15% of the revenue from sales was CZK 24,000 and it had to be written off as sales tax. What was the net profit on sales? | https://www.hackmath.net/en/math-problem/5902 |
---
abstract: 'Cancer analysis and prediction is a research field of the utmost importance for the well-being of humankind. Cancer data are analyzed and predicted using machine learning algorithms. Most researchers claim that the accuracy of their predicted results is within 99%. However, we show that machine learning algorithms can easily reach an accuracy of 100% on the Wisconsin Diagnostic Breast Cancer dataset. We show that this method of gaining accuracy is an unethical approach and that we can easily mislead the algorithms. In this paper, we exploit the weakness of machine learning algorithms. We perform extensive experiments to verify the correctness of our results and to exploit the weakness of machine learning algorithms. The methods are rigorously evaluated to validate our claim. In addition, this paper focuses on the correctness of accuracy. This paper reports three key outcomes of the experiments, namely, the correctness of accuracies, the significance of minimum accuracy, and the correctness of machine learning algorithms.'
author:
- |
[**Ripon Patgiri, Sabuzima Nayak, Tanya Akutota, and Bishal Paul**]{}\
National Institute of Technology Silchar, Assam, India-788010\
bibliography:
- 'mybibfile.bib'
title: '**Machine Learning: A Dark Side of Cancer Computing**'
---
**Keywords:** [Machine Learning, Cancer, Breast Cancer, Prediction, Analysis]{}
Introduction
============
Cancer has taken many lives, and people are still losing their lives to it. The unpleasant truth is that there is no permanent cure for Cancer to date. However, scientists are still trying their best to save lives, and they have had successes. There are many controversies on whether Cancer is a disease or not. Many scientists describe Cancer as unwanted cell behavior due to some mutation. Scientists believe that the risk factors for becoming a Cancer victim may include a high body mass index, low fruit and vegetable intake, lack of physical activity, tobacco use, and alcohol use [@WHO]. The definite cause of Cancer is yet to be established. Many cancer victims could not survive; cancer mortality is presented in Figure \[w\]. However, modern technology, for instance machine learning, is helping to save human lives from Cancer. Machine learning algorithms play a vital role in Cancer computing: they are used to analyze the probable presence of Cancer.
Machine learning algorithms are modified to achieve better accuracy for many purposes, and researchers are developing modern techniques to analyze Cancer. Chen et al. [@Chen2014] reported an accuracy of 83.0% in lung cancer using an Artificial Neural Network (ANN) with 440 samples. Xu et al. [@xu] reported an accuracy of 97% in breast cancer using a Support Vector Machine (SVM) with a sample size of 295. Exarchos et al. [@Exa] reported 100% accuracy in Oral squamous cell carcinoma (OSCC) using their proposed method. Ahmad et al. [@Ahmad] compare three machine learning algorithms on breast cancer, namely, Decision Tree (DT), ANN, and SVM. DT, ANN, and SVM give accuracies of 93.6%, 94.7%, and 95.7% respectively, using 547 samples.
From the above research results, some research questions (RQ) arise which are given below-
RQ1:
: How can we achieve 100% accuracy, using machine learning algorithms in prediction of Cancer? Is it ethical?
RQ2:
: Can a machine learning algorithm be misled?
RQ3:
: Why does researcher emphasize on enhancing the maximum accuracy? Is it really necessary for Cancer prediction?
RQ4:
: When can we believe or deploy the proposed machine learning algorithm of a researcher based on their research result?
The research questions are really difficult to answer. However, we critically analyze the research results based on our research questions. RQ1 introduces another dimension in thinking about machine learning algorithms: it forces us to think about ethical and unethical ways of gaining accuracy. Similarly, RQ2 points to the possibility of misleading a machine learning algorithm. Most importantly, RQ3 raises controversial thoughts on maximum and minimum accuracy. Interestingly, RQ4 emphasizes the reliability of research results obtained with machine learning algorithms. Thus, these four RQs force us to rethink the use of machine learning algorithms for dangerous diseases like Cancer.
We neither propose a model nor have any intention to increase the accuracy of machine learning algorithms. On the contrary, we exploit the behavior of machine learning algorithms and its consequences. In this paper, we present the following key points-
- Experimentation results using WDBC dataset.
- Experimentation results using doubling the WDBC dataset.
- Behavior of machine learning algorithms.
- Significance of minimum accuracy in dangerous diseases.
- Discloses unethical way of misleading the algorithms.
The paper is organized as follows- Section \[bg\] discusses various machine learning approaches to predict Cancer. Section \[dm\] provides the data and methods used to perform the experiments. Section \[er\] discusses in depth the results of our experimentations. Section \[dis\] discusses various aspects of machine learning algorithms. And finally, Section \[con\] concludes the paper.
Background {#bg}
==========
With the large amounts of cancer data available to work with, machine learning methods have become a de-facto standard for predicting cancer. Machine learning algorithms uncover and identify patterns and extract relationships among the complex data. Prediction accuracy depends on different parameters like the patient's age, stage of cancer, medical history, lifestyle, food habits, gender, region based factors, diagnosis histopathology, etc. [@Jhajharia]. The accuracy of cancer prediction outcomes has significantly improved by 15%-20% in recent years with the application of ML techniques [@Kourou]. Kourou et al. [@Kourou] compare some of these techniques for breast cancer prediction, namely Neural Networks (NN), Bayesian Networks (BN), SVMs and DTs. Barracliffe et al. [@Luke] achieve an 83.6% accuracy on breast cancer using SVM. The ages of the female breast cancer victims range from 28 to 85 years. Surprisingly, Kesler et al. [@Kesler] achieve 100% accuracy using a Random Forest model in breast cancer. In addition, there is numerous research on various cancer types. Delen et al. [@Delen] compare the results of decision trees and neural networks applied to the SEER dataset, with the C5 decision tree having a 93.6% accuracy compared to Neural Networks with 91.2%. Hamsagayathri and Sampath [@Hamsagayathri] proposed a priority-based decision tree which achieves 98.5% accuracy. Nguyen et al. [@Nguyen] have presented the application of random forests combined with feature extraction applied to the diagnosis and prognosis of breast cancer. Their testing accuracy averaged as one of the highest, around 99.8%. The dataset is first N-fold cross validated, an estimation of Bayesian probability is done, followed by estimation of feature ranking and value.
Importance of accuracy
----------------------
Figures \[acc1\] and \[acc2\] show the accuracy statistics of a hundred rounds on the WDBC dataset. The statistics show that the accuracy never remains the same for the same input data without changing anything. Most articles report the maximum accuracy of their algorithm. The maximum accuracy changes from time to time, so concluding the maximum accuracy after only a few runs is incorrect, because the accuracy always changes with time. In addition, calculating the mean accuracy after 5 or 10 runs of a proposed model is unreliable. A series of experiments must be conducted to evaluate an algorithm's performance in terms of mean accuracy. Moreover, reporting the minimum accuracy after only two or three experiments is also incorrect, because sometimes the algorithms give poor performance in minimum accuracy.
### Significance of minimum accuracy
It is not necessary to report the maximum accuracy of Cancer prediction by a proposed system. The maximum accuracy is the best case scenario, and the algorithm cannot perform beyond that accuracy. Maximum accuracy is not what makes a machine learning algorithm successful for Cancer or other life threatening diseases. The mean accuracy is an important parameter to report for proposed machine learning algorithms. It is very useful for studying the cancer disease and for benchmarking against other existing algorithms. Now, the most important is the minimum accuracy. The minimum accuracy is the worst case scenario. The worst case scenario dictates the strength of the proposed machine learning algorithm. If one can show that the proposed system cannot fall below the minimum benchmark, then the proposed algorithm is reliable. We cannot rely on the reported maximum accuracy for a life threatening disease. Benchmarking using minimum accuracy is more meaningful than benchmarking using maximum accuracy. Let Method X and Method Y give results of $m\%$ and $n\%$ in maximum accuracy respectively, where $m>n$. Thus, Method X is better than Method Y. Let Method X and Method Y give results of $p\%$ and $q\%$ in minimum accuracy, where $p<q$. In this case, Method Y is better than Method X. In a life threatening disease, we cannot rely on Method X, since its minimum accuracy is lower than that of Method Y.
Data and Methods {#dm}
================
| **Name**           | **Description**             |
|--------------------|-----------------------------|
| ID                 | Identity of the patients    |
| Diagnosis          | M - Malignant, B - Benign   |
| Number of features | 32                          |
| Number of patients | 569                         |

  : Parameters of the Wisconsin Diagnostic Breast Cancer dataset
![Number patients with malignant and benign cancer.[]{data-label="MB"}](Figure_1.png){width="45.00000%"}
We have experimented on a well-known standard dataset, the Wisconsin Diagnostic Breast Cancer (WDBC) dataset. The dataset contains malignant and benign patients and consists of 569 patient reports. Figure \[MB\] depicts the numbers of malignant and benign patients. The dataset contains 212 malignant and 357 benign cancers.
The WDBC dataset is used to exploit the machine learning algorithms. We have conducted this rigorous experiment in two phases, which are listed as follows-
- **Phase I:** We input the original WDBC dataset to the machine learning algorithms for 100 times and results are plotted in the chart.
- **Phase II:** We double the WDBC dataset by duplicating the dataset and input to the machine learning algorithms. The outcome of the experiments is plotted in chart.
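A minimal Python sketch of this procedure is given below for concreteness. This is not the code used for the reported experiments; the scikit-learn copy of the WDBC data, the 70-30 split and the choice of classifier shown here are assumptions made only for illustration.

``` python
# Sketch of Phase I and Phase II (illustrative only, not the authors' code).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def run_rounds(X, y, rounds=100, test_size=0.3):
    """Train/test on `rounds` random splits; return (min, mean, max) accuracy."""
    scores = []
    for _ in range(rounds):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=test_size)
        model = RandomForestClassifier()
        model.fit(X_tr, y_tr)
        scores.append(accuracy_score(y_te, model.predict(X_te)))
    return min(scores), float(np.mean(scores)), max(scores)

X, y = load_breast_cancer(return_X_y=True)   # 569 samples, 30 features

# Phase I: the original WDBC dataset
print("Phase I  (min, mean, max):", run_rounds(X, y))

# Phase II: the dataset duplicated before splitting
X2, y2 = np.vstack([X, X]), np.hstack([y, y])
print("Phase II (min, mean, max):", run_rounds(X2, y2))
```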
Experimentations and Results {#er}
============================
Figure \[acc1\] depicts the 100-round prediction accuracy of Random Forest, SVM, k-Nearest Neighbor, and Neural Networks. Similarly, Figure \[acc2\] depicts the 100-round prediction accuracy of Naive Bayes, Logistic Regression, Decision Tree entropy, and Decision Tree regressor. Overall, the Random Forest model performs excellently in prediction and Naive Bayes performs very poorly during the 100 iterations. As per our experience, the Neural Network takes a huge amount of time in training and testing.
Figures \[acc3\], \[acc4\] and \[acc5\] depict the best, average and worst case accuracy in prediction of Random Forest, Support Vector Machine (SVM), k-Nearest Neighbor, Neural Networks, Naive Bayes, Decision Tree entropy, and Decision Tree regressor. The worst performer is the Naive Bayes algorithm on this dataset. The best, average and worst case prediction accuracies of Naive Bayes are $92.98245614$, $89.28654971$, and $84.21052632$ respectively. The highest 'Best Case' is achieved by SVM and Neural Networks, which is 99.41520468 for both. The Random Forest model consistently predicts with high accuracy, on average $95.5380117$. It outperforms SVM and all other learning models as shown in Figures \[acc3\], \[acc4\] and \[acc5\]. The Best Case of Random Forest is slightly lower than that of SVM and Neural Network. However, SVM is the best in the Best Case (99.41520468) and the Worst Case (92.39766082). The Neural Network shows poor performance in the worst case (87.13450292) and average case (93.32748538). The accuracy of the learning models fluctuates over time due to the random samples taken from the input. Therefore, we perform the experiment 100 times with the same dataset to extract the mean value.
The randomness in machine learning algorithms makes it difficult to decide the prediction accuracy. As per our experience, the prediction accuracy always varies. Figures \[acc1\] and \[acc2\] depict the randomness of the accuracy during 100 rounds of training and testing. We have observed that the results never remain the same as the previous results. Therefore, it is very easy to claim the maximum accuracy with the highest possible results, which cannot be validated or believed easily. Is it wise to believe others' research results? Or can a researcher manipulate their results? These questions arise from this randomness.
Accuracy measurement with various partition
-------------------------------------------
![Average accuracy calculation of some machine learning algorithms on UCI Breast Cancer dataset. Considering every possible split as 50-50, 60-40, 70-30, and 80-20.[]{data-label="acc3"}](avgm.png){width="45.00000%"}
Figure \[acc3\] shows the average accuracy of various machine learning algorithms. The dataset is split into 50-50, 60-40, 70-30, and 80-20. In this case, we have observed that the 80-20 accuracy is better than that of the other splits. Moreover, most articles report 70-30, and it is assumed to be standard practice. The SVM and Random Forest algorithms excel in all the splits. Logistic regression outperforms Nearest Neighbor and Neural Network. However, Logistic regression exhibits poor performance in the 60-40 split.
![Maximum accuracy calculation of some machine learning algorithms on UCI Breast Cancer dataset. Considering every possible split as 50-50, 60-40, 70-30, and 80-20.[]{data-label="acc4"}](maxm.png){width="45.00000%"}
Figure \[acc4\] shows the maximum accuracy in the 50-50, 60-40, 70-30, and 80-20 splits. The SVM achieves the highest accuracy in the 80-20 dataset split. The Random Forest performs better than all other algorithms in the 50-50, 60-40, and 70-30 dataset splits. However, the maximum accuracy does not count in life threatening diseases.
![Minimum accuracy calculation of some machine learning algorithms on UCI Breast Cancer dataset. Considering every possible split as 50-50, 60-40, 70-30, and 80-20.[]{data-label="acc5"}](minm.png){width="45.00000%"}
Figures \[acc3\], \[acc4\], and \[acc5\] show the mean accuracy, maximum accuracy, and minimum accuracy respectively. The figures show the accuracy of Naive Bayes, Nearest Neighbor, Neural Network, Decision Tree Entropy, Decision Tree Regressor, Logistic Regression, Random Forest and SVM. SVM performs the best in the 50-50 and 80-20 splits and Random Forest performs the best in the 60-40 and 70-30 splits in minimum accuracy. Nearest Neighbor and Logistic regression perform well in the 50-50 split in the case of minimum accuracy. Surprisingly, the Neural Network performs worst in minimum accuracy in all splits. Therefore, we cannot rely on the Neural Network algorithm, albeit the algorithm gives the best result in some cases. However, in this evaluation, the Random Forest outperforms all other algorithms.
Doubling the dataset
--------------------
![Average accuracy calculation of some machine learning algorithms on UCI Breast Cancer dataset by doubling the dataset. Considering every possible split as 50-50, 60-40, 70-30, and 80-20.[]{data-label="acc6"}](avgo.png){width="45.00000%"}
![Maximum accuracy calculation of some machine learning algorithms on UCI Breast Cancer dataset by doubling the dataset. Considering every possible split as 50-50, 60-40, 70-30, and 80-20.[]{data-label="acc7"}](maxo.png){width="45.00000%"}
A 100% accuracy is unbelievable! It's surprising! However, we have achieved it. The Random Forest shows a maximum accuracy of 100% on the Wisconsin Breast Cancer dataset when the input size is doubled. The accuracies of Random Forest, Neural Network, and Decision Tree reached 100% in the best case with the doubled input dataset. The Nearest Neighbor, Naive Bayes, and SVM could not reach one hundred percent; however, their accuracy also increased on this input. The total count of hundred percent accuracies is maximal in the 80-20 partition, and the Random Forest exhibits the highest count of 21 hundred-percent results in 100 runs. Moreover, the average accuracy of all algorithms has risen. Random Forest shows the best performance in the average case, and the decision tree also performs satisfactorily in the average case.
![Minimum accuracy calculation of some machine learning algorithms on UCI Breast Cancer dataset by doubling the dataset. Considering every possible split as 50-50, 60-40, 70-30, and 80-20.[]{data-label="acc8"}](mino.png){width="45.00000%"}
The Random Forest excels in prediction in minimum accuracy; however, all the other algorithms also perform well except the Neural Network. The result shows that the accuracy of a machine learning algorithm can easily be manipulated. The Random Forest model is more vulnerable to this kind of malicious result, whether intentional or unintentional. The SVM is less affected by doubling the size, but we also observed small rises in its accuracy.
Discussion {#dis}
==========
| **Input** | **Misclassification** | **Accuracy** |
|-----------|-----------------------|--------------|
| 2         | 1                     | $50\%$       |
| 5         | 1                     | $80\%$       |
| 10        | 1                     | $90\%$       |
| 20        | 1                     | $95\%$       |
| 30        | 1                     | $96\%$       |
| 40        | 1                     | $97.5\%$     |
| 50        | 1                     | $98\%$       |
| 60        | 1                     | $98.33\%$    |
| 70        | 1                     | $98.57\%$    |
| 80        | 1                     | $98.75\%$    |
| 90        | 1                     | $98.89\%$    |
| 100       | 1                     | $99\%$       |

  : Misclassification of one and its consequences

\[tab\]
Table \[tab\] exposes one of the reasons for the increasing accuracy. An increase in input size increases the accuracy; the machine learning algorithm depends on the input size. An input of 2 with one misclassification causes a 50% degradation of accuracy, while the same number of misclassifications in 100 inputs gives 99% accuracy. The accuracy also increased when doubling the data. Moreover, the machine learning algorithms select random samples from the input which are more accurate to predict, because the data samples are duplicated and a picked sample is matched with its duplicate. Thus, accuracy increases to 100%. However, this practice is unethical. On the contrary, a large amount of data can be generated using a genetic algorithm: a malignant and a benign dataset can be used to generate offspring randomly by the crossover method. This is not deployable in real life; however, we can experimentally evaluate the performance of the machine learning algorithms with such a large dataset.
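The mechanism can be illustrated with a short check (an illustration under the assumption of exact duplication, not part of the original experiments): after doubling, most test samples have an identical copy inside the training set, so the classifier is effectively asked to label points it has already seen.

``` python
# How doubling the dataset leaks test samples into the training set.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X2, y2 = np.vstack([X, X]), np.hstack([y, y])            # duplicated dataset
X_tr, X_te, _, _ = train_test_split(X2, y2, test_size=0.3, random_state=0)

train_rows = {row.tobytes() for row in X_tr}
leaked = sum(row.tobytes() in train_rows for row in X_te)
print(f"{leaked} of {len(X_te)} test samples also occur in the training set")
```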
Conclusion {#con}
==========
As we have shown, we achieved 100% accuracy at maximum. We illustrate that maximum accuracy is not a significant factor in life threatening diseases. The minimum accuracy plays the most important role in the benchmarking process and in real life scenarios in the case of life threatening diseases, for instance, Cancer. The paper also discusses how to achieve 100% accuracy using a machine learning algorithm. Also, we demonstrated the unethical way of reporting the accuracy of a machine learning algorithm, which can easily mislead the algorithm intentionally or unintentionally. Enhancing the maximum accuracy does not have an impact in Cancer computing. On the contrary, most researchers are interested in the enhancement of maximum accuracy, which does not serve the purpose of cancer computing.
| |
Q:
Matrix double modulo multiplication to get identity
I have to multiply two matrices A and B which can consist of numbers 0,2,3,4,5,6 to get an identity matrix, however multiplication happens with moduli after every step. e.g.:
[A1 A2 A3] and [B1 B2 B3]
[A4 A5 A6] [B4 B5 B6]
[A7 A8 A9] [B7 B8 B9]
((A1*B1)%7+(A2*B4)%7+(A3*B7)%7)%7 = 1
Which would be the element I_11
How can i find two matrices A and B?
A:
Code to get possible combinations of A and B that would satisfy the required condition -
nums = [0,2,3,4,5,6]
%// allcomb is a MATLAB File-exchange tool available at -
%// http://www.mathworks.in/matlabcentral/fileexchange/10064-allcomb
t1 = allcomb(nums,nums)
t2 = mod(prod(t1,2),7)==1
out_comb = t1(t2,:)
Output is -
out_comb =
2 4
3 5
4 2
5 3
6 6
This means that the possible combinations of A and B would be (assuming I means 3x3 sized identity matrix) -
A is 2I, B is 4I and A is 4I, B is 2I %%// 2I would be 2.*I and so on
A is 3I, B is 5I and A is 5I, B is 3I
A is 4I, B is 2I and A is 2I, B is 4I
A is 5I, B is 3I and A is 3I, B is 5I
A is 6I, B is 6I and A is 6I, B is 6I
Thanks to the pointer by @Luis, note that you can mix and match these numbers as follows to have more combinations of A and B to choose from -
A as diag([2 4 6]) and B as diag([4 2 6])
A as diag([5 3 2]) and B as diag([3 5 4])
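A quick sanity check for any of these choices (a one-liner added here, not part of the original answer) -
A = diag([2 4 6]);  B = diag([4 2 6]);
mod(A*B,7)   %// returns eye(3), i.e. the 3x3 identity matrix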
| |
Q:
Finding the closest point to a set of lines in 2D
I would need to write an algorithm to find the closest point to a set of lines. These lines are infinite and are not parallel between each other.
Closest point means that point where the sum of the squared distances between the point and each of these lines is minimum.
Let's suppose a simple 2D case, where I have a line passing through $a_0 (1, 3)$ and $b_0 (2, 2)$ and another one through $a_1 (1, 1)$ and $b_1 (2, 2)$
I tried a couple of algorithms reading some other questions like this one but I couldn't get a valid result from the general formula
$$0 = \sum_{i=0}^m\vec c - \vec a_i - \vec d_i \frac{(\vec c - \vec a_i)\cdot \vec d_i}{\|\vec d_i\|^2}$$
I tried to set a system made by
$$
\left\{
\begin{array}{l}
0 = c_x-a_{x0}-d_{x0}\dfrac{(c_x-a_{x0})\cdot d_{x0}}{\|d_{x0}\|^2} + c_x-a_{x1}-d_{x1}\dfrac{(c_x-a_{x1})\cdot d_{x1}}{\|d_{x1}\|^2} \\
0 = c_y-a_{y0}-d_{y0}\dfrac{(c_y-a_{y0})\cdot d_{y0}}{\|d_{y0}\|^2} + c_y-a_{y1}-d_{y1}\dfrac{(c_y-a_{y1})\cdot d_{y1}}{\|d_{y1}\|^2}
\end{array}
\right.
$$
but I end up with
$$
\require{cancel}
\left\{
\begin{array}{l}
0= \cancel{c_x} - 1 - \cancel{c_x} + 1\\
0=\cancel{2c_y} -4 -\cancel{c_y} +3 - \cancel{c_y} -1
\end{array}
\right.
$$
So I guess I didn't apply properly the equations..
Then I also followed this other answer
I took two points on the line $$a_0 (1, 3)$$ and $$b_0 (2, 2)$$ subtracted to get a vector $$\vec v = b_0 - a_0 = (1, -1)$$ rotated by 90° $$(x, y) \to (-y, x)$$ that is $$(1, -1) \to (1, 1)$$ and divided by its length $$\sqrt{1^2+1^2} = \sqrt2$$ that is $$\left(\frac {1}{\sqrt2}, \frac {1}{\sqrt2}\right)$$ but I don't know how to get the formula
$$a_ix + b_iy+c_i$$
And then I also found this one, but it sounds too complicated..
How can I solve?
A:
It took me a while but I realized how to do.
So, let's define the generic closest point as
$$p (x_0, y_0)$$
and then let's call $d_i$ the generic $i$-th distance from the line $L_i$ to $p$.
We have to find the point $p$ such that the sum of all the $n$ distances $d_i$ is as small as possible.
In order to do this, we need to sum all the squared distances
$$\sum_{i=0}^n d_i^2$$
In the previous example with two lines
$$ \begin{aligned} &L_0: \qquad x+y-4=0 \\
&L_1: \qquad x-y=0 \end{aligned}$$
We calculate the corresponding distances as following:
$$\begin{aligned}
d_0&=\dfrac{|x_0+y_0-4|}{\sqrt{1^2+1^2}}\\
d_1&=\dfrac{|x_0-y_0|}{\sqrt{1^2+(-1)^2}}
\end{aligned}$$
so the sum of squared distances is:
$$\sum_{i=0}^{1} d_i^2 =\left(\dfrac{|x_0+y_0-4|}{\sqrt{1^2+1^2}}\right)^2 + \left(\dfrac{|x_0-y_0|}{\sqrt{1^2+(-1)^2}}\right)^2=\\
=x_0^2+y_0^2-4x_0-4y_0+8$$
to get the minimum we differentiate this partially, first with respect to $x_0$ and then with respect to $y_0$:
$$\dfrac{\partial}{\partial x_0}\left(x_0^2+y_0^2-4x_0-4y_0+8\right) = 2x_0-4\\
\dfrac{\partial}{\partial y_0}\left(x_0^2+y_0^2-4x_0-4y_0+8\right) = 2y_0-4$$
then put them equal to 0 (since we look for the minimum) and put everything into a system of equations:
$$
\left\{
\begin{array}{l}
2x_0-4=0\\
2y_0-4=0
\end{array}
\right.
$$
from where we obtain $$p (2, 2)$$
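For the general case of $n$ lines written as $a_ix+b_iy+c_i=0$, the same recipe (sum the squared distances and set both partial derivatives to zero) reduces to a $2\times2$ linear system. A small NumPy sketch of that, written here only as an illustration (it assumes the lines are not all parallel, otherwise the system is singular):
import numpy as np

def closest_point(lines):
    # lines: iterable of (a, b, c) describing a*x + b*y + c = 0
    # minimizes sum_i (a_i*x + b_i*y + c_i)^2 / (a_i^2 + b_i^2)
    M = np.zeros((2, 2))
    rhs = np.zeros(2)
    for a, b, c in lines:
        norm = np.hypot(a, b)
        n = np.array([a, b]) / norm        # unit normal of the line
        M += np.outer(n, n)
        rhs -= (c / norm) * n
    return np.linalg.solve(M, rhs)

# the two lines of the example: x + y - 4 = 0 and x - y = 0
print(closest_point([(1, 1, -4), (1, -1, 0)]))   # -> [2. 2.]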
| |
Music in Film: Ideas on Analyzing Narratives and Creating Themes
Themes. They are what give a musical work its signature. They're what people hum when they exit the theatre, and they can help give your score structural unity. Whether abundant or few, simple or complex, themes can add a compelling layer of meaning to music for moving pictures.
The word “theme” can also refer to ideas, metaphors, and symbols used in narratives. The relationship between literary and musical themes is an important and fascinating intersection in film music, and something worth exploring in depth if you work in this field.
The Function of Themes
What’s This Thing and What Do I Do With It?
When looking at music's function in film, TV, or media, we are really talking about how it works in the context of telling a story. Themes can relate to, and make a musical statement about, a character, idea or concept, or other aspect of the film. A good theme will recall emotions and ideas, lending structure and a feeling of continuity while helping the story advance and leading the audience to come to a deeper understanding of the events and characters and how they are connected. But where does a theme come from, and what are some methods to generate themes that resonate effectively with the film?
Film as Literary Form
You Are Not What You Say You Are
As a film composer, you are not really a composer at all. You are a filmmaker who specializes in music. It’s important to keep this in mind as you are in fact taking your cues not from your imagination (freely expressing yourself as an artist), but rather responding to what is already there – the story as told by the film. Composer Leonard Roseman had the following to say of music in film:
If this is the case, our first job as a filmmaker is to get a firm and deep understanding of the story and all its elements.
Photo by umjanedoan – http://www.flickr.com/photos/umjanedoan/
Analyze This!
If you consider yourself a filmmaker who is an invested and collaborative contributor to the film, you’ll want to take in and analyse as much material as you can with the goal of finding your way in to the film’s heart – its hook. Before you write any music you need to absorb all you can. The script and the working cuts of the film are your primary sources, but even costume and set design can help in deepening your understanding of the story. There may be things which will trigger you to do your own research during which you might uncover an aspect of the production that you can use to draw from, musically. Ultimately, you’ll want to come to the spotting session with a deep understanding of the story: its themes, symbols/metaphors, arcs, the characters and their relationships, and be ready to bring a new layer to the whole. In order to do this and to communicate your ideas effectively, you’ll need to speak the language of the filmmakers – not the language of music, but the language of emotion and storytelling.
You should have a solid grasp of screenwriting theory, the 3 act structure that most films adhere to, and the narrative pattern referred to as the Hero's Journey or Monomyth. Not only will knowing these help you understand the story and communicate with the director, they will help you find your voice in the film and help guide the writing process, ultimately making it more fun and effective. As with composition and orchestration, becoming versed in analysis is a matter of practice. Once you have a handle on the concepts listed above, try applying them when you watch films and read fiction. You'll be amazed at how much more quickly you'll grasp the structure of a story simply by applying them as a template – maybe even shocked at how formulaic many stories are!
What now, Mr. Freud?
Considering the Aspects of Story and Applying Them to Themes
Once you have thoroughly analyzed the film and done any additional research, you’ll probably be brimming with ideas which you can use to start formulating initial thematic material. It’s usually a good idea to start big and then work your way down to smaller chunks and sub-themes.
Is there a central, over-arching motif that unifies the film?
Will it be conceptual, personal/emotional, or a combination?
Does it relate to a theme, a concept, a character, or a combination of these?
The first main overarching theme might be related to the main theme/metaphor/symbol of the film, which may also relate directly with the main character and his/her central conflict – the driving motivation. Really getting this theme to work is important because it will be your bread and butter – if it’s strong and flexible enough you can write an almost endless number of variations, casting the idea in different light each time. From here, you can “zoom in” and consider other parts of the story from which themes might be derived, including plot, character, setting, and style.
The When and Where
Time and Place
Photo by ToniVC – http://www.flickr.com/photos/tonivc/
A mode, scale, or style of composition/instrumentation might be appropriate to lend a sense of time and place to the theme, but not necessarily. Be sure that whatever decision you make is appropriate and in line with the story. Just because your story is set in China does not automatically mean you should write your theme in the style of traditional Chinese music, especially if the story is about an expat who longs to return to his native England. The emotional core of the story would lie with him in this case and his theme should speak to that. It may also be that while a story is set in medieval times, you wish to convey that the ideas presented by the film are modern or timeless and therefore use a contemporary scoring style.
The Who
Characterization and POV
The idea of motifs for characters is often referred to as Leitmotif; a term made famous by the operatic works of Wagner. Will the characters have themes, perhaps more than one? Is the musical perspective omniscient, all-knowing, or from the POV (point of view) of the audience – naive, perceiving everything for the first time? Or is the music from the character’s point of view? This is an interesting point – consider the role of music in the following scenarios:
a character knows something the audience does not
the audience knows something the character does not
the character believes something untrue, but the audience must initially believe this to be true as well.
In each case music’s role may be either to slowly clue the audience in, or to purposely keep them in the dark or even trick them into believing something which is later revealed to be false – for instance, we may believe a character to be a villain until it is revealed that they are in fact a heroic double agent. The music might help this untruth by playing it from the protagonist’s point of view, supporting the initial belief that this person is evil.
A Note regarding Complexity and Number of Themes
Whether to court or avoid complexity is an important consideration for a number of reasons, but it really boils down to what is right for the film in terms of style, genre, aesthetic, and the taste of the filmmaker. There is always a balance to be struck between interest and change versus familiarity and repetition. If you restrict yourself to one over-arching theme and repeat it verbatim ad nauseam, you will run the risk of boring your audience (to avoid this use variation, which we'll look at later). On the other hand, if you have a giant number of themes in your work it can sometimes lose a feeling of unity. Even though Howard Shore wrote a staggering 80 themes for his The Lord of the Rings scores, he made sure they were thematically unified:
“Shore uses, for instance, a rising three-note phrase to connect three of the most influential themes in The Fellowship of the Ring, subtly reminding audiences that there are connections at every level between the hobbits, the world of men, and the evil ring, among others.”
Photo by micheal.heiss – http://www.flickr.com/photos/michaelheiss/
Additionally, it's easy to go overboard with leitmotifs, if you choose to use them. Simply quoting them whenever the character appears on screen will quickly make the film seem a farce, so you want to be sure you have an emotional justification for quoting a theme and are making use of variation:
“Beyond simply the structural considerations for each theme, Shore also changes the personality of each idea masterfully, depending on the guise needed for a particular scene in the film. Tempo alterations and the swapping or addition of notes to denote times of play or lament consistently keep each theme fresh to the ears.”
Further Reading
A wonderful resource is available for free download online, and for further reading I recommend it. These are annotated scores which detail the various cues and use of themes in Howard Shore’s music for The Lord of the Rings Trilogy (in PDF format):
| |
KINGSTON >> McKenzie Fisher hit for the cycle and drove in eight runs, leading Dimples in a 32-1 five-inning rout of Melvin T. Higgins in Kingston Recreation women’s Lower B softball action.
Fisher’s performance highlighted an assault by Dimples, which scored 11 runs in both the third and fifth innings.
In another Lower B game, Young Lions Daycare rallied for a run in the seventh to force extra innings, then needed two in the bottom of the eighth to defeat Hickory BBQ 15-14.
In an A Division game, Nancy Holbrook had a double, homer and seven RBI in Coba’s 28-13 win over Perfezione Painting.
A DIVISION
Young Lions Daycare 18,
Tony’s Pizzeria 6
WP-Samantha Robbins. LP-Andrea Clausi.
YLD-Samantha Robbins 4 RBI; Nicole Glass HR, 3 RBI; Brooke Frey 2B, 3B, 3 RBI; Genia Pierre Louis 2-2B, 3 RBI; Jen Blackman 2B; Erin Doyle 2B.
TP-Jen Varner Reckins 2 RBI; Marissa Interrante 2B; Traci Young 4-1B; Alison Tobar 3-1B.
—
Fromson Injury Law 13,
Matt Hall CPA 7
WP-Jean Chilcott. LP-Michelle Williams.
FIL-Melissa DelGuidice 3-2B, 4 RBI; Gen Santoro 2B, 3B, 2 RBI; Jamie DeCicco 3B; Heather Young HR; Jean Chilcott 3-1B.
MH-Kirstie Cafaldo 3-1B, 2 RBI: Cassie Artist 2 RBI; Jen Barnett 2B; Hather Frantz 2B.
—
Coba 28, Perfezione Painting 13
WP-Genivieve Kroner. LP-Terry Carney.
C-Diana Decker 4-1B, 2B, 2 RBI; Nancy Holbrook 2B, HR, 7 RBI; Kathy Klosterman 3-1B, 4 RBI; Genivieve Kroner HR, 4 RBI; Cathy Schetter 2-2B, 4 RBI; Kristina Davis 5-1B. 2B, 2 RBI; Carolyn Mayer 4-1B, 2 RBI: Kate Monahan 4-1B; Katie Miller 3-1B.
PP-Jane Reuss 2 RBI; Marie Welch 2B, 2 RBI; Peg Graham 2B, 3 RBI; Michelle Gallo 3-1B; Melinda Dukat 3-1B.
—
UPPER B DIVISION
Savona’s Plaza Pizza 23,
Spinneweber PVC 8
WP-Virginia Knauer.
SPP-Michelle Bruck 2-1B, 3B, HR, 5 RBI; Danielle Pillsworth 4-1B, 3 RBI; Krystal Keizer 5-1B, 2 RBI; Alisha Beniot 3-1B, 2 RBI; Elena DeCicco 2 RBI; Virginia Knauer 3B: Sue Schrader 3-1B.
S-Tara Brennan 3B, 3 RBI; Marie Sickler 2-1B, 2B.
—
Kingston Alteration 15,
Wine Hutch 11
WP-Stacey Franklin. LP-Molly Carroll.
KA-Cynthia Ruiz 2 RBI; Lynn Battista 3-1B, 3B; Chelsey Cara 3-1B.
WH-Sarah Perry HR, 2 RBI; Kelly Cherry 3-1B, 4 RBI; Katie Banks 3-1B, 2 RBI; Lyneah Cole 3-1B, 3B, 2 RBI.
—
A.F.C.O. 15, Wapner Law 3
AFCO-Carly Stefanowicz 3B, 2 RBI; Kim Kavanaugh 1B, 3B, HR, 4 RBI; Jen Campbell 2B; Kristin Stefanowicz 2B; Lacey Spadafora HR.
—
LOWER B DIVISION
Global Palate 8,
Turf Pro Landscaping 7
WP-Lisa Stiefel. LP-Michele Jerkowski.
GP-Maria Demelis 2B, 2 RBI; Lisa VanHouta 3B, 3 RBI; Kaylee Plain 2B, 3B; Lindsay Roman 2B.
TPL-Anna Saccoman 2 RBI; Alexis English 2B; Kim Boice 2B.
—
Young Lions Daycare 15,
Hickey BBQ 14
WP-Stacey Jacobs. LP-Kreis Fowler.
YLD-Kris Wachtel 2B, 2 RBI; Kristina Fowler 2B.
H-Cynthia Drake 2 RBI; Tina Burris 3-1B, 2 RBI; Beth Gougoutris HR, 2 RBI: Addy Jones 2 RBI; Letus 2B; Shonnie Drake 3B; Burns 3-1B.
—
Dimples 32, Melvin T. Higgins 1
WP-Rebecca Brocco. LP-Jordan Schoonmaker.
D-Shanna Drake 2 RBI; Rebecca Brocco 4-1B, 4 RBI; McKenzie Fisher 1B, 2B, 3B, HR, 8 RBI; Zhannelle Ross 2B, 2 RBI; Liz Baker 2 RBI; Gabby Becker 4-1B; Gross 4-1B; Lisa Spencer 3-1B.
| https://www.dailyfreeman.com/2015/05/22/kingston-softball-mckenzie-fishers-8-rbi-lead-dimples/ |
Teenage skateboarder sought after crash injures 70-year-old man in North Vancouver
A 70-year-old North Vancouver pedestrian was struck down by an out of control skateboarder, resulting in the victim’s wrist/forearm being broken in four places.
The incident occurred at approximately 9:00 p.m. on Monday in the 100 block of 15th Street West, North Vancouver.
Police attended and spoke to witnesses who confirmed that the elderly male had just safely crossed the alleyway when a skateboarder, coming out of the alley, lost control on the speed bumps. The boarder collided with the man which caused the injury.
The young person fled with his skateboard after a trite apology. The police would like to speak to the skateboarder who is described as a 16-year-old Caucasian, 5’ 8 tall, wearing a red shirt, white shorts and a red helmet.
Anyone who may have witnessed the incident is asked to call police at 604-985-1311, quoting file number 2014-18328. | https://dailyhive.com/vancouver/teenage-skateboarder-sought-crash-injures-70-year-old-man-north-vancouver |
---
abstract: 'Consider a finite collection $\{T_1, \ldots, T_J\}$ of differential operators with constant coefficients on $\mathbb{T}^2$ and the space of smooth functions generated by this collection, namely, the space of functions $f$ such that $T_j f \in C(\mathbb{T}^2)$. We prove that under a certain natural condition this space is not isomorphic to a quotient of a $C(S)$-space and does not have a local unconditional structure. This fact generalizes the previously known result that such spaces are not isomorphic to a complemented subspace of $C(S)$.'
author:
- Anton Tselishchev
title: 'Absence of local unconditional structure in spaces of smooth functions on two-dimensional torus'
---
Introduction
============
It is well known and easy to see that the space $C^k({\mathbb{T}})$ of $k$ times continuously differentiable functions on the unit circle is isomorphic to $C({\mathbb{T}})$. Also, it has long been known that in higher dimensions the situation is different — already for two dimensions the space $C^k({\mathbb{T}}^2)$ is not isomorphic to $C({\mathbb{T}}^2)$.
This fact was first announced in [@Grot] and later generalized in many directions (see [@Henk; @KislFactor; @Kisl0; @KwaPel; @Sid; @PelSen; @KisSid; @KisMaks; @KisMaks2; @Maks]). However, the most general and natural framework was introduced only in the quite recent paper [@KMSpap] (see also the preprint [@KMSprepr] for the two-dimensional case).
More specifically, suppose we have a collection ${\mathcal{T}}=\{T_1, T_2, \ldots, T_J\}$ of differential operators with constant coefficients on the torus ${\mathbb{T}}^2$. So, each $T_j$ is a linear combination of operators $\partial_1^{\alpha}\partial_2^{\beta}$. We call the number $\alpha+\beta$ the order of such a differential monomial and the order of $T_j$ is the maximal order among all monomials involved in it. We consider the following seminorm on trigonometric polynomials $f$: $$\|f\|_{{\mathcal{T}}} =\max_{1\leq j\leq J} \|T_j f\|_{C({\mathbb{T}}^2)}.$$
Now define the Banach space $C^{{\mathcal{T}}}({\mathbb{T}}^2)$ by this seminorm (that is, factorize over the null space and consider the completion). For example, when ${\mathcal{T}}$ consists of all differential monomials of order at most $k$, we get the space $C^k({\mathbb{T}}^2)$.
In the papers [@KisMaks; @KisMaks2] the following statement was proved. Suppose that all differential monomials involved in any of $T_j$ are of order not exceeding $k$. Let us drop the junior part of each $T_j$ (this means that we drop all monomials whose order is strictly smaller than $k$). If among the remaining senior parts there are at least two linearly independent, then $C^{\mathcal{T}}({\mathbb{T}}^2)$ is not isomorphic to a complemented subspace of $C(S)$. (We denote by $S$ an arbitrary uncountable compact metric space. According to Milutin theorem, all the resulting $C(S)$ spaces are isomorphic.) However, if all senior parts are multiples of one of them, the situation was unclear.
So, in the preprint [@KMSprepr] (and in the paper [@KMSpap] for arbitrary dimensions) a refinement of this statement was proved. In order to state it, we need the concept of mixed homogeneity.
Fix some *mixed homogeneity pattern*, that is, a line $\Lambda$ that intersects the positive semiaxes. The equation of such a line is $\frac{x}{a}+\frac{y}{b}=1$ where $a$ and $b$ are positive numbers. We call such a line *admissible* if all multiindices $(\alpha, \beta)$ such that $\partial_1^\alpha \partial_2^\beta$ is involved in one of $T_j$ lie below $\Lambda$ or on it. This means that all such multiindices must satisfy the following inequality: $$\frac{\alpha}{a}+\frac{\beta}{b}\leq 1.$$
Now we define the senior part of $T_j$ as the sum of all differential monomials involved in $T_j$ whose multiindices lie on the line $\Lambda$ and the junior part as the sum of all other monomials of $T_j$. The senior part is denoted by $\sigma_j$ and the junior by $\tau_j$.
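To fix the ideas, here is a small illustrative example (not taken from the cited papers): let ${\mathcal{T}}=\{\partial_1^3+\partial_1\partial_2,\ \partial_2^2\}$. The line $\Lambda$ given by $$\frac{x}{3}+\frac{y}{2}=1$$ is admissible, since the multiindices involved are $(3,0)$, $(1,1)$ and $(0,2)$, and $\frac{3}{3}+\frac{0}{2}=1$, $\frac{1}{3}+\frac{1}{2}=\frac{5}{6}\leq 1$, $\frac{0}{3}+\frac{2}{2}=1$. With respect to this pattern, $\sigma_1=\partial_1^3$, $\tau_1=\partial_1\partial_2$, $\sigma_2=\partial_2^2$, $\tau_2=0$, and the senior parts $\sigma_1$, $\sigma_2$ are linearly independent.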
Suppose that for some choice of $\Lambda$ there are at least two linearly independent among all senior parts $\sigma_j$. Then it was proved in [@KMSpap; @KMSprepr] that $C^{{\mathcal{T}}}({\mathbb{T}}^2)$ is not isomorphic to a complemented subspace of a $C(S)$ space.
However, in a less general setting, this is not the best known statement. For example, in [@KislFactor] it was proved that $C^{k}({\mathbb{T}}^2)$ is not isomorphic to any quotient space of $C(S)$. The following theorem generalizes this statement in the described setting.
If for the collection ${\mathcal{T}}$ there are at least two linearly independent operators among $\sigma_j$ *(*for some choice of an admissible line $\Lambda$*)*, then $C^{{\mathcal{T}}}({\mathbb{T}}^2)$ is not isomorphic to any quotient space of $C(S)$.
This is the first result of this paper.
Also, we note that there is another strengthening of the theorem from [@KMSpap; @KMSprepr] (again, obtained in a less general setting). In [@KisSid] it was proved that if all operators in the collection ${\mathcal{T}}$ are differential monomials and at least two senior monomials (with respect to some pattern) are linearly independent, then the space $C^{{\mathcal{T}}}({\mathbb{T}}^2)$ does not have local unconditional structure.
Following [@GorLew] we give the definition. A Banach space $X$ is said to have local unconditional structure if there exists a constant $C>0$ such that for any finite-dimensional subspace $F\subset X$ there exists a Banach space $E$ with $1$-unconditional basis and two linear operators $R: F \rightarrow E$ and $S: E\rightarrow X$ such that $SRx = x$ for all $x\in F$ and $\|S\|\cdot \|R\| \leq C$. A basis $\{e_n\}$ is $1$-unconditional if for any numbers $\varepsilon_n$ with $|\varepsilon_n| \leq 1$ and any finitely supported sequence $(\alpha_n)$ the following inequality holds: $\|\sum \varepsilon_n \alpha_n e_n\| \leq \|\sum \alpha_n e_n\|$.
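For orientation, we record a standard example (not needed in the sequel): if $X$ itself has a $1$-unconditional basis, then $X$ has local unconditional structure with constant $C=1$. Indeed, for a finite-dimensional subspace $F\subset X$ one may take $E=X$, $R\colon F\to E$ the inclusion map and $S=\mathrm{id}_X$, so that $$SRx=x \quad (x\in F), \qquad \|S\|\cdot\|R\|=1.$$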
It is worth noting that $X$ has local unconditional structure if and only if its conjugate $X^*$ is a direct factor of a Banach lattice (see [@Pietsch]). Since the space $C(S)$ does have local unconditional structure, the non-isomorphism of $C^{{\mathcal{T}}}({\mathbb{T}}^2)$ to a complemented subspace of $C(S)$ would also follow once it is proved that $C^{{\mathcal{T}}}({\mathbb{T}}^2)$ does not have local unconditional structure. This is exactly the statement of the next theorem.
If for a collection ${\mathcal{T}}$ there are at least two linearly independent operators among $\sigma_j$ *(*for some choice of an admissible line $\Lambda$*)*, then $C^{{\mathcal{T}}}({\mathbb{T}}^2)$ does not have local unconditional structure.
The main ingredients of our proofs are the same as in [@KMSpap; @KMSprepr]. We use the new embedding theorem established there together with some facts about $p$-summing operators.
First, we introduce some definitions. A distribution $f$ on the torus ${\mathbb{T}}^2$ is called proper if $\hat{f}(s,t)=0$ whenever $s=0$ or $t=0$. Next, we need a notion of Sobolev spaces with nonintegral smoothness: $$W_2^{\alpha, \beta} ({\mathbb{T}}^2) = \{f\in C^\infty({\mathbb{T}}^2)': \{(1+m^2)^{\alpha/2} (1+n^2)^{\beta/2}\hat{f}(m,n)\} \in \ell^2 (\mathbb{Z}^2) \}.$$ Of course, the norm of $f$ in $W_2^{\alpha, \beta}({\mathbb{T}}^2)$ is defined as $\|\{(1+m^2)^{\alpha/2} (1+n^2)^{\beta/2}\hat{f}(m,n)\}\|_{\ell^2}$.
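In particular, for a single harmonic the norm is computed immediately from the definition; this elementary observation is used repeatedly below: $$\|z_1^p z_2^q\|_{W_2^{\alpha,\beta}({\mathbb{T}}^2)}=(1+p^2)^{\alpha/2}(1+q^2)^{\beta/2}\asymp |p|^{\alpha}|q|^{\beta} \qquad (p,q\neq 0).$$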
Now we state the embedding theorem (see Theorem 0.2 and Remark 1.6 in [@KMSpap]) which we are going to use.
Suppose that proper distributions ${\varphi}_1, \ldots, {\varphi}_N$ satisfy the following system of equations*:* $$\begin{aligned}
\label{ET}
-\partial_1^k{\varphi}_1 = \mu_0; \qquad \partial_2^l {\varphi}_j - \partial_1^k {\varphi}_{j+1}=\mu_j, \quad j=1,\ldots, N-1; \qquad \partial_2^l {\varphi}_N=\mu_N,
\end{aligned}$$ where $\mu_0, \ldots, \mu_N$ are functions in $L^1({\mathbb{T}}^2)$ *(*or measures*)*. Then $$\sum_{j=1}^N \|{\varphi}_j\|_{W_2^{\frac{k-1}{2}, \frac{l-1}{2}} ({\mathbb{T}}^2)} \lesssim \sum_{j=0}^N \|\mu_j\|.$$
Here (and everywhere in this paper) the symbol $A\lesssim B$ means that there exists some constant $C>0$ such that $A\leq CB$.
Several remarks are in order. First, Theorems 1 and 2 hold also for the torus of arbitrary dimension, ${\mathbb{T}}^n$. But this fact cannot be derived from $2$-dimensional statements (or at least it is unclear how to do this, see [@KMSpap] for some explanations). The proofs in higher dimensions are somewhat similar, however, they are much more technically sophisticated (and even require a different embedding theorem, again, see [@KMSpap] and Theorem 1.1 there). So, in this article we restrict ourselves to the two-dimensional case.
In this paper we present the proofs of Theorem 1 and Theorem 2. We start with the first theorem because its proof is easier and contains fewer technical details (however, the reader will see that the proofs of both theorems are quite similar to each other and to the proof from the preprint [@KMSprepr]).
We note that in the paper [@KisSid] it was also proved (again, in the case when all operators in ${\mathcal{T}}$ are differential monomials and there are at least two linearly independent operators among their senior parts) that if $C^{\mathcal{T}}({\mathbb{T}}^2)^*$ is isomorphic to a subspace of a space $Y$ with local unconditional structure, then $Y$ contains the spaces $\ell_\infty^k$ uniformly (again, for the definition see [@KisSid]). The same statement can also be proved in our situation, but we do not present the details here, because our main goal is to show that, using the embedding theorem from [@KMSprepr], we can adapt various techniques to a more general context. And although this statement implies Theorem 1, we choose to sacrifice generality for the sake of simplicity and transparency of presentation.
Also, a few words should be said about the notation. As it has already been mentioned, we write $A \lesssim B$ if $A\leq CB$ for some constant $C>0$. It will always be clear from the context from which parameters $C$ can depend and from which it cannot. Besides that, the notation $A\asymp B$ means that $A\lesssim B$ and $B \lesssim A$.
The author is deeply grateful to his scientific advisor, S. V. Kislyakov, for posing these problems, for very helpful discussions during the process of their solution and for great help with editing this text.
Nonisomorphism to a quotient of a $C(S)$-space
==============================================
As was done in [@KMSprepr], we start our proof of Theorem 1 with some simple but helpful observations.
Several reductions
------------------
We denote the space of proper functions in $C^{{\mathcal{T}}}({\mathbb{T}}^2)$ by $C_0^{{\mathcal{T}}}({\mathbb{T}}^2)$. It is clear that this space is complemented in $C^{{\mathcal{T}}}({\mathbb{T}}^2)$ (a projection is given by convolution with some measure), so we can prove Theorem 1 for $C_0^{{\mathcal{T}}}({\mathbb{T}}^2)$ instead of $C^{{\mathcal{T}}}({\mathbb{T}}^2)$.
Next, suppose that the admissible line $\Lambda$ is given by the equation $x/a+y/b=1$. Let us show that without loss of generality we may assume that $a$ and $b$ are positive integers. Indeed, according to the conditions of Theorem 1, there are at least two points $(r_1, r_2)$ and $(\rho_1, \rho_2)$ with nonnegative integral coordinates on $\Lambda$. We may assume that $r_1>\rho_1$ and $r_2<\rho_2$. Then the equation of $\Lambda$ can be written in the following form: $$\frac{x}{r_1-\rho_1}+\frac{y}{\rho_2-r_2}=\frac{\rho_1}{r_1-\rho_1}+\frac{\rho_2}{\rho_2-r_2}.$$ Now note that we can shift the line $\Lambda$ (and the whole construction) by a vector with integral coordinates. This means that we can replace the collection ${\mathcal{T}}$ with the collection $\{T_1\partial_1^u\partial_2^v,\ldots T_J\partial_1^u\partial_2^v\}$. The corresponding spaces $$C_0^{\{T_1, \ldots, T_J\}}({\mathbb{T}}^2) \quad \hbox{and} \quad C_0^{\{T_1\partial_1^u\partial_2^v,\ldots T_J\partial_1^u\partial_2^v\}}({\mathbb{T}}^2)$$ are isomorphic; the isomorphism is given by the map $f\mapsto \partial_1^u\partial_2^v f$. So, by doing this shift we may assume that the equation of $\Lambda$ is the following: $$\frac{x}{r_1-\rho_1}+\frac{y}{\rho_2-r_2}=\frac{\rho_1+u}{r_1-\rho_1}+\frac{\rho_2+v}{\rho_2-r_2}.$$ If we write this equation in the form $x/a_1+y/b_1=1$, then $a_1$ and $b_1$ are the following: $$\rho_1+u+ (\rho_2+v)\frac{r_1-\rho_1}{\rho_2-r_2}\quad \hbox{and} \quad \rho_2+v+ (\rho_1+u)\frac{\rho_2-r_2}{r_1-\rho_1}.$$ Clearly, we can find positive integers $u$ and $v$ so that these two expressions become integers.
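To illustrate this reduction with purely hypothetical numbers (not taken from the cited papers): suppose $(r_1,r_2)=(3,0)$ and $(\rho_1,\rho_2)=(0,2)$, so that $r_1-\rho_1=3$ and $\rho_2-r_2=2$. Then the two expressions above become $$u+(2+v)\cdot\frac{3}{2} \quad \hbox{and} \quad (2+v)+u\cdot\frac{2}{3},$$ and the choice $u=3$, $v=2$ makes both of them integers: $a_1=3+4\cdot\tfrac32=9$ and $b_1=4+3\cdot\tfrac23=6$. Indeed, the shifted multiindices $(6,2)$ and $(3,4)$ both lie on the line $x/9+y/6=1$.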
So, we assume that the equation of $\Lambda$ is $x/a+y/b=1$ where $a$ and $b$ are positive integers. We denote their greatest common divisor by $N$; then all points on $\Lambda$ with nonnegative integer coordinates are of the form $(jm, (N-j)n)$ with $0\leq j\leq N$ (here $m=a/N$ and $n=b/N$, so $m$ and $n$ are coprime).
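Continuing the hypothetical numbers above: for $a=9$ and $b=6$ we get $N=\gcd(9,6)=3$, $m=3$, $n=2$, and the integer points of $\Lambda$ are $$(jm,(N-j)n),\quad j=0,\ldots,3, \qquad \hbox{that is,}\quad (0,6),\ (3,4),\ (6,2),\ (9,0).$$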
Main construction
-----------------
Suppose that $C_0^{{\mathcal{T}}}({\mathbb{T}}^2)$ is isomorphic to a quotient space of $C(S)$. Denote by $P$ the quotient map, $P:C(S)\rightarrow C_0^{{\mathcal{T}}}({\mathbb{T}}^2)$.
Due to the reductions we have done, the senior part of every operator from ${\mathcal{T}}$ has the following form: $$\sigma_s=\sum_{j=0}^N a_{sj}\partial_1^{jm}\partial_2^{(N-j)n}.$$ We note that the space $C^{{\mathcal{T}}}({\mathbb{T}}^2)$ depends only on the linear span of operators in ${\mathcal{T}}$ so we can change our collection if these changes do not affect its linear span.
Now we consider the matrix $(a_{sj})$. Suppose $j_0$ is the smallest index such that $a_{sj_0}\neq 0$ for at least one $s$. Without loss of generality we may assume that $a_{1j_0}\neq 0$. Then, multiplying $T_1$ by a constant and subtracting a multiple of $T_1$ from other operators, we can ensure that $a_{1j_0}=-1$ and $a_{sj_0}=0$ for every $s>1$. By the assumption of the theorem, there exists $j_1$ such that $a_{sj_1}\neq 0$ for some $s>1$. Again, without loss of generality we assume that $a_{2j_1}=1$ and $a_{sj_1}=0$ for all $s>2$.
Therefore, we have two operators, $T_1$ and $T_2$, whose senior parts are linearly independent. Then for simplicity we denote the coefficients of their senior parts by $a_j$ and $b_j$ respectively, that is $$\begin{aligned}
\sigma_1=\sum_{j=0}^N a_j\partial_1^{jm}\partial_2^{(N-j)n};\\
\sigma_2=\sum_{j=0}^N b_j\partial_1^{jm}\partial_2^{(N-j)n}.\end{aligned}$$ Moreover, $T_1$ is the only operator in ${\mathcal{T}}$ that involves the differential monomial $\partial_1^{j_0m}\partial_2^{(N-j_0)n}$ and $T_2$ is the only operator in ${\mathcal{T}}$ besides maybe $T_1$ that includes the monomial $\partial_1^{j_1m}\partial_2^{(N-j_1)n}$.
Consider the embedding of the space $C_0^{{\mathcal{T}}}({\mathbb{T}}^2)$ in $C_0^{T_1, T_2}({\mathbb{T}}^2)$ (denote it by $i$). Next, we can embed this space into $W_1^{T_1, T_2}({\mathbb{T}}^2)$. We denote this embedding by $g$. Here, clearly, the spaces $C_0^{T_1, T_2}({\mathbb{T}}^2)$ and $W_1^{T_1, T_2}({\mathbb{T}}^2)$ are defined by the seminorms $\max\{\|T_1f\|_{C({\mathbb{T}}^2)}, \|T_2 f\|_{C({\mathbb{T}}^2)}\}$ and $\max\{\|T_1f\|_{L^1({\mathbb{T}}^2)}, \|T_2 f\|_{L^1({\mathbb{T}}^2)}\}$ respectively and consist only of proper functions. We note that the operator $g$ is $1$-summing; this follows easily from the Pietsch factorization theorem. A good reference on the theory of $p$-summing operators is the book [@Wojt] (see Chapter III.F there).
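For the reader's convenience, here is a sketch of a direct argument for the $1$-summability of $g$ (a standard computation, recorded here only as an illustration; the Haar measure of ${\mathbb{T}}^2$ is assumed to be normalized). For any finite family $f_1, \ldots, f_r$ in $C_0^{T_1, T_2}({\mathbb{T}}^2)$, $$\sum_{i=1}^r \|g f_i\|_{W_1^{T_1,T_2}} \leq \int_{{\mathbb{T}}^2} \sum_{i=1}^r \big(|T_1 f_i|+|T_2 f_i|\big) \leq 2\sup_{\|\Phi\|_{(C_0^{T_1,T_2})^*}\leq 1} \sum_{i=1}^r |\Phi(f_i)|,$$ because for every point $t\in{\mathbb{T}}^2$ and $j=1,2$ the functional $f\mapsto T_jf(t)$ has norm at most $1$ on $C_0^{T_1, T_2}({\mathbb{T}}^2)$. Hence $\pi_1(g)\leq 2$.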
Next, we are going to construct an operator $s$ from $W_1^{T_1, T_2}({\mathbb{T}}^2)$ into $W_2^{\frac{m-1}{2},\frac{n-1}{2}}({\mathbb{T}}^2)$. Again, the construction will be very similar to that in [@KMSprepr] with certain simplifications.
First, we need the following simple fact.
The system *(\[ET\])* with proper measures *(*or $L^1$ functions*)* $\mu_j$ is solvable if and only if the following relation holds true*:* $$\begin{aligned}
\label{solv}
\sum_{j=0}^N \partial_1^{jk}\partial_2^{(N-j)l}\mu_j = 0.
\end{aligned}$$
The proof is quite easily done by induction and can be found in [@KMSprepr] (see Lemma 2.1 there).
Now take any $f\in W_1^{T_1, T_2}({\mathbb{T}}^2)$ and consider the pair of functions $(f_1, f_2)=(T_1f, T_2f)$. Clearly, they satisfy the equation $T_2 f_1-T_1 f_2=0$. This is a differential equation and now we rewrite it in a different form. In order to do this, we note that if $\alpha/a +\beta/b < 1$, then we can express the differential monomial $\partial_1^\alpha \partial_2^\beta$ in terms of $\partial_1^a$ and $\partial_2^b$, using Fourier multipliers: $$\partial_1^\alpha \partial_2^\beta f=I_{\alpha\beta}\partial_1^a f + J_{\alpha\beta}\partial_2^b f,$$ where $I_{\alpha\beta}$ and $J_{\alpha\beta}$ are Fourier multipliers with the following symbols: $$\frac{(iu)^{\alpha+a}(iv)^{\beta}}{(iu)^{2a}\pm (iv)^{2b}} \qquad \hbox{and} \qquad \pm\frac{(iu)^{\alpha}(iv)^{\beta+b}}{(iu)^{2a} \pm (iv)^{2b}},$$ respectively. By this we mean that they act on a function $g\in L^1_0({\mathbb{T}}^2)$ by multiplying its Fourier coefficients $\hat{g}(u,v)$ by these expressions. The choice of a sign $\pm$ is determined by the condition $(-1)^a=\pm (-1)^b$, so that the denominators do not vanish when $u$ and $v$ are not equal to zero. In [@KMSprepr] it was proved that such multipliers are bounded on $L^1_0({\mathbb{T}}^2)$.
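On the Fourier side the identity above is verified immediately: for a frequency $(u,v)$ with $u,v\neq 0$ we have $$\frac{(iu)^{\alpha+a}(iv)^{\beta}}{(iu)^{2a}\pm (iv)^{2b}}\cdot (iu)^{a}+\Big(\pm\frac{(iu)^{\alpha}(iv)^{\beta+b}}{(iu)^{2a}\pm (iv)^{2b}}\Big)\cdot (iv)^{b}=\frac{(iu)^{\alpha}(iv)^{\beta}\big((iu)^{2a}\pm (iv)^{2b}\big)}{(iu)^{2a}\pm (iv)^{2b}}=(iu)^{\alpha}(iv)^{\beta},$$ which is exactly the symbol of $\partial_1^{\alpha}\partial_2^{\beta}$.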
The Fourier multipliers $I_{\alpha\beta}$ and $J_{\alpha\beta}$ defined as above are bounded on $L^1_0({\mathbb{T}}^2)$.
Using these multipliers, we can write the junior parts of operators $T_1$ and $T_2$ in the following form: $$\sum_{\alpha, \beta} c_{\alpha\beta} (I_{\alpha\beta}\partial_1^a + J_{\alpha\beta}\partial_2^b).$$ Therefore, we can regroup the terms in the expression $T_2 f_1 - T_1 f_2$ and rewrite it as $$\sum_{j=0}^N \partial_1^{jm}\partial_2^{(N-j)n} \mu_j = 0,$$ where the $\mu_j$ are precisely the functions $b_j f_1-a_j f_2$ when $j\neq 0, N$, $\mu_0$ is equal to $b_0 f_1-a_0f_2$ plus some linear combination of the operators $J_{\alpha\beta}$ applied to $f_1$ and $f_2$, and $\mu_N$ equals $b_N f_1-a_Nf_2$ plus some linear combination of the operators $I_{\alpha\beta}$ applied to $f_1$ and $f_2$.
Now we use Fact 2 and find a solution of the following system of differential equations:
$$\label{sys1}
-\partial_1^m {\varphi}_1=\mu_0; \qquad \partial_2^n{\varphi}_j-\partial_1^m {\varphi}_{j+1}=\mu_j, \quad j=1,\ldots, N-1; \qquad \partial_2^n {\varphi}_N =\mu_N.$$
By Fact 1, all functions ${\varphi}_j$ lie in $W_2^{\frac{m-1}{2},\frac{n-1}{2}}({\mathbb{T}}^2)$. We take the function ${\varphi}_{j_0+1}\in W_2^{\frac{m-1}{2},\frac{n-1}{2}}({\mathbb{T}}^2)$ (it depends linearly on the initial function $f$) and therefore we get a bounded linear operator $s$ from $W_1^{T_1, T_2}({\mathbb{T}}^2)$ into $W_2^{\frac{m-1}{2},\frac{n-1}{2}}({\mathbb{T}}^2)$. Summing up, we have the following diagram:
$$C(S)\xrightarrow{P} C_0^{\mathcal{T}}({\mathbb{T}}^2) \xrightarrow{i} C_0^{T_1, T_2}({\mathbb{T}}^2) \xrightarrow{g} W_1^{T_1, T_2}({\mathbb{T}}^2) \xrightarrow{s} W_2^{\frac{m-1}{2},\frac{n-1}{2}}({\mathbb{T}}^2).$$
Contradiction
-------------
Now we pass to the final part of the proof. We will construct an operator from a finite-dimensional subspace of $W_2^{\frac{m-1}{2},\frac{n-1}{2}}({\mathbb{T}}^2)$ to $C(S)$ and use some standard facts from Banach space theory (mainly, about absolutely summing operators) to get a contradiction. Now let us pass to the details.
Consider the function $v_{pq}:=z_1^p z_2^q \in C^{\mathcal{T}}({\mathbb{T}}^2)$. We are going to assume that natural numbers $p$ and $q$ satisfy the inequality $$\frac{\delta}{2} q^n \leq p^m \leq \delta q^n,$$ where $\delta$ is a small fixed constant (depending of course on our collection ${\mathcal{T}}$ but not on $p$ and $q$) which will be chosen later. Also, we will consider only large values of $p$: $p>C$ for some big constant $C$. We always assume that the numbers $p$ and $q$ satisfy these conditions and do not emphasize this later in the present section.
First of all, we note that $\|v_{pq}\|_{C^{\mathcal{T}}({\mathbb{T}}^2)}\asymp p^{mN}$.
Indeed, if we take any differential monomial $\partial_1^\alpha \partial_2^\beta$ involved in a junior part of any operator from ${\mathcal{T}}$, then we have: $\partial_1^\alpha \partial_2^\beta z_1^p z_2^q = (ip)^\alpha (iq)^\beta z_1^p z_2^q.$ Since this monomial is in a junior part of some operator, the following inequality holds: $\frac{\alpha}{Nm}+\frac{\beta}{Nn}<1$. Therefore, if $\alpha=\alpha_0 m$, then $\beta=(N-\alpha_0-c)n$ for some $c>0$. Hence, the norm of $\partial_1^\alpha \partial_2^\beta v_{pq}$ in $C({\mathbb{T}}^2)$ is equal to $p^\alpha q^\beta = p^{\alpha_0 m} q^{(N-\alpha_0-c)n} \asymp p^{m(N-c)}$. Clearly, this quantity is smaller than $p^{mN}$ by a factor that becomes arbitrarily small once $p$ is sufficiently large.
On the other hand, if we apply any differential monomial involved in the senior part of one of the operators (which is of the form $\partial_1^{jm} \partial_2^{(N-j)n}$) to $v_{pq}$, we get a function whose norm is $p^{jm} q^{(N-j)n} \asymp p^{mN}$. Moreover, if $j>j_0$, then $p^{jm} q^{(N-j)n} \asymp \delta^j q^{nN}$, and this quantity is smaller than $p^{j_0m} q^{(N-j_0)n} \asymp \delta^{j_0} q^{nN}$ by a factor that can be made arbitrarily small if we make $\delta$ small; therefore $\|T_1 v_{pq}\|_{C({\mathbb{T}}^2)} \asymp p^{mN}$. All these facts easily imply that indeed $\|v_{pq}\|_{C^{\mathcal{T}}({\mathbb{T}}^2)} \asymp p^{mN}$.
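Let us spell out the computation behind the last assertion. The standing assumption $\frac{\delta}{2} q^n \leq p^m \leq \delta q^n$ gives $p^{jm}\asymp \delta^j q^{jn}$, whence $$p^{jm} q^{(N-j)n}\asymp \delta^j q^{jn}q^{(N-j)n}=\delta^j q^{nN},$$ and for $j>j_0$ the ratio of this quantity to $p^{j_0m} q^{(N-j_0)n}\asymp \delta^{j_0} q^{nN}$ is $\asymp \delta^{j-j_0}$, which is small provided $\delta$ is chosen small enough.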
Similarly, $\|v_{pq}\|_{C^{T_1, T_2}({\mathbb{T}}^2)} \asymp p^{mN}$ and $\|v_{pq}\|_{W_1^{T_1, T_2}({\mathbb{T}}^2)} \asymp p^{mN}$. Therefore, we consider functions $$w_{pq}:=\frac{v_{pq}}{p^{mN}}.$$
By the discussion above, we have $\|w_{pq}\|_{C_0^{\mathcal{T}}({\mathbb{T}}^2)} \asymp 1$. Hence, there exist functions $f_{pq} \in C(S)$ such that $P(f_{pq})=w_{pq}$ and $\|f_{pq}\|_{C(S)} \leq C$. Besides that, we see that $T_1 w_{pq} = c_{pq} v_{pq}$ and $T_2 w_{pq} = d_{pq} v_{pq}$ where $|c_{pq}|, |d_{pq}| \asymp 1$.
Next, we need to solve the system of differential equations (\[sys1\]). We recall that $\mu_0$ equals $b_0 c_{pq} v_{pq} - a_0 d_{pq} v_{pq}$ plus some linear combination of the operators $I_{\alpha\beta}$ and $J_{\alpha\beta}$ applied to $T_1 w_{pq}$ and $T_2 w_{pq}$. Therefore, we write it in the following form: $$\mu_0 = \xi_{pq} c_{pq} v_{pq} + \eta_{pq} d_{pq} v_{pq} + (b_0 c_{pq} v_{pq} - a_0 d_{pq} v_{pq}).$$
It is easy to see that $\xi_{pq}, \eta_{pq} = O(p^{-\varepsilon})$ for some small fixed $\varepsilon>0$. Indeed, we simply recall that the symbol of any Fourier multiplier $I_{\alpha\beta}$ is of the form $$\frac{(ip)^{\alpha+a} (iq)^{\beta}}{(ip)^{2a}\pm (iq)^{2b}}.$$ The absolute value of this expression can be estimated by $$\Big| \frac{(ip)^{\alpha+Nm} (iq)^{\beta}}{(ip)^{2Nm}\pm (iq)^{2Nn}} \Big| \asymp \frac{p^\alpha q^\beta}{p^{Nm}} \asymp \frac{p^\alpha \cdot p^{\frac{m}{n} \beta}}{p^{Nm}}.$$ Here $\alpha/m + \beta/n < N$, therefore this expression indeed equals $O(p^{-\varepsilon})$. The same is true for all operators $I_{\alpha\beta}$ and $J_{\alpha\beta}$.
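Indeed, writing the last expression as a single power of $p$, $$\frac{p^{\alpha}\cdot p^{\frac{m}{n}\beta}}{p^{Nm}}=p^{\alpha+\frac{m}{n}\beta-Nm}=p^{-m\left(N-\frac{\alpha}{m}-\frac{\beta}{n}\right)},$$ so one may take $\varepsilon=m\min\big(N-\frac{\alpha}{m}-\frac{\beta}{n}\big)>0$, the minimum being taken over the finitely many junior multiindices $(\alpha,\beta)$ that occur.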
Now we find a solution of the system of differential equations (\[sys1\]). Specifically, we are interested in the function ${\varphi}_{j_0 + 1}$ (that is how we defined the operator $s$).
If $j_0 = 0$, then we need only the first differential equation to find ${\varphi}_1$. By construction, $j_0=0$ means that $a_0=-1$ and $b_0 = 0$. Therefore, clearly we have ${\varphi}_1 = k_{pq} \frac{v_{pq}}{p^m}$ where $|k_{pq}| \asymp 1$.
If $j_0 > 0$, then again by construction $a_0 = b_0 = 0$ and we use the first equation from system (\[sys1\]) to conclude that $${\varphi}_1 = \xi_{pq}^{(1)} \cdot \frac{v_{pq}}{p^m}, \quad \hbox{where} \quad \xi_{pq}^{(1)} = O(p^{-\varepsilon}).$$ Note that in this case $|\partial_2^n {\varphi}_1| = |\xi_{pq}^{(1)}\frac{q^n}{p^m} v_{pq}| \asymp |\xi_{pq}^{(1)} v_{pq}|$. Now, if $j_0 = 1$, then we use the second equation to conclude that ${\varphi}_2 = k_{pq} \frac{v_{pq}}{p^m}$ with $|k_{pq}| \asymp 1$ (again, in this case $\mu_1 = b_1 c_{pq} v_{pq} - a_1 d_{pq} v_{pq}$ and since $j_0=1$, we see that $a_1 = -1, b_1 = 0$). If $j_0 > 1$, then we conclude from the second equation that ${\varphi}_2 = \xi_{pq}^{(2)} \cdot \frac{v_{pq}}{p^m}$ with $|\xi_{pq}^{(2)}|=O(p^{-\varepsilon})$, etc.
Anyway, we see that the following relation holds for a function ${\varphi}_{j_0+1}$: $${\varphi}_{j_0+1} = k_{pq} \frac{v_{pq}}{p^m}, \quad \hbox{where} \quad |k_{pq}| \asymp 1.$$
Now we emphasize the dependence of ${\varphi}_{j_0+1}$ on $p$ and $q$, so we denote ${\varphi}^{(p,q)}:={\varphi}_{j_0+1}$. We see that $\{{\varphi}^{(p,q)}\}$ is an orthogonal system in $W_2^{\frac{m-1}{2}, \frac{n-1}{2}}$ and $$\|{\varphi}^{(p,q)}\|_{W_2^{\frac{m-1}{2}, \frac{n-1}{2}}} \asymp p^{-m} p^{\frac{m-1}{2}} q^{\frac{n-1}{2}} \asymp p^{-1/2} q^{-1/2}.$$
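The last step is a one-line computation: since $q^n\asymp p^m$ under our standing assumptions, $q^{\frac{n-1}{2}}=q^{\frac{n}{2}}q^{-\frac12}\asymp p^{\frac{m}{2}}q^{-\frac12}$, and therefore $$p^{-m} p^{\frac{m-1}{2}} q^{\frac{n-1}{2}} \asymp p^{-m}p^{\frac{m-1}{2}}p^{\frac{m}{2}}q^{-\frac12}=p^{-\frac12}q^{-\frac12}.$$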
Finally, we consider the finite-dimensional operator $A: W_2^{\frac{m-1}{2}, \frac{n-1}{2}} \to C(S)$ which takes $p^{1/2}q^{1/2} {\varphi}^{(p,q)}$ to $\alpha_{pq} f_{pq}$ where $(\alpha_{pq})$ is an arbitrary sequence of numbers such that $\sum |\alpha_{pq}|^2 = 1$. Here we take $p$ and $q$ satisfying the previous conditions and such that $p<M$ for some big number $M$. To be more precise, the operator $A$ is the composition of the orthogonal projection onto $\mathrm{span}\{{\varphi}^{(p,q)}\}_{p<M}$ and the operator we have described.
For any function $g\in W_2^{\frac{m-1}{2}, \frac{n-1}{2}}$ with $$g = \sum_{p<M} \varkappa_{pq} p^{1/2} q^{1/2} {\varphi}^{(p,q)}$$ we have: $$Ag = \sum_{p<M} \varkappa_{pq} \alpha_{pq} f_{pq}.$$ Then the norm of $A$ can be estimated in the following way: $$\|Ag\|_{C(S)}\lesssim \sum_{p<M} |\varkappa_{pq}\alpha_{pq}|\leq \Big(\sum |\varkappa_{pq}|^2\Big)^{1/2} \Big(\sum |\alpha_{pq}|^2\Big)^{1/2} \leq \Big(\sum |\varkappa_{pq}|^2\Big)^{1/2}\lesssim \|g\|_{W_2^{\frac{m-1}{2}, \frac{n-1}{2}}},$$ and we conclude that $\|A\| \lesssim 1$.
Besides that, we recall that the operator $g$ (see the diagram at the end of Subsection 2.2) is 1-summing and therefore $AsgiP$ is also a 1-summing operator. Since this operator acts on a $C(S)$-space, it is also $1$-integral and therefore its restriction to a finite-dimensional subspace (namely, $\mathrm{span}_{p<M}\{f_{pq}\}$) is 1-nuclear (for our purposes this means that its trace is well defined and can be estimated by its norm), see for example [@Wojt pp. 218–219]. We have: $\mathrm{tr}(AsgiP)\lesssim \|AsgiP\| \lesssim 1$.
Now we are going to prove that this is false. Recall that by our constructions the operator $sgiP$ takes $f_{pq}$ to ${\varphi}^{(p,q)}$, and $A$ takes ${\varphi}^{(p,q)}$ to $p^{-1/2}q^{-1/2}\alpha_{pq} f_{pq}$. Hence, the operator $AsgiP$ has diagonal form (in the basis $\{f_{pq}\}$): $$AsgiP(f_{pq}) = p^{-1/2}q^{-1/2} \alpha_{pq} f_{pq}.$$ Since its trace is bounded, we infer that $$\Big| \sum p^{-1/2} q^{-1/2}\alpha_{pq} \Big| \lesssim 1.$$ This inequality holds for an arbitrary sequence $(\alpha_{pq})$ with $\sum |\alpha_{pq}|^2=1$, which implies that $$\sum p^{-1} q^{-1} \lesssim 1.$$ But this is clearly false. Indeed, recall that the number of admissible numbers $q$ here is about $p^{m/n}$ and for every such $q$ we have $q\asymp p^{m/n}$. Therefore, our sum can be estimated from below simply by the following: $$\sum_{C<p<M} p^{-1}.$$ Since $\sum p^{-1}$ is a divergent series, this is a contradiction and the proof of Theorem 1 is finished.
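For completeness, here is the divergence estimate written out. For a fixed $p$, the admissible $q$ are those with $(p^m/\delta)^{1/n}\leq q\leq (2p^m/\delta)^{1/n}$, an interval containing $\asymp p^{m/n}$ integers, and each such $q$ is $\asymp p^{m/n}$; hence $$\sum_{p,q} p^{-1}q^{-1}\asymp \sum_{C<p<M} p^{-1}\cdot p^{m/n}\cdot p^{-m/n}=\sum_{C<p<M} p^{-1}\asymp \log\frac{M}{C},$$ which is unbounded as $M\to\infty$.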
Absence of local unconditional structure
========================================
Now we start the proof of the other main statement of this paper, Theorem 2. Mainly, this proof involves methods from [@KisSid] and also the embedding theorem from [@KMSpap], Fact 1. As in the proof of Theorem 1, we argue by contradiction and suppose that the space $C^{\mathcal{T}}({\mathbb{T}}^2)$ has local unconditional structure.
Main constructions
------------------
First, we note that we can do the same reductions as in the previous section where we proved Theorem 1. We consider the space $C^{\mathcal{T}}_0 ({\mathbb{T}}^2)$ instead of $C^{\mathcal{T}}({\mathbb{T}}^2)$, since the passage to a complemented subspace preserves local unconditional structure. Besides that, we are going to assume that all the additional assumptions from Subsection 2.1 are fulfilled. Next, we define operators $i$, $g$, $s$ in the same way as in Subsection 2.2. We still denote the senior part of $T_j$ by $\sigma_j$ and the junior part by $\tau_j$.
Let us denote by $H$ the collection of differential operators corresponding to all points with integral coordinates on the line $\Lambda$. Then we can consider the embedding $j: C_0^H({\mathbb{T}}^2)\to C_0^{\mathcal{T}}({\mathbb{T}}^2)$ that is a continuous operator (see [@BesIlNik Theorem 9.5]). So, we have the following diagram: $$C_0^H({\mathbb{T}}^2)\xrightarrow{j}C_0^{\mathcal{T}}({\mathbb{T}}^2)\xrightarrow{i}C_0^{T_1, T_2}({\mathbb{T}}^2)\xrightarrow{g} W_1^{T_1, T_2}({\mathbb{T}}^2)\xrightarrow{s} W_2^{\frac{m-1}{2}, \frac{n-1}{2}}({\mathbb{T}}^2).$$
Recall that the operator $g$ is 1-summing. Now we are going to use the following fact (see [@GorLew] or [@Pietsch Sec. 23]):
Let $X$ be a Banach space having local unconditional structure. Then every $1$-summing operator $T$ from $X$ to an arbitrary Banach space $Y$ can be factored through the space $L^1$, i.e., there is a measure $\mu$ and operators $V:X\to L^1(\mu)$ and $U: L^1(\mu)\to Y^{**}$ such that $UV = \kappa T$, where $\kappa:Y\to Y^{**}$ is the canonical embedding.
Using this fact, we get the following commutative diagram:
$$C_0^H({\mathbb{T}}^2)\xrightarrow{j}C_0^{\mathcal{T}}({\mathbb{T}}^2)\xrightarrow{i}C_0^{T_1, T_2}({\mathbb{T}}^2)\xrightarrow{g} W_1^{T_1, T_2}({\mathbb{T}}^2)\xrightarrow{s} W_2^{\frac{m-1}{2}, \frac{n-1}{2}}({\mathbb{T}}^2),$$ $$C_0^{\mathcal{T}}({\mathbb{T}}^2)\xrightarrow{V} L^1(\mu)\xrightarrow{U} W_1^{T_1, T_2}({\mathbb{T}}^2)^{**}, \qquad UV=\kappa\, gi,$$ where $V$ and $U$ are obtained by applying the preceding fact to the $1$-summing operator $gi$ on the space $C_0^{\mathcal{T}}({\mathbb{T}}^2)$ (which has local unconditional structure by our assumption).
Now we can consider the dual diagram:
$$W_2^{\frac{m-1}{2}, \frac{n-1}{2}}({\mathbb{T}}^2)\xrightarrow{s^*} W_1^{T_1, T_2}({\mathbb{T}}^2)^{*}\xrightarrow{g^*} C_0^{T_1, T_2}({\mathbb{T}}^2)^{*}\xrightarrow{i^*}C_0^{\mathcal{T}}({\mathbb{T}}^2)^{*}\xrightarrow{j^*}C_0^H({\mathbb{T}}^2)^{*},$$ $$W_1^{T_1, T_2}({\mathbb{T}}^2)^{*}\xrightarrow{U^*} L^\infty(\mu)\xrightarrow{V^*} C_0^{\mathcal{T}}({\mathbb{T}}^2)^{*},$$ in which $i^*g^*$ coincides with $V^*U^*$ up to the canonical embeddings.
The next step of the proof is to construct a specific operator which would take elements of $C_0^H({\mathbb{T}}^2)^*$ to elements of the space $W_{1/2}^H ({\mathbb{T}}^2)$ (which is quasi-Banach; the definition will be given below). This construction is the same as in the paper [@KisSid] but for the sake of completeness we repeat it here.
Consider the space $W_2^H({\mathbb{T}}^2)$ that is defined by means of the following seminorm (and contains only proper functions):
$$\|f\|_{W_2^H({\mathbb{T}}^2)}=\max\limits_{T\in H} \|Tf\|_{L^2({\mathbb{T}}^2)}.$$
Clearly, this is a Hilbert space. Recall that $H=\{\partial_1^{jm}\partial_2^{(N-j)n}\}_{j=0}^N$. The space $W_2^H({\mathbb{T}}^2)$ can be identified with a subspace of $L^2({\mathbb{T}}^2)\oplus \ldots \oplus L^2({\mathbb{T}}^2)$ (there are $N+1$ copies of $L^2({\mathbb{T}}^2)$ here); this identification is given by the map $$f\mapsto (\partial_1^{jm}\partial_2^{(N-j)n} f)_{j=0}^N.$$
Therefore, we can consider the orthogonal projection from the direct sum of $N+1$ copies of $L^2({\mathbb{T}}^2)$ onto $W_2^H({\mathbb{T}}^2)$. We denote this projection by $P$. We need some properties of this projection, so we state them here. All of them are listed in [@KisSid].
First, it is easy to see how $P$ acts on a natural basis of $L^2({\mathbb{T}}^2)\oplus \ldots \oplus L^2({\mathbb{T}}^2)$. Suppose that $k=(k_1, k_2)$ is a pair of integers and denote by $\phi_k^l$ the following element of the space $L^2({\mathbb{T}}^2)\oplus \ldots \oplus L^2({\mathbb{T}}^2)$:
$$\phi_k^l=(0, 0, \ldots, 0, z_1^{k_1}z_2^{k_2}, 0, 0 \ldots, 0),$$ where $z_1^{k_1}z_2^{k_2}$ is at the $l$th position, $0\leq l\leq N$. Then the following statement can be proved by simple calculations.
If either $k_1$ or $k_2$ equals $0$, then $P(\phi_k^l)=0$. Otherwise, $$P(\phi_k^l)=\bar{\lambda}_l\Big( \sum_{j=0}^N |\lambda_j|^2 \Big)^{-1} (\lambda_0 z_1^{k_1}z_2^{k_2}, \ldots, \lambda_N z_1^{k_1}z_2^{k_2}),$$ where $\lambda_j=(ik_1)^{jm} (ik_2)^{(N-j)n}$.
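A sketch of the computation behind this formula (spelled out here only for convenience): on the frequency $(k_1,k_2)$ with $k_1k_2\neq 0$, the image of the identification map $f\mapsto(\partial_1^{jm}\partial_2^{(N-j)n}f)_{j=0}^N$ is the line spanned by the vector $(\lambda_0,\ldots,\lambda_N)\,z_1^{k_1}z_2^{k_2}$, and the orthogonal projection of $\phi_k^l$ onto this line is $$\frac{\langle e_l,\lambda\rangle}{\|\lambda\|^2}\,(\lambda_0 z_1^{k_1}z_2^{k_2},\ldots,\lambda_N z_1^{k_1}z_2^{k_2})=\bar{\lambda}_l\Big(\sum_{j=0}^N|\lambda_j|^2\Big)^{-1}(\lambda_0 z_1^{k_1}z_2^{k_2},\ldots,\lambda_N z_1^{k_1}z_2^{k_2}),$$ where $e_l$ is the $l$th coordinate vector and $\lambda=(\lambda_0,\ldots,\lambda_N)$. If $k_1k_2=0$, the harmonic $z_1^{k_1}z_2^{k_2}$ is not proper, so $W_2^H({\mathbb{T}}^2)$ contains no nonzero element supported on this frequency and the projection vanishes.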
Now we need to understand how $P$ acts on the space $C_0^H({\mathbb{T}}^2)^*$. The space $C_0^H({\mathbb{T}}^2)$ can be identified with a subspace of $C({\mathbb{T}}^2)\oplus\ldots\oplus C({\mathbb{T}}^2)$ (in the same way as $W_2^H({\mathbb{T}}^2)$ is identified with a subspace of $L^2({\mathbb{T}}^2)\oplus \ldots \oplus L^2({\mathbb{T}}^2)$). Therefore, we have: $$C_0^H({\mathbb{T}}^2)^*=(\mathcal{M}({\mathbb{T}}^2)\oplus\ldots\oplus \mathcal{M}({\mathbb{T}}^2))/\mathcal{X},$$ where $\mathcal{X}$ is the annihilator of $C_0^H({\mathbb{T}}^2)$ in $C({\mathbb{T}}^2)\oplus\ldots\oplus C({\mathbb{T}}^2)$, that is, $$\mathcal{X}=\Big\{(\mu_0, \mu_1, \ldots, \mu_N): \sum_{j=0}^N \int \partial_1^{jm}\partial_2^{(N-j)n}g\, d\bar{\mu}_j = 0\ \forall\, g\in C_0^H({\mathbb{T}}^2) \Big\}.$$
At this point, formally speaking, we should consider an operator $\Phi_M$ that is convolution with the $M$th Fejér kernel in both variables, and the operators $P_M$ such that $$P_M (F) = P(\Phi_M \mu_0, \Phi_M \mu_1, \ldots, \Phi_M \mu_N), \quad F\in C_0^H({\mathbb{T}}^2)^*,$$ where $(\mu_0, \ldots, \mu_N)$ is any representative of a functional $F$. This formula is meaningful because $P$ is an *orthogonal* projection and if $(\nu_0, \ldots, \nu_N)$ lies in $\mathcal{X}$, then $(\Phi_M \nu_0, \ldots, \Phi_M \nu_N)$ lies in $\mathcal{X}\cap (L^2({\mathbb{T}}^2)\oplus\ldots\oplus L^2({\mathbb{T}}^2))$, that is, in the kernel of the projection $P$. Now we state the following fact from [@KisSid].
The operators $P_M: C_0^H({\mathbb{T}}^2)^*\to W_{1/2}^H({\mathbb{T}}^2)$ are uniformly bounded in $M$.
The definition of the space $W_{1/2}^H({\mathbb{T}}^2)$ should now be clear from the context.
The proof (modulo some technical details) follows from the theory of singular integrals (and Fourier multipliers) with mixed homogeneity developed in [@FabRiv] (we see from the formula in Fact 5 that the components of $P$ are Fourier multipliers with a certain homogeneity) and it is even true that these operators are uniformly of weak type $(1,1)$. Of course, there are some technical differences, for example, in [@FabRiv] everything was done for the space $\mathbb{R}^n$ instead of ${\mathbb{T}}^n$. On the other hand, these differences can be overcome quite easily, again, some details can be found in [@KisSid].
Since all the estimates are uniform in $M$, we omit the letter $M$ in our notation. Now we have the following commutative diagram:
$$W_2^{\frac{m-1}{2}, \frac{n-1}{2}}({\mathbb{T}}^2)\xrightarrow{s^*} W_1^{T_1, T_2}({\mathbb{T}}^2)^{*}\xrightarrow{g^*} C_0^{T_1, T_2}({\mathbb{T}}^2)^{*}\xrightarrow{i^*}C_0^{\mathcal{T}}({\mathbb{T}}^2)^{*}\xrightarrow{j^*}C_0^H({\mathbb{T}}^2)^{*}\xrightarrow{P}W_{1/2}^H({\mathbb{T}}^2),$$ with $i^*g^*$ factored through $L^\infty(\mu)$ as in the previous diagram.
Now we are going to use some facts from the theory of $p$-summing operators. A good reference here is [@KislOp]. The space $W_{1/2}^H({\mathbb{T}}^2)$ is a quasi-Banach space of cotype 2 and $L^\infty(\mu)$ is a space of type $C(K)$. Therefore, $Pj^*V^*$ is a $2$-summing operator (this is a generalization of Grothendieck’s theorem; see [@KislOp] for details). Hence, $Pj^*i^*g^*s^*$ is also $2$-summing. In the next subsection we are going to show that this is not the case.
Final computations and a contradiction
--------------------------------------
As in the proof of Theorem 1, we denote by $v_{pq}$ the function $z_1^p z_2^q$. Again, we consider only sufficiently large values of $p$ and assume that the pair $(p, q)$ in question satisfies the following condition: $$\frac{\delta}{2}q^n \leq p^m \leq \delta q^n.$$
We see that $$\|v_{pq}\|_{W_2^{\frac{m-1}{2}, \frac{n-1}{2}}} \asymp p^{\frac{m-1}{2}} q^{\frac{n-1}{2}} \asymp p^m p^{-1/2}q^{-1/2}.$$
Now denote by $w_{pq}$ the function $\frac{v_{pq}}{\|v_{pq}\|_{W_2^{\frac{m-1}{2}, \frac{n-1}{2}}}}$. This is an orthonormal system in the space $W_2^{\frac{m-1}{2}, \frac{n-1}{2}}$ and hence it is weakly 2-summable. Therefore, since $Pj^*i^*g^*s^*$ is a 2-summing operator (which by definition means that it takes weakly 2-summable sequences to 2-summable sequences), we have: $$\sum \|Pj^*i^*g^*s^* w_{pq}\|_{W_{1/2}^H}^2 <\infty.$$
First, let us determine where the operator $j^*i^*g^*s^*$ sends the function $w_{pq}$. Take any function $v_{\tilde{p}\tilde{q}}=z_1^{\tilde{p}}z_2^{\tilde{q}}\in C_0^H({\mathbb{T}}^2)$; linear combinations of such functions are dense in $C_0^H({\mathbb{T}}^2)$. We write: $$\label{dualop}
\langle v_{\tilde{p}\tilde{q}}, (j^*i^*g^*s^*) w_{pq} \rangle = \langle (sgij) v_{\tilde{p}\tilde{q}}, w_{pq} \rangle = \langle sv_{\tilde{p}\tilde{q}}, w_{pq} \rangle_{W_2^{\frac{m-1}{2}, \frac{n-1}{2}}}.$$ Now we need to recall how $s$ acts on the function $v_{\tilde{p}\tilde{q}}$. We need to solve the system of equations (\[sys1\]) and by definition all functions $\mu_j$ are multiples of $z_1^{\tilde{p}} z_2^{\tilde{q}}$. Therefore, the solution is also a multiple of $z_1^{\tilde{p}} z_2^{\tilde{q}}$ and so we see that $\langle sv_{\tilde{p}\tilde{q}}, w_{pq} \rangle_{W_2^{\frac{m-1}{2}, \frac{n-1}{2}}}\neq 0$ only if $p=\tilde{p}$ and $q=\tilde{q}$.
So, now we need to determine the function $s(v_{pq})$. Recall that in the previous section (where we proved Theorem 1) we showed that $s$ takes $\frac{v_{pq}}{p^{mN}}$ to $k_{pq}\frac{v_{pq}}{p^m}$ where $|k_{pq}|\asymp 1$. Therefore, we have: $$s(v_{pq}) = k_{pq} p^{mN} p^{-m} v_{pq}.$$ Hence, we have the following identity: $$\langle sv_{pq}, w_{pq} \rangle = k_{pq} p^{mN}p^{-m}\|v_{pq}\|_{W_2^{\frac{m-1}{2}, \frac{n-1}{2}}}\asymp k_{pq}p^{mN}p^{-m}\cdot p^m p^{-1/2}q^{-1/2} = k_{pq} p^{mN}p^{-1/2}q^{-1/2}.$$ Therefore, we arrive at the following formula for the right-hand side of (\[dualop\]): $$\langle sv_{\tilde{p}\tilde{q}}, w_{pq} \rangle_{W_2^{\frac{m-1}{2}, \frac{n-1}{2}}}=
\begin{cases}
0, \ (p,q)\neq (\tilde{p}, \tilde{q}),\\
k_{pq} p^{mN}p^{-1/2}q^{-1/2}, \ (p,q) = (\tilde{p}, \tilde{q}).
\end{cases}$$
Recall that the following element of $C({\mathbb{T}}^2)\oplus\ldots\oplus C({\mathbb{T}}^2)$ corresponds to $v_{pq}\in C_0^H({\mathbb{T}}^2)$: $$(\partial_1^{jm}\partial_2^{(N-j)n} v_{pq})_{j=0}^N = ((ip)^{jm}(iq)^{(N-j)n} v_{pq})_{j=0}^N.$$ So, since $p^{mN}\asymp q^{nN}$, we can take the following representative from the equivalence class corresponding to $(j^*i^*g^*s^*)w_{pq}$: $$(l_{pq}p^{-1/2}q^{-1/2} v_{pq}, 0, 0, \ldots, 0), \quad \hbox{where} \quad |l_{pq}|\asymp 1.$$
Finally, we apply the projection $P$ (using the formula from Fact 5; in our case, $|\lambda_j|=p^{jm}q^{(N-j)n}\asymp p^{mN}$ and hence $\bar{\lambda}_l \lambda_k (\sum |\lambda_j|^2)^{-1} \ \asymp 1$). We have: $$\sum \|l_{pq} p^{-1/2}q^{-1/2} v_{pq}\|_{L^{1/2}}^2\asymp \sum p^{-1}q^{-1},$$ and it was already established that this sum is divergent. Therefore, we get a contradiction and the theorem is proved.
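(To justify the last display: $\|c\,v_{pq}\|_{L^{1/2}({\mathbb{T}}^2)}=|c|\,\big(\int_{{\mathbb{T}}^2}|v_{pq}|^{1/2}\big)^{2}=|c|$ because $|v_{pq}|\equiv 1$, so each term contributes $\asymp p^{-1}q^{-1}$; the divergence of $\sum p^{-1}q^{-1}$ therefore contradicts the $2$-summability of $Pj^*i^*g^*s^*$.)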
[99]{}
O. V. Besov, V. P. Il’in, S. M. Nikol’skii, *Integral Representation of Functions and Imbedding Theorems*, Halsted Press, Washington (1978).
E. B. Fabes and N. M. Rivi[è]{}re, *Singular integrals with mixed homogeneity*, Stud. Math., 27, No.1, 19–38 (1966).
Y. Gordon, D. R. Lewis, *Absolutely summing operators and local unconditional structure*, Acta Math, 133, 27–48 (1974).
A. Grothendieck, *Erratum au mémoire: produits tensoriels topologiques et espaces nucléaires*, Ann. Inst. Fourier (Grenoble) 6 (1955–1956), 117–120.
G. M. Henkin, *Absence of a uniform homeomorphism between spaces of smooth functions of one and of $n$ variables ($n\geq 2$)*, Mat. Sb. 74 (4) (1967), 595–606 (in Russian).
S. V. Kislyakov, *Absolutely summing operators on the disc algebra*, Algebra i Analiz, **3**:4 (1991), 1–77; St. Petersburg Math. J., **3**:4 (1992), 705–774.
S. V. Kislyakov, *Sobolev embedding operators and nonisomorphism of certain Banach spaces*, Funktsional. Anal. i Prilozhen. 9 (4) (1975) 22–27 (in Russian).
S. V. Kislyakov, *There is no local unconditional structure in the space of continuously differentiable functions on the torus*, LOMI Preprint R-1-77, Leningrad, 1977 (in Russian).
S. V. Kislyakov, D. V. Maksimov, *Isomorphic type of a space of smooth functions generated by a finite family of differential operators*, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 327 (2005) 78–97 (in Russian).
S. V. Kislyakov, D. V. Maksimov, *Isomorphic type of a Space of smooth functions generated by a finite family of nonhomogeneous differential operators*, POMI Preprint 6/2009 (in Russian).
S. V. Kislyakov, D. V. Maksimov, and D. M. Stolyarov, *Differential expressions with mixed homogeneity and spaces of smooth functions they generate in arbitrary dimension*, J. Funct. Anal., 269, 3220–3263 (2015).
S. V. Kislyakov, D. V. Maksimov, and D. M. Stolyarov, *Differential expressions with mixed homogeneity and spaces of smooth functions they generate*, https://arxiv.org/abs/1209.2078
S. V. Kislyakov, N. G. Sidorenko, *Absence of a local unconditional structure in anisotropic spaces of smooth functions*, Sibirsk. Mat. Zh. 29 (3) (1988) 64–77 (in Russian).
S. Kwapień, A. Pe[ł]{}czyński, *Absolutely summing operators and translation-invariant spaces of functions on compact abelian groups*, Math. Nachr. 94 (1980) 303–340.
D. V. Maksimov, *Isomorphic type of a space of smooth functions generated by a finite family of differential operators*. II, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 333 (2006) 62–65.
A. Pe[ł]{}czyński, K. Senator, *On isomorphisms of anisotropic Sobolev spaces with “classical" Banach spaces and Sobolev-type embedding theorem*, Studia Math. 84 (1986) 169–215.
A. Pietsch, *Operator Ideals*, Elsevier, North-Holland (1980).
N. G. Sidorenko, *Nonisomorphism of certain Banach spaces of smooth functions to the space of continuous functions*, Funktsional. Anal. i Prilozhen. 21 (4) (1987) 169–215.
P. Wojtaszczyk, *Banach spaces for analysts*, Cambridge University Press (1991).
A. Tselishchev
Saint Petersburg Leonard Euler International Mathematical Institute, Fontanka 27, St. Petersburg 191023, Russia
[email protected]
Chef Ray: Pasta with Smoked Salmon
This is a fresh take on the classic version without the heaviness of cream and butter.
From: Thomas, Biju & Lim, Allen (2011), The Feed Zone Cookbook: Fast and Flavorful Food for Athletes, VeloPress
Serves 4
- 8 ounces (225g) farfalle (or other medium-sized pasta)
- 3 ounces (85g) smoked salmon
- 1 cup yoghurt (if using Greek yoghurt add a splash of water)
- ½ cup low-sodium stock, water, or milk
- 1 small tomato, diced
- ½ cup carrots, diced
- ½ cup green peas (frozen peas and carrots are a fine substitute for fresh)
- ¼ cup chopped fresh parsley
Optional Additions
- fresh tarragon, capers, and/or chopped green olives
- Cook the pasta according to package directions to be al dente (10-11 minutes). Drain. Toss with a small amount of olive oil; set aside.
- Remove any bones from the salmon and flake the meat with a fork.
- In a large deep saucepan, whisk together the yoghurt and stock. The consistency should be thick. Set over medium heat and bring to a gentle boil, stirring continually. Simmer.
- Add tomato, carrots, peas, and parsley. Cook 5-6 minutes, or until carrots are tender. Add cooked pasta and salmon (and any optional ingredients), mix thoroughly, and remove from heat.
To finish the dish, add salt and pepper to taste, and offer grated parmesan and fresh lemon juice as condiments.
Although I’m not a qualified chef I do like to pretend I’m on Master Chef. I source the recipes from the stated source (not sauce) and make them taking the photo of the meal I produce for myself.
Share this post so your friends can benefit as well.
If you enjoyed this recipe here is one I published a year ago:
For more great recipes like this get the Feedzone Cookbook.
Chef Biju and Dr. Lim vetted countless meals with the world’s best endurance athletes in the most demanding test kitchens. In The Feed Zone Cookbook: Fast and Flavorful Food for Athletes, Thomas and Lim share their energy-packed, wholesome recipes to make meals easy to prepare, delicious to eat, and better for performance.
The Feed Zone Cookbook provides 150 delicious recipes that even the busiest athletes can prepare in less time than it takes to warm up for a workout. With simple recipes requiring just a handful of ingredients, Biju and Allen show how easy it is for athletes to prepare their own food, whether at home or on the go.
The Feed Zone Cookbook strikes the perfect balance between science and practice so that athletes will change the way they think about food, replacing highly processed food substitutes with real, nourishing foods that will satisfy every athlete's cravings.
https://www.coachray.nz/2019/11/10/chef-ray-pasta-with-smoked-salmon/
# Atlantic spotted dolphin
The Atlantic spotted dolphin (Stenella frontalis) is a dolphin found in warm temperate and tropical waters of the Atlantic Ocean. Older members of the species have a very distinctive spotted coloration all over their bodies.
## Taxonomy
The Atlantic spotted dolphin was first described by Cuvier in 1828. Considerable variation in the physical form of individuals occurs in the species, and specialists have long been uncertain as to the correct taxonomic classification. Currently, just one species is recognised, but a large, particularly spotty variant commonly found near Florida quite possibly may be classified as a formal subspecies or indeed a species in its own right.
Atlantic spotted dolphins in the Bahamas have been observed mating with bottlenose dolphins. Rich LeDuc has published data that suggest the Atlantic spotted dolphin may be more closely related to the bottlenose dolphins (genus Tursiops) than to other members of the genus Stenella. More recent studies in the 2020's indicate that this is a consequence of reticulate evolution (such as past hybridization between Stenella (spotted dolphins) and ancestral Tursiops (bottlenose dolphins)) and incomplete lineage sorting, and thus support T. truncatus and T. aduncus belonging to the same genus. This likely explains why Atlantic spotted dolphins can mate with both species of bottlenose dolphins.
## Description
The coloring of the Atlantic spotted dolphin varies enormously as it grows, and is usually classified into age-dependent phases known as two-tone, speckled, mottled, and fused. Calves are a fairly uniform gray-white, with one or no spots. When they are weaned, speckling occurs, typically between 3 and 4 years and lasting for an average of 5 years. A juvenile is considered mottled when it develops merging gray and white spots on the dorsal surface and black spots on the ventral surface. This usually happens between age 8 or 9. A fused pattern is reached when dark and white spots are on both the ventral and dorsal sides. As the animal matures, the spots become denser and spread until the body appears black with white spots at full maturation.
In comparison to other dolphin species, the Atlantic spotted dolphin is medium-sized. Newborn calves are about 35–43 in (89–109 cm) long, while adults can reach a length of 2.26 m (7 ft 5 in) and a weight of 140 kg (310 lb) in males, and 2.29 m (7 ft 6 in) and 130 kg (290 lb) in females. Compared to the much smaller pantropical spotted dolphin, the Atlantic spotted dolphin is more robust. It shares its habitat with the pantropical spotted dolphin and the bottlenose dolphin.
The species exhibits a range of about ten different vocalizations, including whistles, buzzes, squawks and barks, each corresponding with different behaviors.
## Behavior
Atlantic spotted dolphins are extremely gregarious animals, which gather in complex social groups of 5-15, often in mixed groups with common bottlenose dolphins. They are fast swimmers and known for their bow-riding and long, shallow leaping behaviors. Their mating system consists of one male mating with several females, and the pod is highly protective over pregnant females. The dolphin’s gestational period is ~11 months, and the mother cares for its calf for up to 5 years, with the help of the rest of her group.
These animals are cooperative hunters that hunt in groups at night. They strategically encircle their prey, which consists mostly of small fish, benthic invertebrates, and cephalopods such as squid. They can dive to depths of up to 60 m (200 ft) and can stay beneath the surface for up to 10 minutes at a time.
## Population and distribution
The species is endemic to the temperate and tropical areas of the Atlantic Ocean. It has been widely observed in the western end of the Gulf Stream, between Florida and Bermuda. Off the Bahamas, tourism industries to swim with dolphins are available. It is also present in the Gulf of Mexico. More infrequent sightings have been made further east, off the Azores and Canary Islands. Northerly sightings have been made as far north as Cape Cod across to the southwestern tip of Spain. They are certainly present further south, too, as far as Rio Grande do Sul in Brazil and across to west Africa, but their distribution is poorly understood in these areas.
About 20 years ago, only about 80 dolphins were in the Bahamas. Now, almost 200 dolphins are found there. On account of their similar appearance to other dolphins in their range, it is difficult to be sure of the Atlantic spotted dolphin's population. A conservative estimate is around 100,000 individuals.
## Human interaction
Some Atlantic spotted dolphins, particularly some of those around the Bahamas, have become habituated to human contact. In these areas, cruises to watch and even swim with the dolphins are common. They are usually not held in captivity.
Atlantic spotted dolphins are an occasional target of harpoon fishermen, and every year some creatures are trapped and killed in gill nets, but these activities are not currently believed to be threatening the survival of the species. This species lives in the mesopelagic layer of the ocean. These dolphins are not threatened by extinction, however, commercial trade may affect their evolution and sustainability. Sometimes they are killed by harpoons off St. Vincent.
## Conservation
The Atlantic spotted dolphin is included in the Memorandum of Understanding Concerning the Conservation of the Manatee and Small Cetaceans of Western Africa and Macaronesia. They are also marked as Least Concern in the Conservation Action Plan for the World's Cetaceans.
https://en.wikipedia.org/wiki/Atlantic_spotted_dolphin
Happiness 4-25
Posted on April 25, 2017 by winslow eliot
embracing danger with complete stillness.
http://winsloweliot.com/2017/04/happiness-4-25-7/
Editor's Note: When was the last time you used dried fruit in a recipe? If you've only ever snacked on dried apples, berries, and other fruits, then get ready to try something different when you make this recipe for Rustic Dried Fruit Tart! This free-form tart is easy to make and can be adjusted to fit your personal preferences. You'll love that this dessert recipe only needs four ingredients, many of which you may already have in your pantry and refrigerator. The next time company drops by unexpectedly, don't panic. Instead, whip up this delicious tart! Switch-in suggestions as well as tips on how to make the crowning touch for this tart are included below the recipe.
Makes: 1 tart; serves 4 to 6
Cooking Method: Baking
Cost: Inexpensive
Occasion: Buffet, Casual Dinner Party
Recipe Course: Dessert
Five Ingredients or Less: Yes
Meal: Brunch, Dinner, Tea
Taste and Texture: Buttery, Crisp, Fruity, Sweet
Type of Dish: Dessert, Tart
Ingredients
- ½ cup confectioners sugar, divided
- 2¼ cups (10½ ounces) lightly packed mixed dried fruit, coarsely chopped
- ½ cup orange marmalade
- 1 frozen puff pastry sheet (about 9 ounces), thawed
Instructions
- Heat the oven to 400 degrees F. Line 1 rimmed baking sheet with a nonstick liner. Set aside 2 tablespoons of the confectioners sugar.
- Put the dried fruit, 2/3 cup water, and the marmalade in a medium saucepan and cook over medium heat until boiling. Reduce the heat to low, cover, and simmer, stirring occasionally, until the fruit is tender and the liquid is reduced and thickened, about 10 minutes. Set aside to cool completely or transfer to a bowl, cover, and refrigerate for up to 3 days.
- Arrange a large piece of plastic wrap on the counter. Sprinkle generously with some of the remaining confectioners sugar. Using a rolling pin, roll out the puff pastry sheet, sprinkling the top and bottom often and generously with the sugar to prevent sticking, into a 10-inch square, then trim into a 10-inch round. Move to the prepared baking sheet.
- Spoon the cooled dried fruit into the center of the pastry, mounding it slightly and leaving about a 2-inch border. Fold the edges over the filling, pleating the pastry as you go around the filling and leaving the center of the fruit uncovered. Sprinkle the remaining sugar over the pastry.
- Bake until the pastry is golden brown, about 35 minutes. If the pastry puffs up, prick with a fork or knife tip. Set on a rack to cool slightly before serving.
Switch-Ins
- In place of the mixed dried fruit, switch in 1½ cups lightly packed dried apricots, coarsely chopped.
- In place of the orange marmalade, switch in ½ cup seedless raspberry jam.
Gussy It Up
Serve with Boozy Hard Sauce or vanilla ice cream:
Combine 4 ounces room-temperature unsalted butter and 1 cup confectioners’ sugar in a medium bowl. Beat with an electric mixer on low speed until the ingredients are blended. Add 1 tablespoon brandy or dark rum or ¾ teaspoon pure vanilla extract or 1 teaspoon finely grated citrus zest and a pinch of table salt. Increase the speed to medium and beat until smooth and fluffy. Serve immediately or cover and refrigerate for up to 2 weeks. The sauce can be served cold or at room temperature. Makes about 2/3 cup.
2010 Abigail Johnson Dodge
Cmaser 9195479
Dec 26, 2012
This was a great dessert! I used dried fruit from a local market, and added a handful (1/4 cup?) of fresh cranberries. Highly recommend this recipe!
https://www.cookstr.com/Dessert/Rustic-Dried-Fruit-Tart
While failure isn’t something any new entrepreneur or small business owner wants to even consider a possibility, for many startups it’s not only a possibility, but an eventual reality.
According to the United States Bureau of Labor Statistics, 20% of businesses will fail in their first year and 50% by year five.
Those are some pretty discouraging figures.
On the flipside though, that also means a whopping 80% of all businesses starting out every year will survive through year one. And of those businesses that make it through their first year, 62% will still be standing at the end of their fifth year. Put simply, for every business that makes it through year one, three out of five will go on to make it past the five year mark.
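Quick math on that claim, using the BLS figures quoted above (80% of new businesses survive their first year; 50% are still around at year five):

50% ÷ 80% = 62.5%, which is roughly three out of every five year-one survivors.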
And for those businesses that make it through their first five years, the odds of success get even better with almost 70% of those remaining making it to 10 years.
When you look at the figures from a success standpoint rather than failure, it doesn’t look so grim does it? And these survival rates have remained consistent over time (based on the first BLS private sector survival rates data from 1994), which means that businesses continue to succeed or fail regardless of what’s going on in the economy. Even during the financial crisis of 2008, 75% of businesses started that year remained in business a year later, and only an extra 2% dropped off at year five. Survival rates are also rather similar across industries as well, so it doesn’t appear that industry factors have much to do with the success (or lack thereof) either.
Perhaps rather than looking at it from a glass half empty perspective, we should be taking a more optimistic outlook, and rather than worrying about failure, learning what we can to avoid it.
Unless of course, you’re a startup with the goal of becoming a unicorn (a company valued at $1 billion or more). Unfortunately for you, your odds of success are less than 1%.
So why do new businesses fail? According to CB Insights who analyzed over 100 self-reported startup failures, the number one reason for failure was a lack of market demand. This accounted for a huge 42% of all failures. In second place was lack of capital at 29% and interestingly, a close third at 23% was not having the right team. You can check out the full list in their top reasons for startup failure article, which includes a handy little infographic outlining each reason and the correlating percentage.
But for now, let’s consider the top three reasons: 1) not enough demand for your product, 2) running out of money and 3) not choosing the right people to build the business. All of these things seem like pretty fundamental issues that a good entrepreneur would see coming from a mile away, but it’s even for savvy entrepreneurs, it’s easy to get swept up in the idea. They believe so much in their business offering that they fall into the “if you build it, they will come” trap, convinced that once people find out about their new/better/different product or service, they’ll be lining up with their wallets open. They convince themselves that the next big investor is around the corner or that despite management issues, the product can still sell while their internal problems are being resolved.
In reality, these are major problems that could have been avoided with the right planning on the front end. In fact, you can look at all of these reasons for failure as really boiling down to one single reason: the result of poor planning. This is why business experts continuously harp on about the fundamentals of knowing your market and building a solid business plan. You simply cannot expect to succeed without either one.
Just as many businesses fail, however, and most often due to poor planning, it’s also important to note that not all business ventures end because they were a failure. In some cases, it may be a case of not successful enough versus failing. According to the 2015 Global Entrepreneurship Monitor (GEM) survey, around a third of businesses in the developmental phase across 60 countries cited unprofitability as their exit reason, and while that could mean these businesses were losing money – i.e. failing – it could also mean they simply weren’t profitable enough to bother continuing. Other reasons for discontinuance included lack of finance, sale of the business, following new opportunities or retirement, and on a small but notable scale, bureaucracy.
It bears keeping in mind that not all businesses shut down after a year, five or even ten, because they are failing. Business owners may move on to a new venture or simply decide the payoff isn’t worth the effort they’re putting into their business. Others may have made their fortune and gone off to happily travel the world in early retirement.
But for those businesses that do shut their doors due to failure, much can be learned. CB Insights offers a fantastic foray into this very topic with their recent 253 Startup Failure Post-Mortems. There are also loads of tips and information about how to build a successful startup available right here in the daviddisiere.com Entrepreneur Life archives including Common Startup Mistakes to Avoid, Learning From Your Mistakes, Tips For Success and Best Business Practices.
https://daviddisiere.com/entrepreneur-life-why-businesses-fail/
In seventh grade I gave a presentation about being a neonatologist. I loved the idea of working with babies, helping to save lives.
So I found a white coat, borrowed a stethoscope from lord knows where and snagged a baby doll from my basement for the perfect prop. I loved dressing up and teaching my class what it meant to be a doctor, and how excited I was to become one someday.
Yikes.
In retrospect, my middle school alter ego also wanted to be a marine biologist, travel agent and broadway actress. She was all over the place.
The experience of speaking so passionately about a job reminds me how often we strive to land our “dream job” but more importantly it makes me consider what we do once we find it.
If you’re a millennial, you’ve grown up in the era of entrepreneurship around every corner. Social media has made it possible for anyone to add the tagline ‘entrepreneur’, ‘lady boss’ or ‘self made’ to any Instagram or Twitter profile, creating an unfair and inaccurate reflection that we all must strive for this goal.
But what if we like our jobs?
In the age of self made and self promoted, it can often be overwhelming to simply enjoy the job you have now. We find ourselves trying to justify the satisfaction we have in our current position, telling ourselves its just something to tide us over until we launch our own thing.
Recently I have felt the urge to do more, not because I dislike my job, but simply because I want more for myself.
The myth is that we must choose.
The life of an entrepreneur or that of a pencil pusher. What if there was a happy medium that would allow us to explore our desires with the security of a job we enjoy?
Growing up with parents who decided to follow the entrepreneurial path and leave the office space behind has often made it difficult for me to feel happy where I am.
Much like the pressure we feel as kids to be just like the athletes or musicians our parents once were, we feel that same pressure as adults: to follow the path that was shown to us our entire lives.
It comes down to this. It’s ok to like your job and still want more. Feeling guilty for being happy in your role is the most ass backwards way of thinking about life, and yet it makes complete sense when we think about the world around us. A world where being an entrepreneur means glamour and riches, but we forget that hustle, sweat and a whole lot of sacrifices come along with those rewards.
There were times when my Dad, the owner of his own construction company, would come home every day at 10 or 11 pm. No weekends off, up and out the door at 6 am the next day. My Mom, a career coach and speaker, never stops working. Her thoughts are always focused on the next client call, the next speaking engagement, the next way to bring money in the door.
For them, there is never a guarantee.
But the freedom to be at all of my performances, drop everything to come help me move, or take me to the airport in the afternoon for a flight to Spain? All things they were able to do.
It’s a balancing act, having an entrepreneurial spirit while also seeking security and a dependable income. I’m working on ways to walk that fine line, letting my creativity and passion for self-employment find space alongside my need for the sense of belonging and camaraderie that comes with a workplace.
I wonder how many of us feel the same way. Overwhelmed by the pressure we feel to make our own way, forgetting that it’s ok to enjoy where we are and find ways to nurture our passions in the time we would be watching Netflix instead.
If you’re someone struggling to find a way for the entrepreneur in you to be happy playing the long game, know that you are not alone and in fact, you’re in great company. | https://meriksenrusso.com/2019/04/02/balancing-your-inner-entrepreneur/ |
Watch Farmtastic Fun (2019)
You are watching the movie Farmtastic Fun (2019), produced in the USA, in the Fantasy category, with a duration of 70 min, broadcast at YESMOVIESHD.INFO. The film is directed by Pippa Seymour and stars Alfred Hill and James Kane. Get ready for a Hee-haw, Moo-tastic, Baa-mazing sing and dance along in farmtastic fun town! Come on down to the farm where horses, cows, chickens and pigs all love to...
Director: Pippa Seymour
Genre: Fantasy
Country: USA
Duration: 70 min
| http://9jto.us/watch/farmtastic-fun-2019-online-free-yesmovies.html
Welcome to Beverly Oaks. We hope your residency here is a good one. The majority of the homes in Beverly Oaks are owner-occupied. We want all renters to feel welcome and at home in the community.
As a temporary resident, we ask that you become familiar with and abide by the association rules in order to stay in good standing with your neighbors.
The Association maintains individual front yards (mowing), the entrance gates and entrance landscape, the exit roadway & traffic spike system, the cul-de-sac landscaping, exterior painting, and street maintenance, and enforces architectural standards and vehicle/parking control. The operating body of the Association is the Board of Directors.
The Association owns the streets, water lines, sewer lines, and the concrete perimeter fences associated with your home/lot. Please contact the homeowner for all repairs.
All Beverly Oaks residents are required to provide current contact information. Please register or update your information as it changes.
The Community Mail and Package Delivery Center is located at 2023 Wilshire (middle door). The center is open in the late afternoon and early evening and can be accessed after hours using the keypad. Please contact the Association for the current pass code.
All resident renters are required to have renters insurance (download information) and provide proof to the Homeowners Association. Please fax to (800) 639-2434.
No boats, trailers, trucks, campers or commercial vehicles shall be parked or maintained in the Properties. Commercial vehicle is defined as any vehicle that has a commercial registration, or displays signage. This restriction shall not restrict trucks or commercial vehicles making pickups or deliveries to or in the Properties, nor shall this restriction prohibit trucks or commercial vehicles within the Properties which are necessary for the construction of residential dwellings or maintenance of the Common Properties.
All streets of the subdivision are twenty-two (22) foot wide fire lanes and are marked as such. Vehicles parked within the fire lanes shall be subject to ticketing and/or towing.
Visitor parking spaces (24 total) are marked as such and shall be used only by visitors, defined as someone who is on the property fewer than 10 days a year. Parking is for a maximum of 12 hours. Vehicles parked in violation of the rules shall be subject to ticketing, towing, or booting (approximately $300 to receive a release).
No inoperable motor vehicles, or vehicles without current registration and state inspection stickers, are permitted on The Property unless stored in the garage of a Living Unit. Vehicles cannot be repaired on The Property, unless repaired in the garage of a Living Unit with the door closed.
All residents shall assume full responsibility for any personal injuries or property damages caused by any animals which are in their personal care.
All responsibility of animals of visitors shall rest with the Owner or Resident of the living unit visited.
All living units shall have a working lamp post in the yard that is illuminated from dusk to dawn.
No garbage, refuse, rubbish or cuttings shall be deposited on any street, road or Common Properties, nor on any Lot unless placed in a manner that complies with the collection requirements of the City of Irving. Trash should not be put out earlier than 8 PM of the day prior to collection, with the exception of brush and limbs, which shall not be put out earlier than 5 days before collection. Trash collection is Monday (trash) and Thursday (trash, recycle, brush). For more information call the Public Health and Environmental Services at 972.721.2346.
Units are zoned and intended for residential use only. Commercial activity such as business offices, storage of inventory, garage workshops, and the storage of commercial tools or vehicles shall not be permitted.
Residents shall exercise reasonable care to avoid making or permitting to be made loud, disturbing or objectionable noises. This shall include use of any musical instrument, power tool, radio, sound equipment, television or any other equipment which would cause or tend to cause any objectionable noise or would interfere with reception of television or radio signals.
Any window coverings which are visible from the common areas shall be neutral colors (preferably beige or white). No window shall be covered with aluminum foil or similar material.
No driveways, entrance or passageways which are common properties shall be obstructed or used by any Owner or lessee for any purpose other than ingress and egress to their residence.
No swing sets, trampolines, basketball goals or other play equipment shall be permanently located anywhere but at the rear of the house.
No structures that will be visible to an adjacent Lot are permitted without the consent of the immediate neighbors.
No yard signs shall exceed a customary and conventional size such that they detract from the overall appearance of the Property. Only signs for the purpose of selling, renting, or protection of the house and placed on the ground are permitted. Campaign signs shall be limited to one per candidate per Lot and removed the day after voting is complete.
No clotheslines, drying yard, service yards, woodpiles or storage areas shall be so located as to be visible from a street or Common Properties.
All exterior lighting installed on any Lot shall be either indirect or of such controlled focus and intensity as not to disturb the residents of adjacent properties.
All leases or rental agreements for Living Units shall be in writing and specifically subject to this Declaration, the Association Bylaws and Rules and Regulations. No Living Unit may be leased or rented for a period of less than six months. A copy of the residential lease must be provided to the Association upon request. | http://beverly-oaks.org/renters.htm
Chaired by Deputy Prime Minister Hambardzum Matevosyan, the second meeting of the policy dialogue series of the "GREEN Armenia" platform was held today. The Resident Representative of the United Nations Development Program in Armenia, Natia Natsvlishvili, the head of the European Union delegation in Armenia, Ambassador Andrea Wiktorin, Country Manager of the World Bank for Armenia Carolin Geginat, representatives of a number of concerned government institutions, as well as private sector, scientific institutions, civil society and development organizations participated in the discussion.
During the meeting, participants discussed the framework proposed to Armenia for green economy policy, strategic planning and implementation.
Welcoming the participants of the meeting, the Deputy Prime Minister emphasized the importance of the involvement of all the stakeholders - representatives of international and non-governmental organizations, academia and the private sector - in forming a joint agenda for the transition to a green economy. "Today's meeting aims to transition to more substantive discussions aimed at the need to increase the resilience of the economy in the face of global and local climate challenges. Our commitments in this direction are clearly defined in the Government's 2021-2026 Action Plan, in which the transformation of the economy in accordance with the low-carbon energy reality and the provision of the prerequisites for the long-term preservation of natural resources in the economic cycle are defined as important preconditions for sustainable development", the Deputy Prime Minister noted.
UNDP Resident Representative in Armenia Natia Natsvlishvili highly appreciated the platform's role in the process of developing better and "green" policies for Armenia and its population. "Today, we are one step closer to realizing that promise by analyzing the essential elements of green economy policies and trying to turn high-level policy statements, such as the green transition vision and the country's strategic goals, into operational-level policies," Natsvlishvili emphasized, adding that Armenia is not alone in this process.
The participants of the meeting also discussed the possible main directions of Armenia’s green and sustainable economic development strategy and its road map, which is currently being developed by the Ministry of Economy of Armenia in cooperation with other state bodies and international and local stakeholders.
According to the head of the EU delegation in Armenia, Ambassador Andrea Wiktorin, the international experience, especially the European model, has proven that the protection of the environment does not contradict the economic well-being, on the contrary, these two directions can complement each other. "The transition to green economic management can create many opportunities for businesses and job seekers, as well as increase the resilience and competitiveness of Armenian products and services in international markets," said the head of the EU delegation, reaffirming the EU's readiness to actively participate in the dialogue on sectoral policies in Armenia and coordination of donors.
During the meeting, separate thematic discussions on sustainable finance, green taxonomy, circular economy and current initiatives in Armenia took place.
Noting that Armenia is a country vulnerable to climate change, Country Manager of the World Bank for Armenia Carolin Geginat emphasized the need to transition to a greener economy model, which, taking into account this vulnerability, will also create significant growth opportunities. "The vision of Armenia's green economy, which was discussed today, is the first important step towards realizing the potential of green growth," Geginat emphasized.
During the discussions, the participants of the meeting emphasized the need to create a favorable environment for the protection of natural ecosystems, the development of human capital and the promotion of greater investments in the transition to a green economy. | https://www.gov.am/en/news/item/10235/ |
Welcome to Identosphere Weekly Digest!
Consider pledging a small amount each month via Patreon …or contact Kaliya for PayPal
Read previous issues and Subscribe : newsletter.identosphere.net
Content Submissions: newsletter [at] identosphere [dot] net
Upcoming
HM Government of Gibraltar DLT seminar by Paul G Astengo 11/01
Announcing the OpenID Foundation Hybrid Workshop at Visa 11/14 OpenID
Internet Identity Workshop #35 11/15-17, Mountain View CA
The EEMA Open SSI Forum 11/17 Berlin
[Hiring] Open position: Projects lead MyData Submit your application latest by 20 Nov!
Standards Watch
Orie Steele @OR13b Seeking feedback:
I am trying to improve the out of the box semantic support in Verifiable Credentials v2, if you have time to comment, I greatly welcome your feedback on: In support of adding @vocab to Verifiable Credentials V2
SuperSkills! A Mobile App Use Case for DIDs & VCs LearnCard
SuperSkills! is a learning game ecosystem and digital wallet for kids, developed in partnership with LEGO Foundation, to showcase W3C's Universal Wallet, a packaging of draft standards and open source frameworks
FIDO Conference Authenticate
[Event Report] Day 1 Recap • Day 2 Recap • Day 3 Recap
FIDO and the importance of data signing Lockstep
punchline of the talk “Thus the FIDO standards are adjacent to the most important data security functions today. If an end user device can perform FIDO authentication, then it should only be a matter of programming to extend that device to hold and present verifiable credentials.”
Big Picture
W3C TAG Ethical Web Principles
We recognize that web technologies can be used to manipulate and deceive people, complicate isolation, and encourage addictive behaviors. We seek to mitigate against these potential abuses
IDTech framing getting good support
Tim: IDtech is a great label! Much less charged than Web3, Web5 or SSI. I am also starting to use the more boring term of ‘user-controlled data.’
Riley: Yes, and I think they serve different purposes
User-controlled data / self-sovereign identity = a concept or philosophy
Web3 / Web5 = technologies (blockchain, DIDs, NFTs, etc)
IDtech = a product. Something a user can download and/or pay for
The internet’s missing layer Hard Yaka
whether or not Galloway realizes it, what he’s describing is the emerging self-sovereign identity movement and developments around verifiable credentials. You don’t want to trust Meta, but more importantly, you don’t have to when you own your own portable, digital identity.
Anil John Writes
Web3 has an Identity Problem: Building Decentralized Identity by Jelena Hoffart at 9Yards Capital provides a VC's perspective on the applicability of W3C Decentralized Identifiers and W3C Verifiable Credentials to the Web3 ecosystem and its importance.
The Myth of The Infrastructure Phase USV
what we hope to have shown is that in other platform shifts, we are able to build the first few apps before there are great tools
Community Annoucements
The Human Colossus Foundation Releases “Overlays Capture Architecture”
OCA is providing a critical tool to enable “Objectual Integrity”, an essential Dynamic Data Economy (DDE) Principle of the DDE Trust Architecture Semantic Layer (Layer 1), by providing an interoperable solution for data capture and exchange, agnostic to existing data models and representation formats.
Announcing the Verifiable Credential Selector TBD
The Verifiable Credential Selector is an open source widget that makes it easy for you to gain possession of your data from FinTech apps.
Transmute Request: Test Our Closed Beta
The Transmute Verifiable Data Platform (VDP) is now available for limited beta testing.
We are looking for users with interest or experience in digitally securing a supply chain ecosystem using cryptographically verifiable data.
Reports
MEF Market Review: Personal Data and Identity Meeting of the Waters Michael Becker, Identity Praxis
Regulatory, technological, cultural, and economic factors are shifting the context of personal data and identity: the what, when, why, who, where, and how. [download here]
HCF announces Dynamic Data Economy v1.0 Human Colossus Foundation
a trust infrastructure that preserves the structural, definitional, and contextual integrity (DDE Principle 1) of any object and their relationships in the Semantic domain, the factual authenticity (DDE Principle 2) of any recorded event in the Inputs domain, and the consensual veracity (DDE Principle 3) of any purpose-driven policy or notice in the Governance domain. As a result, actors in DDE ecosystems will ultimately have the transactional sovereignty (DDE Principle 4) to share accurate information bilaterally in the fourth domain, the Economic domain.
The Human Colossus Trust Infrastructure Stack Version 1.0 - "Infrastructure" versus "Security" incrementally through the core data domains as a tool to define and describe what contributes to creating a trust infrastructure for access and use of accurate data.
Public Sector
Why the UK banking sector and FCA are banking on digital IDs in 2023 IDnow
Koreans to have access to blockchain-powered digital IDs by 2024 Cointelegraph
Foundational ID: Restoring the Chain of Trust for Identity DIACC
Inclusive and Ethical Uses of Digital Identity DigitalID New Zealand
Read the Inclusive and Ethical Use of Digital Identity (IEUDI) Working Group’s research and discussion paper on the existing mahi both in New Zealand and around the world in the areas of inclusivity and ethics in digital identity.
The HTF partners with the DIACC on recommendations for a trusted and safe adoption of Digital Identity
The Human Technology Foundation (HTF) is excited to announce its partnership with the Digital ID and Authentication Council of Canada […] to develop a white paper for the fall of 2022 that will provide recommendations to Canadian and European policymakers on ongoing projects, with the common goal of unlocking secure, equitable access to the global digital economy.
DIDAS Submits its Position Paper re. the draft of E-ID Law DIDAS Swiss
DIDAS looks forward to the next steps and recommends a priority treatment in the Federal Council. Furthermore, we recommend pushing for technical proof of concepts and minimal viable products (MVPs) involving all stakeholders and civil society in order to make the best use of the planned implementation timeline for the political decision.
The Power of e-ID ecosystems DIDAS
Use Cases
Do you know the ROI of implementing digital signatures in your processes? ValidatedID
How Self-Sovereign Identity can be integrated into KYC processes Gataca.co
Biometrics with self-sovereign identity combo pitched to solve authentication challenges Biometric Update
Position Paper: Of Gold Standards, Blockchain & The New Learning Economy LearnCard
[clip] dhiway @dhiwaynetworks · Oct 19
#unfakeablecredentials from http://studio.dhiway.com are digitally verifiable credentials which businesses need today!
[podcast] Hear why SAP has issued verifiable credentials to global partners Velocity Network
recently used verifiable credentials to recognize SAP Success Factor’s ‘partner excellence’ at an event featuring nominated organizations like Accenture, Beamly, Deloitte, EY and PwC.
Explainers
The building blocks of Self-sovereign Identity (SSI) Crypto Huduga
Selective Sharing GlobaliD
Decentralized Identifiers (DIDs): The Ultimate Beginner’s Guide 2022 Dock
Self-sovereign identity is making data personal TechMonitor - Countries around the world are adopting their own forms of SSI. Questions about interoperability, however, remain unresolved
[glossary] The ABC's of Decentralized Identity Spruce Systems
[slide deck] The Future of Open Recognition Technology
Open Recognition Challenges
1. Building the World Wide Web of Recognition
2. Recognition should be visible immediately.
3. Pre-defining achievements locally is not necessary to begin recognizing them.
4. Recognition happens in conversations.
5. Recognition builds a community and helps it understand itself.
Nice diagram from Extrimian.
Who Develops Decentralized Ecosystem Governance? Indicio Tech
The development process is collaborative. There will, inevitably, be specific issues and technical details revealed in development that the rule-makers didn’t consider. The result of this collaborative process is a machine-readable governance file.
[Podcast] Converging Towards a Common Digital Trust Protocol (with Drummond Reed) Northern Block
Wallets vs Agents – their differences, their relationship and how agents will use more and more contextual intelligence to help you make decisions according to your preferences.
Can non-OS digital trust infrastructure providers compete with the device OS providers? (e.g., Apple owns the OS for mobile/desktops/tablets/smart watches)
Comparing DIDComm to NFC – if NFC really facilitates security and trust for close distance, do the combinations of digital wallets, digital agents and protocols (like DIDComm) do the same for trust at distance?
Trust Spanning Protocol – establishing authentic connections where both parties can authenticate each other (using the same hourglass model as TCP/IP). What are the architectural requirements for this protocol? And how can various protocols (e.g., DIDComm, KERI) converge into a trust spanning protocol?
Decentralized Web
BlueSky: The AT Protocol
Bluesky was created to build a social protocol. [...] today we’re sharing a preview of what’s to come. ADX is now the “Authenticated Transfer Protocol” — or, more simply, the “AT Protocol.”
[thread] Robert Mao (@mave99a) tweets: I just read through the developer document of @jack's new social network @at_protocol by @bluesky. Here how it works in high-level
What is ID Karma?
And this MVP is dedicated to one feature: ID Karma Social Network. I hope it sounds weird to you, cause why the hell do we need one more social network at all? Let's discover it in the next article...
The Highly Anticipated Impervious Browser 😈 Impervious.ai [tech]
October 19, 2022 – Impervious Technologies Inc. (Impervious.ai) today released the Alpha version of the highly anticipated Impervious Browser.
Organization
The DIACC partners with HTF on recommendations for a trusted and safe adoption of Digital ID
[video] ToIP Reference Architecture explained by Wenjing Chu of Futurewei Technologies
From Structure to Governance, everything you need to know about Metadium to grant trust based on VCs (Verifiable Credentials) to various services. Metadium's DID authentication system has demonstrated its capabilities in various government projects and corporate partnerships with solid performance. [docs]
Web 3
The Month of DAO: What to Expect ConsenSys
Friends with Benefits – the community that uses crypto, not a crypto community Defiant - DAO’s as cities, not companies.
More Than Half of Ethereum Network is Excluding U.S.-Sanctioned Wallets Defiant - ‘Sad Milestone in Censorship’ Pressure on Flashbots and MEV Players
[Podcast] Ethan Buchman & Zaki Manian: ATOM 2.0 – Deep Dive Epicenter
Atom 2.0 has sparked intense debate in the Cosmos community and throughout the blockchain ecosystem. It proposes three pillars for the Cosmos Hub: Interchain Security, the Interchain Allocator, and the Interchain Scheduler.
Pseudonymity in the workplace Noxx
Talents verify themselves via KYC. We use Stripe Identity for this purpose. During the process, we ask them to provide government documents and selfies to prove that the Talent exists. All data is redacted and no logs are stored on Stripe after the verification completes.
Cardano lets you own your identity Cexplorer
Lace will be integrated with a [...] (SSI) platform called Atala PRISM, which uses the Cardano blockchain to function. | https://newsletter.identosphere.net/p/identosphere-105-bluesky-authenticated
The course is mainly intended to strengthen the students’ mathematical thinking, and their ability to apply such thinking in applications, and in their continued studies. The focus is not on mathematical knowledge in the traditional sense, but on the often implied abilities needed to effectively be able to apply the mathematics you already know, and efficiently be able to learn new mathematics. The most important parts are mathematical reasoning, problem solving and modelling. Important aspects such as using the computer as a part of your mathematical thinking, and to be able to communicate with and about mathematics are also integrated in the course.
The course also introduces, in a natural way, basic mathematical knowledge useful in computer science and other areas, including a selection of content from the Swedish upper secondary courses Mathematics 4 and 5.
By developing the ability to think mathematically, the course complements other more traditional courses in mathematics, and by providing the student with experience of different areas of application, the gap between mathematical theory and relevant applications is bridged.
The core of the course is a number of carefully selected problems, used as starting points for the students’ own learning, where students, by working in an investigative way, develop their own abilities. We also have lectures which provide a broader understanding, follow-up and perspective. The problems illustrate many different areas of application, and their level of difficulty is adapted to efficiently practice the abilities to think and work mathematically in different situations.
In connection with the exercises, we also discuss different problem solving strategies, reflect on solutions, and compare different ways to solve the same problem. We also give an orientation about the role of mathematics in various applications and demonstrate the importance of mathematical computer models.
Requirements: General entrance requirements for university studies and the Swedish course Mathematics C or equivalent. | https://utbildning.gu.se/education/courses-and-programmes/course_detail?courseid=DIT025 |
A significant obstacle to the realization of the free and equal status of all citizens within democratic societies is the inheritance of wealth -- or more precisely, the intergenerational accumulation and transfer of wealth within families. The extreme wealth inequality caused by flows of inheritances can render a de jure democratic society a de facto aristocracy, wherein individuals' life-prospects are determined largely by the economic class into which they are born. Because of this, liberal egalitarian justice demands limits on inheritances. John Rawls, for instance, recommends that intergenerational bequeathments and gifts be taxed, so that individuals can acquire only limited amounts of wealth through such processes over the courses of their lifetimes.
Rawls's treatment of inheritance is quite brief, and there has been little discussion of the topic by other egalitarian philosophers over the two decades since the publication of Justice as Fairness: A Restatement. This is surprising, given our "new Gilded Age" of extreme wealth inequality. Thankfully, Daniel Halliday's excellent book helps to fill this lacuna. Of special philosophical interest is Halliday's attempt to integrate elements of "luck egalitarianism" within a "social egalitarian" framework. This endeavour is both misguided and unnecessary -- or so I shall suggest below. I nonetheless recommend this book enthusiastically to anyone interested in questions of distributive justice. Hopefully it will prompt further examination and debate of this important topic.
I.
The first chapter of Halliday's book provides an overview of its main themes and theses. A primary concern is "economic segregation." In an economically segregated society: (a) individuals belong to groups that are distinguished by different levels of wealth (say, the "top 1%" versus the "next 19%" versus the next four quintiles); (b) there is little movement by individuals between groups during their lives; and (c) members of different groups enjoy different levels of opportunities (educational, professional, etc.) and political power. Liberal egalitarian justice requires the elimination, insofar as it is feasible, of entrenched class hierarchy. The arbitrary inequalities among citizens based on the economic groups into which they are born violates liberal justice in much the same way as group-based inequalities based upon race, sex, sexual orientation, and religion.
The book also aims to integrate two rival views of egalitarian justice. Halliday's liberal egalitarian framework is novel in that it is a social egalitarian one that purports to incorporate certain core luck egalitarian ideas. He claims that "neither approach works especially well if used alone but that they work well when combined in the right way" (p .5). (As I explain later, I do not think that this integrative project is successful; however, I also do not think it is necessary for Halliday's other main positions.)
A third key claim of the book is that liberal egalitarianism can best address the problem of inherited wealth via what Halliday refers to as the "Rignano scheme" (this scheme is drawn from the early twentieth-century work of the Italian theorist Eugenio Rignano). According to the Rignano scheme, "inheritance can be taxed at a greater rate when it rolls over -- when it gets passed down more than once" (p. 7). What this means, roughly, is that if Albert creates 100 dollars of wealth during his lifetime, he should be able to bequeath most of that to his daughter Beth. However, if Beth retains most of that wealth, say 80 dollars, much of that (perhaps all) should be taxed away if she bequeaths it to her son Cassius (so Cassius would receive little or nothing in second-generation inheritance). The low tax rate imposed on only the initial transfer has two justifications. First, it gives Albert an incentive to work hard (so that he can pass on some wealth to Beth), but it also creates an incentive for Beth to avoid idleness (since she must create new wealth if she wishes to bequeath any to Cassius). Second, the Rignano scheme may encourage the development and dispersal of new wealth throughout the population, thereby fostering the growth of the middle-class (bequeathments of "old money," in contrast, have no similar positive effects, and hence can be taxed away).
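To make the mechanics concrete, here is a minimal sketch of how a Rignano-style schedule could be computed. The 10% first-transfer and 90% roll-over rates, and the clean split between self-made and previously inherited wealth, are illustrative assumptions rather than figures proposed by Rignano or Halliday; the numbers follow the Albert, Beth and Cassius example above.

```python
# Minimal sketch of a Rignano-style inheritance tax: wealth a person created
# during their own lifetime is taxed lightly when first bequeathed, while
# wealth they themselves inherited is taxed heavily when passed on again.
# The 10% / 90% rates and the clean split between "self-made" and "inherited"
# wealth are illustrative assumptions, not figures from Halliday or Rignano.

FIRST_TRANSFER_RATE = 0.10   # wealth being bequeathed for the first time
ROLLOVER_RATE = 0.90         # wealth that has already been inherited once


def bequeath(self_made: float, previously_inherited: float) -> dict:
    """Return what the heir receives and the tax collected on the transfer."""
    tax = self_made * FIRST_TRANSFER_RATE + previously_inherited * ROLLOVER_RATE
    received = self_made + previously_inherited - tax
    # Everything the heir receives now counts as "previously inherited" wealth:
    # if they pass it on without creating anything new, it faces the higher rate.
    return {"received": received, "tax_paid": tax}


# Albert created 100 units of wealth and bequeaths it to Beth.
first = bequeath(self_made=100.0, previously_inherited=0.0)   # Beth receives 90

# Beth consumes 10, creates nothing new, and bequeaths the remaining 80.
second = bequeath(self_made=0.0, previously_inherited=first["received"] - 10.0)

print(first, second)   # Cassius receives only 8 of the original fortune
```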
Chapter two discusses the views of early liberal writers on inheritance. The positions of John Locke, Adam Smith, Thomas Paine, William Godwin, and John Stuart Mill are outlined and evaluated. Chapter three focuses on Mill's utilitarianism and the Rignano scheme, as formulated primarily in Rignano's The Social Significance of Death Duties. Halliday explains that Rignano was working within Mill's utilitarian framework, and recommends that we repurpose the Rignano scheme for liberal egalitarianism. Liberal egalitarianism is concerned with securing and maintaining the free and equal standing of all citizens over time. This requires preventing or breaking up economic segregation, and the Rignano scheme can help do this.
Halliday develops his egalitarian framework in chapters three, four, and five. Since my main criticism of the book concerns his attempt to integrate elements of luck egalitarianism into a social egalitarian framework, I will save my discussion of these chapters until the next section.
Chapter seven addresses libertarian views. It is not clear to me why this chapter is included in the book. Halliday writes that "there is more to be gained from using libertarian insights to develop the views that I have already defended, rather than being drawn into the broader fight between egalitarians and libertarians" (p. 162). But since libertarianism is incompatible with liberal egalitarianism, including Halliday's version, I cannot see what utility these "insights" might have. I would have welcomed some explanation of how liberal egalitarianism might perhaps revise and appropriate them.
The final chapter is the most "applied" of the book as it considers alternative tax schemes and argues that the Rignano scheme is to be preferred over viable alternatives for addressing the problem of economic segregation. Halliday emphasizes that a just tax scheme must satisfy a (Rawlsian) criterion of publicity, and that alternative schemes must be evaluated against each other rather than against some perfect ideal, with the recognition that any scheme when implemented will be imperfect (pp. 186-88).
I found especially interesting Halliday's comparison of his proposal with Thomas Piketty's endorsement of a wealth tax (pp. 201-204). The two proposals have different targets: Piketty's wealth tax concerns the staggering wealth of a small class of elite rentiers, whereas the Rignano scheme addresses more general economic segregation. Halliday concludes that the two proposals consequently are compatible. This is an interesting claim, I think, and worthy of further consideration.
II.
As mentioned earlier, in chapters three, four, and five, Halliday tries to formulate a version of liberal egalitarianism that integrates luck egalitarianism and social egalitarianism. I think that this integrative endeavour is unsuccessful.
First, some background. Two families of liberal egalitarian conceptions of justice have emerged over the past few decades: luck egalitarianism and social egalitarianism (the latter also is known as "relational egalitarianism"). Both families share a common ancestor: the account of justice presented in Rawls's A Theory of Justice. Luck egalitarians have understood their project, at least in part, as developing the implications of Rawls's comments on the "moral arbitrariness" of the distribution of unchosen social and natural advantages in Theory into a distinct approach to theorizing about justice, one that is egalitarian in nature but also sensitive to individual responsibility. According to luck egalitarians, the aim of justice is to neutralize any disadvantages that people are born into or acquire as the result of brute luck, disadvantages for which they are not responsible and consequently do not deserve. In a fully just luck egalitarian society, people would fare well or poorly solely in conformity to those decisions and actions for which they are rightly responsible.
But despite helping to inspire luck egalitarianism, Rawls's own conception of "justice as fairness" is a form of social egalitarianism. While not all social egalitarians endorse justice as fairness, they generally follow Rawls's "constructivist" approach to thinking about justice. According to this approach, broadly speaking, principles of political justice should be understood as rationally constructed in order to satisfy the requirements of reciprocity among free and equal citizens under conditions of relative scarcity. A fully just social egalitarian society, then, is not one that "neutralizes luck," but rather one in which citizens relate to each other as social equals on the basis of mutual respect, and freely govern their lives on conditions fair to all.
Halliday's view is primarily a social egalitarian one. He explains some of the key problems with luck egalitarianism, especially as applied to questions having to do with inheritance, in chapter four. For instance, because of its single-minded focus on the distinction between "choice" and "circumstance," what Halliday terms "naïve luck egalitarianism" implausibly condemns "all inheritance no matter its size." So naïve luck egalitarianism condemns as unjust the inheritance of "my grandfather's old beer tankard" (pp. 77-78). After dispatching naïve luck egalitarianism, the more nuanced views of theorists such as G.A. Cohen, Kok-Chor Tan, and Ronald Dworkin are discussed and criticized. Ultimately, Halliday holds that luck egalitarianism, by itself, is unsatisfactory because of its focus on whether or not individuals "deserve" their conditions and holdings: the view cannot address the problem of group segregation and inequality, including economic segregation.
Chapters five and six discuss economic segregation and how inheritance helps to maintain it over time. "Economic segregation," Halliday writes, "is a type of social segregation that occurs when groups have their boundaries defined by economic difference rather than by (e.g.) racial or religious difference" (p. 102). Halliday points out that it is not just inequality of wealth that is a cause of social segregation. The intergenerational transfer of wealth facilitates the hoarding of nonfinancial -- social and cultural -- capital. Social capital "consists in valuable knowledge and opportunities," whereas cultural capital "consists in certain behavioural norms or dispositions" (p. 107). According to Halliday,
the significance of inheritance owes much to the way in which intergenerational transfers help groups maintain their accumulated nonfinancial capital, even if nonfinancial capital is not transferred down the generations simply as an automatic consequence of the transfer of wealth. (pp. 107-108)
Those who inherit wealth, or know that they will eventually, can devote considerable resources and time to providing their children with the means for superior life-prospects. Valuable cultural capital can be transferred through the cultivation of "prestigious" hobbies and skills (violin-playing or fluency in a foreign language), as well as education and more general patterns of behaviour (accents, confidence when interacting with authorities, and so forth). Social capital is secured via access to elite schools, internships, job opportunities, and the like. These forms of nonfinancial capital reinforce each other: those who possess cultural capital can exploit effectively their social capital. Moreover, these advantages reinforce themselves over generations: parents who already possess cultural and social capital can transfer it to their children more readily than those who do not. The discussion of these processes in chapters five and six -- and the difficulty, if not practical impossibility, of overcoming them without wealth redistribution, including restrictions on inheritances -- is insightful and important.
Halliday notes that social egalitarianism is committed to two core claims about justice, one "negative" and one "positive." The negative claim is that "equality requires the elimination of oppressive social hierarchies." The positive claim is that justice requires that social institutions be reformed or designed "so as to create a genuine society of equals" (p. 105). Inheritance flows facilitate economic segregation, and economic segregation is a social hierarchy that thwarts the creation of a society of substantively free and equal citizens. Consequently, the social egalitarian case for regulating inheritance, whether via the Rignano scheme or some other form of taxation, is straightforward. Citizens cannot relate to one another as equals if they live in a de facto aristocracy, that is, a society that (to a great extent) allocates economic opportunities and political power based upon citizens' unchosen classes.
The social egalitarian view looks sufficient to justify the regulation of intergenerational flows of wealth. Yet Halliday contends that luck egalitarianism also has a role to play: "luck cannot be eliminated from an egalitarian diagnosis of what is objectionable about unrestricted inheritance," he writes, "Inheritance is unjust when it allows some people to enjoy brute luck advantage, but the specific kind of brute luck advantage is understood in terms of group membership" (p. 152). So, while social egalitarianism helps explain why economic segregation is unjust and measures should be taken to eliminate or reduce it as much as possible, including regulating inheritances, it needs to be supplemented with the claim that it is unjust that some people enjoy superior or inferior life prospects simply in virtue of having been born into one economic class or another.
I think that this attempt to integrate luck egalitarianism and social egalitarianism misconstrues the nature of social egalitarianism's objection to social segregation. Social egalitarians are aware that it is a matter of "luck" that, for instance, some people need wheelchairs to get around adequately and others do not. But this is not why justice requires the adequate provision of wheelchairs to all citizens who need them, and that public and commercial spaces must accommodate them. Such accommodation simply is necessary for equal citizenship -- "correcting" for brute bad luck has nothing to do with it.
When someone is born into a family in which there is considerable inherited wealth -- in which one's parents themselves inherited or will inherit wealth, and thus can secure competitive advantages with respect to social and cultural capital for the person in question -- this is indeed a matter of "luck," insofar as that person did nothing to "deserve" that place within the social hierarchy. Likewise, someone born white and male in a racist and patriarchal society did nothing to "deserve" that privileged position. But it is not the luck in such cases that is the problem; rather it is the hierarchy. Social egalitarians aim at the elimination of economically segregated hierarchies altogether, whatever their source, because hierarchies prevent egalitarian social relations -- just as they aim at the elimination of race- or sex-based hierarchies. Consequently, I do not think that social egalitarianism "needs" to embed any luck egalitarian component into its framework -- "anti-luckism" simply is not a concern of social egalitarianism. (Being born poor, non-white, and/or female are not "misfortunes" for which one should be "compensated.")
The Rawlsian version of social egalitarianism, for instance, holds that social equality involves ensuring that the principles of justice that are to regulate the most important social institutions of society, its "basic structure," enable citizens to live and interact as both equal subjects and co-sovereigns. Hence those principles must satisfy what Rawls calls the "criterion of reciprocity." According to Halliday, "reciprocity is not the only concept that Rawls used to derive the requirements of justice and . . . he did not actually invoke the concept of reciprocity when alluding to why justice might require restrictions on inherited wealth" (p. 90). This is a misinterpretation, though, as it fails to recognize the foundational role of reciprocity in Rawls's theory. The criterion of reciprocity is the "intrinsic (moral) political ideal" of justice as fairness -- indeed, it justifies the use of the "original position" device to formulate the principles of justice as fairness. Consequently, reciprocity does justify Rawls's overall conception of justice and the restrictions on inherited wealth that he thinks are required by that conception. Justice as fairness is the most reasonable conception of justice because it best satisfies the requirements of reciprocity.
Once we see that social egalitarianism (at least of the Rawlsian variety) is committed to reciprocity, and economic segregation violates the requirements of reciprocity (as expressed in the principles of justice), then the case for regulating the intergenerational transfer of wealth is straightforward -- as Rawls's own brief recommendations indicate. There is no need to appeal to any form of luck egalitarianism. Moreover, the social egalitarian case is not only sufficient, but its "second personal" constructivist character, focused on reciprocity, cannot felicitously be merged with the assumptions of luck egalitarianism.
III.
My comments have focused primarily on the aspect of Halliday's view with which I disagree. Nonetheless, I think that the analyses and arguments presented in this book are interesting and important. I learned a lot from reading it. Halliday certainly is correct that justice requires the regulation of inheritances, and the Rignano scheme is an intriguing proposal for how to do so. Hence, I strongly recommend this book to anyone interested in contemporary issues concerning distributive justice.
See J. Rawls (1999), A Theory of Justice, revised edition (Harvard University Press), pp. 245-246; (2001) Justice as Fairness: A Restatement (Harvard University Press), pp. 160-161.
See, e.g., T. Piketty (2014) Capital in the Twenty-First Century (Harvard University Press).
See Rawls (1999), pp. 63-65, 87-89, 274.
Some social egalitarians, like Elizabeth Anderson, refer to "contractualism" rather than "constructivism" (see E. Anderson (2010) "The Fundamental Disagreement between Luck Egalitarians and Relational Egalitarians," Canadian Journal of Philosophy (Supplementary Volume 36): 1-23). Everything that I say about constructivism can be restated in contractualist terms.
Rawls 2005, p. xlv.
See Rawls 2005, pp. xlviii-xlix, 450.
See Rawls 2005, pp. xlvi-xlvii.
See Anderson (2010). | https://ndpr.nd.edu/reviews/the-inheritance-of-wealth-justice-equality-and-the-right-to-bequeath/ |
New Report Highlights America's Failing Infrastructure; American Society of Civil Engineers President Says Investment Will Stimulate Economy and Create Jobs
February 2009 – With each passing day, aging and overburdened infrastructure further threatens the economy and quality of life in every state, city and town in the nation. In all areas of modern life, from transportation and energy to dams and drinking water, the country's infrastructure is struggling to meet the needs of its growing population. As the nation debates how best to address its current economic crisis, investment in the vital infrastructure systems that support our society has become a key component for discussion. Not only could such an investment create jobs, if done right it could also provide tangible benefits to the American people, such as reduced traffic congestion, improved air quality, clean and abundant water supplies and protection against natural hazards.
On January 28th, 2009, the American Society of Civil Engineers (ASCE) released its newest Report Card for America's Infrastructure, the first update since 2005. ASCE released its very first report in 1998, and unfortunately the nation's infrastructure GPA has only continued to worsen. This newest report covers 15 categories of infrastructure, including: roads, bridges, inland waterways, aviation, drinking water, energy, hazardous waste, rail, schools, solid waste, transit, wastewater, public parks and recreation, dams and levees. While the full report will not be released until March 25th, this initial release includes letter grades for each of the categories, solutions for improvement and the overall investment needed to improve the nation's infrastructure. Wayne Klotz, President of ASCE, explains the grades and gives them context in light of the current economic stimulus plan, outlines solutions including the need for a significant investment, outlines guidelines for successful investment of the stimulus infrastructure funds and provides insight into local issues. | https://www.engineeringdaily.net/a-crumbling-infrastructure-cannot-support-a-healthy-economy/
Background: Acupuncture is widely used by patients with low back pain, although its effectiveness is unclear. We investigated the efficacy of acupuncture compared with minimal acupuncture and with no acupuncture in patients with chronic low back pain.
Methods: Patients were randomized to treatment with acupuncture, minimal acupuncture (superficial needling at nonacupuncture points), or a waiting list control. Acupuncture and minimal acupuncture were administered by specialized acupuncture physicians in 30 outpatient centers, and consisted of 12 sessions per patient over 8 weeks. Patients completed standardized questionnaires at baseline and at 8, 26, and 52 weeks after randomization. The primary outcome variable was the change in low back pain intensity from baseline to the end of week 8, as determined on a visual analog scale (range, 0-100 mm).
Results: A total of 298 patients (67.8% female; mean +/- SD age, 59 +/- 9 years) were included. Between baseline and week 8, pain intensity decreased by a mean +/- SD of 28.7 +/- 30.3 mm in the acupuncture group, 23.6 +/- 31.0 mm in the minimal acupuncture group, and 6.9 +/- 22.0 mm in the waiting list group. The difference for the acupuncture vs minimal acupuncture group was 5.1 mm (95% confidence interval, -3.7 to 13.9 mm; P = .26), and the difference for the acupuncture vs waiting list group was 21.7 mm (95% confidence interval, 13.9-30.0 mm; P<.001). Also, at 26 (P=.96) and 52 (P=.61) weeks, pain did not differ significantly between the acupuncture and the minimal acupuncture groups.
Conclusion: Acupuncture was more effective in improving pain than no acupuncture treatment in patients with chronic low back pain, whereas there were no significant differences between acupuncture and minimal acupuncture. | https://pubmed.ncbi.nlm.nih.gov/16505266/ |
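As a rough illustration of how the between-group differences and confidence intervals in the Results can be reproduced from the summary statistics, the sketch below computes a normal-approximation interval for a difference in mean change scores. The per-arm sample sizes are assumptions (only the total of 298 is reported), so the output only approximates the published values.

```python
# Rough sketch: 95% CI for a difference in mean pain reduction from summary
# statistics (mean change +/- SD per group). The paper reports only the total
# sample (n = 298), so the per-arm sizes below are assumptions; the results
# therefore only approximate the published 5.1 mm (95% CI, -3.7 to 13.9 mm)
# and 21.7 mm (95% CI, 13.9 to 30.0 mm).
import math

def diff_ci(m1, sd1, n1, m2, sd2, n2, z=1.96):
    diff = m1 - m2
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)   # standard error of the difference
    return diff, diff - z * se, diff + z * se

# Acupuncture vs minimal acupuncture (assumed ~99 patients per arm)
print(diff_ci(28.7, 30.3, 99, 23.6, 31.0, 99))

# Acupuncture vs waiting list (assumed 99 vs 100)
print(diff_ci(28.7, 30.3, 99, 6.9, 22.0, 100))
```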
According to the popular American businessman and entrepreneur Sam Walton, "The goal of a company is to have customer service that is not just the best but legendary." Without a shadow of doubt, building strong customer relationships lays the foundation for evangelizing your brand name and boosting your business. This calls for evaluating call center performance to ensure that customer satisfaction is achieved and productivity is enhanced. Here in this blog, we are going to discuss some of the essential call center key performance indicators (KPIs) and the latest metrics to ...
| https://www.taraspan.com/blog/tag/call-center-kpi/
Decarbonisation roadmap for the chemicals sector
The French National Low Carbon Strategy sets a target for the industry sector to reduce greenhouse gas (GHG) emissions by 35% by 2030, compared to 2015. The chemicals sector accounts for 25% of total emissions of the industrial sector.
The Ministry of Ecological Transition published a Decarbonisation Roadmap for the chemicals sector. It highlights the progress made in the sector, its future needs and emissions, and the ways for the sector to reduce its carbon emissions. It sets a new emissions reduction target of 26% by 2030, compared to 2015. The goals highlighted in the roadmap are listed below; their combined annual effect is totalled in the short sketch that follows the list:
- improving energy efficiency (-1.8 MtCO2 eq reduction of annual GHG emissions between 2015 and 2030),
- reducing nitrous oxide (N2O) (-0.8 MtCO2 eq)
- reducing hydrofluorocarbon (HFC) emissions (-0.9 MtCO2 eq)
- producing low-carbon heat (-1.4 MtCO2 eq for heat from biomass and -0.8 MtCO2 eq for heat from Solid Recovered Fuel - SRF)
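A quick bit of arithmetic totals the annual reductions attributed to these levers; relating the total to the 26% target would require the sector's 2015 baseline emissions, which this summary does not give.

```python
# Quick arithmetic: summing the annual GHG reductions attributed to each lever
# in the roadmap (values in MtCO2 eq per year, between 2015 and 2030, as listed above).
levers = {
    "energy efficiency": 1.8,
    "nitrous oxide (N2O)": 0.8,
    "hydrofluorocarbons (HFC)": 0.9,
    "low-carbon heat (biomass)": 1.4,
    "low-carbon heat (SRF)": 0.8,
}
total = sum(levers.values())
print(f"Total identified reductions: {total:.1f} MtCO2 eq/year")  # 5.7
# Relating this total to the 26% target would require the sector's 2015
# baseline emissions, which are not given in this excerpt.
```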
The roadmap encourages tools to ensure competitive and predictable access to low-carbon electricity, while providing incentives for energy efficiency, including:
- Tools to support competitive low-carbon energy supply for industry;
- Extending and securing the interruptibility and tariff reduction mechanisms of the public electricity network;
- Implementing compensation of indirect costs of the ETS scheme for the period 2021-2030; and
- Encouraging an energy tax system that favours electrification.
Topical Area: Policy, Global Nutrition
Objectives : Eliminating malnutrition is on many countries’ political agendas but knowledge of how enabling environments are created and used is needed. We assessed the drivers of change in stunting reduction among children < 5 y of age in Rwanda and contributors to differential reduction over 10-25 y.
Methods : We conducted in-depth interviews on changes in nutrition with nutrition stakeholders at national (n=32), district (n=38), and community (n=20) levels, and community focus group discussions (n=40) in 10 purposefully selected districts in Rwanda’s 5 provinces. In each province, we selected 1 district with decreased stunting and 1 where no change or an increase occurred (2010-2015). We also used regression decomposition analysis to investigate drivers of change in stunting with Demographic and Health Surveys (2005, 2010, and 2015) data.
Results : Respondents believed peace and security along with improved leadership and decentralization helped to create an enabling environment for change. Rwanda experienced increased political and institutional commitment to nutrition indicated by adoption of a multisectoral policy and reinforced with horizontal coordination platforms and plans at national and sub-national levels, but greater financial commitment is needed according to respondents. Vertical coordination across administrative levels improved through communication, staff working on nutrition at these levels, and relationships between nutrition actors. From respondent reports, health and agricultural programs and increased availability and use of health services helped improve nutrition; differences between study districts included climate change challenges, food insecurity, weak horizontal and vertical coherence, and weak implementation of coordination plans. Supporting this, giving birth in a health facility, attending ≥ 4 antenatal care visits, antenatal care quality, fertility, parental education, household wealth, and health insurance coverage drove stunting reduction from the regression decomposition analysis.
Conclusions : Leadership, commitment and horizontal and vertical coherence are important for creating enabling environments and providing programs and services that can lead to reduced malnutrition. | https://eventscribe.com/2019/ASN/fsPopup.asp?efp=UURQVExBSVA3OTU5&PosterID=203229&rnd=0.1433422&mode=posterinfo |
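The abstract does not spell out which regression decomposition was applied, but the general idea can be sketched as a two-survey, Oaxaca-Blinder-style split of the change in mean stunting into a part explained by shifts in covariate levels (e.g. facility births, antenatal care) and an unexplained part due to changing coefficients. The covariates, data and column choices below are hypothetical.

```python
# Generic sketch of a two-survey regression (Oaxaca-Blinder-style) decomposition:
# the change in mean stunting between survey rounds is split into a part
# attributable to changes in covariate means ("explained") and a part
# attributable to changes in coefficients ("unexplained"). The abstract does not
# say exactly which decomposition was used; data and covariates are hypothetical.
import numpy as np


def fit_ols(X, y):
    # Linear probability model with an intercept, via least squares.
    Xc = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    return beta


def decompose(X1, y1, X2, y2):
    b1, b2 = fit_ols(X1, y1), fit_ols(X2, y2)
    m1 = np.append(1.0, X1.mean(axis=0))   # covariate means, round 1
    m2 = np.append(1.0, X2.mean(axis=0))   # covariate means, round 2
    total = m2 @ b2 - m1 @ b1              # change in mean stunting
    explained = (m2 - m1) @ b1             # driven by changes in covariate levels
    unexplained = m2 @ (b2 - b1)           # driven by changes in coefficients
    return total, explained, unexplained


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical binary covariates: facility birth, >=4 ANC visits, wealth proxy.
    X1 = rng.binomial(1, [0.4, 0.3, 0.5], size=(2000, 3)).astype(float)
    X2 = rng.binomial(1, [0.7, 0.5, 0.6], size=(2000, 3)).astype(float)
    y1 = rng.binomial(1, np.clip(0.5 - 0.1 * X1.sum(axis=1), 0.05, 0.95))
    y2 = rng.binomial(1, np.clip(0.5 - 0.1 * X2.sum(axis=1), 0.05, 0.95))
    print(decompose(X1, y1, X2, y2))
```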
Genetic variation affects morphological retinal phenotypes extracted from UK Biobank optical coherence tomography images. PLoS Genet 2021;17(5):e1009497. Abstract.
Optical Coherence Tomography (OCT) enables non-invasive imaging of the retina and is used to diagnose and manage ophthalmic diseases including glaucoma. We present the first large-scale genome-wide association study of inner retinal morphology using phenotypes derived from OCT images of 31,434 UK Biobank participants. We identify 46 loci associated with thickness of the retinal nerve fibre layer or ganglion cell inner plexiform layer. Only one of these loci has been associated with glaucoma, and despite its clear role as a biomarker for the disease, Mendelian randomisation does not support inner retinal thickness being on the same genetic causal pathway as glaucoma. We extracted overall retinal thickness at the fovea, representative of foveal hypoplasia, with which three of the 46 SNPs were associated. We additionally associate these three loci with visual acuity. In contrast to the Mendelian causes of severe foveal hypoplasia, our results suggest a spectrum of foveal hypoplasia, in part genetically determined, with consequences on visual function.
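For readers unfamiliar with Mendelian randomisation, the sketch below shows the inverse-variance-weighted estimator commonly used in such analyses: per-SNP effects on the exposure (retinal thickness) and on the outcome (glaucoma) are combined into a single causal estimate, and an interval overlapping zero is read as no support for a shared causal pathway. The per-SNP numbers are invented for illustration and are not taken from this study.

```python
# Toy sketch of the inverse-variance-weighted (IVW) Mendelian randomisation
# estimator: per-SNP effects of genetic variants on the exposure (retinal
# thickness) and on the outcome (glaucoma) are combined into one causal
# estimate. The summary statistics below are invented for illustration only.
import numpy as np

def ivw_mr(beta_exposure, beta_outcome, se_outcome):
    # Per-SNP Wald ratios, combined with inverse-variance weights.
    ratios = beta_outcome / beta_exposure
    weights = (beta_exposure / se_outcome) ** 2
    estimate = np.sum(weights * ratios) / np.sum(weights)
    se = 1.0 / np.sqrt(np.sum(weights))
    return estimate, se

# Hypothetical summary statistics for a handful of thickness-associated SNPs.
bx = np.array([0.08, 0.05, 0.11, 0.07])      # SNP -> retinal thickness
by = np.array([0.01, -0.02, 0.00, 0.015])    # SNP -> glaucoma (log odds)
se_by = np.array([0.02, 0.03, 0.025, 0.02])

est, se = ivw_mr(bx, by, se_by)
print(f"IVW estimate {est:.2f} (SE {se:.2f})")  # interval overlapping zero -> no support
```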
Comparison between wide-field digital imaging system and the red reflex test for universal newborn eye screening in Brazil. Acta Ophthalmol 2021; Abstract.
PURPOSE: To compare neonatal eye screening using the red reflex test (RRT) versus the wide-field digital imaging (WFDI) system. METHODS: Prospective cohort study. Newborns (n = 380, 760 eyes) in the Maternity Ward of Irmandade Santa Casa de Misericórdia de São Paulo hospital from May to July 2014 underwent RRT by a paediatrician and WFDI performed by the authors. Wide-field digital imaging (WFDI) images were analysed by the authors. Validity of the paediatrician's RRT was assessed by unweighted kappa [κ] statistic, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV). RESULTS: While WFDI showed abnormalities in 130 eyes (17.1%), RRT was only abnormal in 13 eyes (1.7%). Wide-field digital imaging (WFDI) detected treatable retina pathology that RRT missed including hyphema, CMV retinitis, FEVR and a vitreous haemorrhage. The sensitivity of the paediatrician's RRT to detect abnormalities was poor at 0.77% (95% confidence interval, CI, 0.02%-4.21%) with a PPV of only 7.69% (95% CI, 1.08%-38.85%). Overall, there was no agreement between screening modalities (κ = -0.02, 95% CI, -0.05 to 0.01). The number needed to screen to detect ocular abnormalities using WFDI was 5.9 newborns and to detect treatable abnormalities was 76 newborns. CONCLUSION: While RRT detects gross abnormalities that preclude visualization of the retina (i.e. media opacities and very large tumours), only WFDI consistently detects subtle treatable retina and optic nerve pathology. With a higher sensitivity than the current gold standard, universal WFDI allows for early detection and management of potentially blinding ophthalmic disease missed by RRT.
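The accuracy measures quoted above can be recomputed from a simple 2x2 table of RRT results against the WFDI reference standard. The counts below are reconstructed from the reported percentages (a sensitivity of 0.77% among 130 abnormal eyes and a PPV of 7.69% among 13 positive tests both imply a single true positive), so treat them as an approximation of the authors' data rather than the raw numbers.

```python
# Sketch of the accuracy measures reported above, computed from a 2x2 table of
# RRT results against the WFDI reference standard. The counts are reconstructed
# from the abstract's percentages and are an approximation, not the raw data.

def screening_metrics(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    # Unweighted Cohen's kappa: observed agreement versus chance agreement.
    po = (tp + tn) / n
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (po - pe) / (1 - pe)
    return sens, spec, ppv, npv, kappa

# Reconstructed counts out of 760 eyes: 130 abnormal on WFDI, 13 positive on RRT.
print(screening_metrics(tp=1, fp=12, fn=129, tn=618))
# Reproduces sensitivity ~0.77%, PPV ~7.69% and kappa ~ -0.02 as reported.
```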
Relationships between expertise and distinctiveness: Abnormal medical images lead to enhanced memory performance only in experts. Mem Cognit 2021;Abstract.
Memories are encoded in a manner that depends on our knowledge and expectations ("schemas"). Consistent with this, expertise tends to improve memory: Experts have elaborated schemas in their domains of expertise, allowing them to efficiently represent information in this domain (e.g., chess experts have enhanced memory for realistic chess layouts). On the other hand, in most situations, people tend to remember abnormal or surprising items best: those that are also rare or out-of-the-ordinary occurrences (e.g., surprising, but not random, chess board configurations). This occurs, in part, because such images are distinctive relative to other images. In the current work, we ask how these factors interact in a particularly interesting case: the domain of radiology, where experts actively search for abnormalities. Abnormality in mammograms is typically focal but can be perceived in the global "gist" of the image. We ask whether, relative to novices, expert radiologists show improved memory for mammograms. We also test for any additional advantage for abnormal mammograms that can be thought of as unexpected or rare stimuli in screening. We find that experts have enhanced memory for focally abnormal images relative to normal images. However, radiologists showed no memory benefit for images of the breast that were not focally abnormal, but were only abnormal in their gist. Our results speak to the role of schemas and abnormality in expertise; the necessity for spatially localized abnormalities versus abnormalities in the gist in enhancing memory; and the nature of memory and decision-making in radiologists.
Clinical Update on Metamorphopsia: Epidemiology, Diagnosis and Imaging. Curr Eye Res 2021;:1-15.Abstract.
Purpose: To discuss the pathophysiology of metamorphopsia, its characterisation using retinal imaging and methods of assessment of patient symptoms and visual function. Methods: A literature search of electronic databases was performed. Results: Metamorphopsia has commonly been associated with vitreomacular interface disorders (such as epiretinal membrane) and has also regularly been noted in diseases of the retina and choroid, particularly age-related macular degeneration and central serous chorioretinopathy. Developments in optical coherence tomography retinal imaging have enabled improved imaging of the foveal microstructure and have led to the localisation of the pathophysiology of metamorphopsia within the retinal layers of the macula. Alteration of alignment of inner and outer retinal layers at various retinal loci has been identified using multimodal imaging in patients with metamorphopsia in a range of conditions. Although the Amsler Grid assessment of metamorphopsia is a useful clinical indicator, new emerging methods of metamorphopsia assessment, based on psychophysical tests such as M-CHARTS and preferential hyperacuity perimetry, have been developed. Conclusions: It appears that there is a complex relationship between visual acuity and metamorphopsia symptoms that varies between retinal conditions. Although metamorphopsia has traditionally been challenging to measure in the clinic, advances in technology promise more robust, easy-to-use tests. It is possible that home assessment of metamorphopsia, particularly in conditions such as age-related macular degeneration, may help to guide the need for further clinic evaluation and consideration of treatment.
Optical Coherence Tomography Angiography in Prematurity. Semin Ophthalmol 2021;36(4):264-269.Abstract.
Purpose: During normal foveal development there is a close interaction between the neurosensory and vascular elements of the fovea, making it vulnerable to prematurity and retinopathy of prematurity (ROP). We aim to assess this potential effect on foveal development in preterms evaluated simultaneously with both optical coherence tomography (OCT) and OCT angiography (OCTA). Method: An unrestricted literature search in the PubMed and Cochrane library databases yielded 20 distinct citations. Fifteen were relevant and reviewed. Results: In preterms, OCTA demonstrated a significant decrease in the foveal avascular zone area and an increase in foveal vessel density. OCT showed a decrease in foveal pit depth and an increase in the thickness of the subfoveal retinal layers. Some studies correlated these changes with reduced vision. Conclusion: Changes in the vascular and neurosensory retina were found in premature children. It remains unclear whether this is related to prematurity alone or to ROP and its treatment.
Automated Microaneurysm Counts on Ultrawide Field Color and Fluorescein Angiography Images. Semin Ophthalmol 2021;36(4):315-321.Abstract.
BACKGROUND: The severity and extent of microaneurysms (MAs) have been used to determine diabetic retinopathy (DR) severity and estimate the risk of DR progression over time. The recent introduction of ultrawide field (UWF) imaging has allowed ophthalmologists to readily image nearly the entire retina. Manual counting of MAs, especially on UWF images, is laborious and time-consuming, limiting its potential use in clinical settings. Automated MA counting techniques are potentially more accurate and reproducible compared to manual methods. METHOD: Review of available literature on current techniques of automated MA counting techniques on both ultrawide field (UWF) color images (CI) and fluorescein angiography (FA) images. RESULTS: Automated MA counting techniques on UWF images are still in the early phases of development with UWF-FA counts being further along. Early studies have demonstrated that these techniques are accurate and reproducible. CONCLUSION: Automated techniques may be an appropriate option for detecting and quantifying MAs on UWF images, especially in eyes with earlier DR severity. Larger studies are needed to appropriately validate these techniques and determine if they add substantially to clinical practice compared to standard DR grading.
Wide Field Swept Source Optical Coherence Tomography Angiography for the Evaluation of Proliferative Diabetic Retinopathy and Associated Lesions: A Review. Semin Ophthalmol 2021;36(4):162-167.Abstract.
Retinal imaging remains the mainstay for monitoring and grading diabetic retinopathy. The gold standard for detecting proliferative diabetic retinopathy (PDR) requiring treatment has long been the seven-field stereoscopic fundus photography and fluorescein angiography. In the past decade, ultra-wide field fluorescein angiography (UWF-FA) has become more commonly used in clinical practice for the evaluation of more advanced diabetic retinopathy. Since its invention, optical coherence tomography (OCT) has been an important tool for the assessment of diabetic macular edema; however, OCT offered little in the assessment of neovascular changes associated with PDR until OCT-A became available. More recently, swept source OCT allowed larger field of view scans to assess a variety of DR lesions with wide field swept source optical coherence tomography (WF-SS-OCTA). This paper reviews the role of WF-SS-OCTA in detecting neovascularization of the disc (NVD), and elsewhere (NVE), microaneurysms, changes of the foveal avascular zone (FAZ), intraretinal microvascular abnormalities (IRMA), and capillary non-perfusion, as well as limitations of this evolving technology.
Artificial Intelligence (AI) and Retinal Optical Coherence Tomography (OCT). Semin Ophthalmol 2021;36(4):341-345.Abstract.
Ophthalmology has been at the forefront of medical specialties adopting artificial intelligence. This is primarily due to the "image-centric" nature of the field. Thanks to the abundance of patients' OCT scans, analysis of OCT imaging has greatly benefited from artificial intelligence to expand patient screening and facilitate clinical decision-making. In this review, we define the concepts of artificial intelligence, machine learning, and deep learning and how different artificial intelligence algorithms have been applied in OCT image analysis for disease screening, diagnosis, management, and prognosis. Finally, we address some of the challenges and limitations that might affect the incorporation of artificial intelligence in ophthalmology. These limitations mainly revolve around the quality and accuracy of datasets used in the algorithms and their generalizability, false negatives, and the cultural challenges around the adoption of the technology.
Anterior Segment Imaging Devices in Ophthalmic Telemedicine. Semin Ophthalmol 2021;36(4):149-156.Abstract.
Obtaining a clear assessment of the anterior segment is critical for disease diagnosis and management in ophthalmic telemedicine. The anterior segment can be imaged with slit lamp cameras, robotic remote controlled slit lamps, cell phones, cell phone adapters, digital cameras, and webcams, all of which can enable remote care. The ability of these devices to identify various ophthalmic diseases has been studied, including cataracts, as well as abnormalities of the ocular adnexa, cornea, and anterior chamber. This article reviews the current state of anterior segment imaging for the purpose of ophthalmic telemedical care.
Ultra-Widefield Imaging for Evaluation of the Myopic Eye. Semin Ophthalmol 2021;36(4):185-190.Abstract.
Topic: Ultra-widefield (UWF) imaging of the myopic eye. Clinical Relevance: Myopes, and particularly high and pathologic myopes, present a unique challenge in fundoscopic imaging. Critical pathology is often located in the anteriormost portion of the retina, variations in posterior segment contour are difficult to capture in two-dimensional images, and extremes in axial length make simply focusing imaging devices difficult. Methods: We review the evolution of modalities for ophthalmic imaging (color fundus photography [CFP], optical coherence tomography [OCT], angiography, artificial intelligence [AI]) to present-day UWF technology and its impact on our understanding of myopia. Results: Advances in UWF technology address many of the challenges in fundoscopic imaging of myopes, providing new insights into the structure and function of the myopic eye. UWF CFP improves our ability to detect and document anterior peripheral pathology prevalent in approximately half of all high myopes. UWF OCT better captures the staphylomatous contour of the myopic eye, providing enhanced visualization of the vitreoretinal interface and progressive development of myopic traction maculopathy. UWF angiography highlights the posterior vortex veins, thin choriocapillaris, far peripheral avascularity, and peripheral retinal capillary microaneurysms more prevalent in the myopic eye. Researchers have demonstrated the ability of AI algorithms to predict refractive error, and great potential remains in the use of AI technology for the screening and prevention of myopic disease. Conclusion: We note significant progress in our ability to capture anterior pathology and improved image quality of the posterior segment of high and pathologic myopes. The next jump forward for UWF imaging will be the ability to capture a high-quality ora-to-ora multimodal fundoscopic image in a single scan that will allow for sensitive AI-assisted screening of myopic disease.
OCTA Findings in Pre-Clinical Alzheimer's Disease: 3 Year Follow-Up. Ophthalmology 2021;.
Retinal Imaging Findings in Carriers With PSEN1-Associated Early-Onset Familial Alzheimer Disease Before Onset of Cognitive Symptoms. JAMA Ophthalmol 2021;139(1):49-56.Abstract.
Importance: Individuals with autosomal dominant mutations for Alzheimer disease are valuable in determining biomarkers present prior to the onset of cognitive decline, improving the ability to diagnose Alzheimer disease as early as possible. Optical coherence tomography (OCT) has surfaced as a potential noninvasive technique capable of analyzing central nervous system tissues for biomarkers of Alzheimer disease. Objective: To evaluate whether OCT can detect early retinal alterations in carriers of the presenilin 1 (PSEN1 [OMIM 104311]) E280A mutation who are cognitively unimpaired. Design, Setting, and Participants: A cross-sectional imaging study conducted from July 13, 2015, to September 16, 2020, included 10 carriers of the PSEN1 E280A mutation who were cognitively unimpaired and 10 healthy noncarrier family members, all leveraged from a homogenous Colombian kindred. Statistical analysis was conducted from September 9, 2017, to September 16, 2020. Main Outcomes and Measures: Mixed-effects multiple linear regression was performed to compare the thickness values of the whole retina and individual retinal layers on OCT scans between mutation carriers and noncarriers. Simple linear-effects and mixed-effects multiple linear regression models were used to assess whether age was an effect modifier for PSEN1 mutation of amyloid β levels and retinal thickness, respectively. Fundus photographs were used to compare the number of arterial and venous branch points, arterial and venous tortuosity, and fractal dimension. Results: This study included 10 carriers of the PSEN1 E280A mutation who were cognitively unimpaired (7 women [70%]; mean [SD] age, 36.3 [8.1] years) and 10 healthy noncarrier family members (7 women [70%]; mean [SD] age, 36.4 [8.2] years). Compared with noncarrier controls, PSEN1 mutation carriers who were cognitively unimpaired had a generalized decrease in thickness of the whole retina as well as individual layers detected on OCT scans, with the inner nuclear layer (outer superior quadrant, β = -3.06; P = .007; outer inferior quadrant, β = -2.60; P = .02), outer plexiform layer (outer superior quadrant, β = -3.44; P = .03), and outer nuclear layer (central quadrant, β = -8.61; P = .03; inner nasal quadrant, β = -8.39; P = .04; inner temporal quadrant, β = -9.39; P = .02) showing the greatest amount of statistically significant thinning. Age was a significant effect modifier for the association between PSEN1 mutation and amyloid β levels in cortical regions (β = 0.03; P = .001) but not for the association between PSEN1 mutation and retinal thickness. No statistical difference was detected in any of the vascular parameters studied. Conclusions and Relevance: These findings suggest that OCT can detect functional and morphologic changes in the retina of carriers of familial Alzheimer disease who are cognitively unimpaired several years before clinical onset, suggesting that OCT findings and retinal vascular parameters may be biomarkers prior to the onset of cognitive decline.
A quantitative comparison of four optical coherence tomography angiography devices in healthy eyes. Graefes Arch Clin Exp Ophthalmol 2021;259(6):1493-1501.Abstract.
PURPOSE: Optical coherence tomography angiography (OCT-A) is a novel imaging modality for the diagnosis of chorioretinal diseases. A number of FDA-approved OCT-A devices are currently commercially available, each with unique algorithms and scanning protocols. Although several published studies have compared different combinations of OCT-A machines, there is a lack of agreement on the consistency of measurements across OCT-A devices. Therefore, we conducted a prospective quantitative comparison of four available OCT-A platforms. METHODS: Subjects were scanned on four devices: Optovue RTVue-XR, Heidelberg Spectralis OCT2 module, Zeiss Plex Elite 9000 Swept-Source OCT, and Topcon DRI-OCT Triton Swept-Source OCT. 3 mm × 3 mm images were utilized for analysis. Foveal avascular zone (FAZ) area was separately and independently measured by two investigators. Fractal dimension (FD), superficial capillary plexus (SCP), and deep capillary plexus (DCP) vessel densities (VD) were calculated from binarized images using the Fiji image processing software. SCP and DCP VD were further calculated after images were skeletonized. Repeated measures ANOVA, post hoc tests, and the intraclass correlation coefficient (ICC) were performed for statistical analysis. RESULTS: Sixteen healthy eyes from sixteen patients were scanned on the four devices. Images of five eyes from the Triton device were excluded due to poor image quality; thus, the authors performed two sets of comparisons, one with and one without the Triton machine. FAZ area showed no significant difference across devices with an ICC of > 95%. However, there were statistically significant differences for SCP and DCP VD both before and after skeletonization (p < 0.05). Fractal analysis revealed no significant difference of FD at the SCP; however, a statistically significant difference was found for FD at the DCP layer (p < 0.05). CONCLUSIONS: The results showed that FAZ measurements were consistent across all four devices, while significant differences in VD and FD measurements existed. Therefore, we suggest that for both clinical follow-up and research studies, FAZ area is a useful parameter for OCT-A image analysis when measurements are made on different machines, while VD and FD show significant variability when measured across devices.
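As an illustration of the vessel-density step described above (the study itself used Fiji, not the code below), a minimal Python sketch of measuring vessel density on a binarized slab before and after skeletonization might look like this; the toy mask is random and merely stands in for a real thresholded OCT-A image.

```python
import numpy as np
from skimage.morphology import skeletonize

def vessel_density(binary_mask):
    """Fraction of pixels flagged as vessel in a binarized en-face slab."""
    return float(binary_mask.mean())

def skeletonized_vessel_density(binary_mask):
    """Vessel density after thinning vessels to 1-pixel-wide centrelines,
    which removes the influence of vessel calibre on the measurement."""
    return float(skeletonize(binary_mask).mean())

# Toy stand-in for a thresholded 3 mm x 3 mm OCT-A slab (random, illustrative only).
rng = np.random.default_rng(0)
toy_mask = rng.random((304, 304)) > 0.7

print(vessel_density(toy_mask), skeletonized_vessel_density(toy_mask))
```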
Comparison of widefield swept-source optical coherence tomography angiography with ultra-widefield colour fundus photography and fluorescein angiography for detection of lesions in diabetic retinopathy. Br J Ophthalmol 2021;105(4):577-581.Abstract.
AIMS: To compare widefield swept-source optical coherence tomography angiography (WF SS-OCTA) with ultra-widefield colour fundus photography (UWF CFP) and fluorescein angiography (UWF FA) for detecting diabetic retinopathy (DR) lesions. METHODS: This prospective, observational study was conducted at Massachusetts Eye and Ear from December 2018 to October 2019. Proliferative DR, non-proliferative DR and diabetic patients with no DR were included. All patients were imaged with a WF SS-OCTA using a Montage 15×15 mm scan. UWF CFP and UWF FA were taken by a 200°, single capture retinal imaging system. Images were independently evaluated for the presence or absence of DR lesions including microaneurysms (MAs), intraretinal microvascular abnormalities (IRMAs), neovascularisation elsewhere (NVE), neovascularisation of the optic disc (NVD) and non-perfusion areas (NPAs). All statistical analyses were performed using SPSS V.25.0. RESULTS: One hundred and fifty-two eyes of 101 participants were included in the study. When compared with UWF CFP, WF SS-OCTA was found to be superior in detecting IRMAs (p<0.001) and NVE/NVD (p=0.007). The detection rates of MAs, IRMAs, NVE/NVD and NPAs in WF SS-OCTA were comparable with UWF FA images (p>0.05). Furthermore, when we compared WF SS-OCTA plus UWF CFP with UWF FA, the detection rates of MAs, IRMAs, NVE/NVD and NPAs were identical (p>0.005). Agreement (κ=0.916) between OCTA and FA in classifying DR was excellent. CONCLUSION: WF SS-OCTA is useful for identification of DR lesions. WF SS-OCTA plus UWF CFP may offer a less invasive alternative to FA for DR diagnosis. | https://eye.hms.harvard.edu/topic/imaging-diagnostics |
Note: This is a student project from a course affiliated with the Ethnography of the University Initiative. EUI supports faculty development of courses in which students conduct original research on their university, and encourages students to think about colleges and universities in relation to their communities and within larger national and global contexts.
Files in this item
|Files||Description||Format|
|Tippy Research Process.doc (87kB)||Tippy's Research Process||Microsoft Word|
|Final Paper.docx (29kB)||Final Paper||Microsoft Word 2007|
|Final Presentation.pptx (105kB)||Final Presentation||Microsoft PowerPoint 2007|
|Grill-Donovan Research Process.docx (32kB)||Grill-Donovan's Research Process||Microsoft Word 2007|
Other Available Formats (automatically converted using OpenOffice.org)
|Tippy Research Process.doc.pdf (83kB)||PDF|
|Final Presentation.pptx.pdf (120kB)||PDF|
Description
|Title:||Motivations and Consequences of Students Going Home on the Weekends: An Ethnographic Study at Illinois State University|
|Author(s):||Grill-Donovan, Katie; Gegg, Anne; Tippy, Erin|
|Subject(s):||Going home; student involvement; family ties; ANT 285; Fall 2009|
|Abstract:||At Illinois State, student involvement is a major issue because the university’s goal is to provide an enriched college experience beyond the classroom. Although over 30% of students live in campus housing, many students still leave campus on the weekends. The purpose of this study is to examine why students decide to “get involved” or not, using interviews with students and staff, surveys of students, and observations of campus events. Knowing who is involved and who is going home on the weekends can help the university learn how to capture students’ interests so that they get involved and want to stay on campus more.|
|Issue Date:||2009|
|Course / Semester:||The objective of this course is to provide students with hands-on training in ethnographic methods and writing and to help students become critical readers of ethnographic research. Instead of attempting to present a whole smorgasbord of research methods or to survey the vast literature on ethnographic fieldwork, we focus on a small selection of techniques that are central to much anthropological fieldwork (field note taking, participant observation, interviewing, mapping) and that are most useful and relevant for students' semester projects. Other techniques and issues are discussed and incorporated as they emerge from students' own research inquiries. Students will not conduct a full-blown research project but instead will get “a taste” of ethnographic research through a series of ethnographic exercises and students' own mini-project. At the end of the semester, students write an ethnographic research report based on their findings and reflections on the research process. This course is affiliated with the EUI and, as a part of this initiative, students are asked to try their hand at ethnographic research about their own institution, Illinois State University.|
|Type:||Text|
|URI:||http://hdl.handle.net/2142/16420|
|Date Available in IDEALS:||2010-06-12|
This item appears in the following Collection(s):
- Student Communities and Culture
The university offers an extraordinary opportunity to study and document student communities, life, and culture. This collection includes research on the activities, clubs, and durable social networks that sometimes comprise the greater portion of the university experience for students.
Understand Your 4 Key Dimensions
One of the great benefits of the Highlands Ability Battery is the work done to provide you with an understanding of how the abilities work together in the real world. After all, we are whole people and each situation we are in is employing multiple abilities and skills. The 4 Key Dimension report takes four important areas of life and gives you an overview of how your results work in those areas. This further helps you to maximize your results.
The 4 key areas are: Work Environment, Learning Strategies, Problem-Solving and Communication.
Work Environment
What work environment do you thrive in? Are you energized working in groups or do you prefer working independently? Do you enjoy having a big picture and moving from one task to the next or do you work best with one task at a time? Can you easily work in a fast-paced environment or do you work better with more time to focus?
Learning Strategies
Your specialized abilities reveal how you take in new information. This translates directly into how you learn best. Do you remember what you hear, see, or read? Do you learn best by doing? Once you understand how you learn you can develop helpful strategies that you can apply to any learning environment whether at school or work or even hobbies.
Problem-Solving
Life is full of problems to solve. Do you have a Diagnostic, Analytical, Experiential or Consultative problem-solving style? How do you think through an issue? Are you methodical? Do you go with your gut? Do you find arriving at conclusions easy? How well can you explain your decisions to others? Understanding this Key Dimension gives you an advantage in finding your best work environment.
Communication
This key dimension is also influenced by a number of abilities. Maybe you think of Introvert vs. Extrovert; that certainly plays a role. Other abilities also contribute to the way we communicate. How patient are you? How detailed are your explanations? Do you provide examples, or even multiple examples? Considering we communicate every day, understanding your communication preferences gives you another tool to use when evaluating decisions about your future.
Fera is part of a new project (BIPESCO), funded under the LWEC Tree Health and Plant Biosecurity Initiative, which will develop entomopathogenic fungi (EPF) and botanicals to control insect pests that are a major problem in forestry and tree nurseries, and an emergent threat to trees and human health. Botanicals (essential oils, extracts or plant derivatives) with attractant or repellent properties will be used alone or with EPF in novel pest control strategies.
EPFs are considered natural mortality agents and environmentally safe, and there is worldwide interest in their use and manipulation for biological control of insects and other arthropod pests. Attractant compounds will also be used to improve pest monitoring and deployed in mass trapping programmes. The project is a collaboration between University of Swansea, Fera and Forest Research (Northern Research Station, Roslin, Midlothian).
Pest control is still heavily dependent upon the use of chemical insecticides, but pressure is increasing to develop environmentally friendly products and strategies. Key drivers include the EC-wide withdrawal of 67% of pesticides, public concerns regarding the risks posed by chemical pesticides to human health and the environment, increasing incidence of resistance developing in pest populations, and EU legislation.
Fera scientists will be contributing expertise in insect physiology (including insect immunology, biochemistry, proteomics, molecular biology, insect culturing and bio-assays), and the preparation and utilization of insect pathogens. Fera also has state-of-the-art insect quarantine facilities and a strong record in the development of more benign, sustainable and novel strategies for insect pest control based on disruption of essential physiological processes within the target pests. | https://proarbmagazine.com/pest-control-insects-threaten-tree-health/ |
This primary school resource provides five inspirational tales of human rights defenders – Fela, Maria, Bobo, Ishmael and Farai. Based on true stories, the tales chronicle each defender's social challenges and life story, and how each of them defended human rights in their own country.
Each story is written in an accessible manner, in versions for both older and younger learners, and can be used from infants up to 6th class.
Engaging with the stories of five real activists through discussion, creative thinking and character exploration can support the development of literacy.
Through the familiar medium of storytelling, human rights situations can be explored at a level at which children can encounter the real-life impact of human rights in practice.
Issues explored in the stories include: Housing; Poverty; Literacy; Freedom of Expression; Child Soldiers; Blood Diamonds.
Countries included: Nigeria; Zimbabwe; Sierra Leone; Myanmar; Angola. | https://developmenteducation.ie/resource/human-rights-stories-tales-of-human-rights-defenders-for-primary-schools/?sf_paged=3 |
by Northamptonshire Education and Libraries Department in Northampton.
Written in English
|ID Numbers|
|Open Library||OL14277037M|
The school has also forged links with the local public library through a scheme intended to encourage its use among pupils, something the Ofsted survey says more schools should do. Children soon to join the reception class are enrolled in the local library and encouraged to borrow 10 books from a selection set aside for the school. People use computers and internet connections at libraries for the basics, and they also go to libraries to use tech resources: in this survey, 29% of library-using Americans 16 and older said they had gone to libraries to use computers, the internet, or a public Wi-Fi network (that amounts to 23% of all Americans ages 16 and above). Selected statistics on public school libraries/media centers, by level of school: selected years, – through –12, Digest of Education Statistics, Table: Number and percentage of public schools with libraries/media centers and average number of staff per library/media center, by staff type and employment status. The study concludes that for effective use of school libraries by secondary school students, there is a need for current and adequate school library information resources, provision of.
CHICAGO – New research published in the American Association of School Librarians’ (AASL) peer-reviewed online journal, School Library Research (SLR), analyzes the use of school libraries by students who receive free school meals. SLR promotes and publishes high-quality original research concerning the management, implementation, and evaluation of school libraries. Birmingham Public Library Patron Survey: Patron suggestions are the most important resource we have in determining what materials and services to offer at Birmingham Public Library. This survey will give you the opportunity to make valuable and specific suggestions to us.
• Nearly half (%) of young people said that they do not use public libraries at all.
• Young people from white backgrounds use public libraries the least (%).
• Public library use declines drastically and significantly with age, with % of KS2, % of KS3 and only % of KS4 pupils saying that they use their public library.
During the –11 school year, public school library media centers spent an average of $9, for all information resources (table 4). This includes an average of $6, for the purchase of books and $ for the purchase of audio/video materials. Assessment of Access and Use of School Library Information Resources by Secondary Schools Students in Morogoro Municipality, Tanzania: the study touched on special, national, public, research and academic libraries, but its focus was on the school library. In the UK, pupils said that they use the school library, and school library use was associated with gender, age, socioeconomic background and ethnic background. These new data also show that % of pupils use the school library at least once a week, while there was no gender difference in frequency of using the school library.
Explores spoken word poetry as a tool for social justice, critical feminist pedagogy, and new ways of teaching.
The writing and performance of spoken word poetry can create moments of productive critical engagement. In The Fifth Element, Crystal Leigh Endsley charts her experience of working with a dynamic and diverse group of college students, who are also emerging artists, to explore the connection between spoken word and social responsibility. She considers how themes of activism, identity, and love intersect with the lived experiences of these students and how they use spoken word to negotiate resistance and to navigate through life. Endsley also examines the local and transnational communities where performances took place to shed light on concepts of social responsibility and knowledge production.
Crystal Leigh Endsley is Assistant Professor of Africana Studies at John Jay College of Criminal Justice, City University of New York. | http://www.sunypress.edu/p-6179-the-fifth-element.aspx |
Introduces physics as it analyzes the science behind "Star Trek," explaining the intricacies of warp speed and showing the difference between a holodeck and a hologram.
Great Experiments in Physics
Firsthand Accounts from Galileo to Einstein
by Morris H. Shamos
- Publisher : Courier Corporation
- Release : 1987-01-01
- Pages : 370
- ISBN : 9780486253466
- Language : En, Es, Fr & De
Starting with Galileo's experiments with motion, this study of 25 crucial discoveries includes Newton's laws of motion, Chadwick's study of the neutron, Hertz on electromagnetic waves, and more. Includes Isaac Newton's "The Laws of Motion," Henry Cavendish's "The Law of Gravitation," Heinrich Hertz's "Electromagnetic Waves," Niels Bohr's "The Hydrogen Atom," and more.
Modern Physics
The Quantum Physics of Atoms, Solids, and Nuclei: Third Edition
by Robert L. Sproull,W. Andrew Phillips
- Publisher : Courier Dover Publications
- Release : 2015-03-18
- Pages : 704
- ISBN : 048678326X
- Language : En, Es, Fr & De
Originally published: New York: Wiley, 1980.
Riemann, Topology, and Physics
A Book
by Michael I. Monastyrsky
- Publisher : Springer Science & Business Media
- Release : 2008-01-11
- Pages : 215
- ISBN : 9780817647780
- Language : En, Es, Fr & De
The significantly expanded second edition of this book combines a fascinating account of the life and work of Bernhard Riemann with a lucid discussion of current interaction between topology and physics. The author, a distinguished mathematical physicist, takes into account his own research at the Riemann archives of Göttingen University and developments over the last decade that connect Riemann with numerous significant ideas and methods reflected throughout contemporary mathematics and physics. Special attention is paid in part one to results on the Riemann–Hilbert problem and, in part two, to discoveries in field theory and condensed matter.
Introduction to Nonlinear Physics
A Book
by Lui Lam
- Publisher : Springer Science & Business Media
- Release : 2003-11-14
- Pages : 417
- ISBN : 9780387406145
- Language : En, Es, Fr & De
This textbook provides an introduction to the new science of nonlinear physics for advanced undergraduates, beginning graduate students, and researchers entering the field. The chapters, by pioneers and experts in the field, share a unified perspective. Nonlinear science developed out of the increasing ability to investigate and analyze systems for which effects are not simply linear functions of their causes; it is associated with such well-known code words as chaos, fractals, pattern formation, solitons, cellular automata, and complex systems. Nonlinear phenomena are important in many fields, including dynamical systems, fluid dynamics, materials science, statistical physics, and particle physics. The general principles developed in this text are applicable in a wide variety of fields in the natural and social sciences. The book will thus be of interest not only to physicists, but also to engineers, chemists, geologists, biologists, economists, and others interested in nonlinear phenomena. Examples and exercises complement the text, and extensive references provide a guide to research in the field.
The Cosmic Code
Quantum Physics as the Language of Nature
by Heinz R. Pagels
- Publisher : Courier Corporation
- Release : 2012-02-15
- Pages : 370
- ISBN : 0486485064
- Language : En, Es, Fr & De
" This is one of the most important books on quantum mechanics ever written for lay readers, in which an eminent physicist and successful science writer, Heinz Pagels, discusses and explains the core concepts of physics without resorting to complicated mathematics. "Can be read by anyone. I heartily recommend it!" -- New York Times Book Review. 1982 edition"--
Concepts in Surface Physics
2ème édition
by M.-C. Desjonqueres,D. Spanjaard
- Publisher : Springer Science & Business Media
- Release : 1996
- Pages : 605
- ISBN : 9783540586227
- Language : En, Es, Fr & De
A tutorial treatment of the main concepts of the physics of crystal surfaces. Emphasis is placed on simplified calculations and the corresponding detailed analytical derivations that are able to throw light on the most important physical mechanisms. More rigorous techniques, which often require a large amount of computer time, are also explained. Wherever possible, the theory is compared to practice, with the experimental methods being described from a theoretical rather than a technical viewpoint. The topics treated include thermodynamic and statistical properties of clean and adsorbate-covered surfaces, atomic structure, vibrational properties, electronic structure, and the theory of physisorption and chemisorption. The whole is rounded off with new exercises.
Concepts of Space
The History of Theories of Space in Physics
by Max Jammer
- Publisher : Courier Corporation
- Release : 1954
- Pages : 196
- ISBN : 0486271196
- Language : En, Es, Fr & De
Historical surveys of the concept of space considers Judeo-Christian ideas about space, Newton's concept of absolute space, space from 18th century to the present. Numerous original quotations and bibliographical references. "Admirably compact and swiftly paced style." — Philosophy of Science. Foreword by Albert Einstein.
Basics of Medical Physics
A Book
by Daniel Jirák,František Vítek
- Publisher : Charles University in Prague, Karolinum Press
- Release : 2018-02-01
- Pages : 226
- ISBN : 802463810X
- Language : En, Es, Fr & De
The textbook Basics of Medical Physics describes the basics of medical physics and the clinical and experimental methods that a physician may frequently encounter. Medical physics is specific in dealing with the application of physical methods to a living organism. Therefore, it represents an interdisciplinary scientific discipline that combines physics and the biological sciences. The presented textbook covers a broad range of topics; it contains eight chapters: Structure of Matter; Molecular Biophysics; Thermodynamics; Biophysics of Electric Phenomena; Acoustics and Physical Principles of Hearing; Optics; X-ray Physics and Medical Application; Radioactivity and Ionizing Radiation. The text is supplemented by many figures, which help to facilitate the understanding of the phenomena. The methods explained in the book are based on different physical principles. Some of these methods, e.g. using optical magnifying lenses or X-rays, have been known for more than 100 years, while others, such as magnetic resonance imaging or positron emission tomography, are more recent. After reading this book, readers should get a comprehensive overview of the possibilities of using various physical methods in medicine. They should be able to understand the mentioned physical relations in a broader context.
Averroes' Physics
A Turning Point in Medieval Natural Philosophy
by Ruth Glasner
- Publisher : OUP Oxford
- Release : 2009-06-18
- Pages : 240
- ISBN : 0191609978
- Language : En, Es, Fr & De
Ruth Glasner presents an illuminating reappraisal of Averroes' physics. Glasner is the first scholar to base her interpretation on the full range of Averroes' writings, including texts that are extant only in Hebrew manuscripts and have not been hitherto studied. She reveals that Averroes changed his interpretation of the basic notions of physics - the structure of corporeal reality and the definition of motion - more than once. After many hesitations he offers a bold new interpretation of physics which Glasner calls 'Aristotelian atomism'. Ideas that are usually ascribed to scholastic scholars, and others that were traced back to Averroes but only in a very general form, are shown not only to have originated with him, but to have been fully developed by him into a comprehensive and systematic physical system. Unlike earlier Greek or Muslim atomistic systems, Averroes' Aristotelian atomism endeavours to be fully scientific, by Aristotelian standards, and still to provide a basis for an indeterministic natural philosophy. Commonly known as 'the commentator' and usually considered to be a faithful follower of Aristotle, Averroes is revealed in his commentaries on the Physics to be an original and sophisticated philosopher.
Concepts in Thermal Physics
A Book
by Stephen Blundell,Katherine M. Blundell
- Publisher : Oxford University Press on Demand
- Release : 2010
- Pages : 493
- ISBN : 0199562091
- Language : En, Es, Fr & De
This text provides a modern introduction to the main principles of thermal physics, thermodynamics and statistical mechanics. The key concepts are presented and new ideas are illustrated with worked examples as well as descriptions of the historical background to their discovery.
The Ideas of Particle Physics
A Book
by James E. Dodd,Ben Gripaios
- Publisher : Cambridge University Press
- Release : 2020-09-24
- Pages : 328
- ISBN : 1108727409
- Language : En, Es, Fr & De
The fourth edition of this popular book is a comprehensive introduction to particle physics, including the latest ideas and discoveries.
Princeton Problems in Physics, with Solutions
A Book
by Nathan Newbury,M. Newman,Michael Newman,John Ruhl,Suzanne Staggs,Stephen Thorsett
- Publisher : Princeton University Press
- Release : 1991-02-21
- Pages : 319
- ISBN : 0691024499
- Language : En, Es, Fr & De
Aimed at helping the physics student to develop a solid grasp of basic graduate-level material, this book presents worked solutions to a wide range of informative problems. These problems have been culled from the preliminary and general examinations created by the physics department at Princeton University for its graduate program. The authors, all students who have successfully completed the examinations, selected these problems on the basis of usefulness, interest, and originality, and have provided highly detailed solutions to each one. Their book will be a valuable resource not only to other students but to college physics teachers as well. The first four chapters pose problems in the areas of mechanics, electricity and magnetism, quantum mechanics, and thermodynamics and statistical mechanics, thereby serving as a review of material typically covered in undergraduate courses. Later chapters deal with material new to most first-year graduate students, challenging them on such topics as condensed matter, relativity and astrophysics, nuclear physics, elementary particles, and atomic and general physics.
Physics in Biology and Medicine
A Book
by Paul Davidovits
- Publisher : Academic Press
- Release : 2008
- Pages : 328
- ISBN : 9780123694119
- Language : En, Es, Fr & De
This third edition covers topics in physics as they apply to the life sciences, specifically medicine, physiology, nursing and other applied health fields. It includes many figures, examples and illustrative problems and appendices which provide convenient access to the most important concepts of mechanics, electricity, and optics.
Condensed Matter Physics
A Book
by Michael P. Marder
- Publisher : Wiley
- Release : 2015-01-07
- Pages : 984
- ISBN : 9780470617984
- Language : En, Es, Fr & De
Now updated—the leading single-volume introduction to solid state and soft condensed matter physics This Second Edition of the unified treatment of condensed matter physics keeps the best of the first, providing a basic foundation in the subject while addressing many recent discoveries. Comprehensive and authoritative, it consolidates the critical advances of the past fifty years, bringing together an exciting collection of new and classic topics, dozens of new figures, and new experimental data. This updated edition offers a thorough treatment of such basic topics as band theory, transport theory, and semiconductor physics, as well as more modern areas such as quasicrystals, dynamics of phase separation, granular materials, quantum dots, Berry phases, the quantum Hall effect, and Luttinger liquids. In addition to careful study of electron dynamics, electronics, and superconductivity, there is much material drawn from soft matter physics, including liquid crystals, polymers, and fluid dynamics. Provides frequent comparison of theory and experiment, both when they agree and when problems are still unsolved Incorporates many new images from experiments Provides end-of-chapter problems including computational exercises Includes more than fifty data tables and a detailed forty-page index Offers a solutions manual for instructors Featuring 370 figures and more than 1,000 recent and historically significant references, this volume serves as a valuable resource for graduate and undergraduate students in physics, physics professionals, engineers, applied mathematicians, materials scientists, and researchers in other fields who want to learn about the quantum and atomic underpinnings of materials science from a modern point of view.
Quantum Trading
Using Principles of Modern Physics to Forecast the Financial Markets
by Fabio Oreste
- Publisher : John Wiley & Sons
- Release : 2011-08-02
- Pages : 240
- ISBN : 0470435127
- Language : En, Es, Fr & De
A cutting-edge guide to quantum trading Original and thought-provoking, Quantum Trading presents a compelling new way to look at technical analysis and will help you use the proven principles of modern physics to forecast financial markets. In it, author Fabio Oreste shows how both the theory of relativity and quantum physics are required to make sense of price behavior and forecast intermediate and long-term tops and bottoms. He relates his work to that of legendary trader W.D. Gann and reveals how Gann's somewhat esoteric theories are consistent with his applications of Einstein's theory of relativity and quantum theory to price behavior. Applies concepts from modern science to financial market forecasting Shows how to generate support/resistance areas and identify potential market turning points Addresses how non-linear approaches to trading can be used to both understand and forecast market prices While no trading approach is perfect, the techniques found within these pages have enabled the author to achieve a very attractive annual return since 2002. See what his insights can do for you.
Physics Essentials For Dummies
A Book
by Steven Holzner
- Publisher : John Wiley & Sons
- Release : 2019-05-07
- Pages : 192
- ISBN : 1119590280
- Language : En, Es, Fr & De
Physics Essentials For Dummies (9781119590286) was previously published as Physics Essentials For Dummies (9780470618417). While this version features a new Dummies cover and design, the content is the same as the prior release and should not be considered a new or updated product. For students who just need to know the vital concepts of physics, whether as a refresher, for exam prep, or as a reference, Physics Essentials For Dummies is a must-have guide. Free of ramp-up and ancillary material, Physics Essentials For Dummies contains content focused on key topics only. It provides discrete explanations of critical concepts taught in an introductory physics course, from force and motion to momentum and kinetics. This guide is also a perfect reference for parents who need to review critical physics concepts as they help high school students with homework assignments, as well as for adult learners headed back to the classroom who just need a refresher of the core concepts. The Essentials For Dummies Series Dummies is proud to present our new series, The Essentials For Dummies. Now students who are prepping for exams, preparing to study new material, or who just need a refresher can have a concise, easy-to-understand review guide that covers an entire course by concentrating solely on the most important concepts. From algebra and chemistry to grammar and Spanish, our expert authors focus on the skills students most need to succeed in a subject.
Nuclear Physics with Polarized Particles
A Book
by Hans Paetz gen. Schieck
- Publisher : Springer
- Release : 2011-11-01
- Pages : 182
- ISBN : 364224226X
- Language : En, Es, Fr & De
The measurement of spin-polarization observables in reactions of nuclei and particles is of great utility and advantage when the effects of single-spin sub-states are to be investigated. Indeed, the unpolarized differential cross-section encompasses the averaging over the spin states of the particles, and thus loses details of the interaction process. This introductory text combines, in a single volume, course-based lecture notes on spin physics and on polarized-ion sources with the aim of providing a concise yet self-contained starting point for newcomers to the field, as well as for lecturers in search of suitable material for their courses and seminars. A significant part of the book is devoted to introducing the formal theory—a description of polarization and of nuclear reactions with polarized particles. The remainder of the text describes the physical basis of methods and devices necessary to perform experiments with polarized particles and to measure polarization and polarization effects in nuclear reactions. The book concludes with a brief review of modern applications in medicine and fusion energy research. For reasons of conciseness and of the pedagogical aims of this volume, examples are mainly taken from low-energy installations such as tandem Van de Graaff laboratories, although the emphasis of present research is shifting to medium- and high-energy nuclear physics. Consequently, this volume is restricted to describing non-relativistic processes and focuses on the energy range from astrophysical energies (a few keV) to tens of MeV. It is further restricted to polarimetry of hadronic particles.
A Course in Theoretical Physics
A Book
by P. John Shepherd
- Publisher : John Wiley & Sons
- Release : 2013-01-10
- Pages : 488
- ISBN : 1118516923
- Language : En, Es, Fr & De
This book is a comprehensive account of five extended modules covering the key branches of twentieth-century theoretical physics, taught by the author over a period of three decades to students on bachelor and master university degree courses in both physics and theoretical physics. The modules cover nonrelativistic quantum mechanics, thermal and statistical physics, many-body theory, classical field theory (including special relativity and electromagnetism), and, finally, relativistic quantum mechanics and gauge theories of quark and lepton interactions, all presented in a single, self-contained volume. In a number of universities, much of the material covered (for example, on Einstein’s general theory of relativity, on the BCS theory of superconductivity, and on the Standard Model, including the theory underlying the prediction of the Higgs boson) is taught in postgraduate courses to beginning PhD students. A distinctive feature of the book is that full, step-by-step mathematical proofs of all essential results are given, enabling a student who has completed a high-school mathematics course and the first year of a university physics degree course to understand and appreciate the derivations of very many of the most important results of twentieth-century theoretical physics.
Physics Experiments and Projects for Students
A Book
by C. Isenberg,S. Chomet
- Publisher : CRC Press
- Release : 1996-07-01
- Pages : 250
- ISBN : 9781560322801
- Language : En, Es, Fr & De
Based on a collection of undergraduate experiments and projects developed at universities and colleges in the UK. The experiments have been tried and tested by students and their lecturers for several years. | https://www.seecoalharbour.com/physics/ |
Narwhal tusk function is being debated after a new theory by Martin Nweeia and his team of researchers suggests it is used to sense changes in the narwhal's environment. Biologists were quick to point out that observing narwhals has produced very little evidence that this is the case, but Nweeia's dental medicine background leads him to believe otherwise. The narwhal tusk, which can grow to nine feet long and is most often sported by males, is actually a tooth, and Nweeia thinks that this makes a world of difference when thinking about its function.
Observing the anatomy of the tooth reveals three distinct layers, similar to our own teeth. The external layer covers the softer material called dentin, which covers the pulp of nerves and blood vessels that run through the extremity. In contrast to our own teeth, where there are no connections between the layers due to the extreme sensitivity of the nerves contained within, the tusk of the narwhal features small channels at the barriers between each layer that allow sea water to reach the sensitive pulp beneath the tough exterior. Although it seems counterintuitive for an arctic whale to want frigid water to constantly be flowing over exposed nerves, Nweeia theorizes that the function of the narwhal tusk is to allow the narwhal to detect changes in salinity in the water it swims through.
In studies of live narwhals conducted by Nweeia and his team, the heart rate of captured narwhals seemed to rise and fall in response to changing levels of salt in the water they occupied. But the tusk's function is debated by biologists, who point out that a feature that imparts such a helpful ability would be expected in females as well as males, especially since, as mammals, narwhals are critically dependent on females to maintain their population. Although some females have been seen with tusks of their own, these are most often quite short; only the males have ever been seen with the iconic long spiral tooth.
While there is agreement that, being a tooth, the tusk is indeed sensitive, biologists contest the conclusion that the ability to detect changes in salinity was the reason for the change in heart rate in captured narwhals. As many know, animals tend to be stressed when captured, and the narwhals studied by Nweeia and his team had just been caught in nets and taken to shallow water for observation. Biologists believe that the change in heart rate that was observed had less to do with changing salt levels and everything to do with the whales being observed in a stressful environment.
As it stands, the official explanation is that the tusk is used to attract a mate, similar to bright plumage or impressive antlers. Due to their rarity and elusive nature, narwhals are difficult to capture and even more difficult to study, which may make it impossible to ever know for sure what the exact use of the tusk is. As the narwhal tusk function is debated between biologists and dentists, everyone involved may be long in the tooth before a conclusive answer is found. | https://guardianlv.com/2014/03/narwhal-tusk-function-debated/ |
Giving Western literature and art many of its most enduring themes and archetypes, Greek mythology and the gods and goddesses at its core are a fundamental part of the popular imagination. At the heart of Greek mythology are exciting stories of drama, action, and adventure featuring gods and goddesses, who, while physically superior to humans, share many of their weaknesses. Readers will be introduced to the many figures once believed to populate Mount Olympus as well as related concepts and facts about the Greek mythological tradition. | https://shop.eb.com/collections/world-story-telling-day/products/greek-gods-goddesses |
Nessie English Preschool believes in enriching your child’s learning, boosting self-confidence and developing an understanding of the community through the discovery of the outside world and through out-of-hours learning, public performances and cultural visits.
We run a successful after-school clubs programme, sponsor cultural events for our students and arrange visits to local venues to help enrich our in-class learning.
We believe that school trips, special projects and events are an integral part of our teaching programme and are essential for the development of our students. Throughout the school year, Nessie organises various educational field trips, and athletic and cultural events that are related to the curriculum.
Performances and special assemblies allow your child to shine! At various times of the year students have opportunities to perform for an audience and share their musical talents and new-found language skills with you and other family members and friends. | http://www.nessie.cz/en/clubs-1404042000.html |
Fine jacquard halter top with eye print knitted into the fabric. Straps tie at the back and neck.
Details:
Lola is 5'7", bust: 34" waist: 26" hips: 35" wearing a size S
(worn with St. Agni flared knit pants and crossover platform sandals)
About the brand:
Founded in 2014 by Paloma Lanna, Paloma Wool is a clothing brand and creative project based in Barcelona, Spain. With a focus on creating timeless pieces that are produced locally, the brand channels a love of photography and art through frequent collaborations with local contemporary artists and designers. | https://shopadhoc.com/products/paloma-wool-visto-halter-top-ecru |
Welcome to 9784 Fahrenheit to Celsius, or 9784 F to C in short. Here you can find what 9784 degrees Fahrenheit to Celsius is, along with a temperature converter and the formula. For 9784 (degrees) Fahrenheit we write 9784 °F, and (degrees) Celsius or centigrades are denoted with the symbol °C. So if you have been looking for 9784 °F to °C, then you are right here, too. Read on below to learn everything about the temperature conversion.
Formula
The 9784 Fahrenheit to Celsius formula is: [°C] = ([°F] − 32) x 5 ⁄ 9. Therefore, we get:
9784 °F in °C = 5417.778 Celsius
9784 F in C = 5417.778 degrees Celsius
Next, we explain the math.
Conversion
To convert the temperature, start by deducting 32 from 9784 to get 9752. Then multiply 9752 by 5 ⁄ 9 to obtain 5417.778 degrees Celsius. Easier, however, is using our converter above.
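For readers who prefer to script the conversion, here is a minimal R sketch of the same formula (the function name is my own; the page itself only offers an interactive converter):

# Convert degrees Fahrenheit to degrees Celsius: C = (F - 32) * 5/9
fahrenheit_to_celsius <- function(f) {
  (f - 32) * 5 / 9
}

round(fahrenheit_to_celsius(9784), 3)  # 5417.778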
Ahead is the wrap-up of our content.
Summary
How much is 9784 degrees Fahrenheit in Celsius? By reading our article so far, or by means of our converter, you already know the answer: 9784 °F equals 5417.778 °C.
9784 Fahrenheit in other temperature units is:
- Newton: 1787.867 °N
- Kelvin: 5690.928 K
- Réaumur: 4334.222 °Ré
- Rømer: 2851.833 °Ro
- Delisle: -7976.667 °De
- Rankine: 10243.67 °R
This ends our post about 9784 °F to °C.
If you have anything to tell us, or would like to ask something about 9784 F in C, then fill in the form below. And if this article about 9784 F to Celsius has been helpful to you, then please hit the social buttons.
Thanks for visiting fahrenheittocelsius.org. | https://fahrenheittocelsius.org/9784-f-to-c |
answering the complex scientific questions that may arise.
The budget includes $34.9 million in program increases and $15 million in fixed costs, offset by $87.8 million in reductions for lower priority efforts and unrequested increases.
"The USGS is committed to providing timely, objective scientific information in support of key departmental and presidential priorities, including Water for America, Birds Forever, Healthy Lands, and Ocean and Coastal Frontiers," said USGS Director Mark Myers. "The proposed budget will also strengthen our efforts in climate change studies, priority ecosystems research and the development of a National Land Imaging Program."
The 2009 budget includes a net increase of $8.2 million to support the water census component of the $21.3 million Water for America Initiative with the Bureau of Reclamation. To support the water census, the National Streamflow Information Program is funded at $23.8 million, including an increase of $3.7 million to upgrade 350 stream gauges with real-time telemetry and to reinstate 50 discontinued stream gauges in 2009. Increases of $3 million for the Ground-Water Resources Program and $1.5 million for Cooperative Geologic Mapping will provide additional support for the water census by increasing knowledge related to groundwater resources.
The Birds Forever Initiative is a joint effort between the U.S. Fish and Wildlife Service and the USGS. A proposed $1 million increase to support this initiative will fund USGS efforts to better understand large-scale drivers of migratory bird population and habitat change such as global warming, deforestation and urban development. This initiative supports monitoring efforts, including the Breeding Bird Survey and other migratory bird monitoring activities.
The budget also proposes a $3.5 million increase to expand activities in support of the Healthy Lands Initiative, a multi-bureau initiative in which the USGS is a significant partner. Continuing work in southwest Wyoming, the USGS will conduct an ecological assessment in Healthy Lands Initiative areas to develop a baseline of scientific information related to wildlife habitat and development activities occurring or planned. Tools, models, and protocols developed will be transferred and applied to other areas.
In addition, the proposed budget includes an increase of $7 million for oceans science in support of the Department's Ocean and Coastal Frontiers Initiative and for completing the work started in 2008 on the U.S. Ocean Action Plan. Coastal and Marine Geology is funded at $47.4 million. An increase of $4 million will be used to collect data for the extended Continental Shelf of the Arctic Ocean, working with the National Oceanic and Atmospheric Administration, to support the nation's claim to its mineral and energy rights in the extended Continental Shelf. An additional $2 million will be used to conduct merit-based ocean research projects, and $1 million will complete funding for efforts in seafloor mapping, models to forecast response to extreme weather events, and developing a water quality monitoring network.
The 2009 budget reflects a restructuring to create a global change activity and sustains $5 million of the $7.4 million increase in 2008 for climate change science. The 2009 request of $26.6 million includes $21.6 million in base funds to continue current global change research, $4 million to establish a pilot program in Alaska for a national climate change network, and $1 million for climate change adaptation studies. These components will provide critical monitoring information needed for predictive modeling.
The 2009 budget consolidates funding for a new Global Change Activity totaling $26.6 million that is supported by an additional $4.8 million in Climate Change Science, bringing total climate change funding to $31.4 million.
Priority ecosystems studies have a proposed budget of $10.4 million. The USGS will continue funding for work in the Greater Everglades, Chesapeake Bay, San Francisco Bay, the Mojave Desert, the Platte River, and Yellowstone.
Land Remote Sensing is funded at $62.6 million, including a programmatic increase of $2 million to develop a National Land Imaging Program. This program will assess the future need for civil, operational land imaging data and develop a blueprint to determine future needs for acquisition of satellite data to supplement Landsat 7 imagery.
In order to focus programs on activities that are inherently governmental and to concentrate on highest priority research, the President's 2009 budget reduces funding to the Mineral Resources and the National Water Quality Assessment (NAWQA) programs. A $24.6 million net reduction to Mineral Resource Assessments is proposed, which will result in a 2009 program of $26.3 million. A $10.9 million net reduction to NAWQA is proposed for a total 2009 program of $54.1 million. The budget also reduces the Earthquake Hazards Program by $5 million, retaining $49.1 million for the highest priority earthquake research projects.
For more information on the proposed FY 2009 budget, visit www.usgs.gov.
| https://eponline.com/articles/2008/02/07/bush-budget-shifts-focus-for-usgs.aspx |
Dr. Vanessa Cave, 10 May 2022
Having spent over 15 years working as an applied statistician in the biosciences, I’ve come across my fair share of animal studies. And one of my greatest bugbears is that the full value is rarely extracted from the experimental data collected. This could be because the best statistical approaches haven’t been employed to analyse the data, the findings are selectively or incorrectly reported, other research programmes that could benefit from the data don’t have access to it, or the data aren’t re-analysed following the advent of new statistical methods or tools that have the potential to draw greater insights from it.
An enormous number of scientific research studies involve animals, and with this come many ethical issues and concerns. To help ensure high standards of animal welfare in scientific research, many governments, universities, R&D companies, and individual scientists have adopted the principles of the 3Rs: Replacement, Reduction and Refinement. Indeed, in many countries the tenets of the 3Rs are enshrined in legislation and regulations around the use of animals in scientific research.
| Replacement | Use methods or technologies that replace or avoid the use of animals. |
| Reduction | Limit the number of animals used. |
| Refinement | Refine methods in order to minimise or eliminate negative animal welfare impacts. |
In this blog, I’ll focus on the second principle, Reduction, and argue that statistical expertise is absolutely crucial for achieving reduction.
The aim of reduction is to minimise the number of animals used in scientific research whilst balancing against any additional adverse animal welfare impacts and without compromising the scientific value of the research. This principle demands that before carrying out an experiment (or survey) involving animals, the researchers must consider and implement approaches that both minimise the number of animals used in the current study and reduce the number of animals needed in future research.
Both these considerations involve statistical thinking. Let’s begin by exploring the important role statistics plays in minimising current animal use.
Reduction requires that any experiment (or survey) carried out must use as few animals as possible. However, with too few animals the study will lack the statistical power to draw meaningful conclusions, ultimately wasting animals. But how do we determine how many animals are needed for a sufficiently powered experiment? The necessary starting point is to establish clearly defined, specific research questions. These can then be formulated into appropriate statistical hypotheses, for which an experiment (or survey) can be designed.
Statistical expertise in experimental design plays a pivotal role in ensuring enough of the right type of data are collected to answer the research questions as objectively and as efficiently as possible. For example, sophisticated experimental designs involving blocking can be used to reduce random variation, making the experiment more efficient (i.e., increasing the statistical power with fewer animals) as well as guarding against bias. Once a suitable experimental design has been decided upon, a power analysis can be used to calculate the required number of animals (i.e., determine the sample size). Indeed, a power analysis is typically needed to obtain animal ethics approval, a formal process in which the benefits of the proposed research are weighed up against the likely harm to the animals.
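As a flavour of what such a power analysis looks like in practice, the short R sketch below uses the base-stats function power.t.test() for a simple two-group comparison; the effect size, standard deviation and power are hypothetical values chosen for illustration, not figures from any particular study:

# How many animals per group are needed to detect a difference of 5 units
# between two treatment means, assuming a residual standard deviation of 6,
# a 5% significance level and 80% power? (All values are hypothetical.)
power.t.test(delta = 5, sd = 6, sig.level = 0.05, power = 0.8)
# Returns n of roughly 24 animals per group.

More complex designs, for example those involving blocking or repeated measures, usually call for simulation-based power calculations, but the principle is the same.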
Researchers also need to investigate whether pre-existing sources of information or data could be integrated into their study, enabling them to reduce the number of animals required. For example, by means of a meta-analysis. At the extreme end, data relevant to the research questions may already be available, eradicating the need for an experiment altogether!
An obvious mechanism for minimising future animal use is to ensure we do it right the first time, avoiding the need for additional experiments. This is easier said than done; there are many statistical and practical considerations at work here. The following paragraphs cover four important steps in experimental research in which statistical expertise plays a major role: data acquisition, data management, data analysis and inference.
Above, I alluded to the validity of the experimental design. If the design is flawed, the data collected will be compromised, if not essentially worthless. Two common mistakes to avoid are pseudo-replication and the lack of (or poor) randomisation. Replication and randomisation are two of the basic principles of good experimental design. Mistaking pseudo-replication (either at the design or analysis stage) for genuine replication will lead to invalid statistical inferences. Randomisation is necessary to ensure the statistical inference is valid and to guard against bias.
Another extremely important consideration when designing an experiment, and setting the sample size, is the risk and impact of missing data due, for example, to animal drop-out or equipment failure. Missing data results in a loss of statistical power, complicates the statistical analysis, and has the potential to cause substantial bias (and potentially invalidate any conclusions). Careful planning and management of an experiment will help minimise the amount of missing data. In addition, safe-guards, controls or contingencies could be built into the experimental design that help mitigate against the impact of missing data. If missing data does result, appropriate statistical methods to account for it must be applied. Failure to do so could invalidate the entire study.
It is also important that the right data are collected to answer the research questions of interest. That is, the right response and explanatory variables measured at the appropriate scale and frequency. There are many statistics-related questions the researchers must answer, including: what population do they want to make inference about? How generalisable do they need their findings to be? What controllable and uncontrollable variables are there? Answers to these questions affect not only the enrolment of animals into the study, but also the conditions they are subjected to and the data that should be collected.
It is essential that the data from the experiment (including meta-data) are appropriately managed and stored to protect their integrity and ensure their usability. If the data get messed up (e.g., if different variables measured on the same animal cannot be linked), are undecipherable (e.g., if the attributes of the variables are unknown) or are incomplete (e.g., if the observations aren’t linked to the structural variables associated with the experimental design), the data are likely worthless. Statisticians can offer invaluable expertise in good data management practices, helping to ensure the data are accurately recorded, the downstream results from analysing the data are reproducible and the data themselves are reusable at a later date, possibly by a different group of researchers.
Unsurprisingly, it is also vitally important that the data are analysed correctly, using the methods that draw the most value from it. As expected, statistical expertise plays a huge role here! The results and inference are meaningful only if appropriate statistical methods are used. Moreover, often there is a choice of valid statistical approaches; however, some approaches will be more powerful or more precise than others.
Having analysed the data, it is important that the inference (or conclusions) drawn are sound. Again, statistical thinking is crucial here. For example, in my experience, one all too common mistake in animal studies is to accept the null hypothesis and erroneously claim that a non-significant result means there is no difference (say, between treatment means).
The other important mechanism for minimising future animal use is to share the knowledge and information gleaned. The most basic step here is to ensure that all the results are correctly and non-selectively reported. Reporting all aspects of the trial, including the experimental design and statistical analysis, accurately and completely is crucial for the wider interpretation of the findings, reproducibility and repeatability of the research, and for scientific scrutiny. In addition, all results, including null results, are valuable and should be shared.
Sharing the data (or resources, e.g., animal tissues) also contributes to reduction. The data may be able to be re-used for a different purpose, integrated with other sources of data to provide new insights, or re-analysed in the future using a more advanced statistical technique, or for a different hypothesis.
Another avenue that should also be explored is whether additional data or information can be obtained from the experiment, without incurring any further adverse animal welfare impacts, that could benefit other researchers and/or future studies. For example, to help address a different research question now or in the future. At the outset of the study, researchers must consider whether their proposed study could be combined with another one, whether the research animals could be shared with another experiment (e.g., animals euthanized for one experiment may provide suitable tissue for use in another), what additional data could be collected that may be (or already is!) of future use, etc.
Statistical thinking clearly plays a fundamental role in reducing the number of animals used in scientific research, and in ensuring the most value is drawn from the resulting data. I strongly believe that statistical expertise must be fully utilised throughout the duration of the project, from design through to analysis and dissemination of results, in all research projects involving animals to achieve reduction. In my experience, most researchers strive for very high standards of animal ethics, and absolutely do not want to cause unnecessary harm to animals. Unfortunately, the role statistical expertise plays here is not always appreciated or taken advantage of. So next time you’re thinking of undertaking research involving animals, ensure you have expert statistical input!
Dr. Vanessa Cave is an applied statistician interested in the application of statistics to the biosciences, in particular agriculture and ecology, and is a developer of the Genstat statistical software package. She has over 15 years of experience collaborating with scientists, using statistics to solve real-world problems. Vanessa provides expertise on experiment and survey design, data collection and management, statistical analysis, and the interpretation of statistical findings. Her interests include statistical consultancy, mixed models, multivariate methods, statistical ecology, statistical graphics and data visualisation, and the statistical challenges related to digital agriculture.
Vanessa is currently President of the Australasian Region of the International Biometric Society, past-President of the New Zealand Statistical Association, an Associate Editor for the Agronomy Journal, on the Editorial Board of The New Zealand Veterinary Journal and an honorary academic at the University of Auckland. She has a PhD in statistics from the University of St Andrews.
Related Reads
Dr. Andrew Illius and Dr. Nick Savill, 20 July 2022
Quantification holds the key to controlling disease
Background
Since 2018, Andrew Illius and Nick Savill have studied the epidemiology and control of maedi-visna virus (MV) in sheep, an incurable disease. Drawing on published data and using Genstat, they aimed to understand the disease and find ways of controlling it.
When one of your sheep gets diagnosed with an incurable disease, you have to worry. How quickly will symptoms develop, and welfare and productivity suffer? And how soon will it spread throughout the rest of the flock? The disease in question is maedi-visna (MV, see Box 1), notorious for its impact in Iceland, where the disease was first described; extreme measures over 20 years were required before it was finally eliminated. Culling seropositive animals is the main means of control. For the farmer, the crucial question is whether living with the disease would be more expensive than trying to eradicate it. We are addressing such questions by analysing data from long-term experiments.
1 MV – the tip of an iceberg?
Putting aside for a moment MV’s fearsome reputation, the way the pathogen works is fascinating. The small ruminant lentiviruses (SRLV, family retroviridae) are recognised as a heterogeneous group of viruses that infect sheep, goats and wild ruminants. Lentiviruses target the immune system, but SRLV does not target T-cells in the manner of immune deficiency lentiviruses such as HIV. Instead, SRLV infects monocytes (a type of white blood cell) which infiltrate the interstitial spaces of target organs (such as the lungs, mammary glands, or the synovial tissue of joints) carrying proviral DNA integrated into the host cell genome and hence invisible to the immune system. Virus replication commences following maturation of monocytes into macrophages, and the ensuing immune response eventually shows up as circulating antibodies (termed seroconversion). But it also causes inflammation that attracts further macrophages, slowly and progressively building into chronic inflammatory lesions and gross pathology. These take years to present clinical symptoms, hence the name lentivirus (from the Latin lentus, slow). By the time clinical signs become evident in a flock, the disease will have become well-established, with perhaps 30-70% of the flock infected. That is why MV is called one of the iceberg diseases of sheep – for every obviously affected individual, there are many others infected, but without apparent symptoms.
A large body of research into the pathology, immunology and molecular biology of small ruminant lentiviruses (SRLV) exists, as might be expected given its economic significance, welfare implications and its interest as a model for HIV. The main route of transmission of the virus is thought to be horizontal, via exhaled droplets of the highly infectious fluid from deep in the lungs of infected animals, suggesting a risk from prolonged close contact, for example in a sheep shed. But despite all the research into disease mechanisms, we were surprised to find that there has been almost no quantitative analysis of SRLV epidemiology, nor even an estimation of the rate of SRLV transmission under any management regime. So, our first foray into the data aimed to rectify this.
We found an experiment published in 1987 with excellent detail on a five-year timecourse of seroconversions in a small infected sheep flock, and a further trawl of the Internet netted a PhD thesis that built on this with a focussed experiment. Karianne Lievaart-Peterson, its author, runs the Dutch sheep health scheme and became a collaborator. We also worked with Tom McNeilly, an immunologist at the Moredun Research Institute.
Nick Savill, a mathematical epidemiologist at Edinburgh University, did the hard work of developing and parameterising a mathematical model based on infectious disease epidemiology and a priori known and unknown aspects of SRLV biology. The model determines the probability of a susceptible ewe seroconverting when it did, and of a susceptible ewe not seroconverting before it was removed from the flock or the experiment ended. The product of these probabilities gives the likelihood of the data given the model. The model was prototyped in Python and then written in C for speed.
The striking result of this research is that MV is a disease of housing. Even brief periods of housing allow the virus to spread rapidly, but transmission is negligible between sheep kept on pasture. So, although individual sheep never recover from the disease, it could be eliminated from flocks over time by exploiting the fact that transmission of the virus is too slow between grazing sheep to sustain the disease.
Our second striking result suggests the disease is unlikely to be spread by newly-infected animals, contrary to general expectation. We estimated that the time between an animal being infected and becoming infectious is about a year. This delay, termed epidemiological latency, is actually longer than the estimated time delay between infection and seroconversion.
We can now begin to see more clearly how disease processes occurring in the infected individual shape what happens at the flock, or epidemiological, level. It seems that, after a sheep becomes infected, the disease slowly progresses to the point when there is sufficient free virus to be recognised by the immune system, but then further development of inflammatory lesions in the lungs has to take place before there are sufficient quantities of infective alveolar macrophages and free virus for transmission by the respiratory route. There follows a further delay, perhaps of some years, before the disease has advanced to the stage at which symptoms such as chronic pneumonia and hardening of the udder become apparent.
Infectiousness is expected to be a function of viral load, and although we do not know the timecourse of viral load, it seems most likely that it continues to increase throughout the development of chronic disease. This suggests to us that the infectiousness of an individual is not constant, but is likely to increase as the disease progresses and symptoms emerge.
We are interested in learning how infectiousness changes over the course of an individual’s infection because of the implications at the epidemiological level. Time delays in seroconversion merely make the disease more difficult to detect and control, but the epidemiological significance of a time delay in the development of infectiousness is that it acts to slow the spread of the virus. And if ewes with long-standing infections are the most infectious, they pose the greatest risk to uninfected sheep. This would present an opportunity for the management of exposure to slow the spread of disease. For example, if ewes in their last year of production are the most infectious, then young ewes should be kept away from them when housed – an idea supported by preliminary analysis using individual-based modelling (IBM – see Box 2). Separation of younger animals from older ones may reduce the prevalence of infection to the point where the costs of disease, in terms of lost production and poor welfare, are not overwhelming or at least are less than the cost of attempting to eliminate the disease – we discuss this later in this blog.
So far, there is only very limited and tentative evidence of increasing infectiousness in the literature, and direct experimental evidence would be very hard to come by. But it is plausible that disease severity, viral load and impaired productivity are all related to the extent of inflammatory lesions in the lungs. This suggests that measurably-impaired productivity in infected sheep could be used as a proxy for viral load, and hence infectiousness. And that brings us to our current project.
2 Individual-based modelling
This is a technique to explore the consequences of probabilistic events, such as becoming infected by SRLV. The flock of ewes is modelled as individuals, and their progress through life is followed. Flock replacements are taken from the ewe lambs born to the flock; all other lambs being sold.
The figure shows the mean results (green line) of 1000 iterations of a stochastic simulation of SRLV prevalence in a flock of 400 ewes housed in groups of 100 for one month per year. The probability that an infected ewe will transmit the virus is modelled as rising exponentially with time since infection. The management regime we modelled was to segregate ewes during housing into each of their four age groups (2, 3, 4 and 5 years old) in separate pens, and to sell all the lambs of the oldest ewes, rather than retain any as flock replacements. From an initial starting prevalence of 275 infected ewes, the virus is virtually eliminated from the flock.
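For readers unfamiliar with the approach, the R sketch below shows the bare bones of a stochastic, individual-based simulation of this kind. It is a deliberately simplified toy with made-up parameter values and no ewe ageing, flock replacement or age segregation, so it illustrates the mechanics only; it is not the authors' parameterised model:

# Toy individual-based simulation of SRLV spread during annual housing.
# All parameter values are hypothetical.
set.seed(1)
n_ewes   <- 400      # flock size
n_years  <- 15       # years simulated
beta0    <- 0.002    # assumed baseline per-contact transmission probability
growth   <- 1.5      # assumed yearly multiplier on infectiousness
contacts <- 30       # assumed effective contacts per ewe while housed

inf_age <- c(rep(1, 275), rep(NA, n_ewes - 275))  # years since infection; NA = susceptible
prevalence <- numeric(n_years)

for (yr in seq_len(n_years)) {
  # Per-contact transmission probability rises with time since infection
  p_transmit <- ifelse(is.na(inf_age), 0, beta0 * growth^(inf_age - 1))
  # Probability a susceptible ewe escapes infection from all of its housed contacts
  p_escape <- (1 - mean(p_transmit))^contacts
  susceptible <- which(is.na(inf_age))
  newly_infected <- susceptible[runif(length(susceptible)) > p_escape]
  inf_age[newly_infected] <- 0
  inf_age <- inf_age + 1                 # infections age by one year (NA stays NA)
  prevalence[yr] <- mean(!is.na(inf_age))
}

plot(seq_len(n_years), prevalence, type = "b",
     xlab = "Year", ylab = "Proportion of flock infected")

In the model described above, the same machinery is extended with ewe ageing, replacement from home-bred ewe lambs and age-segregated housing, which is what produces the decline in prevalence shown in the figure.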
Eliminating SRLV from an infected flock involves either repeated testing of the whole flock, culling reactors and perhaps also artificially rearing lambs removed at birth, or entirely replacing the flock with accredited disease-free stock. So, the cost of eliminating the virus from a flock can be huge. But what about the costs of living with it? These costs arise from poor welfare leading to lost production: lactation failure, reduced lamb growth and excess lamb and ewe mortality. But under what conditions are they so substantial as to warrant an elimination strategy? That depends, again, on answers at two levels: what are the production losses for each infected ewe, and how prevalent is the disease in the flock?
We have a number of reasons to want to quantify how the costs of being SRLV+ vary over the time-course of the disease. First, it is reasonable to assume that production losses will be related to the emergence of symptoms in affected sheep, but this has never been adequately quantified. Second, if production losses are a function of the duration of infection, and we can regard them as a proxy for viral load, then it would support the idea that infectiousness also increases as the disease progresses. And third, if production losses are only apparent in sheep with long-standing infections, which is therefore restricted to older sheep, then management could focus on early detection of symptoms and culling of older ewes.
We are quantifying these processes using a large dataset from colleagues at the Lublin University of Life Sciences. Their six-year study was designed to assess the response of production parameters to SRLV infection in a flock of breeding sheep kept under standard Polish husbandry conditions. They published results suggesting that infection with SRLV was associated with higher rates of age-specific ewe mortality, especially in older ewes.
The data comprise lambing records for about 800 ewes from three breeds, with over 300 ewes being present each year and a few being present every year. There are also records from about 2800 lambs born during the trial. Ewes were blood-tested in November and June each year, and all SRLV+ ewes were housed together following the November test until the lambs were weaned in April. SRLV- ewes were housed in the same shed, but segregated from the SRLV+ group. We were able to group the ewes on the basis of the series of blood test results as: (1) seroconverted before the trial began, (2) had not seroconverted by the end, and (3) seroconverted during the trial and for whom a time since seroconversion can be estimated to within about six months.
Given the nature of the data (unbalanced design, multiple observations from individual ewes and rams over several years, and different breeds), we used Genstat to fit mixed models to distinguish random and fixed effects. We were given access to Genstat release 22 beta, which adds greater functionality for displaying and saving output, producing predictions and visualising the fit of the model.
The example below addresses pre-weaning lamb mortality (mort, either 0 or 1). We are using a generalized linear mixed model where significant fixed terms were added stepwise. The ewes and rams used to produce these lambs are obvious random terms because they can be regarded as being drawn at random from a large population. There also appears to be a strong ewe.ram interaction, with some combinations faring differently from others. We included ‘year’ as a random term because, over the six years in which data were collected, factors such as flock size and forage quality varied somewhat randomly.
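The analysis itself was run in Genstat, but for readers who think in R the model has roughly the following shape. This is a hedged sketch using lme4::glmer() with the variable names quoted in the blog; the data frame name is hypothetical and this is not the authors' actual code:

library(lme4)

# Pre-weaning lamb mortality (mort, 0/1) modelled with a binomial GLMM.
# Fixed terms as described in the blog: lamb birthweight and its square,
# ewe age as a factor, the ewe's November SRLV status, and the age-by-status
# interaction. Random terms: ewe, ram, their combination, and year.
mort_glmm <- glmer(
  mort ~ lambbirthwt + lb2 + eweageF * ewetestNov +
    (1 | ewe) + (1 | ram) + (1 | ewe:ram) + (1 | year),
  family = binomial,
  data   = lambs  # hypothetical data frame of the ~2800 lamb records
)
summary(mort_glmm)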
The fixed terms show that the probability of mortality is strongly affected by lamb birthweight (lambbirthwt). A quadratic term (lb2) models the well-known reduction in lamb survival in very large lambs, a consequence of birth difficulties. The age of the ewe, fitted as a factor (eweageF), is the next most significant fixed effect, followed by the SRLV status of the ewe tested in November prior to lambing (ewetestNov). The interaction term of ewe age with SRLV status is highly significant, showing that the way the ageing process in ewes affects the probability of their lambs’ mortality differs according to SRLV status. From the table of back-transformed means, we see that the probability of lamb mortality ranges from about 0.02 to 0.04 in SRLV- ewes aged from 2 to 5 years, perhaps declining in older ewes. SRLV+ ewes show similar lamb mortality in ages 2-4, but a progressive increase as ewes age further, almost doubling each year.
This preliminary analysis provides some evidence that the costs of being infected by SRLV are, indeed, progressive with age. There is some way to go yet to show whether sheep with longer-standing SRLV infection have higher viral loads and are more infectious, but our current research does point the way to potentially better disease control by targeting older animals. Maedi-visna isn’t called a progressive disease for nothing, and we should be able to exploit that.
We finally submitted our paper for publication in November 2019, just before the Covid 19 pandemic. One might have thought that a paper on the epidemiology and control of a respiratory virus spread by aerosol, especially indoors in close proximity and with a recommendation for social distancing, would have seemed quite topical. Ironically, early 2020 saw the subeditors and reviewers of such work all being urgently re-allocated to analysing information on the burgeoning pandemic. But we got there eventually and by October we were proudly writing a press release recommending social distancing … in sheep.
Andrew Illius writes, “My experience of Genstat dates back to the early 1980s when I think Release 3 was current. It was hard going, and we queued up to have our fault codes diagnosed at the Genstat Clinic. But having learnt Fortran programming in the punched cards era, I was used to it taking several days to get a job to run. Genstat’s exacting requirements were reassuring and it became indispensable over the following years of agricultural and ecological research. By the 1990s we had learnt that mixed models were required to account separately for random and fixed effects in unbalanced data, and I’d been on a REML course. I was especially proud to use REML as my main analytical procedure thereafter because Robin Thompson invented it just down the corridor where we work in the Zoology building at Edinburgh University, and where he worked with the Animal Breeding group. It’s been a tremendous pleasure to get back to Genstat recently after many years away – like greeting an old friend. In the past, I wrote and submitted batch jobs on a UNIX mainframe before collecting some line-printer output on my way home. Now things have really speeded up, with the menu-driven environment of the Windows version. It’s a fantastic improvement, and a pleasure to use.”
Andrew Illius is Emeritus Prof of Animal Ecology in the Institute of Evolutionary Biology, University of Edinburgh, where he taught animal production and animal ecology from 1978 to 2008 and was latterly Head of the School of Biological Sciences. Most of his work has been on the ecology and management of grazing systems and the ecophysiology and behaviour of grazing animals. He retired in 2008 to spend more time with his sheep, keeping about 400 breeding ewes. Familiarity with sheep diseases led to collaboration with Nick Savill since 2018 on the epidemiology and control of MV.
Nick Savill is a Senior Lecturer at the Institute of Immunology and Infection Research, University of Edinburgh. He teaches a range of quantitative skills to undergraduate biological science students including maths, stats, data analysis and coding. His research interests are in mathematical modelling of infectious disease epidemiology. He has worked on foot and mouth disease, avian influenza, malaria, trypanosomiasis and, most recently, maedi-visna with Andrew Illius.
Illius AW, Lievaart-Peterson K, McNeilly TN, Savill NJ (2020) Epidemiology and control of maedi-visna virus: Curing the flock. PLoS ONE 15 (9): e0238781. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0238781
Kanchana Punyawaew, 1 March 2021
Linear mixed models: a balanced lattice square
This blog illustrates how to analyze data from a field experiment with a balanced lattice square design using linear mixed models. We’ll consider two models: the balanced lattice square model and a spatial model.
The example data are from a field experiment conducted at Slate Hall Farm, UK, in 1976 (Gilmour et al., 1995). The experiment was set up to compare the performance of 25 varieties of barley and was designed as a balanced lattice square with six replicates laid out in a 10 x 15 rectangular grid. Each replicate contained exactly one plot for every variety. The variety grown in each plot, and the coding of the replicates and lattice blocks, is shown in the field layout below:
There are seven columns in the data frame: five blocking factors (Rep, RowRep, ColRep, Row, Column), one treatment factor, Variety, and the response variate, yield.
The six replicates are numbered from 1 to 6 (Rep). The lattice block numbering is coded within replicates. That is, within each replicate the lattice rows (RowRep) and lattice columns (ColRep) are both numbered from 1 to 5. The Row and Column factors define the row and column positions within the field (rather than within each replicate).
To analyze the response variable, yield, we need to identify the two basic components of the experiment: the treatment structure and the blocking (or design) structure. The treatment structure consists of the set of treatments, or treatment combinations, selected to study or to compare. In our example, there is one treatment factor with 25 levels, Variety (i.e. the 25 different varieties of barley). The blocking structure of replicates (Rep), lattice rows within replicates (Rep:RowRep), and lattice columns within replicates (Rep:ColRep) reflects the balanced lattice square design. In a mixed model analysis, the treatment factors are (usually) fitted as fixed effects and the blocking factors as random.
The balanced lattice square model is fitted in ASReml-R4 using the following code:
> lattice.asr <- asreml(fixed = yield ~ Variety, random = ~ Rep + Rep:RowRep + Rep:ColRep, data=data1)
The REML log-likelihood is -707.786.
The model’s BIC is 1435.
The estimated variance components for the terms in the random model are 4263 for Rep, 15596 for Rep:RowRep and 14813 for Rep:ColRep. The variance component measures the inherent variability of the term, over and above the variability of the sub-units of which it is composed. As is typical, the largest unit (replicate) is more variable than its sub-units (lattice rows and columns within replicates). The residual variance is given by the "units!R" component.
By default, fixed effects in ASReml-R4 are tested using sequential Wald tests. In this example, there are two terms in the summary table: the overall mean, (Intercept), and Variety. As the tests are sequential, the effect of Variety is assessed by calculating the change in sums of squares between the two models (Intercept)+Variety and (Intercept). The p-value (Pr(Chisq)) of < 2.2 x 10^-16 indicates that Variety is highly significant.
The predicted means for the Variety can be obtained using the predict() function. The standard error of the difference between any pair of variety means is 62. Note: all variety means have the same standard error as the design is balanced.
Note: the same analysis is obtained when the random model is redefined as replicates (Rep), rows within replicates (Rep:Row) and columns within replicates (Rep:Column).
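For readers without an ASReml-R licence, the same balanced lattice square model (though not the spatial model fitted below) can be specified with the open-source lme4 package; a minimal sketch, assuming the same data frame data1:

library(lme4)

# Balanced lattice square model: Variety as a fixed effect; replicates and
# the lattice rows and columns within replicates as random effects.
lattice.lmer <- lmer(
  yield ~ Variety + (1 | Rep) + (1 | Rep:RowRep) + (1 | Rep:ColRep),
  data = data1
)
summary(lattice.lmer)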
As the plots are laid out in a grid, the data can also be analyzed using a spatial model. We’ll illustrate spatial analysis by fitting a model with a separable first order autoregressive process in the field row (Row) and field column (Column) directions. This is often a useful model to start the spatial modeling process.
The separable first order autoregressive spatial model is fitted in ASReml-R4 using the following code:
> spatial.asr <- asreml(fixed = yield ~ Variety, residual = ~ar1(Row):ar1(Column), data = data1)
The BIC for this spatial model is 1415.
From the estimated variance components, the residual variance is 38713, the estimated row correlation is 0.458, and the estimated column correlation is 0.684. As for the balanced lattice square model, the sequential Wald tests provide strong evidence of a Variety effect (p-value < 2.2 x 10^-16).
A log-likelihood ratio test cannot be used to compare the balanced lattice square model with the spatial models, as the variance models are not nested. However, the two models can be compared using BIC. As the spatial model has a smaller BIC (1415) than the balanced lattice square model (1435), of the two models explored in this blog, it is chosen as the preferred model. However, selecting the optimal spatial model can be difficult. The current spatial model can be extended by including measurement error (or nugget effect) or revised by selecting a different variance model for the spatial effects.
Butler, D.G., Cullis, B.R., Gilmour, A. R., Gogel, B.G. and Thompson, R. (2017). ASReml-R Reference Manual Version 4. VSN International Ltd, Hemel Hempstead, HP2 4TP UK.
Gilmour, A.R., Thompson, R. & Cullis, B.R. (1995). Average Information REML, an efficient algorithm for variance parameter estimation in linear mixed models. Biometrics, 51, 1440-1450.
Dr. Vanessa Cave, 13 December 2021
ANOVA, LM, LMM, GLM, GLMM, HGLM? Which statistical method should I use?
Unsure which statistical method is appropriate for your data set? Want to know how the different methods relate to each one another?
The simple diagram below may help you.
| Treatment factor | Categorical explanatory variable defining the treatment groups. In an experiment, the experimental units are randomly assigned to the different treatment groups (i.e., the levels of the treatment factor). |
| Blocking variable | Factor created during the design of the experiment whereby the experimental units are arranged in groups (i.e., blocks) that are similar to one another. You can learn more about blocking in the blog Using blocking to improve precision and avoid bias. |
| Continuous predictor | A numeric explanatory variable (x) used to predict changes in a response variable (y). Check out the blog Pearson correlation vs simple linear regression to learn more. |
| Unbalanced design | An experimental design is unbalanced if there are unequal sample sizes for the different treatments. Genstat provides users with a tool to automatically determine whether ANOVA, LM (i.e., regression) or LMM (i.e., a REML analysis) is most appropriate for a given data set. Watch this YouTube video to learn more. |
| Temporal correlation | Occurs when repeated measurements have been taken on the same experimental unit over time, and thus measurements closer in time are more similar to one another than those further apart. To learn more, check out our blog A brief introduction to modelling the correlation structure of repeated measures data. |
| Spatial correlation | Occurs when experimental units are laid out in a grid, for example in a field trial or greenhouse, and experimental units that are closer together experience more similar environmental conditions than those which are further apart. For more information, read our blog A brief look at spatial modelling. |
| Random effects | Represents the effect of a sample of conditions observed from some wider population, and it is the variability of the population that is of interest. The blog FAQ: Is it a fixed or random effect? can help you understand the difference between fixed and random effects. |
| https://vsni.co.uk/blogs/statistical-thinking-animal-ethics |
It's a brief summary of the key issues brought up at the Assessment strand of the E-Merging Forum 4 as viewed by the participants.
Assessment for learning
Forms
Teacher to teacher
Student to student
Student to teacher
Teacher to student
Features
Ongoing, continuous process
Specific, narrow focus, no marking
Summative – stating the result; backward looking; used at the end of a unit / course
Formative – ongoing process; aimed at showing the student a perspective of their learning: where they are now, what to improve and how to do it
Formative + Summative assessment = balanced
Wait time
Conclusion:
Wait for them to start and wait for them to finish their answers.
Benefits
More students get involved
Better answers
A key point in assessment; through it we identify where we are
Assessment rubrics/scale
The way a teacher makes the students understand the intended outcomes;
Use it to measure, motivate,
Wide application, not just for academic purposes;
Mis-self-perception
The perception of students was measured:
Good students tend to underestimate themselves; poor students tend to overestimate themselves.
What can be done?
(possible solutions)
Teach them evaluation skills
Share evaluation criteria
Peer observation for learning
The main function of P.O. is to uncover opportunities
Teacher development context (teacher-teacher observation)
Can be controversial
Students might not like outsiders (observers)
Teaching and Testing
Cycle of testing what you teach first, then evaluate what you tested and teach
Not either – or! Balance is the key
What’s the most pressing issue in assessment related to your area of specialty?
We asked the plenary speakers of other strands
Literature and culture:
Assessing the unassessable
EAP
Issues:
Background of students
Unrealistic expectations on what they’ll be assessed on
Digital technology
Criteria of assessment – move assessment beyond the knowledge of the language
Young learners
Issue:
Why are we assessing?
Perspectives for Forum-5
Communicative testing – what should it be like?
Assessment literacy
Workshop on test design
Assessing the unassessable
Ways of sharing evaluation criteria with learners
Why are we assessing young learners?
Move assessment beyond the knowledge of the language: assessment criteria of digital literacy
Plenary speakers: Alan Pulverness, Tony Prince, Catherine Kneafsey, Nicky Hockly. | https://prezi.com/8so1ojgux8jb/assesment/ |
The launch of the sentencing guideline for forest crimes is a milestone in environmental protection in Sabah. It will support the Environmental Court in handing down heavier and more consistent penalties, ensuring that the severity of the punishment is proportionate to the severity of the crime committed.
The guideline will assist the court in reducing disparities in sentences in cases related to forest crimes in Sabah and therefore achieve uniformity and consistency. It takes into account the level of culpability of the offences, the level of harm caused by the accused, and the aggravating and mitigating factors presented by the prosecution and the accused. This will ensure that offenders receive fines commensurate with the severity of their offences.
It is hoped that heavier fines will be a deterrent and warning to those who intend to commit the same offence.
The sentencing guideline supports the collective enforcement agencies’ efforts on the ground in ensuring the protection of the State’s forests and is in line with the Sabah Forest Policy 2018 that is aimed at strengthening forest enforcement and laws.
The launch of the guideline is the result of a long-term collaboration between the Kota Kinabalu Court Working Group for Environment, the Sabah Forestry Department and WWF-Malaysia, in consultation with the Sabah Law Society, the Attorney General Chamber of Sabah, judges in Sabah, as well as prosecution officers.
WWF-Malaysia applauds the Sabah Judiciary on its efforts to further enhance the existing laws against environmental crimes and hopes that with this second sentencing guideline, would-be perpetrators will toe the line.
WWF-Malaysia stresses the importance of protecting Sabah’s forests as the forest provides a myriad of benefits to communities.
“The forest is a source of livelihood, protein, water, shelters and many more. Additionally, forests provide ecosystem services including watershed protection and serve as a buffer in natural disasters like floods and heavy rain. Perhaps most importantly, the forest helps mitigate climate change through its ability to absorb harmful greenhouse gases.
“This sentencing guideline will no doubt bolster our current efforts in safeguarding our forests. Through this guideline, we are reminding would-be offenders that a forest crime is a serious crime and that the government and non-governmental organisations like ourselves will stop at nothing to ensure that our forests and its wildlife continue to be protected,” said WWF-Malaysia Executive Director/ CEO, Ms. Sophia Lim. | https://www.wwf.org.my/hometest/?29685/Second-Environmental-Related-Sentencing-Guideline-Launched |
459 F.Supp. 1189 (1978)
Millis P. PATTON, Plaintiff,
v.
Cecil B. ANDRUS, Defendant.
Civ. A. No. 77-1730.
United States District Court, District of Columbia.
February 28, 1978.
Lawrence J. Speiser, Washington, D. C., for plaintiff.
Nathan Dodell, Asst. U. S. Atty., Washington, D. C., for defendant.
OPINION AND ORDER
JUNE L. GREEN, District Judge.
This action is for an award of attorney's fees, costs and expenses incurred in connection with the successful prosecution of a sex discrimination charge at the administrative level. The parties have filed cross-motions for summary judgment which raise, as the primary issue, the question of whether administrative agencies have the discretion to award fees to plaintiffs who succeed before them in discrimination suits brought under Title VII of the Civil Rights Act of 1964, 42 U.S.C. § 2000e et seq.
District Court Judge Oliver Gasch of this Circuit, in the case of Smith v. Califano, 446 F.Supp. 530 (decided Jan. 28, 1978; amended opinion and order filed Jan. 31, 1978) carefully analyzed the legislative and judicial materials in this area and concluded that administrative agencies possessed the discretion in question.
This Court endorses the ruling of the Smith case to the extent that it found that federal administrative agencies possess discretion to award fees, costs and expenses. However, this Court will not go further and determine whether the Department of the Interior, the agency involved here, should exercise its discretion. This Court is not in a position to familiarize itself in a firsthand way with the novelty and difficulty of the questions which confronted the attorneys and the agency in this case, the skill which was required to prevail, time limitations imposed by the client or the circumstances, and the other factors which guide decisions to award fees and costs in Title VII cases. Evans v. Sheraton Park Hotel, 164 U.S.App. D.C. 86, 96, 503 F.2d 177, 187 (1974).
This Court is not ruling that it lacks the authority to take into its own hands the matter of fees to be awarded for work at the administrative level. The United States Court of Appeals for this Circuit has indicated that the district courts have this option. Parker v. Califano, 182 U.S.App. D.C. 322, 331 n.24, 561 F.2d 320, 329, n.24 (1977). This Court, however, thinks it wiser, whenever justice to the parties does not preclude it, to leave the initial resolution of fee requests to the tribunal which has a chance to entertain the case and observe the quality of advocacy involved. In this case, given the uncertainty surrounding the extent of administrative authority, the Interior Department's reluctance to award fees was not unreasonable. Thanks to the Smith case, the agency's path is now clearer, and the Court has no reason to believe that it will not exercise its discretionary powers in a responsible manner.
Accordingly, the Court grants plaintiff's motion for summary judgment insofar as the issue of agency power to award fees is concerned. Defendant's motion for summary judgment is therefore denied. This case is remanded to the Department of the Interior *1190 so that it may consider whether to award fees, costs and expenses to the plaintiff, and if so, how much.
IT IS SO ORDERED this 28th day of February 1978.
How to drill strictly vertically with a drill. How to choose
How to drill a perpendicular hole yourself?
Quite often the problem is that the drill bit slips off the mark and the hole ends up in the wrong place. A piece of cardboard or adhesive tape stuck to the surface of the wood can help prevent this. Similar difficulties also appear when you have to work with a very thick drill bit. In that case, first make a hole in the material with a narrower bit, and then drill out the marked hole to full size. Pay attention to the quality of the sharpening: if the edges are not sharp enough or are sharpened unevenly, you may have problems with precision drilling.
Drilling speeds for various materials: what speeds are required for drilling
One of the fundamental drilling parameters is the RPM. The speed at which you need to drill depends on the type of material and the type of drill bit. There is a general rule: the harder the material and the thicker the drill bit, the lower the RPM should be. For specific figures, see the table below.
| Drill bit diameter, mm | Soft wood | Hard wood | Plastic (acrylic) | Copper | Aluminum | Steel | Notes |
| 1.5-4.8 | 3000 | 3000 | 2500 | 3000 | 3000 | 3000 | When drilling metal thicker than 3 cm, oil the drill bit frequently. |
| 6.4-9.5 | 3000 | 1500 | 2000 | 120 | 2500 | 1000 | |
| 11.1-15.9 | 1500 | 750 | 1500 | 750 | 1500 | 600 | |
| 17.4-25.4 | 750 | 500 | – | 400 | 1000 | 350 | |
This table applies to common twist drill bits. For special bits (Forstner bits, etc.), the drilling speeds for the various materials differ slightly from those shown above.
How to drill deep holes properly.
To modify an auger drill bit, start by stripping the guide screw, then blunt the edges as shown in the photo below. This modification gives the bit a weaker, less aggressive bite and helps it drill deep holes in a straight line.
Alignment: a straight metal rod aligns the V-block with the pilot hole.
Another challenge in drilling deep holes is establishing a straight pilot hole to act as a guide. A drill press would seem the logical tool for drilling straight holes, but its drilling depth is limited. It still plays a big role in this operation, though, because it is used to drill the pilot hole. I start with a smaller-diameter bit and gradually enlarge the hole to the diameter I plan to use.
When the chuck of the drill reaches the V-block, remove the block and drill to the full length of the bit or to the required depth of the hole.
As soon as the guide hole is ready, install the drill bit of the desired diameter and secure the workpiece to the workbench with clamps. Remember that even with a good, tight pilot hole, a bit that wanders off axis during drilling will not correct itself. To help guide the drill along the axis of the hole, a V-block is used. To set the alignment, take a long metal rod and insert it into the pilot hole as shown in the photo above, align the V-block with the rod in the workpiece and secure it with a clamp. I start drilling at low RPM. Don’t be lazy about clearing the sawdust that clogs the flutes; this lets the bit rotate freely and helps guide it properly. Once the drill chuck reaches the V-block, remove the block and continue drilling the hole. Doing these simple, uncomplicated things will give you great results.
Drilling a hole straight through at a 90 degree angle is no easy task. It may take a lot of effort to keep the drill level and upright. BullseyeBore suggests doing it with several laser concentric rings.
The BullseyeBore drill uses a simple red laser to project three rings onto the surface where you are going to drill. In order to keep the drill as level as possible during the process, observe the three rings. The two inner ones will stay in the same position no matter how the drill is tilted, while the outer one can move. By aligning the biggest ring with the smaller two, you get the drill to be perfectly vertical and the hole will be just perfect.
Additionally, the concentric circles can indicate the depth of the hole: the distance between the large and medium circles acts as a depth gauge. At the start of drilling the rings are far apart; by the middle of the process they have moved noticeably closer together.
The attachment looks like a small, clear disk with a connector for easy installation on any drill. The lasers, optics, and batteries are already built in. The disk is sleek, lightweight, and impact resistant.
The makers are now developing a model in which all the hardware will be built into the drill chuck.
Drilling a hole with a spade (feather) drill bit
At first glance, most people faced with the question of how to drill a hole in wood have no doubts: wood is not metal, they think, and is easy to drill. That is partially true if all you want is a rough opening rather than an accurate hole. To drill a hole in wood correctly and cleanly, though, you need to listen to the advice of experienced woodworkers.
How to drill a vertical hole with a drill
How to drill with a drill properly and smoothly
In the field of repair, the ability to drill correctly is one of the basic skills.
In addition to the general rules, it is important to consider all the nuances of working with a particular material: concrete, tile, metal, etc.
All issues related to drilling will be covered in this article.
As already mentioned, drilling is the most common occupation for the repairman, and therefore it is important to learn a few basic rules related to this activity right away.
- Using the right tools. There are many drill bits, each designed for a particular kind of material, so you should not try to drill concrete with a wood bit or vice versa. It is also important to consider the conditions in which the tool will be used. For example, an industrial drill should not be used for interior repairs (it is simply not safe). Water resistance matters too: outdoor work requires an IP34 rating in a humid climate and IP32 if the work is done in good weather; indoors, IPX2 is sufficient.
- Good marking. Before you begin any work, it is important to mark the drilling points accurately with a marker. You can also stick paper tape over the mark to prevent the drill bit from sliding on the material.
- Choosing the required drilling speed. This depends both on the material and on the diameter of the bit. For very fine bits (less than 3 mm in diameter), work at low speeds, under four hundred revolutions per minute. For other bits the rule is: the thinner the bit, the higher the drilling speed required.
It is also important to make sure that the auxiliary handle is firmly attached to the body of the drill, and the drill must not be allowed to skew in the hole.
How to drill metal properly
The first thing to start with is the selection of drills. They are characterized by a sharp edge, designed for easy penetration of the drill into the metal.
Conventional metal drill bits are good for not too hard metals like copper or aluminum, but for something harder (for example, stainless steel) you should get products made of titanium carbide or chrome vanadium alloy.
Separately, we should talk about optimal speeds. A common mistake of beginners is to use too high an RPM.
In fact, medium speeds are used for harder metals: for example, a centimeter-thick piece of brass is best drilled at 2000-2500 revolutions per minute.
There are a few more points to cover. So:
- If you want to drill a thin iron sheet, clamp it between two pieces of wood. This is done so that the sheet metal does not tear or get dragged up by the bit.
- It is necessary to use lubricating oil from time to time, which ensures the cooling of the drill bit and facilitates the drilling process.
- If you want to make a hole in a pipe, it must be firmly secured. In order to prevent the pipe from flattening under the influence of the drill, a piece of hardwood should be placed inside.
If you follow these simple rules, then even a beginner will be able to qualitatively perform drilling.
How to drill concrete walls
Drilling a wall is the household task needed most often: without it you cannot hang a shelf or a cabinet, or put up curtain rods.
Of course, it is better to use a rotary hammer or an impact drill for this purpose. These tools are powerful enough to handle hard materials like concrete or brick.
However, if the thickness of the wall does not exceed 10-12 cm, you can get by with an ordinary drill.
But do not take the risk if you only have a low-powered tool: in contact with concrete, it can simply break. If the drill is chosen correctly, the following tips will be useful:
- Before you start work, make sure that there are no pipes or other utility systems in the drilling area.
- The drill bits used for such work should be made of hard metals or alloys; diamond-tipped bits are the best option.
- If you come across overly dense areas of concrete in the course of drilling, they must be broken up with a small hammer and a metal pin (punch).
Although it is generally possible to drill through a concrete wall with a drill, it should still be done with care. Otherwise, the breakage can be quite serious. It is best to use specialized tools for this.
How to work with tiles using a drill
People without extensive repair experience are often afraid to drill such a fragile material as tile.
However, it may become a real necessity, because you still need to fix various cabinets and shelves in the kitchen, bathroom or toilet.
Alas, in many respects the condition of the tile after drilling is determined not by the person who picks up the drill, but by the one who laid the tile.
A competent tiler lays the tile so that no voids form between the tile and the wall. In that case, careful drilling will not harm the tile in any way. If there are air pockets between the wall and the tile, the tile will most likely crack.
In order to drill tile properly, you need masking tape, a marker and a rotary hammer (or a powerful drill with concrete bits). Next, perform the following actions:
- The drilling point is marked with a piece of masking tape and a marker. The tape is there to prevent the drill bit from sliding on the glazed tile surface.
- Drilling is carried out strictly perpendicular to the material. The number of revolutions should increase gradually: from the minimum possible to 150-200 revolutions per minute. This prevents damage to both the material and the tool.
- It is important to make sure that the drill bit does not overheat. If the drills start to smoke, they need to cool down immediately.
When the holes are made, you can insert the dowels. This is done with a hammer.
Drilling in cast iron: how to do it properly
Cast iron is a rather hard material, so it is very, very difficult to drill it.
How to insert the drill bit into the chuck
Turn the adjusting ring counterclockwise by hand so that the jaws inside the chuck open slightly wider than the drill bit.
Insert the drill bit into the chuck as deep as possible. Shank diameters from 2 to 13 mm can be clamped.
Clamp the drill by turning the adjusting ring clockwise by hand.
Insert the drill key into the hole on the body of the chuck so that the teeth on the key and the chuck click into place.
Turn the key clockwise under slight pressure until it reaches the stop, so that the drill bit is firmly seated. The chuck body has 2-3 key holes; insert the key into each in turn for even clamping.
Unplug the plug from the socket before carrying out any maintenance work.
To prevent the key from being lost, it is often taped to the drill's power cord.
A keyless (quick-action) chuck, which can be tightened without a wrench, is used less frequently. It comes with one or two adjusting sleeves with anti-slip knurling. If there is only one sleeve, the body of the drill is held still while tightening. If there are two sleeves, the part fixed to the spindle remains stationary and the sliding part is turned by hand: clockwise to clamp the bit, counterclockwise to remove it.
How to drill a concrete wall with an ordinary drill
In the life of every person there comes a time when repairs have to be made in the house or apartment. Then the question of reworking the space arises: moving outlets, updating wiring and drilling other holes to realize the design project for the room.
A variety of tools and drill bits are used in the process. Of course, customers always prefer the more powerful models, and there is a wide variety of tools on the market to organize the work with.
Drilling holes with a drill while maintaining coaxiality (perpendicularity)
A simple way of adapting household items to drill coaxial or perpendicular holes
Using a drill jig (conductor)
A deep, perpendicular hole can be drilled using a special device - a drill jig. Such work can be done not only on flat surfaces but also on the rounded parts of workpieces and at corners. A typical jig kit includes:
- A center punch to mark the drilling point.
- A plastic stencil (template).
- Six guide bushings with diameters of 4 mm, 5 mm, 6 mm, 8 mm, 10 mm and 12 mm.
Tools
To drill a reinforced concrete surface you need specialized tools - and, for our purposes, you also cannot do without homemade aids.
Depending on your preference and the diameter of the hole, you can use either an ordinary electric drill or a rotary hammer. The drill has less power, but it is also lighter and easier to handle; the rotary hammer, accordingly, has more potential, but also more weight and bulk.
Be careful! Since such construction tools are often needed only once, it is more rational to rent them. Given that their price is quite high, this will save you a considerable amount of money.
In addition to the tool itself, you need to buy special drill bits for working on concrete: pobedite (carbide)-tipped bits are the choice for a power drill.
A rotary hammer usually comes with its own set of percussion bits, which are, for the most part, sufficient. Carbide-tipped drill bits can also be used with the rotary hammer as an additional accessory.
To form holes with a diameter of thirty-five to one hundred and twenty millimeters, core bits with carbide teeth are used; with them you can drill openings for sockets, switches and so on. As for drilling concrete with an electric screwdriver, tungsten-carbide-tipped bits exist for this as well, but keep in mind that they are only suitable for tools with a power of roughly 1 kW or more.
Advantages
Sooner or later, most people who live in homes with concrete walls need to hang a cabinet, a lamp, a shelf or a picture. At that point the question of how to drill a concrete wall becomes urgent. It is no secret that every home handyman has faced this problem at least once, but not everyone knows how to solve it. Many bravely struggle with a drill and their own strength but, not achieving the desired result, give it all up until the next attempt to make a hole in the wall. In the end the drill breaks, and the shelves are still standing in a corner of the room or gathering dust in the closet. Yet there are options - you just need to know them and be able to use them.
What do professionals advise?
Holes in concrete have to be made quite often, especially in the process:
- Finishing work;
- installing furniture;
- suspension of the air conditioner;
- additional installation of electrical wiring;
- installing plumbing.
The problem of making holes in a concrete wall can be solved in two ways:
It is worth noting that ordinary drill bits will not make a hole in a concrete wall, so before you start you need to buy bits with specially brazed plates of high-strength pobedite alloy, which cope perfectly with concrete and brick. They are not recommended for soft materials, however, because pobedite bits do not cut them but crush them.
What will help the home handyman?
In domestic conditions, when you need to make only 2-3 holes in concrete, you can get by with an ordinary drill without a percussion function. To do this, as the pobedite-tipped bit sinks into the wall, break up the concrete from time to time with a strong metal pin (punch) of the same diameter as the hole. The punch is used when the bit starts to “stall” in the wall: the steel punch is inserted into the hole and struck with a hammer or small sledgehammer, breaking up the over-dense spots and driving the hole deeper, while the pin is rotated slightly. Then the hammerless drill can do its job again.
All of the above actions are repeated one after another until the hole reaches the desired depth. This method is quite time-consuming and tedious, but for a couple of holes it is quite acceptable.
Alternatively, you can use diamond-coated universal drill bits for drilling into concrete. They are highly effective when working with metal, gravel and concrete, but they may only be used in a regular power drill or in a tool with the impact function switched off.
Such a bit must be handled with extreme care or it will fail very quickly. The professionals' advice: to avoid overheating the bit, wet it with cold water from time to time.
How to decide on a tool?
For bigger jobs you need a rotary hammer or an impact drill, plus pobedite-tipped bits. An impact drill combines rotation with a reciprocating hammering motion, which lets it cope perfectly with lightweight concrete; and to the question of how to drill a load-bearing concrete wall there is a simple answer - the best helper is a rotary hammer, whose main purpose is punching through concrete.
If rebar is encountered in the body of the concrete wall, it should be drilled through with a metal drill bit.
What drills large holes?
Professionals, constantly faced with the problem of drilling holes in concrete, use special equipment, which includes:
- a powerful electric motor;
- a drive/drilling unit;
- diamond core bits of different diameters;
- a core drill rig secured to a base.
Diamond core bits allow large-diameter holes - up to 40 cm - to be drilled. The process is quick, efficient, dust-free and quiet. Water is automatically supplied to the drilling site, which simultaneously cools the diamond bit and binds the dust.
With diamond drilling holes are obtained precise in shape and clear in outline, with a polished inner surface. They can be made in various enclosing structures at any angle to the horizon, and the possibility of cracks or chips in the concrete is completely excluded. Destruction occurs only at the location of the future hole.
Specialized companies do diamond drilling, so if you need to drill holes in the concrete of large diameters, you can invite masters with their equipment. And you don’t have to buy a diamond drill rig specifically for this. | https://chaika.net/how-to-drill-strictly-vertically-with-a-drill-how/ |
Browsing Conservation Science Publications by Subject "IMMUNOLOGY"
- Detailed monitoring of a small but recovering population reveals sublethal effects of disease and unexpected interactions with supplemental feeding
Infectious diseases are widely recognized to have substantial impact on wildlife populations. These impacts are sometimes exacerbated in small endangered populations, and therefore, the success of conservation reintroductions to aid the recovery of such species can be seriously threatened by outbreaks of infectious disease. Intensive management strategies associated with conservation reintroductions can further compound these negative effects in such populations. Exploring the sublethal effects of disease outbreaks among natural populations is challenging and requires longitudinal, individual life-history data on patterns of reproductive success and other indicators of individual fitness. Long-term monitoring data concerning detailed reproductive information of the reintroduced Mauritius parakeet (Psittacula echo) population collected before, during and after a disease outbreak was investigated. Deleterious effects of an outbreak of beak and feather disease virus (BFDV) were revealed on hatch success, but these effects were remarkably short-lived and disproportionately associated with breeding pairs which took supplemental food. Individual BFDV infection status was not predicted by any genetic, environmental or conservation management factors and was not associated with any of our measures of immune function, perhaps suggesting immunological impairment. Experimental immunostimulation using the PHA (phytohaemagglutinin assay) challenge technique did, however, provoke a significant cellular immune response. We illustrate the resilience of this bottlenecked and once critically endangered, island-endemic species to an epidemic outbreak of BFDV and highlight the value of systematic monitoring in revealing inconspicuous but nonetheless substantial ecological interactions. Our study demonstrates that the emergence of such an infectious disease in a population ordinarily associated with increased susceptibility does not necessarily lead to deleterious impacts on population growth and that negative effects on reproductive fitness can be short-lived.
- Detection of neopterin in the urine of captive and wild platyrrhines
Background: Non-invasive biomarkers can facilitate health assessments in wild primate populations by reducing the need for direct access to animals. Neopterin is a biomarker that is a product of the cell-mediated immune response, with high levels being indicative of poor survival expectations in some cases. The measurement of urinary neopterin concentration (UNC) has been validated as a method for monitoring cell-mediated immune system activation in multiple catarrhine species, but to date there is no study testing its utility in the urine of platyrrhine species. In this study, we collected urine samples across three platyrrhine families including small captive populations of Leontopithecus rosalia and Pithecia pithecia, and larger wild populations of Leontocebus weddelli, Saguinus imperator, Alouatta seniculus, and Plecturocebus toppini, to evaluate a commercial enzyme-linked immunosorbent assay (ELISA) for the measurement of urinary neopterin in platyrrhines.
Results: Our results revealed measured UNC fell within the sensitivity range of the assay in all urine samples collected from captive and wild platyrrhine study species via commercial ELISA, and results from several dilutions met expectations. We found significant differences in the mean UNC across all study species. Most notably, we observed higher UNC in the wild population of L. weddelli, which is known to have two filarial nematode infections, compared to S. imperator, which only has one.
Conclusion: Our study confirms that neopterin is measurable via commercial ELISA in urine collected from captive and wild individuals of six genera of platyrrhines across three different families. These findings promote the future utility of UNC as a promising biomarker for field primatologists conducting research in Latin America to non-invasively evaluate cell-mediated immune system activation from urine.
Keywords: Neopterin, Health monitoring, Platyrrhines, Immune function, Biomarker
- Duration of maternal antibodies against canine distemper virus and Hendra virus in pteropid bats
Old World frugivorous bats have been identified as natural hosts for emerging zoonotic viruses of significant public health concern, including henipaviruses (Nipah and Hendra virus), Ebola virus, and Marburg virus. Epidemiological studies of these viruses in bats often utilize serology to describe viral dynamics, with particular attention paid to juveniles, whose birth increases the overall susceptibility of the population to a viral outbreak once maternal immunity wanes. However, little is understood about bat immunology, including the duration of maternal antibodies in neonates. Understanding duration of maternally derived immunity is critical for characterizing viral dynamics in bat populations, which may help assess the risk of spillover to humans. We conducted two separate studies of pregnant Pteropus bat species and their offspring to measure the half-life and duration of antibodies to 1) canine distemper virus antigen in vaccinated captive Pteropus hypomelanus; and 2) Hendra virus in wild-caught, naturally infected Pteropus alecto. Both of these pteropid bat species are known reservoirs for henipaviruses. We found that in both species, antibodies were transferred from dam to pup. In P. hypomelanus pups, titers against CDV waned over a mean period of 228.6 days (95% CI: 185.4–271.8) and had a mean terminal phase half-life of 96.0 days (95% CI: 30.7–299.7). In P. alecto pups, antibodies waned over 255.13 days (95% CI: 221.0–289.3) and had a mean terminal phase half-life of 52.24 days (95% CI: 33.76–80.83). Each species showed a duration of transferred maternal immunity of between 7.5 and 8.5 months, which was longer than has been previously estimated. These data will allow for more accurate interpretation of age-related Henipavirus serological data collected from wild pteropid bats. | https://repository.sandiegozoo.org/handle/20.500.12634/15/browse?type=subject&value=IMMUNOLOGY
Chronic Disease Registries
Management of chronic diseases like diabetes, heart failure, and asthma is changing the practice and scope of primary care medicine. With the advent of PQRI initiatives that provide financial incentives to health professionals who measure and report their adherence to quality-of-care standards, disease registries are starting to play an even greater role in primary care practices – ensuring compliance and providing the reports needed to obtain the incentives.
Introduction
A computerized disease registry is a system that is used to track the important features of various chronic diseases. These are usually supplemental systems to the individual patient medical records, and are used to support the health professional by:
- Identifying patients with a given condition, and providing a simple way to track their progress, studies, and treatments, to ensure follow-up, and to identify high-risk patients who have gaps in their care
- Ensuring timely and appropriate care is administered through clinical reminders and audits, with the use of evidence-based clinical guidelines to drive care
- Providing a rich source of information and patient lists that can be queried to help improve the care process and systematically measure and improve patient outcomes
Disease registries are generally disease-specific, and can therefore have highly specific user interfaces, data fields, reports, and decision support built in. In addition, they can be used completely independently of an EHR, which makes them cost-effective for the vast majority of primary care practices that have not yet implemented this more comprehensive technology. Additionally, many registries can be integrated to work within the framework of an EHR when the practice does eventually adopt one.
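As a rough sketch of the kind of query such a registry supports - illustrative only, with an invented table layout and field names rather than the schema of any real registry product - a diabetes registry could flag patients overdue for an HbA1c test like this:

```python
import sqlite3
from datetime import date, timedelta

# Hypothetical registry table: one row per patient with the date of the last HbA1c test.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE diabetes_registry (patient_id TEXT, last_hba1c_date TEXT)")
conn.executemany(
    "INSERT INTO diabetes_registry VALUES (?, ?)",
    [("p001", "2008-06-02"), ("p002", "2009-01-15")],
)

# Flag patients whose last HbA1c is more than six months old (a common reminder interval).
cutoff = (date(2009, 2, 27) - timedelta(days=182)).isoformat()
overdue = conn.execute(
    "SELECT patient_id FROM diabetes_registry WHERE last_hba1c_date < ?",
    (cutoff,),
).fetchall()
print(overdue)  # -> [('p001',)]
```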
References
- Schmittdiel J, Bodenheimer T, Solomon NA, Gillies RR, Shortell SM. Brief report: The prevalence and use of chronic disease registries in physician organizations. A national survey. J Gen Intern Med. 2005;20(9):855-8.
- Using Computerized Registries in Chronic Disease Care. Available at: http://www.chcf.org/topics/view.cfm?itemID=21718 [Accessed February 27, 2009]. | http://clinfowiki.org/wiki/index.php/Chronic_Disease_Registries |
BACKGROUND
Technical Field
The present disclosure relates to a continuous-time comparator circuit of a high-speed type.
Description of the Related Art
As is known, today available are electronic circuits known as comparators, which are designed to compare an input signal, typically a voltage signal, with a reference signal, typically represented by a reference voltage, and to generate an output signal, which indicates the fact that the input signal is higher or lower than the reference signal.
In detail, comparators of the so-called clock-triggered type are known, in which the so-called decision of the comparator, i.e., switching of the output signal between a first value and a second value following upon crossing, by the input signal, of the voltage level represented by the reference voltage, takes place synchronously with a timing signal of a periodic type, generally known as clock signal. In particular, the decisions are made on the edges of the clock signal; typically, the decisions are made at the rising edges of the clock signal, whereas at the falling edges a reset is made.
In general, clock-triggered comparators make the respective decisions in very short times, since they implement latch mechanisms, which are characterized by a positive feedback. Consequently, clock-triggered comparators are characterized by a high speed and a high resolution; i.e., they are able to switch their own output signals following upon minimal deviations between the input signals and the reference voltages. However, clock-triggered comparators must in fact implement a circuitry of the reset latch type and further may not be used in the cases where it is necessary to monitor the input signal continuously.
In order to provide comparators capable of overcoming at least in part the drawbacks associated with clock-triggered comparators, comparators of a signal-triggered type have been proposed, which are also known as “continuous-time comparators”, where the decision is taken at the moment when the input signal crosses a signal level equal to the level of the reference signal. In this case, the clock signal is not used, and the comparator continuously monitors the input signal.
In greater detail, FIG. 1 shows a continuous-time comparator circuit 1.
The comparator circuit 1 comprises a first MOSFET M1, a second MOSFET M2, a third MOSFET M3, a fourth MOSFET M4, a fifth MOSFET M5, a sixth MOSFET M6, a seventh MOSFET M7, and an eighth MOSFET M8, which operate in saturation regime. The comparator circuit 1 further comprises a current generator 2 designed to generate a current I_B0 of a d.c. type.
In detail, the first and second MOSFETs M1, M2 form a differential pair and are of the P-channel-enrichment type. In particular, the source terminals of the first and second MOSFETs M1, M2 are connected to a first terminal of the current generator 2, the second terminal of which is connected to a first node N1, which in use is set at a supply voltage Vcc, for example, 3 V.
The gate terminals of the first and second MOSFETs M1, M2 form, respectively, a negative input terminal and a positive input terminal of the comparator circuit 1, which are designed to receive, respectively, the input signal and the reference signal (or vice versa), the latter signal being formed, for example, by a reference voltage.
The drain terminals of the first and second MOSFETs M1, M2 are connected, respectively, to the drain terminals of the third and fourth MOSFETs M3, M4, which are MOSFETs of the N-channel-enrichment type. More in particular, each one of the third and fourth MOSFETs M3, M4 is diode-connected. Consequently, the gate terminals of the third and fourth MOSFETs M3, M4 are connected, respectively, to the drain terminals of the third and fourth MOSFETs M3, M4; the gate terminals of the third and fourth MOSFETs M3, M4 are thus connected, respectively, to the drain terminals of the first and second MOSFETs M1, M2.
The source terminals of the third and fourth MOSFETs M3, M4 are connected to a second node N2, which, in use, may be set at ground.
The fifth and sixth MOSFETs M5, M6 are of the N-channel-enrichment type. Further, the gate terminal and the source terminal of the fifth MOSFET M5 are connected, respectively, to the gate terminal of the third MOSFET M3 and to the second node N2. The gate terminal and the source terminal of the sixth MOSFET M6 are connected, respectively, to the gate terminal of the fourth MOSFET M4 and to the second node N2.
The seventh and eighth MOSFETs M7, M8 are of the P-channel-enrichment type.
The source terminals of the seventh and eighth MOSFETs M7, M8 are connected to the first node N1. The gate terminals of the seventh and eighth MOSFETs M7, M8 are connected together.
The drain terminals of the seventh and eighth MOSFETs M7, M8 are connected, respectively, to the drain terminals of the fifth and sixth MOSFETs M5, M6. Further, the seventh MOSFET M7 is diode-connected; consequently, the drain terminal and the gate terminal of the seventh MOSFET M7 are connected together.
In greater detail, the first and second MOSFETs M1, M2 are the same as one another. The seventh and eighth MOSFETs M7, M8 are the same as one another. Further, the third and fourth MOSFETs M3, M4 are the same as one another; in addition, the fifth and sixth MOSFETs M5, M6 are the same as one another; these MOSFETs are such that the following relations apply: (W/L)_M5 = (W/L)_M6 = k·(W/L)_M3 = k·(W/L)_M4, where (W/L)_M5, (W/L)_M6, (W/L)_M3 and (W/L)_M4 represent, respectively, the so-called W/L ratios for the channels of the fifth, sixth, third, and fourth MOSFETs M5, M6, M3, M4.
In practice, as mentioned previously, the first and second MOSFETs M1, M2 form the input transistors of a differential pair, the load transistors of which are formed by the third and fourth MOSFETs M3, M4. Further, the drain terminals of the sixth and eighth MOSFETs M6, M8 form a third node N3, which represents an output node.
The comparator circuit 1 further comprises a cascade of inverters 6, which are connected in series, the input of the first inverter 6 being connected to the third node N3.
In practice, if I_B1 and I_B2 are, respectively, the currents that flow in the first and second MOSFETs M1, M2, the following relations apply:

I_B1 = (I_B0/2) + Δ
I_B2 = (I_B0/2) − Δ

where the sign of Δ depends upon the relation between the input signal and the reference voltage. For instance, if the reference voltage, present on the gate terminal of the second MOSFET M2, is higher than the input signal, present on the gate terminal of the first MOSFET M1, Δ is positive; instead, if the reference voltage is lower than the input signal, Δ is negative.
The third and fifth MOSFETs M3, M5 form a first current mirror, so that flowing in the fifth MOSFET M5 is a current I* equal to the current I_B1. Likewise, the fourth and sixth MOSFETs M4, M6 form a second current mirror, so that flowing in the sixth MOSFET M6 is a current I** equal to the current I_B2.
The seventh and eighth MOSFETs M7, M8 form a third current mirror, so that flowing in the eighth MOSFET M8 is a current I*** equal to the current I*, and thus to the current I_B1.
Operatively, the comparator circuit 1 functions as described in what follows, assuming that, following upon an instant (known as “crossing time”) at which the input signal and the reference voltage assume one and the same value, the input signal assumes a value that differs from the value assumed at the crossing time and is such that

I_B1 = (I_B0/2) + Δ
I_B2 = (I_B0/2) − Δ

with Δ positive; at the crossing time, Δ = 0. In practice, it is assumed that the input signal crosses the value of the reference voltage in its descending stretch.
This having been said, on account of the presence of the first, second, and third current mirrors, we have:

I*** = I* = I_B1 = (I_B0/2) + Δ
I** = I_B2 = (I_B0/2) − Δ
4
1
2
m
m
3
2
2
m
3
4
Further, the voltage present on the drain terminal of the third MOSFET Mincreases with respect to the corresponding value assumed at the crossing time, whereas the voltage present on the drain terminal of the fourth MOSFET Mdecreases with respect to the corresponding value assumed at the crossing time. In this connection, the impedance seen by the drain terminal of the first MOSFET Mtowards the second node Nis equal to 1/g, where gis the transconductance of the third MOSFET M. Also the impedance seen by the drain terminal of the second MOSFET Mtowards the second node Nis equal to 1/gsince it is assumed that the third and fourth MOSFETs M, Mare the same.
8
6
3
In addition, since in the eighth MOSFET Mthere flows more current than in the sixth MOSFET M, the voltage on the third node Ntends to increase, going to a high value.
In the case where Δ were negative, i.e., in the case where the input signal were to cross the value of the reference voltage in the ascending stretch, the behavior of the comparator circuit 1 would be opposite to what has been described.
Switching of the voltage present on the third node N3 from the low value to the high value (or vice versa) causes in succession switching of the outputs of the inverters 6 and thus represents a sort of preliminary output signal. Present on the output of the last inverter 6 is a voltage V_OUT, which forms the output signal of the comparator circuit 1. Switching of the voltage V_OUT, as also switching of the preliminary output signal, indicates crossing, by the input signal, of the voltage value indicated by the reference voltage, in addition to indicating the relation between the input signal and the reference voltage.
Irrespective of the presence of the inverters 6, the comparator circuit 1 is characterized in that it implements a sort of negative feedback. In fact, observing, for example, the first and third MOSFETs M1, M3, it may be noted how, if the voltage present on the drain terminal of the third MOSFET M3 tends to increase on account of an increase in the current I_B1, also the voltage present on the gate terminal of the third MOSFET M3 tends to increase. Since an increase in the voltage present on the gate terminal of the third MOSFET M3 induces a reduction of the voltage present on the drain terminal of the third MOSFET M3, the latter voltage is subject to a negative-feedback mechanism, as is also true for the voltage present on the drain terminal of the fourth MOSFET M4. The comparator circuit 1 does not thus require any reset, but is characterized by relatively long times for switching of the voltage on the third node N3. Further, the comparator circuit 1 has a relatively low gain; i.e., following upon the crossing time, it is necessary for the voltage of the input signal to differ from the reference voltage by a non-negligible deviation for the comparator circuit 1 to carry out switching.
In order to speed up the response of the comparator, and in particular in order to speed up the variations of voltage on the third node N3, the comparator circuit 10 illustrated in FIG. 2 has been proposed, which is described in what follows limitedly to the differences with respect to the comparator circuit 1. Further, components of the comparator circuit 10 already present in the comparator circuit 1 are designated by the same references.
In detail, the comparator circuit 10 comprises a ninth MOSFET M3X and a tenth MOSFET M4X, referred to in what follows as the first feedback MOSFET M3X and the second feedback MOSFET M4X, respectively.
The first and second feedback MOSFETs M3X, M4X are of the N-channel-enrichment type and are the same as one another.
The drain and source terminals of the first feedback MOSFET M3X are connected, respectively, to the drain terminal of the third MOSFET M3 and to the second node N2. The gate terminal of the first feedback MOSFET M3X is connected to the drain terminal of the fourth MOSFET M4 and thus also to the drain terminal of the second MOSFET M2.
The drain and source terminals of the second feedback MOSFET M4X are connected, respectively, to the drain terminal of the fourth MOSFET M4 and to the second node N2. The gate terminal of the second feedback MOSFET M4X is connected to the drain terminal of the third MOSFET M3, and thus also to the drain terminals of the first MOSFET M1 and of the first feedback MOSFET M3X. The gate terminal of the first feedback MOSFET M3X is thus connected also to the drain terminal of the second feedback MOSFET M4X.
The first and second feedback MOSFETs M3X, M4X implement a sort of positive feedback. In fact, assuming that we still have

I_B1 = (I_B0/2) + Δ
I_B2 = (I_B0/2) − Δ,

with Δ positive, the increase in the voltage present on the drain terminal of the third MOSFET M3, simultaneous with the decrease in the voltage present on the drain terminal of the fourth MOSFET M4, causes an increase of the voltage present on the gate terminal of the second feedback MOSFET M4X, this increase tending to cause in turn an acceleration of the reduction in the voltage present on the drain terminal of the fourth MOSFET M4. In an altogether specular or mirrored way, the reduction in the voltage present on the drain terminal of the fourth MOSFET M4, simultaneous with the increase in the voltage present on the drain terminal of the third MOSFET M3, tends to cause an acceleration of the increase of the voltage present on the drain terminal of the third MOSFET M3.
In greater detail, the impedance seen from the drain terminal of the first MOSFET M1 towards the second node N2 is

(g_m − g_mx)^−1

where g_mx is the transconductance of the first and second feedback MOSFETs M3X, M4X.
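To make the role of this expression concrete - using made-up transconductance values, not figures from the disclosure - one can tabulate how the node impedance grows as g_mx approaches g_m, which is what gives the cross-coupled pair its regenerative effect and motivates the W/L conditions discussed next:

```python
# Illustrative small-signal values (arbitrary): transconductances in mA/V, so 1/g is in kOhm.
g_m = 1.0                              # diode-connected loads M3/M4
for g_mx in (0.2, 0.5, 0.8, 0.95):     # cross-coupled pair M3X/M4X
    r_node = 1.0 / (g_m - g_mx)        # impedance seen at the drain of M1/M2
    print(f"g_mx = {g_mx:.2f} mA/V -> node impedance ~ {r_node:.1f} kOhm")
# As g_mx approaches g_m the impedance (and hence the gain) diverges; for g_mx > g_m the
# small-signal resistance becomes negative and the circuit latches.
```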
In even greater detail, the behavior of the comparator circuit 10 depends upon the transconductances g_m, g_mx, and thus upon the W/L ratios for the channels of the third and fourth MOSFETs M3, M4 and of the first and second feedback MOSFETs M3X, M4X. In particular, if W/L(M3,M4) is the ratio between the width and the length of the channel of one of the third and fourth MOSFETs M3, M4 and if W/L(M3X,M4X) is the ratio between the width and the length of the channel of one of the first and second feedback MOSFETs M3X, M4X, we have the following conditions:

- if W/L(M3,M4) > W/L(M3X,M4X), the negative-feedback mechanism guaranteed by the third and fourth MOSFETs M3, M4, which are diode-connected, prevails over the positive-feedback mechanism represented by the first and second feedback MOSFETs M3X, M4X, which are cross-coupled; consequently, the comparator circuit 10 operates in a way similar to what has been described with reference to the comparator circuit 1, but with faster decision times and higher gain;
- if W/L(M3,M4) < W/L(M3X,M4X), the positive-feedback mechanism prevails slightly over the negative-feedback mechanism; consequently, the comparator circuit 10 has a response with hysteresis and may not be used as a continuous-time comparator; and
- if W/L(M3,M4) << W/L(M3X,M4X), the positive-feedback mechanism prevails over the negative-feedback mechanism to the point where, once the comparator circuit 10 has carried out switching, it is no longer able to reset; consequently, also in this case the comparator circuit 10 may not be used as a continuous-time comparator.
For practical purposes, the comparator circuit 10 enables continuous-time monitoring of the input signal and thus enables detection of the instant at which the input signal crosses the voltage level of the reference signal. However, the comparator circuit 10, and more in general continuous-time comparators, has a longer decision time, and thus a lower speed, than clock-triggered comparators. Further, the resolution of the comparator circuit 10 is not particularly high.
BRIEF SUMMARY
Embodiments of the present disclosure provide a continuous-time comparator circuit that will overcome at least in part the drawbacks of the known art.
According to one embodiment of the present disclosure, a comparator circuit includes a first input node and a second input node configured to receive, respectively, a first current and a second current, which form a differential signal. The comparator circuit further includes a first current mirror having a first load transistor with a control terminal and a conduction terminal, and a first output transistor having a respective control terminal. The conduction terminal of the first load transistor is connected to the first node, the first output transistor being configured to be traversed by a first mirrored current that is a function of the first current. A second current mirror includes a second load transistor having a respective control terminal and a respective conduction terminal, and a second output transistor having a respective control terminal. The conduction terminal of the second load transistor is connected to the second node, and the second output transistor is configured to be traversed by a second mirrored current that is a function of the second current. A first feedback transistor has a respective conduction terminal and a respective control terminal, which are connected to the first node and to the second node, respectively. A second feedback transistor has a respective conduction terminal and a respective control terminal, which are connected to the second node and to the first node, respectively. Output circuitry, electrically coupled to the first and second output transistors is configured to generate an output signal that switches between a first value and a second value as a function of the first and second mirrored currents. A first resistor has a first terminal, which is connected to the control terminal of the first load transistor, and a second terminal, which is connected to the first node and to the control terminal of the first output transistor. A second resistor has a respective first terminal, which is connected to the control terminal of the second load transistor, and a respective second terminal, which is connected to the second node and to the control terminal of the second output transistor.
DETAILED DESCRIPTION
FIG. 3 shows a comparator circuit 30, which is described in what follows limitedly to the differences with respect to the comparator circuit 10 illustrated in FIG. 2. Further, components of the comparator circuit 30 already present in the comparator circuit 10 are designated by the same references. There thus apply the relations of equality between the transistors mentioned with reference to the comparator circuit 10, and thus the relations of equality mentioned with reference to the comparator circuit 1. In addition, it is assumed that the relation W/L(M3,M4) > W/L(M3X,M4X) applies.
In detail, the comparator circuit 30 comprises a first resistor R3X and a second resistor R4X, which substantially have one and the same resistance, this resistance being higher than or equal to 100 kΩ and preferably lower than 1000 kΩ.
In greater detail, a first terminal of the first resistor R3X is connected to the gate terminal of the third MOSFET M3, whereas a second terminal of the first resistor R3X is connected to the gate terminal of the fifth MOSFET M5, to the drain terminals of the first and third MOSFETs M1, M3 and to the gate terminal of the second feedback MOSFET M4X, as well as to the drain terminal of the first feedback MOSFET M3X. In FIG. 3, the node formed by the gate terminals of the fifth MOSFET M5 and of the second feedback MOSFET M4X, as well as by the drain terminals of the first feedback MOSFET M3X and of the first and third MOSFETs M1, M3, is designated by N4 and referred to as the fourth node N4.
Likewise, a first terminal of the second resistor R4X is connected to the gate terminal of the fourth MOSFET M4, whereas a second terminal of the second resistor R4X is connected to the gate terminal of the sixth MOSFET M6, to the drain terminals of the second and fourth MOSFETs M2, M4 and of the second feedback MOSFET M4X, as well as to the gate terminal of the first feedback MOSFET M3X. In FIG. 3, the node formed by the gate terminals of the sixth MOSFET M6 and of the first feedback MOSFET M3X, as well as by the drain terminals of the second feedback MOSFET M4X and of the second and fourth MOSFETs M2, M4, is designated by N5 and referred to as the fifth node N5.
The first and second resistors R3X, R4X form, respectively, with the gate capacitances of the third and fourth MOSFETs M3, M4, corresponding circuits of an RC type, which are characterized by a first time constant, which determines the speed at which the aforementioned negative-feedback mechanism is set up, this speed being lower than in the case of the comparator circuit 10 represented in FIG. 2.
As regards the speed of setting-up of the positive-feedback mechanism, it depends upon a second time constant, which, not being determined by the resistances of the first and second resistors R3X, R4X, but only by the first and second feedback MOSFETs M3X, M4X, is shorter than the first time constant.
Operatively, following upon the crossing time and the subsequent variation of the input signal, which is present (for example) on the gate terminal of the first MOSFET M1, intervention of the negative feedback caused by the presence of the third and fourth MOSFETs M3, M4 is delayed with respect to intervention of the positive feedback caused by the presence of the first and second feedback MOSFETs M3X, M4X. It follows that, during a first time interval, subsequent to the crossing time, there is present, to a first approximation, just the positive feedback. Consequently, the comparator circuit 30 behaves like a latched comparator and thus presents very short decision times; i.e., it is characterized by a fast switching of the voltage on the third node N3 (from a low value to a high value, or vice versa); the first time interval is comparable with the first time constant. Then, also the negative feedback is set up, which prevails over the positive feedback and induces a sort of reset of the comparator circuit 30, which thus becomes ready to detect new crossings, by the input signal, of the voltage level represented by the reference voltage.
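The qualitative behavior described here - fast regenerative switching followed by a delayed negative-feedback "reset" - can be sketched with a toy model. The sketch below is not derived from the patent; it simply approximates the delay before the negative feedback takes over as the RC product of R3X/R4X with an assumed gate capacitance of M3/M4, to show why a larger resistance lengthens the quasi-latched interval:

```python
# Toy numbers (assumptions, not from the disclosure): gate capacitance in picofarads.
C_GATE_PF = 0.1   # assumed gate capacitance of M3/M4

def negative_feedback_delay_ns(r_kohm):
    """Approximate RC delay of R3X/R4X against the gate capacitance, i.e. roughly how long
    the circuit behaves as a purely positive-feedback (latched) comparator."""
    return r_kohm * 1e3 * C_GATE_PF * 1e-12 * 1e9   # result in nanoseconds

for r in (100, 300, 1000):   # resistances in kOhm, spanning the range given above
    print(f"R = {r} kOhm -> negative feedback sets in after ~{negative_feedback_delay_ns(r):.0f} ns")
```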
From what has been described and illustrated previously, the advantages that the present solution affords emerge clearly.
In particular, the present comparator circuit is of a continuous-time type, but exhibits performance, in terms of resolution and speed, comparable with that of a clock-triggered comparator, albeit without requiring any reset, thus enabling a continuous monitoring of the input signal. More in particular, it may be shown how the switching time that elapses between the crossing time and the subsequent switching of the voltage present on the third node N3 decreases as the resistance value of the first and second resistors R3X, R4X increases.
In conclusion, it is clear that modifications and variations may be made to what has been described and illustrated herein, without thereby departing from the scope of the present disclosure, as defined in the annexed claims.
For instance, the transistors may be of a type different from what has been described.
As mentioned previously, the number of the inverters 6 may be any and they may even be absent.
It is further possible for the currents I_B1, I_B2 to be generated in a way other than with the use of a differential pair. In this connection, in general the fourth and fifth nodes N4, N5 form a pair of input nodes designed to receive a non-zero differential current signal.
Finally, the comparator circuit may be different from what has been illustrated. In particular, it is possible for it to be such that the relation between the voltage on the third node N3 and the currents I* and I** is different from what has been described. More in general, embodiments are possible that differ from what has been illustrated but in which once again the voltage on the third node N3 increases as one of the currents I* and I** increases and decreases as the other increases. Even more in general, embodiments are possible in which, irrespective of the possible presence of the inverters 6 and their type, the preliminary output signal is not of a single-ended type, as in the case illustrated in FIG. 3, but of a differential type.
The various embodiments described above can be combined to provide further embodiments. These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
For a better understanding of the present disclosure, preferred embodiments thereof are now described, purely by way of non-limiting example and with reference to the attached drawings, wherein:
FIGS. 1 and 2
show circuit diagrams of comparator circuits of a known type; and
FIG. 3
shows a circuit diagram of an embodiment of the present continuous-time comparator circuit. | |
Special issue on structural impact and crashworthiness: part I a preface
Hadavinia, H. and Elmarakbi, A. (2011) Special issue on structural impact and crashworthiness: part I a preface. International Journal of Vehicle Structures and Systems, 3(2), ii. ISSN (print) 0975-3060
| |
Madelyn Jordon Fine Art is pleased to present STAGING NATURE: A WORLD UNTO ITSELF, a group exhibition of modern and contemporary landscape paintings, works on paper, photography, and sculpture by a diverse group of 12 artists. Selected artists include Milton Avery, Gregory Crewdson, Katharine Dufault, Purdy Eaton, Abigail Goldman, Elissa Gore, Adam Handler, Larry Horowitz, Wolf Kahn, Sandrine Kern, Yangyang Pan, and Susan Wides. The exhibition will run from March 29 – May 11, 2019, with an opening reception on Friday, March 29, 2019 from 6:00 – 8:00 p.m. The public is welcome.
Our natural environment has been a source of inspiration and art making for millenia. This exhibition presents artists whose fresh approaches to a time-honored subject present the natural world in their work as a means to communicate and address environmental, cultural, and aesthetic perspectives. The artwork presented may be copied from reality with varying degrees of accuracy or entirely imaginary but a common thread is a reverence for the natural world and our place in it. From the picturesque, tranquil countryside, unspoiled wilderness, and wild coastlines to the domestic terrain of suburbia, these landscape works capture the essence of place in a moment of time and are able to evoke the sound and smell of its subject, not just the sight.
Milton Avery and Wolf Kahn take a distinctive modernist view of traditional landscape. Among the most prominent American artists of the 20th century, Milton Avery’s work is clearly representational yet is not concerned with creating the illusion of depth or faithful to reality. For example, in ‘Small Trees, Big Mountains,’ Avery renders a Vermont scene in multiple tones of blacks, grays, and whites, imbuing the painting with a sense of quiet isolation. Wolf Kahn, revered for his combination of Realism and Color Field, works mainly in pastel and oil paint. Known for vivid and intensely colored scenes of Vermont, Kahn handles the subject with an appealing spontaneity that keeps his compositions fresh.
Gregory Crewdson and Abigail Goldman create imaginary, narrative suburban landscapes as settings for sculptures and photographs with disturbing, surreal events. Goldman’s ‘dieoramas’ are absurdly cute miniature domestic scenes that depict macabre spectacles of mayhem and murder. Likewise, photographer Gregory Crewdson is best known for staging complex, cinematic scenes to dramatic effect. The two works featured, from ‘Natural Wonder’, have a mysterious quality, focusing on wildlife forced to the edges of suburbia. In one work a small animal den has been constructed with lush flora and fauna where a small group of woodland creatures seem to cohabitate. In the second image, a newly built cape cod style home encroaches upon a forest that is pictured in the foreground.
Larry Horowitz and Elissa Gore are traditional, plein-air artists who are informed by environmental concerns. As a former apprentice for renowned painter Wolf Kahn, Larry Horowitz’s landscape paintings incorporate bold color and a reliance on modernist form. His profound connection to the land accounts for the emotional impact of his work. Says Horowitz, “I paint the vanishing American landscape, which is disappearing as we speak.” Likewise, Elissa Gore is inspired by the Hudson River School ethos of painting, an idealized portrayal of nature. Surrounding the viewer with atmosphere that reflects a transcendental idea of the sublime, Gore’s ultimate aim is to remind us of the importance and necessity of protecting America’s natural beauty.
Purdy Eaton looks to the American landscape and sees a different environment – a contemporary world slowly encroaching upon rural areas and farms. In ‘The Coming Storm’, Purdy Eaton recreates American painter George Inness’s famous painting of the same name, updating the scene to the present, with graffiti on the rocks and a collaged group of happily grazing cows in the foreground. The classic scene is infused with the artist’s ironic sense of humor.
Yangyang Pan, Sandrine Kern and Katharine Dufault present the natural world with an intuitive rather than literal interpretation. Yangyang Pan’s love of flowers is central to her work. Applying expressive gestural brush strokes, her opulent, abstracted, garden-scapes reveal the contrasts she finds in nature. Sandrine Kern places importance on the mood of each work, highlighting surface luminosity through a high contrast color palette. The highly suggestive representation of each natural element is devoid of details, instead preferring to convey their essence. Katharine Dufault has created a new cubist vocabulary with discrete areas of saturated, opaque color juxtaposed with loosely painted areas of transparent color. The artist’s landscape compositions are primarily inspired by the view outside her studio window.
Susan Wides specializes in urban and landscape photography. Her photographs convey the experience of not merely being in a place but of immersing oneself deeply in nature, almost as a sensory experience. The artist adopts a transformative vision of our natural and urban environments, expressing her conceptual and intuitive responses to our relationship with a site. Wides’ photographs rely on light and time to dissolve, intensify, and filter our visualization of a place.
Adam Handler’s faux naïve style of painting is anarchistic, savvy and worldly at the same time. Doing away with perspective and proportion, he opts for bold, loud colors and exaggerated forms. His canvases are vigorously painted with frenetic, thick, jagged brush strokes, giving his work a hurried, improvisational style. Handler incorporates natural motifs such as tulips, the sun, and trees as narrative or decorative elements in his paintings.
About Madelyn Jordon
For the past 16 years, MJFA has been a leading gallery for modern and contemporary art in Westchester County. We welcome the experienced collector, design professional, and general public to discover, view and acquire exciting works of art by established, mid-career, and emerging artists. As a full-service consultancy, we handle resales of art, appraisals, framing, and art conservation. With a master’s degree in art history and museum studies, Ms. Jordon has a deep understanding of the art market and current trends. Her sensibility shines through the gallery’s rotating exhibitions of painting, sculpture, photography, printmaking and installation. | https://artswestchester.org/staging-nature-a-world-unto-itself/ |
Student/teacher ratio is calculated by dividing the total number of students by the total number of full-time equivalent teachers. Please note that a smaller student/teacher ratio does not necessarily translate to smaller class sizes. In some instances, schools hire teachers part time, and some teachers are hired for specialized instruction with very small class sizes. These and other factors contribute to the student/teacher ratio. Note: for private schools, the student/teacher ratio may not include Pre-Kindergarten.
Union
Students at Union High School are 36% Hispanic, 28% White, 16% African American, 8% Two or more races, 8% Asian, 5% American Indian.
Union High School ranks 155th of 307 Oklahoma high schools. SchoolDigger rates this school 3 stars out of 5.
In the 2020-21 school year, 3,304 students attended Union High School. | https://www.schooldigger.com/go/OK/schools/3060001704/school.aspx |
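As a minimal illustration of the ratio calculation described above (not part of the source listing), the sketch below divides enrollment by full-time-equivalent teachers; the 3,304 enrollment figure comes from the listing, while the FTE teacher count is a made-up number used only to show the arithmetic.

```python
def student_teacher_ratio(total_students: int, fte_teachers: float) -> float:
    """Student/teacher ratio = total students / full-time-equivalent teachers."""
    if fte_teachers <= 0:
        raise ValueError("FTE teacher count must be positive")
    return total_students / fte_teachers

# 3,304 students is the published 2020-21 enrollment; the 190.0 FTE figure
# is purely hypothetical, chosen only to demonstrate the calculation.
print(round(student_teacher_ratio(3304, 190.0), 1))  # -> 17.4 students per FTE teacher
```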
This is a prestigious honor that very few scholars receive. This past spring, PJP learned that we had a record number of SEVEN National Merit Commended students in the Class of 2018, and Andrew will move along in the process to be considered as a Finalist. PJP was proud to have two National Merit Finalists in the Class of 2017.
The accolades do not end there for our Student Council Treasurer. This past summer, Andrew learned that he scored a 36 on the ACT. Less than one tenth of one percent of the high school students taking the test score a 36. He also earned his Eagle Scout rank this summer, which is as impressive as his academic accomplishments.
We are so proud of you, Andrew! You are going to make such an IMPACT in the world! | https://www.pjphs.org/apps/news/article/752932 |
In recent years, growing attention has been given to the role that memory plays in transitional historical phases, as a powerful new link between past atrocities and a possible peaceful future.
While memory is now generally recognized as a central factor in transition and a powerful agent of change, its role has yet to be fully defined, especially after the fall of a dictatorship.
Compared with other post-conflict situations, a post-dictatorship society always involves contrasting memories in conflict with one another, since dictatorships can only occur if a significant component of society supports or tolerates the regime. As a result, memory itself becomes a potential battlefield, a contested space of hegemonic discourse, involving a rewriting of past history and a redefinition of possible future developments; it is often the case that memory issues act as barriers to, rather than facilitators of, the establishment of a genuine democratic transition.
Within this general framework, this paper analyses the post-Franco situation in Spain, in relation to the recent Ley de Memoria Histórica approved in 2007. | http://versus.dfc.unibo.it/arc1b.php?articolo=816 |
Shaping the network society : the new role of civil society in cyberspace / edited by Douglas Schuler and Peter Day.
- Publication:
- Cambridge, Mass. : MIT Press, ©2004.
- Conference Name:
- DIAC (Conference) (7th : 2000 : Seattle, Wash.)
- Format/Description:
- Conference/Event
Book
1 online resource (x, 433 pages) : maps
- Subjects:
- Information technology -- Social aspects.
Computer networks -- Social aspects.
Social participation.
Civil society.
- Summary:
- Information and computer technologies are used every day by real people with real needs. The authors contributing to Shaping the Network Society describe how technology can be used effectively by communities, activists, and citizens to meet society's challenges. In their vision, computer professionals are concerned less with bits, bytes, and algorithms and more with productive partnerships that engage both researchers and community activists. These collaborations are producing important sociotechnical work that will affect the future of the network society. Traditionally, academic research on real-world users of technology has been neglected or even discouraged. The authors contributing to this book are working to fill this gap; their theoretical and practical discussions illustrate a new orientation -- research that works with people in their natural social environments, uses common language rather than rarefied academic discourse, and takes a pragmatic perspective. The topics they consider are key to democratization and social change. They include human rights in the "global billboard society"; public computing in Toledo, Ohio; public digital culture in Amsterdam; "civil networking" in the former Yugoslavia; information technology and the international public sphere; "historical archaeologies" of community networks; "technobiographical" reflections on the future; libraries as information commons; and globalization and media democracy, as illustrated by Indymedia, a global collective of independent media organizations.
- Notes:
- "An outgrowth of the Seventh DIAC symposium held in Seattle in 2000"--Introduction.
OCLC-licensed vendor bibliographic record.
- Contributor:
- Schuler, Douglas.
Day, Peter, 1954-
- ISBN:
- 9780262283250
0262283255
1417561793
9781417561797
- OCLC:
- 57183820
- Access Restriction:
- Restricted for use by site license.
This open edition print is produced on textured acid-free rag paper using archival inks, ensuring it will last a lifetime without fading or loss of color.
ABOUT THE ARTWORK
In 2020, as the world grappled with the rise of the pandemic, the sudden turn of events upended our reality and, with it, our perception of time and space. Arising from an intimate emotional experience in the face of a global crisis, Paula del Rivero’s "FINE" series revolves around the physical and temporary limitations we all confronted during the first strict confinement. As the situation evolves, these constraints keep fluctuating, and this new series in progress seeks to explore the fluid margins of time and space, emerging as the artist’s visual research in response to universal questions about our relationship to the notions of finite versus infinite.
ABOUT THE ARTIST
Paula del Rivero grew up in a family of art enthusiasts and nonprofessional artists who instilled in her their passion for contemporary art. From a young age, she got to know the work of the leading exponents of abstract expressionism and geometric abstraction.
After obtaining a degree in Translation, her thirst for knowledge and her inclination to gather a variety of experiences led her to live in Ireland, Amsterdam, and New York. She pursued her studies and worked in diverse industries, such as programming and graphic design.
Whilst looking for her true vocation, she discovered printmaking, studying under Spanish printmaker and sculptor Mar Prat, becoming her studio assistant.
Her artistically inclined upbringing encouraged Paula to finally choose a career in the Arts, growing into a self-taught fine artist focusing on a variety of media.
Her works range from printmaking and watercolors, to textile and back-lit wall sculptures, which she has exhibited at the Spanish Royal Academy of Fine Arts and the Spanish Museum of Contemporary Printmaking.
ABOUT THE PICTURALIST
The Picturalist offers a curated wall art collection featuring International emerging artists from a wide range of artistic backgrounds. | https://www.thepicturalist.com/paula-del-rivero-5.html |
Tec-Centric is a solutions-oriented IT services and outsourcing company. Its services stretch across the full application life cycle, from planning, design and development to implementation, roll-out and maintenance.
Our specialty lies in developing robust, scalable and secure applications. We at Tec-Centric believe that committing to Quality enables us to deliver solutions that give our customers 'Total Satisfaction'. To facilitate this we always depend on proven methodologies. We at Tec-Centric are driven by technology and put a great emphasis on teamwork.
Tec-Centric is a major player in specialized software for various fields of engineering. We offer complete software solutions and services such as customized e-commerce solutions, web-based solutions and web design. We have outstanding capabilities for delivering complex, high-quality software products to the complete satisfaction of our clients on a time-bound basis.
We study our customer requirements, work out the solutions using our Products and Services, and carry out implementation. The work may involve design and development of specific custom software and/or use of state-of-the-art software products in the field of Knowledge Management.
We help our clients with business process re-engineering, constructing robust solutions with the help of carefully chosen tools. The ability to combine technology expertise with business domain knowledge has been Tec-Centric’s main focus and strength.
Priority Tooling Solutions, Inc. has been a manufacturer of high quality plastic injection moulds since 1989.
Redoe Group is a world-class global mold manufacturer specializing in surface critical, multi–color/material and optical injection molds. Our state-of-the-art manufacturing sites and technical centers are…
Rocand has built its reputation and continues to innovate in the manufacturing of extrusion-blow and high tech complex moulds with on-site testing. | https://canadianassociationofmoldmakers.com/company-categories/thermoform/ |
Residential solar arrays are an increasingly significant source of electricity. When integrated with the electric grid, the weather-dependent production characteristics of solar arrays can pose challenges. For example, if solar array production drops, e.g., due to a sudden change in weather, idle power generation plants need to be turned on, at substantial additional expense. Conversely, if solar production soars, the spot price of electricity may drop, leading to losses. For purposes of grid planning, it is valuable to know the size, orientation and distribution of solar arrays in a given region. Additionally, the availability of a nationwide map of solar-powered rooftops can spur further adoption of solar power. This disclosure provides techniques to determine parameters such as size, orientation, and distribution of solar arrays from analysis of overhead aerial imagery.
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License. | https://www.tdcommons.org/dpubs_series/458/ |
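The disclosure above does not describe a specific algorithm, so the following is only a speculative sketch of how size and orientation might be estimated once rooftop solar pixels have been segmented in an aerial tile; the segmentation mask, ground resolution, and area threshold are all assumptions rather than details from the source.

```python
import cv2
import numpy as np

def estimate_solar_arrays(mask: np.ndarray, metres_per_pixel: float):
    """Estimate footprint area (m^2) and orientation (degrees) of candidate
    solar-array regions from a binary mask (1 = pixel classified as panel).
    Producing the mask (e.g. with a learned classifier) is out of scope here."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    arrays = []
    for contour in contours:
        area_px = cv2.contourArea(contour)
        if area_px < 25:  # drop speckle noise; threshold is arbitrary
            continue
        (_, _), (w, h), angle = cv2.minAreaRect(contour)  # rotated bounding box
        arrays.append({
            "area_m2": area_px * metres_per_pixel ** 2,
            "orientation_deg": angle,  # rectangle angle relative to image axes
            "extent_m": (w * metres_per_pixel, h * metres_per_pixel),
        })
    return arrays
```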
The International Ahimsa Foundation USA (IAF) celebrated “A Message of Lord Mahavir” on his 2618th birth anniversary and also commemorated the 150th birth anniversary of Mahatma Gandhi, at the Consulate General of India premises, in New York, on May 7, 2019.
Over 200 guests attended the event, including Congresswoman Carolyn Maloney, Professor Lawrence A. Babb of Amherst College, Assemblyman David Weprin, New York State Senators John Liu, and Kevin Thomas, and Manhattan Borough President Gale Brewer.
Maloney, who introduced a bill in the US House of Representatives to posthumously present the Congressional Gold Medal to Mahatma Gandhi in recognition of his promotion of non-violence, said Gandhi has been a “truly inspirational leader, historic figure”.
Gandhi was “transformational in so many ways” and an inspiration to all Americans and people across the world, Maloney said. She added that Nelson Mandela and Martin Luther King Jr. both attributed their philosophy of non-violence and their leadership to Gandhi, and that both are recipients of the Congressional Gold Medal.
“Already Nelson Mandela and Martin Luther King have received the Medal. It’s only right that the inspirational leader for both of them was Mahatma Gandhi and so he should receive this award,” the Congresswoman added.
Maloney, who spearheaded efforts to have the US Postal Service issue the first Diwali Stamp, urged members of the Indian American community to reach out to the Congress members and friends across the nation to co-sponsor the legislation to honor Gandhi with the Congressional medal.
“We are working to get the Senate sponsor. We must pass it this year and honor his leadership and his gift to the world,” she said, adding that, “we should all work together and have a day of National Service in this special year for Gandhi and to remember him.”
She added: “There is not enough that we can do to remember and say thank you to Gandhi for his life’s work, for his gift of non-violent ways of handling problems. Gandhi brought independence to India with non-violence and recognizing his contributions to values in America.”
Maloney added that India and the US, the world’s largest and oldest democracies respectively, have several commonalities, share the same values and have been allies across the spectrum.
Paying homage to the memory and teachings of Lord Mahavir, she said she had not been aware that one of Mahavir’s most important messages is ‘live and let live.’
“This slogan is one of the most famous quotes in America,” she said.
Other keynote speakers were the Consul General of India in New York Sandeep Chakravorty, and Samani Malay Pragya and Samani Neeti Pragya of Jain Vishwa Bharati of North America.
Chakravorty said Gandhi himself was deeply influenced by the work and principle of civil disobedience of American poet and philosopher Henry David Thoreau, emulating it in his life.
“Gandhi was deeply influenced by Thoreau and it shows in his life and work. Our freedom fighters were also deeply influenced by the American independence movement and the Constitution,” he said.
The celebration of non-violence, ‘A message of Lord Mahavir and Mahatma Gandhi’ began with a lamp lighting ceremony.
Dr. Jain gave welcome remarks, reiterated that the teachings of Lord Mahavir and Mahatma Gandhi on non-violence matter now more than ever, and explained what motivated her to start the IAF.
Dr. Jain, the only female Indian American elected official in New York City, was recently honored by the Society of Foreign Consuls in New York, Inc. on International Women’s Day for her tireless work in the South Asian community.
Special remarks were also delivered by Jessica Schaowski, the New York City Mayor’s office representative, Weprin, Liu, Thomas and Brewer.
The guests were entertained by colorful cultural performances by artists and performers from Manglastak Rhythm Dance Academy, and Angel Shah, Saurya Doshi, Siddharth Doshi and Shiv Ajmeri. The bhajan ‘Raghupati Raghav Raja Ram’ was sung by United Nations International School, directed by Ellen Cava.
‘Ghoomar’ was presented by Rhythm Dance Academy. Artists Khushi Ojha, Jigna Ojha, Nidhi Parikh, Aditi Parikh, Krishna Patel, Jedlina Sarita, Ashmita Saha and Krisha Patel captivated the audience.
Judge Deborah Taylor of The Honorable Society of the Inner Temple presented a video telecast commentary on Gandhi.
Dr. Jain and Vice President of IAF, Dr. Raj Bhayani, honored some dignitaries, including Sadia Faizunnesa, from the Bangladesh Consulate; Deputy Permanent Representative of Japan to the UN Toshiya Hoshino and Mrs. Hoshino; Annavaleria Guazzieri, Head of Education Section, Consulate General of Italy; and Giampiero Biagioli, Prof. of Linguistics at Rutgers University Italian Studies.
IAF was formed in 2012 to spread the message of non-violence and peace from Jain principles to the community. The goal of the foundation has been to promote the teachings of non-violence and peace in thought and action by providing dialogue, peace-building activities, and civic engagement across cultures. The IAF Foundation hopes to encourage students and the community at large to get involved in creating a better world, according to a press release. | http://www.newsindiatimes.com/international-ahimsa-foundation-celebrates-birth-anniversary-of-mahatma-gandhi-lord-mahavir/ |
An evaluation of the national Sporting Chance Program has identified a number of critical success factors associated with the program that have contributed to improved outcomes for Aboriginal and Torres Strait Islander children.
The Sporting Chance Program uses sport and recreation as a ‘hook’ to encourage young Aboriginal and Torres Strait Islander students to engage with formal schooling. The program offers two approaches: school-based Sports Academies and Education Engagement Strategies (EES). In 2011 there were around 11 000 primary and secondary school students involved in the program.
The Academies and EES projects are delivered by project providers, such as the Australian Football League, Clontarf Foundation, Bluearth Institute and Country Rugby League NSW. In late 2010 there were 22 providers operating 54 Academies and five EES projects.
ACER was commissioned to see if the program was achieving its objective of encouraging improved educational outcomes, particularly in relation to attendance, engagement, learning achievement, staying on at school, and improving the level of parent and community involvement in school. The report was released in February 2012.
One thousand students participated in the study. Principals, teachers, sports academy staff, EES staff, parents/carers and community members also provided feedback on the program.
As part of the evaluation ACER reviewed the literature on student engagement and found there is considerable variation in how student engagement is understood and conceptualised and that few studies have examined the concept of engagement in relation to Aboriginal and Torres Strait Islander students.
ACER explored four dimensions of engagement: positive self-concept, belonging, participation and attendance. The data collected during the evaluation suggest that before the cognitive or academic elements of engagement – as reflected in learning achievement – can develop in Aboriginal and Torres Strait Islander students, cultural identity and a sense of belonging must first be present. This sense of belonging relates both to a broader cultural identity and to an affinity with a more specific group, such as an Academy or team.
Feedback from students in the Sports Academies indicated improved levels of confidence and pride from developing new skills, having leadership opportunities, participating in team activities and undertaking other activities in a culturally safe environment. Students participating in the Education Engagement Strategies benefited from being exposed to a range of role models and activities.
A common feature of effective Academies and EES projects was the presence of highly skilled, culturally aware and dedicated staff members with the ability to build strong and trusted relationships with Aboriginal and Torres Strait Islander students.
Other critical success factors identified in the evaluation were the following: a willingness to engage communities in the planning and processes before a project was implemented; strong support from school leadership and staff; effective communication between the school and project provider and sufficient resourcing.
The evaluation also showed the importance for the Academies in particular of having strong external partnerships, such as with community and business organisations, tertiary providers and potential funders, and the importance of regular monitoring and evaluation so improvement can be shown.
Overwhelmingly, the feedback from Aboriginal and Torres Strait Islander students who participated in the evaluation was positive, particularly in relation to their attitudes to school, their self-identity and pride in being Aboriginal, and their self-efficacy as learners.
Read the full report: | https://rd.acer.org/article/sporting-chance-program-receives-thumbs-up |
Non-communicable diseases such as heart disease, cancer and lung disease are the most common causes of death, accounting for 70 percent of deaths worldwide. They are considered ‘non-communicable’ because they are thought to be caused by a combination of genetic, lifestyle and environmental factors and cannot be transmitted between people. However, research from the Humans and the Microbiome programme of the Canadian Institute for Advanced Research (CIFAR) throws this long-held belief into question by providing evidence that many diseases may be transmissible between people through the microbes (including bacteria, fungi and viruses) that live in and on our bodies.
"If our hypothesis is proven correct, it will rewrite the entire book on public health" said Dr B Brett Finlay, CIFAR Fellow and professor of microbiology at the University of British Columbia, who is lead author on the paper, ‘Are noncommunicable diseases communicable?’, published in the journal Science.
The authors base their hypothesis on connections between three distinct lines of evidence. First, they demonstrate that people with a wide range of conditions, from obesity and inflammatory bowel disease to type 2 diabetes and cardiovascular disease, have altered microbiomes. Next, they show that altered microbiomes, when taken from diseased people and put into animal models, cause disease. Finally, they provide evidence that the microbiome is naturally transmissible, for example, spouses who share a house have more similar microbiomes than twins who live separately.
"When you put those facts together, it points to the idea that many traditionally non-communicable diseases may be communicable after all," added Finlay.
Eran Elinav, an author on the paper, CIFAR fellow, and professor at the Weizmann Institute of Science, sees the proposed connection between these points of evidence as an argument for thinking about disease more broadly: "This may represent new opportunities for interventions in some of the world's most common and bothersome diseases. We can now think about modulating environmental factors and the microbiome, not just about targeting the human host."
"This paper provides a provocative new way to think about non-communicable diseases, with important implications for public health," explained Alan Bernstein, President and CEO of CIFAR. "Ideas like this are a great example of what happens when top researchers from around the world work together in an environment of trust, transparency, and knowledge sharing."
The authors added that the paper was a direct result of open and exploratory discussions at a CIFAR program meeting in March 2019.
"The idea developed from the integration of emerging biological data from animal models and from humans, coupled with critical insights with partners dealing with the same concepts, in other contexts, in anthropology and social sciences," said Elinav.
"It started as a thought experiment," explained Finlay. "But then there was huge excitement when we started thinking about what there was evidence for. As the paper started coming together and new pieces of evidence from different specialties came in, discussions were flying around the table."
While there is a lot of excitement about this hypothesis, the researchers are clear that much remains unknown about the mechanisms involved. "We still don't know in what cases transmission increases, or whether healthy outcomes can also be transmitted," added Dr Maria Gloria Dominguez-Bello, an author on the paper, CIFAR fellow, and professor at Rutgers University. "We need more research to understand microbial transmission and its effects."
"We hope the paper will inspire further research into the mechanisms and extent of communicability." concluded Finlay. "We encourage researchers studying any disease to think about what effect microbes may be having."
CIFAR's Humans & the Microbiome program benefits from its international, interdisciplinary network of fellows, advisors and CIFAR Azrieli Global Scholars that includes microbiologists, anthropologists, immunologists, and geneticists, among others. Together they are discovering how microbes that live in and on us affect our health, development and even behaviour. | https://www.seco.org/Obesity-heart-disease-and-diabetes-may-be-communicable-diseases_es_1_132.html |
Sometimes a metamorphosis is obvious.
Today the boys were transformed from shaggy...
...to ultra hip and handsome!
Sometimes a metamorphosis is heartbreaking... in the best way. I've been watching Gavin and Brian's relationship as brothers deepen and grow. All day Brian wanted Gavin to stay still so he could give him a big kiss and a hug.
Gavin finally gave in and made his little brother's day.
Sometimes a metamorphosis is literal.
Santa brought us a Butterfly Garden kit so we can watch the process of change from larvae to butterflies.
Right now they are caterpillars and have been spinning silk around themselves right in front of our eyes. Brian is fascinated and can't understand how they are making themselves change.
He's too young for my philosophical answer - how we all have to make ourselves change many times during our lives. I just stuck with "Ask your Daddy." A much safer answer from me sometimes.
And sometimes a metamorphosis is brilliant. It's a whole new beginning. Yesterday I spent the day with my Mom and my sister, Bean, as we checked out what will eventually be my Mother's new home. She's preparing to move to a wonderful retirement village nearby and we are all just thrilled for her.
This house was the only home I ever lived in before I got my first apartment. I obviously have many, many memories from living there. But as sad as it will be to leave... and as weird as it will be to someday see a new family move in to "our house"... I am so happy she is moving.
Watching my Mom's metamorphosis since my Dad died has been inspiring. (But of course it was - we're Gallaghers and we all know how to rise from the ashes and walk on!) She has handled herself with such grace during what had to be a heart wrenching first year. Thinking of her moving to a new home where she can be surrounded by friends and activity and less housework... it makes all of her children so happy. She can't bring the house with her - and she won't be able to fit all of its contents in her new place - but she can absolutely bring the thousands of happy memories that happened there because of her and my Dad. They were the best "memory makers" for all of their children. And she can leave that house with confidence that she created a happy, loving and stable home for five grateful children that didn't turn out too bad, the lot of them. | http://www.kateleong.com/2013/01/metamorphosis.html |
Some of my fellow bloggers weren’t so sure they wanted to read this book from publisher iUniverse, about survivors of the September 11th, 2001 attacks – they didn’t want to be reminded of that most dreadful day, and the heinous acts a few hateful people took upon so many in our nation. I understand where they are coming from. There is much I wish I could forget about that day too. And yet, I also believe that there is good that comes out of bad, even absolute evil. And this is the viewpoint from which Wendy Stark Healy wrote this book. Living in New York City at the time of the attacks, Wendy met many individuals whose stories needed to be shared. She shares not only the stories of how they survived the 9/11 attacks, but how that fateful day changed the direction of their lives in a positive way.
Life Is Too Short – An Inspirational Read on 9/11
Many of us feel that the 9/11 attacks changed us. Survivors, of course, felt that even more intensely. Wendy’s book shares how many survivors made major life changes in the days and years that followed. For some, it involved a physical change of address to a part of our country with a slower paced lifestyle. For others, their desire to help however they could meant their professional lives took a complete turn – big money Wall Street Traders and PR gurus became pioneers in non-profit foundations that needed leadership.
The common thread in all of these compelling stories? Each individual had a desire to honor those who lost their lives on September 11th through helping those who needed a voice, a hand, a listening ear, or a shoulder to lean on. And it wasn’t necessarily a conscious decision at the time, but rather a compelling feeling of needing to get involved.
My Thoughts on Life is Too Short: Stories of Transformation and Renewal after 9/11
Did I shed a few tears while reading this book? Yes. Some tears were for the individual stories of pain and loss on 9/11. Others were for the beauty of how these folks, so directly affected by the attacks, were able to find good in life and to transform others’ lives for the better, while inadvertently doing the same for themselves! Amazing, and what great gifts these individuals have been to those around them!
So how are they different from you and me? They aren’t, and that’s another great take away. They were ordinary citizens, living their lives, and in an instant, got a life changing wake up call. The author remarks that many of us questioned where God was in the days following September 11th. It’s very clear after reading the individual stories in her book, that God was working through these folks, and so many others.
I found this book incredibly touching, inspiring, and thought-provoking. Its stories are wonderful reminders of the tenacity of the human spirit, and of the power each of us has to effect good, even in the face of seemingly insurmountable evil. Each story was just a few pages long, which made for an easy read over several evenings when I had a few minutes of down time. I was so moved by “Life is Too Short” that I wrote a “Honoring September 11th” post encouraging each of us to find ways to honor those who perished on 9/11/01 in our daily lives. I would recommend you add this book to your personal library. It is a great reminder that each day is indeed a blessing!
Buy Life is Too Short
You can buy Life is Too Short: Stories of Transformation and Renewal After 9/11 on iUniverse.com, or Amazon.com.
Win Life is Too Short: Stories of Transformation and Renewal After 9/11 (3 prizes) (Cl0sed)
Updated: 10-1-11 Winners Announced: Greta B, Phyllis Duer, and Charlotte Varner
iUniverse.com is giving you the chance to win 1 of 3 copies of Life is Too Short: Stories of Transformation and Renewal After 9/11.
Important Links to enter this contest: AkronOhioMoms.com on Facebook | AkronOhioMoms.com on Twitter | AkronOhioMoms.com Google Friend Connect | iUniverse Website | iUniverse on Facebook | iUniverse on Twitter
This promotion is in no way sponsored, endorsed or administered by, or associated with, Facebook. In order to comply with the latest Facebook Promotion Guidelines (revised 5/11/11), the following statements are true:
* Giveaway participants release Facebook from any responsibility whatsoever.
* Giveaways on this blog are in no way sponsored, endorsed or administered by, or associated with, Facebook.
* Giveaway participants are providing information to this blog and giveaway sponsors only; not to Facebook.
30 Day Winning Rule applies. For complete contest rules, please see our Contest Statement and Blog Disclosure.
Contest ends at 11:59pm EST on September 27, 2011 when a winner will be drawn at random. I will notify the winners and they will have 36 hours to respond!
This was not a paid post. My own opinions were used based on my perceptions and experience. Thank you to iUniverse publishers who provided the product for review and giveaway. | https://www.akronohiomoms.com/life-family/life-is-too-short-stories-of-transformation-and-renewal-after-911-review/ |
The 3rd Psychological Operations Battalion is a subordinate unit of the 4th Psychological Operations Group.
The 3rd POB serves as the PSYOP Dissemination Battalion (PDB) for the 4th PSYOP Group, providing media expertise while deployed. It is a functionally oriented dissemination battalion whose three major companies possess the 4th Psychological Operations Group's organic print, radio and television broadcast, and audio-visual production and communication capabilities.
The 3rd PSYOP Battalion is capable of deploying these capabilities, or products can be produced by the battalion at Fort Bragg, NC, and shipped to the forward-deployed PSYOP detachment in theater. If local host-nation support agreements are in place, PSYOP personnel can print on foreign presses and broadcast from existing stations in theater. The battalion is made up of a Headquarters Company, a Print Company, a Broadcast Company, and a Distribution Company.
The 3rd POB is responsible for all radio, television and print assets for developing PSYOP products such as leaflets, posters, handbills, newspapers, radio and television broadcasts.
Description: A silver color metal and enamel device 1 1/8 inches (2.86cm) in height overall consisting of a shield blazoned: Vert, a stylized sword Argent grip Gules between two arced lightning flashes Or. Around the bottom of the shield a silver metal scroll inscribed "POWER TO INFLUENCE" in red.
Symbolism: Jungle green and silver gray are the colors traditionally used by Psychological Operations units. The sword represents military preparedness and has three combined cutting edges to denote teamwork and underscore the battalion's numerical designation. The flashes are arced, simulating a circle, highlighting the importance of each army element's role in total combat readiness. The sword and flashes together reflect the three sources and types of propaganda.
Background: The distinctive unit insignia was authorized on 16 Nov 1995.
Shield: Vert, a stylized sword Argent grip Gules between two arced lightning flashes Or.
Crest: From a wreath Argent and Vert a double-headed chess knight Argent and Sable garnished Gules superimposed by a palm tree Or.
Shield: Jungle green and silver gray/Argent are the colors traditionally used by Psychological Operations units. The sword represents military preparedness and has three combined cutting edges to denote teamwork and underscore the battalion's numerical designation. The flashes are arced, simulating a circle, highlighting the importance of each army element's role in total combat readiness. The sword and flashes together reflect the three sources and types of propaganda.
Crest: The double-headed chess knight symbolizes strategy as well as the dual nature of propaganda and psychological operations. It is banded in red to commemorate the unit's Meritorious Unit Commendation and is superimposed by a palm tree for war service in Southwest Asia.
Background: The coat of arms was authorized on 16 Nov 1995. | http://psywarrior.com/3rdpob.html |
We are certainly living in interesting times. There’s even an acronym some people are using to describe it: VUCA, which means Volatile, Uncertain, Complex and Ambiguous.
According to Sir Ken Robinson, international creativity, innovation and education expert: “Humanity now faces challenges that are unprecedented in our history. These challenges are being driven by many interacting forces. They include the exponential rate of technological change, the massive shifts in population, and our unsustainable demands on the earth’s resources.”
Right now, of course, we don’t know where any of this will lead and how, specifically, things will change, we just know that they will and we know we need to be prepared.
In its 2016 Future of Jobs Report, The World Economic Forum predicts that some job families are expected to expand, while others will shrink. Computer, mathematical, architecture and engineering jobs are all expected to grow, whilst manufacturing, production, office and administrative jobs are expected to decline. If you’re curious to find out more about what the immediate future of work looks like, check out the 2016 Future of Jobs report.
And if you want to see how your own job fares you can check out: Will robots take my job. Yes, really.
No one knows the precise impact artificial intelligence is likely to have but we can be sure that vast numbers of jobs will be eliminated because of it. It’s not all bad news though, as other jobs are expected to be created in their place, as has happened throughout human history with the contraction of some industries and the creation and expansion of others. What might these new jobs even be? And how do you prepare for a career in an industry that hasn’t yet been invented?
Although we can’t know the answers to these questions with absolute certainty, here are the top skills that the Future of Jobs Report suggests might be most in demand in 2020 (not that far away) and beyond:
- Complex problem solving
- Critical thinking
- Creativity
- People management
- Co-ordinating with others
- Emotional Intelligence
- Judgment and decision-making
- Service orientation
- Negotiation
- Cognitive flexibility
So, as you can see it’s not all about tech and coding.
Many of these transferable skills have a significant “human” component to them, so beefing up our emotional intelligence and “people smarts” will certainly stand us in good stead.
Fortunately our education systems are recognising the need to foster skills in our young people that go well beyond literacy and numeracy, with many schools in New Zealand and elsewhere adapting to help develop competencies in many of these areas.
In tandem with all of these changes, we are seeing a migration towards the “gig economy” which involves short-term contracts and freelance work, with fewer and fewer people being employed in traditional full-time roles. With this can come reductions in job security, employee protections and benefits such as paid annual leave and sick pay. Zero-hour and “casual” contracts are becoming more commonplace.
Despite the uncertain times in which we’re living, it’s important to realise that there are many things you can do to manage your career path.
Tune in next time when we’ll talk about what all of this might mean for you and how you can you give yourself the best chance of career success. | http://www.clearchange.co.nz/the-future-of-work/ |
Team collaboration achieves more than a single individual can on his or her own. As new disruptions shake the market and processes grow even more complex, work tasks are becoming increasingly challenging, leaving you dependent on different people with a wide range of expertise.
But here is the catch. This kind of workplace diversity can also create barriers to efficient team collaboration.
According to research published in Harvard Business Review by Lynda Gratton and Tamara J. Erickson, team members collaborate more easily and naturally if they perceive themselves as similar. In other words, the greater the diversity of background and experience, the less likely team members are to share knowledge or show other collaborative behaviors.
Here’s the troubling paradox: on one hand, it’s proven that similar people work well together. On the other hand, as work tasks continue to get more complex, people with different kinds of expertise and backgrounds are forced to work together.
How do you overcome this challenge? Here are five techniques to try:
1. Lead By Example
Leaders rely on the knowledge of their team members, which is why collaboration is becoming an essential ingredient for success. Therefore, your leadership style should reflect this. In general, collaborative leadership is all about the skillful management of relationships that enable the team members to succeed individually while also accomplishing a mutual objective. Although it’s easier said than done, it’s clear more focus should be directed toward resourceful relationship management.
2. Share Clearly Defined Team Objectives
To move toward one direction, people need to clearly understand the destination. Objectives and Key Results (OKR), a technique used by Google to define and track objectives and their outcomes, is worth a try. Its main goal is to connect company, team, and individual objectives with measurable results. A great value in OKR is its ability to clearly communicate leaders’ expectations and connect different-level goals into one whole. Since these goals are kept public in front of everyone, the teams can move in one direction and know what others are focusing on.
3. Promote Efficient Team Meetings
According to a survey conducted by Microsoft Office, professionals waste up to 3.8 hours a week on unproductive meetings. No matter what you call them–status updates or team gatherings–these meetings will be seen as a waste of time if there is no value in them. You can turn this around with thorough preparation, because if you fail to plan, you are planning to fail.
4. Make Individual Progress Visible To The Whole Team
If there is one effective process that can be easily taken from companies like eBay and Skype, it is the Progress, Plans, Problems (PPP) process. This process is a management technique for recurring status reporting. The PPP process provides a great overview of how everyone on the team is doing. It communicates three essential parts about every team member: biggest achievements, current goals, and major challenges.
5. Make It Fun
Working in a team should not feel like an obligation. Integrating a little bit of fun and humor to all of this can only make the team collaboration more efficient. Socializing and getting to know each other makes the team more dynamic and connected.
—Külli Koort is a fierce proponent of achieving more with less frenetic effort. That’s why she works at Weekdone, a startup that builds status report software for managers who wish to gain more insights to their teams. You can connect with her on Twitter. | https://www.fastcompany.com/3037562/5-techniques-to-make-teamwork-more-manageable |
The purpose of this assignment is to create a Team Charter that will guide the conduct and manage the work for the creation of the NR-360 Technology Presentation.
Requirements and Preparation NR 360 Unit 2 RUA Assignment
1. Successful teams begin with guidelines that help to manage their work. For this class, you and your teammates will create a set of rules called a Team Charter.
a. As a team, complete all the sections listed below to build your team charter
b. One member of each team will be designated as the team leader and will be responsible for submitting the team charter document to the Unit 2 dropbox at the time specified by your course schedule/campus/instructor.
c. All team members will indicate agreement on the charter by initialing it to confirm their understanding of the information within. The assignment is worth 200 points.
CHAMBERLAIN CARE Component: Team members will have the opportunity to practice active respect for other team members, consideration of one another, and communication. Be aware that team members come from different backgrounds and have various schedules and time demands, so working collaboratively is crucial.
a. Timely submission of assignment components by individual team members is also important to the success of the team. Be factual, professional, and supportive about participation or absence of participation when you communicate with each other.
b. Teams that encounter occasions when team members have not submitted or completed assignments should communicate supportively and openly with those team members and faculty, with the aim of caring for peers in completing the assignment.
c. Communication is a major component of nursing and healthcare, and the foundation for this assignment. Developing good communication and team-building skills is essential for nurses. Nurses work with many people who do not have the same thoughts and/or methods of approaching a task. Therefore, all team members must collaborate to be successful. Remember to use the TEAM Collaboration Discussion Area to communicate.
d. Follow up with one another at least weekly to be sure that everyone is on track for the final submission of RIJA assignment sections to the team leader, who will prepare the final presentation for submission/presentation.
Sections of the Team Charter
a. Section A, Individual Characteristics: Each team member will list strengths or contributions planned for the project and describe areas that he or she plans to develop while participating in it.
b. Section C, Team Goals: This section lists the team’s goals and potential barriers that might prevent achieving them.
c. Section D, Project Plan: This section will include the team plan. | https://www.termpapersforme.com/nr-360-unit-2-rua-technology-presentation-team-charter/
Who We Are~
Enjoys knitting, playing games, going to concerts, watching movies, and reading.
Our Mission~
To engage our community, to encourage discovery and promote literacy by offering life-long learning opportunities and informative adventures.
Enjoys sharing smiles, crafts of all sorts, cooking, reading, playing games, and family time.
Hobbies are reading, writing, and drawing. Also enjoys chilling with friends and playing video games, especially PlayStation Virtual Reality.
Enjoys quilting, reading and napping with her three dogs.
Hobbies include reading, gardening, knitting (crafting), hiking, and local history. | http://www.marcolibrary.org/about.html |
Baby Doll Clothes
June 13 2011
Author: HoneyR1949
Several outfits crocheted for 8 inch baby doll. ... | http://www.hobbymates-club.com/crochet/ct598.html
This is PERFECTLY written – I feel like these days we always have the need to even ‘write aesthetically’ in the sense that it should read well. But this – it reads REALISTICALLY, and it rings home and that’s what matters. Absolute love.
hello, hello! thank you so much for this c: i get that–sometimes i feel compelled to switch out a plain word for an ‘aesthetic’ word, etc. sleepiness/insomnia doesn’t care much for that, though, lol. reminds me of a quote on how we’re all more honest late at night b/c we’re too sleepy to be anything otherwise. tangent aside, thank you for the read and comment, noor! 🙂 | |
This paper shows the relationship between a Macroscopic Fundamental Diagram (MFD) and congestion patterns in a general network. Specifically, for a given congestion pattern, we derive the network throughput analytically by solving a dynamic user equilibrium (DUE) problem under a steady-state condition. Using this analytical formula, we can investigate the effects of congestion patterns, network configurations, and route choice behaviors on the network throughput. As an application of this method, we show through simple examples that different OD structures may exert different effects on the network throughput. | https://tohoku.pure.elsevier.com/en/publications/effect-of-origin-destination-structures-on-network-performance-so
Interested in becoming a volunteer? For those people with the time and the will to learn and share, unique opportunities are available within different areas of the Museum. The interests and skills of all applicants are carefully matched with suitable volunteer opportunities. Find out more about current volunteer vacancies.
This team of volunteers undertake much of the daily administration needs in our Reception area. Tasks can include triaging phone and email queries, welcoming visitors, filing and general office tasks, and sharing your knowledge and enthusiasm of the museum. Training and refreshers are provided.
We are looking for people with excellent communication skills, a great phone manner, and a genuine desire to work in a team and help others. The role involves assisting our Administration team with a variety of tasks, and requires familiarity with Microsoft Office software.
Due to the training provided we are looking for people who can make a real commitment to this role. The administration support team work Monday to Friday, on a static roster of one half day shift per week, either 9.30am to 1.30pm, or 11am to 3pm.
These volunteers are often the first face seen by our visitors. They welcome people entering the Museum, and help them to make the most of their visit, by providing information and advice about what is going on throughout the building.
People with great communication skills who enjoy helping others and sharing their enthusiasm for the Museum and for Auckland. People who are fit and active, able to be on their feet moving around our foyer areas, and well presented.
This is a role with considerable ongoing training, so we ask these volunteers to stay in the role for years, rather than months. We ask these volunteers to make a commitment of at least half a day per week, although we can accommodate alternate weeks in some cases.
Shifts run from 9.40am to 1pm, and 12.45pm to 4pm, seven days a week.
We have many school groups who work with our Learning and Engagement team here at the Museum, and Education Volunteers work with our Educators in a support role. They work with teachers and with groups of children, and also help with making and organizing resources for programmes.
People with great communication skills who enjoy being part of a team and working with children of all ages. Fitness is essential, and a sense of humour.
We ask these volunteers to come in one school day each week, during school terms. Occasionally programmes run during holiday periods, but mainly it is term time, and the day runs from 9.30am to 2pm.
Volunteer Guides take visitors on guided Highlights Tours and specialist tours of the Museum. They bring to life our treasures and our stories while guiding international tourists as well as locals around the building.
Guides should have a real interest in history and in Museums, and have excellent communication skills and a passion for continuing to learn. The training is extensive and requires full commitment in order to successfully complete the course.
We run only one intake of guides per year, and successful guides are asked to commit to guiding for at least two years, and to be available for a minimum of one tour per week. Guides who work full time and only volunteer on weekends are asked to commit to alternate weeks. As well as being regularly rostered, guides are expected to make themselves available for occasional group tours as they arise, and as they can fit around other commitments.
Many areas of the Museum are looking for volunteers who can assist with administrative tasks, and are prepared to be meticulously accurate and well organised.
People with good written and spoken English, and are computer literate, and self motivated. The ability to learn and to work independently is also very important.
We would normally ask for at least a half day per week as the basic commitment, and more opportunities exist during the working week than in the weekends.
This team of volunteers are on call, and respond to calls for help from various parts of the Museum, and the work is varied. It may be preparing resources, putting together files, photocopying, filing, stuffing envelopes or even skewering marshmallows.
Flying Squad members must be on email, and check their accounts on a daily basis.
The job suits people who want to support the Museum, but cannot commit to a regular time slot, and would prefer to be on call. Normally it involves coming in as a small team to work for a couple of hours on administrative tasks. The minimum input required is to offer to help at least 3 or 4 times in the year, but we love it if you can commit to more.
We also have other roles available for volunteers within the Museum, and if you have a particular passion for military history, or natural history and the sciences, or the library, it is worth while submitting an application just in case a vacancy for a Volunteer in those areas arises. Our volunteers fulfil many varied roles throughout the organization, and bring a wide range of skills with them.
The range of roles undertaken by volunteers in the Museum is vast, so no matter what your skills and interests, it is possible that somebody on the staff is waiting for you to come along and volunteer. | http://www.aucklandmuseum.com/your-museum/get-involved/volunteer/types-of-volunteers |
Babak Mahdavy is a PhD student in TEFL at Tarbiat Modares University and is currently teaching English at Qaemshahr Islamic Azad University. He received his MA in TEFL from Tehran University. His research interests include language testing and assessment, language socialization, identity issues and vocabulary acquisition.
TOEFL and IELTS listening tests have been said to differ in terms of theoretical foundations, research background, history and appearance, and it has also been proposed that IELTS is more content-based, task-oriented and authentic (Farhady, 2005; Kiany, 1998). In this study, the cognitive demands of the two tests were compared by giving 151 language learners an actual TOEFL listening test and 117 of the same participants a specimen IELTS listening test. The participants were also given a Multiple Intelligences Development Assessment Scales (MIDAS) questionnaire. The results suggest that despite the differences between the IELTS and TOEFL listening tests, scores on each intelligence correlate positively with listening scores on both tests, and only linguistic intelligence has a statistically significant correlation with listening proficiency as measured by TOEFL and IELTS. Furthermore, the results of regression analysis show that linguistic intelligence is included as a predictor of TOEFL and IELTS listening scores while other intelligences are excluded. The results provide quantitative evidence that only linguistic intelligence makes a statistically significant contribution to listening proficiency and that, despite the differences between the two listening tests, they place only a small linguistic demand on the test takers. The article suggests that English language teachers provide further assistance to language learners who might not enjoy a high level of linguistic intelligence. | http://asian-efl-journal.com/main-editions-new/the-role-of-multiple-intelligences-mi-in-listening-proficiency-a-comparison-of-toefl-and-ielts-listening-tests-from-an-mi-perspective/index.htm
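As a rough sketch of the kind of analysis reported above (Pearson correlations between each MIDAS intelligence score and listening scores, followed by a regression), the snippet below assumes a hypothetical CSV with one row per participant; the file name, column names, and the use of an ordinary least-squares fit rather than the study's exact stepwise procedure are all assumptions, not details taken from the article.

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Hypothetical data layout: one row per participant, MIDAS subscale scores
# plus a listening score (TOEFL here; the same code would apply to IELTS).
df = pd.read_csv("midas_listening_scores.csv")  # hypothetical file name
intelligences = ["linguistic", "logical_mathematical", "spatial", "musical",
                 "bodily_kinesthetic", "interpersonal", "intrapersonal", "naturalist"]

# Pearson correlation of each intelligence with listening proficiency
for col in intelligences:
    r, p = stats.pearsonr(df[col], df["toefl_listening"])
    print(f"{col:22s} r = {r:+.2f}  p = {p:.3f}")

# Multiple regression: which intelligences predict listening scores?
X = sm.add_constant(df[intelligences])
model = sm.OLS(df["toefl_listening"], X).fit()
print(model.summary())  # check which predictors reach statistical significance
```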
This paper explores the potential application of the Social Licence to Operate the Road System (SLORS) to assist in determining the public’s support, tolerance, or opposition to road policy issues for different age and gender groups.
Report
Generation expendable? Older women workers in the pandeconomy
Understanding the importance of secure, safe and flexible work as women age, this research project sought to benchmark work outcomes and conditions under the COVID-19 pandemic and to provide a platform for less heard voices.
Report
What's age got to do with it?
This report identifies stereotypes, attitudes and beliefs about age that prevail in Australia, and captures some of the ways in which people in Australia understand and experience their impacts.
Report
State of the (older) nation 2021
This report on the experiences and views of Older Australians tells a story of an older generation often experiencing ageism, who less often perceive themselves as happy, healthy, financially secure or connected to community.
Report
Employing and retaining older workers
This report provides insight into the current employment climate for older workers, and the shift in perceptions around Australia’s ageing workforce.
Report
Global report on ageism
This report outlines a framework for action to reduce ageism including specific recommendations for different actors. It brings together the best available evidence on the nature and magnitude of ageism, its determinants and its impact. The report also outlines what strategies work to prevent and...
Report
Opinions and experiences of unequal pay and pay transparency
This report presents the results of a survey completed with a nationally representative sample of New Zealanders in order to establish their opinions and experiences of unequal pay and pay transparency in Aotearoa New Zealand.
Report
Community co-design of digital interventions for primary prevention of ageism and elder abuse
This report evaluates the consultation and co-design processes used to create the digital intervention and how this methodology impacted on the creation of the approach to storytelling about ageism as a driver of elder abuse.
Report
Measuring social inclusion
The Inclusive Australia Social Inclusion Index provides a unique overview of social inclusion in Australia by covering a wider array of social inclusion issues in one index.
Fact sheet
Legal protections for mature workers
This fact sheet outlines the national and state legislation that protects mature workers from discrimination and upholds their right to seek flexible work arrangements. It also addresses the way workplace health and safety laws can be uniquely relevant to older Australians.
Guide
Multigenerational workforces: a guide to the rights of older workers under the Age Discrimination Act 2004 (Cth)
This guide is issued under section 53(1)(f) of the Age Discrimination Act 2004 (Cth) (the Act). It is designed to provide employers and other work providers with information about the operation of the Act and provide practical guidance about promoting the inclusion of older workers...
Journal article
A compromised balance? A comparative examination of exceptions to age discrimination law in Australia and the UK
Drawing on case studies of exceptions to age discrimination law in Australia and the UK, this article considers the normative position on age equality law that emerges from these legal boundaries.
Article
Ageism isn’t the only barrier keeping older workers out of jobs
This article is based on 'What’s age got to do with it? Towards a new advocacy on ageing and work', a report recently published by Per Capita.
Article
Why are we abusing our parents? The ugly facts of family violence and ageism
Although population ageing is overwhelmingly a good thing, representing a healthier population overall and a longer more productive lifespan for most, it also means an increase in elder abuse. There is little public awareness of the extent and nature of elder abuse. Consequently, it is...
Fact sheet
Fact Check: Is workplace discrimination against older people costing $10 billion a year?
Age Discrimination Commissioner Susan Ryan claims Australia is losing about $10 billion a year by failing to employ more people over the age of 55.
Report
Ageism in travel insurance
This project aims to examine whether older Australians face age discrimination when purchasing travel insurance products, through an understanding of their experiences with travel insurance and analysis of selected travel insurance offerings. Two phases of consumer research were undertaken to explore the attitudes and experiences...
Conference paper
Hope I die before I get old: the state of play for housing liveability in Australia
This paper presents findings from a study of the mass-market house building industry in Australia, which appears reluctant to incorporate inclusiveness and universal design principles into project homes.
Report
The elephant in the room: age discrimination in employment
Why are we overlooking older workers? This report, conducted for National Seniors Australia by the late Professor Sol Encel, explores that question and the results are not pretty. It finds that age discrimination is widespread - in recruitment, in promotion, and during times of retrenchment...
Report
Age discrimination - exposing the hidden barrier for mature age workers
The purpose of this paper is to look at and raise awareness of the issues of ageism and unlawful age discrimination against mature age workers within the workplace. It is a form of discrimination that appears to sit quietly – it can go unnoticed and... | https://apo.org.au/subject/95756 |
Open Vs. Closed Kitchen Designs
As you plan your kitchen remodel, one of the most impactful decisions that you should make is whether to pursue an open kitchen floor plan or a closed plan. What do these terms mean in real life? And what are the benefits and drawbacks of each? Discover some answers to your questions.
What Is an Open Kitchen or a Closed Kitchen?
The idea of an open or closed kitchen generally refers not to the room itself but to how it integrates with surrounding spaces. Open kitchens reduce the number of walls so that one can see and move in and out of the kitchen without hindrance. A closed kitchen is designed as a fully separate room (although often without a door) from the living room, dining room, or family room nearby.
What Are the Benefits of an Open Kitchen?
Open kitchen layouts have risen in popularity for a few reasons. First, they often look and feel larger and airier. The lack of walls tends to provide more natural lighting from all angles and make the space feel more expansive — even if the actual footprint of the room remains the same.
This same lack of walls allows the kitchen to more fully integrate into the other public areas. Those working in the kitchen can participate in group activities and conversations with guests more easily. Cooks can see their kids playing in other sections of the open space. And you have more room for congregation in and around the kitchen when you entertain.
What Are the Benefits of a Closed Kitchen?
As opposed to open layouts, a closed layout provides more privacy. Walls and doors naturally hide things you don’t want guests to have full view of, such as kitchen messes or clutter. This makes it easier to entertain in the other public rooms without worrying about your kitchen. Barriers between rooms also cut down on noises and smells emanating either from the kitchen or from adjacent rooms into the kitchen.
In addition, a separated kitchen may be better suited to workspaces and storage areas. Consider that the vast majority of both surfaces and storage in a kitchen are attached to a wall. This means that the more walls you maintain in the design, the more cabinetry and other useful built-in options you can install as well.
Should You Choose Open or Closed?
Clearly, both kitchen styles have their pros and cons. To answer the question of which is right for you, you should first analyze how you use the kitchen. If you enjoy doing most of your cooking or baking alone, you may want the benefit of added privacy behind walls. And if you want to dissuade guests from wandering into the kitchen when entertaining, the less welcoming layout may help.
On the other hand, if you like to entertain larger groups — or guest lists of varying sizes — or different types of parties, the open floor plan could provide more space and flexibility. It often pairs particularly well with homeowners who like to entertain in both indoor and outdoor integrated settings.
Other practical considerations of the room itself are also important. You may not be free to remove load-bearing walls or other support structures. Or you may need to maximize the work areas in a smaller kitchen. These issues are unique to each kitchen.
Where Should You Start?
To find the right layout for your needs and interests, start by meeting with the kitchen renovation pros at DESIGNfirst Builders. Our experienced team will help you assess your kitchen options, think about what you want and need from your room, and avoid common pitfalls no matter what floor plan you choose. Call today to make an appointment. | https://designfirstbuilders.com/open-vs-closed-kitchen-designs/ |
BACKGROUND OF THE INVENTION
SUMMARY OF THE INVENTION
DETAILED DESCRIPTIONS
1. Field of the Invention
The present invention relates to a robot simulation device for executing simulation of a robot, and in particular, relates to a simulation device for generating a motion path by which the robot can be prevented from interfering with peripheral equipment.
2. Description of the Related Art
In the prior art, a simulation device has been proposed, wherein a motion path of a robot is generated so that the robot can carry out a predetermined operation while avoiding interference with peripheral equipment. For example, JP 2003-091303 A discloses a method and a device for automatically setting a narrow area motion path for pulling an end effector of a multi-joint robot from a working point on a workpiece, and a wide area motion path for moving between the working points.
Further, JP 2000-020117 A discloses a method and a device for planning a motion path of a multi-joint manipulator (robot), wherein an orientation path of a robot hand is calculated by using a potential field method.
The potential field method as described in JP 2000-020117 A is a known algorithm used to generate a robot path, etc. Concretely, in the method, the space where the robot is operated is divided into grids, and the energy of each grid is determined so that a grid where an obstacle exists has high energy, and the energy of a grid decreases as its distance from the obstacle increases. Then, by moving the robot from the current position to a grid having low energy, a path by which the robot can avoid interference with the obstacle is generated.
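For readers unfamiliar with the potential field method, the following toy sketch illustrates the grid/energy idea described above on a 2D grid: obstacles raise the energy of nearby cells, the goal lowers it, and the path greedily steps to the lowest-energy neighbouring cell. The grid size, the energy function, and the greedy descent rule are illustrative assumptions, not the specific method of JP 2000-020117 A.

```python
import numpy as np

def potential_field_path(grid_obstacles, start, goal, size=20, max_steps=400):
    """Toy 2D potential field: obstacles add high energy, the goal adds an
    attractive slope, and the path greedily steps to the lowest-energy
    neighbouring cell."""
    ys, xs = np.mgrid[0:size, 0:size]
    energy = np.hypot(xs - goal[0], ys - goal[1])          # attraction toward the goal
    for ox, oy in grid_obstacles:                          # repulsion near obstacles
        d = np.hypot(xs - ox, ys - oy)
        energy += 50.0 / (1.0 + d)
    path, pos = [start], start
    for _ in range(max_steps):
        if pos == goal:
            break
        x, y = pos
        neighbours = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0)
                      and 0 <= x + dx < size and 0 <= y + dy < size]
        pos = min(neighbours, key=lambda p: energy[p[1], p[0]])
        path.append(pos)
    return path

print(potential_field_path([(10, 10)], start=(2, 2), goal=(18, 18))[:5])
```

A purely greedy descent like this can stall in a local minimum or oscillate between cells, which is precisely the weakness of the potential method discussed next.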
However, in general, when a complicated motion path (or a path for avoiding interference) is calculated by using the potential method, calculation for generating the path may be trapped in an endless loop, or an optimum solution may not be obtained due to a local solution. In such a case, an operator must change a calculation condition or adjust a generated motion path, which is cumbersome and requires a high level of skill.
Therefore, an object of the present invention is to provide a robot simulation system capable of automatically generating a practical motion path of a robot for avoiding interference, regardless of the level of skill of an operator.
According to the present invention, there is provided a robot simulation device for carrying out a simulation of a robot by positioning three-dimensional models of the robot and a peripheral object arranged about the robot in the same virtual space, the robot simulation device comprising: a motion path obtaining part which obtains a first motion path of the robot by executing a simulation of a motion program of the robot; a teaching point specifying part which detects whether interference between the robot and the peripheral object occurs when the robot is moved along the first motion path, and specifies a first teaching point corresponding to a teaching point immediately before the interference occurs and a second teaching point corresponding to a teaching point immediately after the interference occurs; a motion path generating part which automatically adds at least one third teaching point between the first and second teaching points and generates different second motion paths by which the interference between the robot and the peripheral object does not occur, the third teaching point being separated from the first or second teaching point in a search direction determined by a random number by a search distance determined by a random number; an evaluating part which evaluates each of the second motion paths based on at least one previously determined parameter; and a motion path selecting part which selects an optimum motion path of the robot from the second motion paths based on an evaluation result by the evaluating part.
In a preferred embodiment, the motion path generating part detects whether the interference between the robot and the peripheral object occurs when the robot is moved along a motion path from the first teaching point to one intermediate teaching point separated from the first teaching point in a search direction determined by a random number by a search distance determined by a random number, and inserts the intermediate teaching point between the first and second teaching points when the interference does not occur; and wherein the motion path generating part detects whether the interference between the robot and the peripheral object occurs when the robot is moved along a motion path from the lastly inserted intermediate teaching point to the second teaching point, and generates the second motion path by repeating a process for inserting a new intermediate teaching point separated from the lastly inserted intermediate teaching point in a search direction determined by a random number by a search distance determined by a random number, until the interference does not occur.
In this case, the motion path generating part may have at least one of functions to: set an initial state in which a probability that a random number for determining the search direction, by which a distance between the intermediate teaching point and the second teaching point is smaller than a distance between the first teaching point and the second teaching point, is selected, is higher than a probability that a random number for determining the search direction, by which the distance between the intermediate teaching point and the second teaching point is larger than the distance between the first teaching point and the second teaching point, is selected; and detect whether the interference between the robot and the peripheral object occurs in the motion path from the first teaching point to the intermediate teaching point, and make a setting in which a probability that a search direction by which the interference does not occur is selected in a next or later searching is higher than a probability that a search direction by which the interference occurs is selected in the next or later searching.
Further, in this case, the robot simulation device may further comprise a teaching point deleting part which deletes a surplus teaching point from the third teaching point, wherein the teaching point deleting part may have at least one of functions to: store a movement direction and a movement distance of each of the inserted third teaching points, and, when the movement directions of two consecutive third teaching points on the second motion path are the same, then combine the two consecutive third teaching points into one new teaching point by adding the movement distances of the two consecutive third teaching points; when movement directions of two consecutive third teaching points on the second motion path are opposed to each other, combine the two consecutive third teaching points into one new teaching point or delete the two consecutive third teaching points by canceling the movement distances of the two consecutive third teaching points each other; and detect whether the interference between the robot and the peripheral object occurs in relation to a path connecting two arbitrary inconsecutive third teaching points on the second motion path, and, when the interference does not occur, delete a teaching point between the two arbitrary inconsecutive third teaching points.
In a preferred embodiment, the at least one parameter includes: (a) a motion time of the robot; (b) a minimum distance between the robot and the peripheral object; (c) an amount of heat generation of a motor for driving the robot; (d) a lifetime of a speed reducer of the robot; and (e) a consumed power of the robot, and wherein the evaluating part selects a plurality of arbitrary parameters from the parameters (a) to (e) in relation to the second motion path and each third teaching point included in the second motion path, and calculates an evaluation value by previously weighting the selected parameters.
In a preferred embodiment, the robot simulation device further comprises a teaching point adjusting part which adjusts the position of the third teaching point, wherein the teaching point adjusting part is configured to: evaluate each third teaching point included in the second motion path based on at least one predetermined parameter; repeatedly carry out a process for moving the position of the third teaching point to be adjusted by a small distance within a predetermined acceptable range and detecting whether the interference between the robot and the peripheral object occurs; and set a third teaching point having the highest evaluation mark among the third teaching points within the predetermined acceptable range as an adjusted position of the third teaching point.
In a preferred embodiment, the teaching point generating part is configured to: generate a first relay point in the vicinity of the first teaching point, where the robot does not interfere with the peripheral object even when the robot is moved by a predetermined distance in any direction; generate a second relay point in the vicinity of the second teaching point, where the robot does not interfere with the peripheral object even when the robot is moved by a predetermined distance in any direction; and generate motion paths between the first teaching point and the first relay point, between the first relay point and the second relay point, and between the second relay point and the second teaching point, respectively, by which the robot does not interfere with the peripheral object.
In a preferred embodiment, the motion path generating part is configured to: generate a plurality of blocks by dividing a plurality of teaching points included in the first motion path at a position where the interference between the robot and the peripheral object occurs; change an order of at least two of the blocks and/or reverse an order of the teaching points included in each block; and automatically generate a motion path between the last teaching point in the block and a leading teaching point in the subsequent block, by which the robot does not interfere with the peripheral object.
In a preferred embodiment, the robot simulation device is incorporated in a controller for controlling an actual robot.
FIG. 1 is a functional block diagram of a robot simulation device 10 (hereinafter, also referred to as merely the simulation device) according to a preferred embodiment of the present invention. Simulation device 10 carries out a simulation of a robot 12 (normally, offline) by positioning three-dimensional models of robot 12 and a peripheral object 14 such as external equipment arranged about robot 12 in the same virtual space. Simulation device 10 includes: a motion path obtaining part 16 which obtains a first motion path of robot 12 by executing a simulation of a predetermined motion program of robot 12; a teaching point specifying part 18 which detects whether interference between robot 12 and peripheral object 14 occurs when robot 12 is moved along the first motion path, and specifies a first teaching point (hereinafter, also referred to as a start point) corresponding to a teaching point immediately before the interference occurs and a second teaching point (hereinafter, also referred to as an end point) corresponding to a teaching point immediately after the interference occurs; a motion path generating part 20 which automatically adds or inserts at least one third teaching point between the first and second teaching points and generates different second motion paths by which the interference between the robot and the peripheral object does not occur, the third teaching point being separated from the first or second teaching point in a search direction determined by a random number by a search distance determined by a random number; an evaluating part 22 which evaluates each of the second motion paths based on at least one previously determined parameter; and a motion path selecting part 24 which selects an optimum motion path of robot 12 from the second motion paths based on an evaluation result by evaluating part 22.
Optionally, simulation device 10 may include a displaying part 26 which displays the virtual space, etc., where the three-dimensional models of robot 12 and peripheral object 14 are located, a teaching point deleting part 28 which deletes a surplus teaching point from the third teaching point, and a teaching point adjusting part 30 which adjusts the position of the third teaching point. The functions of deleting part 28 and adjusting part 30 are explained below.
Next, with reference to FIGS. 2 and 3, the procedure in simulation device 10 is explained. First, in step S1, the simulation of the motion program previously prepared for robot 12 is executed so as to obtain first motion path 32 of robot 12. The first motion path includes a plurality of teaching points for specifying the position and orientation of a representative point (for example, a tool center point) of robot 12 based on the motion program. In the first motion path, the interference between robot 12 and peripheral object 14 is not considered.
In the next step S2, it is detected whether the interference between robot 12 and peripheral object 14 occurs when robot 12 is moved along the first motion path. Then, when the interference occurs, the first teaching point (or the start point) corresponding to a teaching point immediately before the interference occurs and the second teaching point (or the end point) corresponding to a teaching point immediately after the interference occurs, are specified. In the example of FIG. 3, since the interference occurs between teaching points Pm and Pn included in first motion path 32, teaching point Pm and teaching point Pn are specified as the start point and the end point, respectively.
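A minimal sketch of how step S2 might be carried out is shown below, assuming a user-supplied point-wise collision check and straight-line segments between consecutive teaching points; the sampling resolution and the spherical obstacle in the usage example are illustrative assumptions, not part of the disclosure.

```python
from typing import Callable, Optional, Sequence, Tuple

Point = Tuple[float, float, float]

def segment_collides(a: Point, b: Point, collides: Callable[[Point], bool],
                     samples: int = 20) -> bool:
    """Approximate segment check: sample intermediate configurations and
    test each with the user-supplied point-wise collision function."""
    return any(
        collides(tuple(a[i] + (b[i] - a[i]) * t / samples for i in range(3)))
        for t in range(samples + 1)
    )

def find_interfering_segment(path: Sequence[Point],
                             collides: Callable[[Point], bool]
                             ) -> Optional[Tuple[Point, Point]]:
    """Return (start_point, end_point) around the first colliding segment,
    i.e. the teaching points immediately before and after the interference,
    or None if the whole path is collision-free."""
    for pm, pn in zip(path, path[1:]):
        if segment_collides(pm, pn, collides):
            return pm, pn
    return None

# Hypothetical usage with a spherical obstacle standing in for the 3-D models:
obstacle_centre, radius = (0.5, 0.0, 0.0), 0.2
in_collision = lambda p: sum((p[i] - obstacle_centre[i]) ** 2 for i in range(3)) < radius ** 2
path = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
print(find_interfering_segment(path, in_collision))
```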
In the next step S3, at least one third teaching point is automatically added between start point Pm and end point Pn so as to generate a second motion path by which the interference between robot 12 and peripheral object 14 does not occur, the third teaching point being separated from point Pm or Pn in a search direction determined by a random number by a search distance determined by a random number. In the example of FIG. 3, it is detected whether the interference between robot 12 and peripheral object 14 occurs when robot 12 is moved along a motion path 34 extending from start point Pm to an intermediate teaching point P1 which is separated from start point Pm in a search direction determined by a random number by a search distance determined by a random number. Since the interference does not occur in motion path 34, intermediate teaching point P1 is inserted between start point Pm and end point Pn. To the contrary, when the interference occurs in motion path 34, intermediate teaching point P1 is discarded and another intermediate teaching point is searched.
Next, it is detected whether the interference between robot 12 and peripheral object 14 occurs when the robot is moved along a motion path 36 extending from the lastly inserted intermediate teaching point (in this case, point P1) to end point Pn. In the illustrated example, since the interference occurs in motion path 36, a new intermediate teaching point P2, which is separated from intermediate teaching point P1 in a search direction determined by a random number by a search distance determined by a random number, is calculated, and then, it is detected whether the interference occurs in a motion path 38 between intermediate teaching points P1 and P2. Since the interference does not occur in motion path 38, intermediate teaching point P2 is inserted between intermediate teaching point P1 and end point Pn. On the contrary, when the interference occurs in motion path 38, intermediate teaching point P2 is discarded and another intermediate teaching point is searched. In addition, when a motion path by which the interference does not occur is not obtained even after the search of point P2 is repeated a predetermined number of times, point P1 is discarded and point P1 is searched again.
The above procedure is repeated until the interference does not occur. In the illustrated example, since the interference does not occur in a motion path 40 extending from the lastly inserted intermediate teaching point (in this case, point P2) to end point Pn, a motion path from start point Pm to end point Pn via intermediate teaching points P1 and P2 (i.e., a path including motion paths 34, 38 and 40) is generated as the second motion path by which the interference does not occur. In addition, the movement of the robot based on the search direction and the search distance determined by a random number is not limited to translational movement, and may include rotational movement.
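The random search described above might be sketched as follows. The straight-segment collision check, the retry count, the maximum search distance, and the backtracking rule used when no candidate is found are assumptions made for the sketch; the disclosure itself leaves these details open.

```python
import random
from typing import Callable, List, Optional, Tuple

Point = Tuple[float, float, float]

def random_offset(max_dist: float) -> Point:
    """Search direction and search distance both drawn from random numbers."""
    direction = [random.gauss(0.0, 1.0) for _ in range(3)]
    norm = sum(d * d for d in direction) ** 0.5 or 1.0
    dist = random.uniform(0.0, max_dist)
    return tuple(d / norm * dist for d in direction)

def generate_second_path(pm: Point, pn: Point,
                         segment_free: Callable[[Point, Point], bool],
                         max_dist: float = 0.3,
                         max_points: int = 50,
                         retries: int = 200) -> Optional[List[Point]]:
    """Insert intermediate teaching points between start point Pm and end
    point Pn until the last inserted point can reach Pn without interference.
    Returns the full point list Pm..Pn, or None if the search gives up."""
    points = [pm]
    while len(points) < max_points:
        last = points[-1]
        if segment_free(last, pn):           # can we finish the path?
            return points + [pn]
        for _ in range(retries):             # search a new intermediate point
            offset = random_offset(max_dist)
            candidate = tuple(last[i] + offset[i] for i in range(3))
            if segment_free(last, candidate):
                points.append(candidate)
                break
        else:                                # no candidate found: backtrack
            if len(points) == 1:
                return None
            points.pop()
    return None
```

Running this repeatedly with different random seeds yields the different second motion paths that are later compared by the evaluating part.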
As explained above, in the present invention, since the search direction and the search distance for calculating the intermediate teaching point are randomly determined by using a random number, the arithmetic processing is not trapped in an endless loop and does not output a local solution, whereby an interference avoiding path can be obtained. Further, by using the random number, the motion path can be generated without depending on the level of skill of the operator. In this regard, in order to more effectively generate the second motion path, the random number for determining the movement direction of the robot may be determined as below.
(i) As for the direction of translational movement of the robot, an initial state may be set, wherein a probability that a random number for determining the search direction, by which the robot approaches the end point (i.e., the distance between the intermediate teaching point and the end point is smaller than the distance between the start point and the end point), is selected, is higher than a probability that a random number for determining the search direction, by which the robot is away from the end point (i.e., the distance between the intermediate teaching point and the end point is larger than the distance between the start point and the end point), is selected.
(ii) As for the new position (or the intermediate teaching point), the search direction (the direction of movement) of which is determined by a random number, a probability that a search direction, by which interference between the robot and the peripheral object does not occur between the new position and the immediately preceding teaching point on the motion path, is selected in the next or later search, is set to be higher than a probability that a search direction, by which the interference occurs, is selected in the next or later search. In this regard, it is preferable that a lower limit of the probability be predetermined so as to prevent the search directions by which the interference occurs from never being selected in the later procedure.
By means of at least one of the above items (i) and (ii), the motion path of the robot (the second motion path), by which the interference does not occur, may be obtained with a smaller number of trials.
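A possible way to realize biases (i) and (ii) is to sample the search direction from a weighted set of candidate directions, as in the sketch below. The candidate direction set, the initial bias factor, the update factors, and the lower weight limit are illustrative values, not values taken from the text.

```python
import random
from typing import Dict, List, Tuple

Vec = Tuple[float, float, float]

class BiasedDirectionSampler:
    """Weighted sampling over a fixed set of candidate search directions.

    Implements the two biases sketched in the text: (i) directions that
    shorten the distance to the end point start with a higher weight, and
    (ii) after each trial, directions that produced a collision-free move
    are up-weighted, subject to a lower limit so that no direction is ever
    excluded completely.  All numeric constants are illustrative.
    """

    def __init__(self, directions: List[Vec], start: Vec, end: Vec,
                 toward_bias: float = 3.0, floor: float = 0.05):
        self.directions = directions
        self.floor = floor
        to_end = tuple(e - s for s, e in zip(start, end))
        self.weights: Dict[int, float] = {}
        for i, d in enumerate(directions):
            dot = sum(a * b for a, b in zip(d, to_end))
            self.weights[i] = toward_bias if dot > 0 else 1.0   # bias (i)

    def sample(self) -> int:
        return random.choices(list(self.weights),
                              weights=list(self.weights.values()))[0]

    def report(self, idx: int, collision_free: bool) -> None:
        if collision_free:
            self.weights[idx] *= 1.5                            # bias (ii)
        else:
            self.weights[idx] = max(self.floor, self.weights[idx] * 0.5)

# Hypothetical usage with the six axis-aligned directions:
axes = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
sampler = BiasedDirectionSampler(axes, start=(0, 0, 0), end=(1, 0, 0))
i = sampler.sample()
sampler.report(i, collision_free=True)
```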
In the second motion path generated in step S3, the third teaching points may include a surplus teaching point, such as a plurality of consecutive points having the same movement direction, a teaching point having the movement direction opposite to the immediately before teaching point, and a teaching point returning to the previous teaching point after passing through some teaching points, etc. Therefore, in step S4, such a surplus teaching point is deleted from the inserted or added intermediate teaching points as the third teaching points. Concretely, first, when the second motion path is to be generated, the movement direction and the movement distance of each third teaching point are stored in a suitable memory, etc. Next, the movement directions of two consecutive third teaching points on the second motion path are compared. If the movement directions are the same, then a new teaching point obtained by adding the movement distances of the two teaching points is inserted, and the two consecutive third teaching points are deleted. In other words, the two consecutive third teaching points are combined into one new teaching point.
On the other hand, if the movement directions of two consecutive third teaching points on the second motion path are opposed to each other, then a new teaching point obtained by canceling the movement distances of the two teaching points against each other is inserted, and the two consecutive third teaching points are deleted. In other words, also in this case, the two consecutive third teaching points are combined into one new teaching point. In this regard, if the movement directions of the two teaching points are opposed to each other and the movement distances thereof are the same, the robot is returned to the previous position after passing through the two teaching points. In such a case, the two teaching points may be merely deleted.
In addition, when the robot does not interfere with the peripheral object on a motion path (normally, a straight line) between arbitrarily selected two inconsecutive teaching points, all of teaching points between the two inconsecutive teaching points may be deleted.
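The three deletion rules above can be sketched roughly as follows; the greedy shortcut order, the collinearity tolerance, and the treatment of exactly cancelling moves are simplifying assumptions for the sketch.

```python
from typing import Callable, List, Tuple

Point = Tuple[float, float, float]

def shortcut(path: List[Point],
             segment_free: Callable[[Point, Point], bool]) -> List[Point]:
    """Delete teaching points between any two inconsecutive points that can
    be joined by a collision-free straight segment (greedy, front to back).
    Adjacent segments of the second motion path are assumed collision-free."""
    out = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not segment_free(path[i], path[j]):
            j -= 1                      # fall back to closer points
        out.append(path[j])
        i = j
    return out

def merge_collinear(path: List[Point], tol: float = 1e-9) -> List[Point]:
    """Combine consecutive moves with the same (or exactly opposed) direction
    into a single move by dropping the intermediate teaching point, i.e. by
    adding or cancelling the two displacements."""
    out = [path[0]]
    for p in path[1:]:
        if len(out) >= 2:
            a, b = out[-2], out[-1]
            v1 = tuple(b[i] - a[i] for i in range(3))
            v2 = tuple(p[i] - b[i] for i in range(3))
            cross = (v1[1] * v2[2] - v1[2] * v2[1],
                     v1[2] * v2[0] - v1[0] * v2[2],
                     v1[0] * v2[1] - v1[1] * v2[0])
            if max(abs(c) for c in cross) < tol:   # parallel or anti-parallel
                out[-1] = p                        # remove the middle point
                continue
        out.append(p)
    return out
```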
By virtue of the above procedure, the surplus teaching point can be deleted from the third teaching points, whereby a simpler motion path of the robot may be obtained. In addition, step S4 is optional, since the second motion path may not include the surplus teaching point.
As explained above, the second motion path may be generated without depending on the level of skill of the operator, whereas the second motion path may include obviously unnecessary movement of the robot. Therefore, a simulation is executed in which the robot is moved along each of the obtained second motion paths, and each third teaching point included in the second motion path is evaluated by means of evaluating part 22 based on a predetermined parameter (step S5). Then, a teaching point having a relatively low evaluation value (in particular, a teaching point having the lowest evaluation value) is adjusted so as to increase the evaluation value thereof (step S6). By repeating such a procedure, the second motion path may be made more practical.
As the parameter (or evaluation item) for calculating the above evaluation value, the following parameters may be used, for example. These parameters may be calculated or estimated by executing the simulation along the second motion path.
(a) a motion time of the robot
(b) a minimum distance between the robot and the peripheral object
(c) an amount of heat generation of a motor for driving the robot
(d) a lifetime of a speed reducer of the robot
(e) a consumed power of the robot
By using at least one of the above parameters, the evaluation value of each teaching point can be obtained based on the following criteria. Further, by integrating the evaluation values of the teaching points included in the same motion path, a total evaluation value of the motion path can be obtained.
(a) The motion time of the robot is calculated, and the shorter the motion time is, the higher the evaluation value is.
(b) The minimum distance between the robot and the peripheral object is calculated, and the longer the minimum distance is, the higher the evaluation value is.
(c) The amount of heat generation of the motor for driving the robot is estimated, and the smaller the amount of heat generation is, the higher the evaluation value is.
(d) The lifetime of the speed reducer of the robot is estimated, and the longer the lifetime is, the higher the evaluation value is.
(e) The consumed power of the robot is estimated, and the smaller the consumed power is, the higher the evaluation value is.
Regarding above parameter (b), when the distance between the robot and the peripheral object is too short, the interference easily occurs between the actual robot and the actual peripheral object. On the other hand, when the distance therebetween is too long, the evaluation value regarding the other parameters tends to be decreased. Therefore, in relation to parameter (b), a proper distance between the robot and the peripheral object may be predetermined (for example, 5 cm to 10 cm), and the evaluation value may be increased as a difference between the proper distance and the minimum distance is decreased. Alternatively or additionally, an upper limit of the distance between the robot and the peripheral object may be predetermined (for example, 50 cm to 100 cm), and the evaluation value may be constant when the minimum distance exceeds the upper limit.
Further, two or more of the above parameters (a) to (e) can be used together to calculate the evaluation value. In this case, it is preferable to previously assign a weight to each parameter, and to calculate the evaluation value based on the weighting.
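A weighted evaluation along these lines might look like the sketch below. The metric names, the inversion of lower-is-better quantities, the 7.5 cm "proper" clearance, the 75 cm cap, and the weights themselves are illustrative assumptions consistent with, but not specified by, the text.

```python
from typing import Dict

def clearance_score(min_clearance_m: float,
                    proper_m: float = 0.075, cap_m: float = 0.75) -> float:
    """Score the minimum robot-to-obstacle distance: best near the assumed
    'proper' clearance, constant once the assumed cap is exceeded."""
    d = min(min_clearance_m, cap_m)
    return 1.0 / (1.0 + abs(d - proper_m))

def evaluate_path(metrics: Dict[str, float],
                  weights: Dict[str, float]) -> float:
    """Weighted evaluation of one candidate motion path.

    `metrics` holds simulated/estimated values for parameters (a)-(e);
    lower-is-better items are inverted so that a higher score is better.
    """
    score = 0.0
    score += weights["time"] * 1.0 / (1.0 + metrics["motion_time_s"])            # (a)
    score += weights["clearance"] * clearance_score(metrics["min_clearance_m"])  # (b)
    score += weights["heat"] * 1.0 / (1.0 + metrics["motor_heat_j"])             # (c)
    score += weights["life"] * metrics["reducer_life_h"] / 1e5                   # (d)
    score += weights["power"] * 1.0 / (1.0 + metrics["energy_wh"])               # (e)
    return score

example = {"motion_time_s": 2.4, "min_clearance_m": 0.06,
           "motor_heat_j": 150.0, "reducer_life_h": 4.0e4, "energy_wh": 12.0}
w = {"time": 0.4, "clearance": 0.3, "heat": 0.1, "life": 0.1, "power": 0.1}
print(round(evaluate_path(example, w), 4))
```

Summing the per-teaching-point scores computed this way gives the total evaluation value of a motion path, as described later for steps S7 to S9.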
As a method for adjusting the third teaching point in step S6, the position of the teaching point may be changed. Concretely, the position of the teaching point to be adjusted (i.e., the teaching point having the low or minimum evaluation value) is moved by a small distance so as to obtain a motion path of the robot between the moved teaching point and teaching points before/after the moved teaching points. When the interference between the robot and the peripheral object does not occur on the obtained motion path, the evaluation value of the new (moved) teaching point is calculated. This procedure is repeated a predetermined number of times within a predetermined acceptable range (for example, a region in which the teaching points before/after the moved teaching points are not included), and the position having the highest evaluation value is determined as the position of an adjusted teaching point. In this regard, the above "small distance" may be properly determined depending on the application of the robot and/or the distance between each teaching point, etc. For example, the acceptable range may be divided into a plurality of meshes, the length of each side of each mesh being 1 mm to 5 mm, and the teaching point may be moved into each mesh.
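A sketch of this adjustment step is given below, assuming translational moves on a small grid, user-supplied evaluation and segment collision checks, and an interior index (a teaching point with neighbours on both sides); the 2 mm pitch and 1 cm radius are illustrative values within the ranges mentioned above.

```python
import itertools
from typing import Callable, List, Tuple

Point = Tuple[float, float, float]

def adjust_teaching_point(path: List[Point], idx: int,
                          evaluate: Callable[[List[Point]], float],
                          segment_free: Callable[[Point, Point], bool],
                          step: float = 0.002, radius: float = 0.01) -> List[Point]:
    """Move path[idx] over a small grid inside an acceptable range and keep
    the position with the best path evaluation among collision-free moves."""
    best_path, best_score = path, evaluate(path)
    n = int(radius / step)
    for dx, dy, dz in itertools.product(range(-n, n + 1), repeat=3):
        cand = (path[idx][0] + dx * step,
                path[idx][1] + dy * step,
                path[idx][2] + dz * step)
        # Only accept candidates whose segments to both neighbours stay free.
        if segment_free(path[idx - 1], cand) and segment_free(cand, path[idx + 1]):
            trial = path[:idx] + [cand] + path[idx + 1:]
            score = evaluate(trial)
            if score > best_score:
                best_path, best_score = trial, score
    return best_path
```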
Otherwise, the procedure for obtaining the evaluation value of the total motion path of the robot may be repeated, while obtaining the motion path of the robot by which the interference does not occur, deleting the surplus teaching point, and adjusting the teaching point. Then, by selecting the motion path having the highest evaluation value, a more practical motion path may be obtained. In this case, the above procedure may be repeated a predetermined number of times, or may be repeated until the evaluation value reaches a predetermined value. In addition, steps S5 and S6 are optional.
In the next step S7, the evaluation value of the generated second motion path is calculated. The evaluation value of the motion path can be calculated by integrating the evaluation values of the teaching points included in the motion path.
In the next step S8, it is judged whether the generation of the second motion path has been repeated a predetermined number of times. Although at least two second motion paths are generated, it is preferable that as many motion paths be generated as possible, in view of the arithmetic capacity of the simulation device, etc.
Finally, in step S9, among the generated second motion paths, one motion path having a high (normally, the highest) evaluation value is selected as an optimum motion path (or an interference avoiding path) of the robot. As such, a practical interference avoiding path of the robot can be automatically generated without depending on the level of skill of the operator.
In addition, in generating the second motion path, when the distance between the start point and end point is relatively long, or when the start and end point are positioned in recessed portions of different peripheral objects, etc., an interference avoiding path from the start point to the end point may not be generated even if the method for determining the search direction and the search distance by using a random number is repeated a practical number of times. Hereinafter, a method for generating the interference avoiding path (or the second motion path) in such a case will be explained.
FIG. 4 shows an example in which start point Pm and end point Pn are positioned in recessed portions of peripheral objects 14a and 14b, respectively. In this case, the second motion path may not be generated even if the method for determining the search direction and the search distance by using a random number is repeated a practical number of times. Then, first, in the vicinity of start point Pm, a first relay teaching point P3 is generated, by which the robot does not interfere with the peripheral object even when the robot is moved from point P3 in any direction by a certain distance. Similarly, in the vicinity of end point Pn, a second relay teaching point P4 is generated, by which the robot does not interfere with the peripheral object even when the robot is moved from point P4 in any direction by a certain distance.
For example, a concrete method for generating first relay teaching point P3 includes the following steps of:
(i) generate a new position which is separate from start point Pm in a direction toward the origin position of the robot by an arbitrary distance;
(ii) move the robot by a small distance within a designated acceptable range 42 (for example, a circle having a diameter of 5 cm to 10 cm) about the new position, and detect the interference between the robot and the peripheral object; and
(iii) repeat (i) and (ii) until the robot does not interfere with the peripheral object. Similarly, for example, a concrete method for generating second relay teaching point P4 includes the following steps of:
(iv) generate a new position which is separate from end point Pn in a direction toward the origin position of the robot by an arbitrary distance;
(v) move the robot by a small distance within a designated acceptable range 44 (for example, a circle having a diameter of 5 cm to 10 cm) about the new position, and detect the interference between the robot and the peripheral object; and
(vi) repeat (iv) and (v) until the robot does not interfere with the peripheral object.
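Steps (i)-(vi) might be sketched as follows; the step length, the number of random probes used to confirm that the surrounding region is collision-free, and the retry limit are assumptions made for the sketch.

```python
import random
from typing import Callable, Optional, Tuple

Point = Tuple[float, float, float]

def find_relay_point(anchor: Point, origin: Point,
                     collides: Callable[[Point], bool],
                     clearance: float = 0.05,
                     step: float = 0.05, attempts: int = 200,
                     probes: int = 30) -> Optional[Point]:
    """Search a relay teaching point near `anchor` (the start or end point).

    Following steps (i)-(iii) / (iv)-(vi): move an increasing distance from
    the anchor toward the robot origin, and accept the position only if
    random probes within the surrounding `clearance` radius are all
    collision-free."""
    to_origin = tuple(o - a for a, o in zip(anchor, origin))
    norm = sum(c * c for c in to_origin) ** 0.5 or 1.0
    unit = tuple(c / norm for c in to_origin)
    for k in range(1, attempts + 1):
        cand = tuple(anchor[i] + unit[i] * step * k for i in range(3))
        neighbourhood_free = all(
            not collides(tuple(cand[i] + random.uniform(-clearance, clearance)
                               for i in range(3)))
            for _ in range(probes)
        )
        if not collides(cand) and neighbourhood_free:
            return cand
    return None
```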
Next, due to the similar procedure, motion paths 46, 48 and 50, by which the robot does not interfere with the peripheral object (or interference avoiding paths), are generated in relation to between start point Pm and relay point P3 in the vicinity of the start point, between relay points P3 and P4, and between end point Pn and relay point P4 in the vicinity of the end point, respectively. Then, by combining motion paths 46 and 48, and by combining motion paths 48 and 50, the second motion path can be generated between start point Pm and end point Pn, by which the robot does not interfere with the peripheral object.
In addition, when the motion path for avoiding interference between the robot and the peripheral object cannot be generated even if the relay point generated by the above procedure is used, the movement direction for generating the new position may be changed in at least one of above steps (i) and (iv). For example, in FIG. 4, the movement direction from start point Pm to relay point P3 and the movement direction from end point Pn to relay point P4 are opposed to each other.
FIGS. 5 to 7 show an example, as an application of the invention, in which teaching points are divided into a plurality of blocks and an interference avoiding path is generated, when the interference occurs at a plurality of positions. In this case, as shown in FIG. 5, first motion path 52 obtained by the simulation of the motion program includes teaching points A1, A2, A3, B1, B2, B3, C1, C2, C3, D1, D2, D3, E1, E2 and E3, in this order. Further, the robot may interfere with the peripheral object at four positions, i.e., between teaching points A3 and B1, between teaching points B3 and C1, between teaching points C3 and D1, and between teaching points D3 and E1.
Also in the example of FIG. 5, an interference avoiding path may be generated by inserting respective third teaching points at the four positions, according to the procedure as explained above. However, another interference avoiding path having a higher evaluation value may be generated, by dividing teaching points into a plurality of blocks with reference to a boundary where the interference occurs, and by changing the order of the blocks and/or reversing the order of teaching points in each block.
Concretely, as shown in section (a) of FIG. 6, first, teaching points are divided into five blocks, i.e., a first block 54 including teaching points A1, A2, A3; a second block 56 including teaching points B1, B2, B3; a third block 58 including teaching points C1, C2, C3; a fourth block 60 including teaching points D1, D2, D3; and a fifth block 62 including teaching points E1, E2, E3.
Next, two arbitrary blocks are selected from the blocks obtained by dividing first motion path 52. In this case, second block 56 and fourth block 60 are selected. Then, the order of the selected blocks (including the block therebetween) is changed. In other words, as shown in section (b) of FIG. 6, the order of the five blocks is changed as follows:
first block->fourth block->third block->second block->fifth block
Optionally, the order of the teaching points in each block may be reversed. In the illustrated example, the order of the teaching points in each of the second, third and fourth blocks is reversed, whereby second block 56′, third block 58′ and fourth block 60′ are formed. In other words, in section (b) of FIG. 6, the order of teaching points from B1 to D3 is reversed.
In the configuration of the blocks in section (b) of FIG. 6, an interference avoiding path is generated, according to the method as explained above, in which the last teaching point in each block is determined as the start point, and the first teaching point in the subsequent block is determined as the end point. By virtue of this, a second motion path from teaching point A1 to teaching point E3 is obtained, as shown in FIG. 7.
By repeatedly changing the order of the blocks or reversing the order of the teaching points in each block, an optimum motion path can be obtained. Concretely, it is preferable to use an annealing method for obtaining the optimum motion path, as follows. First, after a motion path by which the robot does not interfere with the peripheral object is obtained, a first evaluation value of the obtained motion path is calculated according to the procedure as explained above. Next, a second evaluation value is also calculated, in relation to a motion path after the order of the blocks is changed or the order of the teaching points in each block is reversed. When the second evaluation value is larger than the first evaluation value, the latter result (or the motion path having the second evaluation value) is used. In addition, even when the second evaluation value is smaller than the first evaluation value, the latter result is used when the difference between the first and second evaluation values is within a predetermined reference value.
By repeating the above comparison of the evaluation values while the reference value is sufficiently gradually decreased, a motion path having a higher evaluation value can be obtained without falling into a local solution. In addition, the interference avoiding path inserted between the blocks may be stored in a suitable memory. By using the stored interference avoiding path when inserting an interference avoiding path between the same blocks in the later procedure, the calculation time may be reduced.
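The annealing rule described above might be sketched as follows. The neighbourhood moves (swapping two blocks or reversing one block), the initial reference value, and the cooling factor are illustrative assumptions; the evaluation callable is assumed to already include generation of the interference avoiding joins between consecutive blocks.

```python
import random
from typing import Callable, List, Sequence

Block = List[str]   # e.g. ["A1", "A2", "A3"]; the teaching point type is arbitrary

def anneal_block_order(blocks: List[Block],
                       evaluate: Callable[[Sequence[Block]], float],
                       iterations: int = 500,
                       start_ref: float = 1.0,
                       cooling: float = 0.99) -> List[Block]:
    """Anneal over block orderings and per-block reversals.

    A candidate arrangement is accepted if its evaluation is higher, or if it
    is lower by less than the current reference value; the reference value is
    decreased gradually, as described in the text."""
    current = [list(b) for b in blocks]
    current_score = evaluate(current)
    reference = start_ref
    for _ in range(iterations):
        candidate = [list(b) for b in current]
        if random.random() < 0.5:                      # swap two blocks
            i, j = random.sample(range(len(candidate)), 2)
            candidate[i], candidate[j] = candidate[j], candidate[i]
        else:                                          # reverse one block
            k = random.randrange(len(candidate))
            candidate[k].reverse()
        score = evaluate(candidate)
        if score > current_score or (current_score - score) < reference:
            current, current_score = candidate, score
        reference *= cooling                           # gradually decrease
    return current
```

Caching the interference avoiding path inserted between a given pair of blocks, as suggested above, would let `evaluate` reuse earlier results and reduce the calculation time.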
The robot simulation device according to the present invention may be incorporated in a robot controller for controlling an actual robot to be simulated. In other words, the robot controller may have the function of the robot simulation device. When the robot controller stores information regarding the shapes of the robot and the peripheral object and the placement positions thereof, the robot controller may generate the interference avoiding path as described above. Further, by virtue of this, the actual robot can be more smoothly operated based on the result of the simulation.
According to the present invention, by generating the third teaching point for avoiding interference by using a random number, the interference avoiding path can be obtained with a high probability without falling into a local solution. Further, since the interference avoiding path is automatically generated, a practical path of the robot for avoiding interference can be generated, without depending on the level of skill of the operator.
While the invention has been described with reference to specific embodiments chosen for the purpose of illustration, it should be apparent that numerous modifications could be made thereto, by one skilled in the art, without departing from the basic concept and scope of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other objects, features and advantages of the present invention will be made more apparent by the following description of the preferred embodiments thereof, with reference to the accompanying drawings, wherein:
FIG. 1 is a functional block diagram of a robot simulation device according to an embodiment of the present invention;
FIG. 2 is a flowchart showing an example of a procedure in the robot simulation device of the invention;
FIG. 3 explains an example for generating an intermediate teaching point for avoiding interference;
FIG. 4 explains an example in which a relay teaching point is inserted into a second motion path;
FIG. 5 shows an example of a row of teaching points included in a robot motion path before an interference avoiding path is generated;
FIG. 6 shows an example in which the row of teaching points as shown in FIG. 5 is divided into a plurality of blocks, an order of some blocks is changed, and an order of the teaching points in each of some blocks is reversed; and
FIG. 7 shows an example in which an interference avoiding path is inserted into the row of teaching points as shown in FIG. 6. | 
This application claims benefit of U.S. Provisional Patent Application Ser. No. 61/442,858, filed on Feb. 15, 2011. The teachings of U.S. Provisional Patent Application Ser. No. 61/442,858 are incorporated herein by reference in their entirety.
This invention relates generally to asphalt compositions and hot or cold process coal tar compositions having a reduced level of objectionable odors. The subject invention specifically relates to asphalt additive compositions which exhibit a reduced level of odors and which are useful in modifying plastic compositions.
The outstanding binding, insulating, and waterproofing characteristics of asphalt have led to its widespread utilization in a wide variety of applications including paving, roofing, weather sealing, waterproofing, and polymer modification. For instance, asphalt is used in manufacturing roofing shingles because it has the ability to bind sand, aggregate, and fillers to the roofing shingle while simultaneously providing excellent water barrier characteristics. Asphalt compositions are additionally used as processing aids for plastics.
For hundreds of years, naturally occurring asphalts have been used in numerous applications. However, today virtually all of the asphalt used in industrial applications is recovered from petroleum refining. Asphalt, or asphalt flux, is essentially the residue that remains after gasoline, kerosene, diesel fuel, jet fuel, and other hydrocarbon fractions have been removed during the refining of crude oil. In other words, asphalt flux is the last cut from the crude oil refining process.
One age-old downside associated with using hot mix asphalt is that it produces volatile materials such as hydrocarbons, sulfides, and mercaptans which generally have strong, persistent, and unpleasant odors. These odors are frequently considered to be obnoxious by persons working with the asphalt, by residents living near areas where asphalt is manufactured or paving is being done, and in general by persons who come within close proximity to the asphalt. The intensity of the unpleasant odor associated with asphalt increases with increasing temperature. Accordingly, the odor problem associated with asphalt can be severe in cases where it is heated to an elevated temperature, such as in industrial applications.
The foul odors that accompany paving roads, driveways, and parking lots are well recognized by most people in modern society. When asphalt is used in roofing applications, such as roofing shingles, roll roofing, and built-up roofing, the asphalt is typically first heated in a vessel, such as a gas-fired roofing kettle. Asphalt compositions used in plastics modification are also typically heated to an elevated temperature in mixing and processing the polymeric formulation. As the temperature of the asphalt is increased, volatile materials, such as hydrocarbons, sulfides, and mercaptans which have strong and unpleasant odors are normally emitted into the atmosphere. The odors emitted are not only unpleasant to smell, but they may also be an irritant to workers and/or other individuals in the vicinity of the vessel or to those who come within close proximity to the hot asphalt. In extreme cases, the rank fumes from the asphalt may cause headaches and/or irritation to the eyes and mucus membranes of the nose and throat, which can result in a deterioration of worker productivity and/or an increase in the number of sick days taken by workers.
Traditional odor treating compositions act as deodorizers or masking agents, functioning by overwhelming the undesired odor with another odor. Such techniques, however, are poor at masking strong odors. In addition, masking does not reduce the concentration of the volatiles causing the underlying undesirable odors. Accordingly, the need for an effective technique for reducing the obnoxious odors associated with asphalt containing compositions is well recognized by industrial users of asphalt and the general public.
Compositions and odor-masking additives for reducing undesirable odors emitted from odor-causing compounds are known in the art. For instance, U.S. Pat. No. 5,271,767 discloses a composition that consists essentially of (1) liquid asphalt, hot-mix asphalt, hot-mix, or cold lay asphalt with added latex and (2) an additive that contains a citrus terpene (4-isopropyl 1-methylcyclohexene) D-limonene mixed with a vegetable oil such as cottonseed oil, soya oil, rapeseed (canola) oil, peanut oil, etc. and a silicone oil dispersant. It is taught that when 0.5-1.0 parts of the composition are mixed with 99.0-99.5 parts liquid asphalt, the resulting liquid asphalt composition is substantially free of objectionable odors.
U.S. Pat. No. 5,989,662 and U.S. Pat. No. 6,107,373 disclose methods of reducing fumes produced from a kettle of molten asphalt that includes adding about 0.25 to about 6.0% by weight of a polymer (e.g. polypropylene) to the asphalt. The polymer material preferably forms a skim or skin across substantially the entire upper surface of the asphalt. These patents teach that at least a 25% reduction of the visual opacity of the fumes, at least a 20% reduction of the hydrocarbon emissions of the fumes, and at least a 15% reduction of suspended particulate emissions of the fumes is obtained.
U.S. Pat. No. 6,461,421 discloses a composition that includes (1) an odor-emitting hydrocarbonaceous material, (2) an odor-suppressing amount of an aldehyde or a ketone, and (3) a carboxylic acid ester. The odor-emitting hydrocarbonaceous material may be any hydrocarbonaceous material that emits objectionable odors at ambient or elevated temperatures. One example of a hydrocarbonaceous material given is asphalt. It is asserted that the composition significantly reduces the odor given off by asphalt.
U.S. Pat. No. 6,488,988 discloses a method and container for reducing the fuming of asphalt in a heated vessel. Trumbore teaches that a substantially insoluble blanket material is added to the liquid asphalt to form a skim on the surface of the asphalt and reduce the fuming. Examples of blanket materials include polyurethane, polyethylene terephthalate, ground soda bottles, starch, and cellulosic materials.
U.S. Pat. No. 6,987,207 discloses a composition that includes an odor-emitting hydrocarbonaceous material and an odor-suppressing amount of an additive composition that includes (1) a soy methyl ester, (2) at least one aldehyde and/or at least one ketone, and (3) at least one carboxylic acid ester. This patent teaches that the odor-emitting hydrocarbonaceous material may be any hydrocarbonaceous material that emits objectionable odors at ambient or elevated temperatures, such as asphalt. It is asserted that the use of the additive composition may significantly reduce or eliminate the odor emitted by the hydrocarbonaceous material.
U.S. Pat. No. 7,037,955 and United States Patent Publication No. 2006/0155003 disclose methods for reducing odor in an oil based medium such as asphalt. In the disclosed methods, an essential oil is added to the oil based medium in an odor reducing amount. The essential oil may be one or more essential oils or essential oil components, and includes natural extracts of various products of aromatic plants and trees. Essential oils for use in the invention include ajowan, angelica root, angelica seed, aniseed china star, carrot seed, and fir needle, among many others. Examples of essential oil components include terpenes, alcohols, aldehydes, aromatics, phenolics, esters, terpene derivatives, non-terpene essential oil components, and terpene derivatives.
U.S. Pat. No. 7,306,666 discloses a method for reducing odor in oil based media, said method comprising mixing an odor reducing amount of an odor reducing additive with an oil based medium, wherein the odor reducing additive is a mixture of essential oils, a mixture of essential oil components, or mixtures thereof, wherein the odor reducing additive is diluted with a carrier oil, which is comprised of methyl esters of canola oil, ethyl esters of canola oil, the methyl ester of soybean oil, or a mixture thereof; and wherein the oil based medium is fuel oil, waste oil fuel oil, oil based synthetic lubricants, liquid asphalt cement, or hot mix asphalt. Some examples of odor reducing additives that can be utilized in conjunction with this method include essential oils selected from the group consisting of rosemary oil, cedarwood oil, pine needle oil, eucalyptus oil, clove oil, thyme oil, vetiver oil, vanilla oleo resin, lavender oil, and tea tree oil; terpenes selected from the group consisting of α-pinene, β-pinene, d-3-carene, dipentene, p-cymene, cineole, camphor, terpineol, bornyl acetate, cedrene, cedrol, and thymol; and other substances including limonene, pine extract and pine white oil, pinus sylvestris oil, anise seed oil, clove bud oil, aniseed oil, camphor white oil, cedarwood atlas oil, cedarwood texas oil, cedarwood virginia oil, lavandin absolute, lime distilled oil, olibanum extract, rosemary oil, sandalwood west indian oil, tocopherol alpha, and vanilla.
United States Patent Publication No. 2004/0166087 discloses the addition of two specific types of ingredients to cold or hot melt asphalt or coal tar for the dual purposes of holding, reducing or complexing the obnoxious and toxic odors from asphalt while at the same time allowing a pleasant masking fragrance to predominate. The holding agents consist of various organic compounds which can bond or complex with and effectively hold onto other molecules. Typical complexing agents include dialkylglycol alkyl ethers and dialkylphthalates. Typical fragrances include natural and synthetic oils or extracts such as lemon oil, orange oil, peppermint, spearmint, cinnamon, bubble gum and most other common fragrances. United States Patent Publication No. 2004/0166087 specifically reveals an asphalt formulation that contains 91 weight percent hot melt asphalt, 4.5 weight percent cinnamon oil, and 4.5 weight percent diethyl phthalate.
United States Patent Publication No. 2005/0223668 discloses a faced fibrous insulation assembly that includes a fibrous blanket, a facing formed by a kraft paper sheet material, and an asphalt coating layer on the inner surface of the facing that bonds the facing to the fibrous insulation blanket. The asphalt coating layer contains an odor-reducing additive in an amount to substantially eliminate odors that would otherwise be emitted by the asphalt coating layer. It is asserted that the additive does not adversely affect the adherent qualities of the asphalt coating layer. It is disclosed that the odor-reducing additive may be essential plant oils.
United States Patent Publication No. 2009/0314184 A1 discloses a composition for reducing malodorous emissions from hydrocarbonaceous materials comprising: at least one aldehyde-containing compound having a molecular weight greater than about 100 daltons and a boiling point greater than about 375° F., said composition being free of ester-containing compounds. In one embodiment of this invention the composition further includes one or more members selected from the group consisting of ketone-containing compounds, low fuming additives, and liquid carriers. In these compositions the aldehyde-containing compound can be 2-chlorobenzaldehyde, 4-chlorobenzaldehyde, alpha-methylcinnamaldehyde, 4-anisaldehyde, epsilon-cinnamaldehyde, veratraldehyde, 4-ethoxy-3-methoxybenzaldehyde, 3-ethoxy-4-hydroxybenzaldehyde, 3-nitrobenzaldehyde, vanillin or cinnamaldehyde. The ketone-containing compounds that are reported to be useful in such compositions include camphor, isophorone, isobutyrophenone, propiophenone, 4-methylacetophenone, carvone, 4-chloroacetophenone, 2-benzoylbenzoic acid, 2′-acetonaphthone, benzophenone, fluorenone, 4′-ethoxyacetophenone, 4′-chlorobenzophenone, 4-acetylbenzonitrile and 4′-hydroxyacetophenone.
United States Patent Application Publication No. 2007/0213418 discloses a compound comprising a combination of materials for manufacturing a plastic product, comprising: a blend of asphalt and resin, the asphalt being included in an amount within a range of from about 0.1% to about 40% by weight of the plastic product; and wherein the asphalt functions as at least one of (i) a colorant to change the color of the plastic product; (ii) a resin replacement to reduce the amount of resin in the plastic product, (iii) a processing aid; and (iv) an additive to increase the R-Value of a foam insulation. However, this reference does not disclose or suggest any means for mitigating the strong odor associated with the asphalt used therein.
Conventional odor treating compositions commonly act as deodorizers or masking agents, essentially overwhelming the undesirable odor with one or more desirable odors. However, these compositions do not effectively mask the odors emitted from asphalt. Thus, there remains a need in the art for a composition that effectively reduces or eliminates the odors emitted from asphalt or other hydrocarbonaceous materials without simply masking the undesirable smell, where the performance of the composition is sustainable over time, and where the composition does not pose any additional health or safety issues.
This invention is based upon the discovery that hydroxylated carboxylic acids which contain at least 17 carbon atoms and alkaline earth metal salts thereof, such as zinc ricinoleate, act effectively as deodorants in asphalt and asphalt containing compositions. The present invention more specifically discloses an asphalt additive composition which is comprised of (1) an asphalt, (2) 0.05 weight percent to about 4 weight percent of a partitioning agent, and (3) at least 0.1 weight percent of a deodorant selected from the group consisting of (a) a hydroxylated carboxylic acid which contains at least 17 carbon atoms and (b) an alkaline earth metal salt of a hydroxylated carboxylic acid which contains at least 17 carbon atoms. The asphalt additive composition will preferably be in the form of free-flowing pellets to facilitate its utilization in modifying plastic formulations. In many applications it is highly beneficial for the asphalt additive composition to further contain a polymer additive, such as polyethylene, polypropylene, or ethylene vinyl acetate (EVA), to attain desired characteristics. Polymer additives will normally be included in the asphalt additive compositions of this invention at a level which is within the range of about 0.5 weight percent to about 50 weight percent, will typically be included at a level which is within the range of about 0.5 weight percent to about 30 weight percent, and will more typically be included at a level which is within the range of about 3 weight percent to about 15 weight percent. In many cases the polymer additive will be included at a level which is within the range of about 3 weight percent to about 12 weight percent.
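As a practical aid, the broad ranges recited above can be checked mechanically. The following minimal Python sketch is illustrative only and is not part of the patent disclosure; the function name and the treatment of the optional polymer additive are assumptions, while the numeric limits are taken directly from the preceding paragraph.

# Illustrative sketch: check a proposed additive recipe (in weight percent of the
# asphalt additive composition) against the broad ranges disclosed above.
def check_additive_recipe(partitioning_pct, deodorant_pct, polymer_pct=0.0):
    """Return a list of problems; an empty list means the recipe is inside the broad ranges."""
    problems = []
    if not (0.05 <= partitioning_pct <= 4.0):
        problems.append("partitioning agent outside 0.05-4 wt%")
    if deodorant_pct < 0.1:
        problems.append("deodorant below the 0.1 wt% minimum")
    if polymer_pct and not (0.5 <= polymer_pct <= 50.0):
        problems.append("polymer additive outside 0.5-50 wt%")
    if partitioning_pct + deodorant_pct + polymer_pct >= 100.0:
        problems.append("additives leave no room for the asphalt itself")
    return problems

print(check_additive_recipe(partitioning_pct=1.0, deodorant_pct=1.2, polymer_pct=10.0))  # -> []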
The asphalt additive compositions of this invention are beneficial in modifying plastic formulations to improve processing characteristics, as a homogenizing agent, as an adhesion promoting agent, as a resin replacement or supplement, as an insulating agent, and as a black colorant. The asphalt additive compositions of this invention are typically added to plastics formulations at a level which is within the range of about 2 parts by weight to 25 parts by weight per 100 parts by weight of the plastic. The asphalt additive compositions of this invention are more typically added to plastics formulations at a level which is within the range of about 2 parts by weight to 10 parts by weight per 100 parts by weight of the plastic.
The present invention further reveals a modified plastic composition which is comprised of at least one thermoplastic resin and from about 2 weight percent to about 25 weight percent of an asphalt additive composition which is comprised of (1) an asphalt, (2) 0.05 weight percent to about 4 weight percent of a partitioning agent, and (3) 0.1 weight percent to 5 weight percent of a deodorant selected from the group consisting of (a) a hydroxylated carboxylic acid which contains at least 17 carbon atoms and (b) an alkaline earth metal salt of a hydroxylated carboxylic acid which contains at least 17 carbon atoms.
The subject invention also discloses a cellulose expansion joint composition which is comprised of cellulose fibers and an asphalt additive composition which is comprised of (1) an asphalt, (2) 0.05 weight percent to about 4 weight percent of a partitioning agent, and (3) 0.1 weight percent to 5 weight percent of a deodorant selected from the group consisting of (a) a hydroxylated carboxylic acid which contains at least 17 carbon atoms and (b) an alkaline earth metal salt of a hydroxylated carboxylic acid which contains at least 17 carbon atoms. This composition is particularly useful in filling expansion joints (void space) between concrete slabs. For example, it can be used to fill the joint between a concrete driveway and the floor of a garage. Such expansion joints are typically between about 0.25 inch (6.35 mm) to about 1 inch (25.4 mm) wide and are more typically about 0.375 inch (9.5 mm) to about 0.625 inch (15.9 mm) wide.
The subject invention also discloses an asphalt impregnated fabric which is comprised of a woven or non-woven fabric having interstices therein, wherein said interstices contain an asphalt additive composition which is comprised of (1) an asphalt, (2) 0.05 weight percent to about 4 weight percent of a partitioning agent, and (3) 0.1 weight percent to 5 weight percent of a deodorant selected from the group consisting of (a) a hydroxylated carboxylic acid which contains at least 17 carbon atoms and (b) an alkaline earth metal salt of a hydroxylated carboxylic acid which contains at least 17 carbon atoms.
The subject invention further discloses a deodorized asphalt composition which is comprised of an asphalt and at least 0.1 weight percent of a deodorant selected from the group consisting of (a) a hydroxylated carboxylic acid which contains at least 17 carbon atoms and (b) an alkaline earth metal salt of a hydroxylated carboxylic acid which contains at least 17 carbon atoms; wherein the asphalt composition is void of water. In some embodiments of this invention, deodorized asphalt compositions of this type are void of aggregate having a particle size of greater than 2 millimeters.
The present invention also reveals a method for preparing a deodorized asphalt composition which comprises mixing a deodorant selected from the group consisting of (a) a hydroxylated carboxylic acid which contains at least 17 carbon atoms and (b) an alkaline earth metal salt of a hydroxylated carboxylic acid which contains at least 17 carbon atoms, into an asphalt which is void of water, and wherein the mixing is conducted at a temperature which is within the range of about 250° F. to about 550° F.
The deodorants used in the practice of this invention are hydroxylated carboxylic acids containing at least 17 carbon atoms and sparingly water-soluble or water-insoluble salts thereof. These carboxylic acids, in general, form water insoluble or sparingly soluble compounds with polyvalent metal cations, for example bivalent or trivalent cations. Metal ions that catalyze the decomposition reactions of higher carboxylic acids, such as iron, copper or nickel, are unsuitable in this respect. A second consideration with regard to the choice of the cations in the active substances according to the present invention is that they should be physiologically harmless. Alkaline earth metals are, therefore, preferably used, especially magnesium and calcium, as well as aluminum and, above all, zinc. The zinc salts offer an additional advantage in that they show a pronounced fungistatic activity and are therefore especially important in applications where fungus is a problem.
The anions of the salts are derived from higher carboxylic acids which can be saturated or unsaturated, and are preferably singly or multiply olefinically unsaturated, and show single or multiple hydroxylation. Suitable carboxylic acids are those having from 17 to 21 carbon atoms, preferably from 17 to 19 carbon atoms. A highly preferred saturated deodorant is zinc 12-hydroxystearate. These compounds are, in general, relatively inaccessible, which means that those unsaturated, hydroxylated carboxylic acids having 17 or more carbon atoms which are accessible are preferably used. These are primarily higher unsaturated, hydroxylated fatty acids. The most important naturally occurring member of this group is ricinoleic acid. The reaction products obtainable by hydroxylation of fatty acids with multiple points of unsaturation, for example, linoleic acid and linolenic acid, can however also be used. It is a relatively simple process to hydroxylate one of the two double bonds of a doubly olefinically unsaturated carboxylic acid, such as linoleic acid, by a mild oxidation treatment, so that doubly hydroxylated carboxylic acids which are still unsaturated are produced. These and similar carboxylic acids are especially important in the practice of the present invention.
The most important salts are the zinc, magnesium and aluminum salts of ricinoleic acid, ricinelaidic acid, and dihydroxyoctadecenoic acid, which may be easily obtained from linoleic acid, as well as the corresponding salts of carboxylic acids with multiple hydroxylation and single or multiple unsaturation obtained from the oxidation of linoleic acid. Zinc ricinoleate is a readily accessible compound and is therefore preferably used in the practice of the present invention. It possesses virtually no residual astringent action, so that an irritant action on the skin is absent.
Zinc ricinoleate is the zinc salt of 12-hydroxy-9-octadecenoic acid (the structural formula is not reproduced here) and has been assigned CAS Number 52653-36-8.
In the practice of this invention, the deodorant is simply mixed into the asphalt or the asphalt composition being treated. The asphalt used will typically be void of water. The deodorant will, of course, be added at a level which is sufficient to eliminate undesired odors or to reduce such odors to an acceptable level. In most cases, the deodorant will be added at a level which is within the range of 0.1 weight percent to 5 weight percent. Typically, the deodorant will be added to the asphalt or the asphalt containing composition at a level which is within the range of 0.2 weight percent to 3 weight percent. In many cases the deodorant will be added to the asphalt or the asphalt containing composition at a level which is within the range of 0.6 weight percent to 2 weight percent. The deodorant will frequently be added to the asphalt or the asphalt containing composition at a level which is within the range of 0.8 weight percent to 1.5 weight percent.
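For batch preparation these weight-percent levels translate directly into a mass of deodorant per batch. The short sketch below is illustrative arithmetic only, not a procedure from the patent; the batch size and the 1.0 weight percent target are assumed example values chosen from within the ranges given above.

def deodorant_mass_kg(batch_mass_kg, deodorant_wt_pct):
    """Mass of deodorant (for example zinc ricinoleate) needed for a batch at a given weight percent."""
    return batch_mass_kg * deodorant_wt_pct / 100.0

# Example: a 1000 kg batch treated at 1.0 wt%, inside the 0.8-1.5 wt% range noted above
print(deodorant_mass_kg(1000, 1.0))  # 10.0 kg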
The asphalt additive composition being treated will typically be heated to an elevated temperature to facilitate a homogeneous mixing of the deodorant throughout the asphalt additive composition. The temperature to which the asphalt additive composition is heated will depend upon the type of the asphalt used and the level and nature of additional components, such as the partitioning agent and any polymer additive, included in the asphalt additive composition. Normally, the asphalt additive composition will be heated to a temperature of at least about 250° F. (121° C.) to reduce the viscosity of the asphalt composition to a workable level for blending the deodorant therein. However, the asphalt composition will not typically be heated to a temperature of more than about 550° F. (288° C.) to limit thermally induced degradation of the asphalt and polymeric components of the asphalt additive composition. In general, the asphalt composition will be heated to a temperature which is within the range of about 325° F. (163° C.) to 500° F. (260° C.) to facilitate good mixing of the deodorant throughout the asphalt composition. Preferably the asphalt composition will be heated to a temperature which is within the range of about 400° F. (204° C.) to 475° F. (246° C.) to facilitate good mixing of the deodorant into the asphalt composition. It should be noted that the deodorant can be mixed into the asphalt composition with other constituents, such as polymer additives, or it can be added in a separate mixing step. For instance, the deodorant can be added as the final step of the preparation of the asphalt additive composition. In the alternative, the deodorant can be added prior to or concurrently with other components of the asphalt additive composition or it can even be added prior to air-blowing in cases where industrial asphalt is utilized.
Virtually any asphalt can be utilized in making the asphalt additive compositions of this invention. However, the asphalt will typically be an oxidized asphalt to facilitate making the asphalt additive compositions of this invention into free-flowing pellets. The oxidized asphalt utilized in manufacturing the asphalt additive compositions of this invention will typically have a softening point which is within the range of 85° C. to 180° C. and a penetration value of less than 15 dmm. Preferably, the oxidized asphalt will have a softening point which is within the range of 100° C. to 170° C., and a penetration value which is less than 12 dmm. In some embodiments of this invention it is beneficial for the asphalt to have a penetration value of less than 5 dmm and in some applications it is desirable for the asphalt to have a penetration value of 0 dmm.
In some embodiments of this invention, the asphalt can be (1) a Type I asphalt which has a softening point of from 135° F. (57° C.) to 151° F. (66° C.) and a penetration of from 18 dmm to 60 dmm at 77° F. (25° C.), (2) a Type II asphalt which has a softening point of from 158° F. (70° C.) to 176° F. (80° C.) and a penetration of from 18 dmm to 40 dmm at 77° F. (25° C.), (3) a Type III asphalt which has a softening point of from 185° F. (85° C.) to 205° F. (96° C.) and a penetration of from 15 dmm to 35 dmm at 77° F. (25° C.), or (4) a Type IV asphalt which has a softening point of from 210° F. (99° C.) to 225° F. (107° C.) and a penetration of from 12 dmm to 25 dmm at 77° F. (25° C.).
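Because the Type I through Type IV designations are defined by distinct softening-point and penetration windows, the classification can be restated in a few lines of code. The sketch below is illustrative only; the function name is invented, and any asphalt falling outside the listed windows is simply reported as unclassified.

def classify_asphalt(softening_point_f, penetration_dmm):
    """Assign a Type I-IV designation from softening point (°F) and penetration at 77°F (dmm)."""
    types = [
        ("Type I",   135, 151, 18, 60),
        ("Type II",  158, 176, 18, 40),
        ("Type III", 185, 205, 15, 35),
        ("Type IV",  210, 225, 12, 25),
    ]
    for name, sp_lo, sp_hi, pen_lo, pen_hi in types:
        if sp_lo <= softening_point_f <= sp_hi and pen_lo <= penetration_dmm <= pen_hi:
            return name
    return "unclassified"

print(classify_asphalt(190, 20))  # Type III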
Penetration values can be determined at room temperature or at an elevated temperature. Unless stated otherwise, penetration values are determined at room temperature. For purposes of this invention, asphalt softening points are measured following ASTM D 36-95 “Standard Test Method for Softening Point of Bitumen (Ring- and Ball Apparatus)” and asphalt penetrations are measured following ASTM D 5-97 “Standard Test Method for Penetration of Bituminous Materials”.
The oxidized asphalt that can be utilized in the practice of this invention can be made by conventional air blowing techniques that are well known in the art to attain the desired softening point and penetration value. In such an air blowing procedure the asphalt is heated to a temperature which is within the range of 400° F. (204° C.) to 550° F. (288° C.) and an oxygen containing gas is blown (sparged) through it. This air blowing step will preferably be conducted at a temperature which is within the range of 425° F. (218° C.) to 525° F. (274° C.) and will most preferably be conducted at a temperature which is within the range of 450° F. (232° C.) to 500° F. (260° C.). This air blowing step will typically take about 2 hours to about 10 hours and will more typically take about 3 hours to about 6 hours. However, the air blowing step will be conducted for a period of time that is sufficient to attain the ultimate desired softening point and penetration value.
The oxygen containing gas (oxidizing gas) used in such an air blowing step is typically air. The air can contain moisture and can optionally be enriched to contain a higher level of oxygen or another oxidizing gas. For instance, chlorine enriched air or pure oxygen can also be utilized in the air blowing step. Air blowing can be performed either with or without a conventional air blowing catalyst. However, to attain commercially viable reaction rates an air blowing catalyst is typically utilized. Some representative examples of air blowing catalysts include ferric chloride (FeCl3), phosphorous pentoxide (P2O5), aluminum chloride (AlCl3), boric acid (H3BO3), copper sulfate (CuSO4), zinc chloride (ZnCl2), phosphorous sesquisulfide (P4S3), phosphorous pentasulfide (P2S5), phytic acid (C6H6[OPO(OH)2]6), and organic sulfonic acids.
The partitioning agent utilized in making the asphalt additive composition will typically be in the form of an inorganic powder and is typically selected from the group consisting of talc, clay, silica, calcium carbonate, wollastonite, glass fibers, and glass spheres. Organic soaps, such as zinc stearate, calcium stearate, and the like can also advantageously be used as partitioning agents in some embodiments of this invention. Polymeric partitioning agents can also be employed in some applications. To utilize the partitioning agent most efficiently, it will typically be dispersed on the surface of the pellets of the asphalt additive composition. In one embodiment of this invention, the partitioning agent can be a mixture of (i) a phenyl formaldehyde resin and (ii) a precipitated silica gel, a polyethylene wax, a polymethylene wax (Fisher-Tropsch wax), or a linear aliphatic hydrocarbon polymer. In another alternative embodiment of this invention, pellets of the asphalt additive composition can be coated with a fused resinous partitioning agent, such as polystyrene, polymethylmethacrylate, polyacrylonitrile, polyvinylchloride (PVC) or polyethylene. For instance, U.S. Pat. No. 3,813,259 describes the use of polymethylmethacrylate as a partitioning agent and U.S. Pat. No. 4,271,213 described the use of a mixture of styrene-butadiene copolymer resin and polymethylmethacrylate resin as a partitioning agent. The teachings of U.S. Pat. No. 3,813,259 and U.S. Pat. No. 4,271,213 are incorporated herein by reference for the purpose of describing such partitioning agents and the manner in which they are used to coat pellets to prevent them from agglomerating.
The partitioning agent will typically be incorporated onto pellets of the asphalt additive composition at a level which is within the range of about 0.05 weight percent to about 4 weight percent, based upon the total weight of the asphalt additive composition. The partitioning agent will more typically be incorporated onto the asphalt additive composition at a level which is within the range of about 0.1 weight percent to about 3 weight percent and will preferably be incorporated at a level of about 0.1 weight percent to about 2 weight percent. It is generally desirable to use the minimal amount of partitioning agent which is required to keep pellets of the asphalt additive composition of this invention in a free-flowing state. Pellets of the asphalt additive composition of this invention can be of various shapes that allow for good free-flow during storage, handling, and processing. For instance, the pellets of the asphalt additive composition can be in the form of pastilles, cubes, cylinders, discs, spheres, rods, briquettes, or granules. To attain the best possible free-flow characteristics it is normally preferred for the asphalt additive composition to be in the form of pellets having a generally spherical or cylindrical shape. The size of the pellets can vary widely. However, such pellets will typically weigh from about 0.02 grams to about 0.8 grams and will more typically weigh from about 0.04 grams to about 0.5 grams. It is typically preferred for pellets of the asphalt additive composition of this invention to have a weight which falls within the range of 0.08 grams to 0.2 grams.
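As a rough planning aid, the pellet weights and partitioning-agent loadings given above can be combined to estimate pellet counts and coating requirements. The sketch below is illustrative only; the 0.1 gram pellet weight and 0.5 weight percent loading are example values chosen from within the stated ranges, not preferred values from the patent.

def pellet_count(total_mass_kg, pellet_mass_g=0.1):
    """Approximate number of pellets in a given mass of additive composition."""
    return int(total_mass_kg * 1000.0 / pellet_mass_g)

def partitioning_agent_mass_kg(total_mass_kg, loading_wt_pct=0.5):
    """Mass of partitioning agent dusted onto the pellets at the given loading."""
    return total_mass_kg * loading_wt_pct / 100.0

print(pellet_count(25))                 # roughly 250000 pellets in a 25 kg quantity
print(partitioning_agent_mass_kg(25))   # 0.125 kg of partitioning agent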
The polymer additives which can optionally be incorporated into the asphalt additive compositions of this invention include EVA and polyolefin resins, such as polyethylene and polypropylene. The polymer additive (if any) will generally be incorporated into the asphalt additive composition at a level which is within the range of about 0.5 weight percent to about 50 weight percent, based upon the total weight of the asphalt additive composition. The polymer additive will commonly be incorporated into the asphalt additive composition at a level which is within the range of about 0.5 weight percent to about 30 weight percent. The polymer additive will typically be incorporated into the asphalt additive composition at a level which is within the range of about 1 weight percent to about 20 weight percent. The polymer additive will more typically be incorporated into the asphalt additive composition at a level which is within the range of about 2 weight percent to about 15 weight percent and will preferably be incorporated at a level which is within the range of 3 weight percent to 12 weight percent.
The asphalt additive compositions of this invention are beneficial in modifying plastic formulations to improve processing characteristics, as a homogenizing agent, as an adhesion promoting agent, as a resin replacement or supplement, as an insulating agent, and/or as a black colorant. The asphalt additive compositions of this invention are typically added to plastics formulations at a level which is within the range of about 2 parts by weight to about 25 parts by weight per 100 parts by weight of the plastic. The asphalt additive compositions of this invention are more typically added to plastics formulations at a level which is within the range of about 2 parts by weight to about 12 parts by weight per 100 parts by weight of the plastic. In most cases the asphalt additive compositions of this invention are added to plastics formulations in accordance with this invention at a level which is within the range of about 4 parts by weight to about 10 parts by weight.
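Note that these loadings are expressed in parts by weight per 100 parts by weight of plastic (phr) rather than as weight percent of the finished compound. The small sketch below, which is illustrative only and uses invented function names, converts between the two conventions.

def phr_to_wt_pct(phr):
    """Convert parts per 100 parts of plastic (phr) to weight percent of the total compound."""
    return 100.0 * phr / (100.0 + phr)

def additive_mass_kg(plastic_mass_kg, phr):
    """Mass of asphalt additive composition to add to a given mass of plastic."""
    return plastic_mass_kg * phr / 100.0

print(round(phr_to_wt_pct(10), 2))   # 10 phr is about 9.09 wt% of the compound
print(additive_mass_kg(500, 10))     # 50.0 kg of additive for 500 kg of plastic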
The asphalt additive compositions of this invention can be used to modify the characteristics of virtually any type of thermoplastic polymer with the benefit of exhibiting only a minimal level of residual odor. For instance, the asphalt additives of this invention can be blended into polyolefin resins, polyamide resins, polyester resins, polycarbonate resins, and the like. For example, a small amount of the asphalt additive composition of this invention can be blended into a plastic to improve its ability to be injection molded, blow molded, or extruded into a useful product. In some cases modifying the plastic with the asphalt additive composition of this invention improves throughput rates of molding equipment, reduces energy requirements, and reduces the incidence of defective parts. All of these potential benefits can result in significant cost savings in industrial applications. In other cases, the modification of the plastic with the asphalt additive composition makes it possible to produce a given part with a particular plastic which could not otherwise be produced without the plastic modification. It should also be noted that the incorporation of the asphalt additive composition of this invention into a plastic can result in improved thermal insulation characteristics which can be beneficial in certain applications. In other applications the characteristic black color provided to the plastic by the asphalt additive composition can also be desirable.
This invention is illustrated by the following examples that are merely for the purpose of illustration and are not to be regarded as limiting the scope of the invention or the manner in which it can be practiced. Unless specifically indicated otherwise, parts and percentages are given by weight.
In this experiment 0.5 parts by weight of zinc ricinoleate was mixed into 99.5 parts by weight of oxidized asphalt having a softening point of 150° C. for 10 minutes at a temperature which was within the range of 450° F. (232° C.) to 475° F. (246° C.). This treated sample of asphalt was then qualitatively compared by a group of four people to a second sample of the same asphalt which was not treated with zinc ricinoleate. All four of the people in the test group independently reached the conclusion that the asphalt sample that was treated with zinc ricinoleate exhibited a significantly reduced level of odor as compared to the untreated sample, which exhibited an odor characteristic of asphalt. This experiment accordingly shows that zinc ricinoleate can be incorporated into asphalt compositions to significantly reduce the odor level thereof.
While certain representative embodiments and details have been shown for the purpose of illustrating the subject invention, it will be apparent to those skilled in this art that various changes and modifications can be made therein without departing from the scope of the subject invention.
|
Which organisations need to do a self-evaluation for submission?
Each NHS organisation with an Education Contract (former Learning and Development Agreement) with Health Education England will need to carry out and submit a self-evaluation.
What guidance is there for Higher Education Institutions (HEIs)?
There is no separate guidance for HEIs. The generic guidance is detailed enough to assist in the self-evaluation process.
Any questions and concerns should be directed to the local LKS Lead.
Timeline and submission deadlines
What is the timeline for the baseline self-evaluation and validation process?
Submission deadline deferred until 24th September 2021.
Given Covid-19 pressures on NHS organisations, the current deadline for baseline submission from all NHS organisations with an LDA/Education Contract will now be September 2021.
Due to staffing issues/organisational changes, we would like to request an extension to the Outcomes Framework self-evaluation process
The baseline self-evaluation provides a snapshot of how the organisation and LKS meets the Outcomes Framework at the time of submission. Consequently we are unable to extend the deadline for submission.
Self-evaluation process
I want to do a preliminary self-evaluation. Is there any documentation that I can use?
Yes, this is fine. Use the self-evaluation template and associated documents from the documentation page.
What will be the process for submitting the self-evaluation?
We are currently setting up a Libguides website for submission of the self-evaluation.
The website will be available from June 2021 for submission. Further detail and guidance on submission will be sent to all organisations once the website is set up.
Can you involve teams outside the LKS in the self-evaluation process?
While the Trust is responsible for the submission of the Outcomes Framework evaluation, the LKS manager will most likely complete it.
You may work with others across the organisation as well as other members of the team. It is not just for the manager to complete.
I think that some workshops for people to work together to look at this would be a good idea.
We welcome suggestions from colleagues about which workshops/topics might be of use, once the introductory webinars have been completed.
Discuss with your local HEE LKS Lead first. There may be opportunities to incorporate this into forthcoming meetings and events.
Can HEE review my submission in advance for one or more outcomes and let me know how my organisation is performing?
While we are happy to offer advice on specific issues, HEE does not have the capacity to undertake a comprehensive review of organisations’ self-evaluations in advance of the baseline in 2021.
Outcome Levels
How do the levels work?
We spent a long time thinking about and devising the levels and indicators.
Using our knowledge, we considered how library and knowledge specialists operate at various levels of LKS development.
The step-progression from 1 and 2 to 3 and 4 is logical. You should find that what is needed for levels 1 and 2 needs to be in place before you can address what is needed for 3 and 4.
How do the levels in each outcome relate to each other?
For each outcome the framework offers a spectrum of five levels. These range from:
Level 0, where a service is not developed, up to Level 4 which suggests a highly developed service in relation to the outcome.
Levels are cumulative. Each level builds on the previous one to enable service improvement.
How do we use the levels in each outcome for self-evaluation and service improvement?
Within each level there are a series of indicators which suggest whether the level has been met.
If all indicators within a level have been met, and can be evidenced, this suggests the organisation and library and knowledge specialists have achieved that level and may be working at the next level.
How do we apply Low, Medium and High to each level?
If only some of the indicators within a level appear to have been met then the low, medium and high sub levels may provide a further way of tracking progress.
Low level suggests that you are at the initial stages for the particular level and can evidence this.
High suggests that you can demonstrate you are working fully within a level but as yet you are not working at the next level. This will be identified for service improvement and development.
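The level logic described above is mechanical enough to be expressed in a short script. The sketch below is for illustration only and is not an HEE tool; the indicator data structure and the thresholds chosen for the low, medium and high markers are assumptions made for the example.

def assess_outcome(indicators_met_by_level):
    """indicators_met_by_level: dict {level_number: [bool, ...]} for levels 1-4.
    Returns the highest fully evidenced level and, if a higher level is partly met,
    a (level, low/medium/high) marker for progress within it."""
    achieved, progress = 0, None
    for level in sorted(indicators_met_by_level):
        met = indicators_met_by_level[level]
        if met and all(met):
            achieved = level                      # levels are cumulative
        else:
            fraction = sum(met) / len(met) if met else 0.0
            # Assumed thresholds for the sub-level markers
            progress = (level, "low" if fraction < 0.34
                               else "medium" if fraction < 0.67 else "high")
            break
    return achieved, progress

print(assess_outcome({1: [True, True], 2: [True, True, True], 3: [True, False, False]}))
# -> (2, (3, 'low')): level 2 fully evidenced, early progress towards level 3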
Does the narrative need to describe and evidence why all levels have been reached or only the highest?
Yes the narrative needs to evidence both the level at which you have assessed your service and those below it.
In some cases it is not possible to evidence the higher level without evidencing the levels below.
However, where items are mentioned in lower levels and don’t automatically appear in the higher levels, it is expected that the narrative and evidence capture these too.
I want to set myself a SMART target relating to achieving a level for the Outcomes Framework in the baseline year. How do I know what a realistic target might be?
The outcomes are not just about the LKS or individuals working within it. Rather it looks at how the organisation as a whole supports and makes use of its LKS.
Therefore, placing the responsibility for achieving certain levels in the Framework on the service, or on individuals within it, would be inappropriate.
The organisation will not evaluate as having a single level but rather a level per outcome. It may not be the case, or even likely, that the levels will be the same for every outcome.
To set a realistic, meaningful target for any of the outcomes in terms of a level for next year you would need to know where you are now. Otherwise the target could mean no progress at all or even going backwards.
This would require a self-evaluation on all the outcomes now and having this verified by HEE’s LKS leads. HEE will not be undertaking advance verifications of self-evaluations.
If there is a desire to set an Outcomes Framework associated SMART target we would advise looking at the indicators associated with the outcomes and linking the aims and objectives to these.
With outcome one for example the objective might be involving specific senior stakeholders within the organisation(s) served in the development of any LKS strategy or associated plans by X date.
Will validations be subject to the same traffic light system, i.e. red, amber and green services that the LQAF was?
There won’t be any overall “score” from the new process.
There will be a level for each outcome – so for example level 3 for outcome 1, level 2 for outcome 2.
HEE and the LKS Leads would have concerns about any services and Trusts who don’t engage with the process or provide a self-evaluation and plans for development.
There may also be interventions from HEE where specific problems or issues have been identified.
How will we report to our senior stakeholders if there is no overall score?
The validated self-evaluation for your organisation will give a level for each outcome.
This includes a visual radar chart/spider graph demonstrating current levels and over time where improvements have been achieved.
This can be shared with stakeholders. The report will highlight areas of strength and good practice against all the outcomes, as well as areas for improvement. This will provide an accurate and more meaningful way of reporting.
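A chart of that kind can be produced with standard plotting libraries. The sketch below is illustrative only; it uses matplotlib with invented example levels for six outcomes and is not the report template issued by HEE.

import numpy as np
import matplotlib.pyplot as plt

# Invented example data: validated level (0-4) per outcome for two self-evaluation years
outcomes = ["Outcome 1", "Outcome 2", "Outcome 3", "Outcome 4", "Outcome 5", "Outcome 6"]
levels = {"2021 baseline": [2, 3, 1, 2, 2, 3], "2022": [3, 3, 2, 2, 3, 3]}

angles = np.linspace(0, 2 * np.pi, len(outcomes), endpoint=False).tolist()
angles += angles[:1]                      # repeat the first angle to close the polygon

ax = plt.subplot(polar=True)
for label, vals in levels.items():
    vals = vals + vals[:1]
    ax.plot(angles, vals, label=label)
    ax.fill(angles, vals, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(outcomes)
ax.set_ylim(0, 4)
ax.legend(loc="lower right")
plt.savefig("outcomes_radar.png")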
If there is no overall score how will we benchmark our services?
The Quality and Improvement Outcomes Framework is a service improvement and development tool focusing on an individual organisation. Therefore benchmarking is not appropriate.
We are working with HEE Communications and HEE Quality teams on the communications to Trusts explaining the new framework.
Quality leads will understand the new framework, levels and how the new process works.
Results will be reported to the national quality team and integrated with the wider quality education processes with which Trusts are familiar.
Evidence
How can I reflect the work that we have undertaken during the pandemic in my self-evaluation?
This guidance will help with this.
Contact [email protected] for the document in an accessible format.
Are you able to share good examples from LKS services that undertook the pilot last year?
We did for the pilot. However, some pilots followed the example too closely with the result that they didn’t fully demonstrate their own service and organisation.
It is difficult to pull together evidence examples as each organisation is different and will be working at different levels for each outcome.
For each outcome a list of suggested evidence has been provided. Don’t see these as a prescriptive list; there may be evidence more relevant to your organisation and service.
What is the maximum age of the evidence? For example in outcome 4 if you did a role redesign in 2017 would that count?
The evidence shows what you did and the outcomes for your service in the period being reviewed.
In this example it would only be relevant if the outcome of the role redesign was achieved during the period covered by your self-evaluation.
If you have a piece of evidence, for example an annual report, that covers all the organisations you cover how will that work if you are supposed to have a submission for each organisation?
The same piece of evidence can be used for each submission. You should ensure that, if it is generic, it covers the relevant organisations in sufficient detail to evidence the narrative at the levels you are demonstrating.
Can you use the same piece of evidence for more than one outcome?
Yes and we would encourage this approach. Think quality not quantity and cross reference against it for the different outcomes.
Ask the “So what?” question of the evidence. Why is it relevant and what does it tell us? This will help you identify “evidence” that is not appropriate so it can be omitted.
What happens when you are incorporated into a wider team for strategy, plan etc.? For example, I have 2 points in the corporate strategy.
If you are restricted in this way within wider corporate documents you will need to consider how you can further outline the plans and strategies of your service.
This may include using tools such as the Plan on the Page.
Will the Strategy and annual report be assessed separately?
No they will not. These are key documents giving a good overview of your LKS.
The strategy shows how the service is intended to develop and improve and the annual report shows what has been done over the last 12 months.
Elements of the documents could act as evidence for several of the outcomes. Cross-reference in relevant outcomes if needed.
Validation
There was a consistent national process put in place, based on the learning from undertaking the pilot of 12 services in 2018:
- The validation review was based on the guidance and principles provided, as set out in the validation section of the Framework Handbook, and the Evidence and Outcome Levels section of FAQs
- The aim was to establish a consistent national baseline, reviewing evidence from 1st April 2020 to 31st March 2021, with the option of including evidence from 1st April 2019 – 31st March 2020 where Covid had delayed activity in specific areas – see guidance
- The validation was based solely on the narrative and supporting evidence provided in the return.
- Each self-evaluation was reviewed by two validators to ensure consistency.
The validation for each self-evaluation was undertaken in two stages:
1. An individual review of the submission documentation
2. A meeting at which the validators discussed and reviewed their findings for each of the outcomes, agreed validated levels and compiled both the detailed and executive report
- Where queries arose, the validators escalated these to the national consistency panel.
- Throughout the validation consistency processes were put in place. These included sampling reports and validation decisions to ensure a consistent approach.
- All Executive Reports were reviewed to ensure consistency.
Resubmission
What type of evidence could I re-submit?
A range of different evidence documents can be included in your resubmission.
Evidence included needs to demonstrate that the activity and outcome occurred in the period 1st April 2020 to 31st March 2021; please see the guidance on evidence for 2020/21 and Covid-19.
To consider additional evidence for re-submission you may wish to consider some of the feedback in the detailed report and the guidance and suggestions in the Outcomes Framework document.
There are some examples below of the different types of evidence that organisations included in their baseline submission:
Outcome 1
Some examples of evidence submitted by organisations for outcome 1:
A Board member promotes the role and value of the library and knowledge service
- Governance structure to demonstrate Board member engagement with knowledge and library service (This evidence would need to be in combination with evidence to demonstrate promotion)
- Opportunities and invitations to present to Board members and meetings, with the outcomes of the conversation e.g. minutes included from these meetings
- Social media cards or impact case vignettes from Board members used in promotion, covering both knowledge and library service promotion and wider Trust promotion
- Regular tweets from a Board member to the organisation about the knowledge and library service
An approved strategy addresses Knowledge for Healthcare priorities, aligned to the goals and priorities of the organisation
- Strategy with the signed Board/Organisation committee cover sheet
- Copies of the approved minutes from the Board/Organisational committee that approved the strategy
- Formal communication, from the organisation or knowledge and library service, about the approval
A separately identified library and knowledge service budget allows for provision of a range of services and resources for users
- A copy of a budget statement – to show ownership by the knowledge and library service and sufficient funding
- Inclusion of the budget/expenditure overview in the annual report
- Extract from part 2 statistics - income and expenditure
With all the above they need to show ownership by the knowledge and library service and sufficient funding available for the resources
Outcome 2
Some examples of evidence submitted by organisations for outcome 2:
Evidence search services provided by the library and knowledge specialist support
- clinical decision making and
- non-clinical decision making
Evidence that demonstrates the searches undertaken and use of the search service:
- Extracts from spreadsheets that contain the details of clinical searches undertaken
- Extracts from spreadsheets that contain the details of non-clinical searches undertaken
- A range of impact case studies and vignettes, of both clinical and non-clinical decision making, gathered from literature searches carried out
- Inclusion in annual reports of searches undertaken and/or impact vignettes of both clinical and non-clinical decision making
Outcome 4
Some examples of evidence submitted by organisations for outcome 4:
A qualified library and knowledge specialist actively leads the service.
- Certificates of qualifications
- CILIP Professional Registration at Chartered level or above
Library and Knowledge specialist skills and capacity are considered in service planning
- Emails/Discussions with line managers/senior colleagues identifying additional capacity requirements
- Capacity and skills requirements included in strategy plans
- Reviews of team member activities
- Business cases for restructure
- Review of capacity against the recommended Staff Ratio
How will re-submitted evidence be reviewed and validated?
To ensure consistency across the process the same validation principles and guidance will be used. Please see the answer to the question How was my self-evaluation submission validated? | https://library.hee.nhs.uk/quality-and-impact/quality-and-improvement-framework/quality-and-improvement-framework-faqs |
National Institutes of Health funding for surgical research.
The objective was to compare National Institutes of Health (NIH) funding rates and application success rates among surgeon and nonsurgeon-scientists over the past 2 decades. Surgeons may be capable of accelerating the translation of basic research into new clinical therapies. Nevertheless, most surgeon-scientists believe they are at a disadvantage in competing for peer-reviewed funding, despite a recent emphasis on "translational science" by organizations such as the NIH. We accessed databases from the NIH and the American Association of Medical Colleges. Although total competing NIH awards rose 79.2% from 5608 to 10,052, the much smaller number of surgical awards increased only by 41.4% from 157 to 222. There was a small but statistically significant difference between total NIH and surgical application success rates (29% vs. 25%, P < 0.01). However, the persistently low percent of NIH funding going to surgical investigators was due primarily to the very small number of surgical applications, and to a much smaller increase in the absolute number of applications over time (464 vs. 23,847). As a result, the number of grants per 100 faculty members was more than 4 times higher among nonsurgical than surgical faculties at US medical schools. NIH funding to academic surgeons is declining relative to their nonsurgical colleagues. This trend will likely be reversed only by an increase in the number of grant applications submitted by surgeon-scientists. Structural changes in surgical training programs, and in the economics of academic surgery, may support a greater contribution of surgeon-scientists to the success of translational research.
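The headline growth figures quoted in the abstract can be reproduced from the raw award counts it reports. The short sketch below simply re-does that arithmetic; it adds no data beyond the numbers given above.

# Recompute the growth percentages quoted in the abstract (illustrative arithmetic only)
total_awards_start, total_awards_end = 5608, 10052
surgical_awards_start, surgical_awards_end = 157, 222

def pct_increase(start, end):
    return 100.0 * (end - start) / start

print(round(pct_increase(total_awards_start, total_awards_end), 1))        # 79.2
print(round(pct_increase(surgical_awards_start, surgical_awards_end), 1))  # 41.4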
| |
Boost for biodiversity and community groups
We are helping to improve the environment by supporting eleven projects which are working to help biodiversity in our supply area.
The eleven charities and community groups will receive funding totalling over £48,400 for projects that encourage and enhance biodiversity in almost twelve hectares (equivalent to almost twelve rugby pitches) as well as having a positive community impact. The projects include wildflower and hedgerow planting, habitats for bees, an innovative project to measure the migratory patterns of skylarks, and the eradication of Himalayan Balsam, an invasive species.
- West Midlands Ringing Group, Blithfield: Using innovative technology to monitor the presence of skylarks in the Blithfield area, to build up information about their migratory patterns and to develop a habitat plan in partnership with local farmers and landowners.
- Thomas Russell Infants School, Barton-under-Needwood: Expanding the biodiversity at the existing forest school with wildflowers and native hedging.
- St Thomas’s Community Association, Dudley: Turning a neglected area into a community green space by planting a hedgerow, native plants, shade tolerant plants to grow in damp areas, planting fruit trees and installing bird, bat and owl boxes.
- Etwall Primary School, Etwall: Creating a meadow on the school grounds.
- Kingstone Jubilee Committee, Kingstone: Creating wildflower verges and wildflower areas in the churchyard.
- Kinver Eco Collective, Kinver: Creating wildflower verges which can act as corridors between Heathlands and Sandland habitats in the area.
- Hereford & Worcester Scouts, Kinver: Eradicating Himalayan Balsam across the site, renovating a former area of coppice and planting new hedging along the boundary of the site.
- Bird's Bush Primary School, Tamworth: Creating an engaging and innovative outside space, while improving biodiversity.
- Kettlebrook Short Stay School, Tamworth: Enhancing biodiversity on the school grounds with new trees, shrubs and other planting.
- Tutbury Community Forest Garden Steering Group, Tutbury: Planting hedging to create a forest garden on the edge of a new housing estate.
- Grenfell Road Allotments, Walsall: Providing more habitats for bees, by planting trees and shrubs, and buying two hives and colonies.
Since the launch of the PEBBLE fund in 2016, 54.7 hectares (equivalent to almost 55 rugby pitches) have been improved thanks to these awards.
"We’re not just here to provide our customers with high quality water, we want to improve the environment for current communities and for future generations.
We do that by supporting and funding the community groups and charities, which are working hard to increase the variety of natural living things and the diversity of the habitats where they live. This is work which not only benefits wildlife, but also enhances local communities and our open spaces.
We were pleased to receive so many applications for our PEBBLE fund this year, the successful projects were chosen by a combination of our staff volunteers across the business and customers on our online community. I’m looking forward to seeing how these projects progress."
Dan Clark, Water Resources and Environment Manager. | https://www.south-staffs-water.co.uk/news/boost-for-biodiversity-and-community-groups |
‘The old order changeth, yielding place to new…’ (From the poem Morte d’Arthur, by Alfred, Lord Tennyson, 1835).
The New Year marked the passing of an eventful 2015 for the VU, and the promise of an equally eventful year to come. I want to pause and take stock of a momentous change heralded by the new year, a change that may have gone somewhat unnoticed within our broader community. I am referring to the formal changes in the structure of the umbrella organizations for the VU and the VUmc.
Until 31 December 2015, the VU and VUmc were two peas in a pod, and part of a single foundation, the Stichting VU-VUmc. As of January 1, 2016, the VU and VUmc follow their separate paths as independent entities, each legally housed in separate foundations, respectively the Stichting VU and the Stichting VUmc. The formation of separate entities has advantages from two standpoints: governance and flexibility. The VU and the VUmc are exceedingly complex organizations in their own right; this split into independent organizations leads to clarity of missions for each institution, and to better governance, with independent supervisory boards with the required domain-specific expertise. The independence also gives the VU and the VUmc additional flexibility in separately determining their own futures and collaborations, such as a far-reaching alliance with the AMC that the VUmc is actively exploring.
For half a century the VU and VUmc have been linked as institutions. Colocalization of the VUmc and the VU on one campus is a great good – it has given, and continues to give, both institutions enormous opportunities for achieving research and educational synergies, and allows us to jointly develop the full spectrum of research, from fundamental, curiosity-driven science, to translational work relevant in the clinic. The ‘old order’ may be changing, and we may be going separate ways, but we remain inextricably intertwined as institutions sharing a history, identity, basic philosophy, and one campus. Let’s keep it that way. | http://advalvas.vu.nl/index.php/blog/new-order |
It is rare that a community starts a planning process with a goal of setting out in a new direction to look at non-traditional community planning issues and reflect upon the changing nature and structure of their own community and economic activity. Following the devastation of Hurricanes Katrina and Rita, the Greater New Orleans Foundation and the City of New Orleans engaged residents in a planning process to envision a new sustainable future for their communities. The result of an intensive five month process, The Unified New Orleans Plan is the single, comprehensive recovery and rebuilding plan for the City of New Orleans. Through intense analysis and an aggressive timeline, the design team completed three district Recovery Plans for District 2 (Central City and the Garden District), District 8 (the Lower Ninth Ward), District 13 (English Turn), and a neighborhood plan for District 12 (Algiers). These plans were merged into one city-wide plan that includes comprehensive infrastructure recommendations for flood protection. The final city-wide plan was accepted in 2007, setting in motion the release of $117 million in federal grants for infrastructure repairs for government and non-profit entities across the city.
Plans were completed in four phases. Phase one involved interviewing key stakeholders in the area; demographic and housing characteristics analysis; economic base analysis; access, circulation and transportation; existing conditions and urban design analysis and identification/review of all existing information and data; development of case studies, and a work-session. Phase two involved the development of goals and principles for the city through the use of the visual preference survey. The third phase of work involved drafting a Development Strategy and Comprehensive Recovery Framework with a series of focus area development plans, and a final phase completing the Community Recovery Plan.
Hurricane Katrina made landfall in Louisiana in the early hours of August 29, 2005, and Hurricane Rita subsequently on September 24, 2005. The combined devastation of these two monstrous storms encompasses the states of Texas, Louisiana, Mississippi, Alabama, and Florida. It is estimated over 1,500 people died in Louisiana, while 18,000 businesses and more than 200,000 housing units were destroyed. Critical infrastructure such as roads, schools, public facilities and medical services were washed away. The largest city in Louisiana, New Orleans, once the center of transportation, economy, industry and tourism of the state, suffered an overwhelming disaster. As a result, 80% of the city was flooded, around 150,000 houses destroyed and over 450,000 residents evacuated. Storm surge pushed ashore by the Hurricane Katrina caused the city and state to suffer the worst civil engineering disaster in American history.
The citizens of New Orleans in each district were deeply involved in the project team selection process. Public meetings were held to allow citizens and neighborhood groups the opportunity to indicate their preferred planning team.
The recovery framework suggests projects and strong public/private partnerships, guided by the community and their representatives, to facilitate a sustainable urban resilient recovery of the city. The recovery will provide not only the basic necessities of pre-Katrina status, but also a path to rebuild the city with holistic preparedness for the future. The strong community initiatives, programs, and recovery projects are detailed in the master plan.
District 2: A Framework for Sustainable Urban Resilience
District 2 presents a comprehensive cross section of New Orleans as a whole because it comprises multiple neighborhoods, each of which has a unique character. District 2 experienced a large variation of impact from Hurricane Katrina due to its elevation both above and below sea level. The portions below sea level were originally swamp land, drained, developed and inhabited after the areas closer to the River were settled. District 2 occupies the downriver portion of what is commonly referred to as the “crescent” of the Mississippi River, with the Warehouse District and Central Business District to its downriver side, the Uptown neighborhoods to its upriver side, residential neighborhoods and industrial use to its lake-side, and mixed use and the Port of New Orleans along its river-side. Three scenarios for recovery focus on timeframes and try to understand how the district and city might look in the future: RE Pair, RE Hab, and RE Vision. The chosen scenario was a RE Hab +, one defined by Sustainable Urban Resilience. Sustainable Urban Resilience focuses not only on infrastructure repair and public improvements, but also creating a place that is safer from disasters and improves the quality of life for all residents. The plan takes a proactive step to protect oneself while living with risk; it is the capacity to adjust to threats and to mitigate or avoid harm. Sustainable Urban Resilience is a framework that at this critical crossroad and rare moment for the future of a complex and culturally rich historical District provides a tremendous opportunity to develop the cohesiveness of the planning unit at large, providing a holistic approach that prior to Katrina was almost non-existent. The plan is the chance to turn disaster into opportunity, to break down barriers between neighborhoods, and unite them through physical and social form to facilitate recovery and sustainable urbanity.
District 8: A Framework for Sustainable Resilience in the Lower Ninth Ward
District 8 is approximately 2.8 square miles of the city, and is bounded by Florida Avenue on the north, the Orleans/ St. Bernard Parish line on the east, the Mississippi River on the south, and the Industrial Canal on the west. There are two neighborhoods within the district: Lower Ninth Ward and Holy Cross. The Lower Ninth Ward was devastated by Hurricane Katrina. When a community loses its center it suffers, and when it loses its people it’s irreparably affected. The Lower Ninth Ward’s social infrastructure and physical infrastructure were shattered on August 29, 2005.
Through an intense community planning process, the design team worked closely with the residents and neighborhood associations of the Lower Ninth to develop a recovery plan and a strong set of goals and principles to guide the Lower Ninth's revitalization and secure its unique place in New Orleans' history. The residents prepared a unified vision and a framework for recovery. This initiative identifies needs, challenges, and opportunities and provides a strategy for how the community can begin to meet its recovery head on, while laying a road map for its survivability.
Jan Esterkin has been practicing Educational Therapy in Santa Monica since 2001 after graduating from UCLA's certificated program in Educational Therapy. She earned her undergraduate degree from the University of California at Berkeley, and earned two graduate degrees, one from Boston University in Elementary Education, and one from Loyola Marymount University in Education, Counseling and Guidance.
Jan began her career as a classroom teacher and a resource specialist teacher in the Los Angeles Unified School District. She was also a learning specialist at Stephen S. Wise School, and an assistant teacher at The John Thomas Dye School, St. Martin of Tours, and Kenter Canyon Elementary School. Jan was a member of The Learning Studio, an educational therapy resource program at Canyon Charter School and at Kenter Canyon Elementary School.
Through her 12 years of practicing educational therapy, Jan has developed a unique approach to working with her students. She encourages success and a positive outlook toward school and learning, while creating an energetic, educational atmosphere through her empathy, sense of humor, and enthusiasm.
Jan is a professional member of the Association of Educational Therapists (AET), an international organization dedicated to "helping individuals become successful learners" (aetonline.org). She is currently a study group leader for the association. | https://www.janesterkin.com/ |
The World Economic Forum will serve as secretariat for a new G20 Global Smart Cities Alliance.
The alliance unites municipal, regional and national governments, private-sector partners and cities’ residents around a shared set of core guiding principles for the implementation of smart city technologies. Currently, there is no global framework or set of rules in place for how sensor data collected in public spaces, such as by traffic cameras, is used. The effort aims to foster greater openness and trust as well as create standards for how this data is collected and used. This marks the first time that smart city technologies and global technology governance have been elevated to the main agenda of the G20.
The Forum will coordinate with members from the G20, Urban 20 and Business 20 communities to develop new global governance guidelines for the responsible use of data and digital technologies in urban environments. The Internet of Things, Robotics and Smart Cities team in the Forum’s Centre for the Fourth Industrial Revolution Network will take the lead and ensure accountability throughout the alliance’s members.
“This is a commitment from the largest economies in the world to work together and set the norms and values for smart cities,” said Børge Brende, President of the World Economic Forum. “We will coordinate efforts so that we can all work in alignment to move this important work forward. It is important we maximize the benefit and minimize the risk of smart city technology so all of society can benefit, not the few.”
“The advancement of smart cities and communities is critical to realize Japan’s vision for Society 5.0. It is also essential to address the world’s most pressing challenges, including climate change and inclusive economic growth,” said Koichi Akaishi, Director General for Science, Technology and Innovation for the Cabinet Office of the Government of Japan. “The Government of Japan is proud to have championed this cause as part of our G20 presidency and was pleased to see the Business 20, Urban 20 and G20 Digital Ministers all pledge their support for the creation of a global smart cities coalition. To advance this work, we are pleased to welcome the World Economic Forum Centre for the Fourth Industrial Revolution as the global secretariat for this important initiative.”
Public-private cooperation is crucial to achieving global change. Efforts to form the Global Smart Cities Alliance have been supported by four partners of the World Economic Forum: Eisai, Hitachi, NEC and Salesforce.
“Open and Agile Smart Cities is thrilled to be part of this global effort led by the World Economic Forum to support cities and communities with a global framework,” said Martin Brynskov, Chair of Open and Agile Smart Cities, an international smart cities network. “Openness and interoperability are key to scaling up digital smart city solutions that help tackle the challenges that cities are facing in the 21st century – on the cities’ terms and conditions.”
“This alliance builds on the work already done by many cities around the globe – such as the Cities Coalition for Digital Rights – to empower citizens through digital technologies. A human-centric digital society shall reflect the openness, diversity and inclusion that are at the core of our societies and value systems,” said Ada Colau, Mayor of Barcelona. “Cities must spearhead efforts to put technology and data at the service of the citizens in order to tackle big social and environmental challenges, such as feminism, affordable housing, climate change and the energy transition. We are committed to being part of this global endeavour to build a digital society that puts citizens first and preserves their fundamental rights.”
“In today’s interconnected world, global collaboration is no longer merely an option, it is a necessity,” said Bill de Blasio, Mayor of New York City. “New York City is proud to have championed a model for smart cities that puts our most vulnerable residents first. We also recognize that now more than ever urban issues have global implications. As mayors, we have a unique responsibility to lead by example and demonstrate a sustainable path towards a more inclusive and equitable future.”
“As the world continues to urbanise, it is indispensable to successfully manage urban growth,” said Ichiro Hara, Secretary General of the B20 Tokyo Summit, and Managing Director of Japan Business Federation, Keidanren. “The Business 20 has called for the implementation of Society 5.0 to be supported by fostering cooperation among smart cities. We applaud the G20 for heeding our call for a smart cities alliance and look forward to common guiding principles being developed through this critical initiative.”
“The Cities for All Network is excited to partner with the World Economic Forum and the G20 to help realize our shared vision for a more inclusive urban future,” said Victor Pineda, President of World Enabled and Co-Founder of Smart Cities for All. “The last industrial revolution left out a lot of people. As we move into the Fourth Industrial Revolution, we cannot risk repeating past mistakes. We need to work together to co-design robust policy frameworks to ensure that all members of society can contribute to and benefit from technological advancements.”
“Local governments and city leadership need to be at the core of decision-making when developing smart cities,” said Emilia Saiz, Secretary General of United Cities and Local Governments (UCLG). “It is the guarantee to ensure the human dimension and the protection of the commons. United Cities and Local Governments is delighted to contribute in every way possible to that process and to transform the conversation around digital rights.”
“In pursuit of the Sustainable Development Goals and in line with the New Urban Agenda, UN-Habitat affirms the importance of coordinating efforts around protections and standards in deploying smart digital infrastructure to ensure that such smart technologies benefit all, particularly the vulnerable, including people with disabilities,” said Maimunah Mohd Sharif, Executive Director of UN Habitat. “We welcome this important new alliance led by the World Economic Forum, G20, mayors, national governments, multilateral organizations, and civil society groups.”
About the Centre for the Fourth Industrial Revolution Network
The Network helped Rwanda write the world’s first agile drone regulation and is scaling this up throughout Africa and Asia. It also developed actionable governance toolkits for corporate executives on blockchain, co-designed the first-ever Industrial IoT (IIoT) Safety and Security Protocol and created a personal data policy framework with the UAE.
Based in San Francisco, the World Economic Forum Centre for the Fourth Industrial Revolution brings together governments, leading companies, civil society and experts from around the world to co-design and pilot innovative approaches to the policy and governance of new technologies. More than 100 governments, companies, civil society, international organizations and experts are working together to design and pilot innovative approaches to the policy and governance of technology. Teams are creating human-centred and agile policies to be piloted by policy-makers and legislators around the world, shaping the future of emerging technology in ways that maximize the benefits and minimize the risks.
Urban Development
How cities can save on commuting time, double job access
A new report released today by the World Economic Forum pinpoints how cities can use mobility options to improve social equity and economic growth.
The white paper, How Mobility Shapes Inclusion and Sustainable Growth, identifies over 40 potential solutions to improve inclusivity in mobility, with simulations of over 40 million daily trips, global benchmarking and in-depth interviews with key stakeholders.
Prepared in collaboration with the Boston Consulting Group and University of St Gallen, the study identifies transportation ‘pain points’ in three cities – Beijing, Berlin and Chicago. Using a six-step transportation equity methodology, the white paper analyses the mobility challenges each city faces, their affected communities and how transportation is driving, or failing to drive, economic growth and well-being. It also offers recommendations that result in real gains.
This methodology fills a void in current transportation analysis and can serve as the centrepiece of a strategy for developing mobility-based social inclusion programmes and policies in the identified cities and elsewhere.
Beijing, People’s Republic of China
This high-density megacity can become nearly 30% more efficient, saving commuters about five days’ worth of travel time per year:
- Pain point: Very high demand has overwhelmed Beijing’s public transit network, with queuing times to get into some train stations consistently over 15 minutes, leading many residents to choose driving as an alternative.
- Solution: A digital platform for metro reservations to flatten peak-hour demand and reduce commute time for rush hours.
- Benefit: This equates to a 29% average reduction in travel time for the service users in the modelling for Beijing, an average reduction of 115 hours of waiting per year per user.
Berlin, Germany
The report shows how this compact, middleweight city is raising $295 million more per year for inclusive mobility projects:
- Pain point: As central districts have become gentrified, populations have been pushed further from the city centre, where public transport is more limited and fragmented. Berliners in these peripheral areas take about 27% more time commuting than central Berliners.
- Solution: Creating differentiated service levels for public transit increases usage and brings in additional revenue that can be used to improve public mobility systems for the underserved.
- Benefit: A differentiated service level on public transit increased the share of public transit trips by 11% while at the same time generating 28% higher revenue for the public transport operator – an equivalent of $295 million – that can be used to improve access for underserved populations.
Chicago, USA
A car-centric city such as Chicago can give low-income neighbourhoods access to hundreds of thousands of additional jobs:
- Pain point: Low-income households in Chicago spend up to 35% of their income on transportation, due to the high cost of vehicle ownership and reliance on cars for mobility. Average work commute time on public transit for individuals in low-income areas is also nearly 15 minutes longer when compared to residents in some high-income areas.
- Solution: Introducing on-demand shuttles to cover the first and last mile of transport can greatly increase access for underserved communities.
- Benefit: The solution would increase the share of public transit usage in Chicago by 26% and would broaden the number of jobs reachable in 40 minutes – the rough ceiling for a desirable commuting time – by 90%; this would result in improved access to 224,000 jobs from neighbourhoods that did not have access before.
The white paper also finds that in order to foster social inclusion through mobility, both supply and demand must be considered. Purely increasing mobility infrastructure does not always yield the desired results.
For example, adding 10 new subway cars may do little to increase ridership among people with disabilities even if they do not have other transportation options, mainly because getting to a subway station is a challenge in and of itself. Other solutions, such as an on-demand mobility service for the disabled community like Hyundai Motor Group’s EnableLA universal mobility service, may be the more appropriate option.
Next Steps for Policymakers
Access to transportation infrastructure is essential to social development and economic growth, and improving the mobility situation for underserved population groups needs to be one of the top priorities for decision-makers.
Since every city has its own mobility and socioeconomic challenges, data collection processes and the current understanding of rider demand must be re-examined in order to gather important information about mobility challenges affecting minorities.
Understanding the baseline mobility conditions of each urban environment is crucial to effectively determining the appropriate solutions for individual cities.
Urban Development
Public-Private Collaboration Will Define New Era for Cities
At the World Economic Forum’s inaugural Urban Transformation Summit, which closed on Wednesday, global leaders underscored the need for increased public-private collaboration to capitalize on new infrastructure funding and tackle growing urban challenges around the globe.
“Our cities and our communities are changing right before our eyes. Digitization is transforming urban economies, public health and safety concerns are tearing at the social fabric of communities—and trillions of dollars of new infrastructure funding across the globe holds the potential to transform the physical environment,” said Jeff Merritt, Head of Urban Transformation at the World Economic Forum. “Now more than ever, it is critical that public and private sector stakeholders come together to shape a future that does not just work for the privileged few but delivers for all residents.”
“We have to be real about what this moment in time presents for us and not dismiss it,” said Michael Hancock, Mayor of Denver. “We have to be very intentional in our efforts and say we’re going to create a new opportunity that America has not seen and give a chance to right the wrongs of some of the great epic moments in our history.”
“Climate change is not some far distant activity. It is real, and it is occurring now,” said Yvonne Aki-Sawyerr, Mayor of Freetown, Sierra Leone.
Mike Duggan, Mayor of Detroit, said: “We don’t need any more think tanks, we don’t need any more papers. We need public-private partnerships.”
The summit, which comprised both in-person events in Detroit and virtual convenings, included more than 350 mayors, business executives, community leaders and experts in urban development from 38 countries in North America, South America, Europe, Africa, Asia and Australia.
It marked the official launch of the World Economic Forum’s new global Centre for Urban Transformation and spurred a series of new initiatives and collaborations to support the development of more sustainable and inclusive cities.
Notable outcomes and commitments:
Two cities in Europe – Stockholm and Lisbon – were added to the roster of City Strategy Dialogues planned for 2022. In collaboration with MIT, the convenings, which include both public-facing events and more intimate workshops, pair mayors and senior city leaders with global experts and business leaders to forge new approaches to pressing urban challenges.
“New technologies are promising to transform cities in a way similar to what the automobile did in the 20th century,” said Carlo Ratti, Professor of Urban Technologies and Planning and Director of the MIT SENSEable City Lab. “That’s why we need new forums – such as the Urban Transformation Summit, the City Dialogues that MIT will co-host with the Forum – to share knowledge and lessons from all over the world.”
Eight cities in Latin America, Africa and Asia – Bogotá, Buenos Aires, Lagos, Dhaka, Jakarta, Kigali, Nairobi, and Rio de Janeiro – have designated neighbourhoods as urban testbeds for new businesses, products and services that can improve quality of life for local residents and mitigate social and environmental challenges associated with rapid urbanization.
Following a four-month review of key barriers to public-private collaboration in cities, Accenture has announced plans to work with the World Economic Forum and its partners to develop new resources and tools to help cities better coordinate place-based strategies and accelerate community partnerships.
Design Core Detroit will lead a participatory design process in collaboration with the Forum to design a Fellowship Program in Detroit. The design process will identify and map new opportunities to scale community-based solutions that will connect Forum business partners to contribute technical assistance towards achieving community-led goals.
Plans for the next edition of the Urban Transformation Summit have already been set in motion. The event will convene once again in Detroit, 11-13 October 2022.
Urban Development
Urban leaders, influencers, chart new path for world cities
Mayors of Mexico City, Bogotá, New Orleans, Freetown, Gaziantep and Barcelona joined other urban leaders, designers, activists and thinkers from around the world on Wednesday, to chart a new path for cities.
A launch event called Cities at the Crossroads kicked off at the British Academy in London, marking the inaugural session of the new UN-backed Council on Urban Initiatives.
The international group of eighteen mayors, activists and academics was formed in response to the UN Secretary-General’s call to use the COVID-19 pandemic as an “opportunity to reflect and reset how we live, interact, and rebuild our cities.”
In a video message shown at the event, António Guterres recalled that cities large and small “have been epicentres of COVID-19 and are on the frontline of the climate crisis.”
They also face severe risks from climate change, which will only grow, according to UN estimates.
By mid-century, over 1.6 billion urban residents may have to survive through average summertime highs of 35 degrees Celsius. More than 800 million could be at direct risk from sea level rise.
‘A bold new narrative’
For the UN Secretary-General, the pandemic “must be an inflection point to rethink and reset how” people live, interact and build cities.
“Investment in pandemic recovery is a generational opportunity to put climate action, social justice, gender equality and sustainable development at the heart of cities’ strategies and policies”, Mr. Guterres said.
The UN Chief also noted that more and more cities across the world are committing to net zero by 2050, or before.
“The sooner we translate these commitments into concrete action to reduce emissions, the sooner we will achieve green job growth, better health, and greater equality”, he argued.
Also addressing the event, the UN-Habitat Executive Director asked for “a bold new narrative now.”
“We need to bring visionary mayors to the table to help address these interlinked global crises and reframe the discourse on the role of cities, urban governance, design and planning”, Maimunah Mohd Sharif said.
Change conversation
The Council’s mission is to ensure a healthy global debate over urban issues, to help chart a sustainable future. The work will be organized around three challenges: the JUST city, the HEALTHY city and the GREEN city, said UN-Habitat.
The new Council starts its work as the UN’s COP26 climate conference continues in Glasgow, Scotland, trying to keep the goal of limiting global warming to 1.5 degrees within reach.
Being responsible for approximately 75 per cent of the world’s energy consumption and over 70 per cent of global greenhouse gas emissions, cities are at the core of climate action.
A global challenge
Also this Wednesday, at the World Expo in Dubai, the UN launched the Climate Smart Cities Challenge.
The initiative is an open innovation competition, run with the cities of Bogotá, Colombia; Bristol, United Kingdom; Curitiba, Brazil; and Makindye Ssabagabo, Uganda, to identify climate-smart solutions and reduce urban impact.
According to UN-Habitat, “the climate ambitions of these cities are impressive and addressing them will have a powerful impact in shaping how city leaders, innovators and local communities respond to the climate emergency.”
Competition
With these four cities selected, the competition is now asking innovators, including technologists, start-ups, developers, finance experts and more, to submit their best solutions to the unique challenges identified. The application period closes on 5 January.
Up to 80 finalists (up to 20 per city) will be selected to work closely with these four cities, learn more about their challenges, collaborate on solutions, and ultimately form teams to demonstrate solutions in the real world.
The winning teams will share up to 400,000 Euros to leverage further investment and build towards system demonstration in 2023.
Around 4.5 billion people live in cities today, but that number is projected to grow by almost 50 per cent, by 2050. By mid-century, over 1.6 billion urban residents may have to survive through average summertime highs of 35 degrees Celsius.
| https://moderndiplomacy.eu/2019/07/01/wef-to-lead-g20-smart-cities-alliance-on-technology-governance/ |
At work we’ve been playing the game Bananagrams (see also here and here). It’s a word game with lots of tiles with letters on them, similar to Scrabble tiles. I won’t explain the rules here; you can find them with those links.
After playing several games and having fun (yeah, we recommend it), we decided to implement a version to test our adage that “any game can be improved by drafting”. Drafting is a concept that comes to us from Magic: the Gathering – a card game in which a common competition format is to build your deck of cards by “drafting” them. Drafting involves getting a set of cards, choosing the one you want, passing the rest on to the next player, and repeating the process until you have enough cards to play the game. So at each step you get some choice of the “best” card to suit your deck, restricted by what other players have already taken from the hands of cards that are going around.
Anyway, we decided to draft from sets of 3 letter tiles, and at each stage replenish the set to 3 tiles by taking a new face-down tile at random from the unused pool before passing it to the next player. At each step of the draft, you then have a collection of letters, which grows by one letter each step. And at each step you must arrange (or rearrange) all the letters you have into words. The restriction is that you must have either 1 or 2 words; they must not intersect like a crossword (so they can’t share letters, unlike in canonical Bananagrams); and to prevent players being knocked out too easily, any single letter is considered a valid word. If at any step you can’t make one or two words, you are out for that round. Last player knocked out wins the round, then play again!
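For the curious, here is a minimal Python sketch of one round of the drafting loop described above. The letter distribution, the random picks standing in for human choices, and the function names are illustrative assumptions only; a real game would also check that each player's letters still form one or two valid, non-intersecting words (with single letters allowed).

```python
import random

# Rough sketch of the "Mangograms" drafting loop. The letter distribution is
# simplified and the random pick stands in for a human choice; the word-validity
# check described in the post is deliberately omitted here.

def make_pool():
    return list("EEEEEEEEAAAAAAIIIIIIOOOOOONNNNNNRRRRRRTTTTTTLLLLSSSSUUUUDDDGGBBCCMMPPFFHHVVWWYYKJXQZ")

def draft(num_players=4, steps=10):
    pool = make_pool()
    random.shuffle(pool)
    # One hand of 3 face-down tiles circulates per player.
    hands = [[pool.pop() for _ in range(3)] for _ in range(num_players)]
    collections = [[] for _ in range(num_players)]
    for _ in range(steps):
        for p in range(num_players):
            hand = hands[p]
            pick = random.choice(hand)      # stand-in for a human choice
            hand.remove(pick)
            collections[p].append(pick)
            if pool:
                hand.append(pool.pop())     # replenish the hand back to 3 tiles
        hands = hands[1:] + hands[:1]       # pass hands to the next player
    return collections

if __name__ == "__main__":
    for i, letters in enumerate(draft(), start=1):
        print(f"Player {i}: {''.join(sorted(letters))}")
```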
We had even more fun with this than the original game. At one stage one set of three drafting letters being passed around had two Qs in it, plus one random tile from the pool, which severely restricted the choice of whoever got that set at each step. (You can certainly take a Q if you can use it in a word, but it’s better strategy to take more easily usable letters for later in the round.)
So there you go, a half-baked idea which was suggested off the cuff, we tried it, and it turned out to actually be good! Oh, and we dubbed our new game “Mangograms”, to keep with the tropical fruit theme. | http://www.mezzacotta.net/?p=107 |
Protecting blockchain products requires a deep understanding of a company’s relevant assets, so an IP audit can clarify the benefits and risks before proceeding with commercialisation, say Andrew MacArthur and Ralph Dengler of Venable.
The rise of blockchain technology and its many applications, including banking and supply chain, continues to disrupt business. Blockchain provides the benefits of being immutable and decentralised, among others. It integrates distributed networks, cryptography, and consensus algorithms in potentially new and complex ways, forcing companies to reconsider how IP—patents, trade secrets, trademarks, trade dress, and copyright—should be optimised.
This is also true as each IP measure has a different duration. Without underlying IP, the commercialised blockchain product or service could have minimal value. This article provides a background on blockchain and analyses potential IP measures that could apply.
- Background
A blockchain is generally a network of computers (also called nodes) that share the same chain of blocks. Each block contains several transactions, and blocks are linked together by cryptographic hashes, with each block holding the hash of the prior block. This generally makes each block immutable. Cryptocurrency, one application of blockchain, involves the transfer of a digital currency. After various users have executed transactions, computers called miners compete to insert the next block in the chain by collecting several transactions into a single proposed block.
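As a minimal illustration of the hash-linking described above, the sketch below chains blocks of transactions with SHA-256 so that altering any earlier block invalidates every later link. It is a toy example under simplifying assumptions and omits the mining, consensus, and networking aspects the article mentions.

```python
import hashlib
import json

# Toy illustration of hash-linked blocks: each block stores the hash of the
# previous block, so tampering with any block breaks the chain.

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash):
    return {"transactions": transactions, "prev_hash": prev_hash}

def verify(chain):
    # Recompute each block's hash and compare it with the link stored in the next block.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

if __name__ == "__main__":
    genesis = make_block(["genesis"], prev_hash="0" * 64)
    b1 = make_block(["alice->bob: 5"], prev_hash=block_hash(genesis))
    b2 = make_block(["bob->carol: 2"], prev_hash=block_hash(b1))
    chain = [genesis, b1, b2]
    print(verify(chain))                      # True
    genesis["transactions"] = ["tampered"]
    print(verify(chain))                      # False -- the stored link no longer matches
```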
| https://www.worldipreview.com/article/how-to-conduct-a-blockchain-ip-audit |
Physical activity is associated with a decreased occurrence of dementia. In twins, we investigated the effect of persistent physical activity in adulthood on mortality due to dementia.
Physical activity was queried in 1975 and 1981 from the members of the older Finnish Twin Cohort (n = 21,791), who were aged 24-60 years at the end of 1981. The subjects were divided into three categories according to the persistence of their vigorous physical activity. Dementia deaths were followed up to the end of 2011.
During the 29-year follow-up, 353 subjects died of dementia. In individual-based analyses the age- and sex-adjusted hazard ratio (HR) was 0.65 (95% CI 0.43-0.98) for subjects partaking in vigorous physical activities in both 1975 and 1981 compared to those who were inactive in both years. No significant change was observed after adjusting for potential confounding factors. The corresponding HR for within-pair comparisons of the less active twin versus the more active co-twin was 0.48 (95% CI 0.17-1.32). The results for analyses of the volume of physical activity were inconclusive.
Persistent vigorous leisure-time physical activity protects from dementia, and the effect appears to remain after taking into account childhood environment.
Cognition; cognitive decline; dementia; exercise; physical activity; twins
| https://www.ncbi.nlm.nih.gov/pubmed/25613168 |
This post is the third part in a series on Tuscany slow travel. I was extremely fortunate to travel at a new, slower pace with a wonderful couple from Km Zero Tours. They take travelers on journeys in a slow way, showing them that travel is more about experience than checking off activities on a list. This post will focus on the wonderful experience of combing cashmere goats in Tuscany. For more articles see the links at the end of this post.
Nora’s cashmere goats in Tuscany
In the 70s, Nora decided that the busy New York life was not for her anymore and moved to Tuscany. At that time, Tuscany was not a famous place nor did it have any tourism infrastructure, but it was an idyllic and happy place in which Nora found peace.
She started to raise cashmere goats to harvest the hair for scarves and other garments, and forty years later she has one of only a few sustainable goat farms in the world. Her life is sustainable in all other aspects. She has a vegetable and herb garden, and she even grows the hay that the goats eat when there is no green grass in the fields. As expected, she is also vegetarian.
Aside from about 300 goats, there are also several huge guard dogs as hairy as the goats, and a couple of other small fluffy ones. To supplement her income, Nora also rents out a holiday home next to hers to repeat clients who come every year, some of them even help her out in the farm.
Springtime is also when the baby goats are born, and Nora had a one-day-old and a two-day-old little goat running around. They were the most adorable animals I have ever seen. Jumping and playing with their mothers, one of them, Jelly Bean, had to be hand fed because her mother was too young to produce her own milk.
And I got the best job in the world: feeding the little goat, and, boy was she hungry! Surprisingly, only at two days old, Jelly Bean saw the bottle and knew what was coming so she ran towards it. I don’t think I will ever have a sweeter experience.
About cashmere goats
Cashmere goats are typical of Central Asian countries, Northern India, Pakistan, Afghanistan, China and Mongolia, and get their name from the disputed geographical area of Kashmir, in India. The cashmere wool is the undercoat the goats grow in the winter months, the goat’s second follicle, which they lose when it gets hotter, and it is prized for its extremely fine touch.
The difference between cashmere goats and sheep or mohair goats is that the latter two have only one hair follicle and are sheared once or twice a year. Cashmere goats have two hair follicles, one of which falls off in the hotter months. Therefore, the perfect time of year for cashmere goat combing is April and May, because that is when the hair falls off naturally and is easier to comb.
Combing Cashmere goats in Tuscany
Nora spends a lot of time and effort testing the goats for genetic information and diseases. She is very interested in furthering the breed and wants to raise happy animals. When we speak about genetics, we speak about the colour, fineness and length of the hair. Nora sends samples to the lab every season to determine which goats she will breed in the next season and is now trying to breed the red-haired goats that are so rare. She already has a red-haired goat called Cappuccino and is trying to see if she can breed more, but it is difficult because red hair is a double recessive gene.
Before we started combing our Cashmere goat, we took a small sample of fiber from the same place she takes it on all goats for consistency and comparison across goats which she will send to the lab.
We got to comb a one year old goat with very knotted hair. It was her first time being combed so she did not seem to like it too much at the beginning but she probably does not remember anymore today. Usually, combing is easy as the hair falls off naturally but since this had been a rainy and colder than usual year, the goats were constantly wet and could not be combed so Nora had to wait. Living in synch with nature also means living with the challenges of climate change which are affecting everyone, including the cashmere goats.
Our goat was loosely tied to the fence so she would not move too much and Marisa and I sat down on plastic stools and started combing her. She was pretty still, occasionally bleating. Combing was not easy but girls usually know the tricks. Since the cashmere hair is the hair hidden under the main coverall coat we had to look for it with our fingers.
We used a regular comb tip to comb the hair out. Because it was the right season to comb the goats the hair fell off without any effort and without hurting the goat. After a little while she almost seemed to like it.
Sustainable cashmere
Nora is committed to organic and wild raising of the goats on her land, and she has trademarked the name “Sustainable Cashmere” because her goats feed on overgrown weeds on marginal farmland and she is improving the breed and the environment. Sustainable cashmere implies that the production of the garments uses less than it consumes. Nora is probably the largest sustainable cashmere producer in the world, though there are a few smaller herders in the US.
Contrary to what I had originally expected, life in the countryside surrounded by 300 goats is anything but boring or monotonous. Nora is actively and internationally involved in cashmere goat breeding. She has organised several networking events with other cashmere goat herders across the world and spoken at several conferences in which she preaches on a betterment of the breed and a reduction in mass farming. But it has not been easy.
Most of the countries in which cashmere goats are raised are only producers of the raw materials and do not have any influence or revenue share in the final products.
To them, more goats means more income. Moreover, to the goat herders in Mongolia, where there are 10 million cashmere goats, the value of the cashmere fiber in the market is unknown and their worth and wealth is measured by the cattle heads they have so they aim to have bigger herds rather than healthier or genetically better animals.
When Nora tried to make them understand that mass feeding was causing the genetics of the goats to worsen and disease to spread, the herders did not want to know anything about it. As soon as a Chinese manufacturer would come and buy their cashmere fibers they would sell straight away.
With the vast range of quality in the raw fibers across the world and the genetical weakening of the breed, the fibers these Chinese manufacturers were getting from the Mongolian herders were nothing like the fibers Nora combs off her goats. But consumer information is scarce and most of us cannot tell the difference after the fibers have been threaded and treated.
When I visited Mongolia during the Nadaam festival two years ago, I bought a cashmere scarf which I still keep. Armed with my experience with Nora, I felt the scarf when I got back home and I understood Nora’s predicament. But when I bought the scarf, I thought I was buying a high quality garment, after all, I was in the country with one of the highest concentration of cashmere goats, surely they knew what they were doing.
Nora is very concerned about diseases being transferred across countries so she also spends time testing fibers from other countries to understand the risks and also advising other governments on how to improve their practices. But diseases are not the only problem with cashmere, fake fibers being sold as cashmere are a greater threat.
In order to control the authenticity of cashmere, 100 of the largest producers of cashmere in the world like Italian Valentino or Loro Piana decided to finance the Cashmere and Camel Hair Institute, an international institution devoted to testing the authenticity of cashmere sold in the world. The CCMI found that 86% of the fabric sold as cashmere was not actually cashmere but all sorts of other equally fine wool, from fine sheep hair to even human hair.
Sharing cashmere goat stories around the table
We talked goats and sustainable farming with Nora and her friend Christine over goat combing and lunch under the shaded patio. We plucked lettuce from Nora’s garden, picked up a few green faba beans from the vine, cut some pecorino cheese and enjoyed a wonderfully fresh lunch of potato salad with sun dried tomatoes, Tuscan bread and fabulous olive oil as the sky turned grey and blue in a moody spring day. The meal ended with strawberries, so sweet and gorgeous looking. A dessert we had at almost every meal as they were in season.
Nora’s life was anything but the quiet and peaceful life of a goat herder. She organises and attends conferences, she investigates genetic development and she even participated in a development program financed by the US Government in Afghanistan to improve management practices in cashmere goats raising. She has more recently been contacted to carry out a similar project in Nepal.
But the Afghan project is one which got her on TV at the BBC and other US channels when scandal broke about mismanagement of the project’s money. The headlines hit Washington earlier this year. The project involved ten of Nora’s goats which she traveled with to Afghanistan’s northern province in order to improve the genetics of the goats Afghan herders had and move up the value chain from purely raising goats to producing the final cashmere product.
But she soon realised that everyone in the project was getting kick backs and that management malpractice was rife so she decided to abandon the project, not without fascinating stories about her journey through Lufthansa’s Animal Lounge in Frankfurt, a terminal devoted to the transport of wildlife across the world, from polar bears to giraffes or goats.
110 million animals, mostly cats and dogs, pass through the terminal every year, almost twice as many as human passengers. Lufthansa does not transport dolphins or sharks and does not allow for the transport of any animals caught in the wild, unless they are being sent back home, like the two black rhinos that will be transported from the Frankfurt zoo back to Swaziland for reintroduction into the wild.
Cashmere wool: yesterday, now and tomorrow
Visiting Nora’s goats gave me a renewed point of view on rural development. Nora lets the goats graze freely on reclaimed agricultural land which she is recycling with the free and organic grazing of the goats. Because they roam freely and naturally, the goats are happier and the cashmere wool finest. Nora’s goats did indeed look like they were having a great time jumping and playing in the vast green fields. And they were adorable.
As for the finished product, we purchased some incredibly soft scarves that proved very useful in an unusually cold spring in Tuscany, and we took away wonderful memories of the one- and two-day-old goats and a new appreciation for an ancient tradition that Nora has managed to maintain in a sustainable manner.
Having grown up in the countryside and felt the reality of trying to make a living in the harsh and usually unpredictable weather conditions, it was encouraging and inspiring to see that there are alternative ways to make it work and that the impact can be widely spread across continents.
| https://www.onceinalifetimejourney.com/once-in-a-lifetime-journeys/europe/slow-travel-tuscany-combing-cashmere-goats/ |
Understanding our Responses to conflict: Passive, Aggressive and Assertive
You have read that I believe that all conflict in our world can be traced back to three main roots: Limited Resources, Unmet Basic Needs and Different Values. See my very first blog for a review here.
If you are at all interested in better communication and effective listening strategies, then you have likely heard about the three basic responses to conflict: Passive, Aggressive or Assertive.
The differences in our responses are defined by how we affect the resources, basic needs and values of ourselves and of the people with whom we are in conflict. We can guide our responses by our intentions and what we hope the end result of the conflict will be.
A Passive response generally looks like avoiding or denying a conflict in your life. Many people believe that in order to avoid being Aggressive, or violent with their body or words, they must be Passive. The result is that your resources, basic needs and values, the other person's, or both are not being met or are being compromised. Sometimes this feels like a survival technique, which it may very well be, depending on your situation. It is important to remember that survival doesn't mean that all of your needs are met; it simply means you are alive and feel safe(r) in that moment. This can lead to unsolved issues, recurring conflicts and dissatisfaction in the long run of your relationships.
An Aggressive response means that you are doing whatever you can to make sure that your own needs are met, and usually means that the other person(s) is not getting their needs met because of your actions. If your goal is simply to protect your resources, meet your own needs and defend your own values then you may trend towards an aggressive response to conflict. It looks like the "Fight" response, in a "fight or flight" scenario, and generally holds some of the longest-lasting consequences in your relationships.
An Assertive response to conflict is characterized by making every effort to preserve the resources, meet the needs and keep intact the values of all parties involved. When you see the reality of this, it can be much easier to work towards a resolution that will satisfy all people involved in a conflict. It can take time, effort and seeking to understand to be sure that all the elements for each individual are being preserved. This is the kind of response to conflict that can be sustainable, create more understanding and prevent escalating long-term discomforts in our relationships.
It's important to know that every conflict is unique and that no one response is always an available option. It is also important to understand that your intentions can guide your responses if you simply check in with yourself. Is my hope to "win" no matter what? Is my hope to just get through this so I don't have to deal in this moment? Do I want to find lasting resolution and make sure everyone involved is taken care of? Take a moment the next time you encounter some kind of conflict (probably sometime today!) and check your intentions. See if you can recognize your own roots of this conflict and the roots for the people you are in conflict with as well. The more often that we can protect the resources, meet the needs and respect the values of all involved, the more peaceful and understanding a world we will live in. Thanks for doing your part! | http://www.robinfunsten.com/robins-blog/2016/3/23/understanding-our-responses-to-conflict-passive-aggressive-and-assertive |
220 E. Chicago Ave.
Chicago, IL 60611
Brendan Fernandes’s dance-based installation in the Commons, entitled A Call and Response, explores the ways society sees and values different kinds of bodies. Using language, architecture, and gesture to understand the nature of being seen, Fernandes encourages dancers–and visitors–to collaborate and generate new forms of physical language that move and attract other bodies in space.
Fernandes (Kenyan, b. 1979) seeks to isolate everyday actions, such as running for the bus or slinging a bag over your shoulder, considering individuals’ movements in social spaces as a kind of choreography. Over the course of the exhibition, the artist poses the question: How do the shapes of our bodies and our physical proximity to others affect our sense of visibility?
The Commons Artist Project: Brendan Fernandes is organized by January Parkos Arnall, Curator of Public Programs, with Christy LeMaster, Assistant Curator of Public Programs.
The Commons Artist Project is a biannual exhibition series that provides a platform for artists to create commissioned installations that consider the big issues of our time. The projects provide direct opportunities for visitors to interact with the works and ideas of local and regional artists of national recognition.
About A Call and Response
A Call and Response consists of three movement-based projects, all of which encourage visitors to participate in their formation.
THE INSTALLATION IN THE COMMONS
A series of prompts invite visitors to explore together the ways our appearances and movements convey social meaning.
OPEN CALL
Fernandes asks visitors to answer a “call to movement” and participate in a collectively generated performance alongside professional dancers in the Commons and on the Terrace. While this piece has been performed before at other venues and institutions, it’s a totally unique experience created in the moment, with the people in the room.
CALLING TIME
With the keen eyes of visitors to the museum, Fernandes and three dancers create a new movement piece over the course of open rehearsals throughout the summer. Culminating in two final performances in the fall, Calling Time is catalyzed by architectural constructions, prompts, and visitor input.
See the schedule below for all of Fernandes’s rehearsals, activations, and performances to collaborate in the creation of new work.
EXHIBITION EVENTS AND ACTIVATIONS:
Tue, June 18: Exhibition opens
Wed, June 19, 6 pm: Talk: Brendan Fernandes with January Parkos Arnall
Sun, June 23, 11 am–1 pm: Opening Brunch & Open Rehearsal
Sat, June 29, 7–11 pm: Activation at Prime Time: "Question Everything"
Tue, July 9, 11 am–1 pm: Open Rehearsal
Fri, July 19, 6 pm: Open Call Performance
Tue, July 23, 11 am–1 pm: Open Rehearsal
Tue, Aug 6, 11 am–1 pm: Open Rehearsal
Tue, Aug 20, 11 am–1 pm: Open Rehearsal
Thu, Aug 22, 7–11 pm: Activation at Prime Time: TM™
Tue, Sep 10, 1–3 pm: Open Rehearsal
Tue, Sep 24, 1–3 pm: Open Rehearsal
Fri, Sep 27, 6 pm: Calling Time Performance
Tue, Oct 8, 6 pm: Calling Time Performance
| https://www.chicagogallerynews.com/events/brendan-fernandes-a-call-response |
A one-on-one personal consultation & training session in which your Instructor will review your needs and goals with you, provide you with a fitness assessment, acquaint you with a mat and equipment workout and make recommendations on the most appropriate way for you to proceed.
Please book all training sessions online according to your own schedule.
If you need individual attention due to pain, injury or weakness we recommend you continue with private training. | http://limhamnpilatesforening.se/intro/ |
My daughter recently attended a birthday party at a local “teaching kitchen” that offers cooking classes for all ages. In a brightly lit, fully stocked presentation kitchen, fifteen five-year-olds were taken through the steps of making their own pizzas from scratch: making the dough and sauce, rolling the dough out into individual-sized portions, and personalizing them with sauce and toppings. The instructor emphasized hands-on engagement with the food. Each child was invited to touch, smell, and taste the ingredients as they went through the steps of measuring, mixing, rolling, chopping, and spreading. Parents helped as needed, but that help was minimal and most often directed at ensuring that the children were taking turns and sharing utensils or ingredients. Children and parents alike seemed to be enjoying themselves.
Beyond the obvious enjoyment, the children were very much focused on the materiality of the foods at hand, whether it was playing with the toppings they put on their pizzas or sampling all of the potential toppings for the cupcakes they decorated later. I watched as my daughter and her friends experimented with shapes, textures, colors, smells, tastes, and even sounds. Which toppings could stack easily, and which ones rolled off the frosting? Which ones bounced, and which ones squished? Which ones bled colors as they got wet from sweaty fingers, and which ones squirted liquid or chunks when they were squeezed? The children were not concerned with nutrients, calories, price, or ethics. Nor did they care about how their pizzas or cupcakes were plated or whether their creations were nutritionally appropriate. Instead, the children seemed to be focused on the materiality of the foods in front of them and the visceral experience of those foods. For them, food was more than fuel for their bodies; it was a material object to be explored, experienced, and enjoyed in multiple ways. | https://gastronomica.org/tag/exchange/ |
2) Using LSTM-RNN to form semantic representations of sentences.
3) Representations of sentences and documents. The Paragraph Vector is introduced in this paper. It is basically an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of text, such as sentences, paragraphs, and documents.
4) Though this paper does not form sentence/paragraph vectors, it is simple enough to do so. One can just plug in the individual word vectors (GloVe word vectors are found to give the best performance) and then form a vector representation of the whole sentence/paragraph.
We have models for converting words to vectors (for example the word2vec model). Do similar models exist which convert sentences/documents into vectors, using perhaps the vectors learnt for the individual words?
A solution that is slightly less off the shelf, but probably hard to beat in terms of accuracy if you have a specific thing you're trying to do:
Build an RNN (with LSTM or GRU memory cells; a comparison is here) and optimize the error function of the actual task you're trying to accomplish. You feed it your sentence and train it to produce the output you want. The activations of the network after being fed your sentence are a representation of the sentence (although you might only care about the network's output).
You can represent the sentence as a sequence of one-hot encoded characters, as a sequence of one-hot encoded words, or as a sequence of word vectors (e.g. GloVe or word2vec). If you use word vectors, you can keep backpropagating into the word vectors, updating their weights, so you also get custom word vectors tweaked specifically for the task you're doing.
There are a lot of ways to answer this question. The answer depends on your interpretation of phrases and sentences.
These distributional models, such as word2vec, which provide a vector representation for each word, can only show how a word is usually used in a window-based context in relation to other words. Based on this interpretation of context-word relations, you can take the average vector of all words in a sentence as a vector representation of the sentence. For example, in this sentence:
vegetarians eat vegetables .
We can take the normalised average of the word vectors as the vector representation, i.e. v(sentence) = (v(vegetarians) + v(eat) + v(vegetables)) / 3, scaled to unit length.
The problem is the compositional nature of sentences. If you take the average of the word vectors as above, these two sentences have the same vector representation:
vegetables eat vegetarians .
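A small sketch with made-up three-dimensional vectors illustrates the point: averaging is order-invariant, so both sentences map to exactly the same vector. The numbers are arbitrary placeholders, not trained embeddings.

```python
import numpy as np

# Made-up 3-dimensional "word vectors" purely to illustrate the point:
# averaging ignores word order, so both sentences get identical vectors.
word_vectors = {
    "vegetarians": np.array([0.9, 0.1, 0.3]),
    "eat":         np.array([0.2, 0.8, 0.5]),
    "vegetables":  np.array([0.7, 0.4, 0.1]),
    ".":           np.array([0.0, 0.0, 0.0]),
}

def sentence_vector(tokens):
    avg = np.mean([word_vectors[t] for t in tokens], axis=0)
    return avg / np.linalg.norm(avg)          # normalised average

s1 = sentence_vector("vegetarians eat vegetables .".split())
s2 = sentence_vector("vegetables eat vegetarians .".split())
print(np.allclose(s1, s2))                    # True -- word order is lost
```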
There is a lot of research in the distributional tradition on learning tree structures through corpus processing. For example: Parsing With Compositional Vector Grammars. This video also explains this method.
Again I want to emphasise interpretation. These sentence vectors probably have their own meanings in your application. For instance, in sentiment analysis in this project at Stanford, the meaning they are seeking is the positive/negative sentiment of a sentence. Even if you find a perfect vector representation for a sentence, there are philosophical debates that these are not the actual meanings of sentences if you cannot judge the truth condition (David Lewis, "General Semantics", 1970). That's why there are lines of work focusing on computer vision (this paper or this paper). My point is that it completely depends on your application and your interpretation of the vectors.
To the Editor: Recently, Harding and colleagues (1) have shown that the cancer incidence rates decrease in old age and may drop to zero near the end of the human life span. The authors added a linearly decreasing factor to the Armitage-Doll multistage model of cancer (2, 3) and obtained a β distribution–like model function. By this model, they obtained a good fit for incidence rates of many cancers; however, this work has several drawbacks:
- Confidence intervals for derived model variables and methods for their determination have not been provided.
- The proposed model does not provide an appropriate fit for cancers with multimodal incidence rate distributions (such as Hodgkin's Lymphoma, testicular cancer, etc.).
- The authors used incidence rates for ages starting at 50 years. From the observed data, they derived the upper age limit (say, B) of cancer development, assuming that in their model, the lower age limit is A = 0. However, one can show that the model variables are very sensitive to variation of A (see below for an example).
By introducing a lower age limit (A) in addition to the upper limit (B) in the β function (4), one can obtain the following: I(T) = a·T^(k−1)·(1 − bT), where I(T) is the incidence rate, T = (t − A), b = (B − A)^(−1), t is the age in years, k is the number of cancer stages, and a is a combined rate constant; a and k can be used as variables, whereas A and B can be treated as a priori data extracted from observations. The use of a priori information can reduce the number of derived variables and stabilize the solution against variations of input data (5). As an example, consider pancreatic cancer incidence rates in males as reported by the Surveillance Epidemiology and End Results database for the years 1999 to 2003. One can evaluate A and B, for which the incidence rates are statistically distinguishable from zero, to be approximately equal to 30 and 100, respectively, and obtain a good fit with k ≈ 4. By contrast, the authors report a fit of comparable quality with k ≈ 7, when A is 0 and B ≈ 100.
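As a sketch of how such a two-limit model could be fitted, the code below uses SciPy's curve_fit on synthetic incidence data and reads rough standard errors off the covariance matrix; the age range, rates, and starting values are illustrative assumptions, not actual Surveillance Epidemiology and End Results data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit I(T) = a * T**(k-1) * (1 - b*T), with T = t - A and b = 1/(B - A),
# treating A and B as a priori limits and (a, k) as free parameters.
# The "observed" incidence rates below are synthetic, for illustration only.

A, B = 30.0, 100.0
b = 1.0 / (B - A)

def incidence(t, a, k):
    T = t - A
    return a * T ** (k - 1) * (1 - b * T)

ages = np.arange(35, 100, 5, dtype=float)
true_a, true_k = 2e-6, 4.0
noise = np.random.default_rng(0).normal(1.0, 0.05, ages.size)
rates = incidence(ages, true_a, true_k) * noise

params, cov = curve_fit(incidence, ages, rates, p0=[1e-6, 3.0])
errors = np.sqrt(np.diag(cov))   # standard errors, from which rough confidence intervals follow
print(f"a = {params[0]:.2e} +/- {errors[0]:.1e}, k = {params[1]:.2f} +/- {errors[1]:.2f}")
```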
We appreciate the importance of the work, but determination of confidence intervals is crucial for the model to be applicable for rigorous statistical analysis.
Disclosure of Potential Conflicts of Interest
No potential conflicts of interest were disclosed.
- ©2009 American Association for Cancer Research. | https://cancerres.aacrjournals.org/content/69/1/379.1 |
When I was asked to write an article to complement the release of my book, By Land, Sky & Sea: Three Realms of Shamanic Witchcraft, I either drew a blank or had too many options relating to the subject matter to decide between. Fortunately, I tend to be quite consciously reflective, and it dawned on me to write about initiation as the path of the Shamanic Witch.
In the past few months here in Australia I have been traveling to the major cities and presenting workshops with the same title as this article. As I traveled and the people and cities changed, my insight into the processes and teachings deepened. I began to realise important things relating significantly to what I now proudly call Shamanic Witchcraft. Below I will discuss several of them.
My focus in the past five years has been on what I believe are the underlying or core teachings of the Mystery Tradition now called Witchcraft. This foundation is essentially shamanic, meaning our cosmologies relate to a manifold expression (generally threefold) of realms, and we may travel between and through them in order to learn, change, deepen, and grow. In the process we actualise our divinity, become Gods amongst Gods, and interact with the world as alive and magickally potent. The enabling factor is of course initiation. To empower and facilitate such an awakening, the shadow must be made an ally.

Many teachers (especially Jungian and New Age) treat the shadow as limited to holding and containing our repressed emotions, thoughts, and desires before they overflow and force us to recognise the "dark" qualities within. The shadow, in this philosophy, becomes nothing more than a psychic storehouse of the recesses of self. I do not deny that the shadow can become a container for such doubt, fear, anger, and hatred; however, the shadow precedes the arousal of such emotions and, in my belief, can be viewed as the primal template of the soul. As such, the shadow is the most powerful ally in Shamanic Witchcraft, because we yearn for eternal wholeness, and therefore integration with the self that holds Self is required. We are of the Earth and the Starry Heavens; of this we know.

The Shadow is the infamous Guardian at the Gate. The Gate represents Initiation and the Guardian is, of course, Death. Death is feared because to many it represents a final ending, an unbinding of the impetus of life, and a gradual or terrifying deterioration of the physical faculties which remind us that we are, in fact, alive. Death liberates us, not from the "mortal coil" and its implied profanity, but from the idea that our ego-bound natures can never transform, evolve, or deepen. Death encourages us to experience the potencies of Life experientially and to flow with the tides of change which Death facilitates. The Shadow/Guardian/Gate-Keeper has become maligned and demonic because we ourselves cannot escape the fear of Death; it is only after the experience of initiation (however that happens) that one can truly face the Shadow and take it in to meld, fuse, and become at one with the All-Self, which is inherently divine. Thus do we attain to Deity and fulfil the destiny of the self-caused being that rests in the open palm of Being itself, that Great Mystery. The blessing of the Church of All Worlds bears the essence of what I am seeking to convey: Thou art Goddess; Thou art God.
In many Craft traditions, when the neophyte is brought to the edge of the Circle blindfolded and bound and the sword is held to her chest, the fear crystallises. One is blind, bound, and unable to escape or turn back; all that remains, and the road to true humility and an uplifting of the soul, is surrender. To many Witches the concept of "perfect love and perfect trust" represents the quintessence of surrendering. To lay down and make oneself vulnerable, to allow the "dread forces" to carve out the holy vessel and make it fit for the spirits (the Hidden Potencies of Life), is an act of surrender and the preface to the transformation of initiation.

The Witch legend of the Descent of the Goddess, the Eleusinian Mysteries, the Orphic myth of the destruction and rebirth of Dionysus, and Inanna's journey to and from her sister Ereshkigal's halls all have one thing in common: the concept of initiation as the journey of consciousness through Death to claim the Light within that will inspire, guide, and create. The initiate bears the mark of this journey for all eternity; the experience can never be forgotten, destroyed, or forcibly taken. An initiate of Life is implicitly an initiate of Death, and to the initiate, neither is above nor below, nor apart. Witchcraft in its many facets teaches the cycles of sacrifice, death, decay, rebirth, growth, and triumph. Modern Witches, in our celebration of the eight sabbats of the Wheel of the Year, attune to the currents of the land as reflective of the greater cosmic cycles and become at one with the self within, which is the mirror for the Self as found without. The Goddess in the Wiccan myth, however, declares, "for if that which you seek you find not within yourself, you will never find it without."

The self is the Self that is Creation, Continuum, Destruction, Transformation, and Rebirth. And as the Feri folk would say, "This too is Goddess." I emphatically embrace my Divinity, and yours, and as such we are joined in an endless journey of understanding, celebrating, and relating to the reflections of that which we call Great Mystery. For each one of us is a Hidden Potency, a Crystallised Form of Pure Consciousness, the Tear-Drops of the Gods. It is this intimate knowledge that broadens my perspective, deepens my experience of Life, and allows me to thrive in all earnestness. I am you, you are I… we are one with the Darkness that holds and bears the Seed of Light. Each of us is the outcome of universal creativity. The Shamanic Witch's vocation is to be witness to this Divine Truth and, in service of its integrity, remain conscious, alert, and ready to reveal it in the spirit of reverence and celebration: for balance, for wisdom, for beauty, and for magick. | https://www.llewellyn.com/journal/print.php?id=2143
Chapter 4 - "The Lord is There"
When we speak of anything being a criterion, we mean just what the dictionary gives as the definition, i.e., 'the principle taken as the standard of judging'; 'any established law or principle by which propositions or opinions are compared, in order to discover their truth'. So a criterion is that by which the truth and value of any matter is determined. By such-and-such a principle or fact the whole thing stands or falls; is true or false. That then is our objective in relation to Divine Purpose. Can we put our finger definitely upon that Divine Purpose and see that it is the climax, the culmination of all God's ways? Well, what is that climax, that one Divine end, by which everything is to be judged, now and for ever?

In these chapters we have been allowing the prophet Ezekiel to be our guide and interpreter, seeing that the book which goes by his name is not only a book about prophecies and history, but a book of spiritual principles with a much greater context than earth and time. When we reach the end of that book, we find ourselves in the presence of that great ultimate, that universal climax, that realised purpose, and it is all summed up in the brief, though vast, phrase:

"The Lord is There".
What a wide field is opened by that climacteric phrase! The Bible is bounded by this supreme concept. It opens and closes with the presence of God with man. It is the governing issue throughout all its pages and phases. There are almost countless aspects of this one thing, but, be it so, the issue is just this alone: Is the Lord there or is He not? Is the Lord in that or is He not? Is the Lord with that, with him or her, in that place, in that decision or course, or is He not? That is the criterion. His presence with unfallen man and His departure from disobedient man is an eternal principle. His presence in the beginning indicates purpose. His presence by the Incarnation of His Son is unto the redemption of the purpose. His presence by the Holy Spirit is to make that purpose actual as an inward thing.

The major aspects force us back to basic considerations. Let us not hurry on with greatness of vision, but pause and quietly tell ourselves that what is more vital and important than anything else in all our life is that the Lord is with us. Futility, vanity, disappointment and remorse will most certainly overtake us, sooner or later, and overtake all our undertakings if, at length, it should be found that the Lord is not with us. It is a perilous thing to go on without the Lord. Moses, who did know something, cried: "If thy presence go not with us, carry us not up hence". Mere assumption in this matter may well prove to have been fatal presumption. "Supposing him to be in the company" may lead to the necessity to retrieve the value of the whole journey (Luke 2:44).
The Bible shows that nothing can be done which will be of eternal value unless God is in it.

When we have settled this basic fact and let it become the ever- and all-dominating principle in life and work, we are ready to appreciate certain other things which stand out so clearly in this connection. The first of these is:

The Holy Spirit's Meticulous and Scrupulous Exactness.
If the Holy Spirit is jealous for the main object, He is shown to be equally jealous for the detailed features. This can be seen in various connections.

If the creation and man were intended for the presence of God, they had to be a meticulous expression of God's mind. God was Himself the Architect. God was Himself working scrupulously to a Pattern. (The whole Bible shows that Pattern to be His Son.) The Holy Spirit became the Custodian and energy of that Pattern. Nothing was haphazard, left to chance, or left to man or angels to conceive or design.

Another great and forceful example of the principle was the Tabernacle of Testimony. Here, again, nothing in design, even to a pin or a stitch, a measurement, a material, a position, was left to man. It was all to be according to "the pattern shewn". The Holy Spirit took charge of the artisans, and only when 'all things were according to the pattern' did God presence Himself. The slightest deflection would have meant that it was only an empty shell without God.

The same is to be noted in the Temple of Solomon and the Temple of Ezekiel's vision.
When it comes to the consummate presentation of that which (Him who) is typified in the Old Testament - the Incarnate Son of God - "Emmanuel, God with us" - again, the Spirit of God takes over and governs all the details of His conception, birth, life, history, works, death, resurrection, etc. See the place of the Holy Spirit in the life of Jesus. God's Son will Himself declare that "the Son can do nothing of [out from] himself, but... the Father" (John 5:19).

After the Person comes

The Corporate Body - The Church.

The Architect is God the Father. The Builder is God the Son. The Custodian and energy is God the Holy Spirit.
Here, again, nothing in conception and planning is left to angels or men. If man interferes, insinuates himself, and tries to organize or run the Church, so much the worse for the man, as the New Testament both shows in results and declares in words. Nothing but confusion, frustration and shame can follow man's hand upon that which exists wholly for the presence of God.

The last chapters of the Bible must be read in the light of all the immediately preceding chapters. There we see the progressive judgment in every realm - beginning with the churches - of everything unsuitable to the presence of the Lord. The end is all that removed and a state - symbolically represented - which is suitable to Him, and "the Lord is there".

What a challenge all this is: to the Christian to "walk in the Spirit"; for the Church and the churches to be governed and sanctified by the Holy Spirit.
The hand of man is a defiled thing. Only "he that hath clean hands, and a pure heart" can "ascend into the hill of the Lord". We may not put our hand on one another for judgment or control. We may not put our hand on the House of God. We may not (like Uzzah) put our hand on the ark. Woe to Uzzah, to Ananias and Sapphira, to Diotrephes, who touch the holy things of the Lord's presence with fleshly hands of natural strength, ambition, and pride!

How safe it is to be where the Lord is if, through the Cross, we are made suitable. How dangerous it is even to draw near without taking off the shoes of association with the cursed world!

These are shorter chapters in the 'Horizon' series, but they are particularly concentrated and must be taken more for intrinsic values than for volume of material. | https://www.austin-sparks.net/english/books/001890.html
By: BAYO AKAMO, Ibadan

The newly promoted Aare Onibon of Ibadanland, Olooye Adegboyega Adegoke, on Friday advocated the involvement of traditional institutions in governance, especially in matters of security.

Speaking with journalists shortly after his promotion from Bada Balogun of Ibadanland to Aare Onibon by the Olubadan of Ibadanland, Oba Lekan Balogun, the new Aare Onibon of Ibadanland said this has become necessary in view of the closeness of traditional institutions to the people.

According to the new Aare Onibon, there is a need for the government to involve traditional rulers in governance by giving them specific roles accommodated in the Nigerian Constitution.

“Traditional rulers are closer to the people than local government chairmen. It is very hard to see anyone who will not recognise their monarchs, but they may not even know their local government chairman”, he said.

The All Progressives Congress (APC) senatorial aspirant pointed out that this would afford them first-hand information on security issues in their various communities, and as such they would know how to address any security challenges that may arise in their domains.

Olooye Adegoke stressed that there is a need for a constitutional amendment by the National Assembly to accommodate roles for traditional rulers in governance, adding, “it is high time the government took security issues more seriously by delegating constitutional roles to traditional rulers”.

The founder of the Adegboyega Adegoke Resource Centre (AARC) maintained that “security is germane to the peaceful coexistence of any community and national development”, adding that to reduce insecurity to the barest minimum, there should be responsibility for traditional rulers across the board, which should be spelt out constitutionally.

Emphasising that security is the backbone of any society, tied to its social, political, economic and cultural growth, the Aare Onibon of Ibadanland stated that the inadequacy of this vital ingredient of development has led to all manner of social ills, including violent crimes such as armed robbery, ritual killings, child trafficking and others.

Attributing the lingering insecurity in society to a lack of job opportunities and a bad economy, and noting that idle hands are the devil's tools, Olooye Adegoke appealed to all well-meaning Ibadan indigenes across the world to use their connections to bring more developmental projects to the city.

On his part, the new Aare Onibon promised to bring more developmental projects to Ibadan to create more employment opportunities for young people in the state. | https://newnigeriannewspaper.com/insecurity-involve-traditional-institutions-in-governance-in-nigeria-adegoke-tasks-govt/
The National Cat Management Strategy Group (NCMSG) is pleased to provide you with the finalised National Cat Management Strategy document. This has been the culmination of three years’ worth of work after embarking on this important journey in August 2014.
We are incredibly cognisant of the strong and disparate emotional responses that discussions around cat management evoke. However, the status quo is not in the best interests of animal welfare, biodiversity or the community.

The options were, and are, to do nothing, or to take a brave step, collaboratively tackle this highly complex and emotionally charged issue, and demonstrate collective leadership in the absence of positive progress in this area. The latter is what the NCMSG set out to do, and it has been a challenging journey. There is no easy silver-bullet solution, but what we have produced is an evidence-based, critically analysed and detailed discussion document on the options currently available to ensure that cats are responsibly owned and cared for and that any potential impacts are mitigated. It is important to note that all the NCMSG members firmly believe that all cats should be treated with respect and compassion and are entitled to a 'life worth living'. This is balanced with an equally important viewpoint: that our unique native biodiversity also warrants protection.

The five national and three government organisations involved in the NCMSG implore you to please read the document in its entirety. It is important that this is done with an open, solutions-focused mind, rather than with pre-conceived bias, and that full impartial consideration is given to the options presented.

The NCMSG looks forward to New Zealanders working together to improve cat welfare, promote responsible cat ownership and mitigate cats' negative impact on wildlife through well-designed and well-managed cat management approaches that are both humane and effective. | http://www.nzcac.org.nz/nzcac/nzcac-resources/nzcac-newsletters/7-blog/83-national-cat-management-strategy-discussion-paper
The utility model relates to a safety early-warning monitoring system, in particular to a gas leakage accident early-warning monitoring system based on the internet of things. The system comprises gas information collectors, an internet of things information transmission platform and a system monitoring center device, wherein the gas information collectors are connected with the system monitoring center device through the internet of things information transmission platform. A danger source is connected with a monitoring server through internet of things technology. The gas information collectors gather information such as gas composition and gas concentration at a monitoring point and transmit it to the system monitoring center device, which processes the gas information data in real time. The internet of things intelligently recognises the devices arranged within it, so that once a gas leakage occurs, the corresponding gas information collector can be located quickly and workers can be warned of the leakage at the monitoring point in time, strengthening the monitoring of production accidents. The gas leakage accident early-warning monitoring system based on the internet of things is therefore applicable to enterprises producing combustible or poisonous gases in the oil, chemical, coal mining and similar industries. |
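As a hypothetical sketch of the data flow described above (not part of the utility model), a monitoring-centre process could receive JSON readings from the gas information collectors and raise an early warning when a concentration threshold is exceeded; the port, field names and threshold below are illustrative assumptions, and only the Python standard library is used.

```python
# Hypothetical sketch (not from the utility model) of the described data flow:
# each gas information collector sends JSON readings to the monitoring centre over
# TCP, and the centre raises an early warning when a concentration threshold is
# exceeded. Addresses, field names and the threshold are illustrative assumptions.
import json
import socketserver

CH4_LIMIT_PPM = 1000  # illustrative alarm threshold for methane

class CollectorHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Each collector sends one JSON object per line,
        # e.g. {"node": "collector-07", "gas": "CH4", "ppm": 1250}
        for line in self.rfile:
            reading = json.loads(line)
            if reading.get("gas") == "CH4" and reading.get("ppm", 0) > CH4_LIMIT_PPM:
                # In a real deployment this would trigger site alarms and notify workers.
                print(f"EARLY WARNING: {reading['node']} reports {reading['ppm']} ppm CH4")

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", 9000), CollectorHandler) as server:
        server.serve_forever()
```

In a real deployment the "internet of things information transmission platform" would typically be a message broker (for example MQTT) rather than raw TCP, but the collector-to-centre alerting logic would be the same.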