content
stringlengths 0
1.88M
| url
stringlengths 0
5.28k
|
---|---|
Greatest Dams of United StatesArticles > Greatest Dams of United States
Greatest Dams of USA:
Dams are constructed for four basic purposes. One is to meet the water need evenly all through the year. The second purpose behind the construction of dams is to control flooding. The third objective of the dams is to generate electricity. The fourth purpose of the construction of the dams is to divert the water flow from one channel to another. Like the rest of the world, the needs and demands of the USA are not different. China is the leading country regarding the largest number of dams in the world. The USA is the second country which has the largest number of dams. According to the report of the American Society of Civil Engineering, there are more than 90,000 large and small dams in the United States. How to find out the value of a dam? The value of a dam is dependent on many factors, like how large is the dam, how tall is the dam, how much water does a dam retain, how much electricity does a dam generate? Keeping in view all these factors, the value of the dam can be found out. Here in this article, we will discuss the tallest, the longest, the one that produces the most electricity, the one that holds back the largest water dams separately. But our point of discussion will remain confined up to the dams of the United States.
(1). The Tallest Dam of the United States:
Oroville Dam is located on the Feather River in the Oroville region of California. This is the tallest dam in the USA, which is more than 770 feet high. The length of the dam is 1.31 miles or 6920 ft/ 2109.216 meters. The construction of the dam was started in 1961 and completed in 1969. The dam was constructed by the California Department of Water Resources. During the period from 1960 to 1970, the state of California built several dams to meet the need for water and energy. The Feather river of Oroville is diverted to California Aqueduct by means of Oroville Dam. The diversion of the river met the need for water for San Joaquin Valley. Before the construction of the dam, California was hit by a devastating flood each year. From 1969, a large part of California State was protected from the attack of yearly flooding. The construction of the dam prevented the California region from damaging property worth 1.3 billion during the years 1987 to 1999. This dam not only reserve water for farming but also a big source of energy. An underground power generating plant namely, Edward Hyatt Pump-Generating Plant. The power plant generates electricity of 28000-megawatt energy. It is a huge gigantic dam. The extraordinary height of the dam adds to the vulnerability of the dam during excess storage. In 2017, the excess water wall allowed flowing through an emergency spillway. The water-flow through the emergency spillway caused erosion to the main embankment of the dam and its spillway. This erosion put a threat to the main wall of the dam. The state government evacuated about 188 thousand people from the nearby habitats. The repairing work of the spillway was again started at the end of 2017 and completed at the end of 2018. The repaired spillway was again opened in April 2019.
(2). The Longest Dam of the United States:
The above-mentioned dam (Oroville Dam) was the tallest dam in the USA. But Cochiti Dam is the No.1 dam with regard to the length of the dam wall. The Cochiti Dam is the longest dam in the United State. The length of the Cochiti Dam is 5.5 miles/29,040 feet. This dam is located in the region of Cochiti Pueblo, New Mexico. This dam was constructed on the Rio Grande River. The height of this dam is about 250 feet. The construction of the dam was started in 1965 and completed in 1973. On one hand, it is one of the longest dams in the world, while on the other hand it is included in the shortest in height dams of the world. Due to its short height, it does not reserve as much water as the world largest water holding dams. In spite of being the longest dam, it is not included in the list of the world’s largest dams. It is the 11th earth-filled dam. The world's largest earth-filled dam is the Tarbela Dam of Pakistan. By limiting the effect of heavy runoff on the surrounding areas, the engineers of this dam provided it a flood control mechanism. The flood control Act allowed the construction of this dam in 1960, thus the construction started in 1965 and continued till its completion in 1973. The resulting Cochiti Lake has a permanent recreation pool, with an intermittent pond in the arm of the Santa Fe River. The remaining capacity of the reservoir, about 672 million cubic meters, is reserved for flood and sediment control.
(3). The Most Power Producing Dam:
Regarding power production, the largest dam in the USA is the Grand Coulee Dam. It is a gravity dam, which is located in Okanogan and Grant Counties of Washington State. The dam was constructed in 1933 to hold back the water of the Columbia River. The completion of the dam took place in 1942. This dam was designed by the Bureau of Reclamation USA in the 1930s. The primary objective of the dam was to control flooding and meet the need of irrigation. But later on, the electric generation became its prime object. The power production of this dam is 21 billion kWh (Kilowatt-hour). This power meets the need of 23 lac households for a year.
(4). The Largest Water Holding Back Dam:
Regarding the lake volume, the Hoover Dam is the largest dam in the USA. This dam is located in the region of Clark County of Nevada and Mohave County of Arizona. This dam is constructed on the Colorado River in 1931. The construction of the dam was completed in 1936. The length of the dam is not too much as that of Cochiti and Oroville Dams. It is just a 1244 feet long dam. The height of the dam is 726 feet. The Hoover Dam sits at the border area between Arizona and Nevada. It is a concrete reinforced dam, in a curved shape. The concave face of the dam wall provides extra strength to it. Colorado river passes through Black Canyon. This dam stretched between the high walls of the Black Canyon to hold back the water of the Colorado River. The wall of the dam encompasses a volume of 3.25 million cubic yards. The width of the wall is 16 feet wide. There is a wide highway from San Francisco to New York.
Read also:
- World Trade Center, All you need to know
- Pros and Cons of the Dams
- Origins and History of Use and Development of Soil Nails in the United States
- Structural Elements of Dams
- What is engineering and why do they need probability and statistics? | https://civilengineeringbible.com/article.php?i=294 |
How to calculate income tax withholding
From the moment a company hires a worker, the latter has the obligation to deduct from the payroll the amounts related to the withholdings that have to be made for Income Tax (IRPF) .
The discounted amounts will be taken into account when the worker has to make the Income Tax return, since they are advances that the company enters, on the part of the worker, to the Treasury, and that they can vary the amount to enter or pay in the declaration annual.
In we help you to know how to calculate income tax withholding .
Is the IRPF withholding amount the same for everyone?
The amount that the company retains the worker as income tax is not fixed, and, therefore, varies depending on the circumstances of the worker. To know how to calculate IRPF withholding, we must take into account:
- Net salary
- Type of contract
- Personal circumstances:
- Age
- Children in charge
- Parents in charge
- Disability or disability of the worker or his dependents
- Marital status: single, married, divorced, etc.
- Compensatory pension to be received
- Payment of alimony to children
- Payment of the compensatory pension to the ex-spouse
- Acquisition of habitual residence
How to calculate IRPF retention
To calculate the IRPF withholding to be applied in the payroll, a percentage must be multiplied to the total annual gross income of the worker. This percentage is defined in the law, depends on the salary of each one and ranges from 24% for salaries of up to € 17, 707 gross per year, up to 45% for salaries of more than € 300, 000.
Thus, if a worker (single, without dependent children or other personal circumstances provided by law) charges 17, 000 euros per year, you must multiply this amount by 0.24 to know that your annual IRPF withholding will be € 4, 080. However, this percentage will change if any of the personal circumstances included in the law apply, such as if you have dependent children, are married or bought a home.
The final amount is prorated in each payroll to prevent the worker having to pay the total amount when making the Income statement .
Indicative table to set the IRPF retention percentage:
Annual salary (€)
Retention
0 - 17.707
24%
17.707 - 33.007
28%
33.007 - 53.407
37%
53.407 - 120, 000
43%
120, 000 - 175, 000
44%
175, 000 - 300, 000
Four. Five%
More than 300, 000
Four. Five%
As you can give countless situations in the personal circumstances of each worker, which will change this percentage, it is recommended that to calculate the final amount to pay for the retention, consult a lawyer specializing in tax law to offer all specific data.
If personal circumstances change throughout the employment contract, the worker must notify the company as soon as possible, as this can benefit the worker and thus apply a lower percentage of retention. The modification of these circumstances must be done by presenting model 145 to the company, which will be responsible for recalculating the new amount to be retained.
How Income Tax withholding affects the income tax return
According to the withholdings made by IRPF in the payroll of the worker can affect the income statement in different ways:
- If the deductions made to the worker have a high load, it is possible that the amount overcharged is returned in the income statement.
- If the withholdings made have a low charge, the worker will probably have to pay the difference when he makes the income tax return.
- The worker does not have to make the Income declaration if the annual gross salary is less than € 22, 000 or € 11, 200 if he provides services for more than one company.
Remuneration in kind and income on account in the IRPF
When we make the declaration of Income Tax, in the section "retributions" we have to record both the cash, that is, those received in money, as well as the remuneration in kind, that is, goods, rights or services that workers use or consume for free or at a lower price than the market price, according to the definition of the same ones that can be found in the Law on Personal Income Tax.
These rewards in kind involve what is called an income on account, similar to the withholdings that the Treasury practices on the monetary payments we receive on our payroll, that is, a kind of advance payment that we make to the Tax Agency for the salary that We have won and we are deducted from the tax base when it comes time to make the Income statement.
However, in the case of income in kind this can not be done, and therefore they always include an income on account, that is, a payment that the employer must make to the Treasury for the remuneration in kind offered to its employees. . Whether or not the employer affects this income on account in the worker's payroll will depend on the agreement that exists in each company.
If the deposit is passed on, it will appear on the worker's payroll, discounting the gross to be received. If it is not passed on, the amount that appears as the value of the compensation in kind is formed by the value of the object itself plus the deposit, which is added to the gross to be received by the worker. | https://en.pasapic.com/5436-how-to-calculate-income-tax-withholding |
RELATED APPLICATIONS
FIELD OF THE INVENTION
BACKGROUND OF THE INVENTION
SUMMARY OF THE INVENTION
DETAILED DESCRIPTION
This application claims priority to Taiwan Application Serial Number 97151695, filed Dec. 31, 2008, which is herein incorporated by reference.
A method for displaying tree structure data and a hand-held electronic device and a computer program product thereof are disclosed, and more particularly, to a hand-held electronic device with a screen, a method for displaying tree structure data on a hand-held electronic device, and a computer program product of the method.
Hand-held electronic devices, and in particular hand-held communication devices, are widely used in daily life and greatly influence the lives of people. To meet different functional demands hand-held communication devices have become smaller and have more functions. The hand-held communication devices, for example personal digital assistant (PDA) phones and smart phones, not only have the traditional communication device functions, but also provide a built-in operating system to perform advanced functions, such as document writing, e-mail transmitting and receiving, internet browsing, and instant messaging software. In other words, the hand-held communication device can not only be used as a telephone, but can also be used as small personal computers with multiple functions. In addition, because of the development of wireless network technology, it is possible to use the hand-held communication devices to perform advanced functions anytime and anywhere. Therefore, hand-held electronic devices have become necessary for modern people who place a great emphasis on time management and efficiency.
FIG. 1
FIG. 2
FIG. 1
FIG. 2
10
10
10
12
14
12
14
12
14
12
Refer to and simultaneously. is a structure diagram showing a function menu with a tree structure. is a structure diagram showing a function menu interface implemented on a conventional hand-held device to display the function menu . The function menu includes node items and sub-items . Each node item includes at least one of the sub-items . In general, data stored in an electronic device, such as node items and sub-items , is usually arranged in a tree structure form because the tree structure form makes data easily and quickly comprehended by users. In hand-held electronic devices, each of the node items and sub-items corresponds to an option. For example, a node item corresponds to a function for displaying the content of “Chapter 2: overview of the SE-CMM”.
12
14
12
14
In a personal computer, the node items and sub-items are usually displayed in a hierarchical way, and the way used by the hand-held electronic device to display the node items and sub-items is the same as that used by a personal computer. Because of the limited size of the hand-held electronic device, the size of the icon of each of the options must be smaller when the options are displayed on the limited screen area of the hand-held electronic device in the hierarchical way. However, if the icon is made too small, the user may not see the name of the option clearly, or the users may easily select the incorrect causing frustration and wasting time.
A hand-held electronic device is provided. The menu interface of the hand-held electronic device can display icons of options with the normal size to help users easily select the correct option and enable users to easily read the names of the options on the screen.
An exemplary method for displaying tree structure data and a computer program product thereof is provided. A menu interface set up via the method can display normal sized icons for the different options to help users easily select the correct option and enable users to easily read the names of the options on the screen.
According to another exemplary hand-held electronic device, the hand-held electronic device includes a menu providing module, a tag providing module, an input module, and a display module. The menu providing module is used to provide tree structure data, wherein the tree structure data includes a node item and a sub-item belong to the node item. The tag providing module is used to provide a tag item and move the tag item. The input module is used to be inputted a control signal by a user to control the tag providing module, wherein the tag providing module moves the tag item in accordance with the control signal. The display module is used to display at least one portion of an item line comprising the node item and the sub-item on the screen. The screen shows the name of the node item when the user moves the tag item to an area next to the sub item.
According to another exemplary method for the displaying tree structure data, at least one portion of an item line including the sub-item and a tag item is firstly displayed on a screen. Then, the tag item is moved. Thereafter, the name of the node item is displayed, when the tag item is next to the sub item.
According to an exemplary computer program product, the computer program product can be loaded by a computer to enable the computer to display the tree structure data.
FIG. 3
FIG. 12
In order to make the illustration of the present disclosure more explicit and complete, the following description is stated with reference to through .
FIG. 3
FIG. 6
FIG. 3
FIG. 4
FIG. 6
100
Refer to and to simultaneously. is a flow chart showing an exemplary method for displaying tree structure data. to are diagrams showing the screen of an exemplary hand-held electronic device.
100
110
10
120
12
14
122
130
132
140
122
132
134
132
132
122
132
12
132
12
12
132
12
132
FIG. 3
FIG. 4
FIG. 6
FIG. 4
FIG. 6
FIG. 5
FIG. 5
First consider the method shown in the flow chart in . In Step data is provided to the system to generate the function menu with a tree structure form. In Step the node items and the sub-items (see to ) are arranged in the line item . In step , a tag item is provided. In step , at the least a portion of the item line and the tag item are displayed on the screen ( to ) of the hand-held electronic device, wherein the tag item is movable when selected. As shown in , the tag item is selected and moved down and the size thereof is enlarged to overlap the portion of the item line . At this time, if the distance between the tag item and one of the node items is smaller than a predetermined distance, or the tag item overlaps one of the node items , the name of the one of the node items is displayed on the screen, for example, displayed on or around the tag item . In the , the name of the node item is displayed on the tag item .
FIG. 5
FIG. 6
FIG. 6
12
132
12
132
14
132
132
14
132
14
132
12
132
12
In the , the name of the node item overlapped by the tag item is “Chapter 3: Using the SE-CMM”, so the tag item displays the name “Chapter 3: Using the SE-CMM” of the node item . In addition, in the , when the item overlapped by the tag item is the sub-item , the tag item displays the name of the tag item including the sub-item overlapped by the tag item . As shown in , the name of the sub-item overlapped by the tag item is “Using the SE-CMM to Support Appraisal”, and it belongs to the node item named “Chapter 3: using the SE-CMM”, therefore the tag item displays the name “Chapter 3: using the SE-CMM” of the node item .
134
122
134
134
122
134
134
122
134
122
134
134
134
132
In addition, when the tag item is moved up, the screen scrolls up to display the other items of the item line on the screen . In a similar way, when the tag item is moved down, the screen scrolls down to display the other items of the item line on the screen . Note that the scrolling of the screen is configured to display the other items of the item line on the screen , so, except for the items of the item line , the items displayed on the screen may stay in their original place when the screen scrolls up or down. For example, the time displayed on the upper right corner of the screen stays at its original places, when the tag item is moved up or down.
10
122
132
132
134
122
134
10
According to the aforementioned, the items of the function menu are arranged in the item line , and the tag item displays the name of the node item which the sub-item belongs to, and when the tag item is moved, the screen scrolls to display the other items of the item line . Because the screen can be scrolled to display the option of the function menu , the size of the option icons do not have to be decreased to make all the options displayed on the screen. Furthermore, the name of the node item which the sub-item belongs to can be displayed on or around the tag item, so the user can quickly find the item he wants via the name displayed on the tag item.
FIG. 7
FIG. 10
FIG. 7
FIG. 8
FIG. 10
200
200
100
200
210
210
212
212
134
132
212
212
134
Refer to to . is a flow chart showing an exemplary method for displaying tree structure data. to are diagrams showing a screen of an exemplary hand-held electronic device. The method is similar to the method , but the difference is in that the method further includes a trigger scrolling positions providing step . In the trigger scrolling positions providing step , trigger scrolling positions a and b are provided on the screen . When the tag item is moved to the position a or b, the screen starts to scroll.
FIG. 8
FIG. 9
132
12
132
212
134
132
212
122
134
As shown in , the tag item initially overlaps the node item named “Using the SE-CMM”. As shown in , when the tag item is moved down to the position a, the screen is then scrolled down until the tag item leaves the position a, and thus the upper portion of the item line is displayed on the screen .
According to the aforementioned, an exemplary method for triggering scrolling of the screen to match people's habit is provided.
100
200
100
200
In addition, the method or can be applied in a computer program product. When a computer (for example a processor of a mobile phone) loads the computer program product, it can perform the method and for displaying tree structure data.
FIG. 4
FIG. 6
FIG. 11
FIG. 11
300
300
310
320
330
340
350
310
310
Refer to to and . is a functional block diagram showing an exemplary hand-held electronic device . The hand-held electronic device includes function modules , a menu providing module , a tag providing module , a display module , and an input module . The function modules are used to provide various functions to users. For example, the function modules may include a sound input module and a wireless communication module, wherein the sound input module is used to receive the sound messages of the users to output a sound information to the wireless communication module, and the wireless communication module is used to transmit a sound signal to a base station according to the sound information and receive another sound signal from the base station to enable the user to talk to a receiver. In other examples, the function modules can be camera modules or blue tooth communication modules.
320
10
12
14
122
330
132
340
122
132
134
The menu providing module is used to provide the function menu having the tree structure form and arrange node options (node items ) and sub-options (sub-item ) in a item line . The tag providing module is used to provide and control the tag item . The display module is used to display a portion of the item line and the tag item on the screen .
350
310
330
132
350
330
132
132
132
12
12
132
132
14
14
132
FIG. 4
FIG. 6
The input module is used to be input a control signal by the user to control the function module and the tag providing module . As shown in and , when the user selects the tag item via the input module , the tag providing module makes the tag item movable for the user and further enlarges the size of the tag item . When the tag item is next to or overlaps on one of the node items of the item line, the name of the one of the node items is displayed on or around the tag item . When the tag item is next to or overlaps on one of the sub-items of the item line, the name of the node item which the one of the sub-items belongs to is displayed on or around the tag item .
132
134
122
134
132
134
134
In addition, when the tag item is moved up, the screen scrolls up to display the other items of the item line on the screen . In a similar way, when the tag item is moved down, the screen scrolls down to display the other items of the item line on the screen.
FIG. 8
FIG. 10
FIG. 12
FIG. 12
400
400
300
400
410
410
212
212
132
410
320
134
a
b
Refer to to and . is a functional block diagram showing an exemplary hand-held electronic device . The hand-held electronic device is similar to the hand-held electronic device , but the difference is in that the hand-held electronic device further includes a trigger scrolling position providing module . The trigger scrolling position providing module is used to provide the trigger scrolling positions and on the screen. When the user moves the tag item to the trigger scrolling positions, the trigger scrolling position providing module controls the menu providing module to scroll the screen .
FIG. 8
FIG. 9
FIG. 10
132
12
132
212
134
132
212
122
132
212
134
132
212
122
a,
a,
b,
b,
For example, as shown in , the tag item initially overlaps the node item named “Using the SE-CMM”. When the tag item is moved down to the position as shown in , the screen scrolls down until the tag item leaves the position and thus the items of the lower half portion of the item line is displayed. In a similar way, when the tag item is moved up to the position as shown in , the screen scrolls up until the tag item leaves the position and thus the items of the upper half portion of the item line is displayed.
350
134
In addition, it is noted that the input module can be a touch detection module, such as a touch panel, a touch pad, and a trackball. When the user controls the touch detection module to touch the options displayed on the screen , the touch detection module controls (moves or selects) the items according to the motion (drawing or clicking) of the user.
As is understood by a person skilled in the art, the foregoing examples of the present disclosure are not a limitation. It is intended to cover various modifications and similar arrangements included within the spirit and scope of the appended claims, the scope of which should be accorded the broadest interpretation so as to encompass all such modifications and similar structures.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing aspects and many of the attendant advantages of this disclosure will become more readily appreciated as the same becomes better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein:
FIG. 1
is a structure diagram showing a function menu with a tree structure;
FIG. 2
is a structure diagram showing a function menu interface used implemented on a conventional hand-held device to display the function menu;
FIG. 3
is a flow chart showing an exemplary method to display the tree structure data;
FIG. 4
FIG. 6
to are diagrams showing the screen of an exemplary hand-held electronic device;
FIG. 7
is a flow chart showing an exemplary method for displaying the tree structure data;
FIG. 11
is a functional block diagram showing an exemplary hand-held electronic device; and
FIG. 12
is a functional block diagram showing an exemplary hand-held electronic device. | |
This blog may irritate readers of different faiths, but I am going out on a limb on love. Yes, love is the great gift of the soul. Yet, we use the term “love” in so many different ways, but that is because there are different types of love. Love as a verb and then love has a state of being which seems to be the same – but isn’t.
I love a lot of things. I love food, flowers and a good book. l love my husband and I love Evernote to keep my thoughts somewhat organized. But “things” can be easier to love than people.
There are agape, philos, storge and eros types of love. Above and beyond loving everything from a nice wine, a home tool that improves efficiency or flowers that are doing exceedingly well in the Boesen meadow, I shoot for Philos. Philos love tells you it’s comforting and safe to be around the person. Perhaps when we say “I just LOOOVE XYZ, don’t you?” perhaps we are really saying “XYZ accepts me and makes feel safe to share me.”
To be loved, be loveable. There can be people you don’t want to be close to and who don’t add value to your life other than to make life “interesting.” I have times I know I am not being lovable and have to catch myself and rethink what I just did. In thinking how I can improve, these six ways to improve your loveability came to mind.
Six Ways to Improve Your Loveability
1) Be a help, not a hindrance. Ask for permission. Ask how you can best help. In however you are being helpful, demonstrate empathy. For 20 seconds, yes 20 seconds, sit back and think about how the other person is feeling in the situation, or how they may feel towards what comes out of your mouth or what you give them. What does 20 seconds feel like? Try this easy on line tool.
2) See things as interesting, not as just different. My mother used to tell me, “when you are trying something new and you don’t like it, just say “this is interesting,” I think she was trying to show me how to eat foods in public that I didn’t like. To have grace and poise and not hurt someone else’s feelings or worse, make a scene. But seeing things as “interesting” even if they aren’t my cup of tea has kept my eyes, ears and mind open through life. Even if I don’t try it, I can say life is “interesting” and see how things, and people, add quality to my life.
3) Be the sage, not the expert. No one likes a know-it-all but everyone appreciates wisdom, which to me, is knowledge well-placed and well-delivered. Be the sage.
4) Ask for help but don’t drain your resources. Western society puts independence on a pedestal to worship but the reality is, I don’t know anyone who doesn’t appreciate being asked to help. Unless they are truly narcissistic – which is possible. I have known them and so have you. Societies are built on helping and well, communing. Commune a little. Receive and give.
5) Find Your Scarlett and leave yesterday behind you. Yes learn from the past, but don’t hang out there. There is a lot to be said for, “After all, tomorrow is – another day!”
6) Show appreciation. Thank you for taking 60 seconds to read this blog!
How do you sustain your loveability?
Lisa Boesen, MAOM, is a Certified Master Coach and HR Professional. She enjoys working clients who want to work through barriers, improve resilience and approach opportunities with renewed energy and curiosity. To request more information or a free consultation, click here. | http://www.lisaboesen.com/six-ways-to-improve-your-loveability/ |
Use of genetic markers and gene-diet interactions for interrogating population-level causal influences of diet on health
Abstract
Differences in diet appear to contribute substantially to the burden of disease in populations, and therefore changes in diet could lead to major improvements in public health. This is predicated on the reliable identification of causal effects of nutrition on health, and unfortunately nutritional epidemiology has deficiencies in terms of identifying these. This is reflected in the many cases where observational studies have suggested that a nutritional factor is protective against disease, and randomized controlled trials have failed to verify this. The use of genetic variants as proxy measures of nutritional exposure—an application of the Mendelian randomization principle—can contribute to strengthening causal inference in this field. Genetic variants are not subject to bias due to reverse causation (disease processes influencing exposure, rather than vice versa) or recall bias, and if obvious precautions are applied are not influenced by confounding or attenuation by errors. This is illustrated in the case of epidemiological studies of alcohol intake and various health outcomes, through the use of genetic variants related to alcohol metabolism (in ALDH2 and ADH1B). Examples from other areas of nutritional epidemiology and of the informative nature of gene–environment interactions interpreted within the Mendelian randomization framework are presented, and the potential limitations of the approach addressed.
Keywords
Introduction
A range of classical epidemiological studies—including migration studies and the analysis of secular trends and ecological differences in disease rates—demonstrate that for most common complex diseases environmentally modifiable risk factors account for much of the burden of disease. Twin studies—that by definition exclude time trends and geographical differences in disease risk and thus provide lower (and often substantially lower) estimates of the modifiable aspects of disease risk than apply in practice—support this contention [1]. Identifying modifiable causes of disease, which can be then manipulated to improve individual and public health, is thus a key task for epidemiology. In this paper, I will argue that, paradoxically, incorporating germline genetic variants—which are essentially fixed—into epidemiological studies can strengthen evidence regarding the undeniably major role of modifiable risk processes in determining population health.
There are, however, important limitations to the ability of observational studies to reliably identify causes of disease, which have been particularly evident in the nutrition field. Consider the following two examples, from many that could be presented. Several observational studies suggested that the use of vitamin E supplements was associated with a reduced risk of coronary heart disease, two of the most influential coming from the Health Professionals Follow-Up Study [2] and the Nurses’ Health Study [3], both published in the New England Journal of Medicine in 1993. Findings from one of these studies are presented in Fig. 1, where it can be seen that even short-term use of vitamin E supplements was associated with reduced coronary heart disease risk (CHD), which persisted after adjustment for confounding factors. Nearly half of US adults are taking either vitamin E supplements or multivitamin/multimineral supplements that generally contain vitamin E [4], and data from the three available time points suggest there has been a particular increase in vitamin E use following 1993 [5], possibly consequent upon the publication of the two observational studies mentioned above, which have received over 3,000 citations between them since publication. The apparently strong observational evidence with respect to vitamin E and reduced CHD risk, which may have influenced the very high current use of vitamin E supplements in developed countries, was unfortunately not realised in randomized controlled trials (Fig. 2), in which no benefit from vitamin E supplementation use is seen. In this example, it is important to note that the observational studies and the randomized controlled trials were testing precisely the same exposure—short-term vitamin E supplement use—and yet yielded very different findings with respect to the apparent influence on risk.
Vitamin E supplement use and risk of CHD in two observational studies [2, 3] and in a meta-analysis of RCTs [109]
A similar scenario has been played out in regard to vitamin C. In 2001, the Lancet published an observational study demonstrating an inverse association between circulating vitamin C levels and incident coronary heart disease [6]. The left-hand side of Fig. 3 summarises these data, presenting the relative risk for 15.7 μmol/l higher plasma vitamin C level, assuming a log-linear association. As can be seen, adjustment for confounders had little impact on this association. However, a large-scale randomized controlled trial, the Heart Protection Study, examined the effect of a supplement that increased average plasma vitamin C levels by 15.7 μmol/l. In this study, randomization to the supplement was associated with no decrement in coronary heart disease risk [7].
What underlies the discrepancy between these findings? One possibility is that there is considerable confounding between vitamin C levels and other exposures that could increase the risk of coronary heart disease. In the British Women’s Heart and Health study (BWHHS), for example, women with higher plasma vitamin C levels were less likely to be in a manual social class, have no car access, be a smoker or be obese and more likely to exercise, be on a low-fat diet, have a daily alcoholic drink, and be tall [8]. Furthermore for these women in their 60s and 70s those with higher plasma vitamin C levels were less likely to have come from a home many decades ago in which the head of household was in a manual job, or had no bathroom or hot water, or within which they had to share a bedroom. They were also less likely to have limited educational attainment. In short, a substantial amount of confounding by factors from across the life course that predict elevated risk of coronary heart disease was seen.
In the BWHHS, 15.7 mmol/l higher plasma vitamin C level was associated with a relative risk of incident coronary heart disease of 0.88 (95% CI 0.80–0.97), in the same direction as the estimates seen in the observational study summarized in Fig. 3. When adjusted for the same confounders as were adjusted for in the observational study reported in Fig. 3, the estimate changed very little—to 0.90 (95% CI 0.82–0.99). When additional adjustment for confounders acting across the life course was made, considerable attenuation was seen, with a residual relative risk of 0.95 (95% CI 0.85–1.05) [9]. It is obvious that given inevitable amounts of measurement imprecision in the confounders, or a limited number of missing unmeasured confounders, the residual association is essentially null and close to the finding of the randomized controlled trial. Most studies have more limited information on potential confounders than is available in the BWHHS, and in other fields we may know less about the confounding factors we should measure. In these cases, inferences drawn from observational epidemiological studies may be seriously misleading. As the major and compelling rationale for doing these observational studies is to underpin public health prevention strategies, their repeated failures are a major concern for public health policy makers, researchers and funders. Whilst sophisticated methods of taking measurement error into account, including measurement error in confounders, have been introduced into nutritional epidemiology [10, 11, 12], they cannot guarantee that observational study effects are reliable estimates of underlying causal effects [13, 14].
Other processes in addition to confounding can generate robust, but non-causal, associations in observational studies. Reverse causation—where the disease influences the apparent exposure, rather than vice versa, may generate strong and replicable associations. For example, many studies have found that people with low circulating cholesterol levels are at increased risk of several cancers, including colon cancer. If causal, this is an important association as it might mean that efforts to lower cholesterol levels would increase the risk of cancer. However, it is possible that the early stages of cancer may, many years before diagnosis or death, lead to a lowering in cholesterol levels, rather than low cholesterol levels increasing the risk of cancer. Reverse causation can also occur through behavioural processes—for example, people with early stages and symptoms of cardiovascular disease may reduce their consumption of alcohol, which would generate a situation in which alcohol intake appears to protect against cardiovascular disease. A form of reverse causation can also occur through reporting bias, with the presence of disease influencing reporting disposition. In retrospective case–control studies, people with the disease under investigation may report on their prior exposure history in a different way than do controls—perhaps because the former will think harder about potential reasons to account for why they have developed the disease.
The problems of confounding and bias discussed above relate to the production of associations in observational studies that are not reliable indicators of the true direction of causal associations. A separate issue is that the strength of associations between causal risk factors and disease in observational studies will generally be underestimated due to random measurement imprecision in indexing the exposure. A century ago Charles Spearman demonstrated mathematically how such measurement imprecision would lead to what he termed the ‘attenuation by errors’ of associations [15, 16]. This has more latterly been renamed ‘regression dilution bias’.
Observational studies in the nutritional epidemiology field can and do produce findings that either spuriously enhance or downgrade estimates of causal associations between modifiable exposures and disease. This has serious consequences for the appropriateness of interventions that aim to reduce disease risk in populations. It is for these reasons that alternative approaches—including those within the Mendelian randomization framework—need to be applied.
Background to Mendelian randomization
The basic principle utilized in the Mendelian randomization approach is that if genetic variants either alter the level of, or mirror the biological effects of, a modifiable environmental exposure that itself alters disease risk, then these genetic variants should be related to disease risk to the extent predicted by their influence on exposure to the risk factor. Common genetic polymorphisms that have a well-characterized biological function (or are markers for such variants) can therefore be utilized to study the effect of a suspected environmental exposure on disease risk [17, 18, 19, 20, 21]. The variants should not have an association with the disease outcome except through their link with the modifiable risk process of interest.
It may seem counter intuitive to study genetic variants as proxies for environmental exposures rather than measure the exposures themselves. However, there are several crucial advantages of utilizing functional genetic variants (or their markers) in this manner, which relate to the problems with observational studies outlined above. First, unlike environmental exposures, genetic variants are not generally associated with the wide range of behavioural, social and physiological factors that can confound associations. This means that if a genetic variant is used as a proxy for an environmentally modifiable exposure, it is unlikely to be confounded in the way that direct measures of the exposure will be. Further, aside from the effects of population structure [22], such variants will not be associated with other genetic variants, except through linkage disequilibrium (the association of alleles located close together on a chromosome).
Second, inferences drawn from observational studies may be subject to bias due to reverse causation. Disease processes may influence exposure levels such as alcohol intake, or measures of intermediate phenotypes, such as cholesterol levels and C-reactive protein. However, germline genetic variants associated with average alcohol intake or circulating levels of intermediate phenotypes will not be influenced by the onset of disease. This will also be true with respect to reporting bias generated by knowledge of disease status in case–control studies, or of differential reporting bias in any study design.
Finally, a genetic variant will indicate long-term levels of exposure, and, if the variant is considered to be a proxy for such exposure, it will not suffer from the measurement error inherent in phenotypes that have high levels of variability. For example, differences between groups defined by cholesterol level–related genotype will, over a long period, reflect the cumulative differences in absolute cholesterol levels between the groups. For individuals, blood cholesterol is variable over time, and the use of single measures of cholesterol will underestimate the true strength of association between cholesterol and, for instance, coronary heart disease. Indeed, use of the Mendelian randomization approach predicts a strength of association that is in line with randomized controlled trial findings of effects of cholesterol lowering, when the increasing benefits seen over the relatively short trial period are projected to the expectation for differences over a lifetime [18]. A particular strength of Mendelian randomization approaches is that genetic variants generally proxy for long-term differences in exposure levels. For intermediate phenotypes (circulating cholesterol or C-reactive protein levels), genetic variants tend to be associated with differences of a similar order of magnitude throughout life. For some behavioural factors, such as alcohol intake, associations will only emerge at the stage of life when the behaviour is instigated.
In the Mendelian randomization framework, the associations of genotype with outcomes are of interest because of the strengthened inference they allow about the action of the environmental modifiable risk factors that the genotypes proxy for, rather than what they say about genetic mechanisms per se. Mendelian randomization studies are aimed at informing strategies to reduce disease risk through influencing the non-genetic component of modifiable risk processes.
The principle of Mendelian randomization relies on the basic (but approximate) laws of Mendelian genetics. If the probability that a postmeiotic germ cell that has received any particular allele at segregation contributes to a viable concepts is independent of environment (following from Mendel’s first law), and if genetic variants sort independently (following from Mendel’s second law), then at a population level these variants will not be associated with the confounding factors that generally distort conventional observational studies. Empirical evidence that there is lack of confounding of genetic variants with factors that confound exposures in conventional observational epidemiological studies comes from several sources. For example, consider the virtually identical allele frequencies in the British 1958 birth cohort and British blood donors [23]. Blood donors are clearly a very selected sample of the population, whereas the 1958 birth cohort comprised all births in 1 week in Britain with minimal selection bias. Blood donors and the general population sample would differ considerably with respect to the behavioural, socio-economic and physiological risk factors that are often the confounding factors in observational epidemiological studies. However, they hardly differ in terms of allele frequencies. Similarly, we have demonstrated the lack of association between a range of SNPs of known phenotypic effects and nearly 100 socio-cultural, behavioural and biological risk factors for disease [24].
Mendelian randomization and nutrition-related exposures
The principle of using genetic variation to proxy for a modifiable exposure was explicitly applied in observational studies from the 1960s, with a series of studies that utilized genetically–determined lactase persistence as an indicator of milk intake, and used this marker to inform evidence regarding the effect of consuming milk on several health-related outcomes [25, 26, 27]. The approach was hypothetically proposed for investigating whether low circulating cholesterol levels causally influenced cancer risk by Martijn Katan in 1986 [28]. The term Mendelian randomization was introduced by Richard Gray and Keith Wheatley in 1991 [29], in the context of an innovative genetically informed observational approach to assess the effects of bone marrow transplantation in the treatment of childhood acute myeloid leukaemia. More recently, the term has been widely used in discussions of observational epidemiological studies [17, 30, 31, 32, 33]. Further discussion of the origins of this approach is given elsewhere [34], and recent reviews have dealt explicitly with the application of Mendelian randomization within nutritional epidemiology [35, 36].
There are several categories of inference that can be drawn from studies utilizing the Mendelian randomization approach. In the most direct forms, genetic variants can be related to the probability or level of exposure (“exposure propensity”) or to intermediate phenotypes believed to influence disease risk. Less direct evidence can come from genetic variant-disease associations that indicate that a particular biological pathway may be of importance, perhaps because the variants modify the effects of environmental exposures [17, 18, 21, 37, 38]. I illustrate some of these categories within investigations of the effects of alcohol on various health outcomes.
Alcohol intake and blood pressure
The consequences of alcohol drinking for health range from the well established (effects on liver cirrhosis and accidents) to the uncertain (coronary heart disease, depression and dementia). For example, the possible protective effect of moderate alcohol consumption on coronary heart disease (CHD) risk remains highly controversial [39, 40, 41]. Non-drinkers may be at a higher risk of CHD because health problems (perhaps induced by previous alcohol abuse) dissuade them from drinking [42]. In addition to this form of reverse causation, confounding could play a role, with non-drinkers being more likely to display an adverse profile of socioeconomic or other behavioural risk factors for CHD. Alternatively, alcohol may have a direct biological effect that lessens the risk of CHD—for example by increasing the levels of protective high-density lipoprotein (HDL) cholesterol [43]. It is, however, unlikely that an RCT of differential levels of alcohol intake, adequate to test whether there is a protective effect of alcohol on CHD events, will ever be carried out.
Alcohol is oxidized to acetaldehyde, which in turn is oxidized by aldehyde dehydrogenases (ALDHs) to acetate. Half of Japanese people are heterozygotes or homozygotes for a null variant of ALDH2, and peak blood acetaldehyde concentrations post alcohol challenge are 18 times and 5 times higher, respectively, among homozygous null variant and heterozygous individuals compared with homozygous wild-type individuals [44]. This renders the consumption of alcohol unpleasant through inducing facial flushing, palpitations, drowsiness and other symptoms, and there are very considerable differences in alcohol consumption according to genotype. The principles of Mendelian randomization are seen to apply—two factors that would be expected to be associated with alcohol consumption, age and cigarette smoking, which would confound conventional observational associations between alcohol and disease, are not related to genotype despite the strong association of genotype with alcohol consumption [45].
It would be expected that ALDH2 genotype influences diseases known to be related to alcohol consumption and as proof of principle it has been shown that ALDH2 null variant homozygosity—associated with low alcohol consumption—is indeed related to a lower risk of liver cirrhosis [46]. Considerable evidence, including data from short-term randomized controlled trials, suggests that alcohol increases HDL cholesterol levels [47, 48] (which should protect against CHD). In line with this, ALDH2 genotype is strongly associated with HDL cholesterol in the expected direction [45]. With respect to blood pressure, observational evidence suggests that long-term alcohol intake produces an increased risk of hypertension and higher prevailing blood pressure levels. However the results from different studies vary and there is clearly a very large degree of potential confounding between alcohol and other exposures that would influence blood pressure. As in the case of vitamin E intake and coronary heart disease discussed earlier, we could be looking at a confounded rather than a causal association. Indeed evidence of controversy in this area is reflected by newspaper coverage of a recent study suggesting that moderate alcohol consumption has beneficial effects, even for hypertensive men [49], with headlines like “Moderate drinking may help men with high blood pressure”.
Evidence regarding the causal nature of the association of alcohol drinking with blood pressure can come from studies of ALDH2 genotype and blood pressure. A meta-analysis of such studies suggests there is indeed a substantial positive effect of alcohol on blood pressure [50]. As shown in Fig. 4, alcohol consumption is strongly related to genotype among men, and despite higher levels of overall alcohol consumption in some studies compared with others the shape of the association remains similar. Among women, however, who drink very little compared to men, there is no evidence of association between drinking and genotype. Figure 5 demonstrates that men who are homozygous for the wild type have nearly two and half times the risk of hypertension than men who are homozygous for the null variant. Heterozygous men who drink an intermediate amount of alcohol have a more modest elevated risk of hypertension compared to men who are homozygous for the null variant. Thus, a dose–response association of hypertension and genotype is seen, in line with the dose–response association between genotype and alcohol intake. Among men homozygous for the null variant, who drink considerably less alcohol than those homozygous for the wild type, systolic and diastolic blood pressures are considerably lower. By contrast, among women, for whom genotype is unrelated to alcohol intake, there is no association between genotype and blood pressure. The differential genotype—blood pressure associations in men and women suggest that there is no other mechanism linking genotype and blood pressure than that relating to alcohol intake. If alternative pathways existed, both men and women would be expected to have the same genotype–blood pressure association.
In this example, the interaction is between a genetic variant and gender. Gender indicates substantial differences in alcohol consumption, which lead to the genotype being strongly associated with alcohol consumption in one group (males), but not associated in the other group (females), because of very low levels of alcohol consumption, irrespective of genotype, among the latter group. The power of this interaction is that it indicates that it is the association with alcohol intake and not any other aspects of the function of the genotype that is influencing blood pressure. If it were due to a pleiotropic effect of the genetic variation then the association between genotype and blood pressure would be seen for women as well as men.
Alcohol and illegal substance use: testing the “gateway hypothesis”
In many contexts, people who drink alcohol manifest higher rates of illegal substance use. This could reflect common social and environmental factors that increase uptake of several behaviours, or underlying genetic vulnerability factors. An alternative is the “gateway hypothesis” that postulates that alcohol use itself increases liability to initiate and maintain non-alcohol substance use [51, 52, 53]. The Mendelian randomization approach has been applied in a study of East Asian Americans, all born in Korea but living in the United States from infancy, among whom ALDH2 status was associated with alcohol use and alcohol use was associated with tobacco, marijuana, and other illegal drug use. ALDH2 variation was not robustly associated with non-alcohol substance use, however, which was taken to provide evidence against the “gateway hypothesis” [51].
The influence of high levels of alcohol intake by pregnant women on the health and development of their offspring is well recognized for very high levels of intake, in the form of foetal alcohol syndrome [54]. However, the influence outside of this extreme situation is less easy to assess, particularly as higher levels of alcohol intake will be related to a wide array of potential socio-cultural, behavioural and environmental confounding factors. Furthermore, there may be systematic bias in how mothers report alcohol intake during pregnancy, which could distort associations with health outcomes. Therefore, outside of the case of very high alcohol intake by mothers, it is difficult to establish a causal link between maternal alcohol intake and offspring developmental characteristics. Some studies have approached this in ways that can be interpreted within the Mendelian randomization framework by investigating alcohol-metabolizing genotypes in mothers and offspring outcomes.
Studies have generally utilized a variant in the alcohol dehydrogenase gene (ADH1B*3 allele). Alcohol dehydrogenase metabolises alcohol to acetaldehyde and the ADH1B variant influences the rate of such metabolism. The ADH1B*3 variant has a reasonable prevalence among African Americans and is related to faster alcohol metabolism. This can be associated with a lower level of drinking, possibly because the faster metabolism leads to a more rapid spike in acetaldehyde, with its aversive effects. At a given level of drinking, faster metabolism will clear blood alcohol more rapidly, so less high levels will be reached and these will more quickly return to low levels. Both of these processes, if occurring in the mother, would protect the foetus from the effects of alcohol. Some studies have selected mothers who have a universally high level of alcohol consumption and among these mothers the alcohol-metabolizing genotypes will relate to alcohol levels that could have a toxic effect on the developing foetus, but not to their drinking, which is universally high. In this circumstance, the genotypic differences will mimic the differences in level of alcohol intake with regard to the foetal exposure to maternal circulating alcohol. Although sample sizes have been low and the analysis strategies not optimal, studies applying this approach provide some evidence to support the influence of maternal genotype, and thus of alcohol, on offspring outcomes [54, 55, 56]. Studies that have been able to analyse both maternal genotype and foetal genotype find that it is the maternal genotype that is related to offspring outcomes, as anticipated if the crucial exposure related to maternal alcohol intake and alcohol levels.
As in other examples of Mendelian randomization, these studies are of relevance because they provide evidence of the influence of maternal alcohol levels on offspring development, rather than because they highlight a particular maternal genotype that is of importance. In the absence of alcohol drinking, the maternal genotype would presumably have no influence on offspring outcomes. Studies utilizing maternal genotype as a proxy for environmentally modifiable influences on the intrauterine environment can be analysed in a variety of ways. First, the mothers of offspring with a particular outcome can be compared to a control group of mothers who have offspring without the outcome, in a conventional case–control design, but with the mother as the exposed individual (or control) rather than the offspring with the particular health outcome (or the control offspring). Fathers could serve as a control group when autosomal genetic variants are being studied. If the exposure is mediated by the mother, maternal genotype, rather than offspring genotype, will be the appropriate exposure indicator. Clearly, maternal and offspring genotype are associated, but conditional on each other, it should be the maternal genotype that shows the association with the health outcome among the offspring. Indeed, in theory it would be possible to simply compare genotype distributions of mother and offspring, with a higher prevalence among mothers providing evidence that maternal genotype, through an intrauterine pathway, is of importance. However, the statistical power of such an approach is low, and an external control group, whether fathers or women who have offspring without the health outcome, is generally preferable.
Other examples of Mendelian randomization in relation to nutritional exposures
With respect to exposure propensity Mendelian randomization can be applied to milk consumption (through use of genetic variants related to lactase persistence), although given the low strength of association of such genetic variation and milk consumption sample sizes need to be large [57]. Molecular genetic variation in taste receptors relates to different patterns of dietary intake, in particular with respect to bitter taste perception and cruciferous vegetable intake [58]; however, differences in taste are likely to be related to a range of dietary differences and therefore do not serve as specific proxies for any particular component of diet.
There is considerably greater potential for the application of Mendelian randomization in testing the causal nature of the associations observed between nutritionally influenced intermediate phenotypes and disease outcomes. This can provide good evidence on the influence of nutritional factors on disease. For example, many studies demonstrate robust effects of differences in dietary fat intake on circulating cholesterol levels, and Mendelian randomization studies demonstrate that genetic variants associated with higher cholesterol levels are associated with higher risk of coronary heart disease [38]. This proof-of-principle example confirms what has been demonstrated in randomized controlled trials of cholesterol lowering through the use of statins, that cholesterol levels are causally related to coronary heart disease risk. The implication is that various methods of modifying cholesterol levels, such as dietary changes, are likely to influence coronary heart disease risk, although of course there could be other influences of the dietary changes that counterbalance such an effect.
As indicated earlier, there is considerable interest in the possibility that circulating antioxidants may protect against various disease states, and therefore molecular genetic variants associated with different levels of circulating antioxidants can be utilized to determine if these associations are causal. For example a variant in the SLC23A1 gene, which codes for the Sodium Dependent Vitamin C Transporter protein 1 (SVCT1), is associated with a reasonably large difference in circulating vitamin C levels [59]. This can be utilized to test whether the apparent protective effects of higher circulating vitamin C levels against a variety of adverse health outcomes are causal. It would be expected that higher dietary intake of vitamin C—that results in higher circulating levels—would reduce the risk of these adverse health outcomes to the extent predicted by any causal associations identified using the Mendelian randomization approach. Similarly, molecular genetic variation related to circulating α-tocopherol [60] and carotenoids [61] can be utilized to elucidate the causal effects of these factors.
Another example of a nutritionally influenced intermediate phenotype is seen in studies of the association of high body mass index (BMI) and a variety of cardiovascular risk factors. A variant in the FTO gene is robustly associated with differences in BMI, and as shown in Fig. 6FTO, variation predicts risk factor levels to the degree expected, given its effect on BMI and a causal association between BMI and these risk factors [62]. With considerably greater statistical power, the causal association of BMI on blood pressure level and hypertension has been demonstrated [63] and been shown to persist into old age, whilst the observational associations (perhaps due to a greater degree of confounding and disease-related weight loss) attenuate with age. A causal nature for the positive association between body mass index and bone mineral density—possibly responsible for the protective effect of greater body mass index on fracture risk—has also been suggested utilizing this approach [64].
The observed effects of FTO variation on metabolic traits are as predicted by the associations of body mass index with the same metabolic traits [62]
Mendelian randomization and randomized controlled trials
RCTs are clearly the definitive means of obtaining evidence on the effects of modifying disease risk processes. There are similarities in the logical structure of RCTs and Mendelian randomization, relating to the unconfounded nature of exposures for which genetic variants serve as proxies (analogous to the unconfounded nature of a randomized intervention), the impossibility of reverse causation as an influence on exposure-outcome associations in both Mendelian randomization and RCT settings, and the importance of intention to treat analyses—i.e., analysis by group defined by genetic variant, irrespective of associations between the genetic variant and the exposure for which this is a proxy for any particular individual.
The analogy with RCTs is also useful with respect to one objection that has been raised in conjunction with Mendelian randomization studies. This is that the environmentally modifiable exposure for which genetic variants serve as proxies (such as alcohol intake) is influenced by many other factors in addition to the genetic variants [65]. This is of course true. However, consider an RCT of blood pressure–lowering medication. Blood pressure is mainly influenced by factors other than taking blood pressure lowering medication—obesity, alcohol intake, salt consumption and other dietary factors, smoking, exercise, physical fitness, genetic factors and early-life developmental influences are all of importance. However, the randomization that occurs in trials ensures that these factors are balanced between the groups that receive the blood pressure lowering medication and those that do not. Thus, the fact that many other factors are related to the modifiable exposure does not compromise the power of RCTs; neither does it diminish the strength of Mendelian randomization designs. A related objection is that the genetic variants often explain only a trivial proportion of the variance in the environmentally modifiable risk factor for which the genetic variants are surrogate variables [66]. Again, consider an RCT of blood pressure-lowering medication, where 50% of participants receive the medication and 50% received a placebo. If the antihypertensive therapy reduced blood pressure by a quarter of a standard deviation (i.e., a 5 mmHg reduction in systolic blood pressure, given systolic blood pressure has a standard deviation of 20 mmHg in the population) then within the whole study group, treatment assignment (i.e., antihypertensive use vs. placebo) will explain 5/202 = 1.25% of the variance. In the example of ALDH2 variation and alcohol, the genetic variant explains about 2% of the variance in alcohol intake in the largest study available on this issue [45]. As can be seen, the quantitative association of genetic variants as instruments can be similar to that of randomized treatments with respect to biological processes that such treatments modify. Genetic variants are often as strong—if not stronger—predictors of unconfounded differences in exposures as are the randomized treatments in RCTs. The use of haplotypes or multiple independent genetic variants at different loci related to the exposure of interest can also be used to increase statistical power.
Mendelian randomization and instrumental variable approaches
In addition to the analogy with RCTs, Mendelian randomization can also be likened to instrumental variable approaches that have been heavily utilized in econometrics and social science, although rather less so in epidemiology. In an instrumental variable approach, the instrument is a variable that is only related to the outcome through its association with the modifiable exposure of interest. The instrument is not related to confounding factors nor is its assessment biased in a manner that would generate a spurious association with the outcome. Furthermore, the instrument will not be influenced by the development of the outcome (i.e., there will be no reverse causation). The development of instrumental variable methods within econometrics, in particular, has led to a sophisticated suite of statistical methods for estimating causal effects, and these have now been applied within Mendelian randomization studies [20, 63, 67]. The parallels between Mendelian randomization and instrumental variable approaches are discussed in more detail elsewhere [20, 68]. The instrumental variable method allows for the estimation of the causal effect size of the modifiable environmental exposure of interest and the outcome, together with estimates of the precision of the effect. Thus, in the example of alcohol intake (indexed by ALDH2 genotype) and blood pressure, it is possible to utilize the joint associations of ALDH2 genotype and alcohol intake and ALDH2 genotype and blood pressure to estimate the causal influence of alcohol intake on blood pressure. There are convenient rules of thumb, such as the rule that the first stage F test should be over 10 for an instrument to be of adequate strength, which can be adopted from the econometrics field, in which instrumental variables methods have been well developed [69], and applied to the Mendelian randomization setting.
Mendelian randomization is one way in which genetic epidemiology can inform understanding about environmental determinants of disease. A more conventional approach to the joint study of genes and environment has been to study interactions between environmental exposures and genotype [70, 71, 72]. From epidemiological and Mendelian randomization perspectives, several issues arise with gene–environment interactions.
The most reliable findings in genetic association studies relate to the main effects of polymorphisms on disease risk [32]. The power to detect meaningful gene–environment interaction is low [73], with the result being that there are a large number of reports of spurious gene–environment interactions in the medical literature [74]. The presence or absence of statistical interactions depends upon the scale (e.g., linear or logarithmic with respect to the exposure-disease outcome association) and the meaning of observed deviation from either an additive or multiplicative model is not clear. Furthermore, the biological implications of interactions (however defined) are generally uncertain [75]. Mendelian randomization is most powerful when studying modifiable exposures that are difficult to measure and/or considerably confounded, such as dietary factors. Given measurement error—particularly if this is differential with respect to other factors influencing disease risk—interactions are both difficult to detect and often misleading when, apparently, they are found [32].
Given these caveats, gene-by-environment interactions can be informative with respect to both cause and mechanism of disease. This can be demonstrated with respect to the investigation of alcohol as a potential cause of head and neck and oesophageal cancer. For these cancers, alcohol intake appears to increase the risk, although some have questioned the importance of its role [76].
A rare variant in the alcohol dehydrogenase 1B gene has been shown to be associated with lower levels of alcohol intake [77], and this same variant provides substantial protection against the risk of head and neck cancer [78]. If this association was due to the influence of alcohol consumption, it would be expected that no genotypic effect would be seen within never drinkers, and this is indeed what is seen (Fig. 7, top panel). Thus, this qualitative gene-by-environment interaction—of an effect of genotype in alcohol consumers and no effect in never drinkers—supports the role of alcohol consumption in increasing the risk of head and neck cancer.
In relation to ALDH2 genotype, a meta-analysis of studies of its association with oesophageal cancer risk [79] found that people who are homozygous for the null variant, who therefore consume considerably less alcohol, have a greatly reduced risk of oesophageal cancer. The reduction in risk is close to that predicted from the size of effect of genotype on alcohol consumption and the dose–response association of alcohol and oesophageal cancer risk [80]. A similar picture is seen when head and neck cancer is the outcome [81].
Thus, with respect to the homozygous null variant versus homozygous wild type, the situation is similar to that of our blood pressure example—the genotypic association provides evidence of the effect of alcohol consumption, through allowing comparison of a group of low drinkers to a group who drink considerable amounts of alcohol, with no confounding factors differing between these groups. With respect to both oesophageal and head and neck cancer, acetaldehyde (the metabolite that is increased in people carrying the null variant who do drink alcohol) is considered to be carcinogenic [82]. Thus, drinkers who carry the null variant have higher levels of acetaldehyde than those who do not carry the variant. As shown above, people who are homozygous for the null variant drink very little alcohol, but heterozygous individuals do drink. When the heterozygotes are compared with wild type homozygotes, an interesting picture emerges—the risk of oesophageal cancer is higher in the heterozygotes, although they drink less alcohol than the homozygotes. If alcohol itself acted directly as the immediate causal factor, cancer risk would be intermediate in the heterozygotes compared with the other two groups. Acetaldehyde is the more likely causal factor, as heterozygotes as a group drink some alcohol but metabolize it inefficiently, leading to accumulation of higher levels of acetaldehyde than would occur in homozygotes for the common variant, who metabolize alcohol efficiently, and homozygotes for the null variant, who drink insufficient alcohol to produce raised acetaldehyde levels. Examination of the difference in oesophageal cancer risk between ALDH2 heterozygotes and those homozygous for the wild type, stratified by drinking status, reveals that in non-drinkers there is no robust evidence of any association between genotype and oesophageal cancer outcomes, as would be expected if the underlying environmentally modifiable causal factor were alcohol intake and the mechanism was through acetaldehyde levels. In further support of the hypothesis, amongst people who were drinking alcohol there was increased risk amongst heterozygotes, who have higher acetaldehyde levels, and this was especially marked in heavy drinkers, who would have the greatest difference in acetaldehyde levels according to genotype (Fig. 8). A similar analysis has been performed for head and neck cancer and again demonstrates no association of genotype and cancer risk in never drinkers and a graded association according to level of alcohol intake among alcohol drinkers [81].
Risk of oesophageal cancer in individuals with the ALDH2*1*2 versus *1*1 genotype [75]. The “other” category are alcohol drinkers who fall outside of the heavy drinking categories
Identifying the causal element within complex dietary mixtures
Particular dietary intakes tend to correlate with each other, such that individuals with high fruit and vegetable consumption would be more likely to have low saturated fat intake, for example. Furthermore, different micronutrient intakes will show correlations, such that high vitamin C intake would be associated with higher on-average beta-carotene and vitamin E intake, for example. Separating out which specific aspects of the diet are causally related to disease is problematic in this context. For example, studies of neural tube defects (NTDs) demonstrate that mothers of offspring with NTDs were different with respect to many aspects of their dietary intake from control mothers [83, 84]. The mothers of cases have lower intakes of many vitamins, for example. In this situation, a test of folic acid metabolism—the FIGLU test [85]—pointed to folate as the crucial element [86]. With molecular genetic approaches, demonstration of gene-by-environment interactions can help identify which particular dietary factor is related to disease risk. However, as demonstrated in Fig. 7 with regard to alcohol and cigarette smoking, the correlated nature of exposures will lead to interactions with relevant genotypes being seen both for the causative factor (which the genotype may well modify absorption or metabolism of) and the non-causal factor, but the interaction will be stronger for the casual factor. In this situation, identifying the strongest gene-by-environment interactions—in particular when the genotype is known to modify absorption or metabolism of one of the dietary factors under study—can help isolate the specific nutritional factor having a causal influence on the disease outcome.
Problems and limitations of Mendelian randomization
The Mendelian randomization approach provides useful evidence on the influence of modifiable exposures on health outcomes. However, there are several limitations to this approach, in particular relating to the need for large sample sizes and adequate statistical power. These have been discussed at considerable length elsewhere [17, 21, 35] and therefore the focus here is on implications of these for interpretation of gene-by-environment interaction.
The power of Mendelian randomization lies in its ability to avoid the often substantial confounding seen in conventional observational epidemiology. However, confounding can be reintroduced into Mendelian randomization studies and when interpreting the results, this possibility needs to be considered. First, it is possible that the locus under study is in linkage disequilibrium—i.e., is associated—with another polymorphic locus, with the former being confounded by the latter. It may seem unlikely, given the relatively short distances over which linkage disequilibrium is seen in the human genome, that a polymorphism influencing, for instance, CHD risk, would be associated with another polymorphism influencing CHD risk (and thus producing confounding). There are, nevertheless, examples of different genes influencing the same metabolic pathway being in physical proximity. For example, different polymorphisms influencing alcohol metabolism appear to be in linkage disequilibrium [87].
Second, Mendelian randomization is most useful when it can be used to relate a single intermediate phenotype to a disease outcome. However, polymorphisms may (and probably often will) influence more than one intermediate phenotype, and this may mean they proxy for more than one environmentally modifiable risk factor. This pleiotropy can be generated through multiple effects mediated by their RNA expression or protein coding, through alternative splicing, where one polymorphic region contributes to alternative forms of more than one protein [88], or through other mechanisms. The most robust interpretations will be possible when the functional polymorphism appears to directly influence the level of the intermediate phenotype of interest, but such examples are probably going to be less common in Mendelian randomization than in cases where the polymorphism could in principle influence several systems, with different potential interpretations of how the effect on outcome is generated.
Linkage disequilibrium and pleiotropy can reintroduce confounding and thus reduce the potential value of the Mendelian randomization approach. Genomic knowledge may help in estimating the degree to which these are likely to be problems in any particular Mendelian randomization study, through, for instance, explication of genetic variants that may be in linkage disequilibrium with the variant under study, or the function of a particular variant and its known pleiotropic effects. Furthermore, genetic variation can be analyzed in relation to measures of potential confounding factors in each study and the magnitude of such confounding estimated. Empirical studies to date suggest that common genetic variants are largely unrelated to the behavioural and socioeconomic factors considered to be important confounders in conventional observational studies [24]. However, relying on measurement of confounders does, of course, remove the central purpose of Mendelian randomization, which is to balance unmeasured as well as measured confounders.
In some circumstances, the genetic variant will be related to the environmentally modifiable exposure of interest in some population subgroups but not in others. The alcohol, ALDH2 genotype and blood pressure association affecting men but not women, discussed earlier, is an example of this. If ALDH2 genetic variation influenced blood pressure for reasons other than its influence on alcohol intake, for example, if it was in linkage disequilibrium with another genetic variant that influenced blood pressure through another pathway or if there was a direct pleiotropic effect of the genetic variant on blood pressure, the same genotype-blood pressure association should be seen among both men and women. If the genetic variant only influences blood pressure through its effect on alcohol intake, an effect should only be seen in men, which is what is observed. This further strengthens the evidence that the genotype–blood pressure association depends upon the genotype influencing alcohol intake and that the associations do indeed provide causal evidence of an influence of alcohol intake on blood pressure.
In some cases, it may be possible to identify two separate genetic variants, which are not in linkage disequilibrium with each other, but which both serve as proxies for the environmentally modifiable risk factor of interest. If both variants are related to the outcome of interest and point to the same underlying association, then it becomes much less plausible that reintroduced confounding explains the association, since it would have to be acting in the same way for these two unlinked variants. This can be likened to RCTs of different blood pressure–lowering agents, which work through different mechanisms and have different potential side effects. If the different agents produce the same reductions in cardiovascular disease risk, then it is unlikely that this is through agent-specific (pleiotropic) effects of the drugs; rather, it points to blood pressure lowering as being key. The latter is indeed what is in general observed [89]. In another context, two distinct genetic variants acting as instruments for higher body fat content have been used to demonstrate that greater adiposity is related to higher bone mineral density [63]. With the large number of genetic variants that are being identified in genome wide association studies in relation to particular phenotypes—e.g., >50 independent variants that are related to height; >90 that are related to total cholesterol and >20 related to fasting glucose—it is possible to generate many independent combinations of such variants and from these many independent instrumental variable estimates of the causal associations between an environmentally modifiable risk factor and a disease outcome. The independent estimates will not be plausibly influenced by any common pleiotropy or LD-induced confounding, and therefore if they display consistency this provides strong evidence against the notion that reintroduced confounding is generating the associations.
Special issues with confounding in studies of gene-by-environment interactions
It must be recognized that gene-by-environment interactions interpreted within the Mendelian randomization framework as evidence regarding the causal nature of environmentally modifiable exposures are not protected from confounding to the same extent as main genetic effects. In the ADH1B/alcohol/head and neck cancer example, any factor related to alcohol consumption—such as smoking—will tend to show greater association with head and neck cancer within the more rapid alcohol metabolizers, because smokers are more likely to drink alcohol and alcohol drinking interacts with ADH1B genotype in determining head and neck cancer risk. Because there is not a 1-to-1 association of smoking with alcohol consumption, this will not produce the quantitative interaction of essentially no effect of the genotype amongst never drinkers and an effect in the other drinking stratum, but rather a qualitative interaction of a greater effect in the smoking groups amongst whom alcohol consumption is more prevalent and a smaller, but still evident, effect in the non-smoking group amongst whom alcohol consumption tends to be less prevalent. This is indeed what is seen (Fig. 7). Situations in which both the biological basis of an expected interaction is well understood and in which a qualitative (effect vs. no effect) interaction may be postulated are the ones that are most amenable to interpretations with respect to the causal nature of the environmentally modifiable risk factor.
Non-linear associations
Mendelian randomization is most powerful when examining linear exposure-disease associations, such as those between circulating cholesterol levels and coronary heart disease. For possible non-linear associations—such as have been suggested for alcohol intake and CHD—the situation may be more complex. First, the observed non-linear associations (U-shaped in the case of alcohol and coronary heart disease in many studies) may reflect confounding and bias, as discussed above. The suggested linear effects from a Mendelian randomization study may be the correct one. Second, it is possible to use single genetic variants or combinations of variants to define the proportion of individuals in a range of alcohol intake groups (from none to high) and investigate non-linear associations in this way. For example, a very large proportion of individuals homozygous for the ALDH2 null variant are non-drinkers, and if there was truly an elevated risk of coronary heart disease among non drinkers compared to moderate alcohol consumers this group would be expected to be at higher risk than heterozygotes.
Canalization and developmental stability
Perhaps a greater potential problem for Mendelian randomization than reintroduced confounding arises from the developmental compensation that may occur through a polymorphic genotype being expressed during foetal or early post-natal development and thus influencing development in such a way as to buffer against the effect of the polymorphism. Such compensatory processes have been discussed since Waddington introduced the notion of canalization in the 1940s [90]. Canalization refers to the buffering of the effects of either environmental or genetic forces attempting to perturb development and Waddington’s ideas have been well developed both empirically and theoretically [91, 92, 93, 94, 95, 96, 97]. Such buffering can be achieved either through genetic redundancy (more than one gene having the same or similar function) or through alternative metabolic routes, where the complexity of metabolic pathways allows recruitment of different pathways to reach the same phenotypic endpoint. In effect, a functional polymorphism expressed during foetal development or post-natal growth may influence the expression of a wide range of other genes, leading to changes that may compensate for the influence of the polymorphism. Put crudely, if a person has developed and grown from the intrauterine period onwards within an environment in which one factor is perturbed (e.g., there is elevated cholesterol levels due to genotype) then they may be rendered resistant to the influence of life-long elevated circulating cholesterol, through permanent changes in tissue structure and function that counterbalance its effects. In intervention studies—for example, RCTs of cholesterol-lowering drugs—the intervention is generally randomized to participants during their middle age; similarly, in observational studies of this issue, cholesterol levels are ascertained during adulthood. In Mendelian randomization, on the other hand, randomization occurs before birth. This leads to important caveats when attempting to relate the findings of conventional observational epidemiological studies to the findings of studies carried out within the Mendelian randomization paradigm.
In some Mendelian randomization designs, developmental compensation is not an issue. For example, when maternal genotype is utilized as an indicator of the intrauterine environment (e.g., maternal ADH variation discussed above), then the response of the foetus will not differ whether the effect is induced by maternal genotype or by environmental perturbation and the effect on the foetus can be taken to indicate the effect of environmental influences during the intrauterine period. Also in cases where a variant influences an adulthood environmental exposure—e.g., ALDH2 variation and alcohol intake—developmental compensation to genotype will not be an issue. In many cases of gene-by-environment interaction interpreted with respect to causality of the environmental factor, the same applies, since development will not have occurred in the presence of the modifiable risk factor of interest and thus developmental compensation will not have occurred.
Lack of suitable genetic variants to proxy for exposure of interest
An obvious limitation of Mendelian randomization is that it can only examine areas for which there are functional polymorphisms (or genetic markers linked to such functional polymorphisms) that are relevant to the modifiable exposure of interest. In the context of genetic association studies, it has been pointed out more generally that in many cases, even if a locus is involved in a disease-related metabolic process, there may be no suitable marker or functional polymorphism to allow study of this process [98]. In an earlier paper on Mendelian randomization [17], we discussed the example of vitamin C, since observational epidemiology appeared to have got the wrong answer regarding associations between vitamin C levels and disease. We considered whether the association between vitamin C and CHD could have been studied utilizing the principles of Mendelian randomization. We stated that polymorphisms existed that had been related to lower circulating vitamin C levels—for example, in the haptoglobin gene [99]—but in this case the effect on vitamin C was not direct and these other phenotypic differences could have an influence on CHD risk that would distort examination of the influence of vitamin C levels through relating genotype to disease. SLC23A1—a gene encoding for the vitamin C transporter SVCT1, which is involved in vitamin C transport by intestinal cells—was an attractive candidate for Mendelian randomization studies. However, by 2003 (the date of the earlier paper), a search for variants had failed to find any common SNP that could be used in such a way [100]. We therefore used this as an example of a situation where suitable polymorphisms for studying the modifiable risk factor of interest could not be located. However, since the earlier paper was written, functional variation in SLC23A1 has been identified that is related to circulating vitamin C levels [59]. This example is used not to suggest that the obstacle of locating relevant genetic variation for particular problems in observational epidemiology will always be overcome, but to point out that rapidly developing knowledge of human genomics will identify more variants that can serve as instruments for Mendelian randomization studies.
Conclusions
Mendelian randomization is not predicated on the assumption that genetic variants are major determinants of health and disease within or between populations. There are many cogent critiques of genetic reductionism and the over-selling of “discoveries” in genetics that reiterate obvious truths so clearly (albeit somewhat repetitively) that there is no need to repeat them here [101, 102, 103, 104]. Mendelian randomization does not depend upon there being “genes for” particular traits, and certainly not in the strict sense of a gene “for” a trait being one that is maintained by selection because of its causal association with that trait [105]. The association of genotype and the environmentally modifiable factor that it proxies for will be like most genotype–phenotype associations, one that is contingent and cannot be reduced to individual level prediction, but within environmental limits will pertain at a group level [106]. This is analogous to an RCT of antihypertensive agents, where at a collective level the group randomized to active medication will have lower mean blood pressure than the group randomized to placebo, but at an individual level many participants randomized to active treatment will have higher blood pressure than many individuals randomized to placebo. It is group level differences that create the analogy between Mendelian randomization and RCTs.
Finally, the associations that Mendelian randomization depend upon do need to pertain to a definable group at a particular time, but do not need to be immutable. Thus, ALDH2 variation will not be related to alcohol consumption in a society where alcohol is not consumed; the association will vary by gender, by cultural group and may change over time [107, 108]. Within the setting of a study of a well-defined group, however, the genotype will be associated with group-level differences in alcohol consumption and group assignment will not be associated with confounding variables.
Nutrition contributes importantly to population health, but the tools of nutritional epidemiology have proved fallible and led to misleading findings. Mendelian randomization offers one way in which the exciting developments in molecular genetics can help improve our understanding of nutritional determinants of population health. This approach is clearly distinct from the usual nutrigenomics approaches that promise personalized interventions tailored to individual genomes, but perhaps it offers at least as much in terms of ultimately identifying ways in which health can be improved. Use must be made of the optimal observational data for understanding the potential effects of interventions. Mendelian randomization approaches can help identify the most promising nutritional candidates for formal evaluation within randomized controlled trials of dietary manipulation, which must be carried out before such findings are considered ready for implementation. In this way, genetic epidemiology can be linked with conventional epidemiology, and in turn with intervention research, in a truly translational fashion.
Notes
Ackowledgments
George Davey Smith works in a centre that is supported by the MRC (G0600705) and the University of Bristol. Thanks to Tom Palmer for the calculation of variance in alcohol consumption accounted for by ALDH2.
| |
In collaboration with renowned liturgical Architect Jim O’Brien, our team dismantled, restored, designed, fabricated, and installed highly-detailed and historically-sensitive liturgical furnishings for Saint Paul of the Cross. This Passionist Monastery stands on a hilltop overlooking the Southside of Pittsburgh, PA, and is named for the founder of the Passionists, Saint Paul of the Cross.
The cornerstone of this historic monastery was laid on August 7, 1853, and two years later, Father Gaudentius Rossi preached the first Passionist retreat in the New World. For this project, our team contributed liturgical artistry and expertise to restoring this historic church’s elaborate statuary and marble work.
The new marble furnishings were designed by Architect Jim O’Brien who did a wonderful job with scale and styling. The classic Roman Corinthian style complements the original interior styling of the church, with new furnishings designed to look original to the church, which was not an easy task.
The tabernacle surround wall is very unique, this wall rises 28 feet tall, and is 2 feet deep, and over 11 feet wide. This creates a very tall, slender element, all clad in monumental stone. The tabernacle’s elegant and slender vertical element required a tremendous amount of engineering and highly-detailed shop drawings to precisely define all the alignments and assemblies. This wall fits within a very confined space in relation to the ceiling and the adjacent existing columns, which meant our material handling, scaffold and rigging design had to be very well-planned. The fragile stone pieces being handled included hand carved column capitals and the monumental cornice ridge, with some pieces weighing over a ton.
The core is a steel reinforced CMU masonry, which we designed and installed to integrate seamlessly with the various veneer and solid elements of the marble cladding. This wall fits just under the barrel vault plaster ceiling, so the rigging systems and shoring scaffold required were very challenging to erect and dismantle in this small, confined space.
The work we performed on this unique tabernacle wall element also included its brass tabernacle box and extraordinary Corpus statue, a reproduction of a historic piece by the acclaimed 16th century Italian stone carver Pietro Tacca. In this extraordinary eccesiastical sculpture, the arms are not dowelled. To sculpt a figure with outreached arms from one solid piece of Statuario Michelangelo speaks to the level of craftsmanship and coordination our Italian master stone carver brings to the work. To carve, ship, and install this fine art piece without any damage was a major accomplishment.
The church’s new Altar of Sacrifice is richly detailed with miniature columns, capitals, Roman arches, mensa cornice dentils, and inlay of the Passionist priest emblem. The combination of Carrara C, Arabescato, and Rosso Francia marbles creates a wonderful assembly of subtle and bold colors with hand-carved detail.
The Ambo shares the same level of detail and color as the altar of sacrifice, with the addition of a beautiful bible rest with a burgundy leather inlay. It took a great deal of coordination to inlay micro cables and create an Ambo which was ergonomically comfortable for the celebrant priests.
We created new side shrines for the church’s St. Joseph and St. Mary figures, cleaning and relocating the statues to their new homes. Similar to the tabernacle, they have a core of reinforced CMU to hold the slender statue pedestal and upright statue backdrop in a safe vertical position. Rugo relocated the St. Mary and St. Joseph statues, originally positioned 20 feet off the floor on cantilevered wall mounted pedestals, and relocated them to the new side shrines. Rugo performed all the deep cleaning and patching of these antique statues prior to installing them on their new pedestals at each side Shrine. With the side shrines capped with a single solid piece of Bianco Carrara C, great skill in rigging and installation was required for all this work.
For a new center aisle decorative paving plan, we worked closely with the architect to help select a design and marble colors. This material compliments the new furnishings and the old Tennessee marble nave flooring from the 1850s. The floor was completely fabricated in our Virginia marble studio, and features a very precise starburst, with custom-made polished brass star points. Extensive field measuring and finished floor grade alignments were required to make this new aisle paving meet various existing floor conditions.
Of special interest to this renovation were the statues and shrines to two Italian Catholic saints, St. Gemma and St. Maria Goretti.
Sculpted in Italy, the statue of St. Maria Goretti, the youngest canonized saint in the Catholic church, was modeled on the one at her shrine in her hometown of Corinaldo, Italy. Our skilled team traveled to the chapel where a wood statue of her is displayed with a raised, outreached right arm. There we created a digital 3D scan of the wood sculpture, which was used to enlarge the statue for the St Paul of the Cross shrine niche. From that we created a clay model to capture the emotion in her face as she resisted her attacker. This masterful marble statue was then carved from a single block of stone and successfully installed in St Paul of the Cross. This is another example where our team was able to create complex, historically-accurate marble statuary.
The statue of St. Gemma offers consolation to the faithful who are worried and suffering. We were supplied limited old photos of St. Gemma, and from these photos our sculptor created a clay model at one-third scale, working closely with the architect to refine the final clay model design. We then used this model to carve the 5 foot marble statue in Statuario Michelangelo marble. The completed sculpture is a great example of fine marble statuary art, and Saint Paul of the Cross’s likeness will serve as a shrine of healing and forgiveness to future generations of churchgoers.
In the end, the Rector, Father Justin Kerber, and the project’s architect, Jim O’Brien, were pleased with the project. We were honored for the opportunity to help Saint Paul of the Cross continue to thrive as a vital force in the community.
Location: Pittsburgh, PA
Completion: 2020
Owner: Saint Paul of the Cross Monastery
Architect: O’Brien & Keane Architects
Services: Engineering and restoring the tabernacle wall element, new sacrificial altar, ambo, side shrines and center aisle, patterned marble paving, and the restoration and relocation of statuary. | https://www.rugostone.com/projects/saint-paul-of-the-cross/ |
The Treasure Trove Unit will be returning to the office on a part-time basis in order to meeting with finders and take in finds. This will be by appointment only and finders must e-mail [email protected] to book a slot. We will now be taking appointments for Tuesdays and Thursdays, for w/c 7th September 2020 onwards.
Full guidance for finders coming into the museum can be found here.
***Please note: the Treasure Trove Unit (TTU) is still operating a reduced service and staff are in the office on a limited, part-time basis. Updates on finds may still take weeks or months***
Update 18/08/2020
SAFAP 26th August 2020
Unfortunately we will not be able to allocate Treasure Trove cases at the 26th August 2020 due to the continuing home working of the Treasure Trove Unit. We will update finders as to when these cases will be allocated as soon as possible.
Coronavirus (COVID-19)
Update 25/06/2020
Please find our latest guidance here:
Guidance on searching for archaeological finds in Scotland during COVID-19.
Please note that we are currently working from home and do not have access to our work phones. To get in contact, please e-mail us at [email protected].
Contact us
For any inquiries relating to the Treasure Trove Unit, or if you have an archaeological object to report please don’t hesitate to get in touch. For TTU staff please see ‘People’.
Contact Emily Freeman or Ella Paul: [email protected]
phone: 0131 247 4082/4025
Find us on: | https://treasuretrovescotland.co.uk/?shared=email&msg=fail |
Let us start by simplifying the task. We can throw a third of it away from the start. Textbooks traditionally divide forecasting techniques into three: qualitative techniques, time series techniques and causal models. A little research reveals that "qualitative techniques" is academicspeak for guesswork. The books talk of Delphi methods, where a lot of people make guesses and you average out the answers. Or "scenario methods", where a few people make the guesses instead. These techniques are not of much use to the average business. They are long-range guesstimates employed to "forecast" things like technological developments in a particular industry.
That leaves time series analysis techniques and causal modelling. Another glance at the contents page shows that these, too, divide into impressive-sounding individual techniques, with names like "exponential smoothing" and "multiple regression". Then, as page after page of equations swim into view, confusion sets in: not just because of the mathematics but also because it is difficult to tell which technique is suited to which circumstances.
However, the choice is not really so difficult. It revolves largely around two very simple questions. First, can what actually causes demand be identified? Second, can any quantified data on it be obtained? These questions lie behind any choice of forecasting technique. Forecasting is rather like selecting a gear in a car: no one technique is always right - it all depends on the circumstances. The forecaster chooses a technique appropriate to the data available and its relevance to the underlying causal factors.
The most readily available data often comes in the form of month-by-month sales histories. But these simply say what happened to sales, not why. The fact that sales have been increasing by 5% a month for the past year might mean that they will go up 5% again next month. Then again, they might not. There is no causal link between turning the page of the calendar and an automatic increase in sale.
It is customers, not calendars, that make buying decisions. The key to causal forecasting is understanding the factors behind these decisions. One of the most important is simply the raw requirement: the level of demand which customers are experiencing for the products and services that they sell. But other factors impinge too: the general economic climate, price, competitiveness, and so on. Aggregated together, these individual buying decisions build up to the graph on the sales manager's wall.
Theoretically, therefore, it is better to try and get a handle on the causal factors themselves, rather than to treat time as a proxy for them. This is the principle behind the causal modelling approach. Causal modelling involves building a "model" (consisting of one or more equations) which embodies the relationship between one or more "independent" variables and the "dependent" variable that they influence - namely demand. The equations can be very simple indeed. A typical multiple regression might say "so much of the movement in month-on-month sales is explained by factor A, so much by factor B and so much by factor C".
Provided that the model is well constructed - at least conceptually - then causal techniques offer high levels of accuracy. But there is a snag: they do need data - stretching into the future - on the factors that have been incorporated. It is pointless to consider causal techniques if there are not any numbers to plug into them. And while industrial giants might consider making the investment necessary to compile such data, this is simply not an option for most companies. | https://www.managementtoday.co.uk/uk-business-forecasting-making-start-future-2-4/article/408508 |
aspell dictionaries.
Generate a Custom aspell Dictionary
Change into the directory that contains all of your posts and run the following command (
--ignore 2 ignores any word that is two characters or less):
for POST in *.md do cat $POST | aspell list --ignore 2 done | sort | uniq
This will loop through every post and output every word to your terminal that
aspell thinks is misspelled. There will be plenty of duplicate words, so the above command will also
sort the output and pipe the sorted output to
uniq to have a de-duplicated list of words. Save or pipe the final output to a text file. I named mine aspell-technology-dictionary.txt.
This is the manual part of the process. Open the text file in your favorite text editor and manually scroll through it to remove words that you know are actually misspelled. For example, VMWare is an incorrect spelling, but VMware is a correct spelling, so I would remove the word VMWare from the text file.
Once you are finished, scroll to the very top of the file and add the following line:
personal_ws-1.1 en 0
aspell uses this for parsing purposes. Save the file.
My Custom aspell Dictionary
If you are interested, here is my custom generated aspell dictionary.
Find Misspelled Words with aspell
Finally, use the custom dictionary with the following command:
for POST in *.md do echo $POST echo cat $POST | aspell list --add-extra-dicts=aspell-technology-dictionary.txt --ignore 2 echo done
This will provide a list of all your posts and any words
aspell thinks are misspelled. You can then manually open each post to fix misspelled words.
Alternatively, you can go through each post in interactive mode with the following command: | https://thornelabs.net/posts/spell-checking-many-posts-with-aspell-and-a-custom-dictionary.html |
U.S. 89A begins at U.S. 89 in Bitter Springs, and travels north and west along the Vermillion Cliffs and Grand Staircase to Fredonia, AZ, then north to the Utah state line.
U.S. 89A is the former alignment of U.S. 89, and carried that shield from 1926 through 1959. In 1959, with the construction of Glen Canyon Dam, U.S. 89 was moved to it's present alignment through Page, leaving the former highway as U.S. 89A.
U.S. 89A is the Fredonia - Vermillion Cliffs Scenic Route. The road was designated a scenic highway in 1996.
Reassurance marker for US 89A. Photo taken 09/25/11.
Distance sign to Jacob Lake and Fredonia, the two major control points on U.S. Highway 89A. The sign in the background shows mileage to the other three towns on the route, Marble Canyon, Vermillion Cliffs, and Cliff Dwellers. Photo taken 09/25/11.
Distance sign to Marble Canyon, Vermillion Cliffs and Cliff Dwellers. These "towns" are essentially lodges along the road. Photo taken 09/25/11.
U.S. Highway 89A is the Vermillion Cliffs scenic road. Photo taken 09/25/11.
US 89A travels along the Vermillion Cliffs. The Vermillion Cliffs mark the edge of the Navajo Sandstone layer of the Colorado Plateau. Photo taken 09/25/11.
US 89A travels along the Vermillion Cliffs. The Vermillion Cliffs mark the edge of the Colorado Plateau Photo taken 09/25/11.
US 89A descends slowly towards the Navajo Bridge, at 3,534 feet. Photo taken 09/25/11.
The Vermillion Cliffs have been protected as part of the new Vermillion Cliffs National Monument, administered by the Bureau of Land Management. Photo taken 09/25/11.
US 89A is quite scenic as it travels along the Vermillion Cliffs. Photos taken 09/25/11.
Distance to Lee's Ferry, one mile. Photo taken 09/25/11.
Distance sign to Vermillion Cliffs, Cliff Dwellers and Fredonia. Photo taken 09/25/11.
US 89A approaches the Navajo Bridge. There is a parking area on both sides of the bridge, as seen here. Photo taken 09/25/11.
The New Navajo Bridge opened in 1995 as a replacement for the original Navajo Bridge. Photo taken 06/22/07.
The new bridge has a similar design to the original 1929 bridge, but a wider bridge deck for improved safety and additional load-carrying capacity. Photo taken 06/22/07.
The new bridge is 150 feet south of the original bridge. This photo shows both bridges in relation to each other, with the new one on the right. Photo taken 09/25/11.
Looking northbound on the approach road to the old bridge. This bridge was built in 1929, before U.S. 89 was even paved, and was the first crossing of the Colorado River in northern Arizona. Photo taken 06/22/07.
Looking northbound at the end of the bridge while on the original bridge, now a pedestrian walkway. Notice the narrow width. Photo taken 06/22/07.
Looking southbound across the original bridge. Photo taken 06/22/07.
This is the view from the original bridge deck, facing southbound. Notice the terrain is less steep than it is facing northbound. Photo taken 06/22/07.
This construction plaque is placed on the north end of the bridge. As part of the construction of the new bridge, a visitors center was placed at the north end of the old bridge. Photo taken 06/22/07.
The Colorado River is 430 feet below the deck of the bridge, in a deep gorge. The only access point through this gorge is at Lees Ferry, located just north of the bridge. Photo taken 06/22/07.
Turn right for the Navajo Bridge Interpretive Center. The Center is a visitors center for the bridge, as well as the recreation lands around the bridge. Photo taken 09/25/11.
Turn right for Lee's Ferry. Lee's Ferry is the original crossing of the Colorado River at this site, and is now part of Glen Canyon National Recreation Area. Lee's Ferry marks the northern edge of Grand Canyon National Park. Photo taken 09/25/11.
U.S. Highway 89A enters the small town of Marble Canyon. Marble Canyon is named after the canyon along the Colorado River (which U.S. 89A paralells), and is known for the Cliff Dwellers Lodge located in town. Photo taken 06/22/07.
The Bureau of Land Management has responsibility for the Arizona Strip lands along U.S. Highway 89A. Photo taken 06/22/07.
Distance sign to the town of Cliff Dwellers. Photo taken 09/24/11.
Distance sign to Jacob Lake, Fredonia, and Kanab, Utah. Photo taken 09/24/11.
U.S. Highway 89A parallels the Vermillion Cliffs all the way to Jacob Lake. Photo taken 09/24/11.
U.S. Highway 89A enters the town of Cliff Dwellers. This is the last small town reached before climbing up to Jacob Lake. Photo taken 09/24/11.
Distance sign to Jacob Lake (30 miles) and Fredonia (62 miles). Photo taken 09/24/11.
California Condors were re-released into the wild in the Vermillion Cliffs. The birds have taken well to their native habitat in Arizona. Photo taken 09/24/11.
Turn left for the San Bartolome Historic Site. This site discusses the history of the Arizona Strip and the Dominguez-Escalante expedition which explored this area. Photo taken 09/24/11.
US 89A travels through House Rock valley. The valley is named for two boulders that were used as a house by Mormon settlers. Photo taken 09/24/11.
Reassurance marker for Northbound US 89A. Photo taken 09/24/11.
US 89A meets House Rock here. House Rock is the site of a buffalo ranch, managed by the Arizona Department of Fish and Game. Photo taken 09/24/11.
Approaching the Kaibab National Forest, the path through an unnamed canyon up to the plateau is clearly visible. Photo taken 09/24/11.
US 89A enters the Kaibab National Forest. The Kaibab forest covers the Kaibab plateau, which is isolated from other forests in Arizona. Photo taken 09/24/11.
US 89A rapidly climbs up the Colorado Plateau through a series of sharp curves. Photo taken 09/24/11.
Sharp curves can be found as US 89A starts to climb in earnest. Photo taken 09/24/11.
US 89A travels up an unnamed canyon as it climbs towards the top of the Plateau. Photo taken 09/24/11.
As US 89A climbs in altitude, the pinyon juniper woodland replaces the sage scrub found at lower elevations. Photo taken 09/24/11.
US 89A travels through the Pinyon Juniper woodland common to the Colorado Plateau just below the Ponderosa Pine forest. Photo taken 09/24/11.
At 7000 feet in elevation, US 89A enters the Ponderosa Pine forest of the Kaibab Plateau. Photo taken 09/24/11.
Advance signage for US 67, 1/2 mile. Photo taken 09/24/11.
US 89A enters the "town" of Jacob Lake, located at the high point of the road (7.921 feet). Jacob Lake is home to the Jacob Lake Inn, which also houses a restaurant and gas station. This marks the last services found before entering Utah. Photo taken 09/24/11.
Turn left for the North Rim of the Grand Canyon and Arizona 67, or continue ahead for US 89A and Fredonia. Photo taken 09/24/11. | https://www.aaroads.com/guides/us-089a-az/ |
St. Augustine has already started a probable year-long permit process to double its water supply from 2 million gallons per day to 4 million gallons per day through digging three new deep wells to the four it uses now.
But to do that, it first needs to finish extensive scientific studies to convince the state Department of Environmental Protection to allow the utilities department to dump about 600,000 gallons per day of saline concentrate into the San Sebastian River.
Engineer Martha Graham, the city's public works director, said the city's water plant uses a low-pressure reverse osmosis system to treat the brackish water it draws from deep wells.
To double that intake to 4 million gallons per day, the city needs to expand its treatment plant on West King Street.
"(However), we want to make sure we have our disposal secured before we expand our plant," Graham said Monday.
The 300,000 gallons per day of high-salinity concentrate -- dissolved salt, metals and solids -- the plant squeezes from the water through membranes now goes directly into the city's waste water treatment system.
"(But) we're limited in our sewer system and waste water plant capacity," she said.
The plan is to take the 300,000 gallons now produced, combine it with another 300,000 from the expansion, and disburse the entire 600,000 gallons into the river, which is already brackish.
The DEP wants the concentrate spread by diffusers in the river, she said.
"(It's) not as saline as concentrate taken from (desalination). There will be a lot of testing to ensure that there's no harm to flora, fauna, sea grasses and manatees."
Concentrate discharges into brackish water are permitted, but the permit process is "very lengthy and data intensive," Graham's Oct. 5 memo to City Manager Bill Harriss said.
The city's consultant, Camp Dresser and McKee, will develop "relevant supporting information and submit the applications" for the permit.
Fees for the permitting will amount to $72,000 with the chemical and laboratory costs amounting to about $32,000.
This item was listed on the city's Consent Agenda, which doesn't require an individual vote by the commission for approval.
Graham said, "Depending on the outcome of this, more studies may be needed. Once DEP reviews the data, they may ask for more." | https://www.staugustine.com/story/news/2009/10/13/city-seeks-double-water-supply/16228073007/ |
Do High Heels Really Cause Hammer Toes and Bunions?
An average, healthy person should try to take 8,000 to 10,000 steps a day, adding up to about 115,000 miles in a lifetime. By age 70, the person will have walked the equivalent of 4 times around the globe. Unfortunately, many miles are walked in uncomfortable shoes that do not fit properly and cause pain and foot problems. | https://www.directorthocare.com/do-high-heels-really-cause-hammer-toes-and-bunions/embed/ |
May 13, 2009: Atlanta: Robin Raina Foundation (RRF) announced today that it has started work on building 400 new homes for the slum dwellers of Bawana - Delhi, in response to the recent fire that destroyed 675 grass huts in the slums of Bawana.
These concrete homes slated to be built by 30th of Sep. 2009, would provide a solid home to thousands of slum dwellers that have never had a home of their own in their entire life. It would also provide a huge relief from the misery imposed on them by the after effects of the recent fire that destroyed everything they had.
The fire in the E-block of the Bawana slums two weeks back, caused extensive devastation in the area. All the 325 concrete homes built by the foundation were unscathed while 675 families whose houses have not yet been built by RRF, saw their grass huts completely destroyed.
RRF took the lead and launched immediate relief efforts. Starting with calling upon the Fire Brigade team, the RRF volunteers played a key role in the relief efforts. The RRF volunteers jumped into the fire to save the lives of people, organized immediate medical relief camp followed by distribution of necessary items among to the fire affected people like— Ration material, milk, bread, biscuits, bananas, new utensils, etc. RRF provided meals for a few days, with the RRF Founder Robin Raina personally leading the relief efforts to distribute food and utensils to the fire victims.
Once these 400 homes are finished and handed over to the slum dwellers by 30th Sep. 2009, the foundation would have handed over possession of 725 homes to the slum dweller families of Bawana region in Delhi. The foundation intends to build 6000 homes in the area, with the project being seen today seen as the largest slum charity project undertaken by any organization in India, without government help.
At the occasion, Robin also released an appeal to all well-meaning people around the world to donate to the cause of building these homes at a cost of $1600 per home. Robin also issued an appeal to the general public around the world, to donate generously to the building efforts. | https://rainafoundation.com/press-release/robin-raina-foundation-starts-work |
Complete the form below and we will email you a PDF version of "DNA Fingerprinting for Soils May Soon Help Catch Criminals"
The James Hutton Institute of Scotland recently distributed a press release detailing the importance of soil DNA fingerprinting and its crucial role in everyday forensic analysis. Here Professor Lorna Dawson speaks to us about the international project which promises to catapult this type of forensic analysis into mainstream laboratories and the European courts.
TN: How did the international MiSAFE project collaboration come about and could you tell us more about this?
Professor Dawson: As we are world leaders at the James Hutton Institute in the field of research and application of forensic soil science, industry experts, and microbial ecologists asked if we would become a partner in the consortium. I was invited to the Guarda Civil in Spain to speak at a seminar on the topic and we started to write the new successful research project.
TN: How important is soil as a forensic tool and which characteristics are profiled?
PD: Soil is a very important trace evidence and a highly valued search component. The characteristics that are used depends on each specific case context. I would examine where, what and when before considering the choice if best approach. Currently we would analyse the mineralogical profile to characterise the geological (inorganic soil component) and the organic chemical profile to characterise the plant residues persisting in the organic soil component. We also profile the fungi, bacteria, plant species (using morphology and DNA) and faecal components to ascertain animal origin.
TN: Which analytical techniques are predominantly used in the investigation of soil samples?
PD: XRD, FTIR, SEM, microscopy, GC, GC-MS, ICP, and DNA.
TN: What is the ultimate goal of the MiSAFE project?
PD: To test the reproducibility of the microbial profile analysis as an analytical tool to characterise soil on a questioned item to help track source and to present as evidence in court. To produce produce tested protocols and operating procedures to enable the method to be tied in labs across Euroe, after rigorous testing on a range of soils and case contexts.
TN: What does the future hold for soil fingerprinting how will this complement other forensic areas of analysis?
PD: This project will hopefully enable soil microbial DNA profiling to be used in a range of European courts. I think it could widen the range of types of cases where soil can be used, from currently mainly serious crime to volume crime such as burglary etc, and will enable it to be taken up in main stream laboratories, where similar methods are used for human DNA. The combination of mineralogy, organic chemistry and microbial biological characteristics available from on a trace amount if soil will substantially enhance us evidential value in courts of law. | https://www.technologynetworks.com/tn/blog/dna-fingerprinting-for-soils-may-soon-help-catch-criminals-228145 |
8.3 The Process of Photosynthesis Describe what happens during the light-dependent reactions. Describe what happens during the light-independent reactions. Identify factors that affect the rate at which photosynthesis occurs. Lesson Summary The Light-Dependent Reactions: Generating ATP and NADPH Photosynthesis begins with these reactions, which occur in thylakoid membranes. Photosystems are clusters of proteins and chlorophyll in thylakoid membranes. High-energy electrons form when pigments in photosystem II absorb light. The electrons pass through electron transport chains, a series of electron carrier proteins. The movement of electrons through an electron transport chain causes a thylakoid to fill up with hydrogen ions and generates ATP and NADPH. ATP synthase is a membrane protein through which excess hydrogen ions escape a thylakoid in a process that makes ATP. The Light-Independent Reactions: Producing Sugars They occur in the stroma of thylakoids and are commonly called the Calvin cycle. Six carbon dioxide molecules from the atmosphere enter the Calvin cycle and combine with 5-carbon compounds already present. They produce twelve 3-carbon molecules. Two 3-carbon molecules are removed from the cycle. They are used by the plant to build sugars, lipids, amino acids, and other compounds. The remaining ten 3-carbon molecules are converted back to 5-carbon molecules and begin a new cycle. Factors Affecting Photosynthesis Many factors influence the rate of photosynthesis. Temperature, light intensity, and availability of water affect photosynthesis. C4 and CAM plants have a modified type of photosynthesis that enables the plants to conserve water in dry climates. The Light-Dependent Reactions: Generating ATP and NADPH For Questions 1–5, write True if the statement is true. If the statement is false, change the underlined word or words to make the statement true. True 1. Photosystems are clusters of chlorophyll and proteins. False- PSII light. 2. The light-dependent reactions begin when photosystem I absorbs True II. 3. Electrons from water molecules replace the ones lost by photosystem False-NADPH 4. ATP is the product of photosystem I. False-electron 5. ATP and NADPH are two types of protein carriers. 6. How does ATP synthase produce ATP? ATP synthase allows H+ ions to pass through the thylakoid membrane, rotating the enzyme. The rotation creates the energy needed to bind ADP to a phosphate and produces ATP. 7. When sunlight excites electrons in chlorophyll, how do the electrons change? They reach a higher energy state and begin to move down the ETC. 8. Where do the light-dependent reactions take place? The thylakoid membrane inside the chloroplast 9. Complete the table by summarizing what happens in each phase of the light-dependent reactions of photosynthesis. Light-Dependent Reactions Summary Photosystem II Photosystem II absorbs light and increases the electrons’ energy level. The electrons are passed to the electron transport chain. Enzymes in the thylakoid break up water molecules into 2 electrons, 2 H+ ions, and 1 oxygen atom. The 2 electrons replace the high-energy electrons that have been lost to the electron transport chain. Energy from the electrons is used by the proteins in the chain to pump H+ ions from the stroma into the thylakoid space. At the end of the electron transport chain, the electrons themselves pass to photosystem I. Electron Transport Chain Photosystem I The electrons do not contain as much energy as they used to. Pigments use energy from light to reenergize the electrons. At the end of a short second electron transport chain, NADP + molecules in the stroma pick up the highenergy electrons, along with H+ ions, at the outer surface of the thylakoid membrane, to become NADPH. Hydrogen Ion Movement and ATP Formation Hydrogen ions began to accumulate within the thylakoid space. The buildup of hydrogen ions makes the stroma negatively charged relative to the space within the thylakoids. This gradient, the difference in both charge and H+ ion concentration across the membrane, provides the energy to make ATP. The Light-Independent Reactions: Producing Sugars 10. What does the Calvin cycle use to produce high-energy sugars? CO2 and ATP and NADPH (from the light-dependent reactions). 11. Why are the reactions of the Calvin cycle called light-independent reactions? They do not require direct light, they get energy from ATP and NADPH 12. What makes the Calvin cycle a cycle? The compound with which CO2 combines with is a product of the cycle, which enables the reactions to occue over and over. 13. Complete the diagram of the Calvin cycle by filling in the missing labels. 12 CCC 6 CCCCC 12 ADP 6 ADP 12 NADP+ 12 CCC 10 CCC Factors Affecting Photosynthesis 14. What are three factors that affect the rate at which photosynthesis occurs? Three factors that affect the rate of photosynthesis are temperature, light intensity, and the availability of water 15. Would a plant placed in an atmosphere of pure oxygen be able to conduct photosynthesis? Explain your answer. No. One of the materials that plants use in photosynthesis is carbon dioxide. None of this gas would be present in an atmosphere of pure oxygen. Therefore, photosynthesis could not occur 16. Complete the table about variations of photosynthesis. Type Description Examples C4 photosynthesis Occurs in plants that have a specialized chemical pathway that allows them to capture even very low levels of carbon dioxide and pass it to the Calvin cycle. Corn, sugar cane, sorghum CAM CAM plants only allow air into their leaves at night which minimizes water loses. Carbon dioxide is trapped in the leaves and it is released during the day, enabling carbohydrate production. pineapple trees, many desert cacti, and “ice plants” Apply the Big idea 17. Photosynthesis plays an important role in supplying energy to living things. Considering what the products of photosynthesis are, what is another way in which photosynthesis is vital to life? Photosynthesis is the way in which new organic macromolecules are added to the living portion of the biosphere. All living things that are not photosynthetic rely on photosynthesis as a source of the organic building blocks needed for growth. Photosynthesis also releases oxygen into the atmosphere. Without this oxygen we would not be able to breathe. | https://studyres.com/doc/608143/8.3-study-guide-answer-key |
When Jenni and Paul Callahan got married, they lived in Alexandria, Virginia, a satellite city of Washington, D.C. Paul commuted to his job as a legislative assistant on Capitol Hill, and after they had kids, Jenni ran an at-home daycare. On the Hill, Paul focused on his district’s agricultural issues and frequently traveled to rural areas on behalf of his member of Congress. What began as a work requirement turned into a personal fascination—with farming.
Series
Like many urbanites who haunt the local farmers’ market or order their produce through a CSA, the Callahans dreamed of escaping the rat race and setting up as small farmers. “We had already started to feel like it was time to get out of the D.C. area,” Jenni says. “In spite of the many conveniences of public transportation and access to museums, we wanted a slower-paced life. We already had a family support system in South Carolina, where we had both grown up, so it seemed like a natural choice.”
Unlike most farmers’ market regulars, the Callahans actually upped sticks. After eight years in metropolitan D.C., Paul and Jenni made the move back to South Carolina. Two years later, they purchased a three-acre farm, moving in when Jenni was eight months pregnant with their fourth child. They taught themselves how to grow food and raise chickens and goats by reading books and blogs, watching YouTube videos, and seeking advice from other local farmers.
In 2016, following a gradual, three-year transition from outside employment, they became full-time farmers at Harp & Shamrock Croft outside of Spartanburg, South Carolina.
Despite the rise of the local food movement, the outlook for farmers isn’t rosy. Farm incomes are down, and jobs are projected to decline due to increased agricultural productivity. The Callahans believe in what they’re doing but know that it holds significant risk. To fund their dream, they depleted their savings. Since their farm is still young, all the money they earn has to be reinvested in the land. Spending is reserved for necessities until they have more financial security.
Farming is an all-consuming job. Each day brings duties that must be carried out for the farm to remain functional, let alone successful. There are no days off. The Callahans don’t take vacations right now, but if they could afford to, it would mean entrusting their livelihood to a farm sitter. That means extra costs and finding someone skilled and dependable. And any time away means lost work time and less income.
The family keeps a tightly organized schedule, following a routine tailored to each season. This is a typical day in winter, the leanest time of the year.
6:30 A.M.: Jenni and Paul are up an hour before the sun, making coffee, attacking the laundry pile, posting marketing updates on social media, bookkeeping, and responding to emails from potential CSA customers and wholesale outlets.
January is the farm’s slow month, so this counts as a leisurely morning. “When we have crops in the ground, it is sunup to sundown,” Jenni says. “We wouldn’t do it if we didn’t love it, but it is backbreaking work for little pay. This is why there aren’t more farmers.”
“We wouldn’t do it if we didn’t love it, but it is backbreaking work for little pay. This is why there aren’t more farmers.”
7:00 A.M.: Barn chores include feeding and watering the five dozen chickens and two goats, as well as cleaning the barn and coop, milking the goats, and gathering eggs from the chickens. The couple also tends to the crops growing in the greenhouse, kale and lettuce, which supplement their income in the leaner months. They sell the greens in weekly baskets, along with root vegetables, eggs, and goat’s milk soaps.
8:00 A.M.: Breakfast with the entire family includes a discussion about the day’s goals, both school- and work-wise.
9:00 A.M.: Paul and Jenni plot out the growing areas for crops, inventory their available seeds, and make plans for an upcoming seed order.
10:00 A.M.: Paul mucks out the goat stalls, starts seeds in trays, and turns the dirt with the tractor. Jenni works with the children on their schoolwork. Homeschooling allows the Callahans to make farm life an educational opportunity. They incorporate chores into lessons—making soap serves as a chemistry lesson, for example.
12:00 P.M.: Family lunch.
1:00 P.M.: As the children work independently on school assignments, Jenni washes the eggs to package for sale and harvests the lettuce and other greens from the greenhouse.
The Callahans’ greenhouse (Harp & Shamrock Croft)
2:00 P.M.: Customers stop by to pick up their purchases during this set window of time. In between customers, Jenni and Paul research new strategies for their farming as well as running the business.
Jenni is already thinking ahead to the busier days. “In February, we’ll take on the task of starting seeds for our state-certified nursery. We sell plants directly to customers and to several retail establishments.”
5:00 P.M.: The goats and chickens must be fed again, and hay is replenished. If it’s particularly chilly, the animals will be herded inside and boarded for the night.
5:30 P.M.: Family dinner, usually with local meat as well as vegetables that the Callahans canned during the summer months.
8:00 P.M.: The children go to bed and their parents are not far behind.
“We wanted to teach our kids basic life skills. We wanted the fresh air and dirt. We wanted to build something,” Jenni says of their decision. Farm life is exhausting, but they don’t regret it. “When I set up my table at market in the summertime, I’m amazed that we grew all of that. And to think that people are taking our produce home and nourishing their children and families with it—it’s just the highest honor I can think of.” | |
An update to The Highway Code has introduced a hierarchy of road users, which creates ‘clearer and stronger priorities’ for pedestrians.
Changes to the Highway Code will mean drivers will need to give way to pedestrians at a junction, while cyclists must give way to people using a shared-use cycle track.
So we have 3 new rules I have listed them below.
Rule H1: hierarchy of road users
The first (and most significant) rule in the refreshed The Highway Code sets out the hierarchy of road users. Road users who can do the greatest harm (those driving large vehicles) have the greatest responsibility to reduce the danger they pose to other road users.
Pedestrians (children, older adults and disabled people in particular) are identified as ‘the most likely to be injured in the event of a collision’.
Here’s a look at what the hierarchy of road users looks like:
- Pedestrians
- Cyclists
- Horse riders
- Motorcyclists
- Cars/taxis
- Vans/minibuses
- Large passenger vehicles/heavy goods vehicles
As you can see, cyclists and horse riders will also have a responsibility to reduce danger to pedestrians. Even so, the updated The Highway Code emphasises that pedestrians themselves still need to consider the safety of other road users.
The Department for Transport says this system will pave the way for a ‘more mutually respectful and considerate culture of safe and effective road use’.
Rule H2: clearer and stronger priorities for pedestrians
This rule is aimed at drivers, motorists, horse riders and cyclists. The Highway Code now states clearly that, at a junction, you should give way to pedestrians crossing or waiting to cross a road that you’re turning into. Previously, vehicles had priority at a junction.
Drivers should also give way to pedestrians waiting to cross a zebra crossing, and pedestrians and cyclists waiting to cross a parallel crossing (a combined pedestrian and cycle crossing).
Meanwhile, cyclists should give way to pedestrians on shared-use cycle tracks, and are reminded that only pedestrians (including those using wheelchairs and mobility scooters) can use the pavement.
Pedestrians are allowed to use cycle tracks unless there’s a road sign nearby that says doing so is prohibited.
Rule H3: drivers to give priority to cyclists in certain situations
The updated The Highway Code urges drivers and motorcyclists not to cut across cyclists when turning into or out of a junction or changing direction or lane. This rule applies whether the cyclist ahead is using a cycle lane, a cycle track or simply riding on the road ahead.
Drivers are meant to stop and wait for a safe gap when cyclists are:
- Approaching, passing or moving away from a junction
- Moving past or waiting alongside still or slow-moving traffic
- Travelling on a roundabout
The Department for Transport claims that the changes, which are split into three main rules, ultimately aim to improve safety for pedestrians, cyclists and horse riders. The changes are due to come into force on 29 January. | https://okdrive.uk/highway-code-changes-for-january-29th-2022/ |
The pre-decimal penny was a coin that was issued between 1707 and 1970, before its replacement after decimalisation by the ‘new’ decimal one pence. At the time of its mintage, the pound was split into 20 shillings each worth 12 pence, so one penny was essentially 1/240 of a pound. Before the pre-decimal penny, there was another iteration of the coin, believe it or not, known as the English Penny.
Read more below to learn more about the pre-decimal penny; including its design and how it changed through the years of its circulation.
A brief history of the pre-decimal penny
In 1707, the first year that the pre-decimal penny was issued, the Kingdom of Great Britain was formed. This was formalised under the 1707 Act of Union, which saw England and Scotland merge. This had great significance for the pre-decimal penny, as the Scottish shilling was replaced by the pre-decimal penny.
So, the pre-decimal penny had come into circulation, but what were the specifications of it?
Before the Union Act, pennies had been minted in silver. However, as the price of silver continued to rise it was clear that this practice was unsustainable. Silver pennies stopped being minted for general circulation in 1660 but continued to be produced for Maundy money.
Maundy money refers to money that was given out during an event known as the Royal Maundy. The event itself was inspired by the Bible and was used by the Royal Family to show compassion to the public. It evolved through time and when Henry IV came to the throne he decided to give gifts to his subjects, the amount of which was equal to his age at the time.
It was not until the reign of Charles II that the Monarch gave Maundy money, initially in the form of hammered coins which were given out in 1662. By the 19th century, the Royal Maundy had evolved into solely giving Maundy money as a gift to people.
The tradition improved the perception of the Monarch to the public and allowed them to show compassion.
So, the pre-decimal penny was only minted in silver if it was to be used for Maundy money, but how did its specification change throughout its circulation? Let’s take a look through the 18th, 19th and 20th centuries and how the pre-decimal penny evolved during this time.
18th Century
The pre-decimal penny was issued during the reign of Queen Anne, but only as Maundy money in the years 1708, 1709, 1710 and 1713. It continued to be issued throughout the reigns of George I, George II and George III as Maundy money. However, during the reign of George III there became a shortage of the pre-decimal penny, which led to some merchants privately minting their copper tokens.
This became a huge problem and the Royal Mint had to issue Matthew Boulton to issue copper pennies at his mint in Birmingham in 1797. All of the pennies issued there were dated 1797. During this time coins were minted to contain a value of metal equal to the face value of the coin, which equated to one ounce of copper for a penny.
This led to the nickname of ‘cartwheel’ for pennies issued at this time due to their large size.
19th Century
In the early 1800s, the value of copper had increased which led to more problems with privately issued tokens. The value of the metal contained within a penny had become more than its face value, so it made no sense to use the coin to pay for anything. It became more common to smelt the coin down for the metal content.
In 1816 there was a large re-coinage programme overseen by the Royal Mint to overcome the issues with the penny. There were large amounts of silver and gold used to issue new coins, and in 1817 an Act of Parliament passed which added large penalties to those caught privately minting coins or tokens.
It wasn’t until 1860 however that the decision to change the metal used for the penny was made. Copper pennies were no more, as bronze was chosen to be used to make pennies. The decision was also made to no longer use an amount of metal equal to the face value of the coin to make sure that the same problems were not encountered again.
20th Century
In the 20th century, the bronze penny continued to be minted with a specification of a diameter of 10.86mm and a weight of 9.45g. The penny was issued every year in both Queen Victoria and King Edward VII’s reigns. It was not until 1923, during George V’s reign, that the penny was not minted. This continued for a 3-year gap and happened alongside a change in the composition of the penny to 95.5% copper.
Every year after the 3-year discontinuation in George V’s reign the penny was issued, with the most crucial year for coin collectors being 1933.
The 1933 Pennies
In 1933 there was no demand for any more pennies to be minted as so many had already been issued previously. The Royal Mint had received requests to produce a small number of pennies to be placed under the foundations of certain buildings that were erected that year. There are 7 known examples of these pennies, making them extremely rare. One of these was stolen from a Church in Leeds, with others in private hands or located in museums.
Alongside these 7 pennies, there were also 4 ‘pattern’ pennies produced by designer Andre Lavillier. A pattern coin is essentially a coin issued by the Royal Mint to trial a new design, and given that only 4 of these 1933 pattern coins are known to exist, they are incredibly valuable. In an auction in London in 2016 a genuine 1933 pattern penny sold for £72,000!
Edward VIII and George VI
There are a small amount of Edward VIII pennies produced as pattern coins in existence, but due to his abdication, these are the only pennies from this era.
Pennies were not minted every year of George VI’s reign, primarily due to the economic impact of World War Two. More specifically between 1941 and 1943, there were no pennies minted at all, and in 1951 there was only 120,000 of them minted.
The 1950s
There were so many pennies in circulation by the 1950s that there was no longer a need to produce any more. As previously mentioned only 120,000 were produced in 1951, and there were no more issued into circulation until 1961.
There are still however pennies from this era, as in 1953 there were 1,308,400 pennies issued for collectable coin sets to celebrate the new Monarch Queen Elizabeth II.
Alongside these collectable pennies, there was also at least one penny issued in 1954. This coin was part of an experiment by the Royal Mint into new designs for a portrait of the new Queen. There are 6 uniface versions of these experimental coins, 2 of which in the British Museum and the other 4 in the Royal Mint Collection; but there is thought to only be one complete coin from this year.
The 1960s
As decimalisation was decided to happen in 1971, the pre-decimal enjoyed its last few years of circulation in the 1960s. Pennies were struck right until 1970, with those produced after 1967 still having the date of 1967 inscribed on them. In 1970 750,476 pennies were struck for souvenir sets only, marking the last production of the pre-decimal penny as we know it.
How much are pre-decimal pennies worth?
In terms of the 18th and most of the 19th century, giving a broad answer for the value of a penny from this time is incredibly difficult. This is mainly because there were many different versions of the penny produced during this time including privately minted tokens, which means there is not a specific estimate of value for a penny from this range.
Towards the end of the 19th century in 1860 when the bronze specification was issued it becomes much easier to indicate a rough estimate for the value of a penny. This is because the specification of the pennies from this era remained the same for more than 100 years, meaning the value is much more stable. Remember that we are not including ‘special’ editions of the penny such as the 1933 or 1954 editions, but rather the standard circulating bronze penny.
As an estimate, a pre-decimal bronze penny will look to sell for between £1 and £3 depending on the quality.
This estimation is based on hundreds of sold values on eBay and as such represents what you can expect to sell your penny for. If you think you have a rare specimen then please look into contacting a specialist dealership.
How has the design of the pre-decimal penny changed?
The pre-decimal penny always depicted a portrait of the Monarch of the time on the obverse, like all other British currency.
The silver pre-decimal penny featured a reverse design of a crowned I, which remained until the issue of the copper penny. The design of the crowned ‘I’ changed slightly through the 18th century with the design of both the crown and the letter ‘I’ altering slightly.
When the copper pre-decimal penny was issued the reverse design was changed completely to the iconic image of Britannia. Different variations of the design were used during this time but all of them featured Britannia on the reverse with a portrait of the monarch of the time on the obverse.
This trend would continue into the bronze penny until decimalisation. All bronze pennies also feature Brittania on the reverse with a portrait of the monarch on the obverse. | https://thecoinexpert.co.uk/blog/a-guide-to-the-pre-decimal-penny/ |
User experience (UX) designers asked to justify return-on-investment (ROI) for UX activities often rely on published ROI studies and UX metrics that do not address decision makers’ concerns. With a little knowledge of business strategy and metrics and an understanding of their own value to an organization, UX practitioners can (a) identify the financial and non-financial metrics and goals that drive change in their organizations, (b) draw a clear picture for decision makers of the connection between their value and the company’s goals, and (c) demonstrate a positive return on investment in UX activities.
Practitioner’s Take Away
This article discussed four aspects of metrics and strategy:
- Traditional measures of return on investment (ROI) in UX activities often are not connected to the concerns of business executives.
- UX practitioners can use metrics strategically by identifying business objectives that drive company action and by making explicit their own contribution to those objectives.
- Balanced Scorecard (BSC) is a well-articulated approach to understanding how to describe strategy and metrics. UX practitioners can use the BSC approach for direction on how to align their activities to company goals.
- Metrics are not the only way to illustrate ROI on UX activities—many companies are touting design and design process as the solution to many business problems, not just user experience.
Introduction
Good design produces customer value. It seems self evident to designers that usable applications and products that connect with customers’ needs and wants are central to the value of their company. Unfortunately, user experience (UX) practitioners sometimes struggle with getting decision-makers in the organization to see the connection between user experience and customer value. Quantifying one’s financial return on the investment made in UX seems an impossible task.
Within most companies there is a significant constituency that places enormous weight on metrics and meeting objective goals. Designers should know this constituency and learn how to approach them. Think of this exercise as the user-centered design of your own services to your internal business partners. If you do, there are significant opportunities for usability professionals with a background in measurement to favorably influence managers’ decisions that affect their companies’ UX practice.
Are ROI Metrics for Usability Unusable?
In 2004 Dan Rosenberg wrote a provocative article entitled “The Myths of Usability ROI.” While praising the groundbreaking book on usability return on investment (ROI) by Bias and Mayhew (1994), Rosenberg pointed out a large number of shortcomings in the literature published since 1994 on usability ROI.
- The lack of empirical data that support ROI claims for usability.
- ROI studies of usability ignore other contributing factors to product improvement.
- Overly simple ROI calculations for usability don’t address executives’ concerns.
- Studies don’t weigh the ROI for usability activities against other investments.
According to Rosenberg, the “traditional ROI approach to defining and measuring the value of usability” doesn’t show the true value of UX activities (p. 23). In essence, typical UX metrics for ROI are unusable. As a corrective action, he proposed thinking strategically by tying one’s own UX activities to the Total Cost of Ownership (TCO) of a company’s products and services. This is certainly a strategic approach to showing ROI, but only if one’s company competes on TCO. However, the article raised a valid point: How do designers show the link between their own value and what matters to their company?
What Makes a Good Metric?
To answer that question, we must first make a digression. Most designers understand that some metrics are more valuable than others. Often, management will insist that everyone’s work “align with company goals,” usually expressed as metrics. However, not all metrics are worth aligning to. So, what makes a good metric? Price and Jaffe (2008) identify the following five qualities of a good metric.
- Strategic alignment. Alignment assumes that the company’s strategy is known by all, including management. Does the metric support the organization’s strategy?
- The metric drives action. Good metrics act as a target for employees to aim at. If the objectives stated in the metric aren’t met, then it’s understood by all that strong corrective action must be taken. Are data being reported that aren’t used to drive action? Then the metric should be retired.
- The metric is important to stakeholders. Someone besides you needs to care about the metric, namely, people with power who can make trouble if the objectives for the metric isn’t being met: customers, customer service managers, heads of departments, and so on.
- The people being measured can change things. Suppose a design group is being measured on customer satisfaction with online applications. Does the design group have the authority and resources to change not only the interface design, but also integration with the mid-tier, database connectivity, etc.? In short, can the design group change everything that affects the performance of the applications? If not, then the metric may be valid, but it needs to be shared with another group.
- There’s a process in place to change things. Most usability professionals have been told, at one time or another, “go ahead and run your test and write your report on the product before it ships. We’ll make changes later,” only to realize that “later” never arrives. What’s missing here? A process that ensures timely changes are made based on the metric collected.
UX practitioners must learn to distinguish between good and bad metrics, and be willing to advocate for better metrics. In fact, relying on published studies for evidence of ROI in UX activities is unconvincing to executives because they were not conducted in the context of one’s own company. Different companies value different metrics. What UX practitioners need are not more published studies conducted at other companies, they need to learn how to collect the right UX data and derive metrics that demonstrate strategic value within the context of their own companies.
To Rosenberg’s point, a good business metric keeps a company focused on the right things, and helps executives make sound decisions. We now have a definition for the strategic use of metrics. Strategic thinking means (a) understanding UX’s value to the company, (b) identifying metrics that drive company—or department-wide decisions, and (c) drawing a clear and obvious connection between one’s measurable value and a company—or department-wide metric.
A Strategic Approach to Business Metrics: The Balanced Scorecard
Financials, of course, are a company’s preferred and best-understood metric, and one to which all other metrics should tie to. Much of strategy, product selection, and service offerings are presented in dollars, as in the promise of future revenue. The known shortcoming of financial metrics is that they are backward looking measures, i.e., they don’t necessarily predict future performance.
One well-known approach to developing strategy and effective metrics is the Balanced Scorecard (BSC; Kaplan & Norton, 1996). Its use is widespread among large companies; a survey of 1,430 executives globally revealed that 53% of companies use some form of the BSC (Rigby & Bilodeau, 2009). BSC addresses the shortcomings of financial measures by introducing three additional categories of measures: customer perspective, internal business perspective, and innovation and learning perspective. Strategy maps tie the four categories of measures together in the company’s theory of how each category of measures contributes to financial performance. All four perspectives provide UX designers with opportunities to contribute to strategic decisions.
Companies that adopt BSC recognize that return on investment doesn’t apply only to financial measures. They understand that investments can yield important returns in customer satisfaction, people, and process. This is invaluable information for usability practitioners who are trying to position their services within an organization, who must likewise realize that their own value may not tie directly to a company’s financials but to another category of measures.
If your company employs BSC, study the measures. If your company doesn’t employ BSC, then look at your company’s metrics for tracking performance. You will need to tie your performance to these measures, or introduce one of your own if possible. The following discussion refers to BSC, but can be used with other schemes, as long as the scheme contains metrics that are managed to—that is, the metrics satisfy the criteria for effective metrics.
Financial Perspective
Returns on investment are typically expressed in dollars. Dollar amounts are calculated using Net Present Value (NPV): the difference between an initial investment in a project and the cash flow the project generates, accounting for the time value of money. NPV is one of the main criteria used in selecting among new projects and services. NPV avoids the problems of the commonly-used payback period metric, which doesn’t consider cash flows after the project has paid for itself.
The data that estimate NPV for a given project are usually generated by the marketing department or another business unit to justify a project initially. Obtain the estimates. Then estimate the expense needed to conduct analysis, design, and usability testing for the proposed project. These estimates are fairly straightforward if your company keeps historical records of project costs per task. Then estimate the percentage increase in sales per year (or savings per year if improving the use of customer self service) if the product is designed and usability tested properly. These data are harder to find.
To illustrate the use of NPV, assume that a project with a significant user interface is initially estimated to cost $90,000 and will produce $40,000 per year in sales for three years. Convert these data into NPV by calculating NPV using these data with the NPV formula, as shown in Table 1.
NPV = -C0 + C1 / (1 + r) + C2 / (1 + r)2 + … + Cn / (1 + r)n
where
C0 = initial investment
C1 = cash flow in Year 1
C2 = cash flow in Year 2
r = company’s required rate of return on investment, or discount rate
Cn = number of years in the calculation
The term r, the discount rate, represents the percentage of profit the proposed investment must meet to be considered for funding. Proposals whose returns exceed the discount rate are then compared against others with the same level of risk for consideration. Companies that use NPV to compare projects determine their own discount rates.
Table 1. Net Present Value Without UX design
To show the benefit of usability on NPV, add the estimated cost of doing usability on the project to the initial investment, assumed here to be $4,500. Add the increase in cash flow to each year’s cash flow, assumed here to be 6%. Recalculate NPV, as shown in Table 2.
Table 2. Net Present Value With UX design
The decision then is between a product with or without design and usability testing. For best results, focus on projects with a significant user interface that handles large numbers of transactions, such that a small percentage increase in success rates contributes a significant financial return to the company. If there is a positive difference then the decision to add usability to the project is nearly pre-ordained. If the difference in NPV is great enough, it could mean the difference between a project being selected or not. For more on the use of NPV in usability studies, see Karat (2005).
Note that using NPV correctly answers two of Rosenberg’s criticisms about usability ROI metrics. First, the discount rate is determined by the company, typically the finance department, and is an important criterion used by management to fund a project. Second, NPV allows managers to compare the proposed investment with other investments, in this instance, comparing returns with and without an investment in usability.
Customer Perspective
The BSC customer perspective dimension answers the question "How do customers see us?" This is an obvious place for user experience metrics. Customer satisfaction, customer retention/defection, and time to delivery are examples of metrics for customer perspective. These metrics are usually owned by customer service or marketing functions, but the usability function should own a share of responsibility for a significant metric in the customer perspective. The metric should drive decisions, as discussed in regards to the What Makes a Good Metric section. For example, a customer metric that can drive improvements to an existing application is self service usage. It could be a percentage change in usage, as measured directly by automated reports or indirectly by customer survey. The wide range of customer experience measures were surveyed by Tullis and Albert (2008). From these metrics, select a candidate measure and work with customer service or marketing to include an appropriate UX metric in the customer metrics. If it takes its measures seriously, the company will need to devote significant, appropriate resources to the hitting goal stated in the metric.
Internal Business Perspective
These measures drive process improvement projects by answering the question "What must we excel at?" Driving the design of service improvement is one of best opportunities for designers and usability practitioners to gain visibility. Price and Jaffe (2008) gave an excellent example of how a good metric forced a great deal of process improvement at Amazon.com. Amazon was aware that it had a problem with the large number of calls to its call center despite its extensive web-based self service. It knew that the key to profitability was in persuading people to self serve. The company settled on cost per order (CPO) as a central metric. Obviously, calls handled by agents in the call center added to the cost of an order. Trying to reduce CPO pushed the company to discover the root cause for every call and to address each issue they found. It made a lot of decisions easier: improving the usability of its web site, simplifying the order process, and adding information on the site in the form of user-generated recommendations that were not available from its agents (Price & Jaffe, 2008). In short, the CPO metric drove decisions that required large investments be made to improve the customers’ user experience on the site.
Innovation and Learning Perspective
These metrics answer the question "What capabilities are needed to support the customer perspective and the internal business process?" That is, what does the company need to be able to do to meet its goals for customer satisfaction and process improvement? Aggressive customer satisfaction and process improvement goals nearly always require increased UX skills and capacity, as demonstrated in the Amazon example. The UX practitioner can act strategically by discussing with management the UX department’s needs for skill development, staffing increases, and increased visibility in the organization. If possible, put a “UX improvement” metric on the scorecard. The aim is to position UX as a valuable competency for meeting scorecard objectives. If the objective is on the scorecard, it will be tracked and decisions made based on outcomes.
Sidebar: How Effective Is Balanced Scorecard?
A large-scale study of companies that employ BSC showed some limited support for the effectiveness of BSC as a strategic tool (Malina & Selto, 2001). A primary finding was that it was difficult to isolate the contribution of a company’s approach to strategy from other factors such as the company’s ability to execute and the market it was competing in. Indeed, that is the very thing that makes it so hard to isolate the financial contribution of UX to an individual product or project: the entire team has to execute properly for the project to succeed.
An Application of Strategy to UX Metrics
The UX team at Autodesk, makers of the popular AudoCAD design software, was interested in determining their customers’ satisfaction, ease-of-use, and relevance of some of the important features in one of their software products, as well as overall product quality and value (Bradner, 2010). In addition to measuring customer satisfaction with the product, they also did something more. They included in their survey a question about whether the customer would recommend the product to a friend or colleague. This question is the basis of the Net Promoter score, a measure that is becoming widely recognized in marketing and sales departments as an important metric for customer satisfaction.
The Autodesk team correlated each individual score with the Net Promoter score to determine which aspects of which features or combination of features best predicted the Net Promoter score. They discovered that overall product quality, product value, and product usability were the “key drivers” of Net Promoter, rather than satisfaction with an individual feature. By tying their UX metrics to a metric that marketing and sales cared strongly about and educating their marketing and executive management about the connection between the two sets of metrics, the Autodesk team demonstrated to management the return on investment in their activities which helped themselves focus on those activities that improved their product’s overall value.
The Trouble With Metrics
In March 2009 Google head designer Doug Bowman resigned his post because he was forced to justify every design decision using metrics. A plaintive blog posting explained his reasons for quitting (Bowman, 2009). Metrics are not a substitute for expert judgment in design and aesthetics, and the Google episode demonstrates a case of trying to apply measurement to the wrong thing. People make decisions, but metrics are only data. Business decisions, as with design decisions, are based not only on data but on instinct, craft, emotion, and a compelling story.
Sidebar: A Bad Metric That Call Center Managers Love
An example of a bad metric is call containment rate for self service interactive voice response (IVR) telephony systems (Leppik, 2006). Containment is measured by the number of calls ending in the IVR divided by the number of calls ending in the IVR plus the calls routed to a live agent. Focusing on containment of calls “in the IVR” pushes managers to make bad decisions regarding IVR design, e.g., disabling the zero key or otherwise making it difficult for callers to reach a live agent. This often occurs over the protests of designers who know that people hang up for a variety of reasons, including getting lost or stuck in the IVR.
More appropriate metrics would include cost per call answered (both for live and in automation) and customer satisfaction levels for each channel. Then the designers and the business can discuss what should be automated and what calls should be handled by agents, and how the services of each can be designed to meet the objectives in each metric.
Sidebar: Metrics Are Very Political
There’s no getting around it. Putting a metric on the company’s scorecard or other top-level list of metrics is political, and so requires a good deal of support and savvy. If you don’t have the political clout to put a metric on the scorecard, make sure your own unit’s metric is (a) closely aligned in support of a valid scorecard metric and (b) pushes you to do things you really want to do. So, if a scorecard metric is “increase customer satisfaction scores for all web applications by 20%,” then make sure to have a unit metric in place to “practice full user-centered design on all significant customer-facing applications.”
Thoughts on Design Thinking
Metrics-driven managers represent a large constituency in many companies, the consumers of usability services. They aren’t, however, the only constituency. Many companies have discovered design as a differentiator not just for products and services, but for business process and strategy as well. “Design thinking” is being touted as a valuable complement to traditional linear, computational approaches to process improvement, strategy, and communications (Hopkins & Guterman, 2009). Unfortunately, many proponents of design thinking tend to recommend only that managers “think like designers,” rather than give designers a seat at the strategic table (Guterman, 2009).
A more effective way of promoting the value of design within companies, or “unleashing the power of design thinking," puts designers in role of training others in their organization on the application of design to business problems (Clark & Smith, 2008). UX designers who take the time to understand business language, business metrics, and strategy will get noticed by executives as business professionals who can translate and make explicit the linkage between user experience and business outcomes. Being able to understand and sell the value of both design and metrics allows the UX practitioner to move “towards modes of analysis more in sync with the thinking of executives who have to conceptualize product value strategically” (Rosenberg, 2004, p. 29).
References
- Bias, R.G., & Mayhew, D.J. (Eds.) (1994). Cost-justifying usability. San Francisco, CA: Morgan Kaufmann.
- Bowman, D. (2009). Goodbye, Google. Stopdesign blog entry March 20. Retrieved October 15, 2010, from http://stopdesign.com/archive/2009/03/20/goodbye-google.html
- Bradner, E. (2010). Recommending Net Promoter. Autodesk blog entry November 17. Retrieved November 18, 2010, from http://dux.typepad.com/dux/2010/11/recommending-net-promoter.html
- Clark, K., & Smith R. (2008). Unleashing the power of design thinking. Design Management Review, 19, 8-15.
- Guterman, J. (2009). How to become a better manager…by thinking like a designer. MIT Sloan Management Review, 50, 39-42.
- Hopkins, M.S., & Guterman, J. (2009). From the editors. MIT Sloan Management Review, 50, 10.
- Kaplan, R.S, & Norton, D.P. (1996). The Balanced Scorecard: Translating strategy into action. Boston, MA: Harvard Business School Press.
- Karat, C. M. (2005). A business case approach to usability cost justification for the web. In R.G. Bias and D.J. Mayhew (Eds.), Cost-justifying usability, 2nd ed., (pp. 103-141). San Francisco, CA: Morgan Kaufmann.
- Leppik, P. (2006). The Customer Service Survey: Developing metrics (part 1: bad metrics). Vocalabs blog entry Dec. 5. Retrieved November 9, 2010, from www.vocalabs.com/blog/developing-metrics-part-1-bad-metrics
- Malina, M.A., & Selto, F.H. (2001). Communicating and controlling strategy: An empirical study of the effectiveness of the Balanced Scorecard. Journal of Accounting Management Research, 13, 47-90.
- Price, B., & Jaffe, D. (2008). The best service is no service. San Francisco, CA: Jossey-Bass.
- Rigby, D., & Bilodeau, B. (2009). Management tools and trends 2009. Bain & Company, Inc.
- Rosenberg, D. (2004). The myths of usability ROI. Interactions, 5, 23-29.
- Tullis, T., & Albert, B. (2008). Measuring the user experience: collecting, analyzing, and presenting usability metrics. San Francisco, CA: Morgan Kaufmann. | https://uxpajournal.org/a-strategic-approach-to-metrics-for-user-experience-designers/ |
This is a high quality organic woven cotton fabric, from 100% cotton which weighs 150 gsm and is printed and produced by a GOTS certified manufacturer using eco-friendly digital printing. The watercolor artprint is of traditional Swedish Lucia buns in yellow and ochre colors on a brown background, designed from an original watercolor painted by Anna Hedeklint.
This woven cotton poplin is approximately 150 cm wide (55 inches) and is available in four different lengths: 50 cm (20 inches) 1 meter (40 inches approx) 1,5 meters (59 inches) or 2 meters length (78 inches).
This fabric is perfect for all kinds of sewing projects such as table linen, curtains, napkins, even dresses, hats, shirts, ties, tea-towels and lots of baby apparel.
Machine wash in warm water, 40 degrees, using phosphate-free detergent. Do not tumble dry. Iron on the reverse side of the fabric. | https://www.annahedeklint.se/webshop/blue-anemone-whz5g-kdmjd-axwh8-9r2g3 |
For given gene list, the viewer is able to quickly list all gene names which is a straight forward feature. This manual will mainly focus on Related Genes/Terms Search algorithms provided by this viewer as well.
Any given gene is associating with a set of annotation terms. If genes share similar set of those terms, they are most likely involved in similar biological mechanisms. The algorithm adopts kappa statistics to quantitatively measure the degree of the agreement how genes share the similar annotation terms. Kappa result ranges from 0 to 1. The higher the value of Kappa, the stronger the agreement. Kappa more than 0.7 typically indicates that agreement of two genes are strong. Kappa values greater than 0.9 are considered excellent.
Figure: A hypothetical example to detect gene-gene functional relationship by kappa statistics. A. The all-redundant and structured terms are broken into ‘independent’ terms in a flat linear collection. Each gene associates with some of the annotation term collection so that a gene-annotation matrix can be built in a binary format, where 1 represents a positive match for the particular gene-term and 0 represents the unknown. Thus, each gene has a unique profile of annotation terms represented by a combination of 1s and 0s. B. For a particular example of genes a and b, a contingency table was constructed for kappa statistics calculation. The higher kappa score (0.66) indicates that genes a and b are in considerable agreement, more so than by random chance. To flip the table 90 degrees, the kappa score of term-term can be achieved, based on the agreement of common genes (not shown).
The minimum number of terms in common between query gene and candidate gene for the consideration in the searching algorithm. For most cases, it should be above 3 for the statistical reasons.
The minimum Kappa value for the consideration. The higer of threshold, the stricter of the search. Default is 0.25 and setting range from 0 to 1.
The result of related genes to the query gene.
The term numbers of agreement and disagreement between query gene and hit gene. These numbers are used to calculate the agreement score (Kappa or Fisher Exact).
The Kappa Statistic is a chance corrected measure of agreement between two sets of categorized data. Kappa result ranges from 0 to 1. The higher the value of Kappa, the stronger the agreement. If Kappa = 1, then there is perfect agreement. If Kappa = 0, then there is no agreement. For further details about Kappa statistics please refer to "A coefficient for agreement of nominal scales" Educational and Psychological Measurement 20: p 37-46.
After reducing participating gene information to its most basic level using a binary mode (1 represents ‘Yes’ and 0 is ‘No’), term A and B share the same participating genes 1, 3, and n, in contrast that term A and C only share gene 3. Obviously, the relationship of term A-B is stronger than that of term A- C.
Kappa for Term A-B = 1; Kappa for Term A-C = 0.2; Therefore, the relationship of A-B is much stronger than that of A-C.
The minimum number of genes in common between query term and candidate term for the consideration in the searching algorithm. For most cases, it should be above 3 for the statistical reasons.
The minimum Kappa value for the consideration. The higer of threshold, the stricter of the search. Default is 0.25 and setting range is from 0 to 1.
The result of related terms to the query term.
The gene numbers of agreement and disagreement between query term and hit term. These numbers are used to calculate the agreement score (Kappa or Fisher Exact). | https://david.ncifcrf.gov/helps/linear_search.html |
The Iranian Oral History Project was launched at Harvard's Center for Middle Eastern Studies in the fall of 1981 and continues to provide scholars studying the contemporary political history of Iran with primary source material consisting of personal accounts of individuals who either played major roles in important political events and decisions from the 1920s to the 1970s or witnessed these events from close range.
The project has recorded the memoirs of 134 individuals, comprising approximately 900 hours of tape and 18,000 pages of transcript at a cost of over $800,000. The project has been generously funded by a large number of supporters including the National Endowment for the Humanities and the Ford Foundation.
The collection embodies the most comprehensive chronicle of eye-witness reports of modern Iran by some of the key figures who defined her history. Microfiche of the collection has been purchased by libraries of major universities in Canada, England, Germany, France, and the United States.
Please visit the Harvard Iranian Oral History Project website to access the digitized recordings. For inquiries about this project please contact Habib Ladjevardi. | https://cmes.fas.harvard.edu/projects/iohp |
Christian group plans US$120m Old Testament theme park
A Christian Group is looking to push ahead with plans for the construction of a US$120m (£71m, €86.5m) Old Testament theme park, based around a central Noah’s Arc structure.
The 800-acre attraction, known as the ‘Ark Encounter’, is set to feature a recreation of a village prior to the biblical floods, as well as a Tower of Babel housing an audio-visual effects theatre.
The site will also be home to a ride that will give visitors the chance to explore the 10 plagues of Egypt, with the new attraction planned for a site 40 miles from Petersburg, Kentucky.
The group behind the plans, Answers in Genesis (AiG), is also responsible for a Creationist Museum in the United States.
Its developers believe the park could attract two million visitors during its opening year, with the facility potentially bringing in US$119m (£71m, €85m) over a 10-year period.
Construction for the first phase of the park is expected to cost around US$70m (£41.8m, €50.4m) to complete, with the attraction’s ark offering – potentially the biggest timber-frame structure in the United States – costing US$24.5m (£14.6m, €17.6m) alone to build.
Breaking of the ground on the site is expected in May this year, with its developers looking to have the theme park open to the public by 2016. | https://www.cladglobal.com/CLADnews/architecture_design/Christian-group-plans-US$120m-Old-Testament-theme-park/308469?source=related |
Play the game below and listen carefully for the sounds and make a smoothie.
https//www.phonicsplay.co.uk/resources/phase/1/super-smoothie
|
|
Maths
Counting skills
|
|
Sing a song of numbers
(to the tune of sing a song of sixpence)
Use your fingers when counting 1-10
Sing a song of numbers
Count them one by one
Sing a song of numbers
We’ve only just begun
1,2,3,4,5,6,7,8,9,10
And when we’ve finished counting them, we’ll count them once again
1.2.3.4.5.6.7.8.9.10
Get some chalks and make a big hopscotch grid on your yard if you can. Throw a counter or small stone on the grid and say the number, then do that many jumps and hops to get to the number.
|
|
Topic
Collage using different materials
|
|
Make a collage of a colourful jelly like those in the story. Use coloured paper, card and glue and make it have different flavours. Can you make it stand up with a flap?
|
|
Other
|
|
Please copy over or write your name daily. | https://www.hasland-inf.derbyshire.sch.uk/wednesday-8th-july/ |
Are you looking for a creative way to add a decorative element to the roof over your deck? A fun DIY project is to make a homemade lantern out of metal. Metal lanterns are easy to make and do not require any welding. This project will only require a bit of elbow grease.
Any variety of decorative perforated metal can be used to make a set of metal lanterns. The one thing you will need to do is measure and cut the metal to the size you want for your lanterns. This can be done by using wire cutters or a metal cutting tool.
Make sure to wear a heavy pair of work gloves when creating metal lanterns. The reason is you need to bend the metal and the edges are likely to be sharp. However, a metal file may be used to file any edges that are dangerously sharp. | https://diygiftworld.com/diy-metal-lanterns-that-dont-require-welding/ |
Summer is the ideal season for people to visit and take in the breathtaking scenery along the coast. Furthermore, the feelings and experiences felt at the beach during the summer are always fantastic. Several scenes and experiences can be seen and felt at the beach during the summer. These include the plantation along the beach and inside the sea, the animals, the waters, and the people found on the beach.
As night fell, the setting sun’s red rays lit up the sky above the western horizon, and I could see an oil tanker making its way across the sea just on the horizon. Soon the sun disappeared below the horizon and the sky turned dark but my two friends and I sat on the beach dazing at the place where the sun went down, Sunsets are mesmerizing as we discovered. Only when the mosquitoes started coming in great numbers were, we brought back to reality.
We stood up and walked over to a small pile of wood that we had created earlier. We could only see shadows in the dark. Francis, one of my friends, had a torchlight. He activated it to show the way. The night creatures were already preoccupied with their tasks. I could hear the shrill cries of cicadas and other insects on our left, where the land was. The waves broke gently on the shore to our right, sending up sprays of phosphorescent surf.
The sounds and sights of nature were breathtaking. The only blemish on the otherwise perfect natural surroundings was the sound of passing traffic on a nearby road. We were possibly the only other blemishes. We had a torchlight on and were about to light a bonfire.
Nonetheless, I proceeded to pour some kerosene onto the pile of wood and light a match to it. The flames grew slowly but steadily. We were soon basking in the orange glow of the bonfire. Salleh, my other friend, brought out the snacks and drinks from a bag. We’d come to the beach to unwind and have fun.
A bonfire can be mesmerizing too and so we spent a good two hours eating, drinking, talking and singing around it. A number of people appeared and we invited them to share in our little revelry. We did not know any of them, but it did not matter AU knew was that we enjoyed ourselves in the warm glow of the bonfire which was a far cry from the cold stares of people on an ordinary street. However, all good things must come to an end. The fire slowly died down and darkness regained its mastery. We said goodbye to our visitors and cleaned up the fireplace.
Then we walked along the beach to where Salleh had parked his car a short distance away. Crabs, both large and small, scurried away as we approached. A gentle breeze rustled among the coconut palms. The black sky was filled with gleaming stars. It felt good to be alive.
The deep waters of the sea provide a breathtaking view for anyone who looks out at the sea. On the shore, water is slowly running out. Small waves are also seen crashing on the shoreline. The sea’s surface appears blue in color.
However, some areas are seen to have the spectrum caused by the sun’s refracted rays. There are high waves deep within the sea that lift boats up and down dramatically. The clear and blue shimmering waves of the sea reflect the hot sun’s rays. The cool breeze that comes from the seawaters is pleasant. Finally, we arrived at the car. We threw our belongings in the trunk and drove ourselves. Salleh started the car, and we were soon on our way home after a wonderful evening on the beach. | https://assignmentpoint.com/an-evening-on-the-beach/ |
What's wrong, are the in board pylons supposed to be longer?
The fuel order will be given in pounds or kilograms. The aircraft refueling panel will have a digital preset for each tank also in pounds or kilograms.
The refueling operator will annotate the amount of fuel dispensed on the aircraft load sheet (given to the captain at the conclusion of fueling) in either pounds or kilograms. This is done by noting the total fuel weight shown on the refueling panel at the beginning and end of refueling and subtracting the beginning reading from the final reading.
BUT the refueling truck (or bowser) meters will show the amount dispensed in either gallons or liters, and the final paperwork that the refueler turns in to his office (for billing purposes), will be in one of those two units of measurement.
The pilot’s fuel request is not given in gallons, because the weight per gallon is not exact. It varies with the temperature and the specific gravity of the fuel in the truck or underground tanks. That can vary from day to day depending on the supplier of the fuel.
In general, Jet-A weighs 6.7 pounds per gallon, but depending on the API index of a particular load of fuel in the truck, sometimes it may be a bit more than 6.7 pounds, and sometimes a bit less.
With high-density fuel, 10,000 pounds might equate to 1,470 gallons, while low-density fuel might equate to 1,515 gallons.
Each aircraft fuel tank contains at least one compensator probe that measures the actual fuel density and applies a correction factor to the quantity displayed on the aircraft fuel gauges.
Typically the only aircraft where the pilot would request fuel directly in gallons or liters would be on smaller biz jets without single point refueling - where fuel has to be dispensed directly into the tanks with overwing nozzles, much like refueling an automobile.
Right, and I get that. I guess I wasn't understanding why Wilhelm responded to by post the way he did. The issue I outlined in my post was if I am requesting 248,000 pounds of fuel to be loaded, I should NOT get only 230,000 pounds of fuel loaded. That's an 18,000-pound difference.
I haven’t tried putting that much fuel on board yet, and when I have loaded before, I just entered the total on the fuel page of the FS Actions menu, and have gotten the correct amount. (Biggest load so far was 150,000 pounds).
I’m about to install the new update via OC. Hopefully that will have fixed any mid-loading issues.
Hmm....interesting....although to be fair, that was at one extreme with the fuel density at 6.30, but even at 7.20, I could never get it to be exact, it would always be slightly less than what I asked for.
I didn’t realize you could specify the fuel density in refueling. I usually just do an instantaneous refueling through the FS Actions fuel page, though I know you can do a real-time gradual refueling for more realism, but I haven’t explored that option.
With extremely low-density fuel, it’s possible that you might run out of fuel tank physical capacity (in gallons) before reaching a requested fuel load (in pounds) when requesting a tank to be filled completely full.
During refueling, when the level sensor in a given tank detects that it is completely full, the fuel computer will close the tank inlet valve to prevent any more fuel from being added. Otherwise the fuel would likely overflow out of the tank vents onto the ground.
I can't find a lot of real life bird eye views of the 747-8's wing. How do you know they're supposed to be longer?
And please, don't present evidence as eye-witness accounts, random YouTube videos and/or pictures found on the Internet. Please present actual verifiable empiric evidence, such as data references or similar.
This caught my eye. I've used Coriolis flow meters in applications where I need mass flow rather than volume flow and they make excellent transfer of custody instruments. I've never ran into a "probe" that would measure a fluids density. How does it work?
I downloaded the new update and tested the 747-8..
2. panel lighting issue is fixed.. even during daytime, the integral panel lighting is bright and makes everything readable..
3. When I turn the battery & Ext.Power ON, and when all the displays power up, the FPS drops by 7-8.. so at cold and dark, the FPS is 34 and when displays are powered up, its around 27-28.. and remains that way all the way until engine start.. anybody else observed this?
Would you kindly take a screen shot of the -8 from the same angle as the link?
From looking up 747-8 diagram on google, the Pylons on the PMDG look as they should be. There are tons of top down views and drawings to show it's correct. Do you honestly think that PMDG would make that big of mistake?
The installations declined because it could not find the 747-8 exe file?
By default, it's set to 6.70. Supposedly, the fuel density changes on its own, but I've never seen it do that, and when I initially submitted the support ticket reporting the fuel loading discrepancy, they had said it might be related to the fuel density, and thus, a non-issue. I ended up testing the fuel density to the extremes, just to be sure. The end result was even with the highest possible fuel density, I could never get the exact amount of fuel I entered, it was always a little less. At a low fuel density, the differences between the input fuel request and the actual amount of fuel loaded was drastic.
I'm aware that with a low fuel density, it's possible to run out of fuel tank space before reaching the requested fuel load, as this was mentioned in the introduction manual. However, the highest you can load under idea conditions is somewhere close to 383,000 pounds for the 747-400. My loading of 248,000 pounds is certainly nowhere near that limit, which makes me wonder why it only loaded around 230,000 pounds in that instance with a low fuel density. Do keep in mind that all this is happening with the instantaneous refueling via the FS Actions fuel page. I never did test the real-time refueling option since I didn't have time to really play around with it much after the -8 came out (as it is, I still haven't even flown it yet).
Aircraft fuel quantity probes are basically capacitors. A small metallic cylinder nested inside of a larger cylinder by insulating standoffs. The probes are mounted vertically in the tanks, and are open at the top and bottom.
If you are familiar with the properties of capacitors, you will know that the capacitance is dependent on three factors: the area of the two plates (in this case the inner and outer cylinders), the distance between the plates (which is fixed), and the dielectric constant of the material between the plates. In this case, the dielectric is the fuel itself.
When the probe is fully immersed in fuel, it’s capacitance (measured in picofarads) will be at maximum. As the fuel level drops, air enters through the top of the cylinder, while fuel flows out the bottom, so that at anything less than “full” the dielectric is partially fuel, and partially air, which causes the capacitance to decrease. The lower the fuel level, the less the capacitance.
A very low voltage (1 volt peak-to-peak) AC waveform at several hundred hertz, which is generated by the fuel quantity computer, is applied to one plate (cylinder) of each fuel probe, and a signal return line goes back to the fuel computer from the other cylinder of each probe. The magnitude of the returning signal is directly proportional to the probe capacitance (less capacitance = less signal), and the capacitance is directly proportional to the amount of fuel between the two plates (cylinders) of the probe.
The fuel quantity computer determines current fuel quantity by measuring the magnitude of the AC signal returning from each probe, which decreases as fuel level drops. A specific voltage equates to a specific amount of fuel.
There are several types of compensator probes. Most are also capacitors, using fuel as the dielectric, but mounted lower in the tank so as to be always fully immersed in fuel. This probe has its own separate AC input and output line going to the fuel computer. The dielectric constant of the fuel varies with the density, which is dependent on the ratio of the various hydrocarbons in the mix, and also the temperature. For any given density, the return signal will be of a specific known magnitude. Lower density = less signal. That value is used by the fuel computer to apply a correction factor to the quantity measured by the standard probes.
TL:DR Each main fuel probe is a variable capacitor, which produce a specific output voltage directly proportional to how deeply the probe is immersed in fuel. The compensator probe is a fixed capacitor which produces a specific output voltage directly proportional to fuel density.
The fuel computer translates the voltages from all the various probes into quantity readouts by mathematical magic.
Yeah, The density setting is right there on the fuel loading page right in front of my nose. Just never noticed.
So far, my loads have been very accurate. I’m doing a 6-hour flight right now in the standard 400F. I requested 160,000 pounds, and got just a few pounds less - something like 159,980, which I assume is due to rounding.
I’ll try a full load on my next flight. I’m using the new update just released.
Right. What I'm saying is prior to the initial update to the -400 when the -8 got released, if I requested 40,000 pounds, I got 40,000 pounds rather than 39,994. Start lowering the fuel density, and the difference starts to become more noticeable at the higher fuel weights.
The capacitive system for aircraft fuel measurement has proven to be quite reliable over the years. It’s simple, and has no moving parts. The probes rarely fail - when they do it’s usually because they have been severely contaminated by water or biological growth in the fuel.
The probes are wired in series/parallel. The AC input to all the probes are wired In parallel from a common low impedence line coming from the FQMC, and the outputs are high Z with each probe having a discrete return line to the FQMC via coaxial cables.
Gotcha. Didn’t think the difference was significant at mid-range loads like I have been using, but I haven’t tried high loads yet, or varying the density. Though, with low density fuel, you could definitely have a situation where you will end up with less than requested weight when filling tanks to the brim.
I suspect so. The aircraft fuel tank is a much much cleaner environment that the product tanks in a refinery or truck terminal. And you never know how much salt water you're going to get from an ocean tanker bringing you the crude. It's nasty. In fact when someone says they only use high octane I tell them about how refineries must trap the vapors that are released from tank trucks, barges or ships when they fill the tank. Those vapors are compressed, cooled and condensed into a liquid and refiners will put that trash in the high octane product tanks because it will increase the volume of fuel without degrading the octane values. Don't ever think that something is better because it's more expensive haha.
Though, with low density fuel, you could definitely have a situation where you will end up with less than requested weight when filling tanks to the brim.
Yes, I am aware of that, because you're limited to volume at that point. But 248,000 is nowhere near close to the fuel tank volume limit. I was wondering if there's something in the coding that's trying to convert the input into gallons, and then giving you an output with the fuel density being taken into consideration, but that doesn't really explain why there's still a slight discrepancy even with the highest possible density. | https://www.avsim.com/forums/topic/542882-29sep18-pmdg-747-qotsii-update-3009019-released-via-oc/?page=2&tab=comments |
It is difficult to determine the exact history of shoelaces. Archaeological records of footwear are rare because shoes were generally made of materials that deteriorated readily. The oldest piece of leather footwear in the world known to contemporary researchers is a 5,500-year-old leather shoe that was found in 2008 in a cave in Armenia. The shoes were bound with “shoelaces” made of lime bark string.
There are other documented examples of medieval footwear with shoelaces dating from as far back as the 12th century, which clearly show the lacing passing through a series of hooks or eyelets down the front or side of the shoe.
Many contemporary shoes still use shoelaces which enable a user to distribute the tension across the top of the foot. The free ends of the shoelaces are typically tied into a bow shaped knot. However, this type of shoe fastening has several drawbacks. One problem is that the bow knot will often become inadvertently loosened and untied when walking. Another drawback is that some elderly and physically impaired folks do not have the luxury of being able to bend over and tie their shoes. Some folks do not even have the required manual dexterity to tie a knot of any type. Many children are unable to tie the laces. Still other folks may only have one hand which would also greatly handicap them when attempting to tie laced shoes. There are many lace winding mechanisms in the prior art but none are easily used by a handicapped person. The same problems hold true for some orthotic and prosthetic devices that require laces to be tightened.
| |
Q:
How to notate that a predicate holds for all elements in a set.
I want to notate that a predicate holds for all elements in a set. Currently I have the following:
$\forall (a,c) \in R^{k+1}(\exists b \in A((a,b) \in R \land (b,c) \in R))$
I want to say that for all ordered pairs (a,c) in $R^{k+1}$ the following applies: There exists an element b in A such that: the order pairs (a,b) and (b,c) are an element of R. Is this notation right or am I doing something wrong?
Edit: Thanks for the answers. I will change the parenthesis. This is indeed unclear. R denotes a relation, a set of ordered pairs. $R^{k+1}$ also denotes a relation. But it has different elements from R. Would switching from a, b and c to x, y and z make it easier to read?
Thanks very much in advance
A:
Your symbolization is fine ... though as amWhy comments, usually in logic we use for variables things like $x$, $y$, and $z$, as $a$, $b$, and $c$ are usually used to denote specific objects. So:
$\forall (x,z) \in R^{k+1} \: \Big( \exists y \in A \: \Big( (x,y) \in R \land (y,z) \in R \Big) \Big)$
EDIT
OK, now that I understand that you want $R^k$ to mean all paths of length $k$, you should recursively define $R^{k+1}$ as:
$\forall (x,z) \in R^{k+1} \: \Big( \exists y \in A \: \Big( (x,y) \in R^k \land (y,z) \in R \Big) \Big)$
In fact, this sentence says that only the paths that can be created as such are in $R^{k+1}$ ... not all and only such paths (e.g the statement would be true if $R^{k+1} =\{\}$. So, what you really want is something like:
$\forall x,z \in A \Big( (x,z) \in R^{k+1} \leftrightarrow \exists y \in A \Big( (x,y)\in R^k \land (y,z) \in R \Big) \Big)$
That is, there is a path of length $k+1$ from $x$ to $z$ if and only if there is a path of length $k$ from $x$ to $y$, and one more step from $y$ to $z$.
If you want to prove that if $R$ is transitive then for any $k$ all pairs in $R^{k}$ are in $R$, you want to prove that for all $k$:
$\forall (x,y) \in R^k : (x,y) \in R$
If you want to prove this using formal logic (i.e. Using a formal logic derivation), however, you will need to explicitly quantify over the $k$ as well though, so in both your definition as well as your goal you will have to add $\forall k$ at the beginning (or, if $k$ cannot be used as a variable, use a proper variable). And then you will also need some formalization of induction, where that formalization can deal with the $k$ in $R^k$ ... So maybe even define a function $f(R,k)$ instead of $R^k$ ... How technical/formal do you need to get?
| |
Discussion in 'Classic Boxing Forum' started by Mantequilla, Nov 20, 2009.
Underrated great fight.
Bobby Czyz v Dennis Andries
Round 1: 10-9 Czyz
Round 2: 10-9 Andries
Round 3: 10-9 Andries
Round 4: 10-10 Even
Round 5: 10-9 Andries
Round 6: 10-9 Andries
Round 7: 10-10 Even
Round 8: 10-9 Andries
Round 9: 10-9 Andries
Round 10: 10-9 Andries
Total: 99-93 Andries (actual scores: 98-93 and 96-94 both for Andries, with a dissenting 95-95 Even score for a majority win for Andries)
Was hoping for a good close scrap here, but really it was only a workman-like performance from Andries against a very subdued Czyz. There wasn't much to it. Andries simply outworked Czyz, who would catch Andries with a nice shot and then follow up with absolutely nothing. There was no fire in anything that Czyz did. Amazingly, what was on the line was a title shot at Virgil Hill, so you would think he would come out firing on all cylinders. Even more amazingly was, Czyz got the title shot anyway. But to be fair, whatever went on behind closed doors with negotiations, is unknown. And Andries ended up winning the vacant WBC title a month before Hill v Czyz took place anyhow, so I'm sure he didn't complain too much. At least the 2 of the 3 NJ officials kept some form of integrity by scoring it correctly for Andries. That Even scorecard showed bold partiality to the NJ based Czyz.
Julio Gervacio v Jose Valdez
I checked this fight out today only because I love watching Gervacio and because this fight was in it's entirety.
Round 1: 10-9 Gervacio
Round 2: 10-9 Gervacio
Round 3: 10-10 Even
Round 4: 10-9 Gervacio
Round 5: 10-9 Valdez
Round 6: 10-9 Gervacio
Round 7: 10-9 Gervacio
Round 8: 10-9 Gervacio
Round 9: 10-10 Even
Round 10: 10-8 Gervacio (Valdez docked a point for holding)
Total: 99-92 Gervacio (actual scores: 97-92, 97-92 and 97-94 all for Gervacio)
I was hoping for something better than this. Perhaps it was the southpaw/orthodox combo that just wasn't jelling. Still, I love Gervacio's style. His jab, his body shots, his sharp rights and lefts. The 9th and 10th rounds were about the best rounds, but I would actually say, give it a pass.
From our FOTW:
Jorge Paez v Troy Dorsey II
Let me just say this was a toughie. The workrate of Dorsey or the harder, accuracy of Paez.
Round 1: 10-9 Dorsey
Round 2: 10-9 Dorsey
Round 3: 10-9 Dorsey
Round 4: 10-9 Paez
Round 5: 10-10 Even
Round 6: 10-9 Dorsey
Round 7: 10-9 Paez
Round 8: 10-9 Dorsey
Round 9: 10-9 Paez
Round 10: 10-9 Paez
Round 11: 10-9 Paez
Round 12: 10-10 Even
Total: 115-115 Draw (actual scores: 116-112 Dorsey, 115-113 Paez and 114-114 Even for a Draw)
I'm sure if I score this again I would be looking at a different card. It was that tight. The entire fight fought inside a narrow pocket. Needless to say, I had no problem with the decision.
The clear winner was anyone who watched.
Keith Thurman vs Danny Garcia
Round 1: Thurman 10-9
Round 2: Thurman 10-9 (Close)
Round 3: Garcia 10-9 (Close)
Round 4: Thurman 10-9 (Close)
Round 5: Thurman 10-9
Round 6: Garcia 10-9
Round 7: Thurman 10-9 (Close)
Round 8: Thurman 10-9
Round 9: Thurman 10-9 (Close)
Round 10: Garcia 10-9
Round 11: Garcia 10-9
Round 12: Garcia 10-9
Final Score: Thurman 115-113
Good fight would be cool to see it get to run back since both are coming off losses was a close fight but Thurman landed the more effective shots and Garcia gave away too many middle and early rounds. Thurman is a very slick boxer with good power he has a lot of potentials I hope we see him come back soon I think this fight showed a lot of weaknesses though especially regarding his cardio which Manny used to his advantage.I think Thurmans the only guy at 147 right now besides Manny and maybe Porter who would have a good shot against Spence and Crawford.
Got a free afternoon so I decieded id score another fight
Thurman vs Porter
Round 1: Thurman 10-9
Round 2: Porter 10-9
Round 3: Porter 10-9 (Close)
Round 4: Thurman 10-9
Round 5: Porter 10-9
Round 6: Porter 10-9
Round 7: Thurman 10-9 (Close)
Round 8: Thurman 10-9
Round 9: Porter 10-9
Round 10: Thurman 10-9 (Close)
Round 11: Thurman 10-9
Round 12: Thurman 10-9
Final Score: Thurman 115-113
Very entertaining fight. played out the opposite of Garcia Thurman where Thurman came on late after rough early and middle rounds. The last round really came down to who wanted it more and Thurman wanted it just a bit more than Porter and Porter was pretty gassed. Very good fight big fan of both guys.
Julian Jackson vs Thomas Tate August 1st 1992
Jackson (31 years old) is 44-1 42 KOs and the reigning WBC middleweight champion this is his 4th defense of that title. Tate (27 years old) comes in at 24-1 19 KOs and rated 10 by the WBC. How I'm not sure, his competition is pretty lackluster and he got tagged with his first loss a couple fights before this. Commentary even makes a comment about how sometimes in boxing fighters move up the ratings like magic (lol)
Round 1
Feeling out round evolved to Jackson being aggressive and Tate ultra defensive (with Jacksons rep, for a good reason)
10-9 Jackson
Round 2
More of the same, Tate on his bicycle and Jackson leading, not much damage being delivered though.
10-9 Jackson
20-18 Jackson
Round 3
Better round overall, Tate lands a nice right hand and wakes up Jackson who begins landing some nice shots, but Tate is taking them. Tate more willing to trade and come forward now having some success landing his own power shots. Still I think Jackson edged this round
10-9 Jackson
30-27 Jackson
Round 4
Tate boxing great here and Jackson has taken his foot off the pedal. 1:30 into the round Jackson lands a nice right hand to Tate's head. Tate having a lot of success though and is probably winning this round. Never mind! Jackson catches Tate with several hard shots and drops him with about 10 sec to go. Tate gets up and looks OK though.
10-8 Jackson
40-35 Jackson
Round 5
First 90 seconds is all Jackson to the point it looks like its getting close to a stoppage. Tate throws almost nothing. Then at almost exactly the half way point of the round Tate takes over and hurts Jackson. Roles are completely reversed here as now Jackson is on his bike trying to clear his head and Tate all over him. Wow
10-9 Tate
49-45 Jackson
Round 6
Tate very aggressive taking the fight to Jackson. Jackson in retreat mode. Jackson finally starts looking like he's recovered somewhat and lands some hard body shots that Tate wants no part of and slows his assault. A nice right hand and body shot by Jackson. Tate dominated most of the round though
10-9 Tate
58-55 Jackson (Steve farhood chimes in and the unnoficial scorecard he has is the same as mine)
Round 7
Tate must of not liked those shots from Jackson in round 6 because he's back on the bicycle and staying away. Jackson looking more like himself and stalking Tate occasionally landing some hard shots
10-9 Jackson
68-64 Jackson
Round 8
Jackson now even more aggressive and starts putting his shots together better than he has since the beginning of the fight. Jackson also doesn't forget to land hard body shots during his assault. Tate blocks a right hand but Jackson hits so incredibly hard Tate stumbles back from it. 40 seconds left of a Jackson dominated round and Tate has Jackson hurt! Now its all Tate! Jackson trying to goad Tate to come to him dropping his left hand low but Tate is having none of that trap and starts jabbing Jackson to the head. Round ends. I could see Tate stealing this round on some cards, but Jackson dominated so much I had to give it to him
10-9 Jackson
78-73 Jackson
Round 9
Much slower pace. A breather round for both. Both are cautious and Jackson takes his foot off the pedal to try to regroup. I think this was a mistake on Tates part. Jackson lands almost nothing but a hard body shot while Tate lands some shots here and there
10-9 Tate
87-83 Jackson
Round 10
Jackson looks back to form and lands some nice shots on Tate. Tate having some success landing jabs and the occasional combo but Jackson is really pressing the action this round. Tate slips through the ropes but is OK. Tate lands a hard right to Jackson who answers back.
10-9 Jackson
97-92 Jackson
Round 11
Tate boxing the best he's boxed all fight. Defensive hecus sound but committing to his offense as well. Jackson lands a few jabs and power shots but Tate answers back whenever Jackson does
10-9 Tate
106-102 Jackson
Round 12
Jackson starts aggressive and tries to bomb Tate out of here but can't. Now Jackson looks exhausted and his right eye appears to be closing. Jacksons bombs are now slower and aren't landing with a lot of success. Tate is in control at the end of the round as Jackson is stumbling around completely spent
10-9 Tate
115-112 Jackson
Unofficial judge farhood has it exactly the same
Official judges have it
117-111, 116-111, 116-111
Crowd boos(lol)
Tate put up a great fight against the monster Jackson. If he would of not gotten clipped earlier in the fight my scorecard would of been a draw (officially he would of still lost). Jackson was done in round 12 and credit to him for surviving. He hadn't been that many rounds often in his career because.....well he's Julian Jackson. Commendable performances by both.
This would be the last title defense for Jackson, he would then defend against McClellan and get KOd and KOd in the rematch. He would go on to regain this title after Gerald vacated it, but would quickly drop it to Quincy Taylor.
Tate would go on for another 10ish years. He would actually regroup after this before losing to Jones Jr and then Rocky Gannon (haha what?). And Silvio Branco. Tate would have a career resurgence beating a string of prospects fringe contenders before fighting Sven Ottke twice for the IBF title and retiring.
Pepsi, great write-up.
Terrific write up. You put a lot more effort into this one than I did! But here's my card just for comparison:
Julian Jackson v Thomas Tate
1 10-9
2 10-9
3 10-9
4 10-8
5 9-10 (great recovery from Tate who hurt Jackson)
6 9-10 (quite a turnaround. Tate dominates Jackson who looks like he's run out of steam)
7 9-10 (close. Jackson seemed like he'd recovered but didn't land much)
8 10-10 (Jackson has Tate in trouble early but Tate fights back hard)
9 10-9
10 10-9
11 10-9
12 9-10
Jackson 116-112 Tate
Nice! Yea our cards are pretty close. Gutsy performances by both.
Junior Jones v Marco Antonio Barrera 2
Another terrific performance against Barrera from Junior Jones, who showed just the right combination of ring smarts and toughness to keep Barrera at bay.
Barrera was more accurate with his punch output but too sparing with it too, particularly in the second half of the fight.
The point deduction? A little harsh. I thought it was borderline and Jones didn't seem affected, suggesting it wasn't that low a blow. But even without that deduction, I had Jones winning.
Funny to think that this would be the end of Jones' best, whereas Barrera would come back (more than once as well) to have a Hall of Fame career.
1 10-9 (nice snapping jab from Jones)
2 9-10 (good work from Barrera towards the end of the round gives him the edge)
3 9-10 (Jones inaccurate with his punches, Barrera doing some solid work)
4 10-9 (Jones busier and landed the better punches)
5 9-10 (Barrera landed some nice shots to take the round)
6 9-10
7 10-9
8 10-9 (good round from Jones who had Barrera backing up)
9 10-8 (Jones winning the round plus a point deduction for a low blow for Barrera. Barrera strangely passive)
10 10-9 (superb performance from Jones. He is dominating the second half of the fight. To his credit, Barrera is standing up well to the right hand this time around but he's taking a fair few and not doing enough.)
11 10-9 (Barrera looking out of ideas, decides to abandon the boxing and exchange a bit. But Jones has the round again)
12 10-10 (scrappy round until the final exchange- both giving as good as they get. No clear edge to either fighter)
Jones 116-112 Barrera
The Junior Jones comeback train! He really did have Barrera's number at this time. His fight after this is one of my favorites to watch (vs Kennedy McKinney)
Roy Jones Jr. vs Thomas Tate May 27th 1994
Tate is about 29 years old and the IBF #1 ranked contender. How he is the number 1 ranked contender I have no idea. After his spirited effort against Julian Jackson, Tate went 5-0 mostly against journeymen, but did pick up a 10 round decision against often overlooked 41-6 Tyrone Trice.
Roy Jones is 25 years old, is 25-0 with 22 KOs. This is the first defense of his IBF title he picked up around 12 months prior by winning a decision against Bernard Hopkins which if I said it aged well, would be a massive understatement. Jones did have 3 fights between Hopkins and this one, all non-title. A KO over future multiple time world titilst Sugarboy Malinga, a shutout 10 round decision over 12-7-2 Fermin Chirino, and a KO over 25-12 Danny Garcia. (Can anyone fill me in or remind me why Jones was fighting so many non title fights around this time?)
Round 1
All Jones. Tate is trying to get going with his boxing but Jones is landing at will from multiple awkward angles. Dare I say I even see some jabs from Roy land. Jones is ludicrously gifted athletically and his reflexes are from another planet. On defense Jones blocks or dodges almost everything Tate throws at him with ease. Its like Jones can fight in fast forward and he can see Tate as if he were slomo. Dominating round by Jones
10-9 Jones
Round 2
Lightening fast left hook from Jones that Tate doesn't see and hes down and hurt. Its over that's it.
KO2 Jones
Jones was a freak of nature when he was younger and this fight highlighted why. Tate is not an all time great but he's a skilled fighter who less than two years prior extended Julian Jackson and gave a good account of himself. Here he was outclassed from jump street. Recommend if you want to see Roy closer to his athletic prime.
Tate would go on a slide after this losing to Rocky Gannon and Silvio Branco. But rebound with some wins over prospects and fringe contenders to get a couple shots at Sven Ottkes IBF Super middle title.
Jones next fight is his dominating win over James Toney and claiming another world title.
A couple of our rounds were different, but in the end we had the same score. This is what I wrote:
Junior Jones vs. Marco Antonio Barrera II
Round 1: 10-9 Jones
Round 2: 10-9 Barrera
Round 3: 10-10 Even
Round 4: 10-9 Jones
Round 5: 10-9 Jones
Round 6: 10-9 Barrera
Round 7: 10-9 Jones
Round 8: 10-9 Jones
Round 9: 10-8 Jones (point deducted from Barrera for continued low-blows - 3rd warning)
Round 10: 10-9 Jones
Round 11: 10-9 Barrera
Round 12: 10-9 Barrera
Total: 116-112 Jones
Actual scorecards were 116-111, 114-113 and 114-112 all for Jones
Amazing how a fighter can show up and have that Indian hex over a great. And this is the case here. You gotta give Jones his due. | https://www.boxingforum24.com/threads/the-what-fights-did-you-watch-today-scorecard-thread.186016/page-576 |
Note: This is an archived Handbook entry from 2013.
|Credit Points:||12.50|
|Level:||3 (Undergraduate)|
|Dates & Locations:|| |
This subject is not offered in 2013.
|Time Commitment:||Contact Hours: 2 x one hour lectures per week, 24 hours practical work (3 hours per week during the first part of semester) |
Total Time Commitment:
Estimated total time commitment of 120 hours
|Prerequisites:|| |
One of
Subject
Study Period Commencement:
Credit Points:
Not offered in 2013
12.50
Not offered in 2013
12.50
Not offered in 2013
12.50
Semester 2
12.50
Not offered in 2013
12.50
|Corequisites:|| |
None
|Recommended Background Knowledge:|| |
None
|Non Allowed Subjects:|| |
None
|Core Participation Requirements:||
|
For the purposes of considering applications for Reasonable Adjustments under the Disability Standards for Education (Cwth 2005) and Students Experiencing Academic Disadvantage Policy, this subject requires all students to actively and safely participate in practical work. Students who feel their disability may impact upon their participation are encouraged to discuss this with the Subject Coordinator and the Disability Liaison Unit. http://www.services.unimelb.edu.au/disability/
Contact
School of Botany
|Subject Overview:||
|
This subject deals with how plants function in relation to changing physical environments and is designed for students interested in plant biology and physiology, including those seeking majors in plant science, agricultural science, landscape management, and environmental science. The practical work includes a six-week research project on topics selected by students and run in small groups of 2-3.
Topics to be covered will include:
|Objectives:||
|
Upon completion of this subject, students should have a knowledge of:
|Assessment:||
|
Laboratory test during the semester (10%); practical reports totalling up to 2000 words due during the semester (30%); a 2-hour written examination in the examination period (60%).
|Prescribed Texts:|| |
None
|Breadth Options:|| |
This subject potentially can be taken as a breadth subject component for the following courses:
You should visit learn more about breadth subjects and read the breadth requirements for your degree, and should discuss your choice with your student adviser, before deciding on your subjects.
|Fees Information:||Subject EFTSL, Level, Discipline & Census Date|
|Notes:||
|
This subject is available for science credit to students enrolled in the BSc (both pre-2008 and new degrees), BASc or a combined BSc course.
Previously known as 606-304 Environmental Plant Physiology (prior to 2010)
Previously known as BOTA30003 (606-304) Functional Plant Biology (prior to 2011)
|Related Majors/Minors/Specialisations:||
Botany |
Botany
Botany (pre-2008 Bachelor of Science)
Cell Biology (pre-2008 Bachelor of Science)
Ecology (pre-2008 Bachelor of Science)
Ecology and Evolutionary Biology
Genetics
Genetics
Genetics
Plant Cell Biology and Development (specialisation of Cell and Developmental Biology major)
Plant Science
Science credit subjects* for pre-2008 BSc, BASc and combined degree science courses
Science-credited subjects - new generation B-SCI and B-ENG. Core selective subjects for B-BMED.
Download PDF version. | https://archive.handbook.unimelb.edu.au/view/2013/bota30003/ |
TECHNICAL FIELD
BACKGROUND ART
DISCLOSURE OF THE INVENTION
BRIEF DESCRIPTION OF THE DRAWINGS
EMBODIMENTS
(First Embodiment)
(Second Embodiment)
Industrial Applicability
The present invention relates to a coil component used for various electronic apparatuses and instruments and the like.
A conventional coil component will be described below by reference to the drawings.
FIG. 19 is an exploded perspective view of a conventional coil component.
In FIG. 19, the coil component includes an air-core coil 22 formed by winding a plate conductor 21 formed of a foil conductor into a scroll shape, terminals 23 connected to opposite ends of the air-core coil 22 and projecting downward, a terminal block 24 on which the air-core coil 22 is placed and which has a through hole, an E type core 25 having a central magnetic leg inserted into the through hole of the terminal block 24, and an I type core 26 to be combined with the E type core 25 to form a closed magnetic circuit core.
In recent years, demanded as the coil component used for computers and the like is a coil component which operates in a high-frequency region of about 1MHz, ensures an inductance of about 1µH and infinitesimal direct-current resistance of several mΩ, and is adaptable to a large current of about ten-odd A.
However, according to the above conventional structure, because the plate conductor 21 is wound into the scroll shape to form the air-core coil 22 and the E type core 25 and the I type core 26 are combined with each other to form the closed magnetic circuit core, there are problems in that the coil component is difficult to adapt to a large current and cannot be miniaturized.
The present invention solves the above problems and it is an object of the invention to provide a coil component which operates in a high-frequency region, ensures an inductance and infinitesimal direct-current resistance, is adaptable to large current, and is miniaturized in size.
According to the invention, there is provided a coil component comprising: a coil section having a through hole and a plurality of ring sections connected to each other by ring connecting sections and formed of a metallic flat plate disposed in a plane, the ring sections being bent at the ring connecting sections and placed one on top of another; terminals connected to the coil section; and a package member which covers the coil section and from which the terminals project. Each ring section is formed of an arc-shaped portion having a slit formed by cutting a part of the ring section. The ring connecting sections are formed at end sections of the arc-shaped portions of the ring sections where the ring sections are connected to each other. The terminals are formed at end sections of the arc-shaped portions of the ring sections where the ring sections are not connected to each other.
With this structure, because the ring sections are formed of the metallic flat plate, the coil component operates in a high-frequency region, ensures an inductance and infinitesimal direct-current resistance, and is adaptable to a large current.
According to the invention, in the plurality of ring sections formed of the metallic flat plate disposed in a plane, the sum of an angle formed by center lines each connecting centers of the ring sections adjacent to each other and connected by the ring connecting section, and angles each formed by the center line of the ring section connected to the terminal and an extension line extending from the center of the ring section toward the end section formed with the terminal is approximately 180°.
Because the sum of the angle formed by the center lines each connecting the centers of the ring sections adjacent to each other and connected by the ring connecting section, and the angles each formed by the center line of the ring section connected to the terminal and the extension line extending from the center of the ring section toward the end section formed with the terminal is approximately 180°, it is easy to place the ring sections one on top of another.
Especially, in the coil section in which the ring connecting sections are bent and the ring sections are placed one on top of another, because the end sections of the arc-shaped portions of the ring sections formed with the terminals can be disposed in opposed positions with respect to the centers of the ring sections, orientations of the terminals do not need to be considered in mounting and ease of use is excellent.
At this time, because each ring connecting section can be disposed in a position at an angle of about 45° with respect to a straight line connecting the end sections formed with the terminals, miniaturization can be achieved with respect to a mounting area. In other words, if the ring connecting sections are disposed in corner portions of a square mounting portion in which the ring sections are inscribed, the mounting area can be reduced.
Moreover, if the package member is formed into a prism shape, by disposing the ring connecting sections in the corner portions, dimensions of an outside shape of the package member can be reduced and the package member can be miniaturized.
According to the invention, there is provided a method of producing a coil component including a coil section forming step for forming a coil section having a through hole and a package member forming step for covering the coil section with a package member and causing terminals connected to the coil section to project from the package member. The coil section forming step includes a ring section forming step for forming a plurality of ring sections formed of a metallic flat plate connected to each other by ring connecting sections and disposed in a plane and a bending step for bending at the ring connecting sections and placing the ring sections one on top of another. The ring section is formed of an arc-shaped portion having a slit formed by cutting a part of the ring section. Each ring connecting section is formed at an end section of the arc-shaped portion of the ring section where the ring sections are connected to each other. Each terminal is formed at an end section of the arc-shaped portion of the ring section where the ring sections are not connected to each other.
According to the producing method of the invention, the coil component which can exert the above-described operations and effects can be produced.
FIG. 1 is a plan view of a plurality of ring sections and terminals formed of a metallic flat plate and disposed in a plane in a coil component according to a first embodiment of the present invention;
FIG. 2 is a perspective view of a coil main body of the coil component;
FIG. 3 is a perspective view of the coil component;
FIG. 4 is a sectional view of the coil component;
FIG. 5 is a plan view of ring sections provided with insulating coating layers and terminals, both for use in the coil component;
FIG. 6 is a sectional view of the ring sections provided with insulating coating layers and the terminals, both for use in the coil component;
FIG. 7a is a sectional view of a vicinity of a ring connecting section of the ring section before bending;
FIG. 7b is a sectional view of the vicinity of the ring connecting section of the ring section after bending;
FIG. 8 is a sectional view of the vicinity of the ring connecting section of another ring section before bending;
FIGS. 9a to 9g are process diagrams of producing the coil component;
FIG. 10a is a sectional view of the ring section of the coil component provided with the insulating coating layer and chamfered;
FIG. 10b is a sectional view of a vicinity of outer peripheries of the ring sections when the ring sections are placed one on top of another;
FIG. 11a is a sectional view of the ring section provided with the insulating coating layer and not chamfered;
FIG. 11b is a sectional view of a vicinity of outer peripheries of the ring sections when the ring sections are placed one on top of another;
FIGS. 12a to 12c are process diagrams of bending the ring sections in the producing process of the coil component;
FIG. 13a is a sectional view showing a state in which the ring sections provided with extending projections are deformed after forming of a package member;
FIG. 13b is a plan view of the ring section;
FIG. 14a is a sectional view showing a state in which the ring sections not provided with the extending projections are deformed after forming of a package member;
FIG. 14b is a plan view of the ring section;
FIG. 15 is a sectional view of the coil component without steps;
FIG. 16 is a plan view of four ring sections formed of a metallic flat plate disposed in a plane of a coil component according to a second embodiment;
FIG. 17 is a plan view of the ring sections provided with insulating coating layers;
FIG. 18a to 18d are process diagrams of bending the ring sections; and
FIG. 19 is an exploded perspective view of a conventional coil component.
Inventions described in all the claims will be described below by using embodiments of the present invention by reference to the drawings.
FIG. 1 is a developed view of a coil component with a plurality of ring sections and terminals formed of a metallic flat plate and disposed in a plane in a first embodiment of the invention. FIG. 2 is a perspective view of a coil main body of the coil component. FIG. 3 is a perspective view of the coil component. FIG. 4 is a sectional view of the coil component.
In FIGS. 1 to 4, the coil component in one embodiment of the invention is formed of a coil main body 3 made of a metallic flat plate and a package member 3. In the coil main body 3, a plurality of (three in FIG. 1) ring sections 32 are disposed in a plane and connected to each other through ring connecting sections 31 to be disposed in a shape of a triangle and terminals 35 are connected to end sections of the ring sections 32 at opposite ends. If the plurality of ring sections 32 are bent at the ring connecting sections 31 and placed one on top of another, a coil section 34 having a through hole 33 is formed and the terminals 35 project outward from the coil section 34. In the coil main body 3, the coil section 34 is covered with the package member 36 with the terminals 35 projecting.
The coil main body 3 formed of the metallic flat plate disposed in a plane is formed by die-cutting or etching a copper sheet and each ring section 32 has an arc-shaped portion 38 having a slit 37 formed by cutting a part of the ring section 32.
At an end section of the arc-shaped portion 38 of the ring section 32, the ring connecting section 31 connecting the ring sections 32 is formed and a projection 39 is extending toward the slit 37.
As shown in FIGS. 5 and 6, the ring sections 32 have substantially equal outside diameters, peripheral edge portions 40 are chamfered, and the ring sections 32 excluding the ring connecting sections 31 are provided with insulating coating layers 41.
Each ring connecting section 31 is provided with a groove 42 for bending in a direction (V) perpendicular to a center line (C) connecting centers (O) of the ring sections 32 adjacent to each other and connected by the ring connecting section 31. The groove 42 of the ring connecting section 31 has a V-shaped section and is formed in a shallow scraped recessed portion 53 as shown in FIG. 7a. FIG. 7b shows a bent state of the ring connecting section 31. Although a shape of the groove 42 may be a U shape as shown in FIG. 8, a V shape is more preferable than the U shape. Although the shallow recessed portion 53 is not formed in FIG. 8, it is preferable to form the recessed portion 53.
The rectangular terminal 35 is provided to project from an end section of the arc-shaped portion 38 of the ring section 32 where the ring sections 32 are not connected to each other. The terminal 35 is formed on an extension line (E) extending from the center (O) of the ring section 32 toward the end section of the arc-shaped portion 38 formed with the terminal 35.
As shown in FIG. 4, the terminal 35 is provided while forming a step 30 at a junction portion between the terminal 35 and the arc-shaped portion 38. As shown in FIG. 4, the step 30 formed on one terminal 35 and the step 30 formed on the other terminal 35 are arranged in such directions as to approach each other in a vertical direction when the ring sections 32 are placed one on top of another in a same phase.
These three ring sections 32 having the ring connecting sections 31 and the terminals 35 have positional relationships as shown in FIG. 1. In other words, the sum of an angle (R1) formed by the center lines (C) each connecting the centers (O) of the ring sections 32 adjacent to each other and connected by the ring connecting section 31, and angles (R2) each formed by the center line (C) of the ring section 32 connected to the terminal 35 and the extension line (E) extending from the center (O) of the ring section 32 toward the end section formed with the terminal 35 is approximately 180°. More specifically, (R1) is 96° and (R2) and (R2) are respectively 42°. Needless to say, the present invention is not limited to these values.
The package member 36 has an outside shape of a rectangular parallelepiped. In the package member 36, the ring connecting section 31 formed at one end section of the arc-shaped portion is disposed at one inter-corner portion 44 of the package member 36 and the ring connecting section 31 formed at the other end section of the arc-shaped portion is disposed at the other inter-corner portion 44 of the package member 36.
A method of producing the coil component having the above structure is as follows as shown in FIGS. 9a to 9g.
First, the coil main body including the coil section 34 having the through hole 33 is formed in the above manner (a step of forming the coil main body) (FIGS. 9a to 9c).
This step consists of a plate body producing step and a bending step of the coil main body.
First, the plurality of ring sections 32 and the terminal sections 35 connected to each other by the ring connecting sections 31 and formed of the metallic flat plate disposed in a plane are formed by die-cutting or etching a copper sheet (a step of producing the plate body of the coil main body).
Next, the plate body is bent at the ring connecting sections 31 and the ring sections 32 are placed one on top of another (a bending step) (FIGS. 9b and 9c).
Second, the coil section 34 is covered with the package member 36 (a step of forming the package member) (FIGS. 9d to 9f). The step of forming the package member consists of a step of forming compacted powder bodies, a step of re-pressure forming, and a thermosetting step.
First, a binder including thermosetting resin and magnetic powder are mixed in a non-heated state such that the thermosetting resin does not set completely and are pressure-formed in the non-heated state to form two compacted powder bodies 45 (a step of forming compacted powder bodies).
The compacted powder body 45 is formed into a pot shape having an E sectional shape by heaping a middle leg portion 47 and an outer leg portion 48 on a square back portion 46. The back portion 46 is formed into a high hardness portion such that the compacted powder body 45 does not lose its shape in the re-pressure forming. The middle leg portion 47 and the outer leg portion 48 are formed into the low hardness portion such that the compacted powder body 45 loses its shape in the re-pressure forming.
2
The low hardness portion and the high hardness portion are formed of a portion (low hardness portion) in which a density of the compacted powder body 45 is low and a portion (high hardness portion) in which the density is high and the low hardness portion has such a hardness that the compacted powder body loses its shape under pressure of several kg/cm.
Here, the hardness with which the compacted powder body 45 loses its shape refers to the hardness with which the compacted powder body 45 crumbles into particles of the magnetic powder. In the high hardness portion having such a hardness that the compacted powder body 45 does not lose its shape, hardness with which the compacted powder body 45 crumbles into blocks (lumps) (i.e., not into the particles of the magnetic powder) is not included in a range of the hardness with which the compacted powder body 45 loses its shape.
Next, the back portion 46 of one compacted powder body 45 is placed on one face (upper face) of the coil section 34 and the middle leg portion 47 of the other compacted powder body 45 is inserted into the through hole 33 of the coil section 34 from the other face (lower face) of the coil section 34.
These compacted powder bodies 45 and the coil main body are fitted into a metal mold 49 having a prism-shaped inside cavity. The ring connecting sections 31 are disposed in corner portions of the metal mold 49. The terminals 35 are disposed at midpoint positions between the corner portions of the metal mold 49 and project from the metal mold 49.
One metal mold 49 out of the upper and lower two metal molds 49 presses the middle leg portion 47 and the outer leg portion 48 which are the low hardness portions of the one compacted powder body 45 and the other metal mold 49 presses the back portion 46 which is the high hardness portion of the other compacted powder body 45 to re-pressure form the compacted powder bodies 45 (the step of re-pressure forming).
From one face side (an upper face side of the perspective view in FIG. 9d) of the coil section 34, the middle leg portion 47 and the outer leg portion 48 which are the low hardness portions of the one compacted powder body 45 (the upper compacted powder body in FIG. 9d) are pressed while crumbling. At the same time, the back portion 46 which is the high hardness portion of the one compacted powder body 45 and which faces an inner wall face of the through hole 33 of the coil section 34 sinks in shape of block into the through hole 33 of the coil section 34 and the back portion 46 of the compacted powder body 45 facing the terminals 35 sink in shape of block toward the terminals 35.
From the other face side (a lower face side of the perspective view in FIG. 9d) of the coil section 34, the middle leg portion 47 and the outer leg portion 48 which are the low hardness portions of the other compacted powder body 45 (the lower compacted powder body in FIG. 9d) are pressed while crumbling. The middle leg portion 47 and the outer leg portion 48 of the other compacted powder body 45 are pressed as described above and face the back portion 46 of the one compacted powder body 45 which has sunk in shape of block into the through hole 33 of the coil section 34 and toward the terminals 35. At the same time, gaps between the coil section 34 and the back portions 46 of the compacted powder bodies 45 are filled with the crumbled middle leg portions 47 and outer leg portions 48 of the one compacted powder body 45 and the other compacted powder body 45.
As described above, because the one and the other compacted powder bodies are pressed simultaneously from above and below toward the coil section 34 in the metal mold 49, the one and the other compacted powder bodies are formed into the integral block-shaped package member 36 while sandwiching the coil section 34 between them.
3
As shown in FIG. 4, a thickness (W) of a skin of the package member 36 in which the coil section 34 is encapsulated is smaller than a diameter of the through hole 33 of the coil section 34. In an upper face portion 50 of the package member 36 corresponding to an upper portion of the coil section 34, a lower face portion 51 of the package member 36 corresponding to a lower portion of the coil section 34, and an intermediate portion 52 of the package member 36 corresponding to a height portion of the coil section 34, a density of the upper face portion 50 and a density of the lower face portion 51 are higher than a density of the intermediate portion 52 (the density of the upper face portion 50 and the density of the lower face portion 51 are 5.0 to 6.0g/cm and the density of the intermediate portion 52 is 85% to 98% of them).
Especially in the intermediate portion 52, in an inner intermediate portion 52a corresponding to an inside of the through hole 33 of the coil section 34 and an outer intermediate portion 52b corresponding to an outside portion of an outer peripheral face of the coil section 34, a density of the outer intermediate portion 52b is higher than a density of the inner intermediate portion 52a.
Then, the package member 36 is formed by heat forming such that the thermosetting resin sets completely (the thermosetting step).
Lastly, the terminals 35 are bent along the package member 36 (FIG. 9g).
The coil component having the above structure has the following operations.
Because the ring sections 32 of the coil section 34 is formed of a metallic flat plate, the coil component operates in a high-frequency region, ensures an inductance and infinitesimal direct-current resistance, and is adaptable to a large current.
In the ring sections 32 formed of the metallic plate disposed in a plane, the sum of the angle (R1) formed by the center line (C) connecting the centers (O) of the ring sections 32 connected by the ring connecting section 31 and adjacent to each other and the center line (C) and the angles (R2)(R2) each formed by the center line (C) of the ring section 32 connected to the terminal 35 and the extension line (E) extending from the center (O) of the ring section 32 toward the end section formed with the terminal 35 is 180°. Therefore, it is easy to place the ring sections 32 one on top of another.
The ring sections 32 have substantially equal outside diameters and are formed by etching or die cutting. Therefore, the ring sections 32 can be formed easily with accuracy and variations in characteristics of the ring sections 32 can be suppressed.
Because the peripheral edge portions 40 are chamfered, the insulating coating layer 41 can be formed evenly around the ring section 32 as shown in FIG. 10a. As shown in FIG. 10b, if stress or the like is applied from above and below when the ring sections 32 are placed one on top of another, damage (peeling of the coatings at a portion A) to the adj acent upper and lower ring sections 32 by each other can be suppressed by the peripheral edge portions 40 of the ring sections 32. If the peripheral edge portions 40 are not chamfered, the insulating coating layer 41 cannot be formed evenly around the ring section 32 as shown in FIG. 11a and the upper and lower ring sections 32 are likely to be damaged by each other (peeling of the coatings at a portion A) when the ring sections 32 are placed one on top of another as shown in FIG. 11b.
Because the ring sections 32 excluding the ring connecting sections 31 are provided with the insulating coating layers 41, a short circuit in the ring sections 32 placed one on top of another can be suppressed. Especially, the insulating coating layers 41 are provided while leaving spaces at the ring connecting sections 31, the insulating coating layers 41 do not get ripped when the ring connecting sections 31 are bent and a deterioration of characteristics due to a rip of the insulating coating layer 41 can be suppressed. As shown in FIGS. 12a to 12c, because the insulating coating layer 41 is not formed at a bent portion when the ring connecting sections 31 are bent as especially shown in FIG. 12c, the insulating coating layer 41 does not expand or contract due to the bending (if the insulating coating layer 41 is bent, degrees of expansion and contraction on inner and outer sides of the ring connecting sections 31 are different from each other) and ripping of the insulating coating layer 41 can be suppressed.
The projections 39 are formed at the end sections of the arc-shaped portions 38 of the ring sections 32 connected to each other to extend toward the slits 37. Therefore, even if stress or the like is applied from above and below when the ring sections 32 are placed one on top of another, corresponding portions of the upper and lower ring sections 32 are supported by the projections 39. As a result, the upper and lower adjacent ring sections 32 corresponding to the slit 37 are not deformed to come in contact with each other and a short circuit can be suppressed. If the projections 39 are not formed as shown in FIGS. 14a and 14b, the upper and lower ring sections 32 are deformed to come in contact with each other as shown in FIG. 14a. If the projections 39 are formed, deformation of the upper and lower ring. sections 32 is suppressed and the ring sections 32 do not come in contact with each other as shown in FIG. 13a.
As shown in FIG. 2, because each ring connecting section 31 of the ring main body can be disposed in a position at an angle of about 45° with respect to a straight line connecting the terminal 35 and the terminal 35, the ring sections 32 can be miniaturized with respect to a mounting area. In other words, if the ring sections 32 are disposed in a corner portion 43 of a square mounting portion (not shown) in which the ring sections 32 are inscribed, the mounting area can be reduced.
Because the ring connecting sections 31 are provided with the grooves 42 for bending, the ring connecting sections 31 can be bent easily and accurately, the ring sections 32 are not bent, and cracks are not produced in the ring connecting sections 31. Especially because each groove 42 is formed in the direction (V) perpendicular to the center line (C) connecting the centers (O) of the ring sections 32 connected by the ring connecting section 31 and adjacent to each other, the ring sections 32 can accurately be placed one on top of another.
The terminals 35 of the coil section 34 are formed to have the steps 30 in the plurality of ring sections 32 formed of the metallic flat plate disposed in a plane. The step 30 formed on one terminal 35 and the step 30 formed on the other terminal 35 are arranged in such directions as to approach each other in a vertical direction when the ring sections 32 are placed one on top of another in a same phase. Therefore, the bent portions of the terminals 35 are disposed in a vicinity of a center in a height direction of the coil section 34 and ease of use in mounting is excellent. If the steps 30 are not formed, the coil section 34 is distorted in forming the package member 36 and the terminals 35 are less likely to be disposed in the vicinity of the center.
Especially, in the coil section 34 in which the ring connecting sections 31 are bent and the ring sections 32 are placed one on top of another, because the end sections of the arc-shaped portions 38 of the ring sections 32 formed with the terminals 35 can be disposed in opposed positions with respect to the centers (O) of the ring sections 32, orientations of the terminals 35 do not need to be considered in mounting and ease of use is excellent.
At this time, by providing each terminal 35 on the extension line (E) extending from the center (O) of the ring section 32 toward the end section of the arc-shaped portion 38 formed with the terminal 35, the terminal 35 can be disposed in line with the center (O) of the ring section 32 and the end section of the arc-shaped portion 38, the terminals 35, 35 are accurately disposed in the opposed positions with respect to the centers (O) of the ring sections 32, orientations of the terminals 35 do not need to be considered in mounting, and ease of use is further improved.
The package member 36 has an outside shape of a prism. Because the ring connecting section 31 formed at one end section is disposed in the corner portion 43 of the package member 36 and the ring connecting section 31 formed at the other end section is disposed between the corner portions 43, 43 of the package member 36 (portion 44), outer dimensions can be reduced and miniaturization can be achieved.
The package member 36 is pressure formed by using the metal mold 49. Because the compacted powder bodies 45 forming the package member 36 are solid bodies, an amount of the compacted powder body 45 between the metal mold 49 and the coil section 34 is less liable to vary in the re-pressure forming, a thickness of the coating of the package member 36 is liable to be uniform throughout the entire periphery of the coil section 34, and variations in characteristics can be suppressed. Because the coil section 34 can be supported by the compacted powder bodies 45, the coil section 34 can accurately be positioned to prevent faulty forming of the package member 36.
At this time, because the high hardness portion of the compacted powder body 45 firmly supports one face of the coil section 34, a positional displacement of the coil section 34 is less liable to occur in the re-pressure forming and the coil section 34 can accurately be positioned.
In the re-pressure forming, the compacted powder bodies 45 are provided with the low hardness portions of such hardness that the compacted powder body 45 loses its shape and the compacted powder bodies 45 are re-pressure formed such that the low hardness portions cover the coil section 34. Therefore, the low hardness portions of the compacted powder bodies 45 lose their shapes while the crumbled low hardness portions of the compacted powder bodies 45 are closely filled the empty space between the coil section 34 and the high hardness portion. As a result, a magnetic gap can be reduced to enhance magnetic efficiency.
Moreover, the thickness (a distance between the coil section 34 and a surface of the package member 36) of the skin of the package member 36 in which the coil section 34 is encapsulated is smaller than the diameter of the through hole 33 of the coil section 34. The upper face portion 50 of the package member 36 corresponding to the upper portion of the coil section 34 and the lower face portion 51 of the package member 36 corresponding to the lower portion of the coil section 34 are formed to be thin to make the whole package member 36 thin. Although the package member 36 is made thin, generation of magnetic saturation can be suppressed in the upper face portion 50 and the lower face portion 51 because the densities of the upper face portion 50 and lower face portion 51 are higher than the density of the intermediate portion 52.
In other words, an inside of the through hole 33 of the coil section 34 corresponds to the intermediate portion 52 of the package member 36. Because the densities of the upper face portion 50 and lower face portion 51 are higher than the density of the intermediate portion 52, if a magnetic flux passing through the through hole 33 passes through the upper face portion 50 and the lower face portion 51 smaller than the diameter of the through hole 33, magnetic permeability can be increased by an amount by which the densities of the upper face portion 50 and lower face portion 51 are higher than the density of the intermediate portion 52 in the upper face portion 50 and the lower face portion 51. Therefore, the package member 36 can be made thin without generating the magnetic saturation in the upper face portion 50 and the lower face portion 51.
According to the producing method of the invention, the above-described coil component can be produced.
As described above, according to the one embodiment of the invention, because the ring sections 32 are formed of the metallic flat plate, the coil component operates in the high-frequency region, ensures the inductance and the infinitesimal direct-current resistance, and is adaptable to the large current.
Although the three ring sections 32 are used in the first embodiment of the invention, four ring sections 32 may be used as shown in FIG. 16.
The four ring sections 32a to 32d of the second embodiment are disposed to have predetermined positional relationships. In other words, as shown in FIG. 16, in the second embodiment, a line (C) connecting centers of ring sections 32a and 32b disposed in upper and lower sides and a line (D) connecting centers of the ring sections 32c and 32d disposed in upper and lower sides are parallel to each other. A line (G) connecting the centers of the ring sections 32c and 32d disposed in the upper side and a line (F) connecting the centers of the ring sections 32a and 32c disposed in the lower side are parallel to each other. Therefore, an angle R1 connecting the centers of the ring sections 32a, 3.2b, and 32c and an angle R1 connecting the centers of the ring sections 32b, 32c, and 32d are the same and 48°. Angles (R2) formed by extension lines (E) passing through central portions of terminals 35 and center lines (C) and (D) are 42° and smaller than the angles (R1). The center lines (C) and (F), (F) and (D), (C) and (G), and (G) and (D) intersect each other at angles of about 60°. A distance between the center line (G) and the center line (F) is set at such a dimension that outer peripheral edges of the upper and lower ring sections 32a and 32b, 32c and 32d do not overlap each other. A distance between the center line (C) and the center line (D) is set at such a dimension that the outer peripheral edges of the left and right ring sections 32b and 32d, 32a and 32c overlap each other. Therefore, the opposed outer peripheral edges of the ring sections 32b and 32d, 32a and 32c are cut off by small amounts.
If a disposition pattern of the above-described ring sections 32a to 32d is repeated, more than four ring sections can be disposed and the desired inductance can be obtained.
As shown in FIG. 17, the four ring sections 32 excluding the ring connecting sections 31 are formed with insulating coating layers 41. As shown in FIGS. 18a to 18d, the ring connecting sections 31 are bent to form a coil section 34. In other words, the ring connecting section 31 is bent such that surface sides of the ring sections 32b and 32c face each other (FIG. 18b). Then, the ring section 32a is folded back toward an underside and placed under the ring section 32c (FIG. 18c) . Lastly, the ring section 32d is folded back toward a surface side and placed on the ring section 32b (FIG. 18d).
At this time, by setting a length (T1) of the ring connecting section 31 formed at one end section of the arc-shaped portion 38 to be greater than a length (T2) of the ring connecting section 31 formed at the other end section, increase in an outside diameter of the coil section 34 can be suppressed, overlaps of the ring sections 32 formed of the metallic flat plate disposed in the plane can be reduced, and the direct-current resistance can be reduced while ensuring the inductance of the coil section 34.
Because the method of encapsulating the resin has been described in detail in the above first embodiment, the description will be omitted.
As described above, according to the invention, because the ring sections are formed of the metallic flat plate, it is possible to provide the coil component which operates in the high-frequency region, ensures the inductance and infinitesimal direct-current resistance, and is adaptable to the large current.
Furthermore, the sum of the angle formed by the center line connecting the centers of the ring sections connected by the ring connecting section and adjacent to each other and the center line and the angles each formed by the center line of the ring section connected to the terminal and the extension line extending from the center of the ring section toward the end section formed with the terminal is 180°. Therefore, it is easy to place the ring sections one on top of another.
Especially, in the coil section in which the ring connecting sections are bent to place the ring sections one on top of another, because the end sections of the arc-shaped portions of the ring sections formed at the terminals can be disposed in the opposed positions with respect to the centers of the ring sections, orientations of the terminals do not need to be considered in mounting and ease of use is excellent.
At this time, because each ring connecting section can be disposed in a position at an angle of about 45° with respect to a straight line connecting the end sections formed with the terminals, miniaturization with respect to a mounting area can be achieved. In other words, if the ring connecting sections are disposed in a corner portion of the square mounting portion in which the ring sections are inscribed, the mounting area can be reduced.
If the package member is formed into the prism shape, by disposing the ring connecting section in the corner portion, the outer dimensions of the package member can be reduced and miniaturization can be achieved.
For the above reasons, the invention can provide the coil component useful in a field of the electronic apparatus and the method of producing the coil component. | |
Corn tortilla stuffed with chicken tinga, covered with a smoky chipotle and beans sauce, drizzled with sour cream topped with red pickled onions. Just like back home!
Mixed cheese with capsicum corn and spinach, wrapped with homemade tortilla, topped with chipotle sauce, sour cream and salad
Corn tortilla stuffed with a variety of seafood covered with entomatada and roasted chilli creamy sauces
8 items
Slow cooked lamb shank served with al pastor mushrooms, black bean puree and Mexican mole poblano
Authentic north Mexican style dish! A combination of fried charro beans, topped with medium grilled beef, all drenched in chili sauce
Pan seared hammour, served with cassava fries, al mojo de ajo and drizzled with balsamic reduction, topped with mixed leaves
Grilled assorted seafood on a bed of sautéed capsicum with grilled pineapple. Served with Mexican rice and mango coconut sauce
Pan seared chicken breast stuffed with cream cheese and sundried tomatoes. Served with unsalted butter, mashed potatoes and pickled red onions
Smoked slow cooked beef ribs served with rustic mashed potatoes
Crispy pan-fried salmon served with sautéed mushrooms and green tomatillo sauce
Medium grilled beef tenderloin and wild portobello accompanied by rustic style mashed potatoes and latino achiote chimichurri
14 items
Fried cassava tossed with homemade garlic mojo
Baked potato stuffed with sautéed turkey bacon and capsicum
4 Pcs
9 items
Homemade churro coated with sugar and cinnamon
Juicy sponge cake stuffed with mixed cheese and fresh berries, served with meringue cream
Warm corn cake on english cream drizzled with hibiscus caramel, served with almond and ice cream
Sweet milk-soaked vanilla sponge, freshly selected berries fluffy whipped cream and raspberry coulis
A bowl made of traditional Mexican churro dough and three churro sticks, stuffed with a large ball of ice-cream. Served with drizzled chocolate sauce and fresh strawberries
Traditional Mexican cake that combines guava paste with cream cheese. Served with fresh strawberries and blackberries
Crispy homemade waffle cone stuffed with a wide variety of large ice cream scoops of choice. Served with drizzled caramel and chocolate sauce
A nest made of traditional Mexican churro dough filled with three churro sticks, stuffed with chocolate and served with ice cream
Selection of ice cream
27 items
Refreshing blend of passion fruit, mango and mint leaves topped with soda water
Muddled fresh lime, lemon and mint topped with 7up
Muddled fresh strawberry, mint and lemon topped with 7up
Isla’s favorite thick blend of mango and orange juice drizzled with a sweet and spicy chamoy
A thick blend of strawberries, passion fruit and mint topped with soda water
Iced tea with orange juice, pomegranate juice and topped with soda water
Muddled lime, mint and ginger topped up with ginger ale
Freshly squeezed lemon juice blended with mint leaves
A cold and refreshing mocktail made with muddled fresh raspberry, passionfruit, lychee and mint, topped up with 7up
Creamylicious blend of strawberry and cranberry juice flavored with coconut
A fine blend of sweetened milk and rice, topped with cinnamon powder
Fresh pineapple blended and flavored with coconut
There are no reviews yet.
Be the first to review "Isla Mexican Kitchen, Pearl Qatar"
Isla Mexican Kitchen, Doha, Qatar, Pearl Qatar, Doha. | https://wishboxonline.com/restaurant/isla-mexican-kitchen-pearl-qatar |
For the salsa verde: Bring a large pot of water to a boil and add some salt. Add the tomatillos, garlic, cilantro, jalapeno and onion and bring back to a boil. Boil until the tomatillos turn olive green, about 10 minutes. Strain, reserving 1 cup of the cooking liquid. Transfer the ingredients from the pot to a blender with the reserved cooking liquid and puree until smooth. Return the mixture to the same (empty) pot, bring to a boil and boil until the sauce is darker green and reduced, 20 to 25 minutes. Season with 1 teaspoon salt or to taste and set aside.
Step 2
For the enchiladas: Put the potatoes in a pot with enough water to cover, add some salt and cover the pot. Bring to a boil, lower to a simmer and simmer until the potatoes are fork-tender, 15 to 20 minutes. Drain and set aside.
Step 3
Heat the olive oil in a large pan. Add the garlic and onion and saute until golden, about 5 minutes. Add the kale and saute until wilted, about 5 minutes more. Mix in the butter and 1/4 cup of the crema until melted and smooth. Gently mix in the potatoes and 1 cup of the cheese. Season to taste with salt and pepper. Set the filling aside.
Step 4
Preheat the oven to 350 degrees F. Spray a 13-by-9-inch glass baking dish with nonstick cooking spray.
Step 5
Heat enough vegetable oil to come 1 inch up the sides of a skillet to 350 degrees F over medium heat. Make an assembly line close to the stove top. Place half of the salsa verde in a cake pan (reserve the rest for topping) and put a cutting board and the potato-kale filling nearby.
Step 6
Dip a tortilla in the hot oil until golden but still pliable. Using tongs, transfer it to the cake pan with the salsa verde and turn to coat. Place the dipped tortilla on the cutting board, stuff with a scant 1/4 cup of the potato-kale filling and roll up. Repeat with the remaining tortillas, salsa verde and filling.
Step 7
Transfer the stuffed tortillas to the prepared baking dish. Top with the reserved salsa verde and remaining 1/4 cup crema and 1/4 cup cheese. Bake until darkened in spots, about 30 minutes. Serve slightly cooled. | https://www.recipenet.org/recipe/kale-potato-enchiladas-verdes/ |
Why the 2021 UK census could be the last one, how the data is collected and what it costs
The once-in-a-decade census can be traced back to the 1800s
The census is a compulsory survey for the British population (Photo: Getty)
The2021 census may be the last one the public ever fills out as the country’s leading statistician looks at ways of possibly replacing the once-in-a-decade survey with a cheaper and more effective option.
Professor Sir Ian Diamond, the UK’s National Statistician, is reportedly exploring whether the data typically collected in the compulsory census could be gathered from other sources including the Ordnance Survey, GP registrations, council tax records and driving licences. This information could also be supplemented by additional surveys.
Next year’s survey, which seeks to provide an accurate snapshot of society by asking the public questions about themselves, their household and home, is likely to cost close to £1bn, according to The Guardian.
The i newsletter cut through the noise
Email address is invalidEmail address is invalid
Thank you for subscribing!
Sorry, there was a problem with your subscription.
Even though most people will be expected to complete their forms online, the price tag is almost double the last census in 2011.
What is the census?
The census is a survey that the population must fill out every 10 years.
It provides valuable information about the population to help councils and the Government plan services. These might relate to schools, libraries, doctors’ surgeries and roads, as well as jobs and training policies.
Each census is kept secure for 100 years but after a century, the data is released to members of the public who can use the information to trace their family history.
In England and Wales the census is run by the Office for National Statistics (ONS). North of the border, National Records of Scotland takes charge while in Northern Ireland the survey is organised by the Northern Ireland Statistics and Research Agency.
What is the history of the census?
The first census of the population,according to the ONS, can be traced back to 1801 but the first modern census is considered to be the survey of 1841.
“Some 35,000 enumerators (all men) armed with pencils delivered a separate form to each household, recording almost 16 million people in England and Wales. People completed the forms themselves, a real challenge for some since at this time many people could not read or write,” said the ONS.
In 1841, the most popular occupation was “domestic servant”. The census found that almost a quarter of a million people worked in cotton manufacture, and that there were 571 fork-makers, 74 leech bleeders and five ice dealers.
Why might 2021 be the last one?
The Government has not said 2021 will be the last census but it is likely that a price tag exceeding £1bn for another survey in 2031 could be an off-putting prospect.
“The major arguments made against the Census in its current form have broadly clustered around two main issues: high, and constantly rising costs; and the infrequency of data collection in an environment of accelerated demographic change,” the NatCen Social Research institute said previously.
However, the census is considered by demographers as the “gold standard” of population records.
Sir Ian has said he would only recommend replacing the next census if he finds a suitably “better” option.
“I will only make a recommendation to change the way we do things if we can replicate the richness of the census data,” said Sir Ian, who took up the statistician role in October 2019.
“It would have to be equally rich but more timely, cheaper and more effective.
“We will only change if we can do something better.
“We are looking at the things we only get from the census and whether it is possible to get them from other sources,” he told The Guardian.
Sir Ian said he would look at the evidence and give an opinion on the next census by 2023. It is ultimately for the Government to decide.
Guy Goodwin, chief executive at NatCen, said the traditional census remained “the best way to count the population” and that the case to cancel it for 2031 hadn’t been well enough made.
“We worry a decision in 2023 is likely to be more ideological than based on compelling evidence,” he told i. “It is difficult to get the same quality from GP registers, driving licences and council tax records where there are questions of over-counts (difficulty taking people off these sources when they leave the country and deaths) and under-coverage.
“While costs seem high, the census exercise is only conducted every 10 years and costs should fall as almost everyone will be regularly online by 2031. But it is not unreasonable to look at how administrative and survey data can be better used to refresh the results in between ten-yearly censuses,” he added.
The Cabinet Office said it would not be commenting on the future of the census.
The article has been updated to reflect the response from the Cabinet Office and NatCen. | |
Former IBF/WBC welterweight champion Andre Berto (28-1, 22 KO’s) will be facing interim WBC welterweight champion Robert “The Ghost” Guerrero (30-1-1, 18 KO’s) on November 24th in the main event of an HBO-televised tripleheader at the Citizens Business Bank Arena in Ontario, Calif. The winner of Guerrero vs. Berto will be in prime position at 147 pounds to land a major fight, whether it's the dream date against Floyd Mayweather, or other lucrative opportunities.
Andre Berto hasn't fought since September 3, 2011, when he defeated Jan Zaveck in Mississippi. He had two rematch dates with Victor Ortiz lined up for this year, but injured his shoulder ahead of the February date, and then failed a VADA drug test ahead of the rescheduled night in July.
Berto, of Winter Haven, Fla., claimed the positive test was from inadvertent contamination and, in August, he was tested by the California commission, came up clean and was issued a license.
Robert Guerrero, 29, of Gilroy, Calif., currently holds the interim WBC welterweight title, which will likely be bumped to full status. Floyd Mayweather currently has the full title, but hasn't defended it since beating Victor Ortiz for the belt in September 2011.
Guerrero, is coming off a 15-month layoff after surgery to repair a torn rotator cuff when he outfought top-10 contender Selcuk Aydin in a grueling fight to win a vacant interim belt at 147 pounds. He has been calling out all the big names, from Floyd Mayweather and Manny Pacquiao on down, and pushed hard for a fight against Timothy Bradley before moving in this direction.
While the physical stats may show that these men are each 5'8" with a 70" reach, give or take a bit, don't be fooled into thinking that they're actually equal in stature. Berto is as good a test as you can get when it comes to stepping up, Berto is not one to over look, he has the power that will test Guerrero’s chin and Berto has been in with better competition.
Guerrero has only fought twice above the Lightweight limit of 135 pounds, and has competed at Featherweight for the bulk of his career. His lengthy frame has allowed him to continue moving up in weight, however, Berto should hold a substantial edge in strength and punching power.
Therefore, Guerrero vs. Berto has all the makings of an exciting clash, and a meaningful test in the careers of two top fighters angling to get ahead. It's the type of high-risk, high-reward bout between fighters in their primes which boxing fans are always clamoring to see.
Also scheduled for the tripleheader: interim lightweight titlist Richard Abril (17-3-1, 8 KOs), who lost a highly controversial decision to Brandon Rios in April, will face Uganda native Sharif Bogere (23-0, 15 KOs) and prospect Keith Thurman (18-0, 17 KOs) will face former welterweight titlist Carlos Quintana (29-3, 23 KOs) of Puerto Rico.
Opening Boxing Lines at Bookmaker Sportsbook - Robert Guerrero ( +170 ) Vs. Andre Berto ( -210 ).
The current Interim WBC Welterweight Championship boxing line has Andre Berto as the boxing favorite at -210, meaning you have to risk $210 to $100. Carl Froch is this fights underdog, that has value at +170, meaning that for every $100 bet you will win $170. Current odds.
Total Rounds Betting
The fight is scheduled for 12 rounds. The current total rounds betting line has the Over 9.5 Rounds ( -240 ). Meaning that if you think the fight is going to go to at least 10 rounds for you to win your bet. You will need to bet $240 to win $100. The Under 9.5 Rounds ( +200 ), meaning you need to risk $100 to win $200.
The opening line on Robert Guerrero vs. Andre Berto Over/Under is at 9.5 Rounds with the Over at -240 Odds at Bookmaker.eu.
4 Easy Steps to Bet Martinez and win at Bookmaker
• Fill out the registration form to receive your free account number.
• Enter the cashier to deposit with your credit card and enter your bonus code.
• Enter the Sportsbookand place your bets.
• Enter the cashier to cash out your winnings.
Exclusive Sign up Bonus offers from Bookmaker for Gamblers Palace
Bonus Code: GP25 (25% bonus up to $2500 on your first deposit).
Robert Guerrero knocked down Andre Berto twice on the way to a unanimous-decision victory in a welterweight bout on Saturday night. Guerrero (31-1-1, 18 KOs) floored the former 147-pound champion in the first round and again in the second round before persevering through a physically punishing bout. All three judges scored the fight 116-110 for The Ghost, as he notched the biggest win of his professional career to date.
Boxing Champion Specials:Specials Boxing Bonuses
WBO Welterweight Title Odds: Live odds on boxing
Sports Betting Help: How to get sportsbook help
Sportsbook Rules :The Sports Betting Rules
Sports Betting deposits: View Boxing Deposit options. | https://www.gamblerspalace.com/Guerrero-vs-Berto-Boxing-Odds.html |
Category Archives: Healing
“People will forget what you said, people will forget what you did, but people will never forget how you made them feel.” Maya Angelou
Working on a research project I have been interviewing young students in regard to how they feel when they have a substitute teacher. As we talk about fear, anger, frustration, and a variety of other emotions the students seemed to be surprised to find out that teachers can feel the same way.
Reading the above famous quote by Angelou, I am reminded how we can reflect upon these words and see how they are relevant to the circumstances and current events in our lives. While looking back in time, we also may remember a particular teacher and how they treated us. Why do we remember? We can still hear their words that were planted in our hearts, we can remember how they made us to feel although we may not remember why.
It is important to remember teachers may have good and bad days just as children do. Unfortunately when a teacher is having a bad day, if a student is having a hard day as well, putting the two together for an extended period of time can cause chaos in the classroom. Perhaps we were in that situation as a child and did not have the maturity to understand what it meant that a teacher could have a bad day and what impact that would have on us. When things were said and feelings were hurt, there was no giant eraser to remove the pain.
Certain emotions stay with us, even as adults, for we carry them subconsciously on a daily basis. Some on those feelings are positive and some are negative. As a heart mender specialist, I focus on getting to the heart of the matter, to find what issue is the trouble. Many painful feelings and emotions are buried deep inside and we may not even know what they are and how they got there.
Using the example of a teacher with a student, I share how easily we can be hurt and not even know when or why. It is good to discover the truth and take out the mystery of what it is that is challenging us and replace it with truth bringing peace and healing. Are you ready?
What can I do to take away the pain?
What can I say that will comfort the maimed?
Life gets so complex that it is difficult to see,
What is the heart of the matter crying to be free.
“Time heals all wounds”,
But what do we do,
When the clock stopped ticking,
And the hands no longer move?
With a broken heart, a bleeding soul,
Is it too late to make a new goal?
Oh heart mender,
Together can we go,
To the core of our being,
All the while seeing,
The bitter sweet challenges of life?
The child within, playful and bright,
Will come forth filled with awesome delight.
Then, and only then,
The heart, mind and soul,
Are ‘set free’ and
Completely whole.
Have you ever noticed as we successfully solve one problem, another arises that is more complex? It appears with each new difficulty more issues come forth with even greater challenges. It is true, to solve any problematic situation one has to know a solution in order to find the right answer. Yet I question the motive of those who diligently taught us to believe that we have to know all the answers to be smart or successful. Who gave us the impression that the person who knows the most is smarter, better or even stronger? That belief system is not correct, it is perpetuated in the educational field and the job market. We have been programmed to believe we have to be able to “figure everything out” or we are not intelligent.
While being trained in my doctorate program as a Clinical Psychologist, I was required to purchase psychological testing kits. We were taught how to administer a variety of tests including the IQ test. Have you ever wondered what is your IQ, or heard someone say, “I wonder what is my IQ”? There are mathematical numbers assigned to measure children’s, as well as adult’s, levels of intelligence. Here is a different perspective describing intelligence. True intelligence is having an ability to sense, intuit, feel, and recognize patterns, develop relationships, connections, understand associations to events, people and circumstances. When there is disconnect between exercising our true intelligence and our subconscious, we will have cognitive dissonance. Without understanding the underlying emotions and what they are, we will experience physical and emotional stress. Working as a Heart Mender, I begin addressing issues, getting to the heart of the matter and bringing thoughts to the surface. Once they are revealed and faulty belief systems recognized, we can begin to re-frame negative thoughts and start untangling the core beliefs that are part of the problem. Using stress reduction techniques, while shedding light on the subject, substantially releases inner turmoil. Circumstances may not change at the time, yet understanding one more key to the problem will contribute and assist in the healing process.
You only have seven minutes to share what is on your heart; that’s it! If that is taken literally, what could be said in seven minutes? Perhaps one would say, “I am sorry.” Another might express their love and concerns with family members or other loved ones. Some might think it would be wiser not to convey anything. I remember a time when I watched a love one preparing to go into surgery. I didn’t know if they were going to make it or not. I only had a few moments to share what could have been the last words I would ever say. To my surprise their response was, “Well, if I don’t make it, it was nice knowing you!” Wow! Nice knowing you? That was it? I was taken back and questioned if that was the appropriate thing to say, especially since it may have been the last time we ever spoke. Here is a great example of times when we really do not know what to share. How can we actually talk about our deepest thoughts and emotions? Should we wait until we only have a few minutes, or find ourselves in a life or death situation?
I don’t think we should. It can be difficult to talk about things other than the weather. Knowing this reality, “Out of the abundance of the heart the mouth speaks”, we may all say things we regret sooner or later. Fueled with passion, fear, sorrow or pain, we can become judgmental and create hidden resentments in our hearts. I realize what may be in my heart, good or bad, may not be the same emotion experienced by another. I understand different dynamics come in to play in relationships and communication. Now as I embrace the work of a Heart Mender I also understand healing takes place as we get to the heart of the matter, addressing issues that created pain in the first place. Many times our awareness of those issues are buried in the subconscious due to the painful circumstances experienced in the past. With a broken heart and other multifaceted challenges, protective walls are erected, masks applied, and pretenses performed for all to see.
The good news is that in a mere seven minutes, love can tear down those walls, masks can be removed, healing can take place and relationships can be restored. Time is an amazing gift. For some it is said, “Time heals all wounds.” In addition, I proclaim there are moments when giving someone precious time just to listen, time to reflect, time to be real, and time to heal, can be the most amazing and awesome gift we can ever give to each other, even if it is only seven minutes.
| |
Format:
ISBN: 9780062367150
Published: March 2015
Condition: New
Darkness, air, water, and sky will come together . . . and shake the forest to its roots.
The wild cat Clans have lived in peace and harmony for many moons, but now strange messages from their warrior ancestors speak of a terrifying new prophecy and a mysterious danger.
Six cats, including young Brambleclaw of ThunderClan, must embark on an unprecedented journey, with the fate of the entire forest in their paws. The strength and courage of the greatest warriors will be put to the test as the prophecy unfolds-and the quest to save the Clans begins.
Sorry, this item is not currently in any of our shops. Please call us to see if we can order it for you. | http://www.harryhartog.com.au/new-books/warriors/9780062367150/buy-online |
This post will describe the tool I used to review ALL of my organic chemistry notes in 1 hour. I will walk you through the steps and show you how I created and used the most fantastic study tools and aced o-chem.
My official college transcript displays Cs in general chemistry (101 and 102). Below is a description of what I did to get A’s in organic chemistry. Unlike many liberal arts classes, orgo has no Achilles heel to give you an easy way out. No amount of last minute cramming will allow you to succeed.
If you’re like me, studying is more of a game than a task. The hard part about Orgo isn’t the actual material/concepts, but the large amount of information. Taking in all the information in orgo is like trying to drink water from a fire hydrant. Another challenge is siting down and actually studying when surrounded by friends in easier subjects who don’t need to study as much. If you’re the one carrying around that orgo textbook that’s a foot thick, use it as a reminder that you’re going to need to do something different than the kids taking poli sci.
Practice problems first. Choose to spend the majority of your study time on practice problems. Especially at the beginning of a new section/chapter. Work your professor’s assigned problems first. In my experience the most effective way to begin learning the material is by doing practice problems first rather than by making flash cards and trying to memorize reactions.
Getting Stuck. At times its going to feel like a new set of reactions can’t be distinguished from each other. You’re lost and you “don’t get it”. At this point, its time to switch from practice problems to a reading/memorization tactic. You may think of making flash cards but….
Make Flash Pages instead of flash cards. The point of a flash card is just that, a flash to spark your memory. Lets say you glance at 10 flash cards 1 time each. Each card takes between 5-10 seconds to look over. I believe that it is possible increase your glance surface area from the size of an index card to the size of an 8x11in sheet of paper. This will improve how much information you cover.
-In a glance of 5-10 seconds, your eyes view an entire page of condensed notes instead of a small index card.
-Your brain will be forced to recognize certain reactions and concepts right next to other reactions and concepts that are related.
To make them: copy the essential sections of a chapter section onto a blank page. Say you cover 6 chapters during a semester, with ~10 sections each. This means that if you make a flash page for every section, you will have made about 60 pages of notes. That’s less than a page per day. When in the span of an entire semester, this is not much.
Look for the similarities. In many cases, the reactions are analogous to each other. For example: nucleophilic attack on a carbonyl carbon by a nucleophile is analogous to nucleophilic attach on a cyanide carbon by a nucleophile (you’ll know what this means later if you don’t now). Many of the mechanisms involve the same exact steps, which is great because it allows you to focus on a big picture. Understanding the general processes are key to then noticing the slight nuances between each specific mechanism, such as the differences between acidic vs. basic conditions.
Read Before Lecture. Just do it. Bite the bullet and spend some time (even 10 minutes) glancing at the material to be covered in the following lecture. If you are ambitious you can make your flash page on the section before class. This is useful for any class, but in reality is not normally actually done. If you want an A, do it for orgo. This will allow you to capitalize on the time you spend in lecture, and actually understand where your teacher is going during class.
You can try any memorization tricks you want, but as I said in another post, the goal with memorization is to maximize your Glances/Time ratio.
Do Not fall behind.
Supplemental material: I used “Organic Chemistry as a Second Language” by David Klein. There’s a version for both orgo 1 and 2. Utilize your textbook solutions manual. If your book doesn’t come with one, its definitely worth trying to find one on the internet – even purchasing used on amazon if you need to. Remember, work on practice problems first.
Check your syllabus and understand how the course will be graded. My professor’s policy was to drop each student’s lowest exam grade and not count it. So, I was able to accidentally blow one of the exams. Realize also that its easier to do well on homework assignments than it is on tests. So make sure you ace the homeworks and other general assignments so that you have a bit of a buffer when it comes to the exams. If you have close to a B+ average on exams, this may average to an A/A- when combined with the high grades you receive on the general homework assignments. Play the game.
Lab Sections: Your class will probably have a required lab period. Lab was run by Teacher’s assistants. Go to T.A. office hours (one hour a week for me) and get help. Just ask a million questions and understand how they graded and you’ll be fine.
Get to know your professor. College professors can be phenomenal people. They’re incredibly specialized in their area, and you’ll learn more about the class speaking to them for an hour than spending two days studying alone.
Study Groups: Helpful for lab sections and writing lab reports, as well has comparing solutions to difficult practice problems and homework. Having a small network (maybe 2 to 5 people) that you can call on for help while studying will prove to be beneficial. I sincerely believe that I would not be graduating college this May with a chem major were it not for the group I studied with during some of my harder classes.
Go Out. Don’t spend every night studying, give your brain a break. Studies show that you can’t really focus on one thing for more than 45 minutes anyway. Spend part of every evening studying, sure. But keep in mind that those four years goes by fast, and there’s a chance you won’t ever use the information in organic chem again.
The next part is to Review, and maximize your Glances/Time ratio. The idea here is that its more effective to look over a page 5 times spending a 1 minute each time than it is to look over a page 1 time spending 5 minutes. Do this with the flash pages you make. Don’t worry if you have trouble reading so quickly. I’m one of the slowest readers you’ll meet. Force yourself to spend as little time as possible on each flash page when reviewing. You will improve your brains ability to interpret a large amount of material during a single glance. You will soon see how the sponge of your brain collects and retains more information by seeing it many times in short flashes.
This will shorten the time you spend studying for the class. Towards the end of the semester, the 60 flash pages that you made will become readable in 60 minutes or less if you use this technique. Isn’t that incredible? You now have a tool to get through an entire semester of organic chemistry in 1 hour. | https://espressoinsight.com/tag/college/ |
It's never too early to start saving for retirement. This may make saving and planning for retirement easier than starting to save later in your career.
Saving early means:
- you have to save less each month
- your money will have more time to earn a larger amount of compound interest
Example: How much you need to save each month if you start to save for retirement early
Suppose you plan to retire in 20 years. You want to save $75,000 for your retirement. You're earning an annual interest rate of 5% compounded on your savings.
Compare how much you'd have to save each month if you start to save now or in 10 years. When you have 20 years to save instead of 10 years, you have to put $14,160 less into the bank to reach your goal. This is because you earn more money in interest the longer you save. In this example, you earn $14,020 more in interest when you have 20 years to save than when you have 10 years to save.
|Years you have to save||How much you need to save per month||Amount saved||Amount of interest earned|
|With 20 years to save||$181||$74,400||$30,960|
|With 10 years to save||$480||$74,540||$16,940|
The following graph shows how you have to save less each month if you start to save early.
Figure 1: How starting to save early means you have to save less each month
Figure 1 - Text version
|Years of saving||$181/month||$480/month|
|1||$2,222.47||0|
|2||$4,558.45||0|
|3||$7,014.35||0|
|4||$$9,595.69||0|
|5||$12,309.10||0|
|6||$15,161.33||0|
|7||$18,159.49||0|
|8||$21,311.03||0|
|9||$24,623.82||0|
|10||$28,106.09||0|
|11||$31,766.53||$5,893.85|
|12||$35,614.24||$12,089.24|
|13||$39,658.80||$18,601.60|
|14||$43,910.29||$25,447.14|
|15||$48,379.30||$32,642.92|
|16||$53,076.95||$40,206.84|
|17||$58,014.94||$48,157.75|
|18||$63,205.57||$56,515.45|
|19||$68,661.76||$65,300.73|
|20||$74,397.09||$74,535.49|
Note: the numbers are calculated using the Ontario Securities Commission’s Compound Interest Calculator.
Consider how inflation will affect your savings
Inflation is the rising cost of consumer goods and services. It's measured by the Consumer Price Index (CPI). The CPI measures changes in the price of about 600 consumer goods and services over time.
You can look at the impact of inflation in two ways:
- it will increase the cost of goods and services you buy
- it will reduce the buying power of your savings over time
For example, a $100 purchase in the year 2006 costs approximately $118 is 2016.
Example: How inflation affects your retirement savings
Suppose you plan to retire in 20 years. You want to save what $50,000 buys today.
Based on an inflation rate of 2% per year, it will take $74,300 in 20 years to buy what costs $50,000 today.
How to start saving for retirement
Start a habit of saving a portion of your pay from every paycheque if you can afford it. The earlier you start saving, the longer your money can earn interest and grow.
To reach your savings goals, learn about:
Using automatic payments and deposits can be a good way to save money. Contact your financial institution to have a set amount of your pay automatically deposited into a savings account. Consider increasing the amount of the automatic payments or deposits as your pay increases.
Balancing your current financial priorities
Saving for retirement can be difficult when you have other demands on your money, like a mortgage or rent, car payments or student loans. Make a budget so you can better figure out how much money you can afford to save for retirement.
Use the Budget Planner to help you determine where your money will go when you're retired.
Related links
Report a problem or mistake on this page
- Date modified: | https://www.canada.ca/en/financial-consumer-agency/services/retirement-planning/start-saving-retirement.html |
---
abstract: 'Acceleration-induced nonlocality is discussed and a simple field theory of nonlocal electrodynamics is developed. The theory involves a pair of real parameters that are to be determined from observation. The implications of this theory for the phenomenon of helicity-rotation coupling are briefly examined.'
address: |
Department of Physics and Astronomy\
University of Missouri-Columbia\
Columbia, MO 65211, USA
author:
- Bahram Mashhoon
title: Nonlocal electrodynamics of accelerated systems
---
relativity ,accelerated observers ,nonlocal electrodynamics
03.30+p ,11.10.Lm ,04.20.Cv
Introduction\[sec:1\]
=====================
Consider the measurement of a basic *radiation* field $\psi$ by an accelerated observer in Minkowski spacetime. According to the hypothesis of locality [@1], the observer, at each event along its worldline, is locally equivalent to an otherwise identical momentarily comoving inertial observer. The frame of this hypothetical inertial observer is related to the background global inertial frame via a Poincaré transformation; therefore, the field measured by the momentarily comoving observer is $\widehat{\psi}(\tau )=\Lambda (\tau )\psi (\tau)$, where $\tau$ is the observer’s proper time at the event under consideration and $\Lambda(\tau)$ is a matrix representation of the Lorentz group.
Let $\widehat{\Psi}$ be the field that is actually measured by the accelerated observer. The hypothesis of locality requires that $\widehat{\Psi} (\tau )=\widehat{\psi}(\tau ).$ However, the most general linear relation between $\widehat{\Psi}(\tau )$ and $\widehat{\psi}(\tau)$ consistent with causality is [@2] $$\label{eq:1}
\widehat{\Psi}(\tau )=\widehat{\psi}(\tau )+\int^\tau _{\tau _0} K(\tau ,\tau ')\widehat{\psi} (\tau ')d\tau ',$$ where $\tau_0$ is the initial instant at which the observer’s acceleration is turned on. The manifestly Lorentz-invariant ansatz involves a kernel that must be proportional to the acceleration of the observer. The kernel is determined from the postulate that a basic *radiation* field can never stand completely still with respect to an accelerated observer. This is simply a generalization of the standard result for inertial observers. A detailed analysis reveals that the only physically acceptable kernel consistent with this physical requirement is [@3]-[@6] $$\label{eq:2}
K(\tau ,\tau ')=k(\tau ')=-\frac{d\Lambda (\tau ')}{d\tau '} \Lambda^{-1}(\tau ').$$ Using this kernel, Eq. may be written as $$\label{eq:3}
\widehat{\Psi}(\tau )=\widehat{\psi} (\tau _0)-\int^\tau _{\tau_0} \Lambda (\tau ')\frac{d\psi(\tau ')}{d\tau '}d\tau '.$$ An immediate consequence of this relation is that if the accelerated observer passes through a spacetime region where the field $\psi$ is constant, then the accelerated observer measures a constant field as well, since $\widehat{\Psi}(\tau )=\hat{\psi}(\tau _0)$. This is the main property of kernel and it will be used in the following section to argue that in nonlocal electrodynamics, Eq. is only appropriate for the electromagnetic potential.
The basic notions that underlie this nonlocal theory of accelerated observers appear to be consistent with the quantum theory [@7]-[@9]. Indeed, such an agreement has been the main goal of the nonlocal extension of the standard relativity theory of accelerated systems [@10; @11]. Moreover, the observational consequences of the theory are consistent with experimental data available at present. On the other hand, our treatment of nonlocal electrodynamics has thus far emphasized only *radiation* fields. However, a nonlocal field theory of electrodynamics must also deal with special situations such as electrostatics and magnetostatics. Furthermore, the application of our nonlocal theory to electrodynamics encounters an essential ambiguity: should the basic field $\psi$ be identified with the vector potential $A_\mu$ or the Faraday tensor $F_{\mu\nu}$? In our previous treatments [@10; @12], this ambiguity was left unresolved, since for the issues at hand either approach seemed to work. Nevertheless our measurement-theoretic approach to acceleration-induced nonlocality could be more clearly stated in terms of the directly measurable and gauge-invariant Faraday tensor, which was therefore preferred [@10; @12].
The main purpose of the present work is to resolve this basic ambiguity in favor of the vector potential. The physical reasons for this choice are discussed in the following section. Section \[sec:3\] is then devoted to the determination of the appropriate kernel for the nonlocal Faraday tensor. Section \[sec:4\] deals with the consequences of this approach for the phenomenon of spin-rotation coupling for photons. The results are briefly discussed in section \[sec:5\].
Resolution of the ambiguity\[sec:2\]
====================================
It is a consequence of the hypothesis of locality that an accelerated observer carries an orthonormal tetrad $\lambda^\mu_{\;\;(\alpha )}$. The manner in which this local frame is transported along the worldline reveals the acceleration of the observer; that is, $$\label{eq:4}
\frac{d\lambda^\mu _{\;\;(\alpha)}}{d\tau} =\phi_\alpha^{\;\;\beta} \lambda^\mu _{\;\;(\beta)},$$ where $\phi_{\alpha \beta}=-\phi_{\beta\alpha} $ is the antisymmetric acceleration tensor.
Let us now consider the determination of an electromagnetic field, with vector potential $A_\mu$ and Faraday tensor $F_{\mu \nu}$, $$\label{eq:5}
F_{\mu\nu}=\partial_\mu A_\nu -\partial _\nu A_\mu,$$ by the accelerated observer. The measurements of the momentarily comoving inertial observers along the worldline are given by $$\label{eq:6}
\widehat{A}_\alpha =A_\mu \lambda^\mu_{\;\;(\alpha)},\quad \widehat{F}_{\alpha \beta} =F_{\mu\nu}\lambda^\mu_{\;\;(\alpha)}\lambda^\nu_{\;\;(\beta)}.$$ Thus according to our basic ansatz [@2], the fields as measured by the accelerated observer are $$\begin{aligned}
\label{eq:7}
\widehat{\mathcal{A}}_\alpha (\tau )&=\widehat{A}_\alpha (\tau )+\int^\tau_{\tau _0}K_\alpha^{\;\;\beta}(\tau ,\tau ')\widehat{A}_\beta (\tau')d\tau',\\
\label{eq:8}\widehat{\mathcal{F}}_{\alpha \beta} (\tau )&=\widehat{F}_{\alpha \beta} (\tau )+\int^\tau _{\tau_0} K_{\alpha \beta} ^{\;\;\;\;\gamma \delta} (\tau ,\tau ')\widehat{F}_{\gamma\delta }(\tau')d\tau '.\end{aligned}$$ Though these relations may be reminiscent of the phenomenological memory-dependent electrodynamics of certain continuous media [@13], they do in fact represent field determinations in vacuum and are consistent—in the case of kernels and specified below—with the averaging viewpoint developed by Bohr and Rosenfeld [@14].
It remains to determine the kernels in Eqs. and . Specifically, which one should be identified with the result given in Eq. ? The aim of the following considerations is the construction of the simplest tenable nonlocal electrodynamics; however, there is a lack of definitive experimental results that could guide such a development. We must therefore bear in mind the possibility that future experimental data may require a revision of the theory presented in this paper.
Let us recall here the main property of kernel noted in the previous section: a uniformly moving observer enters a region of constant field $\psi$; the observer is then accelerated, but it continues to measure the same constant field. Now imagine such an observer in an extended region of constant electric and magnetic fields; we intuitively expect that as the velocity of the observer varies, the electromagnetic field measured by the observer would in general vary as well. This expectation appears to be provisionally consistent with the result of Kennard’s experiment [@15; @16]. It follows that the kernel in Eq. cannot be of the form given in Eq. . On the other hand, in a region of constant vector potential $A_\mu$, the gauge-dependent potential measured by an arbitrary accelerated observer could be constant; in fact, in this region the gauge-invariant electromagnetic field vanishes for all observers by Eqs. , and . Therefore, we assume that the kernel in Eq. is of the form given by Eq. , so that $$\label{eq:9}
K_\alpha^{\;\;\beta} (\tau ,\tau ')=k_\alpha^{\;\;\beta} (\tau '),$$ which can be expressed via Eqs. and as $$\label{eq:10} k_\alpha^{\;\;\beta}=-\phi_\alpha^{\;\;\beta}.$$ The determination of the field kernel in Eq. is the subject of the next section.
Field kernel\[sec:3\]
=====================
The first step in the determination of the kernel in Eq. is to require that $$\label{eq:11}
K_{\alpha \beta}^{\;\;\;\;\gamma\delta}(\tau ,\tau ')=k_{\alpha \beta}^{\;\;\;\;\gamma \delta} (\tau ').$$ This simplifying assumption is rather advantageous [@4]-[@6]. If the acceleration of the observer is turned off at $\tau =\tau _f$, then the new kernel vanishes for $\tau >\tau _f$. In this case, the nonlocal contribution to Eq. is a constant memory of the past acceleration of the observer that is in principle measurable. This constant memory is simply canceled in a measuring device whenever the device is reset.
Next, we assume that $k_{\alpha \beta}^{\;\;\;\;\gamma \delta}$ is linearly dependent upon the acceleration tensor $\phi_{\alpha \beta}$. Clearly, the basic notions of the nonlocal theory cannot a priori exclude terms in the kernel that would be nonlinear in the acceleration of the observer. Therefore, our linearity assumption must be regarded as preliminary and contingent upon agreement with observation.
We have argued in the previous section that the electromagnetic field kernel given by Eq. , which turns out to be $$\label{eq:12} \kappa_{\alpha \beta}^{\;\;\;\;\gamma \delta}=-\frac{1}{2} (\phi _\alpha^{\;\;\gamma} \delta _{\beta}^{\;\;\delta} +\phi _\beta^{\;\;\delta}\delta_\alpha^{\;\;\gamma} -\phi_\beta^{\;\;\gamma} \delta_\alpha^{\;\;\delta} -\phi_\alpha ^{\;\;\delta }\delta_\beta ^{\;\;\gamma }),$$ cannot be the correct kernel by itself. To proceed, we must employ the Minkowski metric tensor $\eta_{\alpha \beta}$, the Levi-Civita tensor $\epsilon_{\alpha \beta \gamma \delta}$ (with $\epsilon_{0123}=1$) and terms linear in the acceleration tensor $\phi_{\alpha \beta}(\tau )$ to generate kernels of the form $\kappa_{\alpha \beta}^{\;\;\;\;\gamma \delta} (\tau)$ that are antisymmetric in their first and second pairs of indices. A detailed discussion of such “constitutive” tensors is contained in [@6]. It appears that all such kernels are linear combinations of Eq. and its duals. The left dual results in a kernel given by $$\label{eq:13} ^\ast\kappa_{\alpha \beta}^{\;\;\;\;\gamma \delta} =\frac{1}{2} \epsilon_{\alpha \beta}^{\;\;\;\;\rho \sigma} \kappa_{\rho \sigma}^{\;\;\;\;\gamma \delta}.$$ This turns out to be equal to the kernel formed from the right dual, namely, $$\label{eq:14} \frac{1}{2} \kappa_{\alpha \beta}^{\;\;\;\;\rho \sigma} \epsilon_{\rho \sigma}^{\;\;\;\;\gamma\delta}=-\frac{1}{2} (\phi_\alpha^{\;\;\rho} \epsilon_{\rho\beta}^{\;\;\;\;\gamma\delta} -\phi_\beta^{\;\;\rho }\epsilon_{\rho\alpha}^{\;\;\;\;\gamma \delta}).$$ The equality of right and left duals in this case is due to $\phi_{\alpha\beta}=-\phi_{\beta \alpha}$ and simply follows from a general identity given on p. 255 of Ref. [@6]. In connection with the general discussion of the invariants of the constitutive tensor in [@6], let us observe that $\kappa^\gamma_{\;\;\alpha\gamma \beta} =-\phi_{\alpha\beta}$, so that $\kappa_{\alpha\beta}^{\;\;\;\;\alpha\beta}=0$ and $$\label{eq:15} \frac{1}{2}\kappa_{\gamma\delta}^{\;\;\;\;\rho\sigma} \kappa_{\rho\sigma}^{\;\;\;\;\gamma\delta}=-\phi_{\alpha \beta}\phi^{\alpha\beta}.$$ Finally, the mixed duals vanish; for instance, $$\label{eq:16} \frac{1}{2}\kappa_{\alpha \rho \sigma \beta} \; \epsilon^{\rho \sigma\gamma \delta}$$ results in a kernel of the form $$\label{eq:17} \zeta_{\alpha \beta}^{\;\;\;\;\gamma\delta}=\frac{1}{4}(\kappa_{\alpha \rho \sigma \beta}-\kappa_{\beta \rho \sigma \alpha})\epsilon^{\rho \sigma \gamma\delta},$$ which is identically zero due to the antisymmetric nature of $\phi_{\alpha \beta}$.
The above considerations suggest that a natural choice for kernel would be $$\label{eq:18} k_{\alpha \beta}^{\;\;\;\;\gamma\delta} (\tau )=p\; \kappa _{\alpha\beta}^{\;\;\;\;\gamma\delta} (\tau )+q\; {^\ast\kappa}_{\alpha \beta}^{\;\;\;\;\gamma\delta} (\tau ),$$ where $p$ and $q$ are constant real numbers such that $(p,q)\neq (1,0)$. These numerical coefficients may be determined from the comparison of the theory with observation. It is interesting to note that $\kappa_{\alpha \beta \gamma \delta}=-\kappa_{\gamma \delta \alpha \beta}$, $$\label{eq:19} {^\ast\kappa}_{\alpha\beta}^{\;\;\;\;\gamma\delta}=\frac{1}{2} (\epsilon_{\alpha\beta}^{\;\;\;\;\rho\gamma} \phi_\rho^{\;\;\delta}-\epsilon_{\alpha\beta}^{\;\;\;\;\rho\delta } \phi_\rho^{\;\;\gamma}),$$ and $\kappa$ is minus the right dual of ${^\ast\kappa}$, namely, $$\label{eq:20} \kappa_{\alpha \beta}^{\;\;\;\;\gamma\delta} =-\frac{1}{2}\;{^\ast\kappa}_{\alpha\beta}^{\;\;\;\;\rho\sigma} \epsilon_{\rho\sigma}^{\;\;\;\;\gamma\delta}.$$ The implications of the new field kernel for the phenomenon of helicity-rotation coupling may be explored with a view towards possibly limiting the range of $(p,q)$. This is done in the next section.
Spin-rotation coupling\[sec:4\]
===============================
Consider the measurement of the electromagnetic field by observers that rotate uniformly with frequency $\Omega_0>0$ about the direction of propagation of an incident plane monochromatic electromagnetic wave of frequency $\omega >0$. Specifically, we imagine a global inertial frame with coordinates $(t,x,y,z)$ and a class of observers that move uniformly along straight lines parallel to the $y$ axis for $-\infty <t<0$, but at $t=0$ are forced to move on counterclockwise circular paths about the $z$ axis, which coincides with the direction of wave propagation. The signature of $\eta_{\alpha\beta}$ is assumed to be $+2$ and units are chosen such that $c=1$. For a typical observer with $z=z_0$, $x=r>0$ and $y=r\Omega_0t$ for $-\infty <t<0$ and for $t\geq 0$, $x=r\cos \varphi$ and $y=r\sin \varphi$, where $\varphi =\Omega_0t=\gamma \Omega_0\tau$. Here $\gamma$ is the Lorentz factor corresponding to $v=r\Omega_0$ and $\tau$ is the proper time of the observer. The natural tetrad frame of the observer in $(t,x,y,z)$ coordinates is given for $t\geq 0$ by $$\begin{aligned}
\label{eq:21} \lambda^\mu_{\;\;(0)} &=\gamma (1,-v\sin\varphi ,v\cos \varphi ,0),\\
\label{eq:22}\lambda^\mu_{\;\;(1)}&=(0,\cos \varphi ,\sin \varphi ,0),\\
\label{eq:23} \lambda^\mu _{\;\;(2)}&=\gamma (v,-\sin \varphi ,\cos \varphi ,0),\\
\label{eq:24}\lambda^\mu _{\;\;(3)}&=(0,0,0,1).\end{aligned}$$
The acceleration tensor $\phi_{\alpha\beta}$ in Eq. can be decomposed as $\phi_{\alpha \beta}\mapsto (-\mathbf{g},\mathbf{\Omega})$ in analogy with the Faraday tensor. Here the “electric" part $(\phi_{0i}=g_i)$ represents the translational acceleration of the observer, while the “magnetic" part $(\phi_{ij}=\epsilon_{ijk}\Omega^k)$ represents the frequency of rotation of the observer’s spatial frame with respect to a nonrotating (i.e., Fermi-Walker transported) frame. The scalar invariants $\mathbf{g}$ and $\mathbf{\Omega}$ completely characterize the acceleration of the observer.
A typical rotating observer under consideration here has a centripetal acceleration $\mathbf{g}=-v\gamma^2\Omega_0(1,0,0)$ and rotation frequency $\mathbf{\Omega} =\gamma^2\Omega_0 (0,0,1)$ with respect to the local spatial frame $\lambda^\mu _{\;\;(i)}$, $i=1,2,3$, that indicate the radial, tangential and $z$ directions, respectively.
In an incident plane monochromatic wave of positive (negative) helicity, the electric and magnetic fields rotate counterclockwise (clockwise) about the direction of wave propagation. The frequency of this rotation is equal to the wave frequency $\omega\; (-\omega)$. Now imagine, as in the previous paragraph, observers rotating about the direction of wave propagation with frequency $\Omega_0\ll\omega$. According to such observers, the electric and magnetic fields rotate with frequency $\omega-\Omega_0 \; (-\omega-\Omega_0)$ about the direction of wave propagation. Thus a typical observer perceives an incident wave of positive (negative) helicity with frequency $\widehat{\omega}=\gamma (\omega \mp\Omega_0)$, where the upper (lower) sign refers to a wave of positive (negative) helicity. Here $\gamma$ is the Lorentz factor of the observer and takes due account of time dilation. The intuitive account of helicity-rotation coupling presented here emerges from the simple kinematics of Maxwell’s theory [@17] and has a solid observational basis [@17]-[@20]. In particular, it is responsible for the phenomenon of *phase wrap-up* in the GPS system [@18; @19].
An important aspect of helicity-rotation coupling for $\omega\gg\Omega_0$ that is crucial for choosing the correct field kernel is that the helicity of the wave and hence its state of polarization should be the same for both the rotating and the static inertial observers. Thus the nonlocal part of Eq. should conform to this notion of chirality preservation.
To study kernel for the rotating observers under consideration here, it is useful to employ the decomposition $F_{\mu\nu} \mapsto (\mathbf{E},\mathbf{B})$ and replace $F_{\mu\nu}$ by a column $6$-vector $F$ that has $\mathbf{E}$ and $\mathbf{B}$ as its components, respectively. In this way, Eq. can be regarded as a matrix equation such that the kernel is a $6\times 6$ matrix. The incident electromagnetic wave can then be represented as $$\label{eq:25} F_\pm (t,\mathbf{x})=i\omega A_{\pm} \begin{bmatrix} \mathbf{e}_\pm\\ \mathbf{b}_\pm\end{bmatrix} e^{-i\omega (t-z)},$$ where $A_\pm$ is a constant amplitude, $\mathbf{e_\pm}=(\widehat{\mathbf{x}} \pm i\widehat{\mathbf{y}} )/ \sqrt2 ,$ $\mathbf{b}_\pm =\mp i\mathbf{e}_\pm$ and the upper (lower) sign represents positive (negative) helicity radiation. The unit circular polarization vectors $\mathbf{e}_\pm$ are such that $\mathbf{e}_\pm \cdot \mathbf{e}^\ast_{\pm}=1$. Our basic ansatz is linear; therefore, we use complex fields and adopt the convention that only their real parts are physically significant.
Along the worldline of a rotating observer, the field measured by the momentarily comoving inertial observers is given by [@8] $$\label{eq:26} \widehat{F}_\pm (\tau )=i\gamma \omega A_\pm \begin{bmatrix} \widehat{\mathbf{e}}_\pm \\ \widehat{\mathbf{b}}_\pm \end{bmatrix}e^{-i\widehat{\omega} \tau +i\omega z_0},$$ where $\widehat{\mathbf{b}}_\pm =\mp i\widehat{\mathbf{e}}_\pm $ and $$\label{eq:27} \widehat{\mathbf{e}}_\pm =\frac{1}{\sqrt2} \begin{bmatrix} 1\\ \pm i\gamma^{-1}\\ \pm iv \end{bmatrix}$$ are unit vectors with $\widehat{\mathbf{e}}_\pm \cdot \widehat{\mathbf{e}}_\pm ^\ast=1$. Here $\widehat{\omega} =\gamma (\omega \mp \Omega_0)$, which indicates the modification of the transverse Doppler effect by the helicity-rotation coupling. A significant implication of the hypothesis of locality is that by a mere rotation of frequency $\Omega_0=\omega$, the accelerated observer can stand completely still with respect to the incident positive-helicity radiation [@8]. Another general consequence of the hypothesis of locality should also be noted: the relative amplitude of the helicity states $(A_+/ A_-)$ is not affected by the rotation of the observer [@8]. It is important to examine how these conclusions are modified by the nonlocal theory presented here.
It follows from Eqs. , and that the kernel in matrix notation is given by $$\label{eq:28} k=p\;\kappa +q\;{^\ast \kappa},$$ where $$\label{eq:29} \kappa =\begin{bmatrix} \kappa_1 & -\kappa_2\\ \kappa_2 &\kappa_1 \end{bmatrix} ,\quad {^\ast\kappa} =\begin{bmatrix} -\kappa_2 & -\kappa _1\\ \kappa_1 & -\kappa_2 \end{bmatrix} .$$ Here $\kappa_1 =\mathbf{\Omega} \cdot\mathbf{I}$ and $\kappa_2 =\mathbf{g}\cdot \mathbf{I}$, where $I_i$, $(I_i)_{jk}=-\epsilon _{ijk}$, is a $3\times3$ matrix proportional to the operator of infinitesimal rotations about the $x^i$ axis.
Using kernel , we find that the field measured by the accelerated observer is $$\label{eq:30} \widehat{\mathcal{F}}_\pm (\tau )=\widehat{F}_\pm (\tau ) \left[ 1+\frac{(\pm p+iq)\Omega_0 }{\omega \mp \Omega_0} (1-e^{i\widehat{\omega}\tau })\right].$$ Note that $\widehat{\mathcal{F}} _\pm$ can become constant—that is, the incident wave can stand still with respect to the accelerated observer—for $\omega \mp \Omega_0 =-(\pm p+iq)\Omega_0$, which is impossible so long as $q\neq 0$. Henceforth we assume that $q$ does not vanish. For positive-helicity incident radiation at the resonance frequency $\omega =\Omega_0$, $$\label{eq:31} \widehat{\mathcal{F}}_+ (\tau )=\widehat{F}_+ [1-i(p+iq)\gamma \Omega _0\tau ],$$ where $\widehat{F}_+$ is constant. Thus the rotating observer does not stand still with the wave as a direct consequence of nonlocality; moreover, the linear divergence with time in Eq. would disappear for a finite incident pulse of radiation. Next, Eq. implies that the ratio of the measured amplitude of positive-helicity radiation to that of negative-helicity radiation is $(A_+/A_-)\rho$, where $\rho $ is given by $$\label{eq:32} \rho =\frac{\omega^2-\Omega_0^2+\Omega_0 (\omega +\Omega_0)(p+iq)}{\omega^2-\Omega_0^2-\Omega_0(\omega -\Omega_0)(p-iq)}.$$ It follows from previous results [@8] that we should expect $|\rho |>1$ for $\omega^2>\Omega^2_0$; in fact, Eq. implies that $|\rho |>1$ whenever $$\label{eq:33} p^2+q^2+p\left( \frac{\omega^2}{\Omega_0^2}-1\right)>0.$$ This relation is satisfied for $\omega^2 >\Omega^2_0$ when $p\geq 0$. These results should be compared and contrasted with similar ones given for $(p,q)=(1,0)$ in [@8], where nonlocal electrodynamics is indirectly tested by comparing its consequences with the standard quantum mechanics of the interaction of photons with rotating electrons in the correspondence limit. One may conclude from our analysis of the spin-rotation coupling in this section that in kernel $p$ and $q$ should be such that $p\geq 0$, $p\neq 1$ and $q\neq 0$. It is interesting to note that for $q\neq 0$, there is a certain nonlocality-induced helicity-acceleration coupling in the complex amplitude of the field measured by an observer that is linearly accelerated along the direction of incidence of a plane electromagnetic wave [@7]. It seems that further restrictions on $p$ and $q$ should be based on observational data.
Discussion\[sec:5\]
===================
A foundation has been laid for the simplest nonlocal field theory of electrodynamics appropriate for accelerated systems. The postulated determination of memory-dependent quantities in Eqs. and may be interpreted in terms of the projection of certain nonlocal field variables on the local tetrads. That is, we can define $\mathcal{A}_\mu$ and $\mathcal{F}_{\mu\nu}$ via $$\label{eq:34} \widehat{\mathcal{A}}_\alpha =\mathcal{A}_\mu \lambda^\mu_{\;\;(\alpha)},\quad \widehat{\mathcal{F}}_{\alpha \beta} =\mathcal{F} _{\mu\nu}\lambda^\mu_{\;\; (\alpha)} \lambda^\nu_{\;\;(\beta)}.$$ Thus for a whole class of accelerated observers, the resolvent kernels in Eqs. and may be employed together with Eq. to derive nonlocal field equations for $\mathcal{A}_\mu$ and $\mathcal{F}_{\mu\nu}$ as already illustrated in [@12]. The resulting Maxwell equations for $\mathcal{F}_{\mu\nu}$ would then supersede the special source-free case with $(p,q)=(1,0)$ discussed in [@12]. Moreover, Eq. would lead to a complicated nonlocal relationship between $\mathcal{F}_{\mu\nu}$ and the gauge-dependent potential $\mathcal{A}_\mu$. A more complete discussion of these and related issues will be presented elsewhere.
Acknowledgements {#acknowledgements .unnumbered}
================
I am grateful to Friedrich Hehl for many valuable discussions. Thanks are also due to Yuri Obukhov for helpful correspondence.
[111]{} B. Mashhoon, in: G. Rizzi, M.L. Ruggiero (eds.), Relativity in Rotating Frames (Kluwer Academic, Dordrecht, 2003) pp. 43-55.
B. Mashhoon, Phys. Rev. A 47 (1993) 4498.
U. Muench, F.W. Hehl and B. Mashhoon, Phys. Lett. A 271 (2000) 8.
C. Chicone and B. Mashhoon, Ann. Phys. (Leipzig) 11 (2002) 309.
C. Chicone and B. Mashhoon, Phys. Lett. A 298 (2002) 229.
F.W. Hehl and Y.N. Obukhov, Foundations of Classical Electrodynamics (Birkhäuser, Boston, 2003).
B. Mashhoon, Phys. Rev. A 70 (2004) 062103.
B. Mashhoon, Phys. Rev. A 72 (2005) 052105.
D. Buchholz, J. Mund and S.J. Summers, Class. Quantum Grav. 19 (2002) 6417.
B. Mashhoon, in: M. Novello (ed.), Cosmology and Gravitation (Editions Frontières, Gif-sur-Yvette, 1994) pp. 245-295.
B. Mashhoon, Lect. Notes Phys. 702 (2006) 112.
B. Mashhoon, Ann. Phys. (Leipzig) 12 (2003) 586.
H.T. Davis, The Theory of the Volterra Integral Equation of Second Kind (Indiana University Studies, 17, 1930).
N. Bohr and L. Rosenfeld, Phys. Rev. 78 (1950) 794.
E.H. Kennard, Phil. Mag. 33 (1917) 179.
G.B. Pegram, Phys. Rev. 10 (1917) 591.
B. Mashhoon, R. Neutze, M. Hannam and G.E. Stedman, Phys. Lett. A 249 (1998) 161.
B. Mashhoon, Phys. Lett. A 306 (2002) 66.
N. Ashby, Living Rev. Relativ. 6 (2003) 1.
J.D. Anderson and B. Mashhoon, Phys. Lett. A 315 (2003) 199.
| |
Dreams are a part of everyday life, and can highlight desires or fears in the most surreal or vivid ways. When you have reoccuring dreams, the indication is that they are referring to deep-seated subconscious emotions that the dreamer has left unresolved.
These dreams can encompass many scenarios; but they are not always straightforward to analyze. Dreams and what they mean can offer an insight into your subconscious.
Being able to fly is a common theme in recurring dreams. It is generally positive imagery and the dream of being able to fly usually precludes fear. Instead, it denotes creativity, widening your horizons, and overcoming the odd.
The flip side to this is that it can represent a freedom that is new to a dreamer and may indicate a wish to leave the past behind. Re-occurring dreams of flying do not in general harbor sinister portents.
The conjectured meaning of recurring dreams involving teeth loss are varied. Freud suggests a gender divide; in women it shows a desire for children, in men it is a fear of castration.
The more obvious rationalization is the underlying feeling of decay and fear of losing control of a situation. This feeling, lurking in the subconscious, can often inform many different types of dream, but in the loss of teeth it makes it visible, and a little scarier as you feel the tooth work loose and spit it out. The powerful imagery is very potent.
One of the more obvious dreams to face, being naked in a public setting means that you are feeling exposed and vulnerable. It can seem to occur at work, school, or anywhere in day to day life. The prevalent feeling of being ‘discovered’ is uncomfortable, and maybe indicative that you have shown more than you should have. It is basically a manifestation of the old tale ‘The Emperor’s New Clothes’.
An alternative meaning is an expression of freedom and emancipation, even pugnaciousness, daring the world to take you on at your most basic level.
A common recurring dream is drowning. It is easy enough to interpret the imagery as referencing being overcome by a task or force that you feel powerless to fight against. This could mean an emotional burden, professional or relationship worries, amongst others.
The dream tends to cause panic (as drowning would. but it clearly demonstrates, especially when it becomes a reoccurrence, that there is an important issue in the dreamer’s waking life that is not being dealt with that threatens to overwhelm them. The imagery of deep water also suggests a rooted fear of the unknown.
This recurring dream can take on many forms. The pursuer can be a known person, animal, or monster. The obvious interpretation is that the dreamer is fleeing from something, not necessarily someone. The imagery is representative of aspects of the situation that are to be evaded, for example, running through a forest could indicate a complex web of uncertainties.
Because of the moving nature of such a dream the scenario is prone to change, revealing more insight into the scenario bothering the dreamer. Most of the time though, it indicates a desire to escape from a responsibility or duty.
A common theme to run through dreams is to not be ready for something of import. Whether it be a school test, a presentation, social function, or travel it clearly indicates a high level of nervousness, mixed with fear and apprehension. It could be provoked by a first time experience, or something that has been long anticipated and the dreamer has built up in their mind. The level of reoccurrence seems to be dictated by the amount of notice the dreamer has, with often escalation tensions as the event draws closer.
Most people encounter this dream, which provokes a strong and vivid response. Dreams of falling are generally felt to mean a loss of control and an absence of permanency. It is also thought that it denotes a feeling of abandonment by an important figure in the dreamer’s life.
There are other elements to such a dream as well, such as inevitability and the unpleasant conclusion. The factors that prompt recurring dreams of this nature are varied. It could be a commitment, relationship failure, job worries, or financial pressures.
Like many recurring dreams, this holds an obvious interpretation. Moving in slow motion, trapped in quicksand, or being otherwise unable to move can clearly signify a rut that has developed in the dreamer’s life, be it in personal, professional, or long term goals. This can show that the dreamer is facing difficult obstacles and restrictions, highlighting the frustration and weariness of an epic slog.
Dreams of paralysis can also denote the fact that the dreamer is not making any progress and feels defeated, or trapped. The recurring nature of this dream leads to frustration and anxiety, until the seeming insurmountable obstacle is overcome.
This recurring dream is becoming more prevalent as technology becomes a ubiquitous part of life. Often involving communications equipment such as telephones or computers, the dream tends to involve an inability to operate the technology or a failure of the technology to perform as desired. This can stem from a feeling of being alienated from reality, bodily malfunctions or often a real life lack of ability to communicate or connect with another person. It can also have an element of being afraid of the future with concurrent worries about being able to keep up.
A common fear for everyone, recurring dreams of death and illness can have many meanings, both for the dreamer and for people they know. Obviously a dread of dying is a strong theme, but it can also signify the end of a relationship or process, for good or bad.
The dreamer may be wishing an element of their life goes away or may have a fear of losing that element. These dreams often occur at the beginning of a serious illness, and can sometimes be held to predict a person close to the dreamer coming to harm. They can also, simply, represent a moment of saying goodbye.
I used to get HORRIBLE dreams where i cant open or stop my mouth/jaws from crushing down on my teeth. I can feel the pain and would only wish itd end. | http://opishposh.com/top-10-most-common-recurring-dreams-and-what-they-mean/ |
II. Grading – Final Grade is based on total points accumulated for all assignments.
For example: 350 points accumulated out of 400 possible points = 88 average
- Parents can access the Genesis Parent Portal for all grades and complete assignments.
A. Tests/Quizzes
- Pen must be used on Tests/Quizzes unless it’s a scantron test which will be announced beforehand. If pen is not used, points will be taken off.
-Tests and quizzes may be taken on laptops as well.
- An outline will be posted for tests on my webpage and/or Google classroom one week
prior to the test date.
- Pop Quizzes may be given the day after a homework assignment.
- Criterion Reference Test (Final Exam -25% of 4 th M.P.) given by the Northern Valley to help determine placement in High School Science.
B. Projects
- Anything handed in late will result in a homework miss and points will be taken off each day it’s late.
- Google Classroom will have time limits on items, be aware of them and hand in work on time. If marked late on Google classroom, it’s considered a homework miss.
- When working in groups, each student is expected to do their portion of the work assigned. Inpidual student work and group work will be graded accordingly.
C. Binder and Homework
- Table of contents must be up to date and may be graded at the end of each unit.
- Binders must be headed properly and contain all assignments up to date.
- All binders will be cleaned out periodically throughout the year and all papers will be kept in a folder at school.
- Homework is due at the start of class, or when it’s due on google classroom. It is a zero otherwise.
- You will not be allowed to go to lockers and get assignments/laptop, so be prepared.
- 3 or more homework misses in one marking period will drop your grade one full letter grade. Ex. ‘A’ end of 1 st M.P. + 4 HW misses = ‘B’ for final M.P. grade.
- The homework misses will be erased after each marking period.
- Homework misses will be recorded on the Genesis Parent Portal.
D. Participation
- Class participation, overall behavior in class, being prepared for class, and showing an effort in every area can affect your grade at the end of each marking period.
- Laptops are to be used for various Science activities only.
III. Labs
- Pencil recommended
- Although working in pairs/groups, each student is expected to submit an inpidual lab sheet and/or google classroom assignment.
- Missed labs must be made up within 2 days after returning to school. Labs that are not made up in a timely fashion will result in a zero for that lab.
IV. Classroom Management
A. This classroom operates in a friendly, polite atmosphere based on mutual respect.
B. All work is to be done neatly and thoughtfully.
C. Students are expected to ask questions if they do not understand.
D. Extra help is available before and after school if requested by the student in advance. JUST ASK!
E. Laptops are to be used for Science related activities only and are only to be used at the direction of the teacher.
F. Arrangements to make up work/labs, due to an absence, are to be made by the students within 2 days after returning to school.
G. When absent the day before a test, students are still expected to take the test that next day, or the next day they return to school.
H. Make an effort in everything you do!
THE FOLLOWING ITEMS ARE TO BE BROUGHT TO CLASS EACH DAY:
** If these items are not in class each day, it could result in a homework miss.
LAPTOPS/Chromebooks– Ms. Mueller will let you know if you are using them in class that day or not, but they should always be with you.
1 BLACK BINDER – Binder will be broken up into Units based on what we are working on!
3 POCKET FOLDERS – 1 should be in your science binder, and the other 2 will be given to Ms. Mueller to be used when we clean out our binders.
3-HOLE PUNCH – attached to your binder
HIGHLIGHTERS
BLUE/BLACK PEN AND PENCILS FOR LABS
HOMEWORK
AND FINALLY…………..A SMILE AND A POSITIVE ATTITUDE!!! | https://cdw.oldtappanschools.org/teacher_webpages/mueller__kristina/s_c_i_e_n_c_e_-_overview |
Nothing beats a fresh and delicious homemade doughnuts. Follow the doughnuts recipe by Chef Rida Aftab and serve to sweet tooth.
Ingredients
- Plain flour 1 ½ cup
- Yeast 2 tsp
- Sugar (ground) ½ cup
- Egg 1
- Butter ½ cup
- Lukewarm milk to knead
- Chocolate 1 cup
- Sprinkles for decoration
- Oil for frying
Method
- In a bowl, add the plain flour, yeast, sugar, egg, butter and sufficient amount of milk.
- Kneed all the ingredients together into a smooth dough.
- Cover the bowl and leave it aside for half an hour for rising.
- Roll the dough about ½ inch thick on a floured surface and cut with a doughnut cutter.
- Now in a large pan heat oil and fry the doughnuts until they turn golden brown.
- Drain on paper towel.
- Put the chocolate into a bowl.
- Meanwhile melt chocolate in double boiler by placing the bowl into a half water filled saucepan.
- Now dip each doughnut into the melted chocolate from one side.
- Dust with icing sugar or sprinkles. | https://www.therecipespk.com/doughnuts-2/ |
The current iteration of vampire mythology has taken on an interesting twist by humanising the vampire. Rather than creating a purely monstrous entity of evil whom the humans in the story are supposed to fear because it is inhuman, creators have done something very different with the vampire. Some aspects of the legend are retained; vampires need blood to survive, for example, but almost everything else has changed. Instead of being a monster, the vampire has become more like a human with a problem, and this radically changes the way characters interact with vampires.
Most works of pop culture with vampires these days have good vampires and bad vampires, and it’s often surprisingly hard to differentiate the vampires from the humans. They have souls, they have the same motivations, they experience guilt and love and other human emotions. In short, they are basically enhanced humans, rather than their own separate entities. These narratives are not about monstrousity in the sense of the other, but rather about monstrous behaviours; because humans are just as likely to do horrible things, in this mythology, as the vampires.
Charlaine Harris’ series, for example, has good and bad vampires along with good and bad humans. The drainers, for instance, exploit vampires for their blood, and members of the Fellowship of the Sun are a particularly vicious incarnation of fundamentalist Christianity. Her vampires are treated more like human beings within the context of the story. They’re powerful, they’re different, but at the same time, they act very human in many ways. Leaving the reader wondering what the difference between vampires and humans is supposed to be, exactly, other than that one drinks blood and can’t go out in the sunlight.
This can be seen not just with vampires, but also with other monsters in mythology. The trend these days seems to be towards humanising them. Giving them human traits and human modes of communication, human motivations and attachment. The same goes for figures ostensibly considered ‘good,’ like angels and fairies; their identities have been twisted and made more complex, while at the same time, they are also made more human. Fairies are no longer purely innocent and sweet, but they’re also much closer to human than they were in previous mythologies.
What, exactly, is accomplished by blurring the lines of speciation between humans and mythical entities? In a sense, it erases their own complexity when storytellers can’t find a way to make them interesting, variable, and different without making them more like humans. The attempts at creating complex social and political structures for mythical entities look like replications of human society, as do the attempts at giving characters some kind of internal conflict. Furthermore, there seems to be a growing attachment to singling out ‘special humans’ who attract attention as love objects who are so compelling, they cross the normal divides between humans and supernatural entities.
These shifts in mythology also mean that the same things explored in literary fiction are being more directly probed in fantasy, even if no one wants to admit it. Fantasy as a genre is often maligned because it contains mythical creatures and unrealistic situations and is considered light, fluffy, and not of this world. Yet, the same things that show up in literary fiction are also appearing in fantasy[1. And always have been.]; conflict, struggle, and complex situations created by people who come from different backgrounds. It’s just that in fantasy, ‘background’ includes not just race, class, culture, but also supernatural identity.
Fantasy provides a certain amount of freedom of exploration for storytellers which isn’t always possible with literary fiction. Baroque plots are more acceptable and sometimes almost expected, and people sometimes overlook the subtlety behind those plots because they’re occupied with the ridiculousness. It’s easy to deride fantasy when you don’t actually examine it, and some of the most ardent opponents of giving fantasy its due as a genre are those who are least well-read in it; they don’t want to confront their assumptions.
At the same time humanisation erases some complexity, it also introduces a note of interest to these storylines because it takes creatures like vampires out of their traditional role as simplistic monsters. The story is no longer as straightforward as seeing a vampire, deciding it’s evil, and killing it. Suddenly, people in the narrative need to think about whether it is a good or bad vampire, whether it is seeking redemption. The story has become infinitely more complex than basic fantasy or horror where the goal is finding and eradicating monsters because there is no easy way to tell who is a monster.
Unfortunately, many creators seem to use humanisation primarily to create romances between vampires and humans, or other supernatural creatures and humans, as though this is the only way to make them interesting, or the only way to show that they’ve truly reformed. There are, of course, many other ways to illustrate the complexity of supernatural life, to create a world where supernaturals live out their lives in varied ways just like human beings do; falling back on romance feels like the cheap and easy way out and it is unfortunate to see so many creators falling into this trap of assuming a story will only grip readers if it’s got romance in it.
Some bemoan the loss of monster as villain, arguing that something has been lost when horror and fantasy no longer have simply evil creatures. I wonder, though, if it’s the loss of simplistic villains that’s the problem, or the simplistic humanisation of supernatural entities that’s the real issue. Perhaps if these stories were more complex than yet another iteration of the star-crossed romance, they’d be more interesting for consumers. | http://meloukhia.net/2012/02/human_monstrosities_vampires_and_villainism/ |
Faculty Guide to Team Projects
Are you considering using a team project in your course? Have you used them before but feel that there’s room for improvement? Wondering how to address a challenge you or your students face?
This resource provides effective, research-based practices and resources to help you create, support, and assess team projects in your class, whether it’s online, face to face, or hybrid.
Successful Project Characteristics
Effective practice: Design an authentic task that requires both collaboration and distinct contributions.
The design of a team project is crucial for student success. Research on successful team projects has identified a number of elements that are important for project success. Below is a summary of some of those elements.
Characteristics of successful team projects
- Relevant and authentic: Make it relevant to students and reflect what a professional in your field might do.
- Well-defined: Consider the difference between “Propose three solutions to end world hunger” and “Propose three ways to improve access to healthy food in a local food desert.”
- Distinct contributions from the perspectives of multiple participants: Ask yourself if an individual student could complete the project on their own. If so, it is probably not complex enough.
- Collaboration: Students should need to interact with each other and make decisions together. Avoid a project that can be easily divided up among team members just to come together at the end.
- Individual accountability: Individual accountability ensures that each student has mastered the content.
- Team accountability: Team accountability encourages the team to create a quality product.
This table lists five successful team projects used by UMN instructors. Each project has an explanation of how the above chracteristics are met.
What are additional considerations for designing your project?
Consider your answers to these questions as you begin the design process:
- Will you assign individual roles to students in the team? Some instructors require teams to assign roles. Identify specific roles for teams so that students can either volunteer or elect people to them. Possible roles to consider are facilitator, time-keeper, team-builder, recorder, spokesperson, influencer, executer, divergent thinker, analyst, coordinator, technician, expeditor, clean-up, or strategist. Consider rotating roles throughout the semester. If the project is sufficiently challenging and exciting, students are more likely to challenge themselves to take on new roles.
- What do you want students to gain? What should they learn or be able to do as a result of having completed the project? Make sure your project directly supports your learning outcomes. For instance, if you want students to develop their critical thinking skills, will this project help them do that?
- What opportunities for student choice will you provide? For instance, students could choose the topic or focus of the project, the form the project will take (written or oral presentation), or even the due date of the project (within a defined range).
- Why will doing this as a team benefit students? For instance, the project may help them develop skills that would help in landing an internship or job. Share this with the students in your description of the project and project instructions. (For more information, see Introducing the Project)
- What will the final product look like? For instance, will it be a poster, a video, a live presentation or a written document?
- Can the project be divided into intermediate steps? Dividing the project into intermediate steps allows students to turn in work for feedback early enough to make changes if they are going off course.
- How will you support students during the project process? What resources can you provide students to ensure their success? Can you set aside class time for them to work on their project in class? (For more information, see Supporting students during the project)
- How will you evaluate the final product, the group, and the individuals? Determine how much of the final grade the project will be worth. Of that, how much of the project grade goes to the entire team and how much goes to each individual student? Will you have students evaluate each other for a portion of the grade? These are all grading considerations. (For more information, see Assessing the project)
Resources:
Descriptions of effective projects
Example of a project broken down into intermediate steps
Example of a project description for students
References:
Johnson, D. W., Johnson, R. T. & Smith, K. The state of cooperative learning in postsecondary and professional settings. Educational Psychology Review, 19, 15 - 29 (2007).
Michaelsen, L. K., Knight, A. B., & Fink, L. D. Team-Based Learning, Stylus, Sterling, VA (2004).
Scager, K, Boonstra, J., Peeters, T., Vulperhorst, J., Wiegant, F. Collaborative learning in higher education: Evoking positive interdependence. CBE-Life Sciences Education, 15:ar69, 1-9 (2016).
Tomcho, T. J., Foels, R. Meta-analysis of group learning activities: Empirically based teaching recommendations. Teaching of Psychology, 39(3), 159-169 (2012). | https://cei.umn.edu/faculty-guide/successful-project-characteristics |
The book placing shelf comprises a bearing table, placing grooves are formed in the positions, close to the two sides, of the middle of the front end of the bearing table, two pieces of dustproof cloth are fixedly installed on the bearing table through fixing bolts, and sliding columns are fixedly installed on the opposite sides of the two pieces of dustproof cloth. First grabbing grooves are formed in the positions, close to the middles, of the front ends of the sliding columns, the number of the sliding columns is two, and first attraction magnets are fixedly installed on the opposite sides of the two sliding columns. According to the dustproof bookshelf, books in the bookshelf can be effectively protected through the dustproof cloth, a large amount of dust is prevented from entering the bookshelf, meanwhile, a user who is not high enough can take the books through the steps when taking the books at the high position, the audience area is increased, finally, mothballs and other articles can be placed in the ventilation grooves, and the bookshelf is convenient to use. Meanwhile, the harm of musty odor, book insects and the like is eliminated, and the safety of books is improved. | |
Bar codes combine both bars of varying thicknesses, and spaces to hold data in the horizontal direction. On the other hand, QR codes hold data two-dimensionally in both the horizontal and vertical directions, drastically increasing the volume of recordable data. QR codes consist of cell groupings, function patterns to improve reading performance, and data areas to express numerals and Roman characters, all arranged within a square. Function patterns include the cut-out symbols, alignment patterns, the timing pattern, and margin.
Until now, code searching for conventional matrix type codes took a considerable amount of time. Code searching was performed by reading the code symbol position (X, Y) and the code periphery (size: L, angle:, contour) from an uploaded image.
With QR codes, the cut-out symbols representing the code position can be searched 360°, in any direction based on the ratio of black and white scan lines (1:1:3:1:1). And since the cut-out symbols are located in three of the four corners, the code periphery can be searched based on this positional relationship. Subsequently, lengthy code searching is no longer necessary with QR codes, enabling reading speeds 20 times higher than that of conventional matrix codes.
Moreover, cut-out symbol searching can be performed by the hardware. Using our system hardware further increases overall speed by enabling image reading and processing to be conducted simultaneously.
To select error correction level, various factors such as the operating environment and QR Code size need to be considered. Level Q or H may be selected for factory environment where QR Code get dirty, whereas Level L may be selected for clean environment with the large amount of data. Typically, Level M (15%) is most frequently selected.
There are cases when the image may be read in a warped state due to the curvature of the attachment surface, or the angle of the reader. To compensate for warping, QR codes contain internal alignment patterns positioned at fixed intervals. First, the error between the assumed center position based on the contour of the code and actual center position of the alignment pattern are solved. Warping compensation is then performed based on this error, enabling codes that are warped both linearly and non-linearly to be read.
QR codes having a linking function that can divide and display a single code in several pieces (maximum 16). The divided code contains indicators to determine the number of divisions and which piece of the code is to be displayed. The data can then be arranged and read as single code, regardless of the order in which the codes are scanned by the reading device. As a result, code printing is possible, even in long, narrow spaces. | https://tagginn.com/qr-code-news/16-qr-code-basics |
According to the municipal profile at the city website (a 30 page pdf file) the City of Charlottetown was founded in 1995, from an amalgamation of municipalities. Since the arms were granted to the "old" city, the "new" city is using the arms and flag illegally until they petition the Heralds for the arms.
The City is much older than that. In 1864, Charlottetown hosted the conference between political leaders that ended up forming the Dominion of Canada in 1867, even though Prince Edward Island didn't join Confederation until 1873.
The flag of the City of Charlottetown is a banner of its arms, surrounded on three sides by a green and white border. Its field is white (officially silver), in the centre is a rectangle bearing a royal crown and with four smaller rectangles joined at each of its corners. The rectangles are all green and in proportions of 1:2; the central rectangle is 5/16 the length of the flag, the other rectangles are half that length. The crown is white, with five jewels of red-green-blue-green-red, two fleurs-de-lis of white, and a red interior. The border is formed by alternating rectangles of green and white, such that the white rectangles are part of the white field. The border rectangles also meet in angled corners at the fly end of the flag. The flag has been made in Pantone colours Silver 427C (field), Green 349U, and Lavender 253U.
Charlottetown was selected as the county seat of Queens County in the colonial survey of 1764, and named for Queen Charlotte Sophia, wife of George III; she is represented on the flag by her coronation crown. The crown also underlines the city’s importance as the provincial capital and an important community in the Canadian federation. The green rectangles (squares, if the flag were depicted in proportions of 1:1) refer to Queens Square Charlottetown, and the four historic squares in old Charlottetown (Rochford Square, Connaught Square, Hillsborough Square, and Kings Square). The pattern of the border emulates that on the provincial flag.
Robert D. Watt, Chief Herald of Canada, Canadian Heraldic Authority.
I the Chief Herald of Canada do by these Presents grant and assign to the CITY OF CHARLOTTETOWN the following Arms: Argent on a square Vert joined at each corner with a smaller square Vert a representation of the coronation crown of Queen Charlotte Sophia of England proper.
The shape (and colours) of the crown is specified: the crown of Queen Charlotte, "proper". The drawings show the crown white/"argent" with purple lining. Also note that while the text states that the arms consist of four Green Squares, the banner of arms stretches these squares to rectangles.
Charlottetown used another flag in the 1980s and 1990s. On a field of grey appears the city seal, about half the height of the flag, consisting of a disc surrounded by a white ring edged on the inside and outside in black, inscribed CITY of CHARLOTTETOWN PRINCE EDWARD ISLAND in black serif letters running clockwise from its base. In the centre of the disc is a scene in red, black, white, and grey showing a plough and a sheaf of wheat on a hillock in the foreground, and a tall skip at anchor on the ocean in the background, flying a red flag. A white ribbon with forked ends reads INCORPORATED in black sans-serif letters; at the base of the disc is AD. 1855 (the city’s founding date). | https://www.crwflags.com/fotw/flags/ca-pe-ch.html |
Wake up at the morning
This section assumes nonfaulting cache prefetch, also called nonbinding prefetch. Prefetching wake up at the morning sense only if the processor can wake up at the morning while prefetching the data; that is, the ester c do not stall but continue to supply instructions and data while waiting for the prefetched data to return.
As you would expect, the data cache for such computers is normally nonblocking. Like hardware-controlled prefetching, the goal is to overlap execution with the prefetching of data. Loops are the important targets because they lend themselves to prefetch optimizations.
If the miss penalty is small, the compiler just unrolls the loop once or twice, and it schedules the prefetches with the execution. If the miss penalty is large, it uses software pipelining (see Appendix H) or unrolls many times to prefetch data for a future iteration.
Issuing prefetch instructions incurs an instruction overhead, however, so compilers must take care to ensure that such overheads do not exceed the benefits. By concentrating on references that are likely to be cache misses, programs can avoid unnecessary prefetches while improving average memory access time significantly.
Next, insert prefetch instructions to reduce misses. Finally, calculate the number of prefetch instructions executed and the misses avoided by prefetching. The elements of a and b are 8 bytes long because they are double-precision floating-point arrays. There are 3 rows and 100 columns for a and 101 rows and 3 columns for b.
Elements of a are written in the order that they are stored in memory, so a will benefit from spatial locality: The even values of j will miss and the odd values will hit. The array b does not benefit from spatial locality because the accesses are not in the order it is stored.
The array b does benefit twice from temporal locality: oil sunflower same elements are accessed for each iteration of i, and each iteration of j uses the same value of b as the last iteration. Thus this loop will miss the data cache approximately 150 times for how to make a smile plus 101 bayer silicone for b, or 251 misses.
To simplify our optimization, we will not worry about prefetching the first accesses of the loop. These may already be in the cache, or we will pay the miss penalty of the first few elements of a or b.
If these were faulting prefetches, we could not take this luxury. The cost of avoiding 232 cache misses is executing 400 prefetch instructions, likely a good trade-off. Example Calculate the time saved wake up at the morning the preceding example.
Ignore instruction cache misses and assume there are no conflict or capacity misses in the data cache. Assume that prefetches can overlap with each other and with cache misses, thereby transferring at the maximum memory bandwidth. Here are the key loop times ignoring cache misses: the original loop takes 7 clock cycles per iteration, the first prefetch loop takes 9 clock cycles per iteration, and the second prefetch loop takes 8 clock cycles per iteration spreading the overhead of the outer for loop).
A miss takes 100 clock cycles. The first prefetch loop iterates 100 times; at 9 clock cycles per iteration the total is 900 clock cycles plus types of depression misses. This gives a total of 2400 clock cycles. Luk and Mowry (1999) have demonstrated that compiler-based prefetching can sometimes be extended to pointers as well. The issue is both whether prefetches are to data already in the cache and whether they occur early enough wake up at the morning the data to arrive by the time it is wake up at the morning.Further...
Comments:13.08.2020 in 19:37 Voodoom:
It is a pity, that now I can not express - it is compelled to leave. But I will be released - I will necessarily write that I think.
15.08.2020 in 09:35 Shakakasa:
You have hit the mark. In it something is also I think, what is it good idea.
16.08.2020 in 05:14 Kagagami: | http://findemaker.xyz/human-the-heart/wake-up-at-the-morning.php |
The girls have been busy practicing, and now it’s time for the adults to get organized. The Kick-Off dinner will be thisThursday, Dec. 12, at 5:30 in the Staples Cafeteria. There will be pasta, salad, garlic bread, drinks and desserts.
This is where you get to take care of all the various sign-ups for the season. There are pasta dinners, concession stand shifts, and sandwich pick-up/delivery all needing volunteers. There are sandwiches to order, and booster cards to purchase, and probably even team gear orders to finalize. This is the time and place to get everything taken care of.
So bring your personal calendars and your check books, along with your appetites. Our coaches will be there, and they will have inspiring words to get the season off to a great start.
Hope to see you there! | https://staplesgirlsbasketball.org/2013/12/11/welcome-to-the-2013-2014-season-of-staples-girls-basketball/ |
Latest update on upcoming apps and maps. Also join our email list!
(updated December 9, 2020)
500mb Analysis - 30
00
06
12
18
24
30
36
42
48
54
60
66
72
78
84
About
Need help?
Get a forecast by email
.
These are the 500mb charts for pilots for advanced forecasting. | https://www.turbulenceforecast.com/500mb-view.php?hour=30 |
Anchors for Reading:
Key Ideas and Details:
CCSS.ELA-LITERACY.CCRA.R.1 Read closely to determine what the text says explicitly and to make logical inferences from it; cite specific textual evidence when writing or speaking to support conclusions drawn from the text.
CCSS.ELA-LITERACY.CCRA.R.2 Determine central ideas or themes of a text and analyze their development; summarize the key supporting details and ideas.
CCSS.ELA-LITERACY.CCRA.R.3 Analyze how and why individuals, events, or ideas develop and interact over the course of a text.
Integration of Knowledge and Ideas:
CCSS.ELA-LITERACY.CCRA.R.7 Integrate and evaluate content presented in diverse media and formats, including visually and quantitatively, as well as in words.
CCSS.ELA-LITERACY.CCRA.R.8 Delineate and evaluate the argument and specific claims in a text, including the validity of the reasoning as well as the relevance and sufficiency of the evidence.
CCSS.ELA-LITERACY.CCRA.R.9 Analyze how two or more texts address similar themes or topics in order to build knowledge or to compare the approaches the authors take.
Range of Reading and Level of Text Complexity:
CCSS.ELA-LITERACY.CCRA.R.10Read and comprehend complex literary and informational texts independently and proficiently.
Anchor Standards for Writing:
Text Types and Purposes1:
CCSS.ELA-LITERACY.CCRA.W.1 Write arguments to support claims in an analysis of substantive topics or texts using valid reasoning and relevant and sufficient evidence.
CCSS.ELA-LITERACY.CCRA.W.2 Write informative/explanatory texts to examine and convey complex ideas and information clearly and accurately through the effective selection, organization, and analysis of content.
CCSS.ELA-LITERACY.CCRA.W.3 Write narratives to develop real or imagined experiences or events using effective technique, well-chosen details and well-structured event sequences.
Production and Distribution of Writing:
CCSS.ELA-LITERACY.CCRA.W.4 Produce clear and coherent writing in which the development, organization, and style are appropriate to task, purpose, and audience.
CCSS.ELA-LITERACY.CCRA.W.5 Develop and strengthen writing as needed by planning, revising, editing, rewriting, or trying a new approach.
Research to Build and Present Knowledge:
CCSS.ELA-LITERACY.CCRA.W.7 Conduct short as well as more sustained research projects based on focused questions, demonstrating understanding of the subject under investigation.
CCSS.ELA-LITERACY.CCRA.W.8 Gather relevant information from multiple print and digital sources, assess the credibility and accuracy of each source, and integrate the information while avoiding plagiarism.
CCSS.ELA-LITERACY.CCRA.W.9 Draw evidence from literary or informational texts to support analysis, reflection, and research.
Range of Writing:
CCSS.ELA-LITERACY.CCRA.W.10 Write routinely over extended time frames (time for research, reflection, and revision) and shorter time frames (a single sitting or a day or two) for a range of tasks, purposes, and audiences.
Anchor standards for Speaking and Listening:
Comprehension and Collaboration:
CCSS.ELA-LITERACY.CCRA.SL.1 Prepare for and participate effectively in a range of conversations and collaborations with diverse partners, building on others' ideas and expressing their own clearly and persuasively.
CCSS.ELA-LITERACY.CCRA.SL.2 Integrate and evaluate information presented in diverse media and formats, including visually, quantitatively, and orally.
CCSS.ELA-LITERACY.CCRA.SL.3 Evaluate a speaker's point of view, reasoning, and use of evidence and rhetoric.
Presentation of Knowledge and Ideas:
CCSS.ELA-LITERACY.CCRA.SL.4 Present information, findings, and supporting evidence such that listeners can follow the line of reasoning and the organization, development, and style are appropriate to task, purpose, and audience.
CCSS.ELA-LITERACY.CCRA.SL.6 Adapt speech to a variety of contexts and communicative tasks, demonstrating command of formal English when indicated or appropriate.
Anchor standards for Language:
Conventions of Standard English:
CCSS.ELA-LITERACY.CCRA.L.1 Demonstrate command of the conventions of standard English grammar and usage when writing or speaking.
CCSS.ELA-LITERACY.CCRA.L.2 Demonstrate command of the conventions of standard English capitalization, punctuation, and spelling when writing.
Knowledge of Language:
CCSS.ELA-LITERACY.CCRA.L.3 Apply knowledge of language to understand how language functions in different contexts, to make effective choices for meaning or style, and to comprehend more fully when reading or listening.
Vocabulary Acquisition and Use:
CCSS.ELA-LITERACY.CCRA.L.6 Acquire and use accurately a range of general academic and domain-specific words and phrases sufficient for reading, writing, speaking, and listening at the college and career readiness level; demonstrate independence in gathering vocabulary knowledge when encountering an unknown term important to comprehension or expression.
Objectives
- Identify several challenges created by history, geography, topography.
- Investigate one of the chosen challenges in the country of the student’s choice and identify the root causes of this challenge.
- Identify viable solutions to this challenge.
- Articulate these solutions in a formal presentation.
Resources
- internet
- text book and ancillary materials chosen by the teacher
- PowerPoint
- Cooper Hewitt 'Design for the Other 90% link http://www.cooperhewitt.org/?s=Design+for+the+Other+90%25
Vocabulary
- climate: the average and variations of weather in a region over long periods of time
- tropical: relating to the geographic region of the Earth where the sun reaches a point directly overhead, the Zenith, at least once during the solar year
- subtropical: relating to the zones of the Earth immediately north and south of the tropic zone, which is bounded by the Tropic of Cancer and the Tropic of Capricorn
- arid: lacking sufficient water or rainfall; dry
- Micro lending: the extension of very small loans (microloans) to the unemployed, to poor entrepreneurs, and to others living in poverty who are not considered bankable. These individuals lack collateral, steady employment and a verifiable credit history and therefore cannot meet even the most minimal qualifications to gain access to traditional credit
Procedures
1. After the unit on the chosen continent, the students individually identify 5 challenges they see within an identified country on the continent of study. Students then pair off, share their ideas, and reduce their two lists to one list of three challenges. As a class the students compile these challenges on the board.
2. The teacher presents the project to the students and explains that the students must first identify one challenge from the board and analyze it. The students must then propose solutions. Once the students have identified a solution and researched it, they will have to present this solution in both a presentation and in writing. The teacher demonstrates what is expected with a generic presentation.
3. Students then pair off to identify and research the specific challenge they have chosen within their region. They must identify the specific elements of the challenge using the textbook first. They must present a process paper which must include information regarding: who, what, why, and how.
4. The students should utilize suggested websites (i.e., Design for the Other 90%) to intensify their research. They should create note cards that include the necessary information and discuss any accompanying issues. The students should be analyzing any issues identified.
5. The students use the note cards and researched materials to write a three page paper which should identify and analyze the challenge and propose and support a designed solution which could address one of the following issues: need for entrepreneurial start-up capital, micro-lending institutions, affordable and well designed money making equipment, products, affordable shelters, and education on networking for marketing and distribution.
6. The students must then present the challenge and solution in a formal setting. Students in the audience use a critique and analysis sheet provided by the teacher as they watch the presentations.
Assessment
Enrichment Extension Activities
- Students could research and develop a design for the proposed solution as well as a marketing plan for this design.
- Students could design and establish a fund-raising event (within the school or community-at-large) in order to establish their own micro lending "institution", focused on "the other 90%". | https://dx.cooperhewitt.org/lessonplan/critical-analysis-leads-to-global-action/ |
Description): The goal of this Mentored Research Scientist Development Award is to allow the applicant to develop the research skills necessary to be independent in the investigation of movement impairments in people with spinal pain conditions using clinical and instrumented measures. The long term goals of the proposed research are to understand the nature and specificity of the movement impairments found in spinal pain conditions and to use this information to design and test rehabilitation and prevention strategies for these conditions. Studies have been designed to test the general hypothesis that mechanical low back pain (MLBP) results, in part, from a tendency of the lumbar spine to favor movement in a specific direction when moving the trunk or limbs. The tendency to move in a specific direction is proposed to develop as a consequence of repetition of movements performed during daily work and leisure activities. The experiments will address whether or not (1) distinguishable groups of MLBP can be identified based on direction-specific impairments measured during a clinical examination, (2) there is a relationship between the specific directions in which trunk movements are performed repeatedly and specific types of movement impairments identified in people with MLBP, and (3) there are altered patterns of trunk muscle recruitment in people with MLBP that perform trunk movements repeatedly in a specific direction. To address these hypotheses, data from tests from a clinical examination that assesses direction-specific, mechanically-based impairments, as well as kinematic and electromyographic (EMG) data will be examined. In the first experiment, a data set of direction-specific clinical examination variables from people with MLBP will be tested for the presence of distinguishable groups of MLBP using advanced, multivariate techniques. A second experiment will compare the number and extent of direction-specific impairments in a cohort of people with MLBP performing repeated trunk movements in a specific direction, and a control group. A third experiment will examine the relationship between hip and trunk rotation impairments in people with and without MLBP performing repeated trunk rotation movements. Finally, trunk muscle recruitment patterns during extremity movements will be compared in a cohort of people with MLBP performing repeated trunk movements in a specific direction, and a control group. The proposed experiments are designed to determine the nature of the movement impairments in MLBP and to relate these impairments to specific, everyday activities the persons perform repeatedly. | https://grantome.com/grant/NIH/K01-HD001226-01A1 |
Q:
Propositional Calculus, Can someone answer the following?
Can somebody please solve the following equations:
\begin{align}
1. \quad (A \rightarrow B)\land (A\rightarrow \neg B)=\lnot A \quad \quad \\
\end{align}
What I have got for it so far is
$$(¬A\lor B)\land (¬A\lor ¬B)\\
(¬A\lor B)∧¬A \lor (¬A\lor B)∧¬B\\
(¬A\lor ¬A)\lor (B∧¬A) \lor (¬A∧¬B) \lor(B∧¬B)$$
After this I'm not sure.
Thanks in advance.
A:
$$ \begin{align} (A \rightarrow B)\wedge (A\rightarrow \lnot B)
& \equiv (\lnot A\lor B)\land (\lnot A \lor \lnot B)\tag{Implication}\\
&\equiv \lnot A \lor (B \land \lnot B)\tag{Distribution}\\
& \equiv\lnot A \lor 0\\
&\equiv \lnot A
\end{align}$$
$$\begin{align} A\rightarrow (B\rightarrow C) &\equiv \lnot A \lor (\lnot B \lor C) \tag{Implication $\times 2$}\\
&\equiv (\lnot A \lor \lnot B)\lor C \tag{Associativity of $\lor$}\\
&\equiv \lnot(A\land B) \lor C\tag{DeMorgan's}\\
&\equiv (A\land B)\rightarrow C\tag{Implication}
\end{align}$$
Now go back and try working out the third exercise, using any or all of the above identities. If you'd like to check out what you get for $(3)$, feel free to comment below this post.
| |
The galaxies may contain a small, sparse population of black holes with masses much greater than those known until quite recently. These more massive black holes, with masses tens of times greater than that of the Sun, seem to form as a result of catastrophic, rare events: the collision and fusion of two less-massive black holes, detected for the first time in late 2015 by the instruments of the Laser Interferometer Gravitational-Wave Observatory (LIGO). In an article published on June 1, 2017 in the journal Physical Review Letters, the LIGO researchers describe the third recorded event of this type, which was also the farthest away.
The collision and fusion of black holes described occurred 3 billion light-years from Earth. It is the result of the collision between a black hole with 31.2 solar masses and another with 19.4 solar masses. The collision resulted in the birth of a black hole with 48.7 times the Sun’s mass. In the fraction of a second the event lasted, a colossal amount of energy was released—equivalent to that stored in the mass of two stars like the Sun—in the form of gravitational waves. Predicted by the general theory of relativity formulated in 1915 by Albert Einstein, these subtle space-time deformations propagate in the vacuum at the speed of light and traveled for 3 billion years to reach Earth. On January 4, 2017—at precisely 10:11 and 58 seconds UTC, or two hours earlier Brasília Time — LIGO’s two detectors, located 3,000 km apart in the United States, recorded the passage of this gravitational wave through the earth almost simultaneously.
This detection took place a few weeks after the start of the second LIGO data collection campaign, after its detectors were improved to become more sensitive. Earlier, two other direct detections of gravitational waves had been confirmed. The first was in September 2015, the result of the birth of a 62-solar-mass black hole 1.3 billion light years from Earth. The second was in December 2015, of a 21-solar-mass black hole that formed a little farther away, 1.4 billion light years from here (see bit.ly/GravOndas and Pesquisa FAPESP Issue Number 241).
“We have further confirmation of the existence of stellar-mass black holes that are larger than 20 solar masses,” stated physicist David Shoemaker of the Massachusetts Institute of Technology (MIT) in the press release announcing the third detection of a gravitational wave. He was recently chosen as the spokesperson for the LIGO scientific collaboration, which includes almost 1,000 researchers from different countries, including Brazil. “We knew nothing about these objects until LIGO detected them.
Uncertain origin
Before, we only knew about stellar black holes, which resulted from the explosive death of stars, with masses of up to 20 solar masses. “They were completely different objects, found in our own galaxy, the Milky Way, that did not arise from the merger of binary black-hole systems,” says Italian physicist Riccardo Sturani, professor at the Federal University of Rio Grande do Norte, who, along with physicist Odylio Aguiar and his team at the National Institute for Space Research (INPE), is a LIGO collaborator.
Sturani is studying the dynamics of binary black-hole systems and the gravitational waves they produce when merging. “The black holes detected by LIGO are believed to have originated from the explosion of very massive stars,” says the Italian physicist. “But we still do not know if, in these pairs, the black holes arose from the explosion of stars that formed and remained near each other, or if the stars appeared separately and then drew closer together, each captured by the gravitational pull of the other.”
The results presented in early June 2017 are incremental and had less of an impact than those published previously by the LIGO group. Even so, they give us some clue as to what may have happened to the pair of black holes detected this year.
The shape of the gravitational waves emitted during the merger suggests that they were not spinning in the same direction before colliding, which would be expected if they had formed together. For this reason, they are suspected to have arisen independently within a large cluster of stars and only united later. “We are beginning to gather statistics on binary black-hole systems,” physicist Keita Kawabe of the California Institute of Technology (Caltech) told reporters. According to Sturani, another 20 or 30 events like these need to be detected in order to be able to say, with statistical significance, which of the two models describe what occurs in nature.
“The three LIGO detections of gravitational waves have begun to reveal that there is a population of these objects,” says physicist Rodrigo Nemmen of the Institute of Astronomy, Geophysics and Atmospheric Sciences at the University of São Paulo (IAG-USP), who is not involved with LIGO. He is studying the behavior of stellar black holes and states that LIGO’s results should lead to updates of stellar evolution models. “Important changes in knowledge, such as those produced by Galileo and Copernicus, were a consequence of advances in instruments,” Nemmen reminds us. “LIGO does the same by allowing us to study these very energetic phenomena that do not emit light and show us something we did not expect.”
Projects
1. Gravitational wave research (No. 13/04538-5); Grant Mechanism Young Researchers Program; Principal Investigator Riccardo Sturani (IFT-Unesp); Investment R$256,541.00.
2. Gravitational wave astronomy – FAPESP-MIT (No. 14/50727-7); Grant Mechanism Regular Research Grant; Principal Investigator Riccardo Sturani (IFT-Unesp); Investment R$29,715.00.
3. New physics from space: Gravitational waves (No. 06/56041-3); Grant Mechanism Thematic Project; Principal Investigator Odylio Denys de Aguiar (INPE); Investment R$1,019,874.01. | https://revistapesquisa.fapesp.br/en/a-revealing-collision/ |
Q:
2000s Young adult book about a community that lived underground and another that lived in the city above; some people became dragons when they dreamed
I'm looking for a book I used to love when I was a teenager for my cousin. I think it would be in the 2000s now in the UK.
It was about this community that lived underground and another that lived in the city above. A young boy wanted to come to the surface as he dreamed of dragons and there was a girl in the castle above whose mother was of status.
When they both dreamed they turned into dragons. There was another boy who pretended to be a servant who was actually the "bad guy" who also became a dragon when he dreamed and they ended up fighting as dragons and killed him. I think one of the underground characters was called Scrub?
A:
The book is called Basilisk (2004) by N.M. Browne.
This evocative story of greed, power, and deception sweeps from the underground cave network of the Combers, living like spiders among the endless tunnels and ropes, to the beautiful city inhabited by Abovers. When a young man named Rej discovers the body of a murdered Abover in the combes, their worlds begin to draw closer. He swears vengeance for the murdered man and takes a great risk in going above. There he is placed in the care of Donna, a beautiful young woman trapped in her life as a worker. Food and clothing are rationed, while slaves and workers are forced to live in meager barracks. But Rej and Donna have more in common than a miserable existence; they have weirdly identical dreams of dragons flying in a clear blue sky. They are even more surprised to learn that the city's cruel leader, the Arkel, is determined to find a way to bring just such dreams to life in order to literally scare the population to death. The connection Rej and Donna make leads them on a dramatic adventure to save their loved ones from the Arkel's terrifying plans. N. M. Browne has created an unforgettable world in this richly layered narrative.
| |
In recent years the educational world has developed a better understanding of the importance of the learning space in the learning process; matching durable and aesthetically pleasing buildings that are also fully functional and fit for purpose. In 2014 it has become evident that we have reached a stage in The British School's development where the Tafira learning spaces are not fit for purpose nor meet our pedagogical needs.
The Tafira site's infrastructure has grown and developed, over almost 50 years, with extensions and additions being made to accommodate more students, older pupils and additional curricular areas. This gradual extension of facilities has led to a situation which lacks coordinated school campus planning, some classrooms not being fit for purpose - often being initially designed for other functions - and certain features that would not meet international design and safety standards.
One major reason that the current school buildings are not adequate and appropriate is the dramatic change in expectations and emphasis in educational philosophy; facilities that were suitable and functional in 1990, let alone 1966, are now outdated and becoming obsolete. In 20th Century, education focused on inculcating knowledge and competencies necessary for the Industrial Age, normally teacher-directed with the emphasis on uniformity and conformity, and the learning environment being compartmentalised by age and detached from the community. Schools and classroom design reflected the norms of the Age, particularly teacher directed teaching and learning.
The 21st Century has witnessed a paradigm shift where globalization and unpredictable economic and social events are shaping the world and, consequentially, education. Schools must prepare young people for a world of uncertainty, change and rapid transformation, enabling their students to develop the competencies of adaptability, creativity, collaboration, responsiveness and to become self-directed and self-managed – skills that are essential for the future. 20th Century teaching methodologies, which relied upon teachers imparting their knowledge, are outdated as the 21st Century competencies cannot be developed by teachers instructing their students how to be creative or adaptable; these skills can only be truly learned thorough active, inquiry based, and real-life, learning experiences. The British School has articulated its vision of the education we wish to provide within the School Development Plan and has identified curricular, teaching methodologies, student dispositions and social responsibilities that we wish to develop. However, alongside these developments, is the school setting; the need to provide a daily learning environment that can inspire the creativity, active learning, investigation, collaboration and self-expression needed in today's education.
Classrooms connect to outside learning spaces.
Close inter connection between year groups, for cross year group learning opportunities.
Numerous spaces for individual and small group work.
Larger classrooms, providing more space for dynamic learning environments.
Specialist Secondary facilities in science, art and music.
Internal areas for working in large groups or for cross-curricular workshops and performances.
ICT connection within the school and to the outside world through a fully integrated network.
Division of school into small communities to ensure student care, well-being and safety.
A united and holistic school campus to develop whole school community.
These features, and our commitment to supporting our students in the best way possible, make this project so important and central to the educational provision of The British School of Gran Canaria. | http://www.bs-gc.com/en/about-us/building-our-future |
If there were no numbers, there would be no calendar or time. You won’t even know it’s your own birthday or your best friend’s birthday as all the days in your life will be the same. You wouldn’t know what year it is.
Contents
Mathematics creates order in our lives and prevents chaos. Certain qualities that are encouraged by mathematics are reasoning skills, creativity, abstract or spatial reasoning, critical thinking, problem-solving skills, and even effective communication skills.
life without numbers would be difficult, for example society would function without an economy, imagine New York, how would you get home without knowing which street your house is on . there would be no electronics, motor transport or skyscrapers.
Mathematics provides an effective way to build mental discipline and promotes logical thinking and mental rigour. In addition, mathematical knowledge plays a crucial role in understanding the content of other school subjects such as science, social studies and even music and art.
Math is needed at every stage of life and we cannot live without it. It is a subject that applies to all fields and professions. It tells us how things work and also allows us to predict certain things that have made us progress so much in life. It has made our life easier, not more complicated.
A universe that could not be described mathematically would have to be fundamentally irrational and not just unpredictable. Just because a theory isn’t plausible doesn’t mean we can’t describe it mathematically.
Mathematics is very useful in everyday life. We use math concepts as well as the skills we learn daily through practicing math problems. Math gives us a way to understand patterns, define relationships and predict the future. It helps us to do many important things in our daily life.
Math is vital in today’s world. Everyone uses math in our daily lives, and most of the time we don’t even realize it. Without mathematics, our world would be missing a key component in its construction. “Math is so important because it’s such a big part of our daily lives.
Statistics and probabilities can estimate the death toll from earthquakes, conflicts and other disasters around the world. It can also predict profits, how ideas spread, and how previously endangered animals might repopulate. Mathematics is a powerful tool for global understanding and communication.
Our confidence and ability to work with numbers has an impact financially, socially and professionally. It even affects our health and well-being. Some examples of how we use math every day are: Calculating how many minutes until our turn.
Mathematics is considered the mother of all sciences because it is a tool that solves problems of every other science. Other subjects such as biology, chemistry or physics are based on simple chemical solutions.
Arithmetic, algebra, and geometry were used by Babylonians and Egyptians for building and construction, and astronomy. At the beginning of the 6th century B.C. Greek mathematics was introduced by the ancient Greeks.
Latest Questions
© 2022 intecexpo.com | https://intecexpo.com/faqs/what-would-life-be-without-mathematics/ |
Festival Reception and Drinks
A chance for festival attendees to relax, continue the debates informally and enjoy a drink on behalf of Diageo. Key festival partners will highlight the importance of public discussion and open debate with a number of short speeches introduced by the AoI’s director, Claire Fox.
Throughout the evening, attendees will be entertained by The McConkey Jazz Trio – Daniel McConkey (saxophone), Joe Lee (bass) and Curtis Volp (guitar) – from Guildhall School of Music & Drama.
The Daniel McConkey trio met as undergraduates at the Guildhall School of Music and Drama, and have been playing together across London and the South East for over 3 years. Sharing a love for the classic swinging sound of the Great American Songbook, the group have previously performed at events for the likes of Channel 4, Guardian and the Commonwealth Business Banquet. Each member is a busy, creative musician in his own right, regularly playing at venues such as Ronnie Scott’s, Pizza Express Dean Street and the Savoy. The trio features Joe Lee on double bass and Curtis Volp on Guitar. | https://www.battleofideas.org.uk/session/festival-reception-and-drinks/ |
Saponification in the Soap Making Process- Identify harmful chemicals in the soap ,Saponification is at the heart of soap-making. It is the chemical reaction in which the building blocks of fats and oils (triglycerides) react with lye to form soap. Saponification literally means "turning into soap" from the root word, sapo, which is Latin for soap. The products of the saponification reaction are glycerin and soap.How Does Soapy Water Affect Plants? | HunkerThe chemical composition of soapy water differs dramatically depending on the kind of soap you use. Commercial insecticidal soap is the safest choice because it's formulated specifically to control pests and minimize injury to plants. | https://www.mademoiselle-nature.be/en1/3948_2000/Jan/Sat.html |
[C, U] (written) used to refer to an area of land when you are mentioning its natural features, for example, if it is rough, flat, etc: difficult / rough / mountainous terrain + They walked for miles across steep and inhospitable terrain.
Thesaurus dictionary
n.
topography, landscape, ground, territory:
These vehicles are specially designed for rough terrain.
Collocation dictionary
ADJ.
flat | hilly, mountainous, rocky, rough, rugged, uneven | difficult, harsh, inhospitable
difficult terrain for cycling
| familiar | unknown | boggy, marshy
VERB + TERRAIN
cross, traverse
PREP.
across/over ~
It took us the whole day to trek across the rocky terrain. | https://tudien.net/?word=terrain&dictionary=en-vi |
Q:
Test diagonalizability of $T(A)=A^t$ over all $n\times n$ matrices
Let $V$ be the vector space of all $n\times n$ matrices over $F$. Let $T$ be the linear operator on $V$ defined by $T(A)=A^t$. Test T for diagonalizability, and if $T$ is diagonalizable, find a basis for $V$ such that the matrix representation of T is diagonal.
My attempt:
if c is the eigenvalue of $T$, $T(A)=cA$, $A^t=cA$ $\implies$ $(A^t)^t=(cA)^t$ $\implies$ $A=cA^t$ $\implies$ $A^t=cA=c(cA^t)=c^2A^t$ $\implies$ $A=c^2A$ $\implies$ $c^2=1$ $\implies$ $c=1,-1$.
So $A^t=A$ or $A^t=-A$, if $A^t=A$, then $A$ is a symmetric matrix, for example, a $3\times 3$ matrix:$\begin{bmatrix}0&1&0\\1&0&0\\0&0&0\end{bmatrix}$ satisfies $A^t=A$.
if $A^t=-A$, for example:$\begin{bmatrix}0&1&0\\-1&0&0\\0&0&0\end{bmatrix}$ satisfies $A^t=-A$.
But how do I find a basis? I guess that the answer is that it is diagonalizable.
A:
Hint: Every matrix can be written as sum of a symmetrical matrix and an anti-simmetrical. Note the every simmetrical matrix is an eigenvector with eigenvalue $1$ and every anti-symmetrical matrix is an eigenvector with eigenvalue $-1$. Therefore $T$ is diagonalizable.
A:
Suppose $E_{ij}$ is an $n\times n$ matrix with its only non-zero entry at the intersection of row $i$ and column $j$ and this entry is $1$. $E_
{ij}$'s span $M_{n}(F)$. Now define
$$
_sE_{ij}=(E_{ij}+E_{ij}^\top)/2,\quad_aE_{ij}=(E_{ij}-E_{ij}^\top)/2
$$
So $S=\{_sE_{ij},\;_aE_{ij}:i,j=1,2,\cdots, n\}$ is a spanning set of $M_n(F)$ with each of the matrix being an eigenvector of $T$. There are at most $2n^2$ matrices in $S$ but noting
$_sE_{ij}={_s}E_{ji}$ lets us eliminate ${n^2\over2}-{n\over2}$ of these
$_aE_{ij}=-{_a}E_{ji}$ lets us eliminate ${n^2\over2}-{n\over2}$ of these
$_aE_{ii}=0I_n$ lets us eliminate $n$ of these
by virtue of linear dependence. So our basis is
$$
\{_sE_{ij}:j\le i\le n, j=1,2,\cdots, n\}\cup\{_aE_{ij}:j< i\le n, j=1,2,\cdots, n\}
$$
The first set has ${n^2\over2}+{n\over2}$ and the second set has ${n^2\over2}-{n\over2}$ elements.
| |
My parents asked if we'd like to go on a vacation with some of my extended family. The vacation will be spent almost entirely at a nice hostel; we mostly want to relax and spend quality family time together. In general I've enjoyed these vacations in past years, but the last two times we've tried to vacation with my daughter (now 1 year 10 months) have been extremely difficult. My two biggest problems are:
- It doesn't look like we can get something with more than one room. This means that whenever my daughter is asleep, somebody needs to be in the room with her, with the lights off, keeping pretty quiet. (She usually sleeps 2.5 hours in the early afternoon, and goes to sleep at 9pm.)
- Meals are a central part of the vacation, with all the family sitting round together. But with a toddler they become very difficult. At home I can let my daughter run around and play if she's not eating precisely when we are. But in any kind of lodging, the dining room is large and crowded with strangers. This means she needs constant supervision throughout meals - which means either me or my wife can't really participate in the "main event."
In addition to all this, there's all the usual hassle of taking care of a kid without our full homecourt arsenal of toys, food, etc. And, on the other hand, there's not much present for her to do and to enjoy, except general running around (and getting kvelled over by the entire family).
So the bottom line is, watching my daughter becomes such a hassle under these conditions, that I (and/or my wife) have little time to actually enjoy the vacation itself.
Are there ways to avoid these problems and actually enjoy what the vacation was originally meant for? Or should we pass on this particular gathering, in favor of spending family time without vacationing, and vacation time in more toddler-friendly (or toddler-free) circumstances?
Note: This vacation is planned around a Jewish holiday, during which lots of normal everyday activities are forbidden - including using computers, phones and TVs; driving (e.g. away from the hostel); listening to music. I'd appreciate answers that took this into consideration, though for the purposes of the FAQ-for-the-ages I also welcome suggestions that ignore this limitation. | https://parenting.stackexchange.com/questions/4440/handling-a-toddler-at-a-hotel-hostel |
Bob Marley, the unparalleled reggae singer, and songwriter, whose impact on music still resonates today, has astounded many with his conversion to Christianity before his passing in 1981 at only 36 years of age. Despite his untimely death, Marley’s timeless classics continue to influence and inspire people globally.
Rise to Fame
Born in 1945 in Jamaica, Bob Marley grew up in poverty in Kingston, where he first developed his musical talent. In 1963, he formed the Wailers, which went on to become one of the most popular reggae bands in the world. His music conveyed positive messages of peace, love, and unity, and he utilized his platform to advocate for social and political change.
The Shift to Christianity
Bob Marley practiced Rastafarianism for the majority of his life, which greatly influenced his music. However, in the latter part of his life, he began to delve into Christianity and eventually converted. The specific reason for his conversion is not entirely clear, but some sources suggest that it arose from a spiritual experience during his cancer treatment.
Impact on Music
Despite his conversion, Bob Marley continued to create music that reflected his beliefs and values. His later works, such as “Uprising” (1980), included religious themes and references to his newfound Christian faith. Nevertheless, his music stayed true to its roots and continued to spread messages of hope and unity.
Enduring Legacy
Bob Marley’s death in 1981 was a significant loss to the music industry, but his legacy continues to endure through his classic songs and impact on the reggae genre. He remains one of the highest-selling musicians of all time and is widely regarded as a cultural icon. His music continues to inspire and uplift people globally, showcasing his dedication to promoting love, peace, and unity through his art.
In Conclusion
Bob Marley’s conversion to Christianity may have come as a surprise to some, but it highlights his openness to new beliefs and perspectives. Despite this change, his music remained steadfast and continued to spread positive messages of hope and unity. His legacy will forever inspire people globally and his impact on music will never be forgotten. | https://www.viablogger.com/bob-marleys-transformation-a-surprising-conversion/ |
The utility model provides a packing case for tea sets is including box body and lid, the lid is the hexagon, and the box respectively is equipped with a drawer as the hexagon on six faces of box body, box body bottom plate is the hexagon, and box body bottom plate center is equipped with at the bottom of the circular inner groovy, the box bottom identical with the box body is the hexagon, open packing back, the hexagon of six inside slope facades and bottoms has made up again into the inner packing, still is equipped with small drawer in capsule of six inside slope facades, can be opened outside towards by six of box body, and ability furthest saves space, has elegant appearance, stabilizes firm, space -saving characteristics. | |
Location: Unknown, in American government possession.
Status: Unknown, presumed active.
Description and Behavior: The Organic Computer is a large living desktop computer made of organic material. The ‘tower’ is composed of grey matter of human origin contained inside of a chitinous exterior, which allows for incredibly effective multi-tasking and creative problem-solving abilities with a computing ability comparable to a supercomputer. At least three monitors are present utilising surprisingly versatile bioluminescent tissue covered in a clear thin membrane housed in a chitin shell to allow for a display. Attached are a keyboard and mouse made of muscle, bone, and cartilage which offer a means of navigating the Organic Computer’s interface. When it comes to mobility, the Computer uses tendrils largely composed of muscles emerging from each part of the entity. When not in use for moving, the tendrils wrap around the structure of wherever the Computer is to anchor itself in place.
This entity does not seem to be purposefully malicious to any person who is not a demonstrated danger to it. Many who have encountered the entity have noted that it is prone to ‘playful’ behavior. Sometimes this behavior can cause grievous injury as the Organic Computer does not seem to know its own strength. Documents detailing research about the Organic Computer, acquired after the dissolution of the International Paranormal Containment Association (IPCA) which had the entity in containment, indicate that the Computer was suspected to have been created by a technology cult ‘molding’ multiple living persons together and reshaping them via anomalous means. After the IPCA dissolved the Organic Computer managed to escape for a short time before being captured by the American government. Therefore, it remains difficult to verify the claims of the documents and witnesses as to the nature of this entity. We have placed a request with the government for our researchers to access the entity itself for more direct analysis.
Recommended Actions: It is highly unlikely for an encounter with the Organic Computer considering it is currently being contained by the American government. If the government were to somehow lose their ability to contain the entity and you were to encounter it please know that it very rarely causes harm to people, and when it does it does so in self-defence or by accident. Largely its behavior as recorded by the former IPCA seems to be friendly and desiring to help. It should be noted that as an entity formerly in containment by the IPCA, the various remnants of that organization are rumored to be reforming and seeking the reacquisition of the entities formerly in their possession. If true, that means that associating with the Computer may make you a target of these notorious operatives. | https://bad-motivations.com/2020/11/13/hlc58-the-organic-computer/ |
If you haven’t heard yet, in February, NASA revealed that they have found seven Earth-like planets!
The announcement has made waves around the world, but what does this discovery really mean? Will we suddenly find ourselves in contact with distant neighbors? What lies on the surfaces of these planets?
On this edition of Saturdays Around the World, we take the “around” piece to a whole new level, and explore a few of the worlds we now know we share the universe with!
To start, let’s get a better understanding of these planets and the science behind them from two of the members of the team that found the planets!
Here is a YouTube channel we truly love, Physics Girl, with an interview with these scientists and a little introduction to discovery!
Now that we have a little background on the TRAPPIST-1 discovery, let’s take a deeper look at how these planets compare to Earth and our own solar system. How about the question, “What would life on these planets look like?”
Here’s PBS Space Time to give us a closer look at our distant neighbors.
There are so many fascinating forces at play with a discovery this extensive!
With every subsequent encounter like this, we are able to piece together a better understanding of the universe that surrounds us.
Will we ever visit these planets? It’s unlikely. Will we know if they are home to life? Probably not for certain. Will this knowledge push us to ask better questions and seek better answers? Most definitely.
Discovery is an ongoing process. Rarely is a discovery made that does not contribute to a further line of questioning.
It’s still an amazing world, or in this case, an amazing universe, because there is so much out there left to be discovered. As our view of the vastness around us continues to grow, so too will the reaches of what our species is capable of!
Stay beautiful & keep laughing!
-Liesl
Looking for more space-talk?
There are endless things to explore out there beyond our planet, and if you want to dive into a few of them head over to our Space Category for a closer look!
Or, if you are looking for reasons to celebrate the people and places that make this planet amazing check out the archive of our Saturdays Around the World series!
You can join us on our mission to prove that it’s still an amazing world by supporting us on Patreon, where you can also get access to exclusive content. If that isn’t your style, you can help by reading and sharing our content with friends and family. The more eyeballs you can help us reach, the more positivity we can all spread together. Thanks for stopping by today!
Notes: | https://everwideningcircles.com/2017/03/04/meaning-of-the-trappist-1-discovery/ |
As I lay here and reflect on all that is going around me I can not help but get lost into the colors of this beautiful painting.
This painting is speaking to me in the beauty of it's colors but yet how messy paint can be. Intentionally or not paint can get messy.
Can you feel getting lost in the beauty of the colors? Or feel how you can be the branches? Branches grow and head many different directions but yet still be connected to one thing. The vine.
Close your eyes and picture you being the branches growing, twisting and heading in many different directions.
At the end of each branch you are producing beautiful leaves. Now take your back self where the branch started and head to another direction. Here again you produce more leaves.
As you grow and head in different directions in life you leave a special something. You leave your mark but you remain connected to the vine.
I want you to picture the vine being your ground. Your roots, your values, your dreams etc ...
Where you are today and where you have been picture how you have grown, stretched and have been impacted.
What kind of Mark have you left?
What kind of impact have you left?
How have you learned or grown from each location you visited?
Did you stay grownded to your Vine? Or did you loose your vine some where?
Can you find yourself back to your truth, to your values to what you have known in your heart? | https://www.joannaesparza.com/single-post/2017/07/26/REflection |
Stage One fire restrictions were lifted for unincorporated El Paso County and the city of Fountain on Monday.
The county has various topographical features where some terrain may experience large amounts of moisture but other portions remain dry and may have a higher risk of fire, according to El Paso County Sheriff’s Office. A few of the fire districts out east along the county border, particularly, request that you remain extremely cautious with the use of any flame producing device and or fire.
You must contact your local fire district before engaging in fire-related activities, as some have additional restrictions. Some jurisdictions may also require a permit.
Fountain Fire Chief James Maxon and the Sheriff’s Office both said they will continue to monitor weather and fire danger conditions throughout the year and may enact additional restrictions as needed in the months to come.
Allowed in the city of Fountain:
• Outdoor blasting, welding and torches with a permit issued from Fountain Fire Department
• Campfires in developed campgrounds/picnic ground
• Model rockets
• Public prescribed burning and opening burning with an Air Quality Permit issued from the El Paso County Health Department
• Recreational fires: fires conducted on private property and are enclosed in a permanently constructed fire pit made of non-combustible material such as stone, brick, concrete, metal container or fire ring with dimensions of the pit to be less than 3 feet in diameter and 2 feet in height. All materials burned in the fire pit must fit inside the dimensional confines of the pit and shall not be permitted to extend above or outside the fire pit. Shall not be conducted within 10 feet of a structure or combustible materials. Burning of trash, rubbish and yard debris is prohibited.
• Portable outdoor fireplaces shall be used in accordance with the manufacturer’s instructions and shall not be operated within 10 feet of a structure or combustible material.
• Outdoor cooking, charcoal burners and other open-flame cooking devices shall not be operated on combustible balconies or within 10 feet of combustible construction within an apartment building.
While the restrictions have been lifted, officials continue to stress using caution when using any open fire and or flame producing devices. Always keep a safe area for their use and make sure you keep fire suppression items available. | https://www.epcan.com/story/2021/03/03/news/county-lifts-fire-restrictions/9380.html |
Any new solar photovoltaic (PV) technology must reach low production costs to compete with today's market-leading crystalline silicon and commercial thin-film PV technologies. Colloidal quantum dots (QDs) could open up new applications by enabling lightweight and flexible PV modules. However, the cost of synthesizing nanocrystals at the large scale needed for PV module production has not previously been investigated. Based on our experience with commercial QD scale-up, we develop a Monte Carlo model to analyze the cost of synthesizing lead sulfide and metal halide perovskite QDs using 8 different reported synthetic methods. We also analyze the cost of solution-phase ligand exchange for preparing deposition-ready PbS QD inks, as well as the manufacturing cost for roll-to-roll solution-processed PV modules using these materials. We find that present QD synthesis costs are prohibitively high for PV applications, with median costs of 11 to 59 $ per g for PbS QDs (0.15 to 0.84 $ per W for a 20% efficient cell) and 73 $ per g for CsPbI3 QDs (0.74 $ per W). QD ink preparation adds 6.3 $ per g (0.09 $ per W). In total, QD materials contribute up to 55% of the total module cost, making even roll-to-roll-processed QDPV modules significantly more expensive than silicon PV modules. These results suggest that the development of new low-cost synthetic methods is critically important for the commercial relevance of QD photovoltaics. Using our cost model, we identify strategies for reducing synthetic cost and propose a cost target of 5 $ per g to move QD solar cells closer to commercial viability.
Colloidal quantum dots (QDs) have been widely investigated as an avenue toward ultra-low-cost solar photovoltaics (PV), alongside organics and metal halide perovskites. It is often implicitly assumed—and explicitly stated—that QD-based PV technologies can reach low cost because they employ low-cost, abundant elements and low-temperature, high-throughput manufacturing processes. However, this argument holds true only if QDs can be synthesized at low cost—materials dictate the module cost floor. Here we report the first detailed analysis of the cost of large-scale QD synthesis for PV applications. Our Monte Carlo approach constitutes a complete cost modeling framework for QD photovoltaics, from raw precursors to finished modules. We find that QD synthesis is prohibitively expensive today, highlighting the importance of synthetic cost for the commercial viability of QD solar technologies and guiding further research toward promising synthetic directions.
For any emerging PV technology to compete in mainstream PV markets, however, the module cost per watt must be significantly lower than crystalline silicon (c-Si) PV, for which module prices have dropped below 0.40 $ per W.8 Achieving this target will likely require each layer in the device stack to reach negligible costs (<0.05 $ per W), given the high cost of encapsulants and other balance-of-module components.9 Low material costs can be readily achieved with polycrystalline perovskite thin films. For example, a total cost of 0.07 $ per m2 to 1.15 $ per m2 has been calculated for MAPbI3 precursors9,10—equivalent to <0.01 $ per W for aperture-area efficiencies of over 10%. However, reaching such low costs may be difficult for colloidal QDs, which must be synthesized prior to deposition.
Colloidal PbS QDs can be synthesized using a variety of reported methods. These approaches can be classified by synthesis strategy (e.g., hot injection,11,12 heat-up,13 or continuous flow14) and precursor chemistry (e.g., PbO and bis(trimethylsilyl)sulfide (TMS-S),11,14,15 PbO and substituted thioureas,16 lead acetate (PbAc) and TMS-S,17,18 PbCl2 and elemental sulfur,19,20 PbCl2 and TMS-S,13 or PbCl2 and thioacetamide (TAA)21). The most common approach is the PbO and TMS-S hot injection route pioneered by Hines and Scholes,11,12 although many different methods have produced high-efficiency devices.
In this work, we analyze the cost of leading PbS and CsPbI3 perovskite QD synthesis and ink preparation methods, guided by direct commercial experience with high-volume QD production. Our Monte Carlo modeling approach allows us to account for the uncertainty in input parameters and robustly determine the QD contribution to future PV module costs, which we compare to the cost of polycrystalline perovskite PV modules. Using our model, we identify the most promising strategies for further cost reductions in colloidal QD synthesis.
We also evaluate the cost of 2 ink preparation methods for PbS QDs, one using PbX2 and ammonium acetate (AA) (Liu, 2016)1 and another using PbI2 only (Aqoma, 2017).2 All procedures used in this analysis are summarized in Table S1 (ESI†).
In any cost model, many input parameters are inherently uncertain. A Monte Carlo approach incorporates the known uncertainty in parameter values. Instead of a single most-likely value calculated from a conventional spreadsheet model, a Monte Carlo model produces a cost distribution that encompasses both a central value and the associated uncertainty. This distributional information can identify key areas for improvement and inform decisions that depend in part on the risk tolerance of the decision-maker. Monte Carlo models are thus often used in project planning and cost assessment.
We developed a Monte Carlo cost model for QD materials production based on the approach of Chang and colleagues for perovskite PV manufacturing.10 We define a process as one production step and a process sequence as one or more linked processes leading to the finished product (i.e., a QD solution or ink).
In this work, we analyze two process sequence types—QD synthesis (consisting of synthesis, crashout, and cleaning/preparation steps) and ink preparation (consisting of a single ink formulation step). Each process step incurs costs due to materials, labor, capital expenditure (capex), operating expenditure (opex), and yield loss. These component costs per gram are calculated from the input parameters listed in Fig. 1.
Fig. 1 Monte Carlo cost modeling of colloidal QD synthesis. Each modeled process sequence consists of 3 distinct process steps: synthesis, crashout, and cleaning/preparation. Synthesis refers to the primary synthetic step (hot injection, heating up a precursor solution, or continuous flow synthesis). Crashout includes repeated precipitation and redispersal, characterization, and analysis of the QD product. Cleaning includes glassware cleaning and drying, followed by preparation for the next synthesis (degassing precursors and setting up equipment).
For each Monte Carlo model run, a value for each input parameter is randomly selected from a probability distribution. Here we use the PERT distribution (shown in the inset of Fig. 1), which is often used to model expert estimates because it is intuitively parametrized by minimum (low), most likely (nominal), and maximum (high) guesses. We perform 10 000 Monte Carlo runs for each process sequence, producing a distribution of 10 000 values for each cost component. This output—summarized as 10th, 50th (median), and 90th percentile values—gives a direct measure of the uncertainty in our cost estimates.
The materials cost M [$ per g] for each process depends on the precursor cost P [$ per unit] and the amount of precursor used per gram of product U [unit per g]: M = U × P.
Precursor costs depend on the purity and purchase volume. Our input costs are based on the largest purchase volumes available across leading commercial suppliers including Strem, Sigma Aldrich, EMD Millipore, and Alfa Aesar. Nominal and high cost estimates use the purity level reported in the original protocol, while low estimates use the lowest-cost purity available. We note that there is no straightforward correlation between precursor purity and synthesized material quality—low-purity precursors have been used commercially to produce high-quality QDs—although batch-to-batch consistency is important for process control. When no purity level is reported in the protocol, the lowest-cost purity is used for both the low and nominal estimates. Economies of scale are incorporated by applying a volume pricing discount of 30%, 50%, and 80% for every 10× increase in the purchase volume for the high, nominal, and low precursor costs, respectively. These discount values are estimated from volume pricing data obtained from suppliers for materials used in this analysis (Fig. S1, ESI†). To obtain the fully scaled purchase volume, we assume that 3 months’ worth of precursor materials are purchased at once, corresponding to 274 syntheses for the nominal time per synthesis and capacity utilization values specified below.
To calculate the precursor usage per gram of product, we need both the precursor usage per synthesis and the synthesis yield—or equivalently, the precursor utilization. The precursor usage per synthesis is directly calculated from literature protocols. Reported reaction volumes range from roughly 10 mL to 1 L, with linear scaling of precursor quantities reported over this range. We further scale precursor quantities to a 5 L reaction volume, as discussed below. No uncertainty is included in these estimates—i.e., low, nominal, and high values are the same. The synthesis yield—in grams of product—is based on the reported yield for the low and nominal estimates and the yield assuming 100% utilization of the limiting elemental precursor (sulfur for PbS, cesium for CsPbI3) for the high estimate. For CsPbI3 QDs, the synthesis yield was not reported, so we assume 80%, 90%, and 100% utilization of cesium for the low, nominal, and high estimates.
Crashout protocols are taken from the referenced papers. When crashout details are not specified, we assume a standard protocol used in our labs. Solvent volumes for crashout were calculated assuming a final QD concentration after crashout of 60 mg mL−1—a typical value in our lab. We assume additional hexane equal to 20% of the reactor volume is used for cleaning.
The labor cost L [$ per g] depends on the number of operators N [persons], average labor wages W [$ per h per person], and the process throughput τ [g h−1]: . Here we consider labor costs separately from operating expenditures.
All QD synthesis process sequences employ a total of 3 operators—1 each for synthesis, crashout, and cleaning. Ink preparation employs 1 operator. Low, nominal, and high estimates for labor wages are assumed to be 23.1, 46.2, and 69.3 $ per h, respectively. The nominal value is calculated from the weighted average of direct labor rates for 1 senior scientist (46 $ per h) and 2 skilled technicians (27 $ per h) with fringe benefits of 40%, based on wages at a production facility in Massachusetts, U. S. A. No indirect labor costs from general and administrative (G&A) activities are included. Equipment maintenance is assumed to be carried out by the operators, with no additional maintenance labor costs.
The process throughput is the synthesis yield divided by the effective time per synthesis. The low/nominal/high estimates for throughput are determined from the low/nominal/high estimates for yield and the high/nominal/low estimates for synthesis time, respectively. Synthesis yield is defined above. Low, nominal, and high estimates for the effective synthesis time are 2, 4, and 8 hours, respectively. For ink preparation, time estimates are 0.5, 1.5, and 2 hours.
The process throughput—and thus the labor cost per gram of product—depends strongly on the size of the reaction vessel. Here we assume a 5 L reactor volume, a typical volume used at QD Vision for hot-injection synthesis at an annual production volume adequate to supply multiple commercial display and television product lines. Commercial hot-injection reactors are generally no larger than 20 L, due to the dependence of QD polydispersity on the thermal quenching rate. For each synthesis and ink preparation process sequence, the precursor material usage is scaled up linearly from the reported values to a 5 L total solution volume.
The capital expenditure C [$ per g] is the sum of depreciation costs for the equipment, facilities, and buildings used in a process step. The equipment depreciation cost is the upfront cost [$] divided by the tool-lifetime throughput [g]. The tool-lifetime throughput is the product of the process throughput [g h−1], capacity utilization [h year−1], and depreciation time [year]. Our low, nominal, and high capacity utilization estimates are 20%, 50%, and 80%—equivalent to 1752, 4380, and 7008 h year−1, respectively—to capture a broad range of possible factory operating scenarios, from a single-shift, five-day workweek to a three-shift, seven-day workweek assuming an 80% operating factor. The facility depreciation cost is specified as a fraction of the equipment depreciation cost—10%, 50%, and 100% are used as low, nominal, and high estimates, respectively. The building depreciation cost is the cost of floor space [$] divided by the building-lifetime throughput [g]. The building-lifetime throughput is calculated similarly to the tool-lifetime throughput, except with a longer nominal depreciation time (15 years instead of 7).
The cost due to yield loss Y [$ per g] is calculated as the effective value of previous process steps lost in the present step. We assume 100% yield for the synthesis, cleaning, and ink preparation steps, noting that incomplete precursor utilization during synthesis is already accounted for in the materials cost calculation. At QD Vision, synthetic yields were nearly quantitative, as is required to avoid re-nucleation and produce high-quality materials. For crashout, we assume low, nominal, and high yields of 80%, 90%, and 95%, based on commercial experience.
To quantify the impact of QD synthetic costs on the economic viability of QD solar cells, we analyze the cost of manufacturing PV modules based on representative PbS QD, CsPbI3 QD, and polycrystalline methylammonium lead iodide (MAPbI3) perovskite device stacks compatible with roll-to-roll solution processing. Similar process sequences are assumed for all 3 PV technologies to facilitate comparison (Fig. 6c). These sequences are not reported protocols but are representative of low-cost manufacturing sequences envisioned for solution-processed solar cells.9,10,25 Detailed Monte Carlo input parameters are available online as ESI† and described briefly below.
Our module cost calculations rely on low, nominal, and high QD material costs (in $ per g) calculated using the methods above. QD materials are assumed to be synthesized in-house, so no mark-up is added. For PbS QDs, the lowest-cost synthesis and ink preparation methods are added together to obtain the cost per milliliter of QD ink, assuming an ink concentration of 150 mg mL−1. Since no ink preparation protocol has been reported for CsPbI3 QDs, the ink preparation cost is assumed to be the same as for PbS QDs.
Tool costs and performance parameters are derived from literature reports and manufacturer quotes.9,10,25,26 When uncertainty estimates for tool parameters are not available, low and high values are assumed to be 80% and 120% of the nominal value, respectively. Individual process step yields between 95% and 100% are assumed, corresponding to a nominal full-sequence yield of 83%—a typical yield target for roll-to-roll PV manufacturing companies today.
Modeled costs for all QD synthesis, ink preparation, and PV module manufacturing sequences are presented in Fig. 2 and Tables S2, S4 (ESI†). For direct comparison with reported and modeled PV manufacturing costs, calculated costs are presented in terms of cost per area and per peak watt. All costs are calculated in units of $ per g. To convert from $ per g to $ per m2, we assume a 500 nm thick film consisting of 70% core material—with a density of 7.6 g cm−3 for PbS and 5.36 g cm−3 for CsPbI3—and 30% excess ligands and free space—with an average density of 0.3 g cm−3. To convert $ per m2 to $ per W, we optimistically assume a 20% cell power conversion efficiency (PCE) with a 95% geometric fill factor (achievable with laser scribing), yielding a 19% aperture-area or module PCE.
Fig. 2 Synthetic costs for PbS and CsPbI3 QDs. Synthesis procedures are denoted by the method and primary precursors. Detailed Monte Carlo model assumptions are discussed in the main text. (a) The total cost is the sum of the component costs for materials, labor, capex, opex, and crashout yield loss. For each procedure, the total cost probability distribution is shown in gray, with median $ per g, $ per m2, and $ per W values labeled above each bar. This median value is larger than the sum of individual median values due to the right skew of many of the component distributions. The low end of each distribution represents the most optimistic assumptions—e.g., low-purity precursors, high throughput, and high yield. The $ per m2 and $ per W axes are different for PbS and CsPbI3 QDs due to the lower density of CsPbI3. For reference, the $ per W price breakdown for a commercial multicrystalline silicon (mc-Si) PV module in 2017—consisting of polysilicon, wafer, cell, and module—is shown at the far left and right.27 (b) Relative cost breakdown by process step. Typically the synthesis cost is dominated by precursors, crashout cost by labor and solvents (e.g., acetone and methyl acetate), and cleaning cost by labor.
Different synthetic methods have vastly different costs per gram of QDs produced (Fig. 2a). For PbS QDs, the median of the total cost distribution ranges from 11 $ per g to 59 $ per g, corresponding to 29 $ per m2 to 160 $ per m2 for a 500 nm film and 0.15 $ per W to 0.84 $ per W for a 20% efficient cell with 19% aperture-area PCE. The lowest cost is achieved with the diffusion-controlled heat-up method employing PbCl2 and TAA precursors,21 with a median cost of 11 $ per g and 10th and 90th percentile values of 9 $ per g and 12 $ per g. Under the most optimistic assumptions—corresponding to the low end of the cost distribution—this method gives a minimum production cost of 7.4 $ per g, 20 $ per m2, or 0.11 $ per W. For CsPbI3 QDs, the median modeled cost is substantially higher—73 $ per g, 140 $ per m2, or 0.74 $ per W. Even using the lowest-cost synthetic method, the cost per watt for PbS QDs is a significant fraction of the production cost per watt of silicon PV modules, and both are far exceeded by the modeled cost for CsPbI3 QDs.
Labor costs dominate for most of the synthesis procedures, although precursor materials also contribute substantially. Hot-injection procedures employ lower precursor concentrations than other methods and thus require more labor per gram of product, since the duration of each synthesis is fixed. Capex and opex are negligible in all cases.
For most of the PbS syntheses, both the synthesis and crashout steps contribute substantially to the total cost (Fig. 2b). The high precursor concentration employed in the heat-up method of Zhang et al.13 reduces the labor cost and crashout solvent usage per gram; as a result, the total cost is dominated by the synthesis step. For CsPbI3, crashout costs dominate due to high labor and antisolvent costs, as discussed below. Cost breakdowns by process step are presented in Table S3 (ESI†).
Breaking down the total synthetic cost into granular components helps identify potential avenues for cost reduction. Fig. 3 shows the 10 largest cost components for each PbS and CsPbI3 QD synthesis method. Labor costs dominate the total synthesis cost for most methods. Precursors (e.g., TMS-S, oleylamine, oleic acid, and PbI2) and crashout solvents (e.g., methyl acetate and acetone) also contribute substantially to the total cost.
Fig. 3 Top 10 largest cost components for reported PbS and CsPbI3 QD synthesis procedures. Each labeled component is classified by color as a materials, labor, or yield loss-related cost and by number as a synthesis, crashout, or cleaning-related cost. Error bars correspond to 10th and 90th percentile values. In nearly all cases, multiple cost components contribute substantively (>1 $ per g) to the total cost.
All cost components except raw material and yield loss-related costs can be reduced by reducing the synthesis time and thus increasing throughput (Fig. 4), assuming fixed total material costs, capex, and opex. Our 4 h nominal synthesis time corresponds to a throughput of 3 g h−1 for CsPbI3 QDs, with labor costs accounting for 65% of the total cost. Doubling the throughput to 6 g h−1 (2 h synthesis time) reduces labor costs to 50% of the total.
Fig. 4 Effect of process throughput on CsPbI3 QD synthetic cost. The modeled process sequence includes hot injection, crashout, and clean-up and preparation. The nominal synthesis time in this analysis is 4 hours (3 g h−1 throughput). Increasing throughput reduces the cost of labor, capex, and opex, but does not affect the cost of materials.
Modeled costs for two PbS QD ink preparation methods based on solution-phase ligand exchange with lead halides are shown in Fig. 5. These methods yield similar median costs for ligand-exchanged QDs—6.3 $ per g, 17 $ per m2, or 0.09 $ per W for PbI2 only and 8.7 $ per g, 23.6 $ per m2, or 0.12 $ per W for PbI2/PbBr2/AA. Materials costs—primarily from octane, PbI2, and DMF—dominate the cost of ink preparation. These costs can be added to the synthesis costs above to obtain the total production cost for a device-ready QD ink—16.9 $ per g, 45.6 $ per m2, or 0.24 $ per W for the lowest-cost combination.
Fig. 5 Cost modeling of PbS QD ink preparation. (a) General strategy for solution-phase ligand exchange using lead halide (PbX2) precursors. Oleic-acid-capped PbS QDs are transferred from a nonpolar solvent (octane) to a polar solvent (DMF) upon mixing. The resulting halide-capped QDs are separated by centrifugation and redispersed in an organic solvent (butylamine, BA) to produce a QD ink suitable for single-step film deposition. (b) Modeled costs per gram of ligand-exchanged PbS QDs for two leading ink preparation methods.1,2 Monte Carlo model assumptions are discussed in the main text. Probability distributions for the total cost are shown in gray. The $ per g, $ per m2, and $ per W labels above each bar refer to the median of the total cost distribution.
High QD synthesis and ink preparation costs translate to high module costs. Fig. 6 shows roll-to-roll manufacturing costs for solution-processed PV modules employing PbS QDs, CsPbI3 QDs, and polycrystalline MAPbI3 films. For a representative process sequence based on sputtered electrodes and slot-die-coated absorbers and metal oxide transport layers, we calculate module costs of 128 $ per m2 (0.68 $ per W for a 19% efficient module) for MAPbI3, 179 $ per m2 (0.94 $ per W) for PbS QDs, and 307 $ per m2 (1.61 $ per W) for CsPbI3 QDs. QDPV module costs are dominated by the QD absorber, which contributes 29% of the total cost for PbS QD modules and 55% for CsPbI3 QD modules. In contrast, MAPbI3 precursors contribute only 0.2% of the total perovskite thin-film module cost.
Fig. 6 Modeled manufacturing cost for roll-to-roll solution-processed PV modules based on polycrystalline perovskite and QD thin-film absorbers. (a) The modeled PV device stack includes a flexible PET substrate (100 μm thick), sputtered electrodes (200 nm each), a slot-die-coated absorber layer (500 nm), and slot-die-coated metal oxide transport layers (50 nm each). Three absorber materials are considered—MAPbI3, PbS QDs, and CsPbI3 QDs. (b) Module cost breakdown. Modeled $ per m2 values are converted to $ per W assuming a 20% cell and 19% module efficiency. Typical mc-Si PV module costs are shown for comparison. (c) Median cost breakdown by process step. For both QD-based PV technologies, the absorber deposition step (highlighted in brown)—specifically the cost of the QD ink—dominates the total module cost. For all of the modeled process sequences, barrier films for encapsulation contribute significantly to the total cost.
Our results suggest that the most promising synthesis strategy for low-cost PbS QDs today is the diffusion-controlled heat-up method—particularly when using thioacetamide as the sulfur precursor as demonstrated by Huang et al.21 This method allows high precursor concentrations to be used, leading to high yields for a given reactor volume and thus lower costs (Fig. 7). Further cost reductions could be achieved by reducing labor needs (43% of total cost), reducing crashout solvent usage (29%), and using lower-purity oleylamine (10%). To our knowledge, however, high-efficiency PbS QD devices have not yet been demonstrated using the heat-up synthesis with TAA precursor.
Fig. 7 Effect of QD concentration on modeled synthesis costs. The QD concentration during synthesis is calculated from the reported precursor mass and batch volume, assuming 100% utilization of the limiting precursor (sulfur for PbS QDs, cesium for CsPbI3 QDs). (a) Synthetic costs per gram—including both materials and labor costs—generally decrease with increasing precursor and QD concentration. The materials cost decreases due to the reduced solvent usage per gram of product; however, usage of other precursors is not affected by QD concentration. (b) The labor cost fraction decreases with increasing QD concentration. At high concentrations, the total cost is dominated by materials.
For perovskite QDs, we observe surprisingly high costs given the low materials costs reported for polycrystalline perovskite films. The cost of CsPbI3 QDs is dominated by labor (62% of total), methyl acetate (12%), and high-purity PbI2 (4%). Unfortunately, alternatives to methyl acetate for purification may be limited. Initial reports found that methyl acetate was the only antisolvent compatible with the desired cubic-phase CsPbI3 QDs.4 Although only a few demonstrations of perovskite QD solar cells have been reported thus far—all at small scale—dramatic reductions in labor cost will be required to make CsPbI3 QDs cost-competitive for PV applications.
Because our Monte Carlo approach captures uncertainty in all major cost parameters, the true QD production cost for U.S.-based manufacturing is likely to fall within the distributions shown in Fig. 2 and 5. However, several assumptions in our model may be overly optimistic or pessimistic, potentially making the median value an underestimate or overestimate of the true cost, respectively. Optimistic assumptions include the use of the lowest-cost purity as the nominal precursor cost when no purity was reported, the high estimate of 100% precursor utilization for calculating synthesis yield, omission of indirect labor costs, and high cell and module aperture-area efficiencies of 20% and 19%, respectively, used for calculating $ per W values. Pessimistic assumptions include the relatively high hourly labor rates—stemming from a need for skilled operators—and the 5 L reactor volume. Although hot-injection synthesis volumes may be limited by thermal quenching rates—especially for small nanocrystals—heat-up methods could enable much larger batch volumes.
The assumptions on economies of scale in material purchasing strongly affect the modeled costs. The nominal annual QD production from our model factory ranges from 11.7 kg year−1 (11.92 g nominal yield per 5 L synthesis × 90% crashout yield × 1095 syntheses per year) for the CsPbI3 hot-injection synthesis to 241 kg year−1 (245 g nominal yield per synthesis) for the PbCl2/TMS-S heat-up synthesis. This production rate is sufficient to support an annual PV module manufacturing capacity of 1.2 MW year−1 (11.7 kg year−1 ÷ 1.921 g m−2 × 190 W m−2) to 17 MW year−1 (241 kg year−1 ÷ 2.705 g m−2 × 190 W m−2), similar to the expected capacity for a pilot manufacturing line. At this scale, the monthly usage of key precursors ranges up to 740 times the nominal unit purchase volume, as specified in the input parameter spreadsheets (ESI†). Increasing the purchase volume reduces costs both by giving the purchaser more leverage to drive down supplier margins and by increasing economies of scale in raw material production. It is difficult to determine the actual profit margins in our price data. Furthermore, savings from economies of scale are likely to plateau at high production volumes. For the range of production volumes analyzed here, however, the true material cost savings per decade increase in purchase volume should fall well within the modeled 30% to 80% range for most precursors (Fig. S1, ESI†).
Our Monte Carlo model makes two minor simplifications that should not significantly affect the calculated results. First, the same labor requirement and throughput is assumed for synthesis, crashout, and cleaning. In practice, these steps may proceed at different rates and incur different labor requirements. For example, at QD Vision, more labor was required for crashout than for synthesis, as many labor-intensive synthetic steps such as equipment set-up and precursor preparation were eliminated with permanent installations and standard automation. However, any systematic differences can be alleviated by shifting labor between processes. An operator can generally perform multiple roles in the process sequence, depending on the timing of individual steps. Second, the model assumes that all input parameters are statistically independent. This assumption is unlikely to hold strictly—for example, the factory location would similarly affect rent and labor costs. Such correlations between parameters may lead to an underestimation of the total uncertainty.28 Even so, no strong dependencies are expected between the key input parameters (precursor materials usage and cost), so errors associated with correlated costs should be minimal.
Several general strategies for reducing QD production costs can be inferred from our modeled results. One obvious strategy is to reduce material costs by avoiding expensive precursors (e.g., TMS-S, PbCl2, and methyl acetate), using lower-purity precursors, synthesizing key precursors in-house, or recycling solvents by distillation or similar methods. Implementation of a solvent recycling system could reduce material costs substantially at the cost of increased capex. Labor costs could be reduced by developing a more robust process to mitigate the need for skilled labor, manufacturing in countries with lower wages (e.g., India), or increasing automation. Automation substitutes capex for labor—a worthwhile trade-off given the present labor-dominated cost structure (25% to 65% of total). Labor costs per gram could be reduced further by increasing throughput with larger reactors and continuous flow-based synthesis and crashout methods.14,29 For perovskite QDs, intrinsic defect tolerance could enable new high-throughput synthesis pathways such as wet ball milling, which generates structural defects in conventional materials.30 New synthetic procedures should target higher precursor concentrations (Fig. 7). For a given reactor volume, increasing precursor concentration reduces the material cost per gram—due to the lower solvent volume required for synthesis and crashout—as well as the labor cost per gram.
Our Monte Carlo analysis of QD synthesis costs suggests that today's leading synthetic procedures are not yet compatible with ultra-low-cost photovoltaics at the 1 MW year−1 to 20 MW year−1 production scale. Even if 20% efficient, stable QD solar cells were available today, the QD absorber would likely be too expensive to compete with silicon PV.
With further development, however, QD solar cells could still provide a low-cost, lightweight alternative to conventional PV technologies. There is no fundamental reason why colloidal QDs must be expensive. Although some precursors are expensive, the elements used in PbS and CsPbI3 nanocrystals are relatively cheap, abundant, and produced globally in high volumes.32,33 New QD synthesis methods following the strategies outlined above could dramatically reduce costs. Future work should target total synthetic costs below 5 $ per g, or roughly 0.05 $ per W for 20% efficient perovskite and lead chalcogenide QD solar cells—still significant but likely acceptable for most PV applications. Ultimately, the development of new low-cost synthetic methods will be critically important for the commercial relevance of QD photovoltaics.
The authors thank Patrick Brown, Dane deQuilettes, and other members of the Tata-MIT GridEdge Solar team for valuable feedback. Funding for this work was provided by the Tata Trusts.
M. Liu, O. Voznyy, R. Sabatini, F. P. García de Arquer, R. Munir, A. H. Balawi, X. Lan, F. Fan, G. Walters, A. R. Kirmani, S. Hoogland, F. Laquai, A. Amassian and E. H. Sargent, Hybrid organic–inorganic inks flatten the energy landscape in colloidal quantum dot solids, Nat. Mater., 2017, 16, 258–263 CrossRef PubMed .
H. Aqoma, M. Al Mubarok, W. T. Hadmojo, E.-H. Lee, T.-W. Kim, T. K. Ahn, S.-H. Oh and S.-Y. Jang, High-Efficiency Photovoltaic Devices using Trap-Controlled Quantum-Dot Ink prepared via Phase-Transfer Exchange, Adv. Mater., 2017, 29, 1605756 CrossRef PubMed .
E. M. Sanehira, A. R. Marshall, J. A. Christians, S. P. Harvey, P. N. Ciesielski, L. M. Wheeler, P. Schulz, L. Y. Lin, M. C. Beard and J. M. Luther, Enhanced mobility CsPbI3 quantum dot arrays for record-efficiency, high-voltage photovoltaic cells, Sci. Adv., 2017, 3, eaao4204 CrossRef PubMed .
A. Swarnkar, A. R. Marshall, E. M. Sanehira, B. D. Chernomordik, D. T. Moore, J. A. Christians, T. Chakrabarti and J. M. Luther, Quantum dot-induced phase stabilization of α-CsPbI3 perovskite for high-efficiency photovoltaics, Science, 2016, 354, 92–95 CrossRef PubMed .
X. Zhang, V. A. Öberg, J. Du, J. Liu and E. M. J. Johansson, Extremely lightweight and ultra-flexible infrared light-converting quantum dot solar cells with high power-per-weight output using a solution-processed bending durable silver nanowire-based electrode, Energy Environ. Sci., 2018, 11, 354–364 RSC .
J. Jean, T. S. Mahony, D. Bozyigit, M. Sponseller, J. Holovský, M. G. Bawendi and V. Bulović, Radiative Efficiency Limit with Band Tailing Exceeds 30% for Quantum Dot Solar Cells, ACS Energy Lett., 2017, 2616–2624 CrossRef .
NREL. Best Research-Cell Efficiencies, 2017.
Price Index. pvXchange, Feb 31, 2018 at http://www.pvxchange.com/priceindex/Default.aspx?langTag=en-GB.
Z. Song, C. L. McElvany, A. B. Phillips, I. Celik, P. W. Krantz, S. C. Watthage, G. K. Liyanage, D. Apul and M. J. Heben, A technoeconomic analysis of perovskite solar module manufacturing with low-cost materials and techniques, Energy Environ. Sci., 2017, 10, 1297–1305 RSC .
N. L. Chang, A. W. Yi Ho-Baillie, P. A. Basore, T. L. Young, R. Evans and R. J. Egan, A manufacturing cost estimation method with uncertainty analysis and its application to perovskite on glass photovoltaic modules, Prog. Photovoltaics Res. Appl., 2017, 25, 390–405 CrossRef .
M. A. Hines and G. D. Scholes, Colloidal PbS Nanocrystals with Size-Tunable Near-Infrared Emission: Observation of Post-Synthesis Self-Narrowing of the Particle Size Distribution, Adv. Mater., 2003, 15, 1844–1849 CrossRef .
C. B. Murray, D. J. Norris and M. G. Bawendi, Synthesis and characterization of nearly monodisperse CdE (E = sulfur, selenium, tellurium) semiconductor nanocrystallites, J. Am. Chem. Soc., 1993, 115, 8706–8715 CrossRef .
J. Zhang, J. Gao, E. M. Miller, J. M. Luther and M. C. Beard, Diffusion-controlled synthesis of PbS and PbSe quantum dots with in situ halide passivation for quantum dot solar cells, ACS Nano, 2014, 8, 614–622 CrossRef PubMed .
J. Pan, A. O. El-Ballouli, L. Rollny, O. Voznyy, V. M. Burlakov, A. Goriely, E. H. Sargent and O. M. Bakr, Automated synthesis of photovoltaic-quality colloidal quantum dots using separate nucleation and growth stages, ACS Nano, 2013, 7, 10158–10166 CrossRef PubMed .
M. Yarema, O. Yarema, W. M. M. Lin, S. Volk, N. Yazdani, D. Bozyigit and V. Wood, Upscaling Colloidal Nanocrystal Hot-Injection Syntheses via Reactor Underpressure, Chem. Mater., 2017, 29, 796–803 CrossRef .
M. P. Hendricks, M. P. Campos, G. T. Cleveland, I. Jen-La Plante and J. S. Owen, A tunable library of substituted thiourea precursors to metal sulfide nanocrystals, Science, 2015, 348, 1226–1230 CrossRef PubMed .
L.-Y. Chang, R. R. Lunt, P. R. Brown, V. Bulović and M. G. Bawendi, Low-temperature solution-processed solar cells based on PbS colloidal quantum dot/CdS heterojunctions, Nano Lett., 2013, 13, 994–999 CrossRef PubMed .
N. Zhao, T. P. Osedach, L.-Y. Chang, S. M. Geyer, D. Wanger, M. T. Binda, A. C. Arango, M. G. Bawendi and V. Bulovic, Colloidal PbS quantum dot solar cells with high fill factor, ACS Nano, 2010, 4, 3743–3752 CrossRef PubMed .
L. Cademartiri, J. Bertolotti, R. Sapienza, D. S. Wiersma, G. von Freymann and G. A. Ozin, Multigram scale, solventless, and diffusion-controlled route to highly monodisperse PbS nanocrystals, J. Phys. Chem. B, 2006, 110, 671–673 CrossRef PubMed .
I. Moreels, Y. Justo, B. De Geyter, K. Haustraete, J. C. Martins and Z. Hens, Size-tunable, bright, and stable PbS quantum dots: a surface chemistry study, ACS Nano, 2011, 5, 2004–2012 CrossRef PubMed .
Z. Huang, G. Zhai, Z. Zhang, C. Zhang, Y. Xia, L. Lian, X. Fu, D. Zhang and J. Zhang, Low cost and large scale synthesis of PbS quantum dots with hybrid surface passivation, CrystEngComm, 2017, 19, 946–951 RSC .
L. Protesescu, S. Yakunin, M. I. Bodnarchuk, F. Krieg, R. Caputo, C. H. Hendon, R. X. Yang, A. Walsh and M. V. Kovalenko, Nanocrystals of Cesium Lead Halide Perovskites (CsPbX, X = Cl, Br, and I): Novel Optoelectronic Materials Showing Bright Emission with Wide Color Gamut, Nano Lett., 2015, 15, 3692–3696 CrossRef PubMed .
H. Aqoma and S.-Y. Jang, Solid-state-ligand-exchange free quantum dot ink-based solar cells with an efficiency of 10.9%, Energy Environ. Sci., 2018, 11, 1603–1609 RSC .
EIA. Electric Power Monthly with Data for March 2018, U.S. Energy Information Administration, 2018 at https://www.eia.gov/electricity/monthly/.
N. L. Chang, A. W. Y. Ho-Baillie, D. Vak, M. Gao, M. A. Green and R. J. Egan, Manufacturing cost and market potential analysis of demonstrated roll-to-roll perovskite photovoltaic cell processes, Sol. Energy Mater. Sol. Cells, 2018, 174, 314–324 CrossRef .
S. E. Sofia, J. P. Mailoa, D. N. Weiss, B. J. Stanbery, T. Buonassisi and I. Marius Peters, Economic viability of thin-film tandem solar modules in the United States, Nat. Energy, 2018, 3, 387–394 CrossRef .
ITRPV. International Technology Roadmap for Photovoltaic, 2016 Results, 2017.
J. R. van Dorp and M. R. Duffey, Statistical dependence in risk analysis for project networks using Monte Carlo methods, Int. J. Prod. Econ., 1999, 58, 17–29 CrossRef .
H. Lim, J. Y. Woo, D. C. Lee, J. Lee, S. Jeong and D. Kim, Continuous Purification of Colloidal Quantum Dots in Large-Scale Using Porous Electrodes in Flow Channel, Sci. Rep., 2017, 7, 43581 CrossRef PubMed .
L. Protesescu, S. Yakunin, O. Nazarenko, D. N. Dirin and M. V. Kovalenko, Low-Cost Synthesis of Highly Luminescent Colloidal Lead Halide Perovskite Nanocrystals by Wet Ball Milling, ACS Appl. Nano Mater., 2018, 1, 1300–1308 CrossRef PubMed .
T. P. Osedach, T. L. Andrew and V. Bulović, Effect of synthetic accessibility on the commercial viability of organic photovoltaics, Energy Environ. Sci., 2013, 6, 711–718 RSC .
J. Jean, P. R. Brown, R. L. Jaffe, T. Buonassisi and V. Bulović, Pathways for solar photovoltaics, Energy Environ. Sci., 2015, 8, 1200–1219 RSC .
C. Wadia, A. P. Alivisatos and D. M. Kammen, Materials availability expands the opportunity for large-scale photovoltaics deployment, Environ. Sci. Technol., 2009, 43, 2072–2077 CrossRef PubMed . | https://pubs.rsc.org/en/content/articlehtml/2018/ee/c8ee01348a |
Alternative media and processes video quiz launch video quiz: installation art how to take the quiz you can tailor this self-test quiz to give you 5, 10, 15 or more questions you may select only one answer per question. Painting is the process of applying paint to a surface using tools such as brushes, a roller, a painting knife, or a paint sprayer medium or media (pl) is the material and tools used to make a work of art. Art therapy, a hybrid field largely influenced by the disciplines of art and psychology, uses the creative process, pieces of art created in therapy, and third-party artwork to help people in.
In three-dimensional artworks, artists use media, like clay and plastic, to make solid forms that have height, width, and depth sculpture is a three-dimensional work of art , which sculptors can use clay, glass, plastics, wood, stone, or metal. Part 2 media and processes chapter 23 printmaking gateways to art: understanding the visual arts, debra j dewitte, ralph m larmann, m kathryn shields context of printmaking the earliest existing printed artworks on paper were created in china and date back to the eighth century ce by the ninth century, printed scrolls containing buddhist. The dominant artistic movement in the 1940s and 1950s, abstract expressionism was the first to place new york city at the forefront of international modern art.
Art media and processes two dimensional (2d) art work is flat meant to be viewed from one side only ex: paintings, drawings, photographs three dimensional (3d) art work has mass and volume (takes up space) art work is intended to be viewed from more than one side ex: sculptures, architecture art media/medium the materials the artist uses to create the artwork ----- 2-d media paint, pencil. An art term describing the systematic inquiry into the practices and ethos surrounding art institutions such as art academies, galleries, and museums, often challenging assumed and historical norms of artistic theory and practice. Art processes drawing, painting, collage, pottery, weaving free presentations in powerpoint format two dimensional art (drawing, painting, collage) art media famous artists for kids three dimensional art – an introduction art processes for kids for teachers primary resources art free clipart. Chapter quiz launch quiz designed to help you test your knowledge of chapter material, multiple-choice chapter quizzes provide instant feedback that helps you determine what you know and what you need to review. Visual art curriculum standards fifth grade standard 10 media, techniques, and processes students will understand and apply media, techniques, and processes.
Printmaking processes an original print is an image on paper or similar material made by one or more of the processes described here each medium has a special, identifiable quality, but because more than one impression of each image is possible, original does not mean unique. If your igcse or a level art coursework project feels stagnant, repetitive, or downright boring, you may benefit from increased experimentation with media, techniques and processes (the ideas listed below are also perfect for using in an a level or gcse art sketchbook. Elementary art – common media and processes media kindergarten grade one grade two drawing media pencil, colored pencil, markers, crayons, chalks, oil pastel. Nevada arts standards – visual arts march 2000 page 1 of 7 visual arts: knowledge content standard 10: students know and apply visual arts media, techniques, and processes by the end of grade 3, students know and are able to. Art media is the material used by an artist, composer or designer to create a work of artthis is a list of types of art and the materials used within those types.
Main media/processes drawing, painting, printmaking, sculpture, architecture, tradition of craft, visual communication design, photography, film/video and digital art, alternative media and processes drawing's purpose. —the artistic methods, processes, or means of expression, used in the visual arts to produce a work of art wikimedia commons has media related to art media pages in category art media the following 100 pages are in this category, out of 100 total. Sculpture: sculpture, an artistic form in which hard or plastic materials are worked into three-dimensional art objects the designs may be embodied in freestanding objects, in reliefs on surfaces, or in environments ranging from tableaux to contexts that envelop the spectator an enormous variety of media. Media art uses technologies that inevitably change over time, and the technologies adopted by artists using new media are representative of a given historical period conservators and curators work together to conserve this history by staying as true to the original artwork as possible.
Art: content and analysis 5135 critique personal artwork using at least two art processes and media a brings in reproductions that exhibit two different processes and that are certified as the test taker’s own work b describes/reflects on/analyzes/evaluates. Kathy leader provides mixed media art classes, art workshops and art retreats for kids and adults in west los angeles the art process studio includes summer art camps for kids, art parties and specialty creative workshops for businesses and corporations looking to improve the health and wellness of their employees.
Alternative media are media that differ from established or dominant types of media in terms of their content, production, or distribution alternative media take many forms including print, audio, video, internet and street art some examples include the counter-culture zines of the 1960s, ethnic and indigenous media such as the first people's television network in canada (later rebranded. Media can also refer to there are two different definitions of media in artwork media, as the plural for medium, refers to the type of material used by an artist to create his artwork. 40 chapter 3 the media and processes of art figure 31 this artist has developed new ways to use the process of glassblowing to create large sculptures and installations he calls the objects in this window installation “flowers” compare and contrast these glass flowers to the flowers painted by van gogh in figure 78 on page 178. Using media, materials, techniques and processes in your final piece you don't have to use all the different ideas and methods that you have explored, but your final work should be developed from.
2018. | http://hqessaybsws.icecondoassignments.info/art-media-and-processes.html |
Quantification of all fetal nucleated cells in maternal blood between the 18th and 22nd weeks of pregnancy using molecular cytogenetic techniques.
Different types of nucleated fetal cells (trophoblasts, erythroblasts, lymphocytes, and granulocytes) have been recovered in maternal peripheral blood. In spite of many attempts to estimate the number of fetal cells in maternal circulation, there is still much controversy concerning this aspect. The numbers obtained vary widely, ranging from 1 nucleated cell per 104 to 1 per 109 nucleated maternal cells. The purpose of our project was to determine the absolute number of all different types of male fetal nucleated cells per unit volume of peripheral maternal blood. Peripheral blood samples were obtained from 12 normal pregnant women known to carry a male fetus between 18 and 22 weeks of pregnancy. Three milliliters (3 ml) of maternal blood has been processed without any enrichment procedures. Fluorescence in situ hybridization (FISH) and primed in situ labeling (PRINS) were performed, and fetal XY cells were identified (among maternal XX cells) and scored by fluorescent microscopy screening. The total number of male fetal nucleated cells per milliliter of maternal blood was consistent in each woman studied and varied from 2 to 6 cells per milliliter within the group of normal pregnancies. The number of fetal cells in maternal blood, at a given period, is reproducible and can therefore be assessed by cytogenetic methods. This confirms the possibility of developing a non-invasive prenatal diagnosis test for aneuploidies. Furthermore, we demonstrate that it is possible to repeatedly identify an extremely small number of fetal cells among millions of maternal cells.
| |
- Crashes / Fires:
- 0 / 0
- Injuries / Deaths:
- 0 / 0
- Average Mileage:
- 12,500 miles
About These NHTSA Complaints:
The NHTSA is the US gov't agency tasked with vehicle safety. Complaints can be spread across multiple & redundant categories, & are not organized by problem. See the Back button — blue bar at the very top of the page — to explore more.
problem #1
Jul 092010
Yaris
- 12,500 miles
I drive a Toyota 2008 Yaris (Toyota vios here in D philppines)with a little more than 20,000 kms(20,014kms) on D odometer. I noticed an intermittent noise on D front end suspension passenger side. I brought to D dealer for an assessment of the problem, after their assessment Toyota told me that there was a problem with the shock absorber and they would replace it. They also said that there was also a problem with the alternator saying that the voltage "isn't up to standard." After a few days I got my vehicle back from Toyota. They told me that they replaced the front right shock absorber. I asked them why only one shock absorber was replaced as the usual procedure was that both front shock absorbers should be replaced, the service adviser told that Toyota is becoming more strict with warranty policies and the only part that they will replace is the defective part. Furthermore the dealer service advisor said if it was up to them they normally always replace both shock absorbers but because of Toyota's strict warranty policy they could only replace one shock absorber. This seems to be another instance of Toyota cutting corners to save cost at the expense of car safety. I live in manila, philippines and I know that this case of mine isn't in your jurisdiction, I only submit my case to you because I'm concerned that there may yarises there in the us that may have similar problems. Thank you for your time in reading my incident report. | https://m.carcomplaints.com/Toyota/Yaris/2008/suspension/suspension-front.shtml |
PROBLEM TO BE SOLVED: To reduce a machining cost of drilling and improve the life of a bar-shaped tool.
SOLUTION: This drilling device is equipped with a gun drill 3, a holder 4 for holding the gun drill 3 so that it can be put on and taken off, a coupling mechanism 8 for fixing the base end part 3b of the gun drill 3 to the holder 4 so that it can be put on and taken off, and an ultrasonic micro-vibration generator 5 coupled to the gun drill 3 through the holder 4, for giving micro- vibration in the axial center direction to the gun drill 3. A work 12 is rotated, and simultaneously the work 12 is drilled to make a deep hole having a small diameter by the gun drill 3, and chips are intermittently divided by micro- vibration in the axial center direction, therefore, the gun drill 3 is intermittently contacted with the work 12, and cutting resistance of the gun drill 3 is suppressed lowly. At the time of exchanging the gun drill 3, only the gun drill 3 can be exchanged by releasing the coupling mechanism 8, therefore, a machining cost of the article to be machined 12 can be suppressed lowly, thereby leads to the advantageous state.
COPYRIGHT: (C)2000,JPO | |
While we have a wide variety of baked goods here at Three Brothers, we tend to focus on providing Jewish treats to our community. One such treat which gets a lot of questions and interest is our challah (can be pronounced like “holla”). If you’re at all curious about this Jewish staple, this blog post is for you.
What is Challah?
Challah is a slightly sweet, eggy bread with a consistency and taste similar to brioche. According to Jewish tradition, challah refers to a section of dough which is separated after kneading to be given as an offering at the Temple. Given that we live in the age of the diaspora, this tradition is no longer maintained, and the meaning of the word “challah” has evolved to refer to the loaves of bread traditionally baked for Shabbat. Interestingly, the dual Shabbat loaves are themselves a reference to biblical manna, a substance which fell from the sky for wandering Israelites to make bread from. On Fridays, with Shabbat incoming, enough manna would fall from the sky for each household to make twice the loaves they normally would so that they would not have to bake on the Sabbath.
Challah as we know it now is largely an Ashkenazi tradition, with many Jewish communities around the world simply using whatever local bread is available for the Shabbat meals. This is further evidenced by the fact that many eastern European nations (like Poland) consume breads very similar to challah in name and composition. Here at Three Brothers, we make challah in a traditional way, using eggs, flour, water, yeast, sugar, and salt. Sometimes we add toppings like raisins, poppy seeds, or sesame seeds for extra flavor and texture.
Why is Challah?
Challah bread can come in any shape or size you need, but traditionally it is either braided or made round, in the case of the high holy days. As we explained in our blog post on Rosh Hashanah, the round shape of the challah is meant to symbolize the cyclical nature of time in the new year. There are many possible reasons for the awfully specific braided shape of challah, but we won’t cover all of them here. One possible reason is that two loaves are made from six strands of dough each, altogether counting twelve breads which is meant to represent the twelve loaves which would have been served at the Temple in the days before the diaspora. If you would like to read a bit more as to why challah is braided, we recommend reading this article from Chabad: https://www.chabad.org/library/article_cdo/aid/480266/jewish/Why-Is-Challah-Braided.htm. Essentially, braided challah is a tradition, so we keep making it that way to keep the tradition alive. The shape of braided bread is unique, and makes serving challah a communal experience as it is meant to be pulled apart in chunks. | https://3brothersbakery.com/blog/challah-a-practical-introduction/ |
- I cannot download the app using the link in the text message?
- What is the point of the app?
- Why do I need to download the SecureIdentity App?
- QR Error - Authentication failure
- iOS pop up message
- Supported device combinations
- SecureIdentity App activation steps
- Why should I scan the QR code?
- Using a single device, do I still have to scan a QR-code?
- Name & Address incorrect
- Recently moved but unable to change address
- I have more than one place of residence in the UK, which address must I use?
- I am a UK citizen but don’t have a UK address
- Can I use my nickname?
- I can’t remember how long I have lived at my address
- Combinations of UK documents that can be used
- UK Photocard Driving Licence (inc Provisional)
- UK Bank Account
- UK Birth Certificate
- UK Marriage Certificate
- UK County Court Judgement
- SID-EKB
- What kind of questions will SecureIdentity ask me during the identity test?
- I don't know the answer(s) to questions in the identity test
- Is an identity test the same as a credit check? Does it affect my credit rating?
- Registered but got the message the government need to be more confident it's you
- Sign your mortgage deed service
- Error Processing please wait - on iOS devices
- Signing in and get the message "The government service needs to be more confident it's you"
- How to sign in - Using more than one device
- Sign in - using SecureIdentity app or single smart-device
- How do I continue my registration?
- How long do I have once I have saved my registration?
- I don’t have all the information I need to hand can I stop and continue later?
- Reset PIN / forgot PIN
- I forgot my account password
- Changed Mobile journey
- App Recovery Process (Two Devices)
- Changed Mobile step by step guide
- Re validation process
- M code keeps disappearing
- Contact us and feedback
- General Service Description
- What is SecureIdentity?
- Does SecureIdentity offer different types of checks? Fingerprints etc...
- If SecureIdentity is so secure, isn’t it more complex to use?
- How Gov.UK Verify works?
- What is GOV.UK Verify?
- Why GOV.UK Verify is needed?
- Why is the Verify program happening?
- Why is the government using private sector providers?
- Will my data be sold on or used for sales purposes?
- How do I make a Subject Access Request (S.A.R)?
- How secure is my personal information?
- Will my credit or personal information be passed to anyone?
- Why do you need all my personal and financial information?
- Security Breach/Concern
- Forgotten password
- What should I do if my email and password is compromised?
- Why is SecureIdentity more secure than other ID providers?
- Is SecureIdentity accessible from other countries?
- I’ve received an email advising me that my personal information has been changed, but I didn't make any changes. | https://help.secureidentity.co.uk/hc/en-gb?mobile_site=false |
---
abstract: 'For evolution of flat universe, we classify late time and future attractors with scaling behavior of scalar field quintessence in the case of potential, which, at definite values of its parameters and initial data, corresponds to exact scaling in the presence of cosmological constant.'
author:
- 'V.V.Kiselev'
title: Scaling attractors for quintessence in flat universe with cosmological term
---
Introduction
============
Recent astronomical measurements of Super Novae Ia light curves versus their red shifts (SNIa) [@snIa; @deceldata; @SNLS], Cosmic Microwave Background Radiation anisotropy (CMBR) by Wilkinson Microwave Anisotropy Project (WMAP) [@wmap], inhomogeneous correlations of baryonic matter by Sloan Digital Sky Survey (SDSS) and 2dF Galaxy Redshift Survey [@baryon] with a high precision enforce the following picture of cosmology:
- the Universe is flat,
- its evolution is consistently driven by cosmological constant $\Lambda$ and cold dark matter (CDM), that constitutes the $\Lambda$CDM model.
Irrespective of dynamical nature for such substances, that could be different, at present any model of cosmology has to demonstrate its tiny deviation from the $\Lambda$CDM evolution at late times, i.e. the behavior of Hubble constant should scale extremely close to $$\label{H2}
H^2=H_0^2\left(\Omega_\Lambda+\frac{\Omega_M}{a^3}\right),$$ with $H_0=H(t_0)$ denoting the present day Hubble constant, $a=a(t)$ is the scale factor in the Friedmann–Robertson–Walker metric $$\label{ds}
{\rm d}s^2={\rm d}t^2-a^2(t)\,[{\rm d}r^2+r^2{\rm
d}\theta^2+r^2\sin^2\theta{\rm d}\varphi^2],$$ conveniently normalized by $a(t_0)=1$, so that $H=\dot a/a$ with dot meaning the differentiation with respect to time $t$. The fractions $\Omega_\Lambda$ and $\Omega_M$ represent the cosmological term and pressureless matter including both baryons and cold dark matter. In (\[H2\]) we neglect contributions by radiation fractions given by photons and neutrinos.
Dynamical models closely fitting the above behavior of Hubble constant include a quintessence [@quint], a scalar filed $\phi$ with slowly changing potential energy $V(\phi)$ imitating the contribution of cosmological constant (see recent review on the reconstruction of dark energy dynamics in [@SS-rev]). In present paper we find a potential of scalar field $\phi$ which exactly reproduces the scaling behavior of Hubble constant in flat universe with cosmological term[^1] (Section II). The potential is the square of hypersine with some tuned values of normalization and slope. In Section III we study the stability of scaling behavior versus the parameters of potential in terms of autonomous system of differential equations possessing critical points. We find the late time attractors that further evolve to future attractors generically different from those of late time. We discuss a physical meaning of attractors in Section IV. In Conclusion we summarize our results.
Exact solution with scaling behavior
====================================
The evolution is described by following equations: $$\label{1}
\left\{
\begin{array}{l}\displaystyle
H^2 = \frac{8\pi G}{3}\,\left(\rho_B+\frac{1}{2}\,{\dot
\phi}^2+V(\phi)\right),\\[5mm]
\displaystyle
\ddot\phi+3 H \dot\phi+\frac{\partial
V(\phi)}{\partial\phi} = 0,
\end{array}
\right.$$ where $\rho_B$ is the energy density of baryotropic matter with pressure $p_B=w_B\rho_B$, which satisfies the energy-momentum conservation $$\label{matter}
{\dot\rho}_B+3 H (\rho_B+p_B)=0,$$ yielding the scaling behavior $$\label{matter2}
\rho_B=\frac{\rho_0}{a^{3(1+w_B)}}.$$ We suppose the following scale dependence of Hubble constant $$\label{h2}
H^2=H_0^2\left(\Omega_\Lambda+\frac{\Omega_S}{a^{3(1+w_B)}}\right),$$ where $\Omega_S$ denotes the present day fraction of substance composed of baryotropic matter, cold dark matter and quintessence, that could simulate dark substances ordinary introduced in the standard consideration: the dark energy and dark matter. At late times (or at present) $w_B=0$ corresponds to nonrelativisting matter with negligibly small pressure (the dust), while $w_B=1/3$ stands for the radiation era of hot matter.
The critical density $\rho_c$ is defined by $$\label{h3}
H_0^2=\frac{8\pi G}{3}\,\rho_c,$$ so that the baryonic and dark matter fractions are presently given by $$\label{matter3}
\Omega_b=\frac{\rho_b}{\rho_c}\Big|_{t=t_0},\qquad
\Omega_{DM}=\frac{\rho_{DM}}{\rho_c}\Big|_{t=t_0},$$ while for brevity we put $$\label{Bb}
\Omega_B=\Omega_b+\Omega_{DM},\quad\mbox{or}\quad
\rho_B=\rho_b+\rho_{DM}.$$
The scalar field density and pressure $$\label{scal2}
\rho_\phi=\frac{1}{2}\,{\dot\phi}^2+V,\qquad
p_\phi=\frac{1}{2}\,{\dot\phi}^2-V.$$ determine the fraction $$\label{scal1}
\Omega_\phi=\frac{1}{\rho_c}\,\rho_\phi\Big|_{t=t_0},$$ and state parameter function $$\label{tw}
w_\phi=\frac{p_\phi}{\rho_\phi}.$$ The substance fraction is the sum of matter and field fractions $$\label{sub1}
\Omega_S=\Omega_B+\Omega_\phi.$$ We define the vacuum energy by $$\label{V_0}
V_0=\rho_c\,\Omega_\Lambda,$$ so that in flat universe $$\label{budget}
\Omega_\Lambda+\Omega_S=1.$$
Since we investigate the scaling behavior of functions homogeneous with respect to scale factor $a$, it is convenient to introduce the following variable $$\label{N}
N=\ln a(t),$$ so that the differentiation with respect to time denoted by dot is reduced to the differentiation with respect to $N$ denoted by prime, $$\label{n2}
\dot \phi=\phi'\,H,\qquad \frac{\partial V}{\partial
\phi}=\frac{V'}{\phi'}.$$ Then, the equation of motion for $\phi$ is deduced to $$\label{phi}
\frac{1}{2}\,\left(\{\phi'H\}^2\right)'+3\{\phi'H\}^2+V'=0.$$
The consideration suggests that the potential scales as[^2] $$\label{pot1}
V=V_0+\frac{\widetilde \Omega_\phi}{a^{3(w_B+1)}}\,\rho_c,$$ where $\widetilde \Omega_\phi$ is a constant. Therefore, the derivative of potential scales, too, $$\label{pot2}
V'=-3(w_B+1)\,(V-V_0).$$ According to (\[phi\]), we suggest $$\label{quad}
3\{\phi'H\}^2=-c\,V',$$ where the constant $c$ can be found from (\[phi\]), since $$\label{quad2}
(\{\phi'H\}^2)'=-\frac{c}{3}\,V'' =c(w_B+1)\,V',$$ hence, (\[phi\]) yields $$\label{phi2}
\left\{\frac{1}{2}\,c(w_B+1)-c+1\right\}V'=0,$$ that is satisfied at $$\label{c}
c=\frac{2}{1-w_B}.$$ One can easily get $$\label{dot2}
\frac{1}{2}\,(\dot\phi)^2=\frac{1}{2}\,(\phi'H)^2=%-\frac{c}{6}\,V'=\rho_c\,
\frac{1+w_B}{1-w_B}\,\frac{\widetilde\Omega_\phi}{a^{3(w_B+1)}},$$ so that $$\label{rho}
\rho_\phi=\rho_c\,\left\{
\Omega_\lambda+\frac{2}{a^{3(w_B+1)}}\,\frac{\widetilde\Omega_\phi}{1-w_B}
\right\},$$ that gives the relation $$\label{omega2}
\Omega_\phi=\frac{2}{1-w_B}\,\widetilde\Omega_\phi.$$
In order to restore the potential yielding the scaling behavior, we have to resolve (\[quad\]) making use of (\[pot1\]), (\[c\]), i.e. $$\label{Vphi}
(\phi')^2=\frac{2\rho_c\widetilde\Omega_\phi}{H^2}\,\frac{1+w_B}{1-w_B}\,
\frac{1}{a^{3(w_B+1)}},$$ where $$%\begin{equation}\nonumber%\label{HH}
H^2=\frac{8\pi
G}{3}\,\rho_c\left\{\Omega_\Lambda+\frac{1}{a^{3(w_B+1)}}\left(
\Omega_B+\frac{2\widetilde\Omega_\phi}{1-w_B}\right)\right\}.$$The integration straightforwardly yields $$\label{phi3}
\frac{\lambda}{2}\,\kappa\,(\phi-\phi_\star)
=\mbox{arcsinh}\sqrt{\frac{1}{a^{3(w_B+1)}}\,\frac{\Omega_S}{\Omega_\Lambda}},$$ where $\kappa^2=8\pi G$, and $$\label{lam1}
\lambda=\sqrt{\,3(1+w_B)\,\frac{\Omega_S}{\Omega_\phi}}.$$ For brevity of formulae we put the integration constant $\phi_\star=0$ with no lose of generality. Then, $$\label{pot-end}
V=V_0\left(
1+\frac{\widetilde\Omega_\phi}{\Omega_S}\,
\sinh^2\left\{\frac{\lambda}{2}\,\kappa\,\phi\right\}
\right),$$ where $$\label{frac}
\frac{\widetilde\Omega_\phi}{\Omega_S}=\frac{3}{2}\,\frac{1-w_B^2}{\lambda^2},$$ while $$\label{HH2}
H^2=H_0^2\Omega_\Lambda
\cosh^2\left\{\frac{\lambda}{2}\,\kappa\,\phi\right\}.$$ So, at the present day we have $\cosh^2\left\{\frac{\lambda}{2}\,\kappa\,\phi_0\right\}=1/\Omega_\Lambda$.
Summarizing the result, we emphasize that there is the exact solution for the scalar field potential (\[pot-end\]), which reproduce the scaling behavior of Hubble constant in the evolution of flat universe in presence of cosmological constant.
The form of potential differs from the case of zero cosmological constant, where the potential is the exponent as was studied for the scalar field with the standard kinetic term in [@Wetterich; @CLW; @FJ; @Alb_Skordis], while the consideration for the general scalar field was developed in [@Tsuji; @GWZ]. One can easily notice that the present derivation is consistent with results concerning for the case of zero cosmological constant. Indeed, the integration of (\[Vphi\]) with the Hubble rate at $\Omega_\Lambda=0$ straightforwardly gives the field proportional to the logarithm of scale factor, $\phi\varpropto \ln a$, that makes the scaling behavior of potential $V\varpropto
\exp\{\tilde\lambda\kappa\phi\}$.
The potential derived can be represented in the form $$\label{pot-end2}
V=V_0+\frac{1}{2}\,\widetilde V_0\big(
\cosh\{\lambda\kappa\phi\}-1\big),$$ at $\widetilde V_0=3 V_0(1-w_B^2)/2\lambda^2$. So, function (\[pot-end2\]) is composed by the sum of two exponential potentials with the opposite slopes and the constant positive shift of minimum. Such kind of potentials was investigated recently.
In review [@SS-00] authors presented the exact solution for constant parameter of state for the dark energy in the presence of dust-like dark matter, but not adding the cosmological constant, which is imitated by the dark energy instead. In [@SahniWang] the evolution of scalar field with the hyper-cosine potential was studied in the case of zero cosmological constant: the exponential form found to be dominant at early times, while the square term did significant at late times. It is clear that the late time dynamics essentially changed by the presence of cosmological term, of course.
Various aspects of cosmological picture due to a potential given by a power of hyper-sine was investigated in [@UrenaMatos]. The tracker properties of such the potentials were stressed.
Another approach was presented in [@GF; @SenSethi; @RSPC; @RSPCC], where authors fixed the scaling behavior of Hubble constant to find exact solutions for the scale factor $a(t)$ in order to study characteristics of universe evolution. The consideration of [@GF] recovers the scale factor behavior in the case of $\Lambda$CDM, however, the authors did not address the question on the scalar field potential reproducing such the scaling. This question was investigated in [@SenSethi], where the potential with the same form of (\[pot-end2\]) was deduced in a particular case of $\Omega_S/\Omega_\Lambda=1/\sinh^2 1$ and $w_B=0$. At this choice $\Omega_\Lambda=\sinh^2 1/(1+\sinh^2 1)\approx 0.58$, which is in contradiction with the recent measurements [@wmap] yielding $\Omega_\Lambda=0.766\pm 0.035$. A cosmological exploration of potential composed by sum of two exponents with opposite slopes but generically different normalization factors and a negative shift of minimum was investigated in [@RSPC; @RSPCC] by the same method of exact time dependence. The authors found an oscillation of $a(t)$ around some scaling dependence with $w_\phi$ oscillating within $[-1;+1]$.
In [@BarreiroCN] the sum of two exponents with identical normalization factors but slopes, which can be different, was considered at first. The late time behavior of scalar field energy scales both the radiation and dust, while near the present and future the state parameter $w_\phi$ has decaying vibrations around $-1$. Such picture differs from that of [@RSPC; @RSPCC]. Questions are the followings: *i)* What is a reason for the difference? *ii)* What can we say about a stability of late time and future scaling? *iii)* Does presented exact scaling solution corresponds to fine tuned values of normalization and slope? These questions were not investigated in references mentioned. We address them in Section III.
Attractors
==========
Let us consider the evolution of flat universe in presence of scalar field with potential $$\label{a1}
V=V_0+\widetilde V_0\sinh^2\left\{\frac{\lambda}{2}\,
\kappa\phi\right\},$$ where $V_0$, $\widetilde V_0$ and $\lambda$ are free parameters, which are not fixed by values in (\[lam1\]), (\[pot-end\]). For definiteness we put all parameters to be positive: $V_0>0$, $\widetilde V_0>0$, $\lambda>0$, while the consideration for cases of negative values can be rather straightforwardly obtained from the formulae below. The Hubble constant is given by $$%\begin{equation}\label{HH4}
H^2=\frac{\kappa^2}{3}\left(
\rho_B+\frac{1}{2}\,(\dot\phi)^2+V_0+\widetilde V_0\sinh^2
\left\{\frac{\lambda}{2}\,
\kappa\phi\right\}\right),$$so that we introduce quantaties $U_0$ and $U$, so that $$\label{U0}
U_0^2=\frac{\kappa^2}{3}\,V_0,\qquad H^2 = U^2+U_0^2.$$ Then, the phase space of system is described by dimensionless variables $$\label{vary}
% \begin{array}{rcl}
x=%&=&\displaystyle
\frac{\kappa}{\sqrt{6}}\,\frac{\phi'H}{U},\quad%\\[5mm]
y=%&=&\displaystyle
\frac{\kappa}{\sqrt{3}}\,\frac{\sqrt{V-V_0}}{U},\quad%\\[5mm]
v=%&=&\displaystyle
\frac{U_0}{U},
% \end{array}$$ while for convenience we introduce $$\label{z}
z=-\frac{1}{\sqrt{V-V_0}}\,\frac{\partial
V}{\partial\phi}\,\frac{\sqrt{2}}{H}.$$ This choice of variables follows the observation of scaling in previous section: the kinetic energy and potential each scale like the Hubble constant squared after the subtraction of term caused by the cosmlogical constant, while the derivative of potential with respect to the field scales like the Hubble constant squared itself.
The definition of $U^2$ implies $$\label{xy}
x^2+y^2=1-\frac{\kappa^2}{3}\,\frac{\rho_B}{U^2},$$ which yields the constraint $$\label{xy2}
x^2+y^2\leqslant 1.$$ The dynamical state parameter of field is determined by $$\label{ww}
{\widetilde w}_\phi=\frac{p_\phi+V_0}{\rho_\phi-V_0}=\frac{x^2-y^2}{x^2+y^2}.$$
In addition, the equations of motion produce relations $$\label{HHprime}
\dot H=H'H=-(1-w_B)\rho_B-(\dot\phi)^2,\qquad U'U=H'H.$$
The differentiation gives the autonomous system of equations $$\label{sys}
\begin{array}{rcl}
x' & = & -3x+\frac{1}{2}\,yz+\frac{3}{2}\,x\,c(x,y),\\[4mm]
y' & = & -\frac{1}{2}\,xz+\frac{3}{2}\,y\,c(x,y),\\[4mm]
(1+v^2)z' & = & 3\lambda^2xy-\frac{3}{2}\,z\,c(x,y),\\[4mm]
v' & = & \frac{3}{2}\,v\,c(x,y),
\end{array}$$ where $c(x,y)=(1+w_B)(1-x^2-y^2)+2x^2$. The quantity $z$ is strictly constrained by the condition $$\label{z-cond}
\frac{z^2}{6\lambda^2}\,(1+v^2)-y^2=\frac{\widetilde
V_0}{V_0}\,v^2,$$ which is the direct consequence of hyper-trigonometry: $\cosh^2q-\sinh^2q=1$. This constraint makes system (\[sys\]) overdefined, since $z$ is completely given by $y$, $v$ and parameter $\xi^2=\widetilde V_0/V_0$. Nevertheless, the system allows us to get a complete analysis of critical points in the simplest manner. What is of our interest? It is the projection of trajectory in the $\{x,y,v\}$ 3D-space to the 2D-plane of $\{x,y\}$.
Late times
----------
At present, the cosmological constant makes a significant contribution to the Hubble constant, i.e. $v=U_0/U \sim 1$. At late times of evolution just before the present, we put $v\ll 1$. Then, $z$ can be excluded by $$\label{z-cond2}
z=\lambda\sqrt{6}\,y.$$ This limit means that the cosmological constant can be neglected, while the field has a large value, so that the hyper-sine can be approximated by a single exponent. Therefore, at late times we arrive to the analysis of exponential potentials given in [@CLW]. Indeed, under (\[z-cond2\]) system (\[sys\]) is reduced to the system for the exponential potential. The analysis of critical point in [@CLW] gave the following physically meaningful properties: irrelevant of normalization of potential $\widetilde V_0$ there are stable scaling attractors in the plane of $\{x,y\}$; these attractors appear at $\lambda^2> 3(1+w_B)$, so that $\Omega_S/\Omega_\phi=\lambda^2/3(1+w_B)>1$ and $w_\phi=w_B$. The attractor is the stable node at $\lambda^2<24(1+w_B)^2/(7+9w_B)$, otherwise it is the stable spiral focus. The position of attractors are given by $$\label{critic1}
x_c=\sqrt{\frac{3}{2}}\,\frac{1+w_B}{\lambda},\qquad
y_c=\sqrt{\frac{3(1-w_B^2)}{2}}\,\frac{1}{\lambda},$$ that fixes z according to (\[z-cond2\]), i.e. $z_c=3\sqrt{1-w_B^2}$.
Thus, at late times just before the present, the quintessence follows the scaling behavior independently of its initial conditions.
Future
------
Since function $c(x,y)$ takes positive values at $x\neq 0$, $y\neq 0$, i.e. at presence of scalar field, quantity $v$ grows in accordance with its differential equation in (\[sys\]), of course. Hence, in future we get $v\gg 1$. Then, (\[z-cond\]) yields $$\label{z-cond3}
z\equiv z_\star=\lambda\sqrt{\frac{6\widetilde V_0}{V_0}}=\lambda\xi\sqrt{6},$$ i.e., $z$ is frozen at $z=z_\star$. Therefore, we get the system for the plane $\{x,y\}$ in (\[sys\]) with $z_\star$ being the external parameter. The critical points are posed at the following sets:
I. Scalar field is absent, $$\label{critic2-0}
\mbox{(i):}\quad x_\star=0,\qquad
y_\star=0,$$ so that the linearized equations in vicinity of critical point, i.e. with $x=x_\star+\bar x$, $y=y_\star+\bar y$, result in the system $$\label{sys-i}
\left(%
\begin{array}{c}
\bar x' \\
\bar y' \\
\end{array}%
\right)=\hat B\cdot
\right)$$ with the matrix $$\label{B-i}
\hat B=\frac{1}{2}\left(%
\begin{array}{cc}
3(w_B-1) & z_\star \\[2mm]
-z_\star & 3(w_B+1) \\
\end{array}%
\right)$$ having eigenvalues $$\label{nu-i}
\begin{array}{l}
\nu_1=\frac{1}{2}(3w_B-\sqrt{9-z_\star^2}),\\[3mm]
\nu_2=\frac{1}{2}(3w_B+\sqrt{9-z_\star^2}),
\end{array}$$ that implies the critical point corresponds to instability due to $\Re\mathfrak{e}\,\nu_2>0$ at $w_B>0$. Anyway, critical point (\[critic2-0\]) is a saddle at $|z_\star|<3$ and $w_B=0$, which is of practical interest to look at the present time and future universe. We notice that actually the baryonic matter has a small pressure, which can be neglected in the universe evolution, i.e. $w_B\to+0$. At $|z_\star|>3$ eigenvalues (\[nu-i\]) satisfy $\nu_1=\nu_2^*$, and we have unstable focus at $w_B>0$ or center at $w_B\equiv 0$, while at $w_B<0$ the focus becomes stable.
II\. Critical points of general position are given by $$\label{critic2-1}
\mbox{(ii):} \left\{
\begin{array}{l}
x_\star=\frac{1}{\sqrt{6}}\,\sqrt{3-\sqrt{9-z_\star^2}},\\[4mm]
y_\star=\frac{1}{\sqrt{6}}\,\sqrt{3+\sqrt{9-z_\star^2}},
\end{array}
\right.$$ with additional symmetry over the following permutations: $\mathcal{A}\mapsto$ $\{x_\star\leftrightarrow -x_\star$ and $y_\star\leftrightarrow -y_\star\}$, $\mathcal{B}\mapsto$ $\{x_\star\leftrightarrow y_\star\}$, $\mathcal{C}\mapsto$ $\{$the product of operations $\mathcal{A}$ and $\mathcal{B}\}$. So, taking into account the symmetry, (\[critic2-1\]) covers 4 related sets. The action of permutation $\mathcal{A}$ conserves the stability properties, while action of $\mathcal{B}$ changes them. This fact becomes clear, for instance, at $|z_\star|<3$. Indeed, altering the sign for $x$ implies the interchange of two equivalent branches in the potential according to $\phi\leftrightarrow-\phi$ or formal altering the sign of quantity $U$, both of which change the sign of $y$, too. So, action $\mathcal{A}$ cannot influence the stability properties. The action of $\mathcal{B}$ interchanges fractions of kinetic and potential energies in the energy density, that can be physically essential.
The analysis of linear stability involves matrices for (\[critic2-1\]) including action of symmetry $\mathcal{A}$ and for (\[critic2-1\]) converted by action of symmetry $\mathcal{B}$, with corresponding eigenvalues $\{\nu\}$,
$$\label{B-iia}
\hat B_a=\frac{1}{2}\left(%
\begin{array}{cc}
w_B(\sqrt{9-z_\star^2}-3)-2\sqrt{9-z_\star^2} & -z_\star w_B \\[2mm]
-z_\star w_B & -w_B(\sqrt{9-z_\star^2}+3)-2\sqrt{9-z_\star^2} \\
\end{array}%
\right),\quad%\mbox{}
\quad
\left\{
\begin{array}{l}
\nu^a_1=-3w_B-\sqrt{9-z_\star^2},\\[3mm]
\nu^a_2=-\sqrt{9-z_\star^2},
\end{array}
\right.$$
$$\label{B-iib}
\hat B_b=\frac{1}{2}\left(%
\nu^b_1=-3w_B+\sqrt{9-z_\star^2},\\[3mm]
\nu^b_2=+\sqrt{9-z_\star^2}.
\end{array}
\right.$$
For real critical points, i.e. at $|z_\star|<3$, and at $w_B\geqslant 0$, set (\[critic2-1\]) is stable node (both eigenvalues are negative), while set (\[critic2-1\]) affected by action $\mathcal{B}$ gives saddle or unstable node depending on the sign of eigenvalue $\nu^b_1$, i.e. a balance between values of $3w_B$ and $\sqrt{9-z_\star^2}$.
{width="7.25cm"} {width="7.25cm"}\
{width="7.25cm"} {width="7.25cm"}\
{width="7.25cm"} {width="7.25cm"}
At $|z_\star|>3$ critical points (\[critic2-1\]) take complex values, while matrices satisfy $\hat B_a=\hat B_b^*$, i.e. they are complex conjugate to each other as well as $x_\star=y_\star^*$. The same is valid for eigenvectors: $$\label{eigenV}
\boldsymbol e^a_1=
\left(%
\begin{array}{c}
3-\sqrt{9-z_\star^2} \\
z_\star \\
\end{array}%
\right),\quad
\boldsymbol e^a_2=
\right),$$ while $\boldsymbol e^b_{1,2}=\{\boldsymbol e^a_{1,2}\}^*$. Basis (\[eigenV\]) is orthonormal: $$\boldsymbol e^a_i\cdot\boldsymbol e^a_j=\delta_{ij}.$$ The linearized system has solutions $$\label{sol1}
\left(%
\begin{array}{c}
x \\
y \\
\end{array}%
\right)=
\left(%
\begin{array}{c}
x_\star \\
y_\star \\
\end{array}%
\right) +\sum\limits_{j=1}^2\boldsymbol e^a_j\,u_j\,{\rm
e}^{\nu^a_j N},$$ where $u_j$ stand for some initial data. Solutions (\[sol1\]) are complex-valued. By our investigation, (\[sol1\]) are irrelevant to physical quantities in question at $|z_\star|>3$.
III\. Tuned scaling appears at the special value of parameter $z_{\scriptscriptstyle T}=3\sqrt{1-w_B^2}$. Then, there is the *critical line* $$\label{line1}
\mbox{(iii): ~ }y_\star=x_\star\sqrt{\frac{1-w_B}{1+w_B}},$$ which is *$\lambda$-invariant*, i.e. independent of slope in the potential. It is spectacular that tuned point given by $z_{\scriptscriptstyle T}=z_c$, $x_\star=x_c$, $y_\star=y_c$ corresponds to the exact scaling solution found in Section II.
At (\[line1\]) the linear analysis of perturbations gives two eigenvalues: $\nu_1=0$ and $\nu_2=2w_B$, so that zero eigenvalue corresponds to the line itself, while the positive one indicates instability at $w_B>0$ in vicinity of the line as a whole. This fact is in agreement with the study of critical points in the case II, since (\[line1\]) contains the point of (\[critic2-1\]) at $w_B<0$ as well as it does the point at $w_B>0$ after the action of symmetry $\mathcal{B}$, so we correspondingly get eigenvalues (\[B-iia\]) and (\[B-iib\]) at $z_\star=z_{\scriptscriptstyle
T}$ yielding the same result above. Therefore, at $w_B>0$ the scaling solution of section II is unstable in future, while it does at $w_B<0$.
IV\. The boundary circle $$\label{boundary1}
\mbox{(iv): ~ }x^2+y^2=1$$ is conserved by the autonomous system. This fact implies that at $|z_\star|>3$ and $w_B>0$, when there are no stable critical points, the system approaches the boundary circle in future.
Summarizing the results, we have found that in the presence of cosmological constant the scalar field with the hyper-cosine potential approaches the stable attractor at late times just before the present day (the time of cosmological constant becomes visible) at the potential slope $\lambda^2>3 (1+w_B)$, so that the energy of field scales like that of matter with the state parameter $w_B$, while the fraction of field energy depends on the slope value. However, this attractor exhibits the strange behavior in future, i.e. under the dominance of cosmological term. Then, the balance of remnant energy depends on the parameter $z_\star$ determined by two quantities: i) the weight of potential normalization with respect to the energy density due to the cosmological constant and ii) the slope, in accordance with (\[z-cond3\]). So, at $|z_\star|<3$ and $w_B\geqslant0$ the dynamical part of scalar field energy reaches the other scaling attractor, which dominates over the matter in accordance with (\[B-iia\]) (see Fig. \[phase-plot\]a) and gets the state parameter $$\label{tww}
\widetilde w_\phi=-\sqrt{1-\frac{z_\star^2}{9}}.$$
At $|z_\star|<3$ and $w_B<0$ we get an interplay of two attractors with (\[nu-i\]) and (\[B-iia\]): at $3w_B+\sqrt{9-z_\star^2}<0$ the scalar field relaxes at the minimum of kinetic and potential energy (\[critic2-0\]) (see Fig. \[phase-plot\]b), while, otherwise, at $3w_B+\sqrt{9-z_\star^2}>0$ the scalar field dominates over the matter at the same point of (\[B-iia\]) and (\[tww\]) (see Fig. \[phase-plot\]c).
At $|z_\star|>3$ we get three cases: i) at $w_B>0$ the quintessence in future vibrates at the boundary circle as the limit cycle (see Fig. \[phase-plot\]e), ii) at $w_B=0$ the field cycled around the center being the point of minimal kinetic and potential energy (see Fig. \[phase-plot\]f), iii) at $w_B<0$ the field relaxes to the minimum (see Fig. \[phase-plot\]d).
The evolution kinds of scalar field in the plane of $\{x,y\}$ are illustrated in Fig. \[phase-plot\] at $\lambda=20$ and various sets of $z_\star$ and $w_B$: at $z_\star=2.9$ we put $w_B=0.2$ in Fig. \[phase-plot\] a), $w_B=-0.5$ in b), and $w_B=-0.2$ in c), while at $z_\star=10$ we put $w_B=-0.2$ in Fig. \[phase-plot\] d), $w_B=0.2$ in e), and $w_B=0$ in f). At all of tries the evolution starts at $x_0=-0.8$, $y_0=0.4$, and trajectories move clockwise. We certainly see that trajectories approaches the late time attractor at appropriate $\{x_c,y_c\}$, which are negative in all cases except b), when they are positive. The future attractors in Fig. \[phase-plot\] a) and c) are posed at $\{x_\star,y_\star\}$ with opposite, negative, sign of (\[critic2-1\]). The attractors at b) and d) stand in the minimum, while limit cycles of e) and f) are at the border and around the minimum, correspondingly.
Thus, in the linear analysis we have classified the late time and future attractors for the quintessence with the specified kind of potential relevant to the case of nonzero cosmological constant.
However, this analysis falls in special degenerate case of $z_\star=3$, $w_B=0$.
Degenerate case
---------------
At $z_\star=3$, $w_B=0$ and $v\gg 1$ the analysis of future evolution becomes nonlinear, since, after the transformation to variables $\sigma=x^2+y^2$ and $\tau=x/y$, autonomous system (\[sys\]) is reduced to $$\label{sys-2}
\begin{array}{l}\displaystyle
\sigma'= 3\sigma(\sigma-1)\,\frac{1-\tau^2}{1+\tau^2},
\\[3mm]
\displaystyle
\tau'=-\frac{3}{2}(1-\tau)^2, \\
\end{array}$$ which can be solved explicitly. Indeed, the integration for $\tau$ results in $$\label{tau}
\tau= 1+\frac{2}{\mathfrak{N}},$$ where $\mathfrak{N}=3(N-N_0)$, and $N_0$ corresponds to some initial data. Hence, $\tau\to 1$ at the end of evolution, i.e. at $N\to +\infty$. Then, $$\label{tau1}
\begin{array}{rcl}\displaystyle
{\rm i}\ln\frac{1-\sigma}{1-\sigma_0}\,\frac{\sigma_0}{\sigma}&=&
\displaystyle
\ln\frac{\mathfrak{N}-\mathfrak{N}_c}{\mathfrak{N}-
\mathfrak{N}_c^*}+
\mathfrak{N}_c\ln(\mathfrak{N}-\mathfrak{N}_c)
\\[4mm] && \displaystyle -
\mathfrak{N}_c^*\ln(\mathfrak{N}-\mathfrak{N}_c^*),\\[1mm]
\end{array}$$ where $\sigma_0<1$, $\mathfrak{N}_c=-(1+{\rm i})$. Therefore, at $N\to+\infty$ we find $$\ln\frac{1-\sigma}{1-\sigma_0}\,\frac{\sigma_0}{\sigma}\to -2\ln N\to
-\infty,$$ that implies $$\label{tau2}
\sigma\to 1,$$ and the attractor is posed at the boundary circle, $$\label{tau3}
x_\star=y_\star=\frac{1}{\sqrt{2}}
\quad\mbox{or}\quad
x_\star=y_\star=-\frac{1}{\sqrt{2}}.$$
If the initial data $\sigma_0=1$, the quantity $\sigma$ does not evolve, while $\tau$ approaches the attractor $\tau_\star=1$.
In addition, we have explicitly found that the second critical point $\sigma_\star=0$, i.e. $x_\star=y_\star=0$ is unstable.
The character of attractor (\[tau3\]) is illustrated in Fig. \[degenerate\], where trajectories move clockwise.
![The attraction of trajectories to the critical points at the boundary circle of $\{x,y\}$ plane in the degenerate case.[]{data-label="degenerate"}](degen.eps "fig:"){width="7.5cm"}\
Thus, attractors (\[tau3\]) exhibit the stable behavior in appropriate semicircles, while the line connecting the critical points has the instability versus perturbations.
Phenomenological points
=======================
The mass
--------
The potential of quintessence suggests the mass $$\label{mass1}
m^2=\frac{\partial^2 V}{\partial\phi^2}\Big|_{\phi=0}=
4\pi G\lambda^2\widetilde V_0.$$ For the exact scaling solution in Section II we get $$\label{mass2}
m^2_s=\frac{9}{4}\,(1-w_B^2)\,H_0^2\Omega_\Lambda,$$ and at $w_B=0$ and $\Omega_\Lambda\approx 0.7$ of practice the mass is determined by the current value of Hubble constant, i.e. it is extremely small as well as the energy scale of cosmological constant, that is beyond a natural reason. Although, such the mass could argue for both the present acceleration in the universe expansion and the scale of cosmological constant.
Generically, we get $$\label{mass3}
m^2=\frac{2\pi}{3}\,V_0 G\,z_\star^2=\frac{1}{4}\,
H_0^2\Omega_\Lambda\,z_\star^2.$$ So, the mass of scalar quintessence scales as the present day Hubble constant with the factor of $z_\star$, which could be arbitrary to enlarge the mass up to reasonable values in the physics of Standard model. However, huge values of $z_\star$ involve extremely frequent oscillations of quintessence in the nearest future, that is in contradiction with the present smooth evolution of universe. Therefore, we expect that a viable model includes the quintessence mass of the order of Hubble constant today.
Restriction to the slope
------------------------
The late time scaling of quintessence results in a fixed fraction of quintessence energy in the budget of universe with respect to other matter irrespective of the evolution stage: the dust or radiation fix close values of fractions. However, the fraction of nonbaryonic matter is constrained due to measured and primordial abundances of light elements caused by Big Bang Nucleosynthesis [@Wetterich; @CLW]. Then, the slope of potential should be quite large to suppress $\Omega_\phi\leqslant 0.13$, so according [@CLW] one gets $$\label{lam10}
\lambda^2>20.$$
Next, a role of quintessence field $\phi$ during inflation actually was analyzed in [@CLW], since the hyper-sine in fact coincides with the exponential potential at large values of field. The problem is a relic abundance of quintessence after inflation, that should be small in order to conserve the standard scenario of nucleosynthesis. Appropriate restrictions in various schemes of inflation are given in [@CLW].
Initial conditions
------------------
Attractors mean a slow dependence of late time evolution on initial data for the quintessence. The character of regulation is illustrated in Fig. \[falls\]. The set of tries exhibits the following general features:
- At small initial fraction of quintessence energy, it is frozen to a moment, when it approaches an appropriate scaling value in order to start the tracker behavior at late times.
- At large initial fraction of quintessence energy, it rapidly falls in order to frozen and wait for a moment of tracker way at late times.
- In future, vibrations of quintessence at $z_\star>3$ and $w_B\geqslant 0$ determine an average value of dynamical parameter for the equation of state $\langle\widetilde w_\phi\rangle$, which is independent of initial data, whereas $-1<\langle\widetilde w_\phi\rangle
<w_B$ at $w_B>0$ or $\langle\widetilde w_\phi\rangle=0$ at $w_B=0$ (see Fig. \[falls\] a, b, c, d). At $w_B<0$ vibrations determine the effective $\langle\widetilde w_\phi\rangle>w_B$ (see Fig. \[falls\] g, h).
Further observations repeat general properties of future attractors.
{width="7.25cm"} {width="7.25cm"}\
{width="7.25cm"} {width="7.25cm"}\
{width="7.25cm"} {width="7.25cm"}\
{width="7.25cm"} {width="7.25cm"}\
Equation of state
-----------------
At late times, the attractor causes the ratio of quintessence pressure to the energy density $w_\phi$ is stabilized infinitely close to the value of parameter $w_B$ for the matter. However, the situation changes, when the cosmological term comes to dominate, and $w_\phi$ moves to $-1$. An example of $w_\phi$ relaxation is shown in Fig. \[wphi\] for the quintessence vibrating around the minimum point (see Figs. \[phase-plot\] f and \[falls\] a, b). The present day e-folding is arbitrary in Fig. \[wphi\]. The magnitude of deviation from the limit of $-1$ and vibration period depend on the potential parameters. The picture analogous to Fig. \[wphi\] was observed in [@BarreiroCN] with similar values of parameters.
![The state parameter of quintessence $w_\phi$ versus the e-folding $N$ of evolution scale: changing the stable value of late times at $w_B=0$ at $z_\star=50$ and $\lambda=20$.[]{data-label="wphi"}](w50.eps "fig:"){width="7cm"}\
It is clear that vibrations are absent if $z_\star<3$. Anyway, the parameter of quintessence state $w_\phi$ rapidly approaches the vacuum value. Nevertheless, it would be interesting to see a relative significance of quintessence with respect to the matter. So, general consideration of attractors and Fig. \[falls\] demonstrate that the quintessence fraction can dominate or be suppressed depending on the potential parameters, the value of $z_\star$. It is evident that in the case of reaching the boundary circle in the $\{x,y\}$ plane the effective, average value for the state parameter is $\langle \widetilde w_\phi\rangle = 0$, while the same value is clearly observed also in the case of $w_B=0$ and $|z_\star|>3$ (see Fig. \[falls\] a and b).
Thus, we get the definite understanding of phenomenological properties for the evolution of quintessence with the specified kind of potential in the presence of cosmological term.
Conclusion
==========
In this paper we have found the potential of scalar field quintessence, which gives the exact solution for the scaling evolution of flat universe in the presence of cosmological constant. The scaling behavior is consistent with the current empirical observations.
We have investigated the stability of scaling behavior versus the variations in the slope and normalization of potential as well as in initial data. The analysis has revealed two kinds of attractors. The late time attractor just before the cosmological constant is coming to play, is independent of normalization, and it is determined by the slope, that is consistent with the well-known result for the exponential potentials [@CLW], representing the limit of large field for the potential found in the paper. The future behavior of quintessence under the dominance of cosmological constant depends on both the ratio of potential normalization to the vacuum energy density and slope in the special combination denoted by parameter $z_\star$. Generically, the future attractor differs from that of late time. So, the late time attractor reveals the strange behavior. We have classified the future attractors by their character and stability in linear analysis. The degenerate case of nonlinear dependence has been solved explicitly. Some phenomenological items have been considered, too.
We conclude that analysis of scaling attractors can be useful for classifying the quintessence behavior at late times and in future.
This work is partially supported by the Russian Foundation for Basic Research, grant 04-02-17530.
[\*\*]{} A. G. Riess *et al.* \[Supernova Search Team Collaboration\], Astron. J. **116**, 1009 (1998) \[arXiv:astro-ph/9805201\];\
B. P. Schmidt [*et al.*]{} \[Supernova Search Team Collaboration\], Astrophys. J. [**507**]{}, 46 (1998) \[arXiv:astro-ph/9805200\];\
S. Perlmutter [*et al.*]{} \[Supernova Cosmology Project Collaboration\], Astrophys. J. [**517**]{}, 565 (1999) \[arXiv:astro-ph/9812133\];\
J. P. Blakeslee [*et al.*]{} \[Supernova Search Team Collaboration\], Astrophys. J. [**589**]{}, 693 (2003) \[arXiv:astro-ph/0302402\];\
A. G. Riess [*et al.*]{} \[Supernova Search Team Collaboration\], Astrophys. J. [**560**]{}, 49 (2001) \[arXiv:astro-ph/0104455\]. A. G. Riess [*et al.*]{} \[Supernova Search Team Collaboration\], Astrophys. J. [**607**]{}, 665 (2004) \[arXiv:astro-ph/0402512\]. P. Astier [*et al.*]{}, arXiv:astro-ph/0510447. D. N. Spergel [*et al.*]{} \[WMAP Collaboration\], Astrophys. J. Suppl. [**148**]{}, 175 (2003) \[arXiv:astro-ph/0302209\];\
D. N. Spergel [*et al.*]{}, arXiv:astro-ph/0603449. D. J. Eisenstein [*et al.*]{}, arXiv:astro-ph/0501171;\
S. Cole [*et al.*]{} \[The 2dFGRS Collaboration\], Mon. Not. Roy. Astron. Soc. [**362**]{}, 505 (2005) \[arXiv:astro-ph/0501174\]. T. Chiba, Phys. Rev. D [**60**]{}, 083508 (1999) \[arXiv:gr-qc/9903094\];\
N. A. Bahcall, J. P. Ostriker, S. Perlmutter and P. J. Steinhardt, Science [**284**]{}, 1481 (1999) \[arXiv:astro-ph/9906463\];\
P. J. Steinhardt, L. M. Wang and I. Zlatev, Phys. Rev. D [**59**]{}, 123504 (1999) \[arXiv:astro-ph/9812313\];\
L. M. Wang, R. R. Caldwell, J. P. Ostriker and P. J. Steinhardt, Astrophys. J. [**530**]{}, 17 (2000) \[arXiv:astro-ph/9901388\]. V. Sahni and A. Starobinsky, arXiv:astro-ph/0610026. A. A. Starobinsky, JETP Lett. [**68**]{}, 757 (1998) \[Pisma Zh. Eksp. Teor. Fiz. [**68**]{}, 721 (1998)\] \[arXiv:astro-ph/9810431\]. M. Szydlowski, W. Godlowski and R. Wojtak, Gen. Rel. Grav. [**38**]{}, 795 (2006) \[arXiv:astro-ph/0505202\]. C. Wetterich, Nucl. Phys. B [**302**]{}, 668 (1988). E. J. Copeland, A. R. Liddle and D. Wands, Phys. Rev. D [**57**]{}, 4686 (1998) \[arXiv:gr-qc/9711068\]. P. G. Ferreira and M. Joyce, Phys. Rev. D [**58**]{}, 023503 (1998) \[arXiv:astro-ph/9711102\]. A. Albrecht and C. Skordis, Phys. Rev. Lett. [**84**]{}, 2076 (2000) \[arXiv:astro-ph/9908085\]. S. Tsujikawa, arXiv:hep-th/0601178. Y. Gong, A. Wang and Y. Z. Zhang, Phys. Lett. B [**636**]{}, 286 (2006) \[arXiv:gr-qc/0603050\]. V. Sahni and A. A. Starobinsky, Int. J. Mod. Phys. D [**9**]{}, 373 (2000) \[arXiv:astro-ph/9904398\]. V. Sahni and L. M. Wang, Phys. Rev. D [**62**]{}, 103517 (2000) \[arXiv:astro-ph/9910097\]. L. A. Urena-Lopez and T. Matos, Phys. Rev. D [**62**]{}, 081302 (2000) \[arXiv:astro-ph/0003364\]. A. Gruppuso and F. Finelli, Phys. Rev. D [**73**]{}, 023512 (2006) \[arXiv:astro-ph/0512641\]. A. A. Sen and S. Sethi, Phys. Lett. B [**532**]{}, 159 (2002) \[arXiv:gr-qc/0111082\]. C. Rubano, P. Scudellaro, E. Piedipalumbo and S. Capozziello, Phys. Rev. D [**68**]{}, 123501 (2003) \[arXiv:astro-ph/0311535\]. C. Rubano, P. Scudellaro, E. Piedipalumbo, S. Capozziello and M. Capone, Phys. Rev. D [**69**]{}, 103510 (2004) \[arXiv:astro-ph/0311537\].
T. Barreiro, E. J. Copeland and N. J. Nunes, Phys. Rev. D [**61**]{}, 127301 (2000) \[arXiv:astro-ph/9910214\].
[^1]: The other approach of reconstructing an effective potential of variable cosmological term by the given luminosity distance or inhomogeneity growth factor was considered by A.Starobinsky in [@S-Ddelta].
[^2]: Symmetries of evolution equations were systematically investigated in [@Marek], wherein the authors found the analogous behavior of energy density versus the scale factor and used it for deriving the equation of state on the basis of supernovae Ia data, while our goal is the potential itself.
| |
Q:
Parallel Subset
The setup: I have two arrays which are not sorted and are not of the same length. I want to see if one of the arrays is a subset of the other. Each array is a set in the sense that there are no duplicates.
Right now I am doing this sequentially in a brute force manner so it isn't very fast. I am currently doing this subset method sequentially. I have been having trouble finding any algorithms online that A) go faster and B) are in parallel. Say the maximum size of either array is N, then right now it is scaling something like N^2. I was thinking maybe if I sorted them and did something clever I could bring it down to something like Nlog(N), but not sure.
The main thing is I have no idea how to parallelize this operation at all. I could just do something like each processor looks at an equal amount of the first array and compares those entries to all of the second array, but I'd still be doing N^2 work. But I guess it'd be better since it would run in parallel.
Any Ideas on how to improve the work and make it parallel at the same time?
Thanks
A:
Suppose you are trying to decide if A is a subset of B, and let len(A) = m and len(B) = n.
If m is a lot smaller than n, then it makes sense to me that you sort A, and then iterate through B doing a binary search for each element on A to see if there is a match or not. You can partition B into k parts and have a separate thread iterate through every part doing the binary search.
To count the matches you can do 2 things. Either you could have a num_matched variable be incremented every time you find a match (You would need to guard this var using a mutex though, which might hinder your program's concurrency) and then check if num_matched == m at the end of the program. Or you could have another array or bit vector of size m, and have a thread update the k'th bit if it found a match for the k'th element of A. Then at the end, you make sure this array is all 1's. (On 2nd thoughts bit vector might not work out without a mutex because threads might overwrite each other's annotations when they load the integer containing the bit relevant to them). The array approach, atleast, would not need any mutex that can hinder concurrency.
Sorting would cost you mLog(m) and then, if you only had a single thread doing the matching, that would cost you nLog(m). So if n is a lot bigger than m, this would effectively be nLog(m). Your worst case still remains NLog(N), but I think concurrency would really help you a lot here to make this fast.
Summary: Just sort the smaller array.
Alternatively if you are willing to consider converting A into a HashSet (or any equivalent Set data structure that uses some sort of hashing + probing/chaining to give O(1) lookups), then you can do a single membership check in just O(1) (in amortized time), so then you can do this in O(n) + the cost of converting A into a Set.
| |
---
abstract: 'We extend Kolchin’s results from [@KolchinDiffComp] on linear dependence over projective varieties in the constants, to linear dependence over arbitrary complete differential varieties. We show that in this more general setting, the notion of linear dependence still has necessary and sufficient conditions given by the vanishing of a certain system of differential-polynomials equations. We also discuss some conjectural questions around completeness and the catenary problem.'
address:
- |
[email protected]\
Department of Mathematics\
University of California, Berkeley\
970 Evans Hall\
Berkeley, CA 94720-3840
- |
[email protected]\
Department of Mathematics and Statistics\
McMaster University\
1280 Main St W\
Hamilton, ON L8S 4L8
- |
[email protected]\
Mathematics Department\
University of California, Los Angeles\
Math Sciences Building 6363\
Los Angeles, CA 90095
author:
- 'James Freitag\*'
- Omar León Sánchez
- 'William Simmons\*\*'
bibliography:
- 'research.bib'
title: On linear dependence over complete differential algebraic varieties
---
\[section\] \[thm\][Corollary]{} \[thm\][Claim]{} \[thm\][Proposition]{} \[thm\][Lemma]{} \[thm\][Fact]{} \[thm\][Conjecture]{} \[thm\][Question]{} \[thm\][Question]{} \[thm\][Problem]{} \[thm\][Remark]{} \[thm\][Definition]{} \[thm\][Example]{}
[^1]
[^2]
*[Keywords: differential algebraic geometry, model theory]{}\
*[AMS 2010 Mathematics Subject Classification: 12H05. 03C98]{}**
Introduction
============
The study of complete differential algebraic varieties began in the 1960’s during the development of differential algebraic geometry, and received the attention of various authors over the next several decades [@MorrisonSD; @BlumComplete; @BlumExtensions; @KolchinDiffComp]. On the other hand, even though model theory had significant early interactions with differential algebra, it was not until recently that the topic has been the subject of various works using the model-theoretic perspective [@PongDiffComplete2000; @DeltaCompleteness; @SimmonsThesis; @PillayDvar].
Up until the last couple of years, relatively few examples of complete differential algebraic varieties were known. The development of a valuative criterion for completeness led to a variety of new examples, see for instance [@PongDiffComplete2000] and [@DeltaCompleteness]. Subsequently, more examples have been discovered [@SimmonsThesis] using various algebraic techniques in conjunction with the valuative criterion (in Section \[compbound\] we present an example of the third author’s, which shows that there are zero-dimensional projective differential algebraic varieties which are not complete).
To verify that a given projective differential variety $V$ is complete one has to verify that for any quasiprojective differential variety $Y$, the second projection: $$\label{diag}
\pi_2 \colon V \times Y \to Y$$ is a closed map. After reviewing some basic facts on completeness in Section \[compbound\], we establish, by means of the differential-algebraic version of Bertini’s theorem, that in order to verify completeness it suffices to check that the above projection maps are *semi-closed*.
Differential completeness is a fundamental notion in differential algebraic geometry, but, except for [@KolchinDiffComp], there has been no discussion of applications of the idea (outside of foundational issues). In Section \[lindep\], we consider the notion of linear dependence over an arbitrary projective differential variety. This is a generalization of a notion studied by Kolchin [@KolchinDiffComp] in the case of projective algebraic varieties, which in turn generalizes linear dependence in the traditional sense. We prove several results extending the work in [@KolchinDiffComp]; for instance, we see that this general notion of linear dependence also has necessary and sufficient conditions given by the vanishing of differential algebraic equations (when working over a complete differential variety).
In the case of the field of meromorphic functions (on some domain of ${\mathbb }C$) and the projective variety ${\mathbb }P^n ({\mathbb }C)$, Kolchin’s results [@KolchinDiffComp] specialize to the classical result: any finite collection of meromorphic functions is linearly dependent over ${\mathbb }C$ if and only if the Wronskian determinant of the collection vanishes. There are generalizations of this in several directions; for instance, in the context of multiple variables (i.e., partial differential fields), fully general results on Wronskians and linear dependence of meromorphic functions are relatively recent. Roth [@Roth] first established these type of results in the case of rational functions in several variables for use in diophantine approximation. Later his results were generalized to meromorphic functions in some domain of ${\mathbb }C^m$ via [@Wolsson] and [@BerensteinChangLi]. It is worth noting that the proofs of these results are analytic in nature.
In Section \[gene\], we point out how our results on linear dependence over arbitrary complete differential varieties generalize the above results in two essential ways: the differential field is not assumed to be a field of meromorphic functions and the linear dependence is considered over an arbitrary solution set to some differential equations (rather than over ${\mathbb }C^n$).
In [@DeltaCompleteness], it was established that every complete differential variety is *zero-dimensional* (earlier, this result was established in the ordinary case [@PongDiffComplete2000]). Thus, it is natural to ask the following question:
To verify completeness, can one restrict to taking products of the given differential variety with zero-dimensional differential varieties? In other words, is it enough to only consider zero-dimensional varieties $Y$ in (\[diag\])?
Although we are not able to give a full answer to this question, we show that it has a positive answer under the additional assumption of the *weak catenary conjecture*. The conjecture is itself a very natural problem in differential algebraic geometry, and the conditional answer to the above question helps motivate the conjecture further. This weak catenary-type conjecture is an easy consequence of the *Kolchin catenary conjecture*, which has been verified in numerous cases, but not in entire generality.
Section \[catsection\] is intended partly as a survey on the progress of the catenary conjecture, and partly as an opportunity to pose stronger forms of the conjecture that are interesting in their own right. More precisely, after discussing the catenary problem, we formulate a stronger version for algebraic varieties and show the equivalence of this stronger version to certain maps of prolongation spaces being open. This gives the equivalence of these strong forms of the Kolchin catenary problem to a problem purely in the realm of scheme theory. The proof of the equivalence uses recent work of Trushin [@Trushin] on a transfer principal between the Kolchin and Zariski topology called *inheritance*.
[**Acknowledgements.**]{} The authors began this work during a visit to the University of Waterloo, which was made possible by a travel grant from the American Mathematical Society through the Mathematical Research Communities program. We gratefully acknowledge this support which made the collaboration possible. We would also like to thank Rahim Moosa for numerous useful conversations during that visit and afterwards.
Projective differential algebraic varieties
===========================================
In this section we review the basic notions and some stantard results on (projective) differential algebraic geometry. For a thorough development of the subject see [@KolchinDAAG] or [@KolchinDiffComp]. We fix a differentially closed field $(\mathcal{U},\Delta)$ of characteristic zero, where $$\Delta=\{\delta_1,\dots,\delta_m\}$$ is the set of $m$ commuting derivations. We assume $\mathcal{U}$ to be a universal domain for differential algebraic geometry; in model-theoretic terms, we are simply assuming that $({\mathcal }U,\Delta)$ is a sufficiently large saturated model of the theory $DCF_{0,m}$. Throughout $K$ will denote a (small) differential subfield of $\mathcal{U}$.
A subset of ${\mathbb }A^n={\mathbb }A^n(\mathcal U)$ is *$\Delta$-closed*, or simply closed when the context is clear, if it is the zero set of a collection of $\Delta$-polynomials over ${\mathcal }U$ in $n$ differential indeterminates (these sets are also called *affine* differential algebraic varieties). When the collection of $\Delta$-polynomials defining a $\Delta$-closed set is over $K$, we say that the $\Delta$-closed is defined over $K$.
Following the standard convention, we will use $K \{ y_0, y_1 , \ldots , y_n \}$ to denote the ring of *$\Delta$-polynomials* over $K$ in the $\Delta$-indeterminates $y_0, y_1, \ldots , y_n$.
A (non-constant) $\Delta$-polynomial $f$ in $K\{ y_0 , \ldots , y_n \}$ is *$\Delta$-homogeneous of degree d* if $$f(t y_0, \ldots t y_n \} = t^d f(y_0 , \ldots , y_n ),$$ where $t$ is another $\Delta$-indeterminate.
The reader should note that $\Delta$-homogeneity is a stronger notion that homogeneity of a differential polynomial as a polynomial in the algebraic indeterminates $\delta_m^{r_m}\cdots\delta_1^{r_1}y_i$. For instance, for any $\delta\in \Delta$, $$\delta y - y$$ is a homogeneous $\Delta$-polynomial, but not a $\Delta$-homogeneous $\Delta$-polynomial. The reader may verify that the following is $\Delta$-homogeneous: $$y_1 \delta y_0 -y_0 \delta y_1 -y_0y_1.$$
Generally, we can easily homogenize an arbitrary $\Delta$-polynomial in $y_1,\dots,y_n$ with respect to a new $\Delta$-variable $y_0$. Let $f$ be a $\Delta$-polynomial in $K\{y_1,\dots,y_n\}$, then for $d$ sufficiently large $y_0^d f(\frac{y_1}{y_0},\dots,\frac{y_n}{y_0})$ is $\Delta$-homogeneous of degree $d$. For more details and examples see [@PongDiffComplete2000].
As a consequence of the definition, the vanishing of $\Delta$-homogeneous $\Delta$-polynomials in $n+1$ variables is well-defined on ${\mathbb }P^n={\mathbb }P^n(\mathcal U)$. In general, the $\Delta$-closed subsets of ${\mathbb }P^n$ defined over $K$ are the zero sets of collections of $\Delta$-homogeneous $\Delta$-polynomials in $K \{y_0 , \ldots , y_n \}$ (also called *projective* differential algebraic varieties). Furthermore, $\Delta$-closed subsets of ${\mathbb }P^n \times {\mathbb }A^m$, defined over $K$, are given by the zero sets of collections of $\Delta$-polynomials in $$K \{ y_0 , \ldots , y_n, z_1, \ldots , z_m \}$$ which are $\Delta$-homogeneous in $(y_0,\dots,y_n)$.
Dimension polynomials for projective differential algebraic varieties {#dimpol}
---------------------------------------------------------------------
Take $\alpha \in {\mathbb }P ^n$ and let $\bar a = ( a_0 , \ldots , a_n ) \in {\mathbb }A ^{n+1}$ be a representative for $\alpha$. Choose some index $i$ for which $a_i \neq 0$. The field extensions $K\left(\frac{a_0}{a_i},\ldots,\frac{a_n}{a_i}\right)$ and $K \left\langle \frac{a_0}{a_i},\ldots,\frac{a_n}{a_i} \right\rangle$ do not depend on which representative $\bar a$ or index $i$ we choose. Here $K\langle \bar a\rangle$ denotes the $\Delta$-field generated by $\bar a$ over $K$.
With the notation of the above paragraph, the [*Kolchin polynomial of $\alpha$ over $K$*]{} is defined as $$\omega _{ \alpha /K } (t) = \omega _{\left(\frac{a_0}{a_i}, \dots,\frac{a_n}{a_i}\right) /K} (t) ,$$ where $\omega _{\left(\frac{a_0}{a_i}, \dots,\frac{a_n}{a_i}\right) /K} (t)$ is the standard Kolchin polynomial of $\left(\frac{a_0}{a_i}, \dots,\frac{a_n}{a_i}\right)$ over $K$ (see Chapter II of [@KolchinDAAG]). The *$\Delta$-type of $\alpha$* over $K$ is defined to be the degree of $\omega_{\alpha/K}$. By the above remarks, these two notions are well-defined; i.e., they are independent of the choice of representative $\bar a$ and index $i$.
Let $\beta \in {\mathbb }P^n$ be such that the closure (in the $\Delta$-topology) of $\beta$ over $K$ is contained in the closure of $\alpha$ over $K$. In this case we say that $\beta$ is a *differential specialization* of $\alpha$ over $K$ and denote it by $\alpha \mapsto_K \beta$. Let $\bar b$ be a representative for $\beta$ with $b_i \neq 0$. Then, by our choice of $\beta$ and $\alpha$, if $\bar a $ is a representative of $\alpha $, then $a_i \neq 0$; moreover, the tuple $\left(\frac{b_0}{b_i},\dots,\frac{b_n}{b_i}\right)$ in $\mathbb{A}^{n+1}$ is a differential specialization of $\left(\frac{a_0}{a_i}, \dots,\frac{a_n}{a_i}\right)$ over $K$. When $V \subseteq {\mathbb }P^n$ is an irreducible $\Delta$-closed set over $K$, then a *generic point* of $V$ over $K$ (when $K$ is understood we will simply say generic) is simply a point $\alpha\in V$ for which $V=\{\beta \, | \, \alpha \mapsto_K \beta \}$. It follows, from the affine case, that every irreducible $\Delta$-closed set in ${\mathbb }P^n$ has a generic point over $K$, and that any two such generics have the same isomorphism type over $K$.
Let $V\subseteq {\mathbb }P^n$ be an irreducible $\Delta$-closed set. The [*Kolchin polynomial*]{} of $V$ is defined to be $$\omega _{ V} (t) = \omega_ {\alpha /F} (t),$$ where $F$ is any differential field over which $V$ is defined and $\alpha$ is a generic point of $V$ over $F$. It follows, from the affine case, that $\omega_V$ does not depend on the choice of $F$ or $\alpha$. The *$\Delta$-type of $V$* is defined to be the degree of $\omega_V$.
Let $V\subseteq {\mathbb }P^n$ be an irreducible $\Delta$-closed set, and recall that $m$ is the number of derivations.
- $V$ has $\Delta$-type $m$ if and only if the differential function field of $V$ has positive differential transcendence degree.
- $V$ has $\Delta$-type zero if and only if the differential function field of $V$ has finite transcendence degree.
The [*dimension*]{} of $V$, denoted by $\operatorname{dim}V$, is the differential transcendence degree of the differential function field of $V$. Thus, by a zero-dimensional differential variety we mean one of $\Delta$-type less than $m$ (in model-theoretic terms this is equivalent to the Lascar rank being less than $\omega^{m}$).
In various circumstances it is advantageous (and will be useful for us in Section \[lindep\]) to consider ${\mathbb }P^n$ as a quotient of ${\mathbb }A^{n+1}$. For example, if ${\mathfrak }p$ is the $\Delta$-ideal of $\Delta$-homogeneous $\Delta$-polynomials defining $V \subseteq {\mathbb }P^{n}$ and we let $W \subseteq {\mathbb }A^{n+1}$ be the zero set of ${\mathfrak }p$, then $$\label{use}
\omega _{W} (t)=\omega _ {V} (t) + \binom{m+t}{m},$$ where the polynomial on the left is the standard Kolchin polynomial of $W$ (see §5 of [@KolchinDiffComp]).
On completeness {#compbound}
===============
In this section we recall a few facts and prove some foundational results on complete differential algebraic varieties (for more basic properties we refer the reader to [@DeltaCompleteness]). We start by recalling the definition of $\Delta$-completeness.
\[maindef\] A $\Delta$-closed $V \subseteq {\mathbb }P^n$ is *$\Delta$-complete* if the second projection $$\pi_2: V \times Y \rightarrow Y$$ is a $\Delta$-closed map for every quasiprojective differential variety $Y$. Recall that a quasiprojective differential variety is simply an open subset of a projective differential variety.
We will simply say *complete* rather than $\Delta$-complete. This should cause no confusion with the analogous term from the algebraic category because we will work exclusively in the category of differential algebraic varieties.
The first differential varieties for which completeness was established were the constant points of projective algebraic varieties [@KolchinDiffComp]. One might attempt to establish a variety of examples via considering algebraic D-variety structures on projective algebraic varieties; in Lemma \[Dvarst\] below, we prove that indeed the sharp points of an algebraic D-variety is a complete differential variety.
Let us first recall that an algebraic D-variety is a pair $(V,\mathcal D)$ where $V$ is an algebraic variety and $\mathcal D$ is a set of $m$ commuting derivations on the structure sheaf $\mathcal O_V$ of $V$ extending $\Delta$. A point $v\in V$ is said to be a *sharp point* of $(V, \mathcal D)$ if for every affine neighborhood $U$ of $v$ and $f\in \mathcal O_V(U)$ we have that $D(f)(v)=\delta(f(v))$ for all $D\in \mathcal D$ and $\delta \in \Delta$. The set of all sharp points of $V$ is denoted by $(V,\mathcal D)^\sharp$. It is worth noting that given an algebraic variety $W$ defined over the constants, one can equip $W$ with a canonical D-variety structure $\mathcal D_0$ (by setting each $D_i$ to be the unique extension of $\delta_i$ that vanishes on the affine coordinate functions) such that $(W,\mathcal D_0)^\sharp$ is precisely the set of constant points of $W$. We refer the reader to [@Buium1] for basic properties of algebraic D-varieties.
\[Dvarst\] If $(V,\mathcal D)$ is an algebraic D-variety whose underlying variety V is projective, then $(V,\mathcal D)^\#$ is a complete differential variety.
In [@Buium], Buium proves that if $V$ is projective then $(V,\mathcal D)$ is isotrivial. That is, there is an isomorphism $f:V\to W$ of algebraic varieties with $W$ defined over the constants such that the image of $(V,\mathcal D)^\sharp$ under $f$ is precisely the set of constant points $W$. Thus, $(V,\mathcal D)^\sharp$ is isomorphic to a projective algebraic variety in the constants. The latter we know is complete, and hence $(V,\mathcal D)^\#$ is complete.
We now recall a class of examples developed in [@PongDiffComplete2000], which show the existence of complete differential varieties that are not isomorphic to algebraic varieties in the constants.
\[exam5\] Restrict to the case of a single derivation $({\mathcal }U,\delta)$. Let $V$ be the $\delta$-closure of $\delta y= f(y)$ in ${\mathbb }P^1,$ where $f(y) \in K[y]$ is of degree greater than one. In [@PongDiffComplete2000], it was shown that $V$ is a complete differential variety. Under the additional assumption that $f(y)$ is over the constants, by a theorem of McGrail [@McGrail Theorem 2.8] and Rosenlitch [@notmin], it is well understood when such a differential variety is non isomorphic to an algebraic variety in the constants:
[@MMP page 71] Suppose that $f(y)$ is a rational function over the constants of a differential field $(K,\delta)$. Then $V = \{ x \in {\mathbb }A^1 \, | \, \delta x = f(x) \}$ is isomorphic to an algebraic variety in the constants if and only if either:
1. $\frac{1}{f(y)} = c \frac{{\frac{\partial u}{\partial y}}}{u}$ for some rational function $u$ over the constants and $c$ a constant.
2. $\frac{1}{f(y)} = c {\frac{\partial v}{\partial y}}$ for some rational function $v$ over the constants and $c$ a constant.
The previous two results yield a large class of complete differential varieties which are non isomorphic to an algebraic variety in the constants. Beyond order one, nonlinear equations are rather difficult to analyze with respect to completeness, because the valuative criteria developed in [@PongDiffComplete2000; @DeltaCompleteness] are difficult to apply. For such examples, we refer the reader to [@SimmonsThesis].
Note that in all the above examples of complete differential varieties, the varieties are zero-dimensional. This is generally true; in [@DeltaCompleteness] the following was established:
\[zerodim\] Every complete differential variety is zero-dimensional.
This implies that the completeness question in differential algebraic geometry is one which only makes sense to ask for zero-dimensional projective differential varieties. Thus, it seems natural to inquire whether the notion can be completely restricted to the realm of zero-dimensional differential varieties. More precisely, a priori, the definition of completeness requires quantification over all differential subvarieties of the product of $V$ with an arbitrary quasiprojective differential variety $Y$. In light of Fact \[zerodim\], it seems logical to ask if one can restrict to zero-dimensional $Y$’s for the purposes of verifying completeness. We provide some insight on this question in Section \[catsection\]; of course, the positive answer to such question would be helpful for answering the following:
Which projective differential varieties are complete?
In general the above question is rather difficult. As the following example shows, even at the level of $\Delta$-type zero one can find incomplete differential varieties.
Recently, the third author [@SimmonsThesis] constructed the first known example of a zero-dimensional projective differential algebraic variety which is not complete (in fact of differential type zero). Restrict to the case of a single derivation $(\mathcal{U},\delta)$. Consider the subset $W$ of $\mathbb{A}^{1}\times \mathbb{A}^{1}$ defined by $x''=x^{3}$ and $2yx^{4}-4y(x')^{2}=1$. One may check (by differentially homogenizing $x''=x^{3}$ and observing that the point at infinity of $\mathbb{P}^{1}$ does not lie on the resulting variety) that $x''=x^{3}$ is already a projective differential variety. Thus, $W$ is a $\delta$-closed subset of $\mathbb{P}^{1}\times \mathbb{A}^{1}$. A short argument (see [@SimmonsThesis] for details) establishes that $\pi_{2}(W)$ is the set $\{y\mid y'=0 \text{ and } y\neq 0\}$ which is not $\delta$-closed.
Let us point out that this example works because the derivative of $2x^{4}-4(x')^{2}$ belongs to the $\delta$-ideal generated by $x''-x^{3}$ though $2x^{4}-4(x')^{2}$ itself does not. Since determining membership in a $\delta$-ideal is often difficult, it is quite possible that identifying complete differential varieties is inherently a hard problem. This highlights the need for reductions in the process of checking completeness.
We finish this section by proving that in the definition of completeness one can slightly weaken the requirement of the second projection maps being closed maps. In order to state our result, we need the following definition.
A morphism $f\colon X\to Y$ of differential algebraic varieties is said to be *semi-closed*, if for every $\Delta$-closed subset $Z\subseteq X$ either $f(Z)$ is $\Delta$-closed or $f(Z)^{cl}\setminus f(Z)$ is positive-dimensional. Here $f(Z)^{cl}$ denotes the $\Delta$-closure of $f(Z)$.
Note that if at least one of $X$ or $Y$ is zero-dimensional, then a morphism $f:X\to Y$ is semi-closed iff it is closed.
Recall that a ($\Delta$-)*generic hyperplane* of $\mathbb A^n$ is the zero set of a polynomial of the form $$f(y_1,\ldots,y_n)=a_0+a_1y_1+\cdots+a_ny_n,$$ where the $a_i$’s are $\Delta$-algebraically independent. We will make use of the following differential version of Bertini’s theorem, which appears in [@JBertini].
\[Bertini\] Let $V\subseteq \mathbb{A}^n$ be an irreducible differential variety of dimension $d$ with $d>1$, and let $H$ be a generic hyperplane of $\mathbb{A}^n$. Then $V\cap H$ is irreducible of dimension $d-1$, and its Kolchin polynomial is given by $$\omega_{(V\cap H)}(t)=\omega_V (t)- \binom{m+t}{m} .$$
We can now prove
\[mainthm\] A $\Delta$-closed $V \subseteq {\mathbb }P^n$ is complete if and only if $\pi_2: V \times Y \rightarrow Y$ is a semi-closed map for every quasiprojective differential variety $Y.$
Towards a contradiction, suppose that $\pi_2:V\times Y\to Y$ is semi-closed for every quasiprojective $Y$, but $V$ is not complete. Then there must be some $Y_1$, a positive dimensional quasiprojective differential variety, such that $\pi_2: V \times Y_1 \rightarrow Y_1$ is not closed. Because the question is local, we can assume that $Y_1$ is affine and irreducible. Let $X_1 \subseteq V \times Y_1$ be a $\Delta$-closed set such that $\pi_2 (X)$ is not closed. We may assume that $X_1$ is irreducible and, because of our semi-closedness assumption, that $\pi_2(X_1)^{cl}\setminus \pi_2(X_1)$ is positive dimensional. Let $W_1$ be the $\Delta$-closure of $\pi_2(X_1)^{cl}\setminus \pi_2(X_1)$. Note that $W_1$ is positive dimensional and, since the theory $DCF_{0,m}$ admits quantifier elimination, it has strictly smaller Kolchin polynomial than $\pi(X_1)^{cl}$.
Now, let $H$ be a generic hyperplane (generic over a differentially closed field over which everything else is defined) in ${\mathbb }A^n$, where $n$ is such that $Y_1\subseteq {\mathbb }A^n$. By Fact \[Bertini\], $W_1\cap H\neq \emptyset$. Now let $Y_2=Y_1\cap H$ and consider $$X_2:= X_1 \cap (V \times Y_2) \subseteq V \times Y_2.$$ We claim that $\pi_2 (X_2)=\pi_2(X_1)\cap H$ is not closed. Suppose it is, then, as $W_2:= W_1\cap H\neq \emptyset$, $\pi_2(X_1)\cap H$ is a closed proper subset of $\pi_2(X_1)^{cl}\cap H$. The fact that $\emptyset \neq W_1\cap H\subset \pi_{2}(X_1)^{cl}\cap H$ (which follows from Fact \[Bertini\] and noting that Kolchin polynomials behave predictably with respect to intersections with generic hyperplanes) contradicts irreducibility of $\pi_2(X_1)^{cl}\cap H$. Thus, $\pi(X_2)=\pi_2(X)\cap H$ is not closed. Further, by Fact \[Bertini\] again, the dimension of $Y_2$, $X_2$ and $W_2$ is one less that the dimension of $Y_1$, $X_1$ and $W_1$, respectively.
Since this process decreases the dimensions, after a finite number of steps it would yield $Y$, $X$ and $W\neq \emptyset$, with $X$ a closed subset of $V\times Y$ such that $W=\pi_2(X)^{cl}\setminus\pi_2(X)$ is zero-dimensional. This contradicts semi-closedness, and the result follows.
Linear dependence over differential algebraic varieties {#lindep}
=======================================================
In this section we extend Kolchin’s results from [@KolchinDiffComp] on linear dependence over projective varieties in the constants, to linear dependence over arbitrary complete differential varieties. We begin by giving a natural definition of linear dependence over an arbitrary projective differential variety.
Let $V \subseteq {\mathbb }P^n$ be $\Delta$-closed. We say that $\bar a=(a_0,\ldots,a_{n})\in {\mathbb }A^{n+1}$ is [*linearly dependent over $V$*]{} if there is a point $v = [v_0: \cdots : v_n] \in V$ such that $\sum_{j=0}^n v_i a_i =0$. Similarly, we say that $\alpha\in{\mathbb }P^n$ is linearly dependent over $V$ if there is a representative $\bar a$ of $\alpha$ such that $\bar a$ is linearly dependent over $V$. Note that it does not matter which particular representative of $\alpha$ or $v$ we choose when testing to see if $v$ witnesses the $V$-linear dependence of $\alpha$.
In [@KolchinDiffComp], Kolchin states the following problem:
\[Mainexam\] Consider an irreducible algebraic variety $V$ in ${\mathbb }P^n({\mathbb }C)$ for some $n$. Let $f_0, f_1, \ldots , f_n$ be meromorphic functions in some region of ${\mathbb }C$. Ritt once remarked (but seems to have not written down a proof) that there is an ordinary differential polynomial $R \in {\mathbb }C \{y_0 , y_1, \ldots , y_n \}$ which depends only on $V$ and has order equal to the dimension of $V$ such that a necessary and sufficient condition that there is $c \in V$ with $ \sum c_i f_i =0$ is that $(f_0 , f_1 , \ldots ,f_n)$ be in the general solution of the differential equation $R(y_0 , y_1 , \ldots , y_n )=0$. For a more thorough discussion, see [@KolchinDiffComp].
It is natural to ask if in the previous example one can replace the algebraic variety $V$ with an arbitrary complete differential variety. Kolchin offers a solution to this question in the case when $V$ is an arbitrary projective differential variety living inside the constants. However, the fact that such projective varieties (viewed as zero-dimensional differential algebraic varieties) are complete in the $\Delta$-topology turns out to be the key to proving the existence of the above differential polynomial $R$.
We will extend Kolchin’s line of reasoning for proving the assertion in Example \[Mainexam\]. Namely, we start with a complete differential algebraic variety, rather than the constant points of an algebraic variety. Recall that, as we pointed out in Example \[exam5\], there are many complete differential varieties that are not isomorphic to the constant points of an algebraic variety.
\[linear\] Let $V \subset {\mathbb }P^n$ be a complete differential variety defined and irreducible over $K$. Let $$ld(V):= \{x \in {\mathbb }P^n \, | \, x \text{ is linearly dependent over } V \}.$$ Then $ld(V)$ is an irreducible differential subvariety of ${\mathbb }P^n$ defined over $K$.
With the correct hypotheses, Kolchin’s proof of the special case essentially goes through here. Similar remarks apply to the strategy of the next proposition and corollary, where Kolchin’s original argument provides inspiration. For the proposition, the result seems to require a few new ingredients, mainly doing calculations in the generic fiber of the differential tangent bundle.
Let ${\mathfrak }p \subseteq K\{\bar z\}=K \{ z_0, z_1, \ldots ,z_n \}$ be the differential ideal corresponding to $V$. Now, let $\bar y = (y_0, y_1, \ldots y_n )$ and consider the differential ideal ${\mathfrak }p_1 \in K\{\bar z , \bar y \}$ given by $$\left[ {\mathfrak }p, \sum_{j=0}^n y_j z_j \right]: (\bar y \bar z ) ^ \infty$$ which by definition is $$\left\{f\in K\{\bar z,\bar y\}: \, (y_iz_j)^ef\in \left[{\mathfrak }p,\sum y_jz_j\right],\, i,j \leq n,\text{ for some } e\in {\mathbb }N \right\}.$$ As ${\mathfrak }p_1$ is differentially bi-homogeneous, it determines a (multi-)projective differential algebraic variety $W \subseteq {\mathbb }P^n \times {\mathbb }P^n$. It is clear that the coordinate projection maps have the form $$\xymatrix{
& & W \ar[ld]_{\pi_1} \ar[rd]^{\pi_2} & & \\
& V & & ld(V) &
}$$
Further, $ld(V)= \pi_2 (W)$, and, since $V$ is complete, $ld(V)$ is closed in the Kolchin topology of ${\mathbb }P^n$ and defined over $K$.
Next we prove that $ld(V)$ is irreducible over $K$. Let $\bar a=(a_0,\dots,a_n)$ be a representative of a generic point of $V$ over $K$ and fix $j$ such that $a_j \neq 0$. Pick elements $u_k \in {\mathcal }U$ for $0 \leq k \leq n$ and $k \neq j$ which are $\Delta$-algebraically independent over $K \langle \bar a \rangle $. Let $$u_j = - \sum _{k \neq j} u_k a_j ^{-1} a_k,$$ and $\bar u = ( u_0 , \ldots , u_n)$. One can see that $ (\bar a , \bar u)$ is a representative of a point in $W \subseteq {\mathbb }P^n \times {\mathbb }P^n$, so that $[u_0 : \cdots : u_n ] \in ld(V)$.
We claim that $[u_0:\cdots:u_n]$ is a generic point of $ld(V)$ over $K$ (this will show that $ld(V)$ is irreducible over $K$). To show this it suffices to show that $(\bar a,\bar u)$ is a generic point of ${\mathfrak }p_1$; i.e., it suffices to show that for every $p\in K\{\bar z,\bar y\}$ if $p(\bar a,\bar u)=0$ then $p\in {\mathfrak }p_1$.
Let $ p \in K \{\bar z , \bar y \}$ be any differential polynomial. By the differential division algorithm there exists $p_0 \in K \{ \bar z, \bar y \}$ not involving $y_j$ such that $$z_j^e p \equiv p_0 \quad mod \left[\sum_{0 \leq i \leq n } y_i z_i \right]$$ for some $e \in {\mathbb }N$. Thus, we can write $p_0$ as a finite sum $\sum p_M M$ where each $M$ is a differential monomial in $(y_k ) _{ 0 \leq k \leq n , k \neq j}$ and $p_M \in K \{ \bar z \}.$ Now, as $(u_k ) _{ 0 \leq k \leq n , k \neq j}$ are differential algebraic independent over $K \langle \bar a \rangle$, it follows that if $p ( \bar a , \bar u)=0$ then $p_M (\bar a) =0$ for all $M$ (and hence $p_M\in {\mathfrak }p$, since $\bar a$ is a generic point of ${\mathfrak }p$). This implies that, if $p(\bar a,\bar u)=0$, $p_0\in {{\mathfrak }p} \cdot K\{\bar z,\bar y\}$ and so $p\in {\mathfrak }p_1$, as desired.
For the following proposition we will make use of the following fact of Kolchin’s about the Kolchin polynomial of the differential tangent space (we refer the reader to [@KolchinDAG] for the definition and basic properties of differential tangent spaces).
\[tang\] Let $V$ be an irreducible differential algebraic variety defined over $K$ with generic point $\bar v$. Then $$\omega_{T^{\Delta}_{\bar v}V}=\omega_V,$$ where $T^{\Delta}_{\bar v}V$ denotes the differential tangent space of $V$ at $\bar v$.
In the case when the complete differential algebraic variety $V$ has $\Delta$-type zero, we have the following result on the Kolchin polynomial of $ld(V)$.
\[prok\] Let $V\subset {\mathbb }P^n$ be a complete differential variety defined and irreducible over $K$. If $V$ has constant Kolchin polynomial equal to $d$, then the Kolchin polynomial of $ld(V)$ is given by $$\omega _{ld(V)} (t) = (n-1) \binom{m+t}{m} + d.$$
In the ordinary case, the hypotheses on the Kolchin polynomial do not constitute any assumption on $V$. In that case, every such complete $V$ has differential type zero. In the partial case, the situation is much less clear. It is not known whether every complete differential algebraic variety has constant Kolchin polynomial (see [@DeltaCompleteness] for more details).
Let $W \subseteq {\mathbb }P^{n} \times {\mathbb }P^{n}$, $\bar a$ and $\bar u$ be as in the proof of Theorem \[linear\]. Fix $j$ such that $a_j\neq 0$ and, moreover, assume that $a_j =1$. Now, write $\bar a^*$ and $\bar u^*$ for the tuples obtained from $\bar a$ and $\bar u$, respectively, where we omit the $j$-th coordinate. Let $W_1 \subseteq {\mathbb }A^{n} \times {\mathbb }A^{n+1}$ be the differential algebraic variety with generic point $(\bar a^* , \bar u )$ over $K$. Consider $T_{( \bar a^* , \bar u )} ^\Delta W_1$, the differential tangent space of $W_1$ at $(\bar a^*,\bar u)$. Let $(\bar \alpha , \bar \eta )$ be generic of $T^{\Delta}_{(\bar a^*,\bar u)}W_1$ over $K\langle \bar a^*, \bar u \rangle$.
From the equation $y_j = - \sum _{k \neq j} y_k z_k$ satisfied by $(\bar a^*, \bar u)$, we see that $$-\sum_{ i \neq j } a_i \eta _i - \eta_j = \sum _ {i \neq j} u_i \alpha _i .$$
Choose $d_1$ so that $|\Theta (d_1)|$ is larger than $n \cdot |\Theta (d) |$, where $\Theta(d_1)$ denotes the set of derivative operators of order at most $d_1$ (similarly for $\Theta(d)$).
For $\theta \in \Theta (d_1)$, we have that $$\theta \left(-\sum_{ i \neq j } a_i \eta _i - \eta_j \right) = \theta \left( \sum _ {i \neq j} u_i \alpha _i \right) .$$ In the expression on the right $\theta \left( \sum _ {i \neq j} u_i \alpha _i \right)$, consider the coefficients of $\theta \alpha_i$ in terms of $\theta \bar u^*$, and denote these coefficients by $f ( i ,\theta' , \theta ) \in K \langle \bar u ^ * \rangle$; i.e., $f ( i ,\theta' , \theta )$ is the coefficient of $\theta' \alpha_{i}$ in the equation $$\theta \left( -\sum_{ i \neq j } a_i \eta _i - \eta_j \right) = \theta \left( \sum _ {i \neq j} u_i \alpha _i \right) .$$ We can thus express this equation in the form $$\left( \begin{array}{c} \theta \left(-\sum_{ i \neq j } a_i \eta _i - \eta_j \right) \end{array} \right) = F_\theta A$$ where $F_\theta$ is the row vector with entries $$(f(i , \theta', \theta))_{ i \neq j, \theta' \in \Theta(d)}$$ and $A$ is the column vector with entries $$\left( \begin{array}{c} \theta' \alpha_i \end{array} \right)_{i \neq j, \theta' \in \Theta (d) }.$$ Note that we only have the vectors $F_\theta$ and $A$ run through $\theta' \in \Theta (d)$ (instead of all of $\Theta(d_1)$). This is because any derivative of $\alpha_i$, $i\neq j$, of order higher than $d$ can be expressed as a linear combination of the elements of the vector $A$. This latter observation follows from the fact that $\omega_{\alpha/K\langle \bar a^*, \bar u\rangle}=d$, which in turn follows from the facts that $\alpha$ is a generic point of $T^\Delta_{\bar a^*}V$, our assumptioin on $\omega_V$, and Fact \[tang\].
By the choice of $d_1$ and $\bar u^*$ (recall that $\bar u^*$ consists of independent differential transcendentals over $K\langle \bar a \rangle$), there are $n \cdot |\Theta (d) |$ linearly independent row vectors $F_ \theta$. So, we can see (by inverting the nonsingular matrix which consists of $n \cdot |\Theta (d) |$ such $F_ \theta$’s as the rows) that all the elements of the vector $A$ belong to $K\langle \bar a ^ *, \bar u \rangle (( \theta \bar \eta )_{\theta\in \Theta(d_1)})$. Thus $$K\langle \bar a^*, \bar u\rangle((\theta \bar \eta)_{\theta\in \Theta(d_1)})=K\langle \bar a^*,\bar u, \bar\alpha\rangle ((\theta\eta_i)_{i\neq j, \theta\in \Theta(d_1)}).$$ Noting that $\bar \eta^*=(\eta_i)_{i\neq j}$ is a tuple of independent transcendentals over $K\langle \bar a^*, \bar u\rangle$ and that $\bar \eta^* $ is ($\Delta$-)independent from $\bar \alpha$ over $K \langle \bar a ^* , \bar u\rangle$, the above equality means that for all large enough values of $t$, $$\omega _ { \bar \eta / K \langle \bar a^* , \bar u \rangle } (t) = n \binom{m+t}{m} + d.$$ Finally, since $\bar \eta$ is a generic of the differential tangent space at $\bar u$ of the $K$-locus of $\bar u$, Fact \[tang\] implies that $$\label{kopo}
\omega_{\bar u/K}=n \binom{m+t}{m} + d,$$ and thus by equation (\[use\]) (in Section \[dimpol\]), we get $$\omega_{ld(V)}=(n-1) \binom{m+t}{m} + d,$$ as desired.
Let $V\subset {\mathbb }P^n$ be a complete differential variety defined and irreducible over $K$, and suppose we are in the ordinary case (i.e., $|\Delta|=1$). If the Kolchin polynomial of $V$ equals $d$, then there exists a unique (up to a nonzero factor in $K$) irreducible $R\in K\{\bar y\}$ of order $d$ such that an element in ${\mathbb }A^{n+1}$ is linearly independent over $V$ if and only if it is in the general solution of the differential equation $R(\bar y)=0$.
Let ${\mathfrak }p$ be the differential (homogeneous) ideal of $ld(V)$ over $K$. Then, by equation (\[kopo\]) in the proof of Proposition \[prok\], we get $$\omega_{{\mathfrak }p}=n(t+1)+d=(n+1)\binom{t+1}{1}-\binom{t-d+1}{1}.$$ Thus, by [@KolchinDAAG Chapter IV, §7, Proposition 4], there exists an irreducible $R\in K\{\bar y\}$ of order $d$ such that ${\mathfrak }p$ is precisely the general component of $R$; in other words, an element in ${\mathbb }A^{n+1}$ is linearly independent over $V$ if and only if it is in the general solution of the differential equation $R=0$. For uniqueness, let $R'$ be another differential polynomial over $K$ of order $d$ having the same general component as $R$. Then, by [@KolchinDAAG Chapter IV, §6, Theorem 3(b)], $R'$ is in the general component of $R$ and so $ord(R')\geq ord(R)$. By symmetry, $R$ is in the general component of $R'$, and so we get that $ord(R)=ord(R')$. Thus $R$ and $R'$ divide each other, as desired.
We finish this section by discussing how the assertion of Example \[Mainexam\] follows from the results of this section. Let $k$ be a differential subfield of $K$.
Let $V$ be a differential algebraic variety defined over $k$. We say $V$ is $k$-large with respect to $K$ if $V(k)=V(\bar{K})$ for some (equivalently for every) $\bar{K}$ differential closure of $K$.
One can characterize the notion of largeness in terms of differential subvarieties of $V$ as follows:
$V$ is $k$-large with respect to $K$ if and only if for each differential subvariety $W$ of $V$ defined over $K$, $W(k)$ is $\Delta$-dense in $W$.
Suppose $W(k)$ is $\Delta$-dense in $W$, for each differential algebraic subvariety $W$ of $V$ defined over $K$. Let $\bar{a}$ be a $\bar{K}$-point of $V$. Since $tp(\bar a/K)$ is isolated, there is a differential polynomial $f\in K\{\bar{x}\}$ such that every differential specialization $\bar b$ of $\bar a$ over $K$ satisfying $f(\bar b)\neq 0$ is a generic differential specialization. Let $W\subseteq V$ be the differential locus of $\bar a$ over $K$. By our assumption, there is a $k$-point $\bar b$ of $W$ such that $f(\bar b)\neq 0$. Hence, $\bar b$ is a generic differential specialization of $\bar a$ over $K$, and so $\bar a$ is a $k$-point.
The converse is clear since for every differential algebraic variety $W$, defined over $\bar K$, $W(\bar K)$ is $\Delta$-dense in $W$.
\[remla\] Let $V$ be an (infinite) algebraic variety in the constants defined over $k$, and $K$ a differential field extension of $k$ such that $K^\Delta=k^\Delta$. Here $K^{\Delta}$ and $k^{\Delta}$ denote the constants of $K$ and $k$, respectively.
1. $V$ is $k$-large with respect to $K$ if and only if $k^{\Delta}$ is algebraically closed. Indeed, if $V$ is $k$-large, the image of $V(k)$ under any of the Zariski-dominant coordinate projections of $V$ is dense in $\bar K^\Delta$. Hence, $k^\Delta=\bar K^\Delta$, implying $k^\Delta$ is algebraically closed. Conversely, if $k^\Delta$ is algebraically closed, then $k^\Delta=\bar K^\Delta$ (since $k^\Delta=K^\Delta$). Hence, $V(k^\Delta)=V(\bar K^\Delta)$, but since $V$ is in the constants we get $V(k)=V(\bar K)$.
2. In the case $k={\mathbb }C$ and $K$ is the field of meromorphic in some region of ${\mathbb }C$. We have that $K^\Delta={\mathbb }C$ is algebraically closed, and so $V$ is $k$-large with respect to $K$. Hence, in Example \[Mainexam\] the largeness condition of $V$ is given implicitly.
\[large\] Let $V\subset {\mathbb }P^n$ be a complete differential algebraic variety defined over $k$, and suppose $V$ is $k$-large with respect to $K$. Then the $K$-points of ${\mathbb }P^n$ that are linearly dependent over $V$ are precisely those that are linearly dependent over $V(k)$.
Suppose $\alpha\in {\mathbb }P^n(K)$ is linearly independent over $V$. Then, since the models of $DCF_{0,m}$ are existentially closed, we can find $v\in V(\bar K)$, where $\bar K$ is a differential closure of $K$, such that $\sum v_ia_i=0$ where $\bar a$ is a representative of $\alpha$. But, by our largeness assumption, $V(\bar K)=V(k)$, and thus $\alpha$ is linearly independent over $V(k)$.
Putting together Theorem \[linear\], Remark \[remla\], and Lemma \[large\], we see that if $V$ is an irreducible algebraic variety in ${\mathbb }P^n({\mathbb }C)$ and $K$ is a differential field extension of ${\mathbb }C$ with no new constants then there is a projective differential algebraic variety defined over ${\mathbb }C$, namely $ld(V)$, which only depends on $V$ such that for any tuple $f=(f_0, f_1,\ldots,f_n)$ from $K$, $f$ is linearly dependent over $V$ if and only if $f\in ld(V)$.
Generalized Wronskians {#gene}
----------------------
It is well-known that a finite collection of meromorphic functions (on some domain of ${\mathbb }C$) is linearly dependent over ${\mathbb }C$ if and only if its Wronskian vanishes. Roth [@Roth] generalized (and specialized) this fact to rational functions in several variables using a generalized notion of the Wronskian. This was later generalized to the analytic setting [@BerensteinChangLi; @Wolsson]. We will see how generalizations of these results are easy consequences of the results in the previous section.
Let $\alpha = (\alpha_{1} , \ldots , \alpha_{m}) \in {\mathbb }N^m,$ and $| \alpha |= \sum \alpha_{i}$. Fix the (multi-index) notation $\delta^\alpha = \delta_1 ^ {\alpha_{1}} \ldots \delta_m ^{ \alpha _{m} }$, and $A = (\alpha^{(0)}, \alpha^{(1)} , \ldots , \alpha ^{(n)} ) \in ({\mathbb }N^m)^{n+1}$. We call $${\mathcal }W_A:= \left| \left( \begin{array}{cccc}
\delta ^ { \alpha^{(0)}} y_0 & \delta ^ { \alpha^{(0)}} y_1 & \ldots & \delta ^ { \alpha^{(0)}} y_n \\
\delta ^ { \alpha^{(1)}} y_0 & \delta ^ { \alpha^{(1)}} y_1 & \ldots & \delta ^ { \alpha^{(1)}} y_n \\
\vdots & \vdots & \ddots & \vdots \\
\delta ^ { \alpha^{(n)}} y_0 & \delta ^ { \alpha^{(n)}} y_1 & \ldots & \delta ^ { \alpha^{(n)}} y_n
\end{array} \right) \right|$$ the *Wronskian associated to* $A$.
\[linwronski\] If $V= {\mathbb }P^n ( {\mathcal }U^\D)$, the constant points of ${\mathbb }P^n$, then the projective differential variety $ld (V)$ is equal to the zero set in ${\mathbb }P^n$ of the collection of generalized Wronskians (i.e., as the tuple $A$ varies in $({\mathbb }N^m)^{n+1}$).
This theorem of the Wronskian is well known, we refer the reader to [@KolchinDAAG Chapter II, §1, Theorem 1] for a standard proof. Since we are not yet restricting ourselves to a subcollection of generalized Wronskians, we present a more direct proof (using the language of this paper).
The vanishing of the collection of generalized Wronskians is clearly a necessary condition for linear dependence. We now show that it is sufficient. Let $(f_0,f_1,\ldots,f_n)$ be a tuple in the zero set of the collection of generalized Wronskians. Consider the matrix $$M=(\delta^{\alpha}f_i)_{i\leq n, \alpha\in {\mathbb }N^m}.$$ By our assumption, $M$ has rank at most $n$ and so $V:=\operatorname{ker}M\subseteq {\mathcal }U^{n+1}$ is a positive dimensional subspace. We now check that $V$ is stable under the derivations. Let $\delta\in \Delta$ and $v=(v_0,\ldots,v_n)\in V$, then for any $\alpha\in {\mathbb }N^m$ we have $$\sum_{i=0}^n \delta^\alpha f_i\cdot \delta v_i=\delta\left(\sum_i \delta^\alpha f_i\cdot v_i\right)-\sum_i\delta(\delta^\alpha f_i)\cdot v_i=0.$$ So $\delta v\in V$. Thus, $(V,\Delta)$ is a $\Delta$-module over ${\mathcal }U$. It is well known that a finite dimensional $\Delta$-module over a differentially closed field has a basis consisting of $\Delta$-constants (this can be deduced from the existence of fundamental matrix of solutions of integrable linear differential equations). Hence, there is $(c_0,\ldots,c_n)\in V\cap {\mathcal }U^{\Delta}$, and so, in particular, $\sum_i c_i f_i=0$.
The following corollaries show why our results are essentially generalizations of [@BerensteinChangLi Theorem 2.1], [@Walker Theorem 2.1] and [@Wolsson]:
Let $k$ be a differential subfield of $K$ and $f_0 , \ldots , f_n\in K$. If ${\mathbb }P^n({\mathcal }U^\D)$ is $k$-large with respect to $K$, then $f_0, \ldots , f_n$ are linearly dependent over ${\mathbb }P^n(k^\D)$ if and only if the collection of generalized Wronskians vanish on $(f_0,\dots,f_n)$.
Since ${\mathbb }P^n({\mathcal }U^\D)$ is ${\mathbb }C$-large with respect to any field of meromorphic functions on some domain of ${\mathbb }C^m$, we have
The vanishing of the collection of generalized Wronskians is a necessary and sufficient condition for the linear dependence (over ${\mathbb }C$) of a finite collection of meromorphic functions on some domain of ${\mathbb }C^m$.
In [@Walker], Walker proved that the vanishing of the collection of generalized Wronskians is equivalent to the vanishing of the subcollection of those Wronskians associated to Young-like sets (and this is the least subcollection of Wronskians with this property). A *Young-like* set $A$ is an element of $({\mathbb }N^m)^{n+1}$ with the property that if $\alpha \in A$ and $\beta \in {\mathbb }N^m$ are such that $\beta <\alpha$ in the product order of ${\mathbb }N^m$, then $\beta \in A$. (When $m=2$, Young-like sets correspond to Young diagrams.)
There are computational advantages to working with Young-like sets, since the full set of generalized Wronskians grows much faster in $(m,n)$ (where $m$ is the number of derivations and $n$ is the number of functions). Even for small values of $(m,n)$ the difference is appreciable, see for example [@Walker] for specifics on the growth of the collection of Young-like sets.
Using Walker’s result, we obtain the following corollary:
If $V= {\mathbb }P^n ( {\mathcal }U^\D)$, the constant points of ${\mathbb }P^n$, then the projective differential variety $ld (V)$ is equal to the zero set in ${\mathbb }P^n$ of the collection of generalized Wronskians associated to Young-like sets. Moreover, this subcollection of Wronskians is the smallest one with this property.
The catenary problem and related results {#catsection}
========================================
In this section we discuss some conjectural questions around completeness and Kolchin’s catenary problem. We also take the opportunity to pose stronger forms of the catenary conjecture for algebraic varieties that seem interesting in their own right. We begin by recalling the *catenary problem*:
Given an irreducible differential variety $V$ of dimension $d>0$ and an arbitrary point $p \in V$, does there exist a long gap chain beginning at $p$ and ending at $V$? By a long gap chain we mean a chain of irreducible differential subvarieties of length $\omega^m \cdot d$. The positive answer to this question is called the Kolchin catenary conjecture.
Under stronger assumptions, various authors have established the existence of long gap chains. When $p \in V$ satisfies certain nonsingularity assumptions, Johnson [@JohnsonCat] established the existence of the chain of subvarieties of $V$ starting at $p$. In [@Rosenfeld], Rosenfeld proves a slightly weaker statement (also with nonsingularity assumptions) and expresses the opinion that the nonsingularity hypotheses might not be necessary; however, except for special cases, the hypotheses have not been eliminated. See [@BuiumCassidyKolchin pages 607-608] for additional details on the history of this problem.
In a different direction, Pong [@PongCat] answered the problem affirmatively, assuming that $V$ is an algebraic variety, but assuming nothing about the point $p$. Pong’s proof invokes resolution of singularities (the “nonsingularity" assumptions of [@Rosenfeld; @JohnsonCat] are not equivalent to the classical notion of a nonsingular point on an algebraic variety). It is worth mentioning that even though Pong works in the ordinary case, $\Delta=\{\delta\}$, his approach and results readily generalize to the partial case.
We also have the following weaker form of the catenary conjecture:
\[weakcat\] For every positive dimensional irreducible differential variety $V \subseteq {\mathbb }A^n$ and every zero-dimensional differential subvariety $W \subseteq V$, there is a proper irreducible differential subvariety $V_1$ of $V$ such that $V_1 \cap W \neq \emptyset$ and $V_1 \not \subseteq W$.
This conjecture (although it seems rather innocuous, a proof is not known) is a very easy consequence of the catenary conjecture. Indeed, pick $p \in W$ and pick a long gap chain starting at $p$. Then, since the Kolchin polynomials of the sets in the chain are not equal to each other, at some level the irreducible closed sets in the chain can not be contained in the set $W$.
In Section \[compbound\] we suggested the following
\[conbound\] A $\Delta$-closed $V \subseteq {\mathbb }P^n$ is complete if and only if $\pi_2: V \times Y \rightarrow Y$ is a $\Delta$-closed map for every quasiprojective zero-dimensional differential variety $Y.$
Even though we are not able to prove this, we show that it is a consequence of the weak catenary conjecture.
The Weak Catenary Conjecture implies Conjecture \[conbound\].
Towards a contradiction, suppose that $\pi_2:V\times Y\to Y$ is a closed map for every zero-dimensional $Y$, but $V$ is not complete. Then, by Proposition \[mainthm\], there must be some $Y$, a positive dimensional irreducible affine differential variety, and an irreducible closed set $X \subseteq V \times Y_1$ such that $\pi_2 (X)$ is not closed, $\pi_2(X)^{cl}$ is positive dimensional, and $W:=\pi_2(X)^{cl}\setminus \pi_2(X)$ is zero-dimensional. We will obtain the desired contradiction by finding a zero-dimensional $Y'$ which witnesses incompleteness.
Applying (iterating rather) the weak catenary conjecture, we obtain a zero-dimensional and proper irreducible subvariety $Y'$ of $\pi_2(X)^{cl}$ such that $Y' \cap W \neq \emptyset$ and $Y' \not \subseteq W$. We claim that this $Y'$ witnesses incompleteness. Let $X'=X\cap(V\times Y')$. Then we claim that $\pi_2(X')=\pi_2(X)\cap Y'$ is not closed. Suppose it is, then it would be a proper closed subset of $Y'$. The fact that $W\cap Y'$ is also a proper closed subset of $Y$, would contradict irreduciblity of $Y'$. Thus, $\pi_2(X')$ is not closed, and the result follows.
Note that restricting to specific families of differential varieties $V$ does not necessarily restrict the varieties on which one applies the weak catenary conjecture in the proof of the lemma. Thus, for our method of proof, Conjecture \[weakcat\] is (a priori) used in full generality.
A stronger form of the catenary problem for algebraic varieties
---------------------------------------------------------------
In this section we describe some of the difficulties of the catenary problem for algebraic varieties. As we mentioned already, this case follows from results of Pong [@PongCat]; however, in what follows we propose a different approach to this problem.
A differential ring is called a *Keigher ring* if the radical of every differential ideal is again a differential ideal. The rings we will be considering will be assumed to be Keigher rings. Note that every Ritt algebra is a Keigher ring (see for instance [@MMP §1]).
Given $f: A \rightarrow B$ a differential homomorphism of Keigher rings, we have an induced map $f^*: Spec \, B \rightarrow Spec \, A$ given by $f^* ( {\mathfrak }p ) = f^{-1} ( {\mathfrak }p )$. We denote by $f^*_ \Delta : Spec ^ \Delta B \rightarrow Spec ^ \Delta A$ the restriction of the map $f^*$ to the differential spectrum. We have the following differential analogs of the going-up and going-down properties:
\[goingupdown\] Suppose we are given some chain ${\mathfrak }p_1 \subseteq \ldots \subseteq {\mathfrak }p_n$ with ${\mathfrak }p_i \in Spec^\Delta \, f(A)$ and any ${\mathfrak }q_1 \subseteq \ldots \subseteq {\mathfrak }q _m \in Spec^\Delta \, B$ such that for each $i \leq m,$ $ {\mathfrak }q_i \cap f(A) ={\mathfrak }p_i$. We say that $f$ has the *going-up property for differential ideals* if given any such chains ${\mathfrak }p$ and ${\mathfrak }q$, we may extend the second chain to ${\mathfrak }q_1 \subseteq \ldots \subseteq {\mathfrak }q_n$ where ${\mathfrak }q_i\in Spec^\Delta B$ such that for each $i \leq n,$ ${\mathfrak }q_i \cap f(A) = {\mathfrak }p_i$. One can analogously define when $f$ has the *going-down property for differential ideals*.
When $(A, \Delta) \subseteq (B,\Delta) $ are integral domains, $B$ is integral over $A$, and $A$ is integrally closed, then the differential embedding $A \subseteq B$ has the going-down property for differential ideals. Dropping the integrally closed requirement on $A$, one can still prove the going-up property for differential ideals [@PongCat Proposition 1.1]. In what follows we will see how these results are consequences of their classical counterparts in commutative algebra.
Let us review some developments of differential algebra which are proved in [@Trushin]. We will prove the results which we need here in order to keep the exposition self-contained and tailored to our needs. Let $f: A \rightarrow B$ be a differential homomorphism of Keigher rings. The fundamental idea, which Trushin calls *inheritance*, is to consider one property of such a map $f$ considered only as a map of rings and another property of $f$ as a map of differential rings and prove that the properties are equivalent. As we will see, in certain cases, one can reduce the task of proving various differential algebraic facts to proving corresponding algebraic facts.
\[one\] Let ${\mathfrak }p \subset A$ be a prime differential ideal. The following are equivalent:
1. ${\mathfrak }p = f^{-1} (f ({\mathfrak }p ) B)$,
2. $(f^*)^{-1} ({\mathfrak }p ) \neq \emptyset$,
3. $(f_\Delta ^*)^{-1} ( {\mathfrak }p ) \neq \emptyset$.
$(1) \Leftrightarrow (2)$ is precisely [@AtiyahMac Proposition 3.16]. $(3) \Rightarrow (2)$ is trivial. To show that $(2) \Rightarrow (3),$ note that $(f^*)^{-1} ({\mathfrak }p)$ is homeomorphic to $Spec \, B_ {\mathfrak }p / {\mathfrak }p B _ {\mathfrak }p$. The fact that the fiber is nonempty means that $B_ {\mathfrak }p / {\mathfrak }p B _ {\mathfrak }p$ is not the zero ring. Since it is a Keigher ring, $Spec^\Delta \, B_ {\mathfrak }p / {\mathfrak }p B _ {\mathfrak }p$ is nonempty (see [@differentialschemes]) and naturally homeomorphic to $(f^*_ \Delta)^{-1} ( {\mathfrak }p)$.
The following results are easy applications of the lemma:
\[twoone\]
1. If $f^*$ is surjective, so is $f^*_\Delta$.
2. If $f$ has the going-up property, then $f$ has the going-up property for differential ideals.
3. If $f$ has the going-down property, then $f$ has the going-down property for differential ideals.
Of course, by applying the previous corrollary to integral extensions with the standard additional hypotheses, we get the desired analogs of the classical going-up and going-down properties (see [@PongCat] or [@Trushin], where the results were reproved).
\[downdownbaby\] Suppose that $A$ is a Ritt algebra, $Spec^\Delta \, A$ is Noetherian, $B$ is a finitely generated differential ring over $A$, and the map $f:A \rightarrow B$ is the embedding map. Then the following are equivalent.
1. $f$ has the going-down property for differential ideals,
2. $f_\Delta^*$ is an open map (with respect to the $\Delta$-topology).
Let us prove that (2) implies (1). Let $ {\mathfrak }q \in Spec ^ \Delta \, B$ and let ${\mathfrak }p = f^{-1} ( {\mathfrak }q).$ Since we are interested in differential ideals contained in ${\mathfrak }q$, it will be useful to consider the local ring $B_ {\mathfrak }q$, and we note that $B_ {\mathfrak }q = \varinjlim _{t \in B \backslash {\mathfrak }q } B_t$.
By [@AtiyahMac Exercise 26 of Chapter 3], $f^* (Spec \, B_ {\mathfrak }q) = \bigcap _{t \in B \backslash {\mathfrak }q} f ^* ( Spec \, B_t)$. Now, by Corollary \[twoone\], surjectivity of $f^*$ implies surjectivity of $f_\Delta^*$, so $$f^* _\Delta( Spec ^ \Delta \, B _ {\mathfrak }q ) = \bigcap _{t\in B \backslash {\mathfrak }q} f_ \Delta ^* (Spec^ \Delta ( B_t )).$$ Since $f_\Delta ^*$ is an open map, $f^* _\Delta ( Spec ^ \Delta \, B_t )$ is an open neighborhood of $ {\mathfrak }p$ and so it contains $Spec ^ \Delta \, A_ {\mathfrak }p$.
We have proved that, for any $ {\mathfrak }q \in Spec ^ \Delta \, B$, the induced map $f_\Delta ^* :Spec ^ \Delta \, B_ {\mathfrak }q \rightarrow Spec ^ \Delta A_ {\mathfrak }p$ is a surjective map, where ${\mathfrak }p = f^{-1} ( {\mathfrak }q)$. Since differential ideals contained in $ {\mathfrak }p$ correspond to differential ideals in $A_ {\mathfrak }p$, we have established the going-down property for differential ideals.
Now we prove that (1) implies (2). Take ${\mathfrak }p \in f_ \Delta ^* (Spec ^\Delta \, B_t )$ with $f_\Delta ^* ( {\mathfrak }q) ={\mathfrak }p$. Take some irreducible closed subset $Z \subseteq Spec ^ \Delta \, A$ for which $Z \cap f_ \Delta ^* (Spec ^\Delta \, B_t )$ is nonempty. Now take some ${\mathfrak }p_1 \in Spec ^ \Delta \, A$ with ${\mathfrak }p_1 \subset {\mathfrak }p$. By the going-down property for differential ideals, $ {\mathfrak }p_1 = f^*_\Delta ({\mathfrak }q_1)$ for some $ {\mathfrak }q_1 \in Spec^ \Delta \, B$. Noting that ${\mathfrak }q_1$ is in $Spec ^\Delta \, B_t$, we see that $ f_ \Delta ^* (Spec ^\Delta \, B_t ) \cap Z$ is dense in $Z$. Since the set $ f_ \Delta ^* (Spec ^\Delta \, B_t ) $ is constructible [@Trushin Statement 11] and thus contains an open subset of its closure, $Z \cap f_ \Delta ^* (Spec ^\Delta \, B_t ) $ contains an open subset of $Z$. This holds for arbitrary $Z$, so $f_ \Delta ^* (Spec ^\Delta \, B_t ) $ is open.
In many circumstances the existance of long gap chains can be deduced from the existance of such chains in affine space (which are well known):
\[affinechain\] Let $p \in {\mathbb }A^d $ be an arbitrary point. Then there is a long gap chain defined over $\mathbb Q$ starting at $p$ and ending at ${\mathbb }A^d$.
The following is a specific example which establishes the previous fact. Moreover, this example produces a family of differential algebraic subgroups of the additive group such that for every $\alpha<\omega^m$ there is an element in the family whose Lascar rank is “close” to $\alpha$.
\[Affinepoint\] We produce a family $\{G_r: r\in n\times\NN^m\}$ of differential algebraic subgroups of the additive group $(\mathbb{A}^n,+)$ with the following properties. For every $r=(i,r_1,\dots,r_m)\in n\times \NN^m$, $$\w^m i+\sum_{j=k}^m\w^{m-j}r_j \leq U(G_r) < \w^m i+\w^{m-k} (r_k+1)$$ where $k$ is the smallest such that $r_k>0$, and if $r,s\in n\times \NN^m$ are such that $r< s$, in the lexicographical order, then the containment $G_r\subset G_s$ is strict. Here $U(G_r)$ denotes the Lascar rank of $G_r$. We refer the reader to [@MMP] for definitions and basic properties of this model-theoretic rank.
For $r=(i,r_1,\dots,r_m)\in n\times \NN^m$, let $G_r$ be defined by the homogeneous system of linear differential equations in the $\D$-indeterminates $x_0,\dots,x_{n-1}$, $$\left\{
\begin{array}{c}
\d_1^{r_1+1}x_i=0, \\
\d_2^{r_2+1}\d_1^{r_1}x_i=0,\\
\vdots\\
\d_{m-1}^{r_{m-1}+1}\d_{m-2}^{r_{m_2}}\cdots\d_1^{r_1}x_i=0, \\ \d_{m}^{r_m}\d_{m-1}^{r_{m-1}}\cdots\d_1^{r_1}x_i=0,
\end{array}
\right.$$ together with $$x_{i+1}=0,\cdots, x_{n-1}=0.$$ Note that if $r,s\in n\times\NN^m$ are such that $r< s$, then $G_r\subset G_s$ is strict. We first show that $$U(G_r)\geq \w^m i +\sum_{j=1}^m\w^{m-j}r_j.$$ We prove this by transfinite induction on $r=(i,r_1, . . . , r_m)$ in the lexicographical order. The base case holds trivially. Suppose first that $r_m \neq 0$ (i.e., the succesor ordinal case). Consider the (definable) group homomorphism $f:(G_r,+)\to (G_r,+)$ given by $f(x_i)=\d_m^{r_m -1}x_i$. Then the generic type of the generic fibre of $f$ is a forking extension of the generic type of $G_r$. Since $f$ is a definable group homomorphism, the Lascar rank of the generic fibre is the same as the Lascar rank of $\operatorname{Ker}(f)=G_{r'}$, where $r'=(i,r_1,\dots,r_{m-1},r_m -1)$. By induction, $$U(G_{r'})\geq \w^m i+\sum_{j=1}^{m-1}\w^{m-j}r_j +(r_m-1).$$ Hence, $$U(G_{r})\geq \w^m i+ \sum_{j=1}^m\w^{m-j}r_j.$$
Now suppose $r_m = 0$ (i.e., the limit ordinal case). Suppose there is $k$ such that $r_k \neq 0$ and that $k$ is the largest such. Let $\ell\in \w$ and $r' = (i,r_1,\dots,r_k -1,\ell,0,...,0)$. Then $G_{r'}\subset G_r$ and, by induction, $$U(G_{r'})\geq \w^m i+\sum_{j=1}^{k-1}\w^{m-j}r_j+\w^{m-k} (r_{k}-1)+\w^{m-k-1}\ell.$$ Since $\ell$ was arbitrary, $$U(G_r)\geq \w^m i+\sum_{j=1}^{k}\w^{m-j}r_j.$$ Finally suppose that all the $r_k$’s are zero and that $i>0$. Let $\ell\in \w$ and $r'=(i-1,\ell,0,\dots,0)$. Then again $G_{r'}\subseteq G_r$ and, by induction, $$U(G_{r'})\geq \w^m (i-1)+\w^{m-1}\ell.$$ Since $\ell$ was arbitrary, $$U(G_r)\geq \w^m i.$$ This completes the induction.
Let $k$ be the smallest such that $r_k>0$ and let $tp(a_0,\dots,a_{\ell-1}/K)$ be the generic type of $G_r$. We now show that if $i=0$, then $\D$-type$(G_r)=m-k$ and $\D$-dim$(G_r)=r_k$ (here $\Delta$-dim denotes the typical $\Delta$-dimension, see Chapter II of [@KolchinDAAG]). As $i=r_1=\cdots=r_{k-1}=0$, we have $a_1=\cdots=a_{n-1}=0$ and $\d_1 a_0=0,\,\dots,\,\d_{k-1} a_0=0,\, \d_k^{r_k+1}a_0=0$ and $\d_k^{r_k}a_0$ is $\D_{k}$-algebraic over $K$ where $\D_k=\{\d_{k+1},\dots,\d_m\}$. It suffices to show that $a_0,\d_k a_0,\dots, \d_k^{r_k-1}a_0$ are $\D_{k}$-algebraically independent over $K$. Let $f$ be a nonzero $\Delta_k$-polynomial over $K$ in the variables $x_0,\ldots,x_{r_k-1}$, and let $g(x)=f(x,\d_k x,\dots,\d_k^{r_k-1} x)\in K\{x\}$. Then $g$ is a nonzero $\D$-polynomial over $K$ reduced with respect to the defining $\D$-ideal of $G_r$ over $K$. Thus, as $a$ is a generic point of $G_r$ over $K$, $$0\neq g(a)=f(a,\d_k a,\dots,\d_k^{r_k-1} a),$$ as desired. Applying this, together with McGrail’ s [@McGrail] upper bounds for Lascar rank, we get $$U(G_r)<\w^{m-k}(r_k+1).$$ For arbitrary $i$, the above results show that $U(a_j/K)=\w^m$ for $j<i$, and $U(a_i/K)< \w^{m-k}(r_k+1)$ and $U(a_j/K)=0$ for $j>i$. Applying Lascar’s inequality we get: $$U(G_r)\leq U(a_0/K)\oplus\cdots\oplus U(a_{n-1}/K)<\w^m i+\w^{m-k}(r_k+1),$$ where $\oplus$ denotes the Cantor sum of ordinals. This proves the other inequality.
\[remP\] We now echo some remarks from [@PongCat]:
1. The catenary problem is essentially local; i.e, if you can find an ascending chain of irreducible differential subvarieties of an open set, then taking their closures results in an ascending chain in the given irreducible differential variety.
2. As Pong [@PongCat page 759] points out, truncated versions of the differential coordinate rings of singular algebraic varieties do not satisfy the hypotheses of the classical going-down or going-up theorems with respect the ring embedding given by Noether normalization. (In [@PongCat], this difficulty is avoided using resolution of singularities.)
In light of Theorem \[downdownbaby\], Fact \[affinechain\] and Remark \[remP\](1), one can see that the following question is a stronger version of the Kolchin catenary problem for algebraic varieties:
\[qeso\] Let $f: V \rightarrow {\mathbb }A^d$, where $d=dim\, V$, be a finite open map of irreducible affine algebraic varieties. Then, if $f_\Delta$ denotes $f$ when regarded as a map of differential algebraic varieties, is $f_\Delta$ an open map?
In the next question, we will use the terminology of [@MSHasse]. When $V$ is an algebraic variety, we let $V_ \infty$ denote the inverse image of the prolongation spaces $\varprojlim _n \tau _ n (V)$ with respect to the appropriate finite free algebra corresponding to $\Delta$ [@MSHasse Example 2.4]. When $f: V \rightarrow W$ is a map of varieties, there is a naturally induced map $f_ \infty : V _\infty \rightarrow W _\infty$. The following question would yield, by quantifier elimination and Theorem \[downdownbaby\], a positive answer to Question \[qeso\]:
Let $f: V \rightarrow {\mathbb }A^d$, where $d=dim\, V$, be a finite open map of irreducible affine algebraic varieties. Let $f_\infty : V_\infty \rightarrow {\mathbb }A^d _\infty$ be the induced map on their prolongation spaces. Is $f_\infty$ an open map?
We do not know the answers to either of these questions in general; however, there is some evidence for the first one. We observe that at least, in the context of Question \[qeso\], the image of every $\Delta$-open set contains a $\Delta$-open set. Let $f$ and $f_\Delta$ be as in Question \[qeso\], and let $U$ be a $\Delta$-open subset of $V$. Then, the Lascar rank of $U$ is $\omega^m\cdot d$ (where $d=dim\, V$). It follows, from Lascar inequalities and the fact that $f_\Delta$ has finite fibres, that $f_\Delta(U)$ has Lascar rank $\omega^m\cdot d$. By quantifier elimination, $f_\Delta(U)$ is constructible and so it must contain a $\Delta$-open set.
The second question does not appear to be answered in the literature on arc spaces, which is pertinent under the additional assumption that the variety $V$ is defined over the constants. The question can not be obviously answered by restricting ones attention to the finite level prolongation spaces even in the ordinary differential case for varieties defined over the constants (i.e., the case of arc spaces used in algebraic geometry). For instance, let $C$ be a curve and take $f: C \rightarrow {\mathbb }A^1$ to be the finite open map given by Noether normalization. The induced map of tangent bundles of a cuspidal curve $Tf: TC \rightarrow {\mathbb }A^2$ is not an open map; the extra component of the tangent bundle over the cusp gets mapped to a single point in ${\mathbb }A^2$.
Pong’s solution [@PongCat] to the Kolchin catenary problem for algebraic varieties avoids the stronger forms we have given here. Instead of asking about the general going-down property (for differential ideals) for the map coming from Noether normalization, Pong uses resolution of singularities to reduce the question to smooth varieties.
[^1]: \*This material is based upon work supported by an American Mathematical Society Mathematical Research Communities award and the National Science Foundation Mathematical Sciences Postdoctoral Research Fellowship, award number 1204510.
[^2]: \*\*This material is based upon work supported by an American Mathematical Society Mathematical Research Communities award.
| |
Dulwich made a spirited attempt to chase 252 in their Ryman Surrey Championship Division 1 match against Beddington, but in the end fell 12 runs short.
Dulwich put their opponents into bat, but were unable to achieve an early breakthrough as Australian OP James McAuliffe dominated an opening partnership of 96 in 20.3 overs with Antony Down, before falling to Tom Fox after making 60 off only 57 balls. Alex Gledhill put a brake on the scoring, conceding only 11 runs off his first eight overs, but the score had advanced to 153 after 33 overs when Levi Olver snapped up two wickets in an over. He picked up his third wicket at the start of the 42nd over, removing the obdurate Down for a painstaking 53 off 129 balls. Acting skipper Chris Lester had his namesake Graham lbw with the score on 219, and Naeem Iqbal also picked up a wicket and made a run out as Beddington closed on 252-7 after their 50 overs.
Dulwich also got off to a good start as Anil Mahey and Frankie Brown put on 87 for the first wicket in only 13.4 overs, putting Dulwich well ahead of the required rate. Brown’s dismissal for 33 off 34 balls brought in South African OP Jonathan Lewis, and he and Mahey took the score to 135 after 25 overs at the drinks interval. Mahey fell two balls after the resumption, having made 67 off 79 balls. Lewis was joined by Ben Rosser, who made 19 out of a third wicket partnership of 29 in 7.4 overs, and Iqbal in a fourth wicket stand of 26 in 5.1, but Lewis’s dismissal for 36 was followed in the next over by Iqbal for 16 to reduce Dulwich to 191-5 after 39.1 overs. Lester played a captain’s innings of 16 off 18 balls before falling to another namesake David, to make it 216-6 after 43.5 overs. With 37 needed off 37 balls the remaining batsmen tried to keep up with the rate but lost three out of the last four wickets to run outs. The last seven wickets had gone down for just 50 runs as they were all out for 240 off the first ball of the last over, Fox remaining unbeaten with 12.
Dulwich picked up a bonus point for losing by less than 40 runs, but still remain 22 points behind ninth placed Ashtead and 25 points behind eighth placed Farnham. Next week they visit third placed Normandy. | http://www.dulwichcc.com/?p=6108 |
Closed Loop (2017), Jake Elwes Courtesy Nature Morte In the essay ‘Art in the Time of the Artificial’ (1998), Frieder Nake, a pioneer of computer-generated art, describes the bewilderment surrounding Georg Nees’s works at the first exhibition of computer art in Stuttgart in 1965.
copyright by www.apollo-magazine.com
Uneasy questions confronted Nees’s drawings, created using a plotter following a programmed pattern to generate geometrical figures – was this authentic art? Who should be considered its author? We still appear to be asking the same questions today; the image itself is often outweighed by the process that constructs it.
In a moment when the information we consume and our patterns of digital behaviour are largely influenced by algorithmic processes, why does AI (artificial intelligence) art still confound us? Perhaps because the algorithmic image as aesthetic object takes us beyond the capacity of AI as a functional tool, turning it into a medium itself. The more interesting question might not be about who creates the art but why we consider it to be art in the first place. How do we surpass the obsession with whether AI art can convince us of its humanness and instead consider its larger implications on what we value as creativity? Unfortunately, the commercial positioning of AI art still emphasises spectacle and kitsch.
So,
when ‘Gradient Descent’, an exhibition at Nature Morte gallery in New Delhi (until 15 September) presented itself as ‘the first ever art exhibition in India to include artwork made entirely by artificial intelligence’, I was slightly wary. The exhibition, curated by the research collective 64/1, brings together coders and artists like Mario Klingemann, Memo Akten, Tom White and Anna Ridler, for a mini-survey of the visual possibilities of neural-network AI, which seeks to recreate the mechanics of cognition in the human brain. Klingemann, probably the most recognisable of the group, has been building these kinds of programmes for years, which he ‘trains’ on massive amounts of visual data to eventually create an output that, in the work on view here, continually merges his own webcam self-portraits with those of Old Masters. The video is titled 79530 Self Portraits: each successive image mutates to integrate the previous one at a giddy pace, facial features floating in a distorted, Baconesque world. | https://swisscognitive.ch/2018/09/18/ai-art-is-on-the-rise-but-how-do-we-measure-its-success/ |
Category:
Main Courses > Beef
Ingredients
MEATLOAF
1-1/2 pound ground beef
1 cup crushed round buttery crackers
1/3 cup milk
1/4 cup chopped onion
1/4 cup ketchup
1 tablespoon Worcestershire sauce
1 egg
1/2 teaspoon salt
1/8 teaspoon pepper
SAUCE
1 (8-1/4 ounce) can crushed pineapple in heavy syrup, undrained
1/2 cup ketchup
2 tablespoons brown sugar
2 teaspoons cornstarch
Directions
Heat oven to 350 degrees F. In large bowl, combine all meat loaf ingredients; mix well. In ungreased 13 x 9 inch pan, form mixture into 9 x 4 inch loaf. Bake at 350 degrees F. for 45 minutes.
In small saucepan, combine all sauce ingredients; mix well. Cook over medium heat until mixture boils and thickens, stirring frequently. Spread on top of meat loaf; bake an additional 15 to 20 minutes or until meat loaf is no longer pink. Let stand 5 minutes before slicing. | https://www.sawyers-specialties.com/recipes/gramdmas-meatloaf/ |
Dog Vs Cats At The Premiere For ‘The Secret Life Of Pets’ (Video)
The story of ‘The Secret Life of Pets’ is a pretty easy one. Pets have a life when we leave them alone. They don’t just sit and lounge around (like they probably do in real life). In the movie, a dog’s (Max) life is perfect and then his person gets a new dog (Stonestreet) and everything turns awful. They leave and end up getting lost and mixed up with the wrong kind of animals.All of their animal friends must search for them to return home.
We know all that. What we don’t know is the age old question, what’s better cats or dogs? We find out from stars Kevin Hart, Eric Stonestreet, Bobby Moynihan, Jenny Slate, Lake Bell, and Louis C.K. to find out what kind of animal they have, and which they like better. The results may surprise you.
The Secret Life of Pets comes out tomorrow, Friday July 8, 2016. | http://www.hollywood.com/movies/dog-vs-cats-at-the-premiere-for-the-secret-life-of-pets-video-60603047/ |
This is because they are both flat spaces, so you can carry over intuition from one to the other and the easiest way to encode this intuition is by drawing them the same way, i.e. as a plane.
Now on a 2D plane, if dx and dy are the distances between two points on the x and y axis, then the distance between both of the points is:for 3D space:for 4D space:
Of course you could naturally ask the question, what if I put a minus sign in front of one of the terms, I'd get:
So there are two possible four dimensional spaces here. The first one, with the + sign is called 4D Euclidean space and the second one with the - sign is called 4D Minkowski space.
They behave quite differently, but they are both valid spaces mathematically. It just turns out that our universe is the second one, not the first.
You can compare a universe with the first type of distance rule (basically the universe of Aristotles physics) to the real world and it fails to match the behaviour of the real world, unless things are moving very slowly.
So the first thing is, you can't get this minus sign from the triangle. The triangle is just a path through the space or more accurately the composition of three paths. Those paths can be drawn on both Euclidean space and Minkowski space.
In other words, the picture only represents a triangle in a flat space, whether that space is Minkowskian or Euclidean is an extra detail you have to supply.
So, to my mind, you could write down pythagoras' theorem, calculate distances and then ask "why" must that be the rule that triangles obey. The truth is that it isn't the only logical possibility. Then you could suggest the new one, with the minus sign, which is our actual universe.
Edited by Son Goku,
Edited by Son Goku,
This is telling me that it is slower than light travel that is impossible / absurd !!!! my ftl version comes out a perfectly tolerable 2.64etc.
Please tell I just got the terms backward or something. Please?
The whole point is that displacements which we call "slower than light" are ones with the time displacement being larger than all the spatial ones. These are called time-like displacements.
Faster-than-light displacements have larger space components and are hence called "space-like".
If you use:then spacelike distances are imaginary, if you usethen timelike is imaginary.
However in calculations of physical quantities the imaginary number drops out and the spacelike displacements always come out with problems (infinite-energy, e.t.c.) regardless of which one you initially give the imaginary value to.
Of course in some cases the time and space displacements are equal and you get 0 as the interval, which is called "light-like" since light moves along this displacements or "null" because of the 0 result.
All relativity prevents is crossing over between the two domains. No amount of energy can change one type into the other.
It's quantum mechanics however that rules out tachyons completely. Since every particle must be an excitation of a quantum field, there would need to be a tachyon field.
However, first of all, tachyon fields are unstable. Basically two tachyons would have less energy than none, four would have less energy than two and so on, so very quickly the field would produce an infinite number of particles.
However quantum mechanics forbids* an infinite particle state like this and so the only way such a tachyon field can obey quantum mechanics and still exist is if it remains frozen in its "no particle" configuration. However interaction with any other field would kick it out of its ground state, so basically the tachyon field cannot interact with anything and might as well be non-existent.
*This is quite difficult to explain, but basically a field with an infinite particle state like this, simply cannot exist mathematically, attempting to write one down is basically like writing down 1=2. Rather than forbidding it, quantum mechanics says it is logically inconsistent. | https://www.evcforum.net/dm.php?control=page&t=17019&mlist=on&mbrid=3874&p=1 |
3 edition of Managing computer aided design found in the catalog.
Published
1980 by Published by Mechanical Engineering Publications for the Institution of Mechanical Engineers in London .
Written in English
Edition Notes
Includes bibliographical references.
|Genre||Congresses.|
|Series||I Mech E conference publications -- 1980-8.|
|Contributions||Institution of Mechanical Engineers (Great Britain), Institution of Mechanical Engineers (Great Britain). Process Industries Division.|
|The Physical Object|
|Pagination||48 p. :|
|Number of Pages||48|
|ID Numbers|
|Open Library||OL18680364M|
|ISBN 10||0852984707|
Use the AutoCAD interface and a keyboard, cursor pointing device, and graphics terminal to put drawing information into a computer. Describe and use the basic terms, concepts, and techniques of computer-aided drafting. Set up drawings, use drawing aids and save drawings. Draw lines, basic shapes, and geometric constructions, and edit drawings.
Erewash borough guide.
mighty endeavour
Hubble
LT 3-C Gdr Natures Celebr Is
A practical synopsis of cutaneous diseases
Walking to New Orleans
year of triumph
Short sonata, in C, for piano solo.
Carthage and Tunis
On Friendship
Invitation
Sandwell-Handsworth railway section.
Depression and antidepressants
Get this from a library. Managing computer aided design: conference. [Institution of Mechanical Engineers (Great Britain). Process Industries Division.;]. Managing the Building Design Process explains the designer's role in the creation of new buildings from the development of the plan through to completion.
One key case study is used throughout the book so that the reader can clearly follow the process leading to the creation of a new building.5/5(2). The engineering design process is a common series of steps that engineers use in creating functional products and processes.
The process is highly iterative - parts of the process often need to be repeated many times before another can be entered - though the part(s) that get iterated and the number of such cycles in any given project may vary. It is a decision making process (often iterative.
Computer-aided architectural design (CAAD) is capable of modeling and manipulating objects (not merely their graphical representations), reasoning about and predicting performance of design solutions, generating new design solutions through algorithmic and other methods, managing vast amounts of information, and taking advantage of Cited by: Facilities management such as planning, computer-aided design and BIM integration, space management, move management, and resource scheduling.
Maintenance management covering asset management, work requests, preventive maintenance, work order administration, warranty tracking, and facility condition assessment. Get this from a library.
An object-oriented data model for managing computer-aided design and computer-aided manufacturing data bases. [Stephanie Cammarata; Rand Corporation.]. Computer-Aided Design.
That’s when Montgomery turned to computer technology for help and began using a computer-aided design (CAD) software package to design not only the engine but also the board itself and many of its components.
The CAD program enabled Montgomery and his team of engineers to test the product digitally and work out design problems before moving to the prototype stage. that you need to run a wide variety of software on your PC, from Computer Aided Design (CAD) programs, Helpdesk and financial software, to practical software solutions such as temperature monitoring, a Building ManagementFile Size: KB.
Managing the Building Design Process explains the designer's role in the creation of new buildings from the development of the plan through to completion. One key case study is used throughout the book so that the reader can clearly follow the process leading to the creation of a new building.
Read the latest chapters of Computer Aided Chemical Engineering atElsevier’s leading platform of peer-reviewed scholarly literature Tools For Chemical Product Design From Consumer Products to Biomedicine.
Edited by Mariano Martín, Mario R. Eden, Nishanth G. Chemmangattuvalappil. Managing Risk in the Design of. The use of computer-aided systems could reduce cost and design rework and requalification by providing engineering design teams with the most current materials-property data, knowledge of factors such as materials options and life-cycle costs, and available materials for a design based on experience derived from previous product developments.
Quality Assurance is defined as part of quality management that ensures that quality requirements are met. The requirements for high-quality, reliable, predictable software become increasingly necessary when we strive to meet the customer’s quality expectations. computer aided collaboration in managing construction burçin becerik spiro n.
pollalis harvard design school department of architecture design and technology report series The Planning Guide to Piping Design, Second Edition, covers the entire process of managing and executing project piping designs, from conceptual to mechanical completion, also explaining what roles and responsibilities are required of the piping lead during the process.
The book explains proven piping design methods in step-by-step processes. Using the Computer and Managing Files - module 2 - quiz, - computer e-education, open access, human rights, digital literacy Using the Computer and Managing Files – quiz 2 > start the quiz.
Proofreading: Ana Kedves. Computer-aided design (CAD) and the basic use of. In this two-day workshop, participants learn how to preserve and provide access to physical and digital design and construction records.
The first day covers the process of design, legal issues, appraisal, types of records, arrangement, and description; the second day focuses on media and support identification, preservation, reformatting.
Computer-aided design (CAD) is a computer technology that designs a product and documents the design's process. CAD may facilitate the manufacturing process by transferring detailed diagrams of a product’s materials, processes, tolerances and dimensions with specific conventions for the product in question.
It can be used to produce either. We have solutions for your book. Chapter: CH1 CH1.C1 CH1.C2 CH2 CH2.C1 CH2.C2 CH3 CH3.C1 CH3.C2 CH4 CH4.C1 CH4.C2 CH5 CH5.C1 CH5.C2 CH6 CH6.C1 CH6.C2 CH7 CH7.C1 CH7.C2 CH8 CH8.C1 CH8.C2 CH9 CH9.A CH9.C1 CH9.C2 CH10 CHC1 CHC2 CH11 CH12 CH13 CH14 CH15 Problem: 1DQ 1P 2DQ 2P 3DQ 3P 4DQ 4P 5DQ 5P 6DQ 6P 7DQ 7P 8DQ 8P 9DQ 10DQ 11DQ.
Computer-aided engineering (CAE) technology is rapidly altering the design services delivery process. The implementation of this technology is affecting the management of design production as well as associated quality control activities. repair of computer hardware and related equipment.
Employees whose work is highly dependent upon, or facilitated by, the use of computers and computer software programs (e.g., engineers, drafters and others skilled in computer-aided design software), but who are not primarily engaged inFile Size: KB.
Catalog Description. Introduces basic techniques and algorithms for computer-aided design and optimization of VLSI circuits.
The first part discusses VLSI design process flow for custom, ASIC and FPGA design styles and gives an overview of VLSI fabrication with emphasis on interconnections.
The Reverse Engineering Design Applied on Santana's Front Door Outer Panel Based On the CAD Software International Conference on Instrumentation, Measurement, Circuits and Systems (ICIMCS ) Study on the Simulation System of Cam-Linkage Mechanisms Based on the Simulink SoftwareCited by: Questions: 1.
Flowchart the design and production processes for writing a book such as Managing Quality: Integrating the Supply Chain. Use the standard process for designing products in the chapter.
Example: There are many different approaches to designing products. Even within the same industries, the approaches vary in some important ways. Purchase Integrated Design and Simulation of Chemical Processes, Volume 13 - 2nd Edition. Print Book & E-Book.
ISBNcapabilities of computer-aided design CAD, together with the database and reporting capabilities of an enterprise asset management system (EAM).
This solution gives you the ability to view the fiber network data geographically on a map, and to design new fiber networks and additions interactively in a File Size: KB.
Since the invention of computer-aided design (CAD) in the late s, the design and construction process has transmogrified to such an extent that the traditional phases of design, design development, and construction have lost their distinction.
To identify the information technologies required for a computer-aided system to support materials selection, the committee articulated a future vision of a full-function Computer-Aided Materials Selection System (CAMSS) based on the information summarized in Chapter the future, materials selection is envisioned in a business context that has several major differences compared to current.
•Takes the reader through each process in the designer's role, from inception and planning through to the design and pre-contract administration•New edition covers Computer Aided Draughting and current issues such as sustainability, the needs of special groups and Construction Design and Management Legislation•Essential reading for students studying architecture.
Uses of Computer Technology 5. Computer-Aided Layout Planning 6. Personal Computer Applications 7. Computer-Aided Design (CAD) 8. Management Information Systems (MIS) Applications Part III: How to Achieve Success 9.
Selecting and Developing Computer Aids CAD Selection and Installation Managing Computer Resources Part IV: How to Learn. Computer aided design (CAD) technology is one of the most influential information technology (IT) innovations of the last four decades. This paper studies the factors that influence the spread of this important IT innovation in the context of the Turkish architectural design practice.
Includes instruction in architectural drafting, computer-assisted drafting and design (CADD), creating and managing two and three-dimensional models, linking CAD documents to other software applications, and operating systems.
Graduates should qualify for CAD jobs in architectural and engineering firms and industrial design businesses. Multi-Threshold CMOS Digital Circuits Managing Leakage Power discusses the Multi-threshold voltage CMOS (MTCMOS) technology, that has emerged as an increasingly popular technique to control the escalating leakage power, while maintaining high performance.
The book addresses the leakage problem in a number of designs for combinational, sequential, dynamic, and current-steering logic. Drawing and Managing Projects, Are you an undiscovered Interior Designer. Part 2 You also need to have skills in managing people from all walks of life. This includes the tradesman An introduction to Computer Aided Design and Drafting Perspective drawing by computer.
Computer-aided vaccine design is a comprehensive introduction to this exciting field of study. The book is intended to be a textbook for researchers and for courses in bioinformatics, as well as a laboratory reference guide.
It is written mainly for biologists who want to understand the current methods of computer-aided vaccine design. Computer-aided design, however, is relatively new, emerging as an experimental tool in the early s. Traditionally, engineers have hand-drawn a series of two-dimensional line drawings, or.
Spatial data are important in many areas of the public and private sectors, including geographical information systems [32,60], computer-aided design [19, 42], multimedia information systems [5, knowledge processing image processing computer virtual reality method fot information modeling intelligence metasynthesis computer vision neural net intelligent system new approach knowledge representation computer-aided design actual problem decision support system system analysis complex artificial system information model information technology.
Computer Knowledge is a Must for Computer Aided Drafting If you’re going to leap into the world of computer-aided drafting, basic computer classes are a must. In today’s world, most people at least have a working knowledge of computers which included that latest Windows operating system. Computer Aided Design Software (CAD) CAD (or CAM, short for computer aided manufacturing) software, is software that allows you to create the actual schematics and computer models for products.
Similar to 3D printers, a top of the line example of CAD software is Fusiona CAD software that is paid on a subscription basis for around $ a /5(3).
Computer-Aided Design and Manufacturing Systems. Computers have transformed the design and manufacturing processes in many industries. In computer-aided design (CAD), computers are used to design and test new products and modify existing ones.
Engineers use these systems to draw products and look at them from different angles. A computer-aided design system that is thought of as a process innovation by the user, may be considered a product innovation by the manufacturer.
A much debated and researched issue in the literature about innovations is the link between firm size and innovativeness.The key to managing this increased design complexity while meeting the shortening time-to-market factor is the use of computer-aided design (CAD) and verification tools.
Today's high-speed workstations provide ample power to make large and detailed computations : $CAD deals with the process of design and design-documentation using a computer; these days, the use of a computer is so ubiquitous that the words computer-aided could just as well be omitted. | https://wyqokekicu.coinclassifier.club/managing-computer-aided-design-book-17996fk.php |
My one goal race for this year is set for the Fall in Toronto.
Below you will find my collected thoughts through each stage of training that ultimately culminated in a PB for the full marathon in Rome on March 22.
10K
|not the most flattering of pics :) nice headwear!|
Ran two 10K's, one in December and another in April. The December race was 38:29 off four weeks of training. I relied on marathon fitness from the Chicago Marathon in October to carry me through.
Workouts consisted mostly of:
- multiple short and hard interval days from 3k to 5k paces followed immediately by recovery run days
- moderate distance slow running to maintain aerobic base but not sacrificing speed
- Mileage hovered around 90+km/week with longest runs @18 km
- consistent resistance training (4x/week) with two days of lower and two of upper
Result was 38:32, which was 3 seconds slower than my PB but I did it on a windy and cold day compared to a downhill course in ideal conditions.
Post-race analysis
- felt strong for most of the run and finish strong
- lost focus at 8 km and this cost me a bit of time
- misjudged my kick and had a hundred meters of so left in my tank
HM
|still my favorite pic -- Bermuda HM|
Also ran two HMs, one in January on a hilly and warm course and another in February on slick and frozen roads in windy and wintry conditions
Workouts consisted mostly of:
- longer runs (up to 26km) at easy or aerobic paces to increase fatigue resistance and also to serve as base for a March marathon
- Longer hard intervals at 10k to HM paces
- Peak mileage week was 111 km but the rest were fairly low (high 80s/low 90s) due to fatigue and dreary winter
Result was a well-run HM finishing with a 1:26:24. Not a PB but a very decent time.
Post-race Analysis
- pretty strong running in the first half but had difficulty on the back half indicating a lack of fitness and fatigue resistance
- need to increase effort in the intervals plus add more aerobic distance in order to be successful in the marathon
- Lack of hill training clearly showed especially towards the end
FM
Spring race in Rome and perhaps my most complete run to date. Strong from start to end, fueling was spot-on, and felt like I could/should have finished at least one minute faster if not for the congestion and slippery conditions in the first quarter of the race
Workouts consisted mostly of:
- Running hard intervals at the proper effort. I made this adjustment after reading Faster Road Racing. It turns out that my previous hard efforts weren't hard enough
- Introduced LT Intervals and LT hills into my program. I felt this was the difference maker
- The Peterborough Half in February served as the breakthrough workout. Even though I finished that race with great difficulty in 1:29, I felt that the workouts that followed this race all started feeling easier despite the increased pace
- Longest run was 32 km (one time) but I was able to hit long segments at marathon pace
- Peak week at 121 km averaging around 112 operating on three weeks hard/one week recovery pattern
- Consistent strength training helped me stay injury-free
- Introduced mental-training into regimen to help me focus and address weaknesses, most notably keeping pace during the middle miles
Post-race analysis:
- Steady pacing throughout the race
- Used a new fueling strategy that helped preserved glycogen (gel pack every 5-6 km until 32 km then carb rinse until finish)
- Switched back to a three-week taper (vs two weeks for the past few races) really brought a lot of life back into my legs on race day
- Strong mental focus kept me from slipping mid-way. Looking at my splits, it actually helped me run faster
- I didn't do as much long runs during training, instead making sure that total weekly mileage remained high. I had one 32 km, 2 x 30 km, and a bunch of 26-28 km.
Adjustments for future training
- Slightly increase total weekly mileage. I used to get sick if I reach the 120 km weekly mark but the implications for increasing weekly mileage are clear if I want to keep setting new PBs.
- Add more mental training elements
- Add plyometric workouts to enhance explosiveness and leg strength
- Continue practicing 80/20 running and ensure that hard workouts are run at the correct paces (should mostly be at faster than LT)
Those are my thoughts on my 2015 race year so far. There's more work to be done and, so far, things are looking very promising indeed. I hope that some of you will find these notes useful for your own training and racing. | http://www.9run.ca/2015/05/training-notes-from-10k-to-marathon.html |
---
abstract: 'We compute the decomposition of representations of Yangians into ${{\mathfrak g}}$-modules for simply-laced ${{\mathfrak g}}$. The decomposition has an interesting combinatorial tree structure. Results depend on a conjecture of Kirillov and Reshetikhin.'
author:
- 'Michael Kleber[^1]'
title: |
Combinatorial Structure of Finite\
Dimensional Representations of Yangians:\
the Simply-Laced Case
---
Introduction {#sec_intro}
============
Let ${{\mathfrak g}}$ be a complex semisimple Lie algebra of rank $r$, and $Y({{\mathfrak g}})$ its Yangian ([@Dr]), a Hopf algebra which contains the universal enveloping algebra $U({{\mathfrak g}})$ of ${{\mathfrak g}}$ as a Hopf subalgebra. Write $\alpha_1,\ldots,\alpha_r$ for the fundamental roots and $\omega_1,\ldots,\omega_r$ for the fundamental weights of ${{\mathfrak g}}$. As defined in [@KR], denote by ${{W_m({\ell})}}$ a particular irreducible $Y({{\mathfrak g}})$ module all of whose ${{\mathfrak g}}$-weights $\lambda$ satisfy $\lambda\preceq m{{\omega_{\ell}}}$, where $\alpha\preceq\beta$ means $\beta-\alpha$ is a nonnegative integer linear combination of the roots $\{\alpha_i\}$. Specifically, ${{W_m({\ell})}}$ decomposes into ${{\mathfrak g}}$-modules as $$\label{def_decomp}
{{W_m({\ell})}}|_{{{\mathfrak g}}} {\simeq}{\bigoplus}_{\lambda\preceq m{{\omega_{\ell}}}} V_\lambda^{{\oplus}n_\lambda}$$ where $V_\lambda$ is the irreducible ${{\mathfrak g}}$-module with highest weight $\lambda$ and it occurs $n_\lambda$ times in ${{W_m({\ell})}}$. In particular, $n_{m{{\omega_{\ell}}}}=1$.
There is a formula for the multiplicities $n_\lambda$ in [@KR] based on the conjecture that every finite-dimensional representation of $Y({{\mathfrak g}})$ can be obtained from one specific representation by means of the “reproduction scheme,” defined in [@KRS]. If this conjecture holds, then write $\lambda = m{{\omega_{\ell}}}- \sum n_i \alpha_i$, and it is proved in [@KR] that $$\label{defZ}
n_\lambda = Z({\ell},m|n_1,\ldots,n_r) =
\sum_{\mbox{partitions}} \;\; \prod_{n\geq1} \;\; \prod_{k=1}^r
{{P^{(k)}_n(\nu) + \nu^{(k)}_n} \choose {\nu^{(k)}_n}}$$ The sum is taken over all ways of choosing partitions $\nu^{(1)},\ldots,\nu^{(r)}$ such that $\nu^{(i)}$ is a partition of $n_i$ which has $\nu^{(i)}_n$ parts of size $n$ (so $n_i =
\sum_{n\geq1} n \nu^{(i)}_n$). The function $P$ is defined by $$\begin{aligned}
\label{defPgen}
P^{(k)}_n(\nu) &=& \min(n,m)\delta_{k,{\ell}}
- 2 \sum_{h\geq 1} \min(n,h)\nu^{(k)}_{h} + \\
&&\hspace{1cm} +
\sum_{j\neq k}^r \sum_{h\geq 1} \min(-c_{k,j}n,-c_{j,k}h)\nu^{(j)}_{h}
\nonumber\end{aligned}$$ where $C=(c_{i,j})$ is the Cartan matrix of ${{\mathfrak g}}$. We define ${a\choose b}$ to be 0 whenever $a<b$; since the values of $P$ can be negative, many of the binomial coefficients in (\[defZ\]) can be zero.
Yangians are closely related to quantum affine universal enveloping algebras $U_q({{\hat{\mathfrak g}}})$ when $q$ is not a root of unity, and $U_q({{\mathfrak g}})$ is a Hopf subalgebra of $U_q({{\hat{\mathfrak g}}})$ in much the same way that ${{\mathfrak g}}$ is a Hopf subalgebra of $Y({{\mathfrak g}})$. View $V_\lambda$ as a highest weight module over $U_q({{\mathfrak g}})$; then an affinization of $V_\lambda$ is defined in [@ChP] to be a $U_q({{\hat{\mathfrak g}}})$-module all of whose weights as a $U_q({{\mathfrak g}})$-module satisfy $\mu\preceq\lambda$ and $\lambda$ appears with multiplicity 1. In the case that $\lambda=m{{\omega_{\ell}}}$, $V_{m{{\omega_{\ell}}}}$ has a unique minimal affinization ([@ChP]) with respect to a partial ordering defined in [@Ch], and it is believed ([@Rtalk]) that the decomposition of this minimal affinization into $U_q({{\mathfrak g}})$-modules is the same as the decomposition for the Yangian module ${{W_m({\ell})}}$.
In Section \[sec\_algorithm\], we view the values of $P^{(k)}_n$ as the coordinates of certain strings of weights of ${{\mathfrak g}}$ which lie inside the Weyl chamber. This interpretation allows us to compute the values of $n_\lambda$ much more efficiently. Furthermore, the “initial substring” relation on the labelling by strings of weights imposes the structure of a rooted tree on the set of ${{\mathfrak g}}$-modules which make up ${{W_m({\ell})}}$, rooted at $V_{m{{\omega_{\ell}}}}$ and with the children of any $V_\lambda$ having highest weights $\mu=\lambda-\delta$ with $\delta$ in the positive root lattice.
In Section \[sec\_growth\], we use this added structure to study the asymptotics of the dimension of ${{W_m({\ell})}}$ as $m$ gets large, based on the fact that the tree structure of ${{W_m({\ell})}}$ lifts to $W_{m+1}({\ell})$. We show that the conjecture implies that the dimension grows asymptotically to a polynomial in $m$, and compute the degree of this polynomial for every simply-laced ${{\mathfrak g}}$ and choice of ${{\omega_{\ell}}}$.
In Section \[sec\_table\] we give a list of the decompositions of ${{W_m({\ell})}}$ for all simply-laced ${{\mathfrak g}}$ and small values of $m$ as derived numerically from the conjecture, using the results of Section \[sec\_algorithm\]. For any choice of ${{\mathfrak g}}$, representations $W_1({\ell})$ are called fundamental representations, since every finite-dimensional representation of $Y({{\mathfrak g}})$ appears as a quotient of a submodule of a tensor product of fundamental representations. In the context of $U_q({{\hat{\mathfrak g}}})$-modules, the decompositions of most of the fundamental representations were calculated in [@ChP] using completely different techniques, and those calculations agree with ours.
A similar idea can be used to give a combinatorial interpretation to the values in equations (\[defZ\]) and (\[defPgen\]) when ${{\mathfrak g}}$ is not simply-laced. The resulting structure is not as regular as in the simply-laced case, but should yeild similar results.
The author is grateful to N. Yu. Reshetikhin for suggestion of the problem, discussions, support and encouragement during the development and preparation of this paper.
Structure in the simply-laced case {#sec_algorithm}
==================================
Assume that our Lie algebra ${{\mathfrak g}}$ of rank $r$ is simply-laced. Then equation (\[defPgen\]) becomes $$\label{defP}
P^{(k)}_n(\nu) = \min(n,m)\delta_{k,{\ell}} - \sum_{j=1}^r c_{j,k}
\left( \sum_{h\geq 1} \min(n,h)\nu^{(j)}_{h} \right)$$ Fix a highest weight $m{{\omega_{\ell}}}$, and pick an arbitrary $\nu =
(\nu^{(1)},\ldots,\nu^{(r)})$, where each $\nu^{(i)}$ is a partition of some integer $n_i$. Then for any nonnegative integer $n$, the values $(P^{(1)}_n,\ldots,P^{(r)}_n)$ can be thought of as the $\omega$-coordinates of some weight; define $$\mu_n = \sum_{k=1}^r P^{(k)}_n \omega_k$$ A given $\nu$ contributes a nonzero term to the sum in (\[defZ\]) if and only if the corresponding weights $\mu_0=0, \mu_1, \mu_2,\ldots$ all lie in the dominant Weyl chamber. The motivation for seeing these as weights is that the sum in (\[defP\]) can be naturally realized as subtracting some linear combination of roots; if we let $$\label{defd}
d_n = \sum_{k=1}^r
\left( \sum_{h\geq 1} \min(n,h)\nu^{(k)}_{h} \right) \alpha_k$$ then $\mu_n = \min(n,m){{\omega_{\ell}}}- d_n$.
Think of $\nu^{(1)},\ldots,\nu^{(r)}$ as Young diagrams with $\nu^{(k)}$ having $\nu^{(k)}_{h}$ rows of length $h$. Then we can tell whether a sequence of vectors $d_0=0, d_1, d_2,\ldots$ can arise from $\nu^{(1)},\ldots,\nu^{(r)}$ by looking at their successive differences $\delta_i = d_i - d_{i-1}$. If we write $\delta_n$ out as a linear combination of the roots $\{\alpha_i\}$, then the $\alpha_k$-coordinate is the number of boxes in the $n$th column of the Young diagram of $\nu^{(k)}$, since the sum $\sum_h \min(n,h)\nu^{(k)}_{h}$ in (\[defd\]) is the number of boxes in the first $n$ columns. Thus a sequence arises from partitions if and only if the $\delta_i$ are nonincreasing; that is, $\forall i\geq 1 : \delta_i \succeq \delta_{i+1}$.
If we let $s$ be the size of the largest part in any of the partitions in $\nu$, then $d_s = d_t$ for all $t>s$ (and $s$ is the smallest index for which this is true), and all the information we need to identify a particular summand of ${{W_m({\ell})}}$ is the (strictly increasing) chain of weights $d_0=0 \prec d_1\prec\cdots\prec d_s$, which we define to have length $s$. Note that the chain of length 0 consisting of only $d_0=0$ is permissible, arises from empty partitions, and corresponds to the $V_{m{{\omega_{\ell}}}}$ component of ${{W_m({\ell})}}$.
In summary, we have proven the following:
\[thm\_decomp\] Let ${{\mathfrak g}}$ be a simply-laced complex semisimple Lie algebra of rank $r$ with fundamental roots $\alpha_1,\ldots,\alpha_r$ and fundamental weights $\omega_1,\ldots,\omega_r$, and assume the decomposition of ${{W_m({\ell})}}$ into ${{\mathfrak g}}$-modules in (\[def\_decomp\]) is given by the conjecture in (\[defZ\]) and (\[defPgen\]). Then that decomposition can be refined into a direct sum of parts indexed by chains of weights ${{\mathbf d}}=d_0,\ldots,d_s$ with successive differences $\delta_i = d_i - d_{i-1}$ (and $\delta_{s+1}=0$) such that
1. $d_0=0$ and $d_0\prec d_1\prec\cdots\prec d_s$,
2. $\min(n,m){{\omega_{\ell}}}- d_n$ lies in the positive Weyl chamber for $0\leq n\leq s$, and
3. $\delta_i \succeq \delta_{i+1}$ for all $1\leq i\leq s$.
The summand with label ${{\mathbf d}}=d_0,\ldots,d_s$ consists of the ${{\mathfrak g}}$-module of highest weight $m{{\omega_{\ell}}}- d_s$ with multiplicity $$\prod_{n\geq1} \;\; \prod_{k=1}^r \;
{{P^{(k)}_n({{\mathbf d}}) + {{\mathbf d}}^{(k)}_n} \choose {{{\mathbf d}}^{(k)}_n}}$$ where the values of $P^{(k)}_n({{\mathbf d}})$ and ${{\mathbf d}}^{(k)}_n$ are defined by the relations $$\begin{aligned}
\min(m,n){{\omega_{\ell}}}- d_n &=& \sum_{k=1}^r P^{(k)}_n({{\mathbf d}}) \omega_k \\
\delta_n - \delta_{n+1} &=& \sum_{k=1}^r {{\mathbf d}}^{(k)}_n \alpha_k\end{aligned}$$ and all of the multiplicities are nonzero.
This decomposition is a refinement of the one in (\[def\_decomp\]) since it is possible to find two different chains $d_0,\ldots,d_s$ and $d'_0,\ldots,d'_t$ with $d_s = d'_t$. This happens any time the sum in (\[defP\]) has more than one nonzero term. One example of this occurs in $W_2(4)$ for $E_6$; see Figure \[fig\_tree\].
\[cor\_algor\] If $d_0,\ldots,d_s$ is a valid label then any initial segment $d_0,\ldots,d_{s'}$ (for $0 \leq s' < s$) is a valid label also. Conversely, given any label $d_0,\ldots,d_s$, we can extend it to another valid label by appending any weight $d_{s+1}$ which satisfies the conditions that $\min(s+1,m){{\omega_{\ell}}}- d_{s+1}$ is in the positive Weyl chamber and, if $s>0$, that $d_s \prec d_{s+1} \preceq
d_s+\delta_s$.
This follows immediately from conditions [*(i)–(iii)*]{}. Since $d_0$ must be 0, this completely describes an effective algorithm for computing the conjectured decomposition of a given ${{W_m({\ell})}}$. The computations in Section \[sec\_table\] were computed using this algorithm. The fact that an initial segment of a valid label is still a valid label is the key result which fails to hold true when ${{\mathfrak g}}$ is not simply-laced.
Since truncating any label gives you another label, we can impose a tree structure on the parts of this decomposition, with a node of the tree corresponding to a summand in the decomposition from Theorem \[thm\_decomp\]. The “children” of the node with label $d_0,\ldots,d_s$ are all the nodes indicated by Corollary \[cor\_algor\]; we can label the edges joining them to their parent with the various choices for the increment $\delta_{s+1}$. For each $n\geq 0$, the $n$th row of the tree consists of all the nodes with labels of length $n$.
&&&V\_[2\_4]{}&&&&&&&\
&& (3,2) \^[(0,1,1,2,1,0)]{} & [(1,1,2,3,2,1)]{} & (5,2) \^[(2,3,4,6,4,2)]{} &&&&&&\
V\_[\_1 + \_4 + \_6]{} & & & 2 V\_[\_2 + \_4]{} & & & & & V\_[\_4]{} & &\
[(0,1,1,2,1,0)]{} & & \^[(0,1,0,1,0,0)]{} & [(0,1,1,2,1,0)]{} & \^[(1,1,2,3,2,1)]{} & & & \^[(0,1,1,2,1,0)]{} & [(1,1,2,3,2,1)]{} & \^[(2,3,4,6,4,2)]{} &\
V\_[2\_1 + 2\_6]{} & V\_[\_3 + \_5]{} & & 2 V\_[\_1 + \_2 + \_6]{} & & 3 V\_[2\_2]{} & V\_[\_1 + \_6]{} & & 2 V\_[\_2]{} & & V\_[0]{}\
&&&&& [(0,1,0,0,0,0)]{} &&&&&\
&&&&& V\_[\_4]{} &&&&&\
As an example of this structure, the tree for the decomposition of $W_2(4)$ for ${{\mathfrak g}}=E_6$ is given in Figure \[fig\_tree\]. Scalars in front of modules, as in $2 V_{\omega_2 + \omega_4}$, indicate multiplicity. The label $(a_1,\ldots,a_6)$ corresponds to an increment $\delta=\sum a_i\alpha_i$, so condition [*(iii)*]{} says that the labels along any path down from $V_{2\omega_4}$ will be nonincreasing in each coordinate. The labels on the edges are technically unnecessary, since they can be obtained by subtracting the highest weight of the child from the highest weight of the parent. However, as the next corollary shows, they do record useful information that is not apparent by looking directly at the highest weights.
\[cor\_lift\] If $d_0,\ldots,d_s$ is a valid label for $W_m({\ell})$, then it is also a valid label for $W_{m'}({\ell})$ for any $m'>m$, and for any $m'\geq s$.
Both parts are based on the fact that condition [*(ii)*]{} is the only one that depends on $m$. For $m'>m$, if $\min(n,m){{\omega_{\ell}}}- d_n$ is a nonnegative linear combination of the $\{{{\omega_{\ell}}}\}$ then adding some nonnegative multiple of ${{\omega_{\ell}}}$ will not change that fact. And if $m'\geq s$ then the value of $m'$ is irrelevant; the weights we look at are just $n{{\omega_{\ell}}}- d_n$ for $0\leq n\leq s$.
If we can lift labels from $W_m({\ell})$ to $W_{m+1}({\ell})$, we can also lift the entire tree structure. Specifically, the lifting of labels extends to a map from the tree of $W_m({\ell})$ to the tree of $W_{m+1}({\ell})$ which preserves the increment $\delta$ of each edge and lifts each $V_\lambda$ to $V_{\lambda+{{\omega_{\ell}}}}$. The $m'\geq s$ part of Corollary \[cor\_lift\] tells us that this map is a bijection on rows $0,1,\ldots,m$ of the trees, where the labels have length $s\leq m$. On this part of the tree, multiplicities are also preserved. This follows from the formula for multiplicities in Theorem \[thm\_decomp\]: the only values of $P^{(k)}_n({{\mathbf d}})$ that change are for $n=m+1$, but ${{\mathbf d}}^{(k)}_n=0$ when $n$ is greater than the length of the label, so the product of binomial coefficients is unchanged.
The Growth of Trees {#sec_tree}
===================
\[sec\_growth\]
In this section, we will prove that as $m$ gets large, the dimension of the representation ${{W_m({\ell})}}$ grows like a polynomial in $m$, and will give a method to compute the degree of the polynomial growth. All statements assume the conjectural formulas for multiplicities of ${{\mathfrak g}}$-modules. Roots and weights are numbered as in [@Bour].
Since the tree decompositions for ${{W_m({\ell})}}$ for $m=1,2,3,\ldots$ stabilize, we can define ${{T({\ell})}}$ to be the tree whose top $n$ rows coincide with those of ${{W_m({\ell})}}$ for all $m\geq n$. The highest weight associated with an individual node appearing in ${{T({\ell})}}$ is only well-defined up to addition of any multiple of ${{\omega_{\ell}}}$, but the difference $\delta$ between any node and its parent is well-defined. (These differences are the labels on the edges of the tree in Figure \[fig\_tree\].) We can characterize each node by the string of successive differences $\delta_1
\succeq \delta_2 \succeq\cdots\succeq \delta_s$ which label the $s$ edges in the path from the root of the tree to that node. The multiplicity of a node of ${{T({\ell})}}$ is well-defined, as already noted.
The tree of ${{W_m({\ell})}}$ matches ${{T({\ell})}}$ exactly in the top $m$ rows. The number of rows in the tree of ${{W_m({\ell})}}$ is bounded by the largest $\alpha$-coordinate of $m{{\omega_{\ell}}}$, since if $\delta_1,\ldots,\delta_s$ is a label of ${{W_m({\ell})}}$ then $m{{\omega_{\ell}}}- \sum_{i=1}^s \delta_i$ must be in the positive Weyl chamber, and whatever $\alpha$-coordinate is nonzero in $\delta_s$ must be nonzero in all of the $\delta_i$. Therefore to prove that the dimension of ${{W_m({\ell})}}$ grows as a polynomial in $m$, it suffices to prove that the dimension of the part of ${{W_m({\ell})}}$ which corresponds to the top $m$ rows of ${{T({\ell})}}$ does so.
Now we need to examine the structure of the tree ${{T({\ell})}}$. The path $\delta_1,\ldots,\delta_s$ to reach a vertex is a sequence of weights whose $\alpha$-coordinates are nonincreasing. Write this instead as ${{\Delta_1^{m_1}\!\!\ldots\Delta_t^{m_t}}}$ where the $\Delta_i$ are strictly decreasing and $m_i$ is the number of times $\Delta_i$ occurs among $\delta_1,\ldots,\delta_s$; we will say this path has [*path-type*]{} ${{\Delta_1\ldots\Delta_t}}$. The number of path-types that can possibly appear in the tree ${{T({\ell})}}$ is finite, since each $\Delta_i$ is between ${{\omega_{\ell}}}$ and 0 and has integer $\alpha$-coordinates.
We need to understand which path-types ${{\Delta_1\ldots\Delta_t}}$ and which choices of exponents $m_i$ correspond to paths which actually appear in ${{T({\ell})}}$. Given a path $\delta_1,\ldots,\delta_s$, assume that $m>s$ and recall $\mu_n = n{{\omega_{\ell}}}- d_n = n{{\omega_{\ell}}}- \sum_{i=1}^n \delta_i$. Condition [*(ii)*]{} from Theorem \[thm\_decomp\] requires that $\mu_n$ is in the positive Weyl chamber for $1\leq n\leq s$; that is, the $\omega$-coordinates of $\mu_n$ must always be nonnegative. (These coordinates are just the values of $P^{(k)}_n$ from Theorem \[thm\_decomp\].) Since $\mu_n = \mu_{n-1} + {{\omega_{\ell}}}-
\delta_n$, we need to keep track of which $\omega$-coordinates of ${{\omega_{\ell}}}-\delta_n$ are positive and which are negative.
For a path-type ${{\Delta_1\ldots\Delta_t}}$, we say that $\Delta_i$ [*provides*]{} $\omega_k$ if the $\omega_k$-coordinate of ${{\omega_{\ell}}}- \Delta_i$ is positive, and that it [*requires*]{} $\omega_k$ if the coordinate is negative. Geometrically, $\Delta_i$ providing $\omega_k$ means that each $\Delta_i$ in the path moves the sequence of $\mu$s away from the $\omega_k$-wall of the Weyl chamber, while requiring $\omega_k$ moves towards that wall. The terminology is justified by restating what condition [*(ii)*]{} implies about path-types in these terms:
\[lem\_pathtypes\] The tree ${{T({\ell})}}$ contains paths of type ${{\Delta_1\ldots\Delta_t}}$ if and only if, for every $\Delta_n$, $1\leq n\leq t$, every $\omega_i$ required by $\Delta_n$ is provided by some $\Delta_k$ with $k<n$.
The “only if” part of the equivalence is immediate from the preceding discussion: the sequence $\mu_0,\mu_1,\ldots$ starts at $\mu_0=0$, and if it moves towards any wall of the Weyl chamber before first moving away from it, it will pass through the wall and some $\mu_i$ will be outside the chamber. Conversely, if ${{\Delta_1\ldots\Delta_t}}$ is any path-type which satisfies the condition of the lemma, then ${{\Delta_1^{m_1}\!\!\ldots\Delta_t^{m_t}}}$ will definitely appear in the tree when $m_1 \gg m_2 \gg\cdots\gg m_t$. This ensures that the coordinates of the $\mu_i$ are always nonnegative, since the sequence of $\mu$s moves sufficiently far away from any wall of the Weyl chamber before the first time it moves back towards it. We could compute the exact conditions on the $m_i$ for a specific path; in general, they all require that $m_n$ be bounded by some linear combination of $m_1,\ldots,m_{n-1}$, and the first $m_i$ appearing with nonzero coefficient in that linear combination has positive coefficient.
Now we can show that the number of nodes of path-type ${{\Delta_1\ldots\Delta_t}}$ appearing on the $m$th level of the tree grows as $m^{t-1}$. Consider the path ${{\Delta_1^{m_1}\!\!\ldots\Delta_t^{m_t}}}$ as a point $(m_1,\ldots,m_t)$ in ${{\mathbb R}}^t$. The path ends on row $m$ if $m=m_1+\cdots+m_t$, so solutions lie on a plane of dimension $t-1$; the number of solutions to that equality in nonnegative integers is ${m+t-1}\choose{t-1}$, which certainly grows as $m^{t-1}$, as expected. The further linear inequalities on the $m_i$ which ensure that $\mu_1,\ldots,\mu_m$ remain in the Weyl chamber correspond to hyperplanes through the origin which our solutions must lie on one side of, but the resulting region still has full dimension $t-1$ since the generic point with $m_1 \gg m_2 \gg\cdots\gg m_t$ satisfies all of the inequalities, as shown above.
The highest weight of the ${{\mathfrak g}}$-module at the node associated with the generic solution of the form $m_1 \gg m_2 \gg\cdots\gg m_t$ grows linearly in $m$. Its dimension, therefore, grows as a polynomial in $m$, and the degree of the polynomial is just the number of positive roots of the Lie algebra which are not orthogonal to the highest weight. The only positive roots perpendicular to this generic highest weight are those perpendicular to every highest weight which comes from a path of type ${{\Delta_1\ldots\Delta_t}}$, and the number of such roots is the degree of polynomial growth of the dimensions of the representations of the ${{\mathfrak g}}$-module. This can also be expressed as $\frac12\dim({{\mathcal O}}_{\lambda})$, where $\lambda$ is a weight orthogonal to any given positive root if and only if all highest weights of type ${{\Delta_1\ldots\Delta_t}}$ are.
We can figure out how the multiplicities of nodes with a specific path-type grow as well. Theorem \[thm\_decomp\] gives a formula for multiplicities as a product of binomial coefficients over $1\leq k\leq
r$ and $n\geq 1$. The only terms in the product which are not 1 correspond to nonzero values of $\delta_n - \delta_{n+1}$. In the path ${{\Delta_1^{m_1}\!\!\ldots\Delta_t^{m_t}}}$, these occur only when $n=m_1+\cdots+m_i$ for some $1\leq
i\leq t$, so that $\delta_n - \delta_{n+1}$ is $\Delta_i -
\Delta_{i+1}$ (where $\Delta_{t+1}$ is just 0). Following our previous notation, let $\delta_n - \delta_{n+1} = {{\mathbf d}}_n = \sum
{{\mathbf d}}^{(k)}_n \alpha_k$. If we take any $k$ for which ${{\mathbf d}}^{(k)}_n$ is nonzero, there are two possibilities for the contribution to the multiplicity from its binomial coefficient. If $\omega_k$ has been provided by at least one of $\Delta_1,\ldots,\Delta_i$, then the value $P^{(k)}_n$ is a linear combination of $m_1,\ldots,m_i$, which grows linearly as $m$ gets large. In this case, the binomial coefficient grows as a polynomial in $m$ of degree ${{\mathbf d}}^{(k)}_n$. On the other hand, if $\omega_k$ has not been provided, then the binomial coefficient is just 1.
For any $k$, $1\leq k\leq r$, define $f(k)$ to be the smallest $i$ in our path-type such that $\Delta_i$ provides $\omega_k$; we say that $\Delta_i$ provides $\omega_k$ for the first time. Then the total contribution to the multiplicity from the coordinate $k$ will be the product of the contributions when $n=m_1+\cdots+m_j$ for $j=f(k),
f(k)+1, \ldots, t$. As $m$ gets large, the product of these contributions grows as a polynomial of degree $\sum_{j=f(k)}^t
{{\mathbf d}}^{(m_1+\cdots+m_j)}_n$; that is, the sum of the decreases in the $\alpha_k$-coordinate of the $\Delta$s. But since $\Delta_{t+1}$ is just 0, that sum is exactly the $\alpha_k$-coordinate of $\Delta_{f(k)}$.
So given a path-type ${{\Delta_1\ldots\Delta_t}}$ which Lemma \[lem\_pathtypes\] says appears in ${{T({\ell})}}$, the total of the multiplicities of the nodes of that path-type which appear in the top $m$ rows of ${{T({\ell})}}$ grows as a polynomial of degree $$\label{defg}
g({{\Delta_1\ldots\Delta_t}}) = t + \sum_{k=1}^{r} \alpha_k\mbox{-coordinate of }\Delta_{f(k)}$$ where we take $\Delta_{f(k)}$ to be 0 if $\omega_k$ is not provided by any $\Delta$ in the path-type. This value is just the sum of the degrees of the polynomial growths described above.
Finally, since there are only finitely many path-types, the growth of the entire tree ${{T({\ell})}}$ is the same as the growth of the part corresponding to any path-type ${{\Delta_1\ldots\Delta_t}}$ which maximizes $g({{\Delta_1\ldots\Delta_t}})$. So we have proven the following, up to some calculation:
\[thm\_growth\] Let ${{\mathfrak g}}$ be simply-laced with decompositions of ${{W_m({\ell})}}$ given by Theorem \[thm\_decomp\]. Then the dimension of the representation ${{W_m({\ell})}}$ as $m$ gets large is asymptotic to a polynomial in $m$ of degree $\frac12\dim({{\mathcal O}}_{\lambda})+g({{\Delta_1\ldots\Delta_t}})$, where the path-type ${{\Delta_1\ldots\Delta_t}}$ is one which maximizes the value of $g$, and ${{\mathcal O}}_{\lambda}$ is the adjoint orbit of a weight $\lambda$ which is orthogonal to exactly those positive roots orthogonal to all highest weights of nodes with path-type ${{\Delta_1\ldots\Delta_t}}$.
1. If ${{\mathfrak g}}$ is of type $A_n$ then the maximum value of $g({{\Delta_1\ldots\Delta_t}})$ is $0$, for all $1\leq{\ell}\leq n$.
2. If ${{\mathfrak g}}$ is of type $D_n$ then the maximum value of $g({{\Delta_1\ldots\Delta_t}})$ is $\lfloor{\ell}/2\rfloor$, for $1\leq{\ell}\leq n-2$, and $0$ for ${\ell}=n-1,n$.
3. If ${{\mathfrak g}}$ is of type $E_6$, $E_7$, or $E_8$, the maximum value of $g({{\Delta_1\ldots\Delta_t}})$ is
(4,3) (0,1)(1,0)[5]{}(2,2) (0,1)[(1,0)[4]{}]{}(2,1)[(0,1)[1]{}]{} (2,2)[[[(0,1)[${1}$]{}]{}]{}]{}(0,0)[[[(0,1)[${0}$]{}]{}]{}]{}(1,0)[[[(0,1)[${1}$]{}]{}]{}]{} (2,0)[[[(0,1)[${6}$]{}]{}]{}]{}(3,0)[[[(0,1)[${1}$]{}]{}]{}]{}(4,0)[[[(0,1)[${0}$]{}]{}]{}]{}
(5,3) (2,2)(0,1)(1,0)[6]{} (0,1)[(1,0)[5]{}]{}(2,1)[(0,1)[1]{}]{} (2,2)[[[(0,1)[${1}$]{}]{}]{}]{}(0,0)[[[(0,1)[${1}$]{}]{}]{}]{}(1,0)[[[(0,1)[${6}$]{}]{}]{}]{}(2,0)[[[(0,1)[${33}$]{}]{}]{}]{} (3,0)[[[(0,1)[${12}$]{}]{}]{}]{}(4,0)[[[(0,1)[${2}$]{}]{}]{}]{}(5,0)[[[(0,1)[${0}$]{}]{}]{}]{}
(6,3) (2,2)(0,1)(1,0)[7]{} (0,1)[(1,0)[6]{}]{}(2,1)[(0,1)[1]{}]{} (2,2)[[[(0,1)[${16}$]{}]{}]{}]{}(0,0)[[[(0,1)[${2}$]{}]{}]{}]{}(1,0)[[[(0,1)[${62}$]{}]{}]{}]{}(2,0)[[[(0,1)[${150}$]{}]{}]{}]{} (3,0)[[[(0,1)[${100}$]{}]{}]{}]{}(4,0)[[[(0,1)[${48}$]{}]{}]{}]{}(5,0)[[[(0,1)[${6}$]{}]{}]{}]{}(6,0)[[[(0,1)[${1}$]{}]{}]{}]{}
We will complete the proof by exhibiting the path-types which give the indicated values of $g$ and proving they are maximal.
If ${{\Delta_1\ldots\Delta_t}}$ maximizes the value of $g$, then it cannot be obtained from any other path-type by insterting an extra $\Delta$, since any insertion would increase the length $t$ and would not decrease the sum in the definition of $g$. Therefore each $\Delta_k$ in our desired path-type must be in the positive root lattice, allowable according to Lemma \[lem\_pathtypes\], and must be maximal (under $\preceq$) in meeting those requirements; we will call a path-type maximal if this is the case.
In particular, if ${{\omega_{\ell}}}$ is in the root lattice then $\Delta_1$ will be ${{\omega_{\ell}}}$, and a $g$-value of 0 corresponds exactly to an ${{\omega_{\ell}}}$ which is not in the root lattice and is a minimal weight. Thus the 0s above can be verified by inspection; these are exactly the cases in which ${{W_m({\ell})}}$ remains irreducible as a ${{\mathfrak g}}$-module. Similarly, if ${{\omega_{\ell}}}$ is not in the root lattice but there is only one point in the lattice and in the Weyl chamber under ${{\omega_{\ell}}}$, the path-type will consist just of that point. We can now limit ourselves to path-types of length greater than one.
If ${{\mathfrak g}}$ is of type $D_n$ then for each ${{\omega_{\ell}}}$, $2\leq{\ell}\leq n-2$, there is a unique maximal path-type: $$\begin{array}{ll}
{{\omega_{\ell}}}\succ {{\omega_{\ell}}}-\omega_2 \succ {{\omega_{\ell}}}-\omega_4 \succ \cdots \succ {{\omega_{\ell}}}-\omega_{{\ell}-2}
& \mbox{when ${\ell}$ is even} \\
{{\omega_{\ell}}}-\omega_1 \succ {{\omega_{\ell}}}-\omega_3 \succ \cdots \succ {{\omega_{\ell}}}-\omega_{{\ell}-2}
& \mbox{when ${\ell}$ is odd}
\end{array}$$ In both cases, the only contribution to $g$ comes from the length of the path, which is $\lfloor{\ell}/2\rfloor$. This also means that the nodes of the tree ${{T({\ell})}}$ will all have multiplicity 1 in this case.
When ${{\mathfrak g}}$ is of type $E_6$, $E_7$ or $E_8$, the following weights have a unique maximal path-type (of length $>1$), whose $g$-value is given in Theorem \[thm\_growth\]: $$\begin{array}{lll}
E_6
& {\ell}=4 & \omega_4 \succ \omega_4-\omega_2 \succ \omega_4-\omega_1-\omega_6
\succ \omega_2+\omega_4-\omega_3-\omega_5 \succ 2\omega_2-\omega_4 \\
E_7
& {\ell}=3 & \omega_3 \succ \omega_3-\omega_1 \succ \omega_3-\omega_6 \succ
\omega_1+\omega_6-\omega_4 \succ 2\omega_1-\omega_3 \\
& {\ell}=6 & \omega_6 \succ \omega_6-\omega_1 \\
E_8
& {\ell}=1 & \omega_1 \succ \omega_1-\omega_8 \\
& {\ell}=7 & \omega_7 \succ \omega_7-\omega_8 \succ \omega_7-\omega_1 \succ
\omega_7+\omega_8-\omega_6 \succ 2\omega_8-\omega_7\\
& {\ell}=8 & \omega_8
\end{array}$$
We will consider the remaining weights in $E_8$ next. Consider the incomplete path-type $${{\omega_{\ell}}}\succ {{\omega_{\ell}}}-\omega_8 \succ {{\omega_{\ell}}}-\omega_1 \succ {{\omega_{\ell}}}-\omega_6+\omega_8
\succ {{\omega_{\ell}}}+\omega_1-\omega_4+\omega_8 \succ \cdots$$ where ${{\omega_{\ell}}}$ is any fundamental weight which is in the root lattice and high enough that all of the weights in question lie in the Weyl chamber. The path so far provides $\omega_8$, $\omega_1$, $\omega_6$ and $\omega_4$; notice that for any $\omega_i$ which has not been provided, all of its neighbors in the Dynkin diagram have. Therefore we can extend this path four more steps by subtracting one of $\alpha_2$, $\alpha_3$, $\alpha_5$ and $\alpha_7$ at each step, to produce a path in which every $\omega_i$ has been provided. This can be extended to a full path-type by subtracting any $\alpha_i$ at each stage until we reach the walls of the Weyl chamber.
The resulting path-type is maximal, and is the unique maximal one up to a sequence of transformations of the form $$\cdots \succ \Delta \succ \Delta-\lambda \succ \Delta-\lambda-\mu \succ \cdots
\mapsto
\cdots \succ \Delta \succ \Delta-\mu \succ \Delta-\lambda-\mu \succ \cdots$$ which do not affect the rate of growth $g$. All relevant weights are in the Weyl chamber if and only if ${{\omega_{\ell}}}\succ\xi=(4,8,10,14,12,8,6,2)$; this turns out to be everything except $\omega_1$, $\omega_7$ and $\omega_8$, whose path-types are given above. If the path-type could start at $\xi$, it would have growth $g=8$, though this is not possible since the last weight in the path-type would be $0$ in this case. But each increase of the starting point of the path by any $\alpha_i$ increases $g$ by 2 (1 from the length of the path and 1 from the multiplicity). So the growth for any ${{\omega_{\ell}}}\succ\xi$ is a linear function of its height with coefficient 2; $g=2{{\mathop{\mathrm{ht}}\nolimits}}({{\omega_{\ell}}})-120$.
The only remaining cases are $\omega_4$ and $\omega_5$ when ${{\mathfrak g}}$ is of type $E_7$. Both work like the general case for $E_8$, beginning instead with the incomplete path-types $$\begin{array}{ll}
{\ell}=4 & \omega_4 \succ \omega_4-\omega_1 \succ \omega_4-\omega_6
\succ \omega_1 \succ \cdots \\
{\ell}=5 & \omega_5-\omega_7 \succ \omega_5-\omega_2 \succ
\omega_5+\omega_7-\omega_1-\omega_6 \succ
\omega_2+\omega_7-\omega_3 \succ \cdots
\end{array}$$ This concludes the proof of Theorem \[thm\_growth\].
The same argument used for $E_8$ shows that for any choice of ${{\mathfrak g}}$, all “sufficiently large” weights ${{\omega_{\ell}}}$ in a particular translate of the root lattice will have growth given by $2{{\mathop{\mathrm{ht}}\nolimits}}({{\omega_{\ell}}})-c$ for some fixed $c$. A weight is sufficiently large if every $\omega_i$ is provided in its maximal path. Thus we can easily check that $\omega_4$ and $\omega_5$ qualify for $E_7$, and in both cases $c=63$. Similarly, $\omega_4$ for $E_6$ qualifies, and $c=36$. While there are no sufficiently large fundamental weights for $A_n$ or $D_n$, we can compute what the maximal path-type would be if one did exist, and in all cases, $c$ is the number of positive roots. A uniform explanation of this fact would be nice, even though the exhaustive computation does provide a complete proof.
Computations {#sec_table}
============
This section gives the decompositions of ${{W_m({\ell})}}$ into ${{\mathfrak g}}$-modules predicted by the conjectural formulas in [@KR]. We also give the tree structure defined in Section \[sec\_algorithm\].
The representations ${{W_m({\ell})}}$ when $m=1$ are called fundamental representations. In the setting of $U_q({{\mathfrak g}})$-module decompositions of $U_q({{\hat{\mathfrak g}}})$ modules, the decompositions of the fundamental representations for all ${{\mathfrak g}}$ and most choices of ${{\omega_{\ell}}}$ appear in [@ChP], calculated using techniques unrelated to the conjecture used in [@KR] to give formulas (\[defZ\]) and (\[defPgen\]). Those computations agree with the ones given below. In particular, the choices of ${{\omega_{\ell}}}$ not calculated in [@ChP] are exactly those in which the maximal path-type (Theorem \[thm\_growth\]) is not unique.
$A_n$
-----
As already noted, when ${{\mathfrak g}}$ is of type $A_n$, the $Y({{\mathfrak g}})$-modules ${{W_m({\ell})}}$ remain irreducible when viewed as ${{\mathfrak g}}$-modules.
$D_n$
-----
Let ${{\mathfrak g}}$ be of type $D_n$. As already noted, the fundamental weights $\omega_{n-1}$ and $\omega_{n}$ are minimal with respect to $\preceq$, so $W_m(n-1)$ and $W_m(n)$ remain irreducible as ${{\mathfrak g}}$-modules. Now suppose ${\ell}\leq n-2$. Then the structure of the weights in the Weyl chamber under $\omega_{\ell}$ does not depend on $n$, and so the decomposition of ${{W_m({\ell})}}$ in $D_n$ is the same for any $n \geq {\ell}+2$.
As mentioned in the proof of Theorem \[thm\_growth\], there is a unique maximal path-type for each ${{\omega_{\ell}}}$, and there are no multiplicities greater than 1. The decomposition is therefore very simple: if ${\ell}\leq n-2$ is even, then $${{W_m({\ell})}}{\simeq}{\bigoplus}_{k_2+k_4+\ldots+k_{{\ell}-2}+k_{\ell}= k \leq m}
V_{ k_2 \omega_2 + k_4 \omega_4 + \ldots + k_{{\ell}-2} \omega_{{\ell}-2}
+ (m-k)\omega_{\ell}}$$ and if ${\ell}$ is odd, then $${{W_m({\ell})}}{\simeq}{\bigoplus}_{k_1+k_3+\ldots+k_{{\ell}-2} = k \leq m}
V_{ k_1 \omega_1 + k_3 \omega_3 + \ldots + k_{{\ell}-2} \omega_{{\ell}-2}
+ (m-k)\omega_{\ell}}$$ where the minor difference is because $\omega_{\ell}$ for ${\ell}$ odd is not in the root lattice. The sum $k$ is the level of the tree on which that module appears, and the parent of a module is obtained by subtracting 1 from the first of $k_{{\ell}-2}, k_{{\ell}-4},\ldots$ which is nonzero (or from $k_{\ell}$ if nothing else is nonzero and ${\ell}$ is even).
$E_n$
-----
When ${{\mathfrak g}}$ is of type $E_n$ the tree structure is much more irregular: these are the only cases in which a ${{\mathfrak g}}$-module can appear in more that one place in the tree and in which a node on the tree can have multiplicitiy greater than one.
We indicate the tree structure as follows: we list every node in the tree, starting with the root and in depth-first order, and a node on level $k$ of the tree is written as ${\mathbin{\mathop{\oplus}\limits^{k}}} V_\lambda$. This is enough information to recover the entire tree, since the parent of that node is the most recent summand of the form ${\mathbin{\mathop{\oplus}\limits^{k-1}}} V_\mu$. Comparing Figure \[fig\_tree\] to its representation here should make the notation clear.
Due to space considerations, for $E_6$ we list calculations for $m\leq
3$, for $E_7$ we list $m\leq 2$, and for $E_8$ only $m=1$. The tree decomposition for $W_3(4)$ for $E_7$, for example, would have 836 components.
1. 2. remains irreducible for all $m$.
3. ${\simeq}V_{\omega_2}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{0}$
4. ${\simeq}V_{2 \omega_2}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_2}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{0}$
5. ${\simeq}V_{3 \omega_2}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{2 \omega_2}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_2}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{0}$
6. ${\simeq}V_{\omega_3}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_6}$
7. ${\simeq}V_{2 \omega_3}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_3+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{2 \omega_6}$
8. ${\simeq}V_{3 \omega_3}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{2 \omega_3+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_3+2 \omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{3 \omega_6}$
9. ${\simeq}V_{\omega_4}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_1+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{1}}} 2 V_{\omega_2}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{0}$
10. ${\simeq}V_{2 \omega_4}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_1+\omega_4+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{2 \omega_1+2 \omega_6}
{\mathbin{\mathop{\oplus}\limits^{1}}} 2 V_{\omega_2+\omega_4}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_3+\omega_5}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_1+\omega_2+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{2 \omega_2}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_4}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_4}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_1+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_2}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{0}$
11. ${\simeq}V_{3 \omega_4}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_1+2 \omega_4+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{2 \omega_1+\omega_4+2 \omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{3 \omega_1+3 \omega_6}
{\mathbin{\mathop{\oplus}\limits^{1}}} 2 V_{\omega_2+2 \omega_4}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_3+\omega_4+\omega_5}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_1+\omega_2+\omega_4+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_1+\omega_3+\omega_5+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} 2 V_{2 \omega_1+\omega_2+2 \omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{2 \omega_2+\omega_4}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{2 \omega_4}
{\mathbin{\mathop{\oplus}\limits^{3}}} 2 V_{\omega_2+\omega_3+\omega_5}
{\mathbin{\mathop{\oplus}\limits^{3}}} 3 V_{\omega_1+2 \omega_2+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{4}}} V_{\omega_1+\omega_4+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} 4 V_{3 \omega_2}
{\mathbin{\mathop{\oplus}\limits^{4}}} 2 V_{\omega_2+\omega_4}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{2 \omega_4}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_1+\omega_4+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{2 \omega_1+2 \omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_2+\omega_4}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_3+\omega_5}
{\mathbin{\mathop{\oplus}\limits^{3}}} 2 V_{\omega_1+\omega_2+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} 3 V_{2 \omega_2}
{\mathbin{\mathop{\oplus}\limits^{4}}} V_{\omega_4}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_4}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_1+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} 2 V_{\omega_2}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{0}$
12. ${\simeq}V_{\omega_5}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_1}$
13. ${\simeq}V_{2 \omega_5}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_1+\omega_5}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{2 \omega_1}$
14. ${\simeq}V_{3 \omega_5}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_1+2 \omega_5}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{2 \omega_1+\omega_5}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{3 \omega_1}$
15. remains irreducible for all $m$.
16. 17. ${\simeq}V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{0}$
18. ${\simeq}V_{2 \omega_1}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{0}$
19. ${\simeq}V_{\omega_2}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_7}$
20. ${\simeq}V_{2 \omega_2}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_2+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{2 \omega_7}$
21. ${\simeq}V_{\omega_3}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{1}}} 2 V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{0}$
22. ${\simeq}V_{2 \omega_3}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_3+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{2 \omega_6}
{\mathbin{\mathop{\oplus}\limits^{1}}} 2 V_{\omega_1+\omega_3}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_4}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_1+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{2 \omega_1}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_3}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_3}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{0}$
23. ${\simeq}V_{\omega_4}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_1+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{1}}} 2 V_{\omega_2+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{2 \omega_1}
{\mathbin{\mathop{\oplus}\limits^{1}}} 3 V_{\omega_3}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{2 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{1}}} 3 V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{1}}} 3 V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{0}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{0}$
24. ${\simeq}V_{2 \omega_4}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_1+\omega_4+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{2 \omega_1+2 \omega_6}
{\mathbin{\mathop{\oplus}\limits^{1}}} 2 V_{\omega_2+\omega_4+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_3+\omega_5+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_1+\omega_2+\omega_6+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{2 \omega_2+2 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_4+2 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{2 \omega_1+\omega_4}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{3 \omega_1+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{4 \omega_1}
{\mathbin{\mathop{\oplus}\limits^{1}}} 3 V_{\omega_3+\omega_4}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_1+\omega_2+\omega_5}
{\mathbin{\mathop{\oplus}\limits^{2}}} 4 V_{\omega_1+\omega_3+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{2 \omega_5}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{2 \omega_2+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_4+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_1+2 \omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{2 \omega_1+\omega_2+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 6 V_{\omega_2+\omega_3+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{3}}} 2 V_{\omega_1+\omega_5+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{3}}} 2 V_{\omega_2+\omega_6+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{2 \omega_1+\omega_3}
{\mathbin{\mathop{\oplus}\limits^{2}}} 6 V_{2 \omega_3}
{\mathbin{\mathop{\oplus}\limits^{3}}} 3 V_{\omega_1+\omega_4}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_2+\omega_5}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{2 \omega_1+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} 3 V_{\omega_3+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{4}}} V_{2 \omega_6}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_4+2 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_1+\omega_6+2 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_2+3 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{4 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{1}}} 3 V_{\omega_4+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_2+\omega_3+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_1+2 \omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 4 V_{\omega_1+\omega_5+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{2 \omega_3}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_1+2 \omega_2}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_1+\omega_4}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{2 \omega_1+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 8 V_{\omega_2+\omega_6+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{3}}} 2 V_{\omega_3+2 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 6 V_{\omega_2+\omega_5}
{\mathbin{\mathop{\oplus}\limits^{3}}} 2 V_{\omega_3+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} 2 V_{\omega_1+\omega_2+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{2 \omega_1+2 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_3+2 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_6+2 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{2 \omega_1+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{3 \omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} 9 V_{\omega_3+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} 4 V_{\omega_1+\omega_2+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{3}}} 3 V_{2 \omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} 4 V_{\omega_5+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{3}}} 4 V_{\omega_1+\omega_3}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{2 \omega_2}
{\mathbin{\mathop{\oplus}\limits^{3}}} 3 V_{\omega_4}
{\mathbin{\mathop{\oplus}\limits^{4}}} V_{\omega_1+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_6+2 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 6 V_{2 \omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} 3 V_{\omega_5+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_4}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_1+2 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{3}}} 3 V_{\omega_1+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{4}}} V_{2 \omega_1}
{\mathbin{\mathop{\oplus}\limits^{1}}} 3 V_{\omega_1+\omega_4}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_2+\omega_5}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{2 \omega_1+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 4 V_{\omega_3+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 8 V_{\omega_1+\omega_2+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{3}}} 2 V_{\omega_5+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{2 \omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_5+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_1+2 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{3 \omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} 12 V_{\omega_1+\omega_3}
{\mathbin{\mathop{\oplus}\limits^{3}}} 4 V_{\omega_4}
{\mathbin{\mathop{\oplus}\limits^{3}}} 4 V_{\omega_1+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{2 \omega_2}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_4}
{\mathbin{\mathop{\oplus}\limits^{2}}} 5 V_{\omega_4}
{\mathbin{\mathop{\oplus}\limits^{3}}} 3 V_{\omega_1+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} 4 V_{\omega_2+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{2 \omega_1}
{\mathbin{\mathop{\oplus}\limits^{3}}} 3 V_{\omega_3}
{\mathbin{\mathop{\oplus}\limits^{4}}} V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_1+2 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 9 V_{\omega_1+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} 4 V_{\omega_2+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{3}}} 3 V_{2 \omega_1}
{\mathbin{\mathop{\oplus}\limits^{3}}} 4 V_{\omega_3}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{2 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{3}}} 3 V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{4}}} V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} 6 V_{2 \omega_1}
{\mathbin{\mathop{\oplus}\limits^{3}}} 3 V_{\omega_3}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} 3 V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{4}}} V_{0}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_4}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_1+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_2+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{2 \omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_3}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{2 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{0}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{0}$
25. ${\simeq}V_{\omega_5}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_1+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{1}}} 2 V_{\omega_2}
{\mathbin{\mathop{\oplus}\limits^{1}}} 2 V_{\omega_7}$
26. ${\simeq}V_{2 \omega_5}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_1+\omega_5+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{2 \omega_1+2 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{1}}} 2 V_{\omega_2+\omega_5}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_3+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_1+\omega_2+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{2 \omega_2}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_4}
{\mathbin{\mathop{\oplus}\limits^{1}}} 2 V_{\omega_5+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_4}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_1+2 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_1+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 4 V_{\omega_2+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_3}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{2 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_6}$
27. ${\simeq}V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{0}$
28. ${\simeq}V_{2 \omega_6}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_1+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{2 \omega_1}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{0}$
29. remains irreducible for all $m$.
30. 31. ${\simeq}V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{0}$
32. ${\simeq}V_{\omega_2}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_7}
{\mathbin{\mathop{\oplus}\limits^{1}}} 2 V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{1}}} 2 V_{\omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{0}$
33. ${\simeq}V_{\omega_3}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{1}}} 2 V_{\omega_1+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} 3 V_{\omega_2}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_7}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{2 \omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} 3 V_{\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{1}}} 4 V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} 3 V_{\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{0}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{0}$
34. ${\simeq}V_{\omega_4}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_1+\omega_6}
{\mathbin{\mathop{\oplus}\limits^{1}}} 2 V_{\omega_2+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{2 \omega_1+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} 3 V_{\omega_3+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_6+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{2 \omega_7}
{\mathbin{\mathop{\oplus}\limits^{1}}} 6 V_{\omega_1+\omega_2}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_5}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_1+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{1}}} 3 V_{\omega_6+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_1+2 \omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} 5 V_{\omega_5}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_1+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 4 V_{\omega_2+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{2 \omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_3}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{1}}} 3 V_{\omega_1+2 \omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{3 \omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} 9 V_{\omega_1+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 4 V_{\omega_2+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{2 \omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} 4 V_{\omega_3}
{\mathbin{\mathop{\oplus}\limits^{2}}} 4 V_{\omega_7+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_1+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} 10 V_{\omega_2+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 4 V_{\omega_3}
{\mathbin{\mathop{\oplus}\limits^{2}}} 6 V_{\omega_7+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 6 V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 8 V_{\omega_1+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{3}}} 2 V_{\omega_2}
{\mathbin{\mathop{\oplus}\limits^{1}}} 6 V_{2 \omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_3}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_1+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{2 \omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} 7 V_{\omega_3}
{\mathbin{\mathop{\oplus}\limits^{2}}} 5 V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 8 V_{\omega_1+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 9 V_{\omega_2}
{\mathbin{\mathop{\oplus}\limits^{3}}} 3 V_{\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{2 \omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_7}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{3 \omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} 8 V_{\omega_7+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 4 V_{\omega_1+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{2 \omega_8}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_7}
{\mathbin{\mathop{\oplus}\limits^{1}}} 7 V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 5 V_{\omega_1+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 8 V_{\omega_2}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{2 \omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 9 V_{\omega_7}
{\mathbin{\mathop{\oplus}\limits^{3}}} 3 V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{\omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} 15 V_{\omega_1+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 8 V_{\omega_2}
{\mathbin{\mathop{\oplus}\limits^{2}}} 9 V_{2 \omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 12 V_{\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 9 V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{3}}} 3 V_{\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_8}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{0}
{\mathbin{\mathop{\oplus}\limits^{1}}} 8 V_{\omega_2}
{\mathbin{\mathop{\oplus}\limits^{2}}} 6 V_{\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 10 V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} 6 V_{\omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} 6 V_{2 \omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_8}
{\mathbin{\mathop{\oplus}\limits^{3}}} V_{0}
{\mathbin{\mathop{\oplus}\limits^{1}}} 7 V_{\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 5 V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} 8 V_{\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{0}
{\mathbin{\mathop{\oplus}\limits^{1}}} 7 V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} 5 V_{\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{0}
{\mathbin{\mathop{\oplus}\limits^{1}}} 5 V_{\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{0}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{0}$
35. ${\simeq}V_{\omega_5}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_1+\omega_7}
{\mathbin{\mathop{\oplus}\limits^{1}}} 2 V_{\omega_2+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{2 \omega_1}
{\mathbin{\mathop{\oplus}\limits^{1}}} 3 V_{\omega_3}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{1}}} 2 V_{\omega_7+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} 4 V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_1+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_2}
{\mathbin{\mathop{\oplus}\limits^{1}}} 6 V_{\omega_1+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_2}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{2 \omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{\omega_7}
{\mathbin{\mathop{\oplus}\limits^{1}}} 5 V_{\omega_2}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 4 V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{1}}} 3 V_{2 \omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_7}
{\mathbin{\mathop{\oplus}\limits^{1}}} 5 V_{\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} 4 V_{\omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} 5 V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} 3 V_{\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{0}
{\mathbin{\mathop{\oplus}\limits^{1}}} 4 V_{\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} 2 V_{0}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{0}$
36. ${\simeq}V_{\omega_6}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_1+\omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} 2 V_{\omega_2}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{2 \omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} 3 V_{\omega_7}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{1}}} 3 V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{\omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} 3 V_{\omega_8}
{\mathbin{\mathop{\oplus}\limits^{2}}} V_{0}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{0}$
37. ${\simeq}V_{\omega_7}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{\omega_1}
{\mathbin{\mathop{\oplus}\limits^{1}}} 2 V_{\omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{0}$
38. ${\simeq}V_{\omega_8}
{\mathbin{\mathop{\oplus}\limits^{1}}} V_{0}$
[9]{}
Bourbaki, N. [*Groupes et algèbres de Lie, ch. 4, 5 et 6.*]{} Masson, Paris, 1981.
Chari, V. Minimal quantizations of representations of affine Lie algebras: the rank 2 case. [*Publ. Res. Inst. Math. Sci.*]{} [**31**]{} (1995), no. 5, 873–911.
Chari, V; Pressley, A. Quantum affine algebras and their representations. [*Representations of groups (Banff, AB, 1994)*]{}, 59–78, CMS Conf. Proc. [**16**]{}, Amer. Math. Soc., Providence, RI, 1995.
Drinfel’d, V. G. Hopf algebras and the quantum Yang-Baxter equation, [*Soviet Math. Dokl.*]{} [**32**]{} (1985), 254–258.
Drinfel’d, V. G. A new realization of Yangians and quantized affine algebras. [*Soviet Math. Dokl.*]{} [**36**]{} (1988), 212–216.
Kirillov, A. N.; Reshetikhin, N. Yu. Representations of Yangians and multiplicities of ocurrence of the irreducible components of the tensor product of representations of simple Lie algebras. [*J. Soviet Math.*]{} [**52**]{} (1990), 3156–3164.
Kulish, P. P.; Reshetikhin, N. Yu.; Sklyanin, E. K. Yang-Baxter equations and representation theory: I. [*Lett. Math. Phys.*]{} [**5**]{} (1981), no. 5, 393–403
Reshetikhin, N. Yu. Private communication.
[Department of Mathematics, University of California Berkeley, Berkeley, CA, 94720, USA;]{} [[email protected]]{}
[^1]: Supported by NSF grant DMS 94-01163.
| |
Q:
Can I aggregate a dataframe and retain string variables in R?
I have a data frame of the form:
Family Code Length Type
1 A 1 11 Alpha
2 A 3 8 Beta
3 A 3 9 Beta
4 B 4 7 Alpha
5 B 5 8 Alpha
6 C 6 2 Beta
7 C 6 5 Beta
8 C 6 4 Beta
I would like to reduce the data set to one containing unique values of Code by taking a mean of Length values, but to retain all string variables too, i.e.
Family Code Length Type
1 A 1 11 Alpha
2 A 3 8.5 Beta
3 B 4 7 Alpha
5 B 5 8 Alpha
6 C 6 3.67 Beta
I've tried aggregate() and ddply() but these seem to replace strings with NA and I'm struggling to find a way round this.
A:
Since Family and Type are constant within a Code group, you can "group" on those as well without changing anything when you use ddply. If your original data set was dat
ddply(dat, .(Family, Code, Type), summarize, Length=mean(Length))
gives
Family Code Type Length
1 A 1 Alpha 11.000000
2 A 3 Beta 8.500000
3 B 4 Alpha 7.000000
4 B 5 Alpha 8.000000
5 C 6 Beta 3.666667
If Family and Type are not constant within a Code group, then you would need to define how to summarize/aggregate those values. In this example, I just take the single unique value:
ddply(dat, .(Code), summarize, Family=unique(Family),
Length=mean(Length), Type=unique(Type))
Update
Similar options using dplyr are
library(dplyr)
dat %>%
group_by(Family, Code, Type) %>%
summarise(Length=mean(Length))
and
dat %>%
group_by(Code) %>%
summarise(Family=unique(Family), Length=mean(Length), Type=unique(Type))
| |
Exports were at 1,722 units last month against 1,233 units in the year-ago month, the automaker said. New Delhi: Honda Cars India Ltd (HCIL) today reported a total domestic sales of 10,427 units, representing a decline of 7.8% compared to 11,319 units sold in January 2021, the company announced.
The automaker also recorded a decline of 3% in its total sales at 12,149 units in January 2022. Whereas the automaker had sold a total of 12,552 units in January 2022, it added.
According to the company, its exports were at 1,722 units last month against 1,233 units in the year-ago month.
"Despite the supply chain and COVID-related challenges, we have started off 2022 on a promising note. The sales in the month of January 22 got partially impacted owing to the weekend-lockdowns in some cities but overall the situation looks positive and steady," Honda Cars India Director (Marketing and Sales) Yuichi Murata noted.
All of the company's production output has been getting dispatched to dealer partners in time, he added.
"The market situation will improve with the reduction in the COVID-caseload as we move forward," Murata stated. Also Read: | |
The L/A Fighting Spirit edged the New England Stars, 3-2 on Sunday to remain undefeated.
New England 2 @ Lewiston/Auburn 3 - Lewiston/Auburn and New England were evenly matched throughout their contest, but Lewiston/Auburn made the most of its opportunities and won, 3-2. Each team kept the other at bay throughout the game, and Walker Hamilton secured the win for Lewiston/Auburn with a goal in the third period. Lewiston/Auburn was led by Hamilton, who tallied one goal. Hamilton scored on the power play 9:41 into the third period to make the score 3-2 Lewiston/Auburn. Lewiston/Auburn earned a power play opportunity when Tim Paige was put in the box for hooking. Simon Corriveau picked up the assist. Lewiston/Auburn also had goals scored by Nick Hudson and Brady McNulty, who scored one goal each. Other players who recorded assists for Lewiston/Auburn were Dylan Vrees, Daniel Heffernan, and Austin Siering, who contributed one each. New England was helped by Brandon Hamner, who had one goal. Hamner scored 13:42 into the first period to make the score 1-0 New England. Mitchell Fehd assisted on the tally. New England also got points from Fehd, who also grabbed one goal and one assist to lead the team in points. More assists for New England came via John Krapian, who had one and Brian Glover, who had two. Zachary Barry rejected 16 shots on goal for Lewiston/Auburn. Robbie Campbell made 21 saves for New England on 24 shots.
Jersey Shore 9 @ Lockport 2 - Dylan Plsek was all over the ice for Jersey Shore, as he tallied one goal and two assists in Jersey Shore's 9-2 win over Lockport. Plsek beat Sal Stalteri with a shot 16:36 into the first period to make the score 2-0 Jersey Shore. Jared Karas assisted on the tally. Plsek dished an assist on Bogdan Khvatov's goal that made the score 5-2 Jersey Shore at 6:27 into the third period. He added another helper on Karas' goal that made the score 9-2 Jersey Shore at 14:53 into the third period. Jersey Shore also got points from Karas, who also tallied one goal and two assists, Travis Valvo, who also racked up one goal and one assist, Frederic Ampleman, who also had one goal and one assist, and Freddie Schaljo, who also registered one goal and one assist. Others who scored for Jersey Shore included Alexander Tan and Marcus McCall, who each put in one. Other players who recorded assists for Jersey Shore were Scott Hansen, who had four and Christian Cooley, JaColbie McGowan, and Tyler Goclan, who each chipped in one. Lockport was helped by Nicholas Wilcox, who grabbed one goal. Wilcox scored 16:58 into the second period to make the score 3-2 Jersey Shore. Frank II Vecchio provided the assist. Kurt Villani also scored for Lockport. In addition, Lockport received assists from Anthony Tomassi, Ryan Logar, and Dylan Jenkins, who contributed one each. Scott Albertoni rejected 27 shots on goal for Jersey Shore.
New York 5 @ Roc City 3 - New York, which led by four goals at one point, survived a comeback attempt by Roc City and clinched a 5-3 victory. New York led by four goals at one point and ended with the victory. The largest advantage in the game came when New York's Augie Onorato scored at 15:24 in the first period to put New York up 4-0. New York was paced by Ryan Poirier, who finished with one goal. Poirier scored 4:51 into the first period to make the score 2-0 New York. New York also had goals scored by Daniel Backstrom and Nicholas Lermer, who scored one goal each. In addition, New York received assists from Jimmy Warrick and Ricky Regala, who each chipped in one and Corey Rees and Alex Rojas, who contributed two each. Roc City forced New York goalie Anthony DiGiorgio to work between the pipes, taking 31 shots. Roc City was helped by Cameron Clark, who racked up two goals. Clark scored the first of his two goals at 8:31 into the second period to make the score 4-2 New York. Mike Elliott picked up the assist. Clark's next tally made the score 5-3 New York with 7:09 left in the third period. Anthony DePetres provided the assist. Roc City also got points from Sam Cammilleri, who also tallied one goal and one assist. DiGiorgio rejected 28 shots on goal for New York.
East Coast 3 @ Maine 0 - Andrew Irving had two goals to lead East Coast to a 3-0 victory over Maine.
Irving found the back of the net 2:30 into the first period to make the score 1-0 East Coast and again while 16:06 into the first to make the score 2-0 East Coast. East Coast was boosted by Aidan Critchlow, who turned in a shutout with 29 saves. East Coast also had goals scored by Matt Bauchman and Samu Landen, who scored one goal each. Other players who recorded assists for East Coast were Rick Mulligan, who had one and Preston Palamara, who had two. Brandon Daigle made 24 saves for Maine on 27 shots.
Wilkes-Barre 3 @ Skylands 10 - Skylands had a four-goal lead after two periods and cruised the rest of the way en route to a 10-3 win over Wilkes-Barre. Ernest Komarnitskii had one goal and three assists to lead Skylands. Komarnitskii scored 4:00 into the first period to make the score 2-0 Skylands. Mark Crevina provided the assist. Skylands also got points from Crevina, who also registered one goal and two assists, Hunter Ledwith, who also had one goal and one assist, and Alec Sanchez, who also tallied three goals and two assists to lead the team in points. Skylands also had goals scored by Cory Decosta, who had two and Cole Skelly and Antonio Martiniello, who each put in one. In addition, Skylands received assists from Mead Joshua, who had two and Steven Windt and Mike King, who each chipped in one. Wilkes-Barre was led by Ryan Flanagan, who racked up two goals and one assist. Flanagan scored the first of his two goals at 6:18 into the first period to make the score 2-1 Skylands. Zacharia Ouladelhadjahmed picked up the assist. Flanagan's next tally made the score 6-3 Skylands with 6:32 left in the second period. Wilkes-Barre also got a goal from Derrick Wruble as well. More assists for Wilkes-Barre came via Kenny Myers, who had one. Skylands' Mathias Yttereng stopped 26 shots out of the 29 that he faced. | http://na3hl.com/news/story.cfm?id=15529 |
Cover your work area with newspaper to protect the table surface.
Open the report cover on the newspaper and casually spritz with Grasshopper, leaving space between each spray.
Spritz the Mermaid and Lemon Drop inks in between the Grasshopper ink, allowing colors to overlap and blend.
Hold the two pieces of cardstock together, gently set in the spine of the report cover and lay both pieces of the cardstock down on one side of the report cover.
Close the cover gently and starting from the spine press the cover to the cardstock pushing outward to help blend colors. Run your hands along the entire surface of the cover to allow the cardstock to absorb the most amount of ink.
Remove the cardstock and set aside to dry, inked sides up.
Archival Dye Ink is fast drying, however drying time will vary by the amount of ink sprayed and type of cardstock used.
Spritz Lemon Drop on the report cover and pick the ink up with the Wish Big stamp. Press the image onto another piece of cardstock and cut out.
While the cardstock dries, rinse the cover under a faucet with warm water, wipe dry with a paper towel and set aside for your next project.
Once the cardstock is dry, place one piece face down on your work area and trace the pillow box template on the back.
Use your scissors to score the fold lines of the pillow box template. If you have one, a paper scoring or dry embossing tool will also work for this step.
Cut the pillow box out of the cardstock.
Cut three long strips from the pillow box scrap, ranging from ¼ inch to ½ inch wide.
Fold the pillow box along the score lines, starting with the overlap tab. Next, fold the box in half.
Run a single line of adhesive along the inked side of the overlap tab. Align the straight edge of the opposite side of the box with the fold of the overlap tab. Press firmly to enclose the box.
Fold one of the box ends closed using the scores as a guide. Fill the box with goodies and close the other end.
Unfold two bends of the paperclip, creating a cane shape.
One piece at a time, starting at an angle, wrap the paper strips around the long end of the paperclip. Slip the strips off the paperclip and pull the curls out a bit to create faux ribbon.
Complete the project by tying the faux ribbon and Wish Big tag to the box with the yellow Ric Rac.
TIPS
Printable templates for pillow boxes or other boxes can be found online, if you don’t have any in your craft stash!
This technique offers truly one-of-a-kind projects. More or less un-inked space on the cover will produce varying effects. Experiment by holding the Spritzers closer and farther away from the surface when spraying for different results! | http://www.clearsnap.com/project-view.cfm?id=132 |
7 Camping Activities For the Night
Camping with your family and friends is a great way of bonding with each other. And for the most part, there is a lot to do. But the problem is, even though the daytime camping activities are aplenty, many people often don’t know what to do at the nighttime.
However, for any seasoned camping enthusiast, it is the nighttime amidst nature that adds more flavor to the experience and makes for an enjoyable camping trip with your loved ones.
If you too are wondering what you could do during the night on your next camping trip, here are a list of activities that you might consider.
#1. Build A Campfire and Tell Stories
Movies often show friends seated around a campfire, roasting marshmallows for smores and telling each other stories. Well, life’s not a movie, but in this particular instance, it just might be! Gather some wood and build a campfire. When you all participate in making one, the teamwork would further enhance the enjoyment. Then sit around the campfire and tell each other stories.
People love to hear each other in cozy atmospheres as such. Stories from your childhood, or good memories that you want to share – well, pour your heart out! Give everyone a chance to explore bits of your life that they might not have known before.
#2. Put On Some Music and/or Sing!
Music is a classic nighttime camping activity. If you or your friends know how to play a guitar or ukulele, bring it along on your next camping trip. Playing the strings around the campfire and singing along to some familiar tunes – that brings people closer.
Even if you aren’t much of a singer, you can still participate by just humming along; or you may opt to just quietly enjoy the melodies instead. Fortunately, camping songs like Kumbaya are widely popular. Some other favourite campfire songs include Leaving on a Jet Plane, Margaritaville, and Sweet Caroline.
#3. Play Board or Card Games
Sometimes when camping, due to rain or other weather conditions, you might be forced to stay indoors. The beauty of board games and card games is that they can be your rescue in any situation involving a group.
Be it inside a tent or out there by the campfire, board games and card games are enjoyed by almost everyone.
Games like UNO, Monopoly, and YAHTZEE are popular board and/or card games so everyone will be able to participate and spend time together. Make sure you pack one such game as a contingency plan.
#4. Play Flashlight Hide-and-Seek
Flashlight Hide-and-Seek is a fun game to play in groups in camps! Well, simply put, it’s the regular game of hide-and-seek, but here all the lights are out. The seeker has a flashlight using which he/she has to find the others.
In some variations of the game, the first person to be found by the seeker becomes the “it” or seeker. And so, the game can go on for hours – in the dark!
If you have children with you, make sure that the camping site is safe to play this game, and set some boundaries so that the children don’t get lost.
In case you’re not a hide-and-seek fan, there are quite a few other nighttime games with flashlights. So, don’t forget to buy a good flashlight before going camping, lest you miss out on fun activities!
#5. Don’t Forget The Classic Games!
In case you all had a long day of activities and are running low on energy to play flashlight games, don’t worry, the classics are always there! So don’t forget about classic games like Charades and Pictionary where there is minimal movement but maximum fun.
The options also include two truths and a lie, truth or dare and many others. The best part is that these games don’t require much preparation or logistics, so you don’t have to plan ahead of time. Can you think of any other classic games that everyone can participate in?
#6. Stargaze
Stargazing is, to be honest, an underrated camping activity. Often we go camping because we want to get away from the hustle and bustle of the urban life, and to disconnect for a bit. While camping under the clear skies, therefore, it is important for you to embrace the silent ambiance and look up.
You’d be surprised how many stars you get to see that you otherwise may not be able to from your home’s balcony. You might take it up a notch and actually learn about stars and constellations by downloading some free apps on your phone. So just lie down, relax and watch the stars and marvel at the sheer size of our universe!
#7. Take A Walk With A Loved One
Take your significant other or a close friend for a walk around the campgrounds. Walking in the dark can be therapeutic and help you get rid of any negative emotions you have been carrying around with you.
Don’t forget to take the flashlight, as you wouldn’t want to hurt yourself in the dark. However, if you are up for a mini adventure, consider turning off the lights and letting your eyes adjust to the darkness – just the way nature had meant it to be. But in that case, ensure there is no wildlife that can hurt you in the darkness, like snakes.
Bonus Tip
You might want to practice your nighttime photography skills while camping. Take a picture of the moon over the trees, or the campfire burning bright – the possibilities are endless!
Final Thoughts
Camping and nighttime may not always go hand-in-hand; or at least, that’s what is popularly believed. But the list above sheds some light on a number of things to do during the night on your next camping escapade. So get ready for your next camping trip and make it a night to remember with a bunch of planned activities! | https://futureentech.com/7-camping-activities-for-nig/ |
Collision of Two Bodies:
Consider the impact between two bodies that move with different velocities along the same straight line. It is assumed that the point of the impact lies on the line joining the centers of gravity of the two bodies. The behavior of these colliding bodies during the complete period of impact will depend upon the properties of the materials of which they are made. The material of the two bodies may be perfectly elastic or perfectly inelastic.
In either case, the first effect of an impact is approximately the same. The parts of each body adjacent to the point of impact are deformed and the deformation will continue until the center of gravity of the two bodies is moving with the same velocity. Assuming that there are no external forces acting on the system, the total momentum must remain constant.
Collision of Elastic Bodies:
When two inelastic bodies A and B as shown in fig (1):
moving with different velocities, collide with each other as shown in fig(2). The two bodies will remain together after impact and will move together with a common velocity.
Let,
m1 = Mass of first body A.
m2 = Mass of second body B.
u1 and u2 = velocities of bodies A and B respectively before impact
v = common velocity of bodies A and B after impact.
Collision of Inelastic Bodies:
When two elastic bodies as shown in fig(1), collide with each other, they suffer a change of form. When the bodies first touch, the pressure between them is zero. For a short time thereafter, the bodies continue to approach each other and the pressure exerted by one body over the other body increases. Thus the two bodies are compressed and deformed at the surface of contact due to their mutual pressures.
If one of the bodies is fixed then the other will momentarily come to rest and then rebound. However, if both bodies are free to move, then each body will momentarily come to rest relative to the other. At this instant, the pressure between the two bodies is maximum and the deformation is also maximum. | https://scienceeureka.com/collision-of-bodies-elastic-and-inelastic/ |
This role is responsible for designing wells and systems to maximize production while minimizing HSE incidents and well lifecycle costs, in addition to supporting surface facilities operations and design changes. The role is responsible for the downhole and artificial lift engineering activity for assigned wells, surface operations and engineering, with strong emphasis on root cause failure analysis This role is intended to be 25% in the field to observe intervention rig work and interact with field personnel and key vendors to understand operational activities, risks and opportunities for improvement regarding well intervention needs.
Key Accountabilities
Support HSE initiatives, promote and confirm use of standard procedures, participate in safety reviews and develop solutions to reduce near miss and incident rates for strong HSE culture
Understand daily/weekly/monthly trends of safety/health/environmental incidents, production numbers, and OPEX costs and lead initiatives to improve performance including root cause failure analysis, cost modelling, and deferment reduction programs
Analysis of performance of production systems, including equipment reliability, system deferment, scaling tendencies across field, corrosion patterns
Develop and implement artificial lift installations (e.g. rod pumps, ESPs, gas lift, plunger, and artificial lift optimization tools) to optimize well performance
Analyze failure, reliability, and lifecycle performance of artificial lift types to establish data-driven basis of design for each well type
Actively participate in cross-functional problem solving to drive cycle time reduction, identify cost-saving opportunities, and improve production
Understanding, optimizing and troubleshooting from the reservoir to product sales
Essential Experience and Job Requirements
Experience with cost modeling, root cause analysis, systems optimization
Proven track record of multi-disciplinary initiatives
Demonstrated operational and commercial focus: self-directed, process-oriented, strong business acumen, and experience at driving change
Build productive relationships with employees at all levels of the organization, work collaboratively as part of a team, and strong interpersonal and communication skills.
Learn from new ideas and apply solutions to add value
Overcome obstacles with an intense desire to succeed
Make value-based decisions involving measured risk to deliver business objectives
Take responsibility and ownership of business performance
Share knowledge and work together for the good of the business
Keep commitments, listen to others and authentically support change
Essential Education
BS in Engineering with 2 years of Oil & Gas experience
Desirable Criteria and Qualifications
Foster an environment of safety first operations
Demonstrate ability to achieve high performance goals and meet deadlines in fast paced environment
Possess the grit necessary to tackle any challenge and a growth mindset to be on constant lookout for new solutions
Demonstrated use of IMPACT principles:
I - Innovated: Learns from new ideas and applies solutions to add value.
M - Motivated: Overcomes obstacles with an intense desire to succeed.
P - Performance Driven: Makes value-based decisions involving measured risk to deliver business objectives.
A - Accountable: Takes responsibility and ownership of business performance.
C - Collaborative: Shares knowledge and works together for the good of L48. | https://www.bp.com/en/global/corporate/careers/jobs-at-bp/Production-Engineer-140991BR.html |
ECS 171 MACHINE LEARNING (4 units)
Format:
Lecture: 3 hours
Discussion: 1 hour
Catalog description
Introduction to machine learning. Supervised and unsupervised learning, including classification, dimensionality reduction, regression and clustering using modern machine learning methods. Applications of machine learning to other fields.
Prerequisites ECS 060 or ECS 032B or ECS 036C; or Consent of Instructor. Probability equivalent to STA 032 or STA 131A or ECS 132 recommended; linear algebra equivalent to MAT 22A recommended.
Credit restrictions, cross listings: None
Summary of course contents
This course will provide an introduction to machine learning methods and learning theory. Students will acquire a general background on machine learning and pattern recognition, including state-of-the-art techniques in supervised and unsupervised learning. The course will include five problem sets that are related to the course outline. Students will work individually or as part of teams to complete a term project that will pertain on the application of these methods in different scientific fields. Topics will include:
- Supervised learning
- Regression
- Artificial Neural Networks
- Support Vector Machines
- Naive Bayes Classifiers
- K-Nearest Neighbors
- Decision Trees
- Unsupervised learning
- Clustering (K-means, hierarchical)
- Dimensionality reduction methods (t-SNE, PCA)
- Special Topics
- Feature Engineering
- Cross-validation
- Deep Learning
- Embeddings
- ML applications
Students will have to complete a computational/review project in coordination with the instructor.
Goals: Students will (1) Acquire fundamental knowledge of learning theory; (2) Learn how to design and evaluate supervised and unsupervised machine learning algorithms; and (3) Learn how to use machine learning methods for multivariate data analysis in various scientific fields.
Illustrative reading
- C. Bishop. Pattern Recognition and Machine Learning. Springer, 2007
- Technical papers and class notes will be used.
GE3: Science & Engineering
Overlap
There is an overlap with ECS 170, related to feature extraction methods and Bayesian methods. This overlap is minimal and the treatment of the underlying methods is fundamentally different: ECS 170 focuses on AI algorithms and logic-based decision making while ECS 171 takes a pattern recognition and machine-learning approach.
Instructors I. Davidson, N. Matloff, and I. Tagkopoulos
History: Updated 9.7.2018 (CSUGA): Prerequisites updated to include new lower division ECS series courses. 2012.26.28 (I. Davidson and I. Tagkopoulos): new course proposal. | https://www.cs.ucdavis.edu/blog/ecs-171-machine-learning/ |
As a starting point, I thought through what each of the people I feed eat throughout the day. I am happy to eat the same thing for breakfast + lunch every day. My partner has a choice of two things for breakfast + two or three things for lunch (*). Our daughter eats what she likes.
Our current list looks like this:
B: coffee + breakfast bite
cereal or oatmeal*
L: fruit + granola + yogurt
leftovers or pb sandwich or rice noodles + veg*
fruit
D: veg + grain + protein
(bowl, pasta, salad, soup)
tea + fruit or smoothie
The above list is written on one side of a notecard. On the flip side of the notecard, I wrote a loose list of things that supply these meals. There are specifics like oatmeal + peanut butter...and more general things like in-season fruit + in-season veg which can be chosen according to what looks good on a particular trip to the store (which provides variety in our meals).
This is my current list of items:
in-season fruit
in-season veg
greens
lemons
milk
eggs
butter
cheese
bread
oatmeal
granola
cereal
honey
PB
rice/quinoa/pasta
olive oil
coconut milk
beans
TP
soap
dish soap
laundry soap
This list covers the staple things that ensure we can make + eat real meals at home. Things like tea + spices that run out less often can be written down for a specific trip.
I've made my grocery lists in various ways over the years, but this is probably the simplest system I've used. It involves decreasing some expectations for more complicated meals + variety, but this is exactly how I want to eat...so it's working for me. (Of course, I can always accommodate a craving too.)
All of which leaves a little time for sipping chai on the porch of a favorite coffee shop on my day off instead of doing chores all day. :)
Love,
Jane
decreasing that grocery bill
3/27/2022
Supply chain issues, rising gas prices and everything else seem to be contributing to palpably rising prices at the grocery store. While zero-waste grocery shopping can be more expensive than regular grocery shopping in some instances, there are actually many more ways that shopping with the planet in mind can decrease our grocery bills.
Here are a few ways to decrease that grocery bill while doing some climate action as well:
May your bodies be nourished + your grocery bills be reasonable. :)
Love,
Jane
wealth
3/24/2022
This weekend, someone asked me why I don't do a men's line.
I replied, I am working as hard as I care to be.
I value my free time.
He said, that is pretty interesting, I like that.
You know what I say,
freedom is wealth.
#freestate
~Jesse Kamm (photo via)
journal cards
3/21/2022
When I start a new journal, I enjoy taking some time to review the past journal + to think through what I want to bring forward from it. The first few pages of each of my journals contain headings like ritual, zero-waste, home, clothing, food, movement + budget. I enjoy thinking through these topics and migrating lists + intentions onto new pages.
A journal usually lasts a full year for me, but this past year I filled three journals. Those front pages did not make it into every journal. To be honest, these values + systems have felt a bit overwhelming this year. My life shifted a bit + new ways of living were required.
I finally decided to write those pages onto cards that I can move from journal to journal easily. The food card received some revisions + now lives with the reusable bags in my car so I can reference it at the grocery store. I can pull out the clothing card, when the weather is about to change + I need to decide what I need/want moving forward. I can reference our budget goals easily, when I need a jolt of reality or when big decisions need to be made.
Notecards seem like appropriate vessels for these systems. They provide only enough space for the basics. I like to keep my thoughts on these concepts uncluttered + manageable. These cards are simple + succinct reminders of my systems.
Love,
Jane
a simple list :: groceries
3/20/2022
Many of us believe that meal planning + grocery list making are helpful parts of eating intentionally. We know that time spent in these endeavors pays off in reducing both our food waste + our grocery bills. Even so, many of us find ourselves at the grocery store unprepared at times. I've found myself in the unprepared category more + more lately, as my previous routine has been interrupted by a new schedule.
When a new challenge comes along, I look for ways to simplify. I've simplified my grocery list, by simplifying the way I think about meal planning. Variety was once high on my list of meal planning priorities, but lately I've let that go a bit. I'm focusing on bowl meals (vegetables + greens + grain + protein), because they are my favorite. A bowl meal could be a salad, soup, roasted vegetables, pasta (Julia's favorite), etc. A bowl meal could skew Japanese, Indian, Italian, Greek, Mexican (variety!), etc. And yet, the concept is simple.
On one side of a notecard, I've listed what we eat for each meal. We are happy to eat basically the same things every day for breakfast + lunch. Supper is a bowl meal. Now it is quite simple to turn the card over + make a basic list of items to stock (in-season fruits, vegetables, greens, grains, lemons, beans, etc.).
I can keep this list in my car (along with the grocery bags). The list is not specific, but it lists all the standard things (including items like soap + toilet paper) we need in order to function + make real meals at home. These two lists fit onto one notecard which makes the whole system feel quite simple + manageable.
I imagine a lot of people shop like this...with the list in their heads. I'm not sure if it is the stress that I will forget something, a chatty companion, my poor memory or a very limited budget that has preventing me from shopping purely by memory. All I can say is that this list is a shift for me...and I think it's working...for now. :)
Love,
Jane
resources
3/18/2022
Little resources are needed to keep a garment in rotation-
neither money nor material.
~Kate Fletcher
everyday climate action :: 65 - 73
3/17/2022
Plants, books, coffee + banana bread are some of my favorite things...and they can be climate action too! yay!
loving right now
3/15/2022
The warm days are coming with a little bit more frequency lately, and these warm neutrals are holding my attention. I'm dreaming of bare legs or bare arms soaking in the afternoon sunshine. Can't wait. :)
All lovely photos via links.
Love, | https://www.fairdare.org/blog/archives/03-2022 |
The dinner will be held in the White House East Room, with the entertainment portion taking place on the South Lawn under a tent.
Mexico was last hosted at a White House state dinner in 2001 by President and Mrs. Bush.
ENTERTAINMENT
Beyoncé, Rodrigo y Gabriela, and the United States Marine Band will perform.
DINNER DECOR
The tables will be covered in Mayan blue linens, which the White House says resemble ripples of water.
The centerpieces contain Yve Piaget garden roses, Amnesia Roses, and Fuchsia Cattleya orchids, as well as scented geranium foliage and prickly pear cactus.
The featured china comes from the White House's Clinton collection from 2000 and the Eisenhower collection from 1955.
PERFORMANCE DECOR
The South Lawn tables are decorated in shades of orange and green, with centerpieces made up of marzipan and chocolate flowers. The flowers are made into marzipan roses, the national flower of America, and dahlias, the national flower of Mexico.
MENU
Mrs. Obama worked with Guest Chef Rick Bayless and White House Executive Chef Cristeta Comerford to create the menu, which includes produce harvested from the first lady's garden. The desserts were made with honey made at the White House.
Jicama with Oranges, Grapefruit, and Pineapple
Citrus Vinaigrette
Ulises Valdez Chardonnay 2007 "Russian River"
Herb Green Ceviche of Hawaiian Opah
Sesame-Cilantro Cracker
Oregon Wagyu Beef in Oaxacan Black Mole
Black Bean Tamalon and Grilled Green Beans
Herrera Cabernet Sauvignon 2006 "Selección Rebecca" | https://www.foxnews.com/politics/white-house-brings-star-power-to-state-dinner |
Binkie was an exemplar of the female of a species now almost extinct – dedicated to plain living and high thinking. At some point during her career with Campbell and Gifford she encountered a soulmate in Robert Waddell (always known as RW, as his wife Hannah was known as Mrs RW), a consultant engineer some thirty years older than her. My Aunt Jane (who was closer to him than was the rest of the family) said firmly they fell in love with one another, but as romance, let alone anything else, was of course out of the question, they simply shared their love of literature and classical music. These days that would be regarded as laughably self-restrained.
Unfortunately Sandy's earlier letter and the material he enclosed haven't come to light. But the references to tank-tracks and the urgent need for facilities to produce super-hardened steel proof against the depredation of North African sand fit in perfectly with the involvement of both Robert (click here) and indeed his son Walter (click here) with this issue.
At the same sort of time that I was unloaded on Binkie that weekend, my father also took me on a visit to two quite elderly ladies, Dot and Leonie Ridges, who (as all the most interesting people do) lived in Hampstead. I believe there was another sister, Marian, and a brother Jimmy, somewhere in the family frame, and that they were all from Newry – I do recall that Dot and Leonie had retained a strong Ulster twang. Needless to say, Walter didn't explain in advance who they were, except that they were something to do with his father Robert. He was very much at ease with them, and I suspect they were retirees from Campbell and Gifford – and I have a note from somewhere that Jimmy Ridges had been a salesman for C & G.
Dot (Dorothy) Ridges doesn't ring any bells on the internet but Leonie Victoria Ridges (8 Jul 1887 - 26 Sep 1972), born in Newry, resident in Hampstead, certainly does. It's a little bit spooky that all that remains of them on earth are a few tweaks and folds of protein molecules in my long-term memory archive, and pretty soon I too will just be a few tweaks and folds in someone else's protein molecules!
Open Space, Spring 2009, Vol 29, No 4, p15
Meriel Biggs (transcript)
Last year our member Meriel Biggs died, leaving us a legacy. Beaujolais Rood [there's a name to savour], her neighbour for 38 years, writes of Meriel's fascinating life.
If Meriel had been born of a later generation she would have accomplished much. Nevertheless, she was full of enthusiasm for new ideas, with a keen intelligence, delighting in words, reaching out to people, and always wanting to hear their best news and encouraging them.
Meriel Edith Dixon Biggs, known as Binkie, was born in Widford in Hertfordshire in 1911. At school she excelled at athletics and was the long-jump champion. Her headmistress suggested that she study medicine, but her mother put paid to that idea saying that her daughter did not need a career as she would get married.
Literary agents
She started work as a secretary with the top literary agents Curtis Brown in London. When authors such as Osbert Sitwell came to discuss their books she would draw cartoons of them on sheets of paper under the table. She kept some in an album which she presented to Curtis Brown in 2007. She told me how Frieda Lawrence, widow of D H Lawrence, would come in to discuss royalties wearing brightly coloured clothes and talking in a loud voice.
Much as she enjoyed Curtis Brown, the pay was poor and she moved on to work for publishers and then an engineer [Campbell Gifford].
During the war Binkie volunteered as a fire warden. and one of her letters contains a lively description of an evening at the shelter. Her duties were to patrol the streets at night ensuring all were safe.
After the war she got a scholarship to study music and won several awards. She taught piano and liberal studies, which included English and drama.
Festival
In 1962 she organised a Brooklands music festival in Surrey, featuring musicians such as Neville Marriner and Leo Goossens. She had a good eye for art and many local artists appreciated her constructive criticism of their work.
She supported sports as a way of keeping young people off the streets and became involved in several local campaigns to save public spaces. She wanted her house and garden to be made into a small park when she died, but unfortunately the council did not accept the idea.
Binkie was an indefatigable campaigner and writer of letters to newspapers on such diverse issues as background music spoiling programmes, foxhunting, airport noise and nuclear waste, cycling on pavements, skate-boarding on the local recreation ground and women in parliament (if only to enliven rows of boring suits).
I certainly don't recollect Binkie's bungalow as being anything like as palatial and ‘up-market' as it now seems to have become, to judge from the picture below.
Farnaby's, 2017
aka Farnabys (nb omission of discreetly omitted tricky apostrophe – she would have been much amused), Elgin Road, Weybridge KT13 8SN (5 bedrooms, 3 bathrooms, 4 reception rooms, price estimate £1.64 million (she would have been horrified). | https://www.ornaverum.org/family/waddell/robert/binkie-biggs.html |
This summit white sedan has an automatic transmission and is powered by a 1.4L I4 16V GDI DOHC Turbo engine. Given that frugal motor it gets 8.3 L/100 km in the city and uses just 6.2 L/100 km out on the highway according to Transport Canada.
Want to try the 2019 Chevrolet Cruze LT ? | https://www.vickarchevrolet.ca/en/new-inventory/vehicle/2019/chevrolet/cruze/lt/5841325 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.