The Avars were a confederation of heterogeneous (diverse or varied) people consisting of Rouran, Hephthalites, and Turkic-Oghuric races who migrated to the region of the Pontic Grass Steppe (an area corresponding to modern-day Ukraine, Russia, Kazakhstan) from Central Asia after the fall of the Asiatic Rouran Empire in 552 CE. They are considered by many historians to be the successors of the Huns in their way of life and, especially, mounted warfare. They settled in the Huns' former territory and almost instantly set upon a course of conquest. After they were hired by the Byzantine Empire to subdue other tribes, their king Bayan I (reigned 562/565-602 CE) allied with the Lombards under Alboin (reigned 560-572 CE) to defeat the Gepids of Pannonia and then took over the region, forcing the Lombards to migrate to Italy.
The Avars eventually succeeded in establishing the Avar Khaganate, which encompassed a territory corresponding roughly to modern-day Austria, Hungary, Romania, Serbia, Bulgaria down to and including parts of Turkey. The departure of the Lombards for Italy in 568 CE removed another hostile people from Pannonia, enabling Bayan I to expand his territories with relative ease and found the empire which lasted until 796 CE, when the Avars were conquered by the Franks under Charlemagne.
Origins & Migration
The first mention of the Avars in Roman history comes from Priscus of Panium in 463 CE.
The precise origin of the Avars (like that of the Huns) is debated, but many historians, such as Christoph Baumer, link them with the Rouran Khaganate of Mongolia, north of China. The Rouran Khaganate was overthrown by the Gokturks in 552 CE, and the people, led by the Xianbei Mongolians, fled west to escape their rule. This claim seems the most likely but is not accepted by all scholars. The Ju-Juan tribe of Mongolia allied themselves with the White Huns against the people known as the Toba (who were Turkish) in numerous engagements and established themselves as an empire in the Mongolian region c. 394 CE. This empire became known as the Rouran Khaganate, which fell to the Gokturks in 552 CE, shortly before the Avars appear in the Steppe c. 557 CE, and so Baumer, and those who agree with him, appear to be correct.
The first mention of the Avars in Roman history comes from Priscus of Panium in 463 CE, who mentions the Avars in connection with a tribe known as the Sabirs who appear to be a subset of the Huns. Priscus is one of the primary sources on the Huns (he met and dined with Attila in 448/449 CE while on a diplomatic mission) and took note of their activities following the death of Attila in 453 CE. The Hunnic Empire which Attila established was in the process of disintegrating at this time (c. 463 CE), beginning with the Hun defeat by Ardaric of the Gepids in 454 CE at the Battle of Nedao.
Feast of Attila by Fine Arts in Hungary (Public Domain)
Following Nedao, other nations that had been subjugated by the Huns rose against them, and the Hunnic Empire was dismantled by 469 CE. Whether the Avars mentioned by Priscus are the same coalition as those who fled Mongolia in 552 CE is debated. Many of the so-called "barbarian" tribes mentioned by Roman writers (the Alemanni, for example) changed in ethnic make-up from the time they are first mentioned to their later references. Most likely, as historians such as Peter Heather and Denis Sinor claim, the latter Avars were a different group of the same name. The earlier Avars appear to be an established confederacy of the region, while the later Avars were refugees from Central Asia fleeing the Gokturks who, it seems, pursued them.
Contact with Rome
Regarding their origin and flight west, Heather writes:
[The Avars] were the next major wave of originally nomadic horse warriors, after the Huns, to sweep off the Great Eurasian Steppe and build an empire in central Europe. Thankfully, we know rather more about them than about the Huns. The Avars spoke a Turkic language and had previously starred as the dominant force behind a major nomadic confederation on the fringes of China. In the earlier sixth century they had lost this position to a rival force, the so-called Western Turks [Gokturks], and arrived on the outskirts of Europe as political refugees, announcing themselves with an embassy that appeared at Justinian's court in 558. (401)
Justinian I (482-565 CE) received the embassy and agreed to hire them to fight against other troublesome tribes. The Avars performed their duties admirably and expected continued payment from the empire. They wanted their own homeland to settle where they could feel secure from the pursuing Turks. The king of the Avars, Bayan I, tried to lead his people south of the Danube River but was prevented by the Romans. He then led the Avars north but encountered resistance from the Franks under their king Sigebert I. They continued as nomads in the service of Rome until the death of Justinian in 565 CE. His successor, Justin II (c. 520-578 CE), canceled their contract and, when the Avar embassy asked for permission to cross the southern Danube, it was denied. They again sought to break through to the north but were repelled by Sigebert I's army. Bayan I then turned his attention to Pannonia or, according to other sources, was invited to go there by Justin II to displace the Gepids.
The Lombards under Alboin were already in Pannonia in conflict with the Gepids who controlled most of the region. As with the Avars, sources conflict on whether the Lombards migrated to Pannonia on their own or were invited by the empire to drive out the Gepids. Bayan I wanted to take the capital city of Sirmium but did not know the region and needed the help of those more familiar with it. He allied himself with Alboin and the Lombards and, in 567 CE, the two armies joined to crush the Gepids between them. Bayan I negotiated the terms of the alliance with Alboin before they went into battle: if they should win, the Avars would be given the Gepid lands, wealth, and people as slaves, and the Lombards would be allowed to live in peace. Why Alboin agreed to this unequal agreement is unknown, but it is clear that he did. As with the Huns and their policies toward other nations, it is possible that Bayan I threatened Alboin with conquest if he did not comply with Avar interests.
Alboin from the Nuremberg Chronicle by Michel Wolgemut, Wilhelm Pleydenwurff (Public Domain)
The armies met in battle some distance from Sirmium and the Gepids, under their king Cunimund, were defeated. Sources differ on what happened in the aftermath: according to some accounts, Bayan I killed Cunimund and had his skull turned into a wine cup - which he then presented to Alboin as a comrade in arms - while, according to others, Alboin killed Cunimund and made his skull into a cup which he then wore on his belt.
The armies marched on Sirmium but the Gepids had already called for help from the Eastern Empire, agreeing to surrender the city to them; by the time Bayan I and Alboin reached Sirmium, it was heavily defended and they were driven back. Since they had not prepared themselves for an extended siege, the armies withdrew.
Rise of the Avar Empire
Although Sirmium remained untaken, the Avars now controlled most of Pannonia and the Lombards found that the deal they had brokered earlier was an unfortunate one for them. Alboin tried to form an alliance with the Gepids against the Avars by marrying Cunimund's daughter Rosamund whom he had taken after the battle. It was now too late, though, as the Avars were simply too powerful to contest. In 568 CE, Alboin led his people out of Pannonia to Italy where, in 572 CE, he would be assassinated in a plot hatched by his wife to avenge her father.
The Assassination of Alboin by Dreweatts Auction Catalogue (Public Domain)
The Avars under Bayan I then set about building their empire on the plains of Pannonia. That there seems to have been a core "Avar" ethnicity among the larger Avar confederation is seen in some of Bayan I's military decisions and decrees. The historian Denis Sinor writes:
The ethnic composition of the Avar state was not homogeneous. Bayan was followed by 10,000 Kutrighur warrior subjects already at the time of the conquest of the Gepids. In 568 he sent them to invade Dalmatia, arguing that casualties they may suffer while fighting against the Byzantines would not hurt the Avars themselves. (222)
Under Bayan I's leadership, the Avars expanded across Pannonia in every direction and, through conquest, enlarged their empire. A number of Slavic people had followed the Avars into Pannonia, and these were now subjects of Avar rule and seemed to be treated with the same lack of regard accorded the Kutrighur soldiers Sinor mentions. Bayan I oversaw the selection of the Avar base of operations in their new homeland and may have chosen it for its association with the Huns. Historian Erik Hildinger comments on this, writing:
The Avars established their headquarters near Attila's old capital of a hundred years before and fortified it. It was known as The Ring. Now well established in Pannonia, Bayan fought the Franks of Sigebert again and defeated them in 570. A dozen years later Bayan attacked Byzantine territory and seized the city of Sirmium on the Sava River. He followed this with further campaigns against the Byzantines, the Avars taking Singidunum (Belgrade) and ravaging Moesia until they were defeated near Adrianople in 587. To the Byzantines, it must have seemed like a reprise of the Hunnic aggression of the fifth century. (76)
Avar Conquest
With Sirmium now taken, and operating efficiently from The Ring, Bayan I continued his conquests. Christoph Baumer writes how Bayan I drove his armies into the Balkans and demanded tribute from the Eastern Empire for peace and then, "together with the beaten Slavs, whom they abused as a kind of 'cannon fodder', they invaded Greece in the 580's" (Volume II, 208). They operated in warfare with tactics similar to those used by the Huns a century before. Like the Huns, the Avars were expert horsemen. Baumer notes that, "The iron stirrup came to Europe only with the invading Avars in the second half of the sixth century." The stirrup "enabled riding in a squatting or almost standing position, which improved the rider's mobility, but also increased the impact of an attacking cavalry" (Volume I, 86). The stirrup greatly enhanced the already formidable Avar cavalry and made them the most feared and invincible mounted military force since the Huns. Baumer writes:
In his famous military handbook Strategikon, the Byzantine emperor Maurice (reigned 582-602) aptly described the battle style of the Avars, whom he compared to the Huns, as follows: "they prefer battles fought at long range, ambushes, encircling their adversaries, simulated retreats and sudden returns, and wedge-shaped formations... When they make their enemies take to flight, they are not content, as the Persians and the Romans, and other peoples, with pursuing them a reasonable distance and plundering their goods, but they do not let up at all until they have achieved the complete destruction of their enemies... If the battle turns out well, do not be hasty in pursuing the enemy or behave carelessly. For this nation [the steppe nomads] does not, as others do, give up the struggle when worsted in the first battle. But until their strength gives out, they try all sorts of ways to assail their enemies." (Volume I, 265-267)
Justin II had begun a war against the Sassanids in 572 CE and, with imperial forces drawn to the east, Bayan I invaded further into Byzantine territories. He demanded higher and even higher tribute and defeated the imperial armies sent against him. It was not until 592 CE, with the conclusion of the empire's war with the Sassanids, that the emperor Maurice was able to send an army of adequate force against Bayan I. The Avars were driven from the Balkans and back into Pannonia by the imperial troops under the general Priscus, almost to their capital. The Avars would most likely have been destroyed en masse were it not for the insurrection in Constantinople known as Phocas' Rebellion in 602 CE.
East Roman Empire, 6th century CE by William R. Shepherd (Public Domain)
Maurice refused to allow the army to stand down and ordered them to winter in the Balkans in case the Avars should mount an unexpected attack. The soldiers rebelled and, according to the historian Theophanes (c. 760-818 CE), chose the centurion Phocas (547-610 CE) as their leader:
The soldiers put Phocas at their head, and marched on Constantinople, where he was speedily crowned, and Maurice with his five sons executed. This was on the 27th of November, 602. The usurpation of Phocas was followed by an attack on the empire, both east and west, by the Persians on the one hand and the Avars on the other. But two years later the Khagan [King of the Avars] was induced to make peace by an increased annual stipend (451).
At this same time (602 CE), a plague broke out in the Balkans and swept across the surrounding regions. It is likely that Bayan I was one of the many victims of the disease. Bayan I was succeeded by his son (whose name is not known) who attempted to carry on his father's empire. In 626 CE he led a campaign against Constantinople, allied with the Sassanid Empire, in a land and sea attack. The formidable defenses of the Theodosian Walls (built under the reign of Theodosius II, 408-450) repelled the land attack, while the Byzantine fleet defeated the naval assault, sinking many of the Avar ships. The campaign was a complete failure and the surviving Avars returned home to Pannonia.
The Decline of the Avar Empire
The emperor at this time was Heraclius (reigned 610-641 CE), who immediately stopped the payments to the Avars. Baumer notes that, "this deprived the Avar Khaganate, whose tribes and clans depended on regular distribution of goods, of their economic basis" (Volume II, 208). When Bayan's son died in 630 CE, the Bulgars of the region rose in revolt and civil war broke out between the Avars and the Bulgars. The Bulgars appealed to the Eastern Empire for assistance but they were too busy fighting off an attack by the Arabs to help, so the Bulgars pressed on by themselves. Although the Avars won this struggle, the conflict was costly and the power of the Avars declined. Baumer writes:
Archaeological research shows that Avar material culture changed after 630, for in male graves the number of weapons as burial objects declined considerably. The economy of the Avar Empire ceased to be based on wars and raids, being gradually replaced by agriculture; the former horse warriors exchanged lance and armour for the plough and now lived in houses with saddleback roofs which were dug into the ground. (Volume II, 209)
Peter Heather notes that, "just like the Huns, the Avars lacked the governmental capacity to rule their large number of subject groups directly, operating instead through a series of intermediate leaders drawn in part from those subject groups" (608). This system of government worked well as long as Bayan I ruled but, without him, led to disunity. When Charlemagne of the Franks rose to power in 768 CE, the Avars were in no position to challenge him. Charlemagne conquered the neighboring Lombards in 774 CE and then moved on the Avars but had to halt his campaign to deal with a revolt by the Saxons. Instead of taking advantage of this reprieve to strengthen their defenses and mobilize, the Avars fought among themselves and the conflict finally broke into open civil war in 794 CE in which the leaders of both factions were killed. The subordinate authority left in charge offered the remnants of the Avar Empire to Charlemagne, who accepted, but then attacked anyway in 795 CE, taking The Ring easily and carrying off the hoard of Avar treasure. The empire officially ended in 796 CE with the official surrender and, after that date, the Avars were ruled by the Franks. The Avars revolted in 799 CE but were crushed by the Franks by 802/803 CE and, afterwards, merged with other people.
Avar Belt Mount by Metropolitan Museum (Copyright)
Their legacy, however, was to forever change the ethnic make-up of the regions they had conquered. Peter Heather writes:
There is every reason to suppose that [the Avar Empire's system of government] had the political effect of cementing the social power of chosen subordinates, further pushing at least their Slavic subjects in the direction of political consolidation [and to] both prompt and enable a wider Slavic diaspora, as some Slavic groups moved further afield to escape the burden of Avar domination. Large-scale Slavic settlement in the former east Roman Balkans - as opposed to mere raiding - only became possible when the Avar Empire (in combination with the Persian and then Arab conquests) destroyed Constantinople's military superiority in the region. (608)
Like the Huns, to whom they are often compared, the Avars radically changed the world they inhabited. They not only displaced large numbers of people (such as the Lombards and the Slavs) but broke the political and military power of the latter half of the Roman Empire. They were among the fiercest mounted warriors in history but, as Howorth phrases it, they were also "herdsman and freebooters, and doubtless were dependent on their neighbors and slaves for their handicrafts, except perhaps that of sword-making" (810). Even their swords were linked to the Huns in that "'Hunnic swords' are referred to by the Frank chroniclers, by which perhaps Damascened blades are meant, such as those found in large numbers in a boat at Nydam in Denmark, apparently dating from this period" (Howorth, 810). The legacy of the Avars is still recognized in the present day in the populations of the lands they conquered. They are so often compared with the Huns for good reason: through their military campaigns, they significantly altered the demographics of the regions they raided, uprooting and displacing large numbers of people who then established their cultures elsewhere.
FIELD OF THE INVENTION
This invention generally relates to the art of fluid controls and, more particularly, to fuel controls for combustion engines such as gas turbine engines that provide primary or secondary power to a vehicle.
BACKGROUND OF THE INVENTION

Cost and size of engine components are of constant concern in vehicular engine applications. This is particularly true for small turbojet engines that are designed for use in missiles and other short-life/disposable applications.
It is known to use a pulse width modulated valve (PWM valve) on the high pressure side of a fuel pump to meter the fuel flow to a gas turbine engine by cycling the PWM valve between an on and off position. Fuel flow is determined by the time period that the valve is open during each cycle and by the cycle frequency. Typically, such systems utilize a regulator valve to control the inlet pressure to the PWM valve by bypassing fuel flow from the high pressure side of the fuel pump back to the fuel tank. Examples of such systems are shown in U.S. Pat. Nos. 3,568,495 to Fehler et al.; 3,936,551 to Linebrink et al.; and 4,015,326 to Hobo et al.
Two disadvantages associated with these systems are the size and cost of the PWM valve components which must be designed to withstand the output pressure of the fuel pump, which commonly is in the range of 100-200 psig to provide adequate fuel injection pressure to the combustor.
Another disadvantage associated with these systems is the wasted power input into the pressurized fuel flow that is bypassed by the regulator valve from the high pressure side of the fuel pump back to the fuel tank. The wasted power is particularly critical in missiles and other vehicles having a limited fuel capacity and a mission profile that may be determined by the time required to deplete the stored fuel.
Yet another disadvantage associated with these systems is the pulsating flow generated by the PWM valve as it cycles between its open and closed positions. Such pulsating flow can result in combustor flameout and/or deleteriously affect the combustor stability. Accordingly, depending on the engine and combustor parameters, these systems typically require some form of accumulator/damper in the high pressure fuel line connecting the PWM valve to the combustor to dampen the pulses in the fuel flow to the combustor. The accumulator/damper is an additional component that adds cost, complexity and weight to the system and introduces a potential failure point in the system.
Thus, it can be seen that there is a need for a small, low-cost, and efficient fuel control system for gas turbine engines and, in particular, for small turbojet engines.
SUMMARY OF THE INVENTION

It is the principal object of the invention to provide a new and improved fluid flow control system.
More specifically, it is an object to provide a small, low cost fluid flow control, and particularly a small, low-cost fuel control system for a gas turbine engine and, in particular, for small turbojet engines.
It is a further object of the invention to provide a fluid flow control system that utilizes a PWM valve to meter the fluid flow without requiring any additional components dedicated to damping pulses in the fluid flow generated by the PWM valve.
It is a further object of the invention to provide a fuel control system that reduces or eliminates the energy wasted in bypassing pressurized fuel flow from a pump outlet back to a fuel tank.
These and other objects of the present invention are attained in a fluid flow control in the form of a fuel control system that utilizes a PWM valve to meter a fuel flow to the inlet of a fuel pump that pumps the metered fuel flow to an engine. By virtue of this construction, the PWM valve is not subjected to the output pressure of the fuel pump. This allows the fuel control system to utilize a small, low-cost PWM valve, such as is commonly used in connection with automotive fuel injectors. Further, because the fuel is metered prior to entering the fuel pump, the fuel pump only pumps the precise amount of fuel required for the engine and no energy is wasted in pumping a fuel flow that must be bypassed back to a fuel tank. Additionally, because the PWM valve is on the inlet side of the fuel pump, the fuel pump can be utilized to dampen the PWM valve generated pulses in the fuel flow by operating with a vapor core wherein fuel is vaporized at the pump inlet and reformed back to liquid at the pump outlet, thereby damping the pulses.
According to one aspect of the invention, a method for controlling a fluid flow rate from a pump is provided and includes the steps of providing a pump having a pump inlet and a pump outlet, and a fluid flow path to the pump inlet. The fluid flow path is cyclically restricted to achieve a fuel flow to the pump inlet that cycles between a first flow rate for a time period T1 and a second flow rate for a time period T2, with the second flow rate T2 being greater than the first flow rate. The fluid flow to the pump inlet is pumped by the pump from the pump inlet to the pump outlet.
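As a rough numerical illustration of the metering principle in this aspect (an editorial sketch, not part of the patent text; the flow and timing values are assumed), the average flow delivered over one valve cycle is simply the duty-weighted mean of the two flow rates:

```python
# Illustrative sketch: time-averaged flow obtained by cycling between two
# flow rates, as in the claimed method. All numbers below are hypothetical.

def average_metered_flow(w1: float, w2: float, t1: float, t2: float) -> float:
    """Time-averaged flow over one cycle.

    w1 -- first (restricted) flow rate, typically near zero
    w2 -- second (open) flow rate, greater than w1
    t1 -- duration of the restricted portion of the cycle
    t2 -- duration of the open portion of the cycle
    """
    return (w1 * t1 + w2 * t2) / (t1 + t2)

# Example: 50 Hz cycle (20 ms period), valve open for 6 ms, open flow sized
# at an assumed maximum engine demand of 120 lb/hr.
print(average_metered_flow(w1=0.0, w2=120.0, t1=0.014, t2=0.006))  # -> 36.0 lb/hr
```

With the first flow rate near zero, the delivered flow is effectively the open flow rate scaled by the fraction of the cycle during which the valve is open.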
According to another aspect of the invention, the method further includes the steps of vaporizing at least a portion of the fluid flow at the pump inlet for at least a portion of the time period T1 and reforming the vaporized fluid flow back to liquid at the pump outlet.
According to another aspect of the invention, an improvement is provided in a method for controlling the fluid flow rate from a pump including the steps of providing a pump having a pump inlet and a pump outlet, providing a substantially liquid fluid flow to the pump inlet, pumping the fluid flow with the pump from the pump inlet to the pump outlet while creating a pressure at the pump outlet that is above the vapor pressure of the fluid flow at the outlet. The improvement includes repetitively reducing the pressure at the pump inlet to a value below the vapor pressure of the fluid flowing into the pump inlet to provide a vapor core within the pump sufficient to dampen pulses in the fluid flow.
Other objects, advantages and novel features of the present invention will be apparent to those skilled in the art upon consideration of the following drawing and detailed description of the preferred embodiments.
BRIEF DESCRIPTION OF THE DRAWING
The FIGURE is a diagrammatic illustration of a fluid flow control unit in the form of a fuel control system embodying the present invention in combination with a gas turbine engine.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
With reference to the FIGURE, an exemplary embodiment of a fluid flow control system made according to the invention is described and illustrated in connection with a fuel control system for a gas turbine engine, shown generally at 12. However, it should be understood that the invention may find utility in other applications, and that no limitation to use as a fuel control system for a gas turbine engine is intended except insofar as expressly stated in the appended claims.
The fuel control system includes a pressurized fuel storage device or fuel tank 14; a fuel pump 16; a fuel flow path 18 from the fuel storage device 14 to the fuel pump 16; a restricting means, shown in the form of a PWM valve 20, for cyclically restricting the fluid flow path 18 to achieve a fuel flow to the pump inlet that cycles between a first flow rate for a time period T1 and a second flow rate for a time period T2, with the second flow rate being greater than the first flow rate; and means, shown in the form of a regulator valve 22, for regulating pressure in the storage device 14 to achieve a desired average pressure differential in the fuel flow across the PWM valve 20.
The gas turbine engine 12 may be of any known construction and includes a compressor section 24, a turbine section 26, and a combustor assembly 28. As is known, the compressor section 24 supplies a pressurized airflow to the combustor assembly 28 where the airflow is mixed with fuel and combusted to produce a hot gas flow that is expanded through the turbine section 26 to produce shaft power and/or thrust from the gas turbine engine 12. It is anticipated that the fuel control system will be particularly useful with gas turbine engines 12 in the form of small turbojets, such as those disclosed in U.S. Pat. Nos. 5,207,042, issued May 4, 1993 to Rogers et al. and 4,794,742, issued Jan. 3, 1989 to Shekleton et al., the entire disclosures of which are herein incorporated by reference.
The pressurized fuel storage device 14 may be of any known construction and is shown in the form of a pressure tank or chamber 30 and a fuel bladder 32 contained within the pressure chamber 30. The pressure chamber 30 includes a pressure port 34 for receiving a regulating air pressure flow from the compressor section 24. The pressure chamber 30 further includes a fuel outlet port 36 for supplying fuel from the fuel bladder 32 to the fuel flow path 18.
The PWM valve 20 includes a valve inlet 40, a valve outlet 42, and an electromagnetically actuated spool assembly 44 including a solenoid 46 and a metering spool 48. It should be appreciated that any known type of PWM valve 20 may be utilized in the fuel control system and that the valve 20 selected will depend upon the environment and installation requirements, the fuel flow requirements and the operating parameters of the particular engine 12 selected for use with the system.
The fuel pump 16 may be of any known type and is shown in the form of a centrifugal pump including a pump inlet 50, a pump outlet 52, and a centrifugal impeller 54 that is driven by a shaft 56 powered by the gas turbine engine 12. The pump outlet 52 is connected to the combustor assembly 28 by a high pressure fuel conduit 58.
The fuel flow path 18 is shown in the form of a first conduit 60 that directs flow from the fuel outlet port 36 to the valve inlet 40, and a second conduit 62 that directs flow from the valve outlet 42 to the pump inlet 50.
The regulator valve 22 is basically conventional and serves to provide a regulated, constant pressure differential across the PWM valve 20. The regulator valve 22 includes an air inlet 64, an air outlet 66, and a regulating spool 68 for metering the airflow from the air inlet 64 to the air outlet 66. The valve 22 further includes pressure chambers 70 and 72 separated by a piston or diaphragm 73. The regulating spool 68 is controlled by the pressure differential between pressure chambers 70 and 72 acting upon the diaphragm 73 and by a biasing spring 74. The pressure chamber 70 is connected by a pressure tap 75 to the conduit 62 between the valve outlet 42 and the pump inlet 50. The pressure chamber 72 is connected by a pressure tap 76 to an airflow conduit 78 between the air outlet 66 and the pressure port 34. The air inlet 64 is connected to the compressor section 24 by an airflow conduit 80.
A controller 90 in the form of a digital electronic controller provides control signals 92 to the PWM valve 20 based on engine speed and power command signals 94 and engine parameter signals 96, as is known. The controller 90 preferably utilizes conventional digital techniques for providing the control signal 92 to the PWM valve 20, as is known. Accordingly, further description of the constructional details of the controller 90 is not required, it being sufficient to note that, to increase the fuel flow rate from the valve outlet 42 to the pump inlet 50, the controller 90 adjusts the control signal 92 to cause an increase in the time period T2 for the second flow rate and a decrease in the time period T1 for the first flow rate. Conversely, to decrease the fuel flow rate from the valve outlet 42 to the pump inlet 50, the controller 90 adjusts the control signal 92 to cause a decrease in the time period T2 for the second flow rate and an increase in the time period T1 for the first flow rate.
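The duty-cycle adjustment described above can be sketched as follows. This is an illustrative reconstruction rather than the patent's controller; the fixed 50 Hz frequency and the flow figures are assumptions.

```python
# Hypothetical sketch of the T1/T2 adjustment: at a fixed cycle frequency,
# a higher commanded flow lengthens T2 (valve open) and shortens T1 (valve closed).

def cycle_times(commanded_flow: float, open_flow: float, frequency_hz: float = 50.0):
    """Return (t1_closed, t2_open) in seconds for one PWM cycle."""
    period = 1.0 / frequency_hz
    duty = min(max(commanded_flow / open_flow, 0.0), 1.0)  # clamp to [0, 1]
    t2_open = duty * period
    t1_closed = period - t2_open
    return t1_closed, t2_open

print(cycle_times(commanded_flow=36.0, open_flow=120.0))  # -> (0.014, 0.006)
```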
An alternative gas pressurization supply 100 is provided for engine starting. A check valve 101 in the airflow conduit prevents reverse flow of the gas from the supply 100 into the compressor section 24. Preferably, the supply 100 is in the form of a compressed air tank or a start squib. During engine starting, the pressure port 34 receives a pressure flow from the supply 100 for pressurizing the storage device 14.
In operation, fuel flow is supplied to the valve inlet 40 at a pressure Pu via the fuel bladder 32 and the conduit 60. Fuel flow is supplied to the pump inlet 50 at a pressure Pi via the PWM valve 20 and the conduit 62. The fuel flow through the PWM valve 20 is controlled by a signal 92 from the controller 90 which causes the spool assembly 44 to cycle between a first position that allows a first flow rate for a time period T1 and a second position that allows a second flow rate for a time period T2. Typically, the first flow rate will be equal to zero or substantially equal to zero, and the second flow rate will be equal to or greater than the maximum fuel flow rate required for the gas turbine engine 12. Preferably, the spool assembly 44 is cycled at a fixed frequency and the fuel flow rate from the valve outlet 42 to the pump inlet 50 is controlled by adjusting one or both of the time periods T1, T2, as is known.
In order to insure that the flow through the PWM valve 20 has a relatively predictable relationship to the control signal 92, it is important to maintain a relatively constant pressure drop ΔP (ΔP = Pu − Pi) across the PWM valve 20. This function is performed by the regulator valve 22, which senses the pressures Pu and Pi and controls the pressure Pu to maintain a relatively constant ΔP. More specifically, the pressure chamber 70 is pressurized to Pi by the pressure tap 75 and the pressure chamber 72 is pressurized to the pressure Pu by the pressure tap 76. The position of the metering spool 68 is controlled by the pressure differential, ΔP = Pu − Pi, in the pressure chambers 70, 72 to regulate a bleed airflow from the compressor section 24 to the pressurized fuel storage device 14. It should be noted that the above explanation assumes that the pressure Pu at the valve inlet 40 is equal to the pressure in the airflow conduit 78 and the pressurized fuel storage device 14. It is believed that this assumption is essentially correct for most pressurized fuel storage devices utilizing a fuel bladder. However, the regulator valve 22 will still perform satisfactorily in any system where the pressure Pu at the valve inlet 40 is dependent upon the pressure inside the storage device 14. Preferably, the regulator valve 22 has sufficient damping to accommodate any pressure pulses generated by the PWM valve 20 in the conduit 62 while maintaining a relatively constant ΔP across the PWM valve 20.
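A brief aside on why the constant pressure drop matters (an editorial illustration, not taken from the patent): flow through an open orifice-type valve scales roughly with the square root of the pressure drop across it, so holding ΔP constant makes the delivered flow depend only on the duty cycle. The flow coefficient below is an assumed value.

```python
from math import sqrt

def open_valve_flow(delta_p: float, k_valve: float = 12.0) -> float:
    """Approximate flow through the fully open valve for a pressure drop delta_p.
    k_valve is a hypothetical flow coefficient."""
    return k_valve * sqrt(delta_p)

def metered_flow(duty: float, delta_p: float) -> float:
    """Average flow when the valve is open for a fraction `duty` of each cycle."""
    return duty * open_valve_flow(delta_p)

# With delta_p regulated to a constant value, doubling the duty cycle doubles
# the delivered fuel flow:
print(metered_flow(0.3, delta_p=25.0), metered_flow(0.6, delta_p=25.0))  # 18.0 36.0
```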
The fuel pump 16 pumps the fuel from the pump inlet 50 to the combustor assembly 28 via the conduit 58 at a pressure Pb. The fuel pump should be designed to attain the maximum pressure required by the combustor assembly 28. For a small turbojet engine, Pb will typically vary from 25-160 psia during operation.
To prevent combustor flame-out or deleterious effects on combustor stability, it is preferred that the pulsating fuel flow output from the PWM valve 20 be damped to closely approximate steady state flow. In the preferred embodiment, this damping is primarily provided by a pulsating vapor core in the fuel pump 16. More specifically, the damping is provided by vaporizing a portion of the fuel flow at the pump inlet 50 for at least a portion of the time period T1 and re-forming the vaporized fuel back to liquid at the pump outlet 52 throughout the time periods T1 and T2. Fuel is vaporized at the pump inlet 50 during the time period T1 because the PWM valve 20 is essentially closed at this time while the pump 16 continues to operate. This causes the pressure at the pump inlet 50 to drop, resulting in such vaporization which forms the vapor core within the pump 16. When the PWM valve 20 again opens, fuel at about the pressure at the pressure port 34 is available at the inlet 50. This pressure is sufficiently close to or above the vapor pressure of the fuel with the result that vaporization is reduced or ceases altogether, causing pulsating of the vapor core within the pump 16.
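One way to picture this damping effect is to treat the forming and collapsing vapor core as a buffer volume that absorbs the pulsations. The first-order filter below is only a toy model of that behaviour, with an assumed time constant; it is not the patent's analysis of the pump.

```python
# Toy model: smooth a pulsating inlet flow toward its mean, standing in for the
# buffering action of the vapor core. Time constant and flow values are assumed.

def smooth_outlet_flow(inlet_flow_samples, dt=0.001, tau=0.05):
    """First-order smoothing of a pulsating inlet flow series."""
    out, y = [], inlet_flow_samples[0]
    alpha = dt / (tau + dt)
    for w in inlet_flow_samples:
        y += alpha * (w - y)   # the buffer absorbs or releases the difference
        out.append(y)
    return out

# 50 Hz square-wave inlet flow: 120 lb/hr for 6 ms, zero for 14 ms, repeated.
cycle = [120.0] * 6 + [0.0] * 14
samples = cycle * 25  # 0.5 s of operation
smoothed = smooth_outlet_flow(samples)
print(min(smoothed[-20:]), max(smoothed[-20:]))  # stays near the ~36 lb/hr mean, not 0-120
```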
At the same time, the geometry of the pump 16 is such that pressure at its outlet 52 is always above the vapor pressure of the fuel. Consequently, only liquid fuel flows from the outlet 52. This flow is at a relatively constant pressure because the changing length of the vapor core within the pump as the vapor core forms and collapses in pulsating fashion acts as a damper for the pulsating liquid fuel flow through the PWM valve 20. The ability of centrifugal pumps to reform slugs of vaporized fuel back into liquid form is known and is dependent upon the flow characteristics of the pump 16 and the pump inlet and outlet pressures. Accordingly, it is preferred that the pump be a centrifugal pump and that the components 14, 16, 18, 20, 22 of the fuel system be designed to provide a pressure Pi at the pump inlet 50 that allows for a sufficient amount of vapor damping in the fuel pump 16.
While the exact amount of damping in the fuel flow required will be highly dependent upon the particular engine 12 selected for use with the system 12, it has been determined that for some systems and engines the damping should be sufficient to reduce the pulse amplitude of Pb to approximately 10% of the mean value of Pb based on an operating frequency of 50 hertz for the PWM valve 20.
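A small helper for checking a target of this kind is sketched below; the pressure trace is made up for illustration and is not data from the patent.

```python
def ripple_fraction(pressure_samples):
    """Pulse amplitude of a pressure trace as a fraction of its mean."""
    mean_p = sum(pressure_samples) / len(pressure_samples)
    amplitude = (max(pressure_samples) - min(pressure_samples)) / 2.0
    return amplitude / mean_p

trace = [98.0, 101.5, 104.0, 101.0, 97.5, 96.0, 99.0, 103.0]  # psia, assumed values
print(ripple_fraction(trace) <= 0.10)  # True for this example
```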
From the foregoing, it will be appreciated that, by placing the PWM valve 20 on the low pressure side of the fuel pump 16, the fuel control system may utilize a relatively small and low-cost PWM valve, such as is commonly used in connection with automotive fuel injectors.
It should further be appreciated that, by metering the fuel flow to the inlet 50 of the fuel pump, rather than from the outlet 52 of the fuel pump, the energy required to pressurize the fuel flow to the combustor is minimized because excess flow at high pressure does not exist and therefore need not be returned to the tank as in prior art systems.
It should also be appreciated that the placement of the PWM valve 20 on the inlet side of the fuel pump 16 provides the beneficial advantage of utilizing the fuel pump 16 to provide damping via a pulsating vapor core thereby to minimize the effects of the pulsated fuel flow from the PWM valve 20.
While a PWM valve 20 is preferred, any electromechanical or solenoid valve 20 capable of metering fuel flow by cyclically restricting the fuel flow path 18 to achieve a fuel flow to the pump inlet 50 that cycles between a first flow rate for a time period T1 and a second flow rate for a time period T2 may be utilized. Further, while pulse width modulated control is preferred, any form of control, including cycle frequency control, capable of causing a valve 20 to provide the desired cyclical restriction of the flow path 18 may be utilized. By way of further example, it is anticipated that some systems may utilize a fuel storage device 14 that is not pressurized and, further, may not require a relatively constant pressure differential ΔP across the valve 20.
Every year, the Florence School of Banking and Finance organises an executive seminar, an occasion for the members of its advisory council to gather and debate on a pressing topic in the European banking and financial landscape.
This year’s executive seminar, publicly web streamed, was centred on the following question: what is behind the disinflation over the course of the last three decades?
For Charles Goodhart, co-author with Manoj Pradhan of ‘The Great Demographic Reversal: Ageing Societies, Waning Inequality, and an Inflation Revival’, it is a combination of demography and globalisation. In their presentation, the authors explained the core arguments raised in the manuscript and reflected on their relevance for the post-pandemic period.
According to Goodhart, demographic changes and globalisation produced a ‘massive supply shock’, providing employers with a previously unseen supply of labour. In part, this was the shift of production to countries with growing working-age populations and lower labour costs – like China – combined with the reimporting of goods to Europe and the US. In addition, the drastic increase in women’s labour force participation, witnessed from the 1950s and driven by decreasing birth rates and the availability of consumer durables, further contributed to the vast growth of the available labour force.
Demographic changes and globalisation led to a dramatic change in the ratios of workers’ wages across the globe – from the US to Western Europe, to Eastern Europe and China.
The authors argue that this meant a sharp decrease in inequality across world economies, yet it maintained and further enhanced inequalities within national economies, leading to the rise of populism and anti-globalisation movements. Another crucial effect of the demographic change, notably the ageing of the population, has been an important increase in consumption by the elderly. With the increase in average life expectancy, the impact of age-related diseases has led to a sharp increase in healthcare costs and spending. ‘We, old people, are expensive!’ exclaims Professor Goodhart. For labour markets, an ageing population means more of the workforce allocated to healthcare: care homes and other facilities.
The demographic structural forces that underlie our economy provide significant new challenges.
Goodhart compares the expenditure and growth of public debt during the Covid-19 pandemic to those of wars. Luckily, both are finite. Unlike the deficits accumulated as a result of wars or pandemics, the growing expenses and deficits stemming from the ageing population are permanent. Therefore, the main long-term concerns raised by the authors are not Covid-19 related. Where the solution would normally be faster growth, higher productivity or increases in taxation, these are either unlikely because of demographic changes (a shrinking labour force precludes fast growth and productivity gains) or politically undesirable (taxation). Against this background, inflation appears to be ‘a political optimum’: although unappealing, inflation does not entail the need to take politically extremely unattractive measures.
The emphasis on the preconditions and models used to project inflation is extremely timely.
Manoj Pradhan emphasised this by discussing the changing role of China in disinflation. According to Pradhan, the predominant view of China as capable of lowering inflation indefinitely is wrong. China’s most-favoured-nation status in the 2000s led to a release of labour but did not create unemployment. Instead, it produced a shift in the composition of growth, procured by the plummeting of interest rates towards zero and the push forward of the housing boom. As a result, construction employment went up while manufacturing employment went down. This shift normalised the difference between the economy’s actual output and its maximum potential output (the output gap) without raising inflation. Today, however, the role of China no longer allows for disinflation, nor for the American housing market to rise at a constant pace.
How will the pandemic impact Charles Goodhart and Manoj Pradhan’s thesis?
High monetary aggregates, due to the inability to spend during lockdowns, also led to a sharp decrease in the velocity of money. Although income during the Covid-19 period has been higher than expenditure, the authors argue that this is likely to change in 2022, leading to ‘surprise’ increases in inflation.
The presentation by the speakers was followed by questions and comments from the members of the Advisory Council of the Florence School of Banking and Finance, as well as questions raised by the audience.

Source: https://fbf.eui.eu/executive-seminar-on-demography-inequality-and-inflation/
Ashland University's outdoor track and field teams continue to pile up Great Lakes Intercollegiate Athletic Conference weekly awards.
As announced by the GLIAC on Tuesday (April 9) morning, senior Myles Pringle (Men's Track), junior Alex Hill (Men's Field) and redshirt freshman Lindsay Baker (Women's Field) are among the Week 3 GLIAC Athletes of the Week. Hill and Baker performed over the weekend at the Northeast Ohio Quad, while Pringle competed at the Sun Angel Track Classic in Tempe, Ariz.
Pringle, in his first outdoor outing of the season, won the men's 400-meter dash in Tempe in a NCAA Division II-leading time of 45.62 seconds. That time also would be No. 3 in NCAA Division I this spring, and is the fifth-best men's 400 outdoor time in the world in 2019.
Baker was impressive in three throwing events at the Quad, winning the women's shot put (16.41 meters/53-feet-10¼), finishing second in the women's discus throw (50.54 meters/165-feet-10) and placing third in the women's hammer throw (58.04 meters/190-feet-5). Baker is ranked No. 1 in the country in the shot, No. 4 in the hammer and No. 8 in the discus.
Also at the Quad, Hill won the men's hammer at 65.91 meters/216-feet-3, and was second in the men's discus at 52.66 meters/172-feet-9. Hill is the No. 3-ranked hammer thrower in D-II, and is No. 9 in the discus.
This is Pringle's 13th all-time GLIAC weekly award, Hill's seventh and Baker's third.
Ashland athletes have earned seven of 12 GLIAC outdoor awards so far in 2019.
The Eagle men and women will compete at two events this week – the Tennessee Relays on Thursday-Saturday (April 11-13) and the Walsh Invitational on Saturday.

Source: http://www.goashlandeagles.com/sports/track/2018-19/releases/20190409bv2yxd
parison: An even balance of clauses, syllables, or other elements in a sentence (OED).
The word is not related to comparison; nor, as I once supposed, is it a variant on pari sono. It is the neuter of the Greek adjective parisos whose meaning is nearly equal. The Gettysburg Address contains eight examples, as was pointed out by the American classicist Charles N Smiley in The Classical Journal of November 1917. Smiley wondered—as many people have—how Abraham Lincoln, with no classical education and no training in rhetoric, was able to write a speech that has had, for millions of people, a significance to rival sacred scriptures. The 272-word speech has become in itself an historic event; and Lincoln’s use of parison—balance, antithesis and parallel—is notable.
Why is parison so inherent in oratory? It even appears in the speech that Tacitus attributes to the captured British chieftain Caratacus when he is pleading for mercy from the Emperor Claudius. It is omnipresent: and the reason, one might argue, is that it is not a figure of speech, but actually a figure of thought. The mental process of balancing, paralleling, contrasting, answering like with like and like with unlike is a dialectic so powerful, so compelling and so persuasive that it dictates the words. A man with the ability of Abraham Lincoln needed no rhetorical training to think and speak in this way.
See Texts menu for a copy of the Gettysburg Address.

Source: http://teacherofclassics.com/?p=704
In a town where green space is a topic of heated debate, the District is trying to walk a fine line between accommodating development while preserving the environment.
District council recently adopted a new tree management bylaw on Oct. 23.
“Tree bylaws are one of the trickiest bylaws,” said Chris Wyckham, director of engineering. “There’s this competing tension... between, ‘I want to do what I want to do – it’s my backyard’... versus, ‘These trees affect the entire neighbourhood.’”
Perhaps the biggest highlight of Tree Management Bylaw No. 2640, 2018 is the inclusion of density targets – that is, ensuring a certain number of trees are planted within a certain area.
For the District, the magic number is 50 trees per net developable hectare.
So for instance, if a developer decides to clearcut a hectare completely covered by forest, 50 trees must be replanted within that space.
“We tried to strike a balance,” said Wyckham.
“That was a very large change which – I think it allayed a lot fears in the development community that this would be impractical. We’re looking for something that’s practical, but incentivizes having a comfortable number of trees in our neighbourhood.”
The Squamish River Watershed Society provided assistance in drafting the new bylaw.
Its executive director explained how 50 trees per hectare turned out to be the number conservationists and municipal staffers settled on.
“What we based it on was the measures being undertaken in the Lower Mainland, in coastal British Columbia and coming out of Washington State,” said Edith Tobe.
“Looking at what’s been implemented – biologically what makes most sense in maintaining the integrity of the landscape.”
There were also considerations regarding global warming, and calculations were made as to how many trees would be needed for water to penetrate the soil rather than run off the surface, she said.
Tobe said the number was “extremely conservative” – she would’ve preferred seeing more than 50 trees per hectare.
A higher number was initially pitched, she noted, but Tobe said her group is content with the outcome and called it a “strong start.”
There are other highlights of the tree management bylaw.
Two replacement trees must be planted for every tree removed, with a ceiling of 50 replacement trees per hectare.
Special favour is given to bigger “significant” trees, which are defined as those more than 80 centimetres in diameter.
Six replacement trees must be planted for every significant tree removed, with a cap of 50 replacement trees per hectare.
“We’re really trying to incentivize find[ing] a way to keep those big trees if you can,” Wyckham said.
Developers also get a three-tree credit for every significant tree they keep, he added.
So if they decided to clearcut a hectare but keep 17 significant trees, that would fulfill the 50-tree density requirement.
In cases where it’s not possible to replant trees in areas where they’ve been removed, the developer must pay $250 for every replacement tree that can’t be planted. The money will go to an environmental fund.
So if one tree gets removed and nothing is planted in its stead, a developer must pay $500 in lieu of the two replacements that would’ve been required.
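As a back-of-the-envelope sketch of how the replacement and cash-in-lieu figures reported here combine (based on this article's description, not on the bylaw text itself):

```python
# Two replacements per ordinary tree, six per "significant" tree (>80 cm diameter),
# capped at 50 replacement trees per hectare, with $250 cash-in-lieu for each
# replacement that cannot be planted.

def replacement_obligation(ordinary_removed, significant_removed, plantable, hectares=1.0):
    required = min(2 * ordinary_removed + 6 * significant_removed, int(50 * hectares))
    to_plant = min(required, plantable)
    cash_in_lieu = 250 * (required - to_plant)
    return to_plant, cash_in_lieu

# One ordinary tree removed with no room to replant: $500 owed, matching the example above.
print(replacement_obligation(ordinary_removed=1, significant_removed=0, plantable=0))  # (0, 500)
```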
These rules apply mainly to larger swaths of land – backyard tree cutters will likely have little to worry about.
Tree removal or management permits are not required for residential land parcels that are 0.4 hectares – that’s one acre – or less, where a home exists.
There are a few exceptions, though, such as if you're removing a significant tree, among other things.

Source: https://www.squamishchief.com/news/local-news/new-tree-removal-bylaw-attempts-to-walk-a-fine-line-1.23490101
Directs the activities of a software applications development function for software application enhancements and new products. This position oversees the analysis, design, programming, debugging, and modification of computer programs for end user applications. Analyzes and investigates engineering tasks and prepares design specifications, analysis, and recommendations as appropriate. Interacts with project managers, marketing, sales, and users to define application requirements and/or necessary modifications. May also have responsibility for testing, documentation, and procedures for installation and maintenance. Selects, develops, and evaluates personnel ensuring the efficient operation of the function. Analyzes, designs, debugs, and modifies software enhancements and/or new products used in local, networked, or Internet-related computer programs.
Should have at least 5 years significant experience as a software engineer in a formal product development environment.
Managed or supervised teams of 3 or more engineers for at least 2 years, with hiring and employee development responsibility.
Must have implemented successful solutions to a wide variety of challenges in application architecture, design and coding, functionality and usability, and optimization.
Effective verbal communication skills to technical and non-technical audiences. Be able to effectively communicate technical concepts to non-technical people. Experience presenting to external customers a plus.

Source: http://careers.nvp.com/jobdetail.php?jobid=1144201
The first 4 canvases (top row, left to right) of my painting Observation Without Manipulation are an exploration of subject and surface grounded in a process-orientated painting technique. Imagery is intentionally restrained; my intent being to limit representational clues as to the nature of the subject, which by the very nature of the painting process becomes as much about painting and personal narrative as it does subject. During the creation of these paintings I became aware of two things: firstly the difference between thinking about something and the physical act of making something, and secondly that the highly reflective surfaces of the paintings could potentially capture my notions of inferred pictorial space as related to my ideas about painted mirrors. I realised that by staging objects to be reflected in these canvases' surfaces, I could add another level of engagement by the viewer that bears direct reference to my own observations and ideas. The second 4 canvases are therefore paintings of staged imagery reflected on the surface of the first 4 (1 through 4 respectively). It should be noted here that the properties of mirrors as well as painted mirrors are being referenced, but without their inclusion. This, I feel, leads the viewer to seek to potentially ‘solve’ the visual clues in front of them, given enough observation and consideration. For example, the first 4 canvases' surfaces will reflect their surroundings, whereas the second 4 are presented reflections. Therefore this ‘effect’ of solving the visual clues places the viewer on the spot by connecting them directly to the subject. In other words, the paintings are referring to the act of observation, as well as the reflective action of consciousness within the observer.
The next phase of this painting involved the appropriation of a cartoon, where two characters are engaged in the act of observing an image, one of which misinterprets the true nature of that image. I created a canvas to the same dimensions as all 8 previous canvases combined (inclusive of gaps), then substituted the cartoon’s image for my 8 canvases, thereby creating a stage by which to reference more deeply the nature of the observer to painting relationship. These canvases then form a whole piece, which then means that the painted characters would have to also be observing two more painted characters observing another 8 paintings, which is indeed what I painted.
Source: http://www.leegascoyne.com/the-origin-of-ideas-observation-without-manipulation-large-scale-painting
Analysis of Global Properties of Shapes (thesis)
Abstract:
With increasing amounts of data describing 3D geometry at scales small and large, shape analysis is becoming increasingly important in fields ranging from computer graphics to robotics to computational biology. While a great deal of research exists on local shape analysis, less work has been done on global shape analysis. This thesis aims to advance global shape analysis in three directions: symmetry-aware mesh processing, part decomposition of 3D models, and analysis of 3D scenes.
First, we propose a pipeline for making mesh processing algorithms “symmetry-aware”, using large-scale symmetries to aid the processing of 3D meshes. Our pipeline can be used to emphasize the symmetries of a mesh, establish correspondences between symmetric features of a mesh, and decompose a mesh into symmetric parts and asymmetric residuals. We make technical contributions towards two of the main steps in this pipeline: a method for symmetrizing the geometry of an object, and a method for remeshing an object to have a symmetric triangulation. We offer several applications of this pipeline: modeling, beautification, attribute transfer, and simplification of approximately symmetric surfaces.
Second, we conduct several investigations into part decomposition of 3D meshes. We propose a hierarchical mesh segmentation method as a basis for consistently segmenting a set of meshes. We show how our method of consistent segmentation can be used for the more specific applications of symmetric segmentation and segmentation transfer. Then, we propose a probabilistic version of mesh segmentation, which we call a “partition function”, that aims to estimate the likelihood that a given mesh edge is on a segmentation boundary. We describe several methods of computing this structure, and demonstrate its robustness to noise, tessellation, and pose and intra-class shape variation. We demonstrate the utility of the partition function for mesh visualization, segmentation, deformation, and registration.
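As an illustration of the partition-function idea (a sketch of the general concept only, not the method implemented in the thesis): run many randomized segmentations and record how often each mesh edge lands on a segment boundary.

```python
from collections import defaultdict

def edge_boundary_probability(edges, randomized_segmentations):
    """edges: iterable of edge ids.
    randomized_segmentations: one set of boundary edge ids per randomized run."""
    counts = defaultdict(int)
    for boundary_edges in randomized_segmentations:
        for e in boundary_edges:
            counts[e] += 1
    runs = len(randomized_segmentations)
    return {e: counts[e] / runs for e in edges}

# Toy example: edge "e2" is a boundary in 2 of 3 randomized runs.
runs = [{"e2"}, {"e2", "e3"}, {"e1"}]
print(edge_boundary_probability(["e1", "e2", "e3"], runs))  # e2 -> 0.66..., e1 and e3 -> 0.33...
```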
Third, we develop a system for object recognition in 3D scenes, and test it on a large point cloud representing a city. We make technical contributions towards three key steps of our system: localizing objects, segmenting them from the background, and extracting features that describe them. We conduct an extensive evaluation of the system: we perform quantitative evaluation on a point cloud consisting of about 100 million points, with about 1000 objects of interest belonging to 16 classes. We evaluate our system as a whole, as well as each individual step, trying several alternatives for each component.

Source: https://www.cs.princeton.edu/research/techreps/TR-871-10
On the right side of the editing area is a vertical Sidebar containing further options for the new article, in a series of boxes. (If your computer's display is very small, the Sidebar may appear at the lower end of the page). Like the Comments box beneath the editing area, many of the boxes in the Sidebar can be minimized by clicking on the blue link in the box name.
Article types other than "news" may contain different boxes from those shown below. Whenever you edit any of the fields in the boxes, you will need to click either the Save All button at the top of the edit area, or any of the Save buttons in individual boxes.
The Actions menu contains short-cuts to commonly used functions:
The Status drop-down menu indicates the copy flow state of the article. There are four states that the article can be in:
The Language menu is to the right of the Actions and Status menus. If multiple languages have been configured for the publication, a drop-down menu will enable switching between translated versions of the article. If not, the language of the article will be displayed here.
This box enables you to schedule the article to be published, unpublished, promoted or demoted at a certain date and time. It is only visible if the article has the status Publish with Issue or Published. Click the Add Event button to open a window with a calendar and publishing options, such as showing the article on its section page, or the publication's front page, at the specified time.
Note that the date fields have a fixed syntax of YYYY-MM-DD (four year digits, two month digits and two day digits, in that order). If you enter dates manually in any other format, you may get incorrect results.
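As an aside, the fixed YYYY-MM-DD pattern can also be checked programmatically outside Newscoop; the snippet below is a generic illustration using Python's standard library and is not part of Newscoop itself.

```python
# Generic sketch: verify that a date string follows the YYYY-MM-DD pattern
# described above (four year digits, two month digits, two day digits).
from datetime import datetime

def is_valid_publish_date(text: str) -> bool:
    try:
        datetime.strptime(text, "%Y-%m-%d")
        return True
    except ValueError:
        return False

print(is_valid_publish_date("2014-03-07"))  # True
print(is_valid_publish_date("07-03-2014"))  # False: fields in the wrong order
```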
Clicking the Add button in the Geolocation box opens a pop-up window which enables you to set points of interest (map references) for the article. Points of interest from multiple articles can be displayed on a single map by your Newscoop templates.
First, enter a title for the map, and then search for a place name to centre the map on. Click the place name in the search results to centre the map on that location. Then use the vertical control on the left side of the map, with plus and minus buttons, to zoom in to an appropriate scale.
On the right side of the pop-up window, set the horizontal and vertical size of the map using the plus and minus buttons, and choose a base layer from the available mapping providers.
You can now add points of interest to the map by clicking on places, and entering names and descriptions for them.
Click on the Edit link to enter more details about the point of interest in a pop-up window, including external URL, image and video links. You can also change the colour of the point marker in the pop-up window.
Click in the Keywords field to enter words that describe your article to search engines, then click the Save button. In the Topics box, click the Edit button to select from a list of topics and subtopics in a pop-up window. Topics allow you to set attributes for the article, which may be used to display the article in a certain way.
If the topics already created by your Newscoop administrator are insufficient to describe the article, you can click the Add new topic button. New topics that you create should be categorised with a parent topic, if appropriate, and the language of the topic phrase.
After clicking the Save and Close button, the Topics you have selected are displayed in the Keywords & Topics box in the sidebar. Click the blue x icon on the right side of each row to remove a topic from the table.
See the chapter Topics to find out how topics are categorized.
Switches enable the contributor or editor to activate certain Newscoop features.
The switches for the default Article Type of 'news' are:
The Info box displays general information about the article.
The Media box has three tabs: Images, Slideshows and Files. On the Images tab, click the Attach button to select an image to go with the article.
This action opens a pop-up window with a tab Add New Image, which you can use to upload images from your computer. This tab supports drag and drop if your browser is recent enough, such as Mozilla Firefox version 3.6 or later.
The image you are uploading must have the minimum number of pixels for the smallest rendition used in your publication, in order to maintain quality. The image rendering feature of Newscoop means that the same image can be used at various crop sizes, in different parts of your publication's theme templates, without having to be resized manually. See the chapter Image Rendering for more details.
Alternatively, you can specify the URL of an image on another web server. This feature is useful for linking to a frequently updated image, such as the output from a webcam, which is published at a consistent URL. Of course, you should make sure that any external image used in your Newscoop publication does not breach the copyright of the photographer.
Then click the button Next: Upload and edit metadata in the upper right corner to enter details of the image.
This action opens the Edit Image Data box with fields for Description, Location and Photographer. You must enter some text in at least one of these fields to continue. This metadata will help you and your publication staff to find these images later. Then click the Next: Place Images button.
Another tab in the pop-up window enables you to attach an existing image from the Media Archive. There is a Search box for searching the text metadata of these existing images, such as location or photographer names.
Whether you have uploaded a new image or selected one from the archive, clicking the Place Images button opens a window in which you can preview the image renditions set for this publication, such as a 600 by 450 pixel crop.
Click on the radio button underneath the original image on the lower row, then click Set selected as default image to change the default image for the article. When multiple images are attached to the article, you can drag and drop alternative images to the upper row. This changes the image used for a particular rendition. To return to the default image for the rendition, click Use default in the upper right corner of each rendition.
You can adjust the cropping of an image rendition by double-clicking on it. A crop box will appear over the full-size image. Use your mouse to move and resize the crop box to your satisfaction, and then click the Save button. When you have finished adjusting the cropping of all the image renditions, click the Done editing button.
Finally, click the Finish button in the upper right corner to return to the Article Edit page.
Captions can be edited later by clicking the Edit Metadata button, which opens the Edit Image Data box.
Should rich text captions be enabled (see the chapter System preferences), click in the Description box to open a WYSIWYG editor toolbar which enables you to add formatting to the image caption. Click the small x icon in the upper right corner of the toolbar to close it.
If you have a selection of images to illustrate your article, you can use the Slideshow tab to create an article gallery. This will be displayed as a series of thumbnails on which the reader can click to view your images full-size. To create a new slideshow, click on the Slideshow tab, then the Create button.
In the pop-up window which opens, enter a Headline for the slideshow, and select a rendition size from the drop-down window. Then click the Create button.
Next, drag and drop your choice of images for the slideshow from either the Attached Images tab or Media Archive tab. You can also add an online video URL to the slideshow by clicking the Add video button.
Click any image in the slideshow row to edit its caption, in the field below the image.
The cropping for any image in the slideshow can also be adjusted in this pop-up window. Once you've finalised the caption and cropping, click the Save button to the right of the image.
The updated captions and crops should now be shown in the Slideshow window.
You can now return to the Edit Article page by clicking the Save and Close button in the upper right corner.
To edit the slideshow later, click on its name in the Slideshows tab of the Media box. Existing slideshows can be attached or detached from the article being edited by clicking the Attach/Detach button. This action opens the Attach slideshows box.
You can attach any kind of file you wish to an article. The article template must be set up to display these files, if readers are to have access to them. To begin, click the Attach button in the Files tab of the Media box. The pop-up window which opens has two tabs, Attach new file and Attach existing file. To attach a new file, click the Browse button in the first tab to select a file from your computer.
Enter a Description for the file, and optionally click the radio buttons to set translation and download options. Then click the Save button.
The attached filename will now be displayed in the Files tab of the Media box, with its description, format, size and a download link. To remove the file from the article, click the blue x icon in its row.
Files that have been uploaded to the Newscoop server remain available in the Attach existing file tab, even if they are not presently attached to an article.
Clicking the Edit button in the Related Articles box enables you to create a list of other relevant articles using a drag and drop interface. On the left side, click the Filter link to select a publication, issue and section to search from the drop-down menus. The final drop-down menu enables you to filter by other criteria, including Author or Language.
Check the Display Newswires box to show matches from agency feeds. There is also a field for text searches on article content, which has a magnifying glass icon. Search results are shown in the table beneath.
Click the View article link to preview the content of a search result on the right side of the pop-up window, then click the Close button to return to the Related Articles list.
When you have decided on a related article in the search results, drag and drop it into the Related Articles list on the right side of the pop-up window. Items in the list can be dragged to sort them into a new order. Then click the Save button. When the list is complete, click the Close button to return to the Edit Article page.
A Featured Article List is a custom article list created for a specific purpose. For example, it could be used in a particular page template to display a mixture of articles from different sections. To add the current article to a specific list, click the Edit button in the Featured Article Lists box. This action will open a pop-up window with a drop-down list of available featured article lists.
Click the Add to list button to add the current article to this specific featured article list. Drag and drop the articles in the list to change the ordering, if you wish, then click the Save button.
Finally, click the Close button to return to the article page. The names of the lists which the article is part of, if any, will be shown in the Featured Article List box. To create a new Featured Article List, see the chapter Managing content.
If a Complex Date field is part of the Article Type for the article you are editing, you will see a Multi date events box in the Sidebar. See the chapter Article Types for details of how to add this type of field.
Clicking the Edit button in this box opens a Multi date events pop-up window. This window enables you to set dates and times for events by clicking on the rectangular fields in the top-left corner, marked with calendar and clock icons.
For an event on a specific date, click one of the radio buttons for Start time, Start & end time, or All day, and select the relevant Complex Date from the drop-down menu beneath. In this example, the Complex Date refers to an open house viewing event which is expected to happen on several different days, and is part of a custom Article Type used in the Property section of the publication. These dates and times can then be displayed as part of an article about the property for sale, in a special treatment devised by your theme designer.
If you click Start & end time an extra field will appear for the end time, while All day events do not have a start time. For a regular event, you can click the Recurring button, and select daily, weekly or monthly repeats. Enter a text comment if you wish, then click the blue Save button.
The event will now be shown in the calendar to the right side of the pop-up window. It will also be shown to readers of the published article, if your publication's theme supports the feature. Clicking on an event in the calendar enables you to edit it.
Click the Close button in the upper right corner of the pop-up window to return to the Edit Article page.
At the lower end of the sidebar, you may see additional boxes related to Newscoop plugins that your system administrator has installed. See the chapter Using plugins for more details.
| https://sourcefabric.booktype.pro/newscoop-42-for-journalists-and-editors/the-sidebar/
Inspired by American studies of the impact of government programs on clients' political activity, Take a Number breaks new ground by investigating the lessons that people draw from their experiences with government bureaucracies, reaching very different conclusions about the effects of program participation in Canada. People's experiences with service providers matter. Far from being de-politicizing, negative experiences can be empowering, stimulating greater political interest and more political activity. In contrast to the findings of some American studies, there is no evidence that these encounters leave claimants in Canada with the sense that they are neither legitimate nor effective actors in the public sphere. Rather than discouraging participation in politics, being a recipient of means-tested benefits likewise seems to be politically mobilizing. Based on extensive survey data, Take a Number casts new light on the problem of non-take-up of social benefits. Elisabeth Gidengil reveals that those who are most likely to benefit are often unaware of government programs. The more demanding and intrusive the claiming process, the more likely claimants are to find it difficult to access the program. These experiences with government programs prove to have larger implications for users' confidence in institutions and their satisfaction with democracy. A wide-ranging study of the politicizing effects of social program participation, Take a Number introduces a compelling new dimension to our understanding of why some citizens are politically active while others remain quiescent.
About the author
Elisabeth Gidengil is Hiram Mills Professor and Director of the Centre for the Study of Democratic Citizenship at McGill University.
Editorial Reviews
"Take a Number takes a novel approach to studying the impact citizen interaction with government has upon political attitudes and behaviours, convincingly demonstrating the important effects such interactions can have and exploring questions that have yet to be seriously considered in the Canadian context." Michael McGregor, Ryerson University
"With its careful discussion of research methodology, the limits of the findings, and potential avenues for future research, Take a Number is written for researchers and academics. But it has opened up a neglected area of research important to all of us—and at just the right time. How we organize programs matters. The details matter. Public administration and how public servants deliver services matter. They matter not just for efficiency, which has been the driving objective in recent times of austerity, but for fairness." Literary Review of Canada
Other titles by Elisabeth Gidengil
Provincial Battles, National Prize? | https://49thshelf.com/Books/T/Take-a-Number |
Trapezoid-shaped semi-precious gemstone: a black and white Chinese Writing Rock designer cabochon, 33 mm long by 18 to 26.5 mm wide and 5.5 mm thick.
This unusual semi-precious gemstone cabochon material is known as Chinese Writing Rock.
Chinese Writing Rock is named for its interesting shapes of Feldspar crystals in a black Basalt.
It comes from Auburn, California.
Metaphysical Properties: Feldspar is said to assist one in detaching from the old, encouraging unconventional, exciting methods to attain one's goals. It can also assist in locating misplaced things. | https://www.barlowsgems.net/chinese-writing-rock-cabochon-23/
# Noleby Runestone
The Noleby Runestone, which is also known as the Fyrunga Runestone or Vg 63 for its Rundata catalog listing, is a runestone in Proto-Norse which is engraved with the Elder Futhark. It was discovered in 1894 at the farm of Stora Noleby in Västergötland, Sweden.
## Description
The Noleby Runestone was dated by Sophus Bugge to about 600 AD, and cannot be dated any younger than about 450 AD due to its language and rune forms. It is notable because of its inscription runo raginakundo which means "runes of divine origin" and which also appears in the later Sparlösa Runestone and the eddic poem Hávamál. This is of importance for the study of Norse mythology since it indicates that the expressions and the contents of the Poetic Edda are indeed of pre-historic Scandinavian origin.
The runic inscription consists of three lines of text between bands, with the second line considered untranslatable and often listed as being a "meaningless formula." The Noleby is the only runestone in Scandinavia that uses the star rune form for rather than for /a/ or /h/. The name Hakoþuz in the last line of the inscription is believed to mean "crooked one," although other interpretations have been suggested.
The Noleby Runestone is now located in the Swedish Museum of National Antiquities in Stockholm.
## Inscription
Below follows a presentation of the runestone based on the Rundata project. The transcription into Old Norse is in the Swedish and Danish dialect to facilitate comparison with the inscription, while the English translation provided by Rundata gives the names in the de facto standard dialect (the Icelandic and Norwegian dialect): | https://en.wikipedia.org/wiki/Noleby_Runestone
Simon: I have a question I would like to ask you.
I am wondering about the source of the teachings I am reading about in James’ book. I would like to get a book and be able to read and connect with them. I have come across the names – Vedas, Upanishads, Bhagavad Gita and Brahma Sutras. Could you help me to understand about the teachings and how I can get a good book on them? I can use the internet to order the books, so the name of the author would be helpful.
Sundari: No one person wrote the scriptures of Vedanta. It is an office which passes on to different people in a lineage called the sampradaya. Authorship does not belong to anyone, because the self wrote them. Vedanta is also called The Science of Consciousness because, like any science, its principles and revelations are based on unbiased knowledge which is independent of personal opinion.
The Upanishads for the most part are not ascribed to any one person, although sometimes some of them are referenced by an author. Authorship has no bearing on what they are imparting, because it is the timeless knowledge of the self.
The Bhagavad Gita is part of the Mahabharata, ostensibly written by Vedavyasa.
The Brahma Sutras are a collection of intellectual discourses regarding very subtle issues and were collected and published by Badarayana. They are complicated discussions which are not really necessary for most inquirers to read as part of their sadhana.
We encourage inquirers to read as many of the sacred texts as possible, but unless you have then unfolded by a qualified teacher, they will be interpreted through the filters of your conditioning. You cannot “study” Vedanta.
Vedanta is a doctrinal teaching at the end of the Vedas, which are the sacred, impersonal and eternal scriptures of Hindu tradition. There are four Vedas; the first three pertain to the person living in the world, covering different aspects of physical life and obtaining desired results. They are for the person identified with being a person, the doer who thinks they can accomplish fulfilment of their desires through right action, which right action can indeed deliver, to some degree.
The fourth and last Veda deals exclusively with the true nature of reality and negates the doer. As you know, Vedanta’s main teachings are found in the Upanishads, the Brahma Sutras and the Bhagavad Gita. The Vedas form the ancient tradition called the Sanatana Dharma (the Eternal Way), which originated in what is now called India, but was once called Bharat, meaning the Land of Light – or “The people who uphold righteousness” – between 6,000 and 7,000 years ago. This point has been argued by many scholars, but most agree that the Vedas are at least 3,500 years old, making Vedanta the oldest scriptural teaching on the planet.
Although Vedanta originates from Vedic culture, the basic teaching is universal in that its fundamental principle is that reality is a non-duality as opposed to a duality. It reveals that there is only one principle operating, in which everything has its origin and is made up of, and that is consciousness. Therefore Vedanta in essence is not specific to any culture, race or religion, as consciousness does not “belong” to anyone in particular. It is who we are because there is only consciousness.
The methodology or means of knowledge Vedanta uses to teach was developed and perfected by the Indian culture, most recently by Sri Adi Shankaracharya in the eighth century AD, and is accredited to Hinduism. Vedanta spread throughout the East, influencing the spiritual traditions of the whole of the Indochina Archipelago. In spite of India being taken over by many invaders over the millennia, the British in particular more recently, all of whom attempted to conquer India by cutting at the very foundation of her culture and spiritual tradition, the Sanatana Dharma is nonetheless still alive and well, informing the culture of modern day India and spreading throughout the rest of the world.
Vedanta is unlike any other known doctrinal or scriptural knowledge in that it is not the revealed “word” of an exalted deity as interpreted by man, nor the contention of any person or persons. Vedanta is also referred to as apauruseya jnanum, meaning revealed knowledge of divine origin. It is thousands of years old and has been handed down through the ages to a long line of qualified teachers, called the sampradaya. Vedantic scriptures are called sruti, “that which is heard.” Sruti is knowledge that is revealed to the human mind, not interpreted by it. A good example of revealed knowledge is Einstein’s “discovery” of the laws of relativity; or gravity; or Thomas Edison’s “invention” of electrical applications. Discover means to uncover. Gravity and the law of relativity describe how the world works according to the laws of physics. Einstein did not invent gravity or relativity. Edison did not invent electricity either; it was always here, until it was “discovered.” Gravity, relativity and electricity all function the same way whether they are understood or not, nor do they care whether you believe in them or not. It is the same with self-knowledge: it is always here, right in front of our noses and it is not changed by our ignorance of it. Because we are blinded by ignorance, i.e. duality, we do not see it.
Vedanta Is Not a Belief System
Even though the means of knowledge Vedanta uses originates in Hinduism, Vedanta is not a belief system, a religion or a philosophy, as it is often portrayed and thought to be by those who do not understand it. All beliefs and philosophical ideas are subjective interpretations based on dualistic thinking (ignorance). Self-knowledge is not personal truth. It is the Truth that consciousness, the Creator, the creation and the individual are one, although they exist in apparently different orders of reality. Vedanta is called a “brahma vidya,” the Science of Consciousness, because it is the science of life insofar as life is consciousness. Vedanta predates all known religious or philosophical paths because it is based on the irreducible and irrefutable logic of human experience, albeit unexamined. This experience has always been the same, in spite of changing conditions and in spite of the fact that consciousness is not understood by most.
The main purpose of Vedanta is not to explain the creation. However, one cannot understand the true nature of reality without examining and understanding the creation and the forces that run it. Vedanta deconstructs the creation in the light of self-knowledge, the knowledge of consciousness. Like other sciences, Vedanta is an objective and scientific analysis of the facts, not a personal or philosophical theory of life. Its main aims are to prove that non-dual consciousness is the nature of reality and to reveal that the self is consciousness. Vedanta dissolves the subject-object split by revealing that the belief that the subject (consciousness) and the objects you experience are two different things is false.
I have attached an excellent book for you that we have just published, called Vedanta: The Big Picture. We paid a professional writer to transcribe Swami Paramarthananda’s brilliant discourses explaining the methodology of Vedanta. It is now in very simple and accessible English without changing any of the meaning.
I scrolled down to the end of the thread of emails you sent to ShiningWorld. I am not sure if the questions you posed in your first email were ever answered. Is this so? It is possible that your email slipped through the cracks; as we have so many people we are coaching from all over the world, our workload is huge. It is impossible to remember everyone we write to. Anyway, if they were not answered and you would like me to do so, I am happy to oblige. | http://www.shiningworld.com/site/satsang/read/3043
Island Data develops software solutions that perform real-time analysis of open-ended, unstructured customer feedback to give tangible value to customers' voices. This actionable customer intelligence is the foundation for strategic business decisions that drive sales, retention, customer satisfaction and product quality, and ultimately profitability.
Challenge
The application has a high-end architecture and Version 1 was already developed and in production use. The application includes complex business rules including NLP algorithms. The client wanted to fix a problem in the existing application, including performance issues, while simultaneously developing Version 2. They needed a quick and efficient solution.
Approach
A team of experienced senior engineers and an application architect were deployed on the project. After a short ramp-up period that included in-house training by Island Data's application architect, the team began to fix bugs and add enhancements requested by the customer base. From the first meeting through project completion and beyond, Inforaise set the bar for clear and consistent communication through all phases of the project. The Inforaise and Island Data teams in essence became one, with continuous communication occurring despite the 12-hour time difference. | https://inforaise.com/case-studies-idc.html |
A whiplash injury occurs when your head suddenly moves backward and forward with force. This causes the muscles and ligaments in your neck to extend beyond the normal range of motion. Whiplash can happen for various reasons, and sometimes symptoms may not show up for several days or weeks.
Some people mistakenly believe that whiplash is simply a mild condition. Unfortunately, it can lead to long-term complications. If you want to learn more about the causes, symptoms and complications of whiplash, keep reading below.
Causes
One of the most common causes of whiplash is a car accident, particularly rear-end collisions. It may also result from physical abuse. Contact sports, such as football, karate or boxing, can also cause this injury. Other potential causes are falls or getting struck in the head by a heavy object.
Symptoms
The most common signs of whiplash include:
- Neck pain
- Decreased range of motion in the neck
- Neck stiffness
- Headaches
- Dizziness
More serious and less common symptoms include tinnitus (ringing in the ears), problems concentrating and memory loss. Remember that these symptoms may take a long time to develop after the actual accident.
Complications
Sometimes whiplash is mild and treatable with over-the-counter remedies. Unfortunately, it can also be worse. The sudden motion that causes whiplash can also cause a concussion or traumatic brain injury. This may manifest by worsening or persistent headaches, nausea, confusion or unconsciousness.
Whiplash may also result in chronic neck pain or headaches for several years after the initial injury. Damaged joints or ligaments in the neck can often cause this.
If you recognize any of the symptoms mentioned above, it is important to notify your physician or urgent care center. You should undergo proper testing and diagnosis to determine what you are experiencing. Your doctor may recommend pain medications, ice, exercise or physical therapy to treat your whiplash injury.
Read more about whiplash and neck strain at WebMD. | https://www.sfhlaw.com/blog/2017/08/whiplash-causes-symptoms-and-complications.shtml |
Our current national food system is highly dependent on fossil fuels from the farm to the plate. It is estimated that for each calorie of food consumed, between 7 and 10 calories of energy are expended. Fossil fuels are used for fertilizers, pesticides, fuel to power farm machinery, food processing plants and food storage in grocery stores and at home. According to Janine Benyus, author of Biomimicry, most people in the U.S. “eat” the equivalent of 13 barrels of oil a year.
What Changes Can be Made in the Food Production System?
Relocalization: Produce more basic foods locally.
Diet: Shift toward locally-grown, less-processed, seasonal foods.
Soil Fertility: Shift from fossil fuel fertilizers and monoculture to polyculture, crop rotation and composting.
Transportation: Reduce long distance food hauling.
Assess the current Newburyport food supply system.
What food is grown locally and by whom? Where do we get our other food? How many days supply of food is on hand at any one time?
Create a new food vision.
How would a new low energy food supply system operate to meet our community’s essential food needs?
How do we move from the current food supply system to our envisioned one, and what is the timetable?
Begin following the timetable and plan to periodically reassess progress and need for revision.
What Can I Do As An Individual/Family?
Become food aware. What am I eating? Where did it come from? What is in it? How is it packaged?
Create your ideal food vision.Will I eat less processed food, more fruits and vegetables? How will I move toward my food vision?
Learn how to food garden.Even growing some herbs or vegetables in a window box starts your journey.
Buy more locally grown produce, eggs and meat.
Shop more frequently at farmers markets, CSA’s and local farm stands. It not only reduces the energy input of your food, it stimulates the local economy.
Be active in local food groups.
Support and participate in groups involved with community gardens, slow food, farmers markets, transition food working group, etc. | https://transitionnewburyport.org/local-food/ |
Steinkamp, Simon Richard ORCID: 0000-0002-6437-0700 (2020). Visual Attention along the Visual Field’s Meridians - Computational Modeling of Neural and Behavioral Dynamics. PhD thesis, Universität zu Köln.
Abstract
The ability to orient attention towards things we consider important, but also to reorient it towards new, salient, and unexpected stimuli, is key to navigate life. To investigate these two aspects of attention, Posner’s spatial cuing task has been used for decades and has been influential as hemispatial neglect provides a good lesion model about the underlying brain regions (Posner et al., 1984). As hemispatial neglect most often extends along the visual field’s horizontal meridian, less is known about attentional (re)orienting along the vertical meridian. To fully understand neglect, however, it is important to study attentional (re)orienting along the whole visual field. In my first project, I investigated differences in vertical and horizontal (re)orienting on a behavioral and neural level, using statistical, machine learning, and dynamic causal modeling (DCM) analyses. Results suggest, that attentional (re)orienting along the two meridians is very similar in terms of reaction times and fMRI data. This indicates that attentional resources are distributed evenly across the visual field. Statistical analysis, however, can only provide indirect associations between neural and behavioral processes; for a direct link, the simultaneous modeling of brain and behavior is required. Rigoux and Daunizeau (2015) have shown that this can be done with behavioral DCM for binary measures. Continuous data, however, often contains more information, especially in Posner’s task. Hence, I extended and validated bDCM for continuous measures, making it available for more tasks. Furthermore, bDCM parameters could be used to classify vertical and horizontal runs, which was not possible with other data, showing that bDCM is sensitive to small variations in task design. | https://kups.ub.uni-koeln.de/52327/ |
No conditions!: US invites North Korea to resume dialogue
South Korea, the United States (USA) and Japan again reached out to Pyongyang on Monday for an unconditional dialogue, after the leader of North Korea, Kim Jong-un, called on his country to prepare both for negotiation and for confrontation.
“We continue to hope that the Democratic People’s Republic of Korea (DPRK, North Korea’s official name) will respond positively to our approach and to our offer to meet anywhere and anytime without preconditions,” US special envoy for North Korea, Sung Kim, said Monday.
Kim spoke in these terms at a press conference after a three-way meeting in Seoul with the head of the South Korean nuclear negotiations, Noh Kyu-duk, and the director general for Asia and Oceania of the Japanese Ministry of Foreign Affairs, Takehiro Funakoshi, as reported by the Yonhap news agency.
Kim Jong-un’s message
During the meeting, the 3 diplomats addressed the recent statements of the North Korean leader, Kim Jong-un. He called on his country to prepare “both for dialogue and for confrontation” with Washington.
It is the first message in which the North Korean leader shows willingness to dialogue with the United States since Joe Biden became president. The current administration is committed to an intermediate diplomatic path to that of its predecessors.
On the other hand, the US envoy assured that the Biden Administration will continue to implement the United Nations (UN) Security Council resolutions “to address the threat posed by the DPRK to the international community.” In addition, he urged other countries to do the same.
Sung Kim took over as special envoy for North Korea in May. Previously, he was Acting Under Secretary of the State Department for East Asia and the Pacific.
The diplomat has been in Seoul since last Saturday as part of a 5-day visit. It is aimed at coordinating positions with South Korea and Japan towards the North Korean regime.
At their meeting, the 3 countries agreed to continue cooperating to achieve substantial progress towards the complete denuclearization of the Korean peninsula. Likewise, they are considering the establishment of permanent peace in the territory, the South Korean Foreign Ministry detailed in a statement.
Ready for any eventuality
Before their meeting, Special Envoy Kim held a bilateral meeting with South Korean nuclear negotiator Noh. During the meeting, he pointed out that, like North Korea, his country will also be prepared for any eventuality.
Regarding the North Korean leader’s statements, Kim stressed that the United States is prepared for either course and is still waiting for news from Pyongyang.
The North Korean regime has not responded, at least publicly, to Washington’s requests. Resuming the denuclearization talks has been on the table since February.
Noh Kyu-duk said Seoul will continue to play a “necessary” role for the early resumption of the stalled dialogue.
“We wish to restore the structure in which inter-Korean relations and relations between the United States and the Democratic People’s Republic of Korea are mutually reinforcing in a mutually beneficial way,” said the South Korean official.
Pyongyang and Washington staged a historic rapprochement in 2018 during the US presidency of Donald Trump. That rapprochement stalled at the beginning of the following year due to differences over the approach to the North Korean disarmament process.
The disagreement between the United States and North Korea also affected relations between the North and the South, whose main military partner is the United States, a country Pyongyang considers a threat.
In addition, the US envoy reaffirmed Washington’s commitment to achieve denuclearization and cooperation between South Korea and North Korea through dialogue. | |
IISc follows a credit structure. Each subject has a specified number of credits. Each credit stands for one lecture hour per week or 3 hours of practical. The credit for the course is of the form x:y, where x is the credits for lecture hours and y is the credits for practical. In some cases, where there is limited scope for practical, y refers to the credits for solving problems through tutorial sessions of 3 hours per week.
In practice, you may (or will?) have to spend more than 3 hours per week for 1 credit of practical. Every subject in the CSA department has a lot of practical content, wherever there is any scope for it. Unlike undergraduate studies, practicals don’t have fixed timings. You just have to spend time and complete the assignments. You will find numbers associated with each of the subjects. For example,
E0 361 Topics in Databases 3:1
E0 223 Automated Verification 3:1
E1 254 Game Theory 3:1
Here the E in E0 indicates that the course is offered by the Division of Electrical Sciences. The 0 stands for the Computer Science discipline, the 1 for the Intelligent Systems and Automation discipline, and so on. The remaining digits form the course number; in E0 223, for example, the leading 2 indicates a 200-level course. A 200-level course is at the Master's level, while a 300-level course is at the research level.
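Tying the credit notation back to weekly workload, here is a small, purely illustrative sketch (not an official IISc tool), assuming 1 lecture credit corresponds to 1 hour per week and 1 practical credit to 3 hours per week, as stated above:

```python
# Illustrative sketch: convert an "x:y" credit string into weekly contact hours,
# using the convention above (1 lecture credit = 1 hour, 1 practical credit = 3 hours).
def weekly_hours(credits: str) -> int:
    lecture, practical = (int(part) for part in credits.split(":"))
    return lecture * 1 + practical * 3

for course, credit in [("E0 361 Topics in Databases", "3:1"),
                       ("E1 254 Game Theory", "3:1")]:
    print(f"{course}: at least {weekly_hours(credit)} hours per week")
```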
The courses at IISc are classified as hard core, soft core or elective for every branch. Hard core means the course is compulsory. Soft core means it belongs to a pool from which some specified number of credits has to be completed. A CSE student has to take a total of at least 16 credits. MTech students have to take at least 64 credits of which 24 credits are for the project. In the first two semesters, they will be taking 16-18 credits in each semester and the remaining credits in the subsequent semester. Of course, you can decide the exact split up depending on your curriculum requirements in consultation with your faculty advisor. | https://www.csa.iisc.ac.in/academics-all/courses/ |
Compensation architecture for the EC15 min.
The compensation awarded to EC members is primarily driven by the success of the company. In addition to a competitive fixed compensation, there is a performance-related component that rewards for performance and allows EC members to participate in the company’s long-term value creation. The overall compensation consists of the following elements:
- Annual base salary;
- Benefits (such as retirement benefits);
- Short-term incentive;
- Long-term incentive (share-based compensation).
To ensure consistency across the organization, roles within the organization have been evaluated using the job grading methodology of Korn Ferry Hay Group. The grading system is the basis for compensation activities such as benchmarking and determination of compensation structure and levels. For comparative purposes, dormakaba refers to external compensation studies that are conducted regularly by Korn Ferry Hay Group in most countries. Overall, these studies include the compensation data of 2,500 technology and industrial companies, including listed and privately held competitors in the security sector that are comparable with dormakaba in terms of annual revenues, number of employees, and complexity in the relevant national or regional markets. Consequently, there is no predefined peer group of companies that is used globally. Rather, the benchmark companies will vary from country to country based on the database of Korn Ferry Hay Group. For the CEO role, the following companies were included in the benchmark: Autoneum, Bucher Industries, EMS Chemie, Geberit, Georg Fischer, Landis+Gyr, Logitech, Lonza, OC Oerlikon, Sonova, and Sulzer (Swiss listed industrial companies of similar size in terms of market capitalization, revenue, and employees).
The compensation paid to the EC members must in principle be based on the market median in the relevant national or regional market and must be within a range of –20% to +35% of this figure. The variable component of compensation (short-term and long-term incentives) is targeted to make up at least 50% of the overall compensation.
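For illustration only, the band around the market median translates into a simple range check; the figures below are invented.

```python
# Illustrative sketch of the benchmarking rule above: target compensation is based
# on the market median and should stay within -20% to +35% of it.
def within_band(compensation, market_median, lower=-0.20, upper=0.35):
    return market_median * (1 + lower) <= compensation <= market_median * (1 + upper)

print(within_band(120_000, 100_000))  # True: within 80,000..135,000
print(within_band(140_000, 100_000))  # False: above the +35% bound
```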
1. Annual base salary
EC members receive an annual base salary for fulfilling their role. It is based on the following factors:
- Content, responsibilities and complexity of the function;
- External market value of the respective role: amount paid for comparable positions in the industrial sector in the country where the member works;
- Individual profile in terms of skill set, experience, and seniority.
2. Benefits
As the EC is international in its nature, the members participate in the benefits plans available in their country of employment. Benefits consist mainly of retirement, insurance, and health care plans that are designed to provide a reasonable level of protection for the participants and their dependents in respect to the events of retirement, disability, death, and illness/accident. The EC members with a Swiss employment contract participate in the occupational pension plans offered to all employees in Switzerland, which consist of a basic pension fund and a supplementary plan for management positions. The pension fund of dormakaba in Switzerland is in line with benefits provided by other Swiss multinational industrial companies.
EC members under foreign employment contracts are insured commensurately with market conditions and with their position. Each plan varies in line with the local competitive and legal environment and is, as a minimum, in accordance with the legal requirements of the respective country.
Further, EC members are also provided with certain executive perquisites such as company car or car allowance, representation allowance, and other benefits in kind according to competitive market practice in their country of employment.
3. Variable compensation
The variable compensation consists of a short-term incentive (STI) and long-term incentive (LTI).
3.1 Short-term incentive
The short-term incentive is defined annually as a cash payment and aims to motivate the participants to meet and exceed the company’s financial objectives, which are defined in line with the Group’s strategy. Pursuant to the Article of Incorporation 24 the short-term incentive may not exceed 150% of the individual annual base salary for the EC members (cap).
Following the “We are ONE company” principle, the individual short-term incentive paid to the EC members is strictly based on Group and segment financial objectives and not on individual goals. For the CEO and other EC members (CFO, CTO [Chief Technology Officer], CMO [Chief Manufacturing Officer]), the incentive formula relates exclusively to Group results. For the Chief Operating Officers (COOs), it relates to segment results and Group results as follows:
Access Solutions (AS) COOs: 10% Group results, 30% results of all AS segments and 60% results of their own AS segment. Rationale: the AS segments (AMER, APAC, DACH, EMEA) are interdependent, therefore the weighting strongly encourages collaboration between AS segments and rewards the AS collective performance and the individual performance of each AS segment in a balanced manner.
Key & Wall Solutions COO: 30% Group results and 70% segment results. Rationale: Key & Wall Solutions is an independent global segment; the 30%/70% split between the Group’s and the segment’s results is well balanced in terms of rewarding the collective performance of the Group and the individual performance of the segment.
The business results are compared to the previous year’s results to drive a continuous improvement of the business achievements, year after year.
The incentive formulas for all EC members are built around the following principle: the short-term incentive consists of a predefined share of profit, which is determined for each function individually, multiplied by a growth multiplier and, for COOs, by a net working capital (NWC) multiplier (see the following illustration).
The predefined share of profit is expressed as a percentage of Group net income or as a percentage of segment EBIT. The growth multiplier depends on the company’s or on the segment’s revenue growth compared to previous year and is capped at 1.6 in case of substantial growth; the net working capital (NWC) multiplier depends on the segment’s change of net working capital compared to previous year and is capped at 1.4 in case of substantial reduction of net working capital.
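As the original illustration is not reproduced here, the following sketch shows the structure of that calculation in code. The profit share, multiplier values and salary figures are invented placeholders; only the 1.6 and 1.4 multiplier caps and the 150% payout cap come from the text above.

```python
# Hypothetical sketch of the STI structure described above: a predefined share of
# profit times a growth multiplier (capped at 1.6) and, for COOs, an NWC multiplier
# (capped at 1.4), with the total payout capped at 150% of annual base salary.
def sti_payout(base_salary, profit, profit_share, growth_multiplier,
               nwc_multiplier=1.0, cap_pct=1.5):
    growth_multiplier = min(growth_multiplier, 1.6)   # cap in case of substantial growth
    nwc_multiplier = min(nwc_multiplier, 1.4)         # cap for substantial NWC reduction
    payout = profit * profit_share * growth_multiplier * nwc_multiplier
    return min(payout, cap_pct * base_salary)         # overall cap: 150% of base salary

# Example with invented figures (CHF): 0.05% share of a segment EBIT of 200m,
# growth multiplier 1.2, NWC multiplier 1.1, base salary 500k.
print(sti_payout(500_000, 200_000_000, 0.0005, 1.2, 1.1))  # -> 132000.0
```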
This formula is aligned to the business strategy of profitable growth because it rewards for bottom-line (Group net income or segment EBIT) and top-line results (sales growth).
Further, for the COOs responsible for a segment, the formula also includes an NWC multiplier, which reflects the focus on efficient management of the company’s financial resources.
The calculation of the short-term incentive is based – just as the audited financial statements of the Group – on the actual figures recorded in the financial reporting system. Special effects that have a material impact on the financial results, such as significant acquisitions and divestments or extraordinary results representing merger-related integration costs, are excluded so that the financial results are comparable to previous year. There was no such special effect in the reporting year.
3.2 Long-term incentive
The purpose of the long-term incentive is to give the EC an ownership interest in dormakaba and a participation in the long-term performance of the company and thus to align their interests to those of the shareholders.
At the beginning of the long-term incentive plan cycle (grant date), EC members are awarded restricted shares and performance share units of dormakaba on the basis of the following criteria:
- External benchmark: typical grant size of long-term incentive for a similar function in the relevant market and positioning of the individual’s total direct compensation compared to that benchmark. Total direct compensation includes fixed base salary plus short-term incentive plus allocation under the long-term incentive plan.
- Individual performance: measured against predefined priorities in the financial year prior to the grant, as documented within the performance management process. The long-term incentive is the only compensation program that takes into consideration the individual performance of the EC members. For each member, a list of individual strategic priorities is determined before the start of each financial year based on the mid-term plan of the Group, segment or function. At the end of each financial year, the individual performance of the member is evaluated against those strategic priorities and will be considered for the determination of the grant size of the long-term incentive in the following financial year.
- Strategic importance: impact of the EC member's projects on the long-term company's success.
- Retention: desire to retain the person to the company and to its overall long-term value creation by offering restricted shares and performance share units subject to a three-year vesting period.
Based on the above criteria, the CEO formulates a proposal for long-term incentive awards of the individual EC members and other members of Senior Management, which is subject to approval by the Compensation Committee. For the CEO, the Compensation Committee Chair formulates a proposal that is subject to the approval of the Compensation Committee. Starting with financial year 2018/19, the long-term incentive grant size is determined as a monetary amount (in previous years: number of shares). Pursuant to the Article of Incorporation 24 the fair value of the long-term incentive at grant may not exceed 150% of the individual annual base salary for the EC members (cap).
The long-term incentive award is split into two components: two-thirds are granted in the form of restricted shares of dormakaba subject to a three-year blocking period. This component of the award is designed to provide participants an ownership interest in the long-term value creation of the company by making them shareholders. The remaining third of the award is granted in the form of performance share units of dormakaba subject to a three-year performance-based vesting period. This component of the award is designed to reward participants for the future performance of the earnings per share (EPS) and the relative Total Shareholder Return (TSR) of the company over the three-year performance period. The vesting level may range from 0% to a maximum of 200% of the original number of units granted (maximum two shares for each performance share unit originally granted).
The TSR performance condition has been introduced in the long-term incentive plan starting with the grant in September 2018. TSR is measured relative to companies of the Swiss Market Index Mid (SMIM) and provides for full vesting at median performance. The EPS growth target is fully aligned with dormakaba’s communicated strategy of organic sales growth, which is to outperform weighted GDP growth by 2 percentage points. The vesting formula for both performance indicators is illustrated below; there is no vesting below the threshold levels of performance:
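Since the original vesting chart is not reproduced here, the sketch below shows one plausible piecewise-linear shape for such a curve. The threshold, target and maximum levels are invented placeholders; only the 0% to 200% vesting range is taken from the text above.

```python
# Hypothetical sketch of a piecewise-linear vesting curve for performance share
# units: 0% below a threshold, rising linearly to 100% at target and to the
# 200% cap at a maximum level. Threshold/target/maximum values are invented.
def vesting_level(performance, threshold, target, maximum):
    if performance < threshold:
        return 0.0
    if performance <= target:
        return (performance - threshold) / (target - threshold)
    if performance <= maximum:
        return 1.0 + (performance - target) / (maximum - target)
    return 2.0  # capped at 200% (two shares per unit granted)

# Example: EPS growth of 6% p.a. against an invented threshold of 2%, target 5%, maximum 8%.
print(f"{vesting_level(0.06, 0.02, 0.05, 0.08):.0%}")  # -> 133%
```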
In summary, while the long-term incentive award is granted on the basis of factors related to the function (strategic importance) and the individual (positioning versus benchmark, performance, retention need), the vesting of the performance share units depends on future company performance (measured by EPS development and relative TSR).
Restricted shares and performance share units are usually awarded annually in September. In case of voluntary termination by the participant or termination for cause by the company, restricted shares remain blocked and the performance share units are forfeited without any compensation. In case of termination without cause or retirement, restricted shares remain blocked and the performance share units are subject to a pro rata vesting at the regular vesting date. In case of disability, death or change of control, the blocking period of the shares is lifted and performance share units are subject to an accelerated pro rata vesting based on a performance assessment by the BoD (see also Corporate Governance Report 'Changes of control and defense measures'). The conditions for the award of shares and performance share units are governed by the stock award plans of dormakaba.
Shares awarded in recent years have come from treasury shares and to a small extent from conditional capital.
Starting with the long-term incentive grant in September 2019, the mix between restricted shares and performance share units will be shifted towards more performance share units to further align to market practice: half of the grant will be awarded in form of performance share units and half of the grant will be awarded in form of restricted shares. Further, the long-term incentive awards will be subject to clawback and malus provisions. In certain circumstances, such as in case of financial restatement due to material non-compliance with financial reporting requirements or of fraudulent behavior or substantial willful misconduct, the BoD may decide to suspend the vesting or forfeit any granted long-term incentive award (malus provision) or to require the reimbursement of vested shares delivered under the long-term incentive (clawback provision).
4. Employment contracts
The EC members are employed under employment contracts of unlimited duration that are subject to a notice period of up to twelve months. EC members are not contractually entitled to termination payments or any change of control provisions other than the accelerated vesting and/or unblocking of share awards mentioned above. The employment contracts of the EC members may include non-competition clauses for a duration of up to a maximum of two years. In cases where the company decides to activate the non-competition provisions, the compensation paid in connection with such non-competition provisions may not exceed the monthly base salary, or half of the total compensation, for a period of twelve months.
5. Shareholding ownership guideline
The EC members are required to own a minimum multiple of their annual base salary in dormakaba shares within five years of hire or promotion to the EC, as set out in the following table.
CEO: 300% of annual base salary
EC member: 200% of annual base salary
To calculate whether the minimum holding requirement is met, all vested shares are considered regardless of whether they are restricted or not. However, unvested performance share units are excluded from the calculation. The Compensation Committee reviews compliance with the share ownership guideline on an annual basis. In the event of a substantial rise or drop in the share price, the BoD may, at its discretion, review the minimum ownership requirement.
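A minimal sketch of that check follows; the share counts, price and salary used in the example are invented placeholders, while the 200% requirement for EC members comes from the table above.

```python
# Hypothetical sketch of the share ownership check described above: vested shares
# (restricted or not) count toward the requirement; unvested performance share
# units are deliberately excluded.
def meets_ownership_guideline(vested_shares, share_price, base_salary, required_multiple):
    holding_value = vested_shares * share_price
    return holding_value >= required_multiple * base_salary

# Example: an EC member (200% requirement) holding 1,500 vested shares at CHF 700.
print(meets_ownership_guideline(1_500, 700, 500_000, 2.0))  # 1,050,000 >= 1,000,000 -> True
```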
6. Assessment of actual compensation paid to the EC in the 2018/19 financial year
In comparison to the previous year, total direct compensation (TDC) of the EC decreased by 12%. There are several factors that impacted the level of actual compensation paid to the EC in the 2018/19 financial year, which are summarized below.
- Change in EC composition: three former EC members are no longer reported in this financial year. All relevant compensation was reported in the Compensation Report for the financial year 2017/18. On the other side, one new EC member is reported on a full-year basis in this financial year versus pro rata in previous year.
- Changes in currency exchange rates: five members of the EC are paid in foreign currencies (three in Euros). Their compensation is converted into Swiss francs for the disclosure in this report. Due to the stronger Swiss franc against other major currencies compared to the previous year, especially with the Euro, the amounts disclosed in Swiss francs decreased even when the compensation amount in local currency has remained unchanged.
- Base salary increases: the base salary of one EC member was adjusted during the reporting year. The base salaries of the other EC members did not change compared to the previous financial year. The base salary increase amounts to 0.8% for the EC overall.
- STI payout: the STI payout formula is based on performance improvements versus the previous year (and not on the achievement of budgeted targets). A payout of 111% of annual base salary (on average) for the EC members corresponds to the level of expected performance for the financial year 2018/19. The STI payout of the EC members reflects the underlying financial performance in the reporting year, especially the increase in Group net income, which is the main driver of the STI payout for the CEO and EC members with global responsibility (CFO, CTO, CMO). All segments (COOs) contributed to the increased profitability compared to the previous year (increased EBITDA and EBIT as well as increased EBITDA margin and EBIT margin). All segments except AS AMER contributed to the organic sales growth of the Group. In the reporting year, the STI payout of EC members is 94% of annual base salary on average (previous year 84%). For the CEO, the STI payout is capped at 150% of annual base salary, as in the previous year and as foreseen by the Article of Incorporation 24. Without applying the cap in both years, the STI amount in the reporting year would have been 8% higher than in the previous year.
- LTI grant in September 2018: the long-term incentive grant size was determined as a monetary amount for the first time (previous year: number of shares). To determine the grant size following the change, the historical grant value as well as the allocation criteria that had been in place for several years (described under section 3.2), such as individual performance in the previous year, strategic importance of the projects under responsibility, position against benchmark and retention need, were considered. Based on those factors and on the individual performance (achievement of strategic priorities in the year preceding the grant date), the LTI grant size of the CEO and one other EC member was increased compared to the previous year, while it was decreased for two EC members. For the other EC members, the LTI grant size remained unchanged compared to the previous year. The strategic priorities of the CEO for financial year 2017/18 (considered for determining the grant size in the reporting year) are detailed below and have been implemented successfully.
Strategic priorities of the CEO (financial year 2017/18)*
Business performance
Achieve business performance
Business development
Ensure post-merger integration of the acquired businesses according to plan. Selectively establish further acquisitions/divestments in accordance with the defined strategic priorities
Group innovation
Drive the digitization initiatives (cloud-based solutions) and strengthen the Information Security Management System (ISMS)
Supply chain management
Deliver the defined procurement savings and execute the defined lean and Industry 4.0 projects
Organization
Ensure succession plans for key positions, strengthen leadership teams and develop/retain key talents. Conduct dorrmakaba dialogue (global all-employees engagement program)
*This information is disclosed in summarized form for confidentiality reasons
The performance share units granted under the long-term incentive in September 2015 vested in September 2018 based on the EPS growth over the three-year vesting period at a vesting level of 102.9%. The share price at vesting amounted to CHF 713.00 compared to CHF 653.00 at grant.
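As a purely illustrative sketch of the vesting arithmetic described above: the vesting level and share prices are taken from the text, but the number of granted units is a made-up figure, since per-unit grant sizes are not disclosed here.

```python
# Hypothetical illustration of PSU vesting; granted_units is an assumed number.
granted_units = 1_000                      # assumption, not a disclosed figure
vesting_level = 1.029                      # 102.9% vesting based on EPS growth
price_at_vesting = 713.00                  # CHF, from the report
price_at_grant = 653.00                    # CHF, from the report

vested_shares = granted_units * vesting_level
print(f"value at grant:   CHF {granted_units * price_at_grant:,.0f}")
print(f"value at vesting: CHF {vested_shares * price_at_vesting:,.0f}")
```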
Variable compensation forms a major part of total direct compensation (TDC). The percentage of overall compensation paid to the EC as variable compensation in the reporting year was 67% (excluding benefits and social security contributions), broadly stable versus the previous year (64%). Variable compensation paid out in shares accounted for 32% of TDC (previous year 30%), which is in line with the compensation strategy (communicated in previous Compensation Reports) to award 30% of total compensation in shares by applying compensation increases primarily to the long-term incentive component rather than to the other compensation elements.
At the AGM 2017, the shareholders approved a maximum aggregate amount of CHF 19,500,000 for the EC for the financial year 2018/19. The compensation effectively awarded of CHF 12,915,283 is within the limits approved by the shareholders.
As at 30 June 2019, in compliance with the Articles of Incorporation, there were no outstanding loans or credit facilities between dormakaba and current or former EC members, or parties closely related to them. Investments held by EC members or related persons (including conversion and option rights) – if any – are listed here. | https://report.dormakaba.com/2018_19/ar-07_07-architecture-ec/ |
Ontario’s per-person debt burden will soon exceed Quebec’s
Ontario and Quebec, Canada’s two largest provinces, recently released their fall updates on the state of government finances. The updates highlight a marked distinction in the progress that each province has made in terms of deficit reduction. Whereas Ontario continues to run large budget deficits, Quebec is on track to eliminate its deficit and return to a balanced budget this year. As a result, Ontario’s debt burden continues to grow and will soon be, on a per-person basis, larger than Quebec’s.

Government debt can pose serious economic problems. A robust body of research finds that high levels of government indebtedness hinder long-term economic growth. In the short term, growing debt can require more government revenue to go to servicing past debt rather than to important spending programs such as health care and education, or to tax relief.
Ontario and Quebec currently spend between nine and 10 cents out of every government revenue dollar on debt service payments. A growing debt burden (and, eventually, higher interest rates) could cause the share to rise, other things equal.
Let’s be clear: neither province is a model of fiscal prudence. They are, after all, Canada’s two most indebted provinces. And Quebec’s longstanding fiscal problems are well-documented. In fact, Quebec’s net debt (after adjusting for financial assets) is projected to be 49.7 per cent of GDP this year, the highest among all provinces and American states.
Although the province continues to carry a large debt burden, there are encouraging signs that Quebec may finally have turned a corner, with the recent fiscal update projecting that the province will balance its books for the first time since the recession and then run a string of balanced budgets. With its annual deficit eliminated, Quebec is now in a position to begin reducing its debt burden.
The situation in Ontario is markedly different. Ontario expects to run a cumulative $12 billion deficit this fiscal year and next. The government says it will finally balance the budget in 2017/18 but its plan to do so remains precarious and unlikely, as its own Financial Accountability Office has noted.
Even taking the government’s hopeful projections at face value, Ontario will continue to rack up debt in the coming years—so much so that Ontario will soon overtake Quebec in terms of net debt per person. In 2009/10, Quebec’s per-person net debt burden reached $19,329, approximately $4,400 higher than Ontario’s. That gap has shrunk considerably and is projected to be just $1,100 this year.
The latest government projections suggest the gap will continue to shrink next year and then, in 2017/18, Ontario’s per-person net debt will finally exceed Quebec’s at $22,647 (see chart below). While Quebec’s per-person debt burden remains substantial, Ontario’s will soon be larger. And again, the projections for Ontario are likely optimistic. If the government’s hopeful forecasts do not come to pass, Ontario’s debt per person will grow even larger.
Examining the debt burden in each province relative to the size of the economy provides further evidence of Ontario converging quickly with Quebec. Quebec’s net debt as a share of GDP is projected to fall from its current level of 49.7 per cent to 43.0 per cent by 2019/20. Meanwhile, Ontario expects its debt-to-GDP ratio to hover very close to its current 40 per cent level for the near term.
Quebec is hardly a paragon of fiscal virtue, but its recent progress has enabled the new government to start a much-needed discussion about tax reform including a commitment to reduce the general corporate tax rate. Ontario, on the other hand, continues to enact new tax increases.
While Quebec certainly has to do more work to get its fiscal house in order, Ontario is headed on a path that will see its per person debt converge with and exceed that of La Belle Province. | https://business.financialpost.com/opinion/quebec-and-ontario-the-great-debt-convergence |
From the Right: From Churchill's loss, we've learned nothing
Many aspects of our 2020 election remain stunning and puzzling. For instance, how can Biden win the presidency, based on current votes, and yet Republicans did not lose a single seat in the House and in fact gained 10 seats?
If the Republicans win one of the two Senate seats in the special election in Georgia on Jan. 5, they will maintain control of the Senate. Trump made significant gains in attracting Black and Hispanic voters yet seems to have lost narrowly in a handful of key swing states.
Admittedly, the Biden wins in these key swing states do not pass the "smell test." After vote counting was stopped in the early morning hours in these key states, when Trump had sizable leads, we woke up to Trump suddenly trailing.
While I was stunned, my first thought reflected back on the outcome of the British elections in 1945 after Hitler had been defeated. Throughout WW II, Winston Churchill had been the heart and soul behind Britain's resolve to never surrender to Hitler’s threats and military might. At the point of victory in 1945, Churchill's approval rating was an astounding 83%! Yet only three months later Churchill and his Conservative Party suffered a resounding defeat to the Labour Party headed by Clement Attlee.
Attlee had about as much charisma as Biden. Attlee won in a landslide, while Biden's victory appears very narrow in scope. Churchill had led Britain through its darkest days, and yet in the first election after winning the war in Europe, he and his party lost in a landslide.
As a result of Churchill's loss, Britain turned to socialism, which certainly resembles the direction of today's Democratic Party driven by the socialist ideas of Bernie Sanders and Alexandria Ocasio-Cortez.
Ironically, Churchill was once again elected prime minister in 1951 after a series of failed socialistic policies under the Labor government.
There is already speculation that if Trump's legal challenges fail in 2020 that he could run again in 2024 and win. The direction of America will significantly be determined in the special election in Georgia. If the Republicans win one or both seats, they will likely be able to thwart much of the socialist ideas embraced by the 2020 Democrat platform. But a loss of both Senate seats to the Democrats would doom our republic as we have known it historically.
The Democrats seek power and the retention of power. Make no mistake, a Biden-Schumer-Pelosi administration would pack the Supreme Court, would grant amnesty to as many illegal aliens as possible, do away with the Senate filibuster rule, and grant statehood to Washington D.C. and Puerto Rico. Washington D.C. voted 95% for Biden in the 2020 election. The statehood action would guarantee four solid Democratic votes in the Senate for decades.
After Churchill's loss in 1945, the failures of socialism haunted Britain for years, until Margaret Thatcher became prime minister. The "Iron Lady" was cut from the same cloth as Churchill and reversed many of the socialist policies that had dominated British government and society.
Thatcher like Trump shared several key policies regarding economics. Both were big on deregulation, reducing unemployment, strong on the military and resolute in their beliefs.
Churchill was certainly among the greatest world leaders of the 20th century. His indomitable spirit kept Britain together during its darkest hours. Yet in a period of three months he went from an 83% approval rating to losing his election by a large margin. Trump was forced to fight against the Washington D.C. swamp and entrenched bureaucrats who resented a non-politician winning the presidency and keeping his campaign promises.
Churchill and Thatcher were leaders of immense fortitude as is Trump. For four years we watched a media that spewed anti-Trump assaults on a 24/7 basis joined by the liberal heads of Facebook, Twitter and Google.
Since Churchill's defeat in 1945, and Britain’s embracing socialism, we apparently have learned nothing from history. A Biden administration coupled with control of both the House and Senate will end any chance of a Republican ever again winning the presidency.
We will become a one-party nation with increasing governmental regulations. China will overtake us economically and militarily and our influence as the leader of the free world will be greatly diminished. The two Senate races in Georgia will determine the fate of our republic and thwarting socialism. | https://www.dailycommercial.com/story/opinion/columns/2020/11/20/right-churchills-loss-weve-learned-nothing/6358511002/ |
Numpy log1p() is a math function that helps the user to calculate the natural logarithmic value of x+1 where x belongs to all the input array elements.
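As a quick, illustrative usage sketch (not taken from the linked article), np.log1p is valuable precisely when x is tiny and log(1 + x) would lose precision:

```python
import numpy as np

x = np.array([0.0, 1e-10, 1.0, 9.0])
print(np.log1p(x))        # [0.00e+00 1.00e-10 6.93e-01 2.30e+00]
print(np.log(1 + 1e-10))  # noticeably less accurate than np.log1p(1e-10)
```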
In this article we will understand np.where, np.select with some examples.
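A minimal sketch of the two functions mentioned above (illustrative only, not the article's own examples):

```python
import numpy as np

x = np.arange(6)                    # [0 1 2 3 4 5]
print(np.where(x < 3, x, -x))       # [ 0  1  2 -3 -4 -5]

conditions = [x < 2, x < 4]
choices = [x * 10, x * 100]
print(np.select(conditions, choices, default=-1))  # [  0  10 200 300  -1  -1]
```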
In this article, I’ll introduce all of the excellent built-in functions in NumPy for us to generate n-dimensional arrays with certain rules. Please be advised that random arrays can be a separated topic so that they will not be included in this article.
Numpy linalg matrix_rank() calculates the Matrix rank of a given matrix using the SVD method. It returns the matrix rank of an array using the SVD method.
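For instance (an illustrative sketch, not from the article itself):

```python
import numpy as np

a = np.array([[1.0, 2.0],
              [2.0, 4.0]])               # second row is a multiple of the first
print(np.linalg.matrix_rank(a))          # 1
print(np.linalg.matrix_rank(np.eye(3)))  # 3
```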
Numpy tensordot() Function Example in Python. The tensordot() function sums the products of a’s elements and b’s elements over the axes specified by a_axes and b_axes.
Numpy linalg cond() function computes the condition number of a matrix. The cond() function is capable of returning the condition number.
Numpy linalg norm() method is used to get one of eight different matrix norms or one of the vector norms. It depends on the value of the given parameter.
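A short illustrative sketch of how the ord parameter changes the result (not taken from the article):

```python
import numpy as np

v = np.array([3.0, 4.0])
m = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.linalg.norm(v))             # 5.0, Euclidean (2-norm) by default
print(np.linalg.norm(v, ord=1))      # 7.0, sum of absolute values
print(np.linalg.norm(m, ord='fro'))  # Frobenius norm, sqrt(30) ~ 5.477
```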
Understanding numpy.random.choice, numpy.random.rand, numpy.random.randint, numpy.random.shuffle, numpy.random.permutation
Python numpy.trace() method is used to find the sum of diagonals of the array. The trace() method returns the sum along diagonals of the array.
In Stats for Data Science, you will be working on an end-to-end case study to understand different stages in the data science life cycle. This will deal with 'data manipulation' with pandas and Numpy, 'data visualization' with Matplotlib, and the basic statistics which are required. After data manipulation, data visualization, and the basic statistics, an ML model will be built on the dataset to get predictions. You will learn about the basics of the Scikit-learn library to implement the machine learning algorithm.
I think that the best way to really understand how a neural network works is to implement one from scratch. That is exactly what I am going to do in this article. I will create a neural network class, and I want to design it in such a way as to be more flexible.
How to Create Numpy Arrays? A quick overview about different ways of creating numpy arrays
Learn NumPy Copy and View - Deep Copy, shallow copy and No copy in NumPy, NumPy view creation and types with examples, NumPy View vs Copy
Beginner’s Guide to Data Analysis using numpy and pandas. Oftentimes, we tend to forget that the pandas library is built on top of the numpy package.
Get image RGB using PIL(Pillow) and modify using NumPy. In this article, I will demonstrate how to use two libraries in Python — PIL and NumPy — to achieve most of the basic photo editing features in only 2–3 lines of code.
The content present in NumPy arrays can be accessed, and we can also make changes through indexing, as we got to know in the previous module. Another way of manipulating data in NumPy arrays is through slicing. We can also try changing the position of the elements in the array with the help of their index numbers. Slicing is the extension of Python's basic concept of changing position to arrays of N dimensions.
NumPy along with Matplotlib is a fundamental feature of Python. Learn Numpy Matplotlib Tutorial to learn basics of Matplotlib. Learn various types of matplotlib charts like histogram, bar chart, scatter plot, box plot etc
Learn about stacking and joining in Numpy. These are important functions for array in NumPy. Learn about dstack, hstack and vstack functions in NumPy.
In our previous article we have seen how to create an array using numpy. Once the creation is done, we must be able to access them. In this article we’ll see how to access an array by indexing, slicing of an array and some other functions that are involved in the creation of array. | https://morioh.com/topic/numpy |
On February 28 the Github site was the target of one of the biggest DDOS (denial of service) attacks in history. The attack exploited a vulnerability in memcached, a distributed memory-caching system used to speed up dynamic database-driven sites; this service basically creates a cache of data in RAM to reduce the number of times an external data source (such as a database) must be accessed. It was precisely this functionality and ease of caching in RAM that was used to mount a gigantic denial of service attack.
What is Github?
Github offers extra features on top of Git, one of several file version control systems for programmers. It allows you to develop projects where multiple people contribute simultaneously, editing and creating new files without changes being overwritten when code is worked on at the same time. One of the main applications of Git is precisely this: allowing a file to be edited at the same time by different people. GitHub is a social network for developers and is widely used around the world, offering a variety of functions such as updates and news feeds, followers, and a graph with data on how developers are contributing to the versions of their repositories.
One of the first users of the platform was Linus Torvalds, who developed the Linux kernel and needed a secure, functional and cooperative repository so that code could be written quickly and collaboratively. In other words, Github is one of the world’s largest collaborative repositories for developers, playing a key role in the development of many applications around the world, and last week it was the target of one of the largest denial-of-service attacks in history.
DDOS attacks
A distributed denial-of-service (DDoS) attack is one in which multiple computers attack a target, such as a server, website, or other network resource, and overload it, causing a denial of service to users of the attacked target. The large volume of messages, connection requests, access requests or malformed packets sent to the target system causes a huge decrease in system speed, possibly leading to failures and shutdowns, denying service to legitimate users or systems. In other words, the attacker creates a gigantic number of access requests on the target, causing it to become overloaded and unable to serve the requests of real users. For a long time this method of attack was known to operate through botnets (zombie networks), which are often made up of computers infected with some kind of malware. The computers were under the control of the attackers, enabling the fake accesses that overload the attacked system. However, this attack, with its surprising numbers, used a new methodology.
Memcached and the vulnerability
The discovery of new amplification vectors that allow very large amplifications rarely occurs. This new vulnerability, however, is in this category. If 2018 was still not exciting enough in the information security area, say hello to a new type of denial-of-service attack: User Datagram Protocol (UDP) amplification via servers running memcached, the open source caching system mentioned above. Memcached is only meant to be used on machines that are not exposed to the internet, since it does not require authentication. However, according to Akamai, more than 50,000 vulnerable servers are exposed on the Internet and can be used to perform DDOS attacks. As pointed out by CloudFlare after undergoing a similar attack: “15 bytes of request triggered 134KB of response. This is amplification factor of 10,000x! In practice we’ve seen a 15 byte request result in a 750kB response (that’s a 51,200x amplification).”
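As a rough illustration of the amplification figures quoted above (the byte counts come from the CloudFlare quote; the calculation itself is plain arithmetic):

```python
# Amplification factor = size of reflected response / size of spoofed request
request_bytes = 15
response_bytes = 750 * 1024            # the 750 kB response seen in practice
print(response_bytes / request_bytes)  # 51200.0, matching the quoted 51,200x
```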
Conclusion
Because of the huge capacity to generate such gigantic attacks, attackers are likely to use memcached as a favorite tool in the coming days. In addition, mitigation largely relies on protection lists and on blocking the exposed memcached reflectors.
Did you like the text about information security? Take a look at this post containing tips on safe internet browsing. | http://irisbh.com.br/ddos-attacks-and-the-github-case/ |
Studio Bell – National Music Centre is a large museum facility that houses an extensive collection of musical instruments, a performance centre, workshop, exhibits and recording studios. The recording studio facilities were designed by Pilchner Schoustal to incorporate legacy recording equipment including the original Olympic recording console, a Trident A range console, and the fully restored Rolling Stones Mobile studio.
The Vision
The studios are split across many levels of the building and include an electronic instrument lab with an extensive synth collection, a main large studio proper, a dedicated acoustic studio with vintage pianos and keyboards, a large control room with a Trident A range, a smaller control room with the Olympic console, a dedicated isolation booth, and the RSM (Rolling Stones Mobile). All of the production spaces are tied together with an extensive analog and digital network that allows all the spaces to work together in any configuration.
The Design
The facilities were realized to service their “Artist in Residence” program, where an artist can select museum artifact instruments and bring them into the studio spaces for the purpose of creating new music. This idea of “working collection” has been a hallmark of the NMC since its inception.
The stunning architecture of the main building was conceived by Allied Works Architects of Portland Oregon. | http://www.pilchner-schoustal.com/work/national-music-centre/ |
Background: All the medical colleges face students with poor performance. Studies have shown that there was no significant relationship between the learning approaches and academic performance. This study was conducted to know the role of Urdu and English language as medium of education during Primary and Secondary school in Pakistan and its impact on the results of university MBBS professional examinations.
Material and Methods: This study was designed and conducted during September to December 2005 in Gomal Medical College, D.I.Khan. Students of the MBBS classes, from sessions 2001 to 2004, were provided with a proforma. The first professional Part-1 examination was expressed as M-1 and the other university examinations as M-2, M-3 and M-4 respectively. Students were asked about their medium of learning, either Urdu or English, during primary and secondary school, and about the number of attempts in their university examinations. The statements regarding medium were randomly checked and the university-level performance was verified from results available in the student affairs section of this college. The data were tabulated and statistically analyzed by the χ² test.
Results: All 189 students studying in the second to fifth (final) year MBBS classes at Gomal Medical College were included in this study. Among these, 47 were in Second year, 57 in Third year, 46 in Fourth year and 42 in Final year. In Second year, students from Urdu-medium schools passing the M-1 examination in the first attempt were 9 out of 19. In Third year, those passing M-1 and M-2 in the first attempt were 26 out of 44 from Urdu medium. In Fourth year, those passing M-1, M-2 and M-3 in the first attempt were 39 out of 72 from Urdu medium. In Fifth year, students passing M-1, M-2, M-3 and M-4 in the first attempt from Urdu-medium schools were 44 out of 56. The rest of the students in each class were from English-medium schools. The results of students from Urdu- and English-medium schools were compared by the χ² test for the number of students passing in first and second attempts; the p-value was found to be non-significant (p > 0.5).
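A minimal sketch of how such a comparison could be run in Python; the 2x2 counts below are invented for illustration and are not the study's actual data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: Urdu medium, English medium
# Columns: passed in first attempt, needed further attempts (hypothetical counts)
table = np.array([[44, 12],
                  [28, 10]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
# A large p-value would indicate no significant association, as the study reports
```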
Conclusion: There is no effect of language as a medium of education during Primary and Secondary school upon the results of university professional MBBS examinations.
Copyright (c) 2020 Aziz Marjan Khattak, Fida-ullah Wazir, Habib-ullah Khan, Shaukat Ali, Syed Humayun Shah
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. | https://gjms.com.pk/index.php/journal/article/view/65 |
Internal Organization of Computer
Components:
- A computer can be broken into three parts: the CPU (Central Processing Unit), memory, and I/O (Input/Output) devices.
- Memory is to store (temporary or permanent) information.
- CPU is to process information stored in memory.
- I/O device is to provide a means of communicating with CPU.
- CPU is connected to memory and I/O through strips of wire called a bus.
- Bus inside a computer carries information from one place to another.
- There are three types of buses: address bus, data bus, and control bus.
- For a device to be recognized by CPU, it must be assigned a unique address.
- Address bus is used to identify the devices and memory connected to the CPU.
- Data bus is used to carry information in and out of a CPU.
- Control bus is to provide read or write signals to device.
- Address bus is a unidirectional bus, which means that the CPU uses the address bus only to send out addresses.
- The total number of memory locations addressable by a CPU is always equal to 2^x, where x is the number of address lines or bits (see the sketch after this list).
- Data bus is a bidirectional bus. CPU must use them to send or receive data.
- The more lines or bits in a data bus, the better the CPU.
- Processing power of a computer is related to the size of its busses.
- RAM and ROM are referred as Primary Memory.
- Storage device such as a disk is called Secondary Memory.
- ROM is to provide information that is fixed and permanent.
- RAM is to store information that is not permanent and can change with time.
- Programs such as operating system and application packages are loaded in to RAM and processed by the CPU.
- Program stored in memory provides information to the CPU to perform an action.
- Function of CPU is to fetch the instructions from memory, decode and execute them.
- The CPU is equipped with an ALU (Arithmetic Logic Unit), a fetching unit, a decoding unit, a control unit and registers to perform the fetch, decode and execute operations.
- CPU uses register for temporary storage while executing instructions.
- The program counter (PC, or IP, Instruction Pointer) is a register that points to the address of the next instruction to be executed by the CPU.
- The fetching unit in the CPU fetches instructions from the memory address pointed to by the PC.
- The decoding unit in the CPU interprets the instruction fetched into the CPU and determines what steps the CPU should take.
- ALU is to perform add, subtract, multiply, divide and Boolean operations.
- Control unit is to control the operations of other units.
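As a small illustration of the 2^x rule referenced in the list above (a generic sketch, not tied to any particular CPU):

```python
# Number of addressable memory locations for a given address-bus width
def addressable_locations(address_bits: int) -> int:
    return 2 ** address_bits

for bits in (16, 20, 32):
    print(f"{bits}-bit address bus -> {addressable_locations(bits):,} locations")
# 16 bits -> 65,536 (64 KB); 20 bits -> 1,048,576 (1 MB); 32 bits -> 4,294,967,296 (4 GB)
```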
| https://www.refreshnotes.com/2016/02/computer-internals.html
E3S Web Conf.
Volume 209 (2020): ENERGY-21 – Sustainable Development & Smart Management
| Article Number | 05004 |
| Number of page(s) | 7 |
| Section | Session 4. Eastern Vector of Russia's Energy Strategy: Current State and Look into the Future |
| DOI | https://doi.org/10.1051/e3sconf/202020905004 |
| Published online | 23 November 2020 |
Multi-criteria placement and capacity selection of solar power plants in the “Baikal-Khövsgöl” Cross-Border Recreation Area
Melentiev Energy Systems Institute of Siberian Branch of the Russian Academy of Sciences, Department of Complex and Regional Problems in Energy, Irkutsk, Russia
* Corresponding author: [email protected]
The problem of power supply to remote consumers in the “Baikal-Khövsgöl” Cross-Border Recreation Area, associated with the high length and low reliability of power lines, is discussed. The assessment of the modes of the power distribution grid showed that the introduction of new consumers in this territory will lead to unacceptable voltage deviations, even taking into account the installation of reactive power compensating devices. Since the area under consideration has a high solar energy potential, it is advisable to use distributed solar generation. The choice of locations and capacities of solar power plants is a multi-criteria optimization problem. Four criteria are proposed: total voltage deviation, total active power losses, reliability, and capital costs for construction. An algorithm for multi-criteria optimization is developed and implemented as a program in MATLAB; it consists of sequentially verifying the feasibility of installing additional solar power plant capacity at the consumers of each of the substations under consideration. For each variant, the electric grid mode is assessed using the Power System Analysis Toolbox program. Solutions for the choice of locations and capacities of solar power plants are obtained, providing high scores by criteria in accordance with the given criteria importance coefficients.
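The paper's algorithm is implemented in MATLAB and is not reproduced here; the following is only a generic sketch of weighted multi-criteria scoring of candidate sites, with invented criterion values and weights:

```python
import numpy as np

# Rows: candidate substations; columns: voltage deviation, active power losses,
# reliability penalty, capital cost (all normalized to [0, 1]; lower is better).
scores = np.array([
    [0.20, 0.30, 0.10, 0.50],
    [0.40, 0.10, 0.20, 0.30],
    [0.10, 0.20, 0.40, 0.20],
])
weights = np.array([0.3, 0.3, 0.2, 0.2])  # assumed importance coefficients

weighted = scores @ weights               # lower total = better candidate
print(weighted, "-> best candidate index:", int(np.argmin(weighted)))
```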
© The Authors, published by EDP Sciences, 2020
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
| https://www.e3s-conferences.org/articles/e3sconf/abs/2020/69/e3sconf_energy-212020_05004/e3sconf_energy-212020_05004.html
Recipes of longevity
Bulgarian professor Christo Mermerski spent half his life studying cereals, especially wheat. His scientific experiments gave absolutely unexpected results. He found his recipe for health and longevity, which helps even hopelessly sick people. The main ingredient: sprouted wheat.
About the amazing properties of grains germinatedWheat knew in ancient times. In different cultures it was used to treat many diseases. Mermerski claims that this food cleans arteries and blood vessels, protects against heart disease, destroys kidney stones, strengthens immunity, improves digestion and even prevents cancer.
Recipes of longevity
How to germinate wheat
- 400 g of fresh green wheat put in a glass bowl.
- So that the seeds began to germinate, fill them with water so that it was 3 cm above the grains.
- Leave the wheat for 12 hours. After this, wash the seeds in a colander and drain them.
- Leave the grain. The first shoots will appear after 24 hours.
The professor says he knows the world's best recipe for youth, health, longevity, freshness and vivacity. Start using this remedy, and you can improve the work of the whole body and gain strength and energy.
Ingredients
- 15 lemons
- 12 heads of fresh garlic
- 1 kg of natural honey
- 400 g of sprouted wheat
- 400 g walnuts
Cooking
- Grind sprouted wheat, nuts and garlic.
- Grind 5 lemons with peel and mix all the products in a bowl.
- Squeeze the juice from the remaining 10 lemons and add it to the mixture. Mix thoroughly.
- Add honey. Mix the mass with a spoon again and spread it over the jars.
- Leave the mixture in the refrigerator for 3 days, so that it is infused.
Prevention of cancer and other diseases requires regular use of this delicious medicine. Eat 2 tablespoons of the healing mixture 3 times a day, preferably 30 minutes before eating.
Follow the advice of Dr. Mermerski, and you will notice how the work of your body improves within a week.
This April is the hottest since records began, and the rest of 2020 is likely to follow suit.
As we move into more weeks of lockdown, many restrictions are likely to remain in place for the remainder of the year, let alone rolling into the next. Many scientists have pointed out, echoed by Dominic Raab, that until we have a vaccine or at the very least an effective treatment we are stuck.
The restriction that hurts me the most, apart from social distancing and the inability to meet friends, is the lack of green space in Andover. According to the Test Valley ward briefings, the urban areas of Andover do not have the required levels of green space.
In this respect we do less well than many areas in London.
This overdevelopment of Andover, and the new developments are not much better than the old quarters of the town, is typical where we put money before people.
It also reflects a lack of balance between ourselves and our environment and nature. History will probably show that this pandemic, like so many others, originated in the transmission of diseases from animals to humans. Too often we forget to treat the environment with respect.
Some months ago before the pandemic we were justly worried about the floods which demonstrated the challenges posed by the climate emergency. This is one emergency that is not going to go away and unless we take the opportunity to change our behaviours we will face much greater problems than those imposed by the pandemic.
Looking forward, both the pandemic and the impact of global warming are registered as Tier 1 risks within our National Security Strategy. As climate change is already happening, I hope our response will be better.
If the safety and health of the citizens is paramount as Boris Johnson tells us then we really need to get our skates on. The postponement of the Glasgow climate change conference may be a blessing in disguise as Trump may have gone by next year. The advice to inject yourself with bleach to treat Covid-19 was a new low.
Meanwhile keep safe, stay home and protect the NHS. | https://www.andoveradvertiser.co.uk/news/18414537.letter-towns-overdevelopment-reflects-lack-balance-environment/ |
This paper describes a 150-Mb/s monolithic optical receiver for plastic optical fiber links using a standard CMOS technology. The receiver integrates a photodiode using an N-well/P-substrate junction, a pre amplifier, a post amplifier, and an output driver. The size, PN-junction type, and the number of metal fingers of the photodiode are optimized to meet the link requirements. The N-well/P-substrate photodiode has a 200-㎛ by 200-㎛ optical window, 0.1-A/W responsivity, 7.6-pF junction capacitance and 113-MHz bandwidth. The monolithic receiver can successfully convert a 150-Mb/s optical signal into digital data through up to a 30-m plastic optical fiber link with -10.4 dBm of optical sensitivity. The receiver occupies a 0.56-mm² area including electrostatic discharge protection diodes and bonding pads. To reduce unnecessary power consumption when the light is not over threshold or not modulating, a simple light detector and a signal detector are introduced. In active mode, the receiver core consumes 5.8 mA of DC current at a 150-Mb/s data rate from a single 3.3 V supply, while consuming only 120 μW in sleep mode.
KEYWORD: Monolithic optical receiver, plastic optical fiber, light detector, signal detector, CMOS
I. INTRODUCTION

Plastic optical fiber (POF) is widely used and widely regarded in short-reach network systems, such as automotive networks, office networks, home networks, audio interfaces, and IEEE1394. POF can be easily aligned and quickly installed due to its large diameter compared with a glass optical fiber. Recent advances in poly methyl methacrylate (PMMA) POF allow it to become a powerful alternative to copper cables in short-reach applications. High-speed networks such as media oriented systems transport (MOST) and IEEE1394b above 100 Mb/s use POF links. These POF links require only low-cost and low-power optoelectronic integrated circuits (OEICs), while maintaining their high performance. In this paper, we demonstrate a monolithic optical receiver fully integrated with a photodiode (PD), pre amplifier, post amplifier, and output driver. Section II describes the POF link configuration and design details of the monolithic optical receiver. The measurement setup and results are presented in Section III, and a conclusion is given in Section IV.
II. ARCHITECTURE AND DESIGN DETAILS
Fig. 1 shows the block diagram of the POF link system. The transmitter consists of a digital framer, low-voltage differential signaling (LVDS) receiver, light emitting diode (LED) driver, and resonant-cavity (RC) LED. High-speed POF links above 100 Mb/s require LVDS as an electrical interface. The typical output optical power of the RC LED is 0 dBm at room temperature, and the minimum output power is about 0.64 mW (-1.94 dBm) due to temperature effects from -20℃ to 80℃. PMMA POF at 650-nm wavelength has quite a large signal loss of about 0.2 dB/m. Therefore, the optical sensitivity of the receiver determines the maximum transmission distance. The receiver is composed of a PD, pre amplifier, post amplifier, LVDS driver, and digital framer. The optical sensitivity is affected by the responsivity of the PD and the minimum allowable current of the pre amplifier for the specific bit error rate (BER). In the POF links, a BER of 1e-9 is required. The power budget for maximum transmission distance is summarized in Table 1 [2-4]. In the noise simulation of the pre amplifier,
the minimum allowable photocurrent for 1e-9 BER is 10 μA, and the responsivity of the CMOS PD is measured to be 0.1 A/W at 3.3 V reverse bias voltage. The minimum optical power for 1e-9 BER at the input of the PD is calculated to be -10 dBm with the above parameters. Therefore the maximum transmission distance of the PMMA POF link is about 30.3 m.
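A rough sketch of the sensitivity and link-budget arithmetic described above; the margin term is an assumption standing in for the Table 1 entries that are not reproduced in the text, so the result is only indicative:

```python
import math

i_min = 10e-6        # minimum allowable photocurrent for 1e-9 BER [A]
responsivity = 0.1   # photodiode responsivity [A/W]
p_min_w = i_min / responsivity                # 100 uW
p_min_dbm = 10 * math.log10(p_min_w / 1e-3)   # -10 dBm sensitivity

p_tx_dbm = -1.94     # minimum LED output power [dBm]
fiber_loss = 0.2     # PMMA POF attenuation [dB/m]
margin_db = 2.0      # assumed coupling/temperature margin (not from the paper)

max_distance_m = (p_tx_dbm - p_min_dbm - margin_db) / fiber_loss
print(f"sensitivity = {p_min_dbm:.1f} dBm, max distance ~ {max_distance_m:.1f} m")
```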
In a standard CMOS technology, there are two possible PN junctions, P+/N-well and N-well/P-substrate. The doping concentration of N-well is larger than P-substrate, that is, the number of minority carriers of N-well/P-substrate junction (holes) is smaller than that of P+/N-well junction (electrons). Therefore an N-well/P-substrate junction can produce much higher photo currents due to its larger drift currents with larger depletion width at nominal reverse bias voltage. However, an N-well/P-substrate photodiode has worse frequency response than a P+/N-well photodiode due to its much longer penetration depth for the diffusion currents . Consequently, an N-well/P-substrate photodiode has better responsivity and worse frequency response than a P+/N-well photodiode. In our application, an N-well/P-substrate is more appropriate due to its high responsivity. Fig. 2 shows cross section and top view of an N-well/P-substrate photodiode. A shallow trench isolation (STI) layer is used for leakage blocking between P+ and N+. To compensate slow diffusion, an N+ electrode is added at the center of each N-well optical window as
well as at the edge. Additionally by separating the photo-diode into four sections each having 50-㎛ by 50-㎛ of optical window, the penetration depth of diffusion currents can be reduced. Many electrodes are of help to improve frequency response, however, the responsivity is decreased because the area of the effective optical window is reduced.
Fig. 3 shows the schematic diagram of the proposed monolithic optical receiver. It consists of a PD, dummy PD, trans-impedance amplifier, light detector, low-pass filter, offset cancellation buffer, signal detector, 3-stage limiting amplifier, and LVDS output driver. The dummy PD is added to provide symmetrical input capacitance for the differential architecture as well as to reduce the effect of dark current. The light detector senses whether light above the estimated receiver sensitivity (-10 dBm) comes into the monolithic PD or not. The light detector can be composed of a 165-kΩ resistor (RL), a 1.65 V (half of the supply voltage) reference voltage, and a comparator. With these parameter settings, the input DC voltage becomes greater than the reference voltage when the optical power is more than -10 dBm, and vice versa. In sleep mode the input resistance seen into the gate of M1 is nearly infinite, therefore all of the photocurrent flows through RL. In active mode, on the other hand, the input resistance at the gate of M1 is nearly Rf/(1+gm1·R1), so almost all of the photocurrent goes into the trans-impedance amplifier. An optical receiver cannot have a fully differential architecture because the photocurrent comes into only one port. Therefore there are DC offset errors between the differential signals due to the inherently pseudo-differential architecture. To reduce the bandwidth degradation, an ft doubler with a low-pass filter is used for the offset cancellation buffer. The input capacitance is halved compared with a conventional common-source buffer. The pre amplifier has a 60-dBΩ trans-impedance gain.
Even if light over the threshold sensitivity is detected, it may have no information, that is to say light may not be modulated. In this case, post amplifier and LVDS output driver don’t need to operate. Therefore the signal detector between pre- and post-amplifiers is added to prohibit unnecessary operation. If the light is not modulated, the differential outputs of the pre-amplifier don’t have a polarity. Thus the signal detector can be easily designed by using an exclusive OR function to detect whether the polarity exists or not. The limiting amplifier consists of three identical common-source voltage amplifiers. For the desired output swing levels(350 mVpp), the limiting amplifier has 32-dB voltage gain and 180-MHz bandwidth. LVDS output buffer drives off-chip 100-Ω termination for LVDS interface.
III. CHIP IMPLEMENTATION AND MEASUREMENT RESULTS
A 150-Mb/s monolithic optical receiver for plastic optical fiber link is realized using a standard CMOS technology. Fig. 4 shows the microphotograph of the prototype TO-can packaged chip. The receiver occupies the area of 765-㎛ by 730-㎛ including electrostatic discharge (ESD) diode
pads. The receiver core dissipates 19.14 mW at a 3.3 V supply.
Fig. 5(a) and 5(b) show the measurement setups for frequency and transient responses, respectively. Similar to the practical case, the light from the commercial LED module is coupled to the device under test (DUT) through bare step-index (SI) PMMA POF and a micro-lens. Alignment between the TO-can package and the POF is manually conducted with an electronic XYZ stage. Good alignment is simply judged using a spectrum analyzer and radio-frequency signal generators. For the frequency response experiment, a 2-port network analyzer is used, as shown in Fig. 5(a). For measurements of transient response and BER, a pulse pattern generator produces a 2^31-1 pseudo-random bit sequence. The commercial LED driver module converts the NRZ electrical data into optical data with a 10-dB extinction ratio, and the optical data goes into the DUT. The output waveform can be observed on a sampling oscilloscope and can be compared to the input patterns for the BER test. The commercial LED module can be used for the BER test by controlling the output optical power from -15 dBm to 4 dBm. At the end of the PMMA POF, the incident optical power into the DUT can be observed using an optical power meter.
Fig. 6 shows the normalized frequency responses of the stand-alone PD and the monolithic receiver. For measurement of the PD response, a stand-alone PD is fabricated with bonding pads. The absolute magnitude value has no meaning in the full optical-link measurement because the measured response contains the response of the commercial LED module. The commercial LED module, having 1-GHz bandwidth, does not affect the total bandwidth; however, its amplitude response is reflected in the link response. The stand-alone PD and monolithic receiver exhibit 113-MHz and 103-MHz bandwidth, respectively. The stand-alone PD and receiver also have gain flatness under 1 dB. Fig. 7 shows the measurement of BER according to the incident optical power. At 100 Mb/s and 150 Mb/s, -10.9 dBm and -10.4 dBm optical sensitivities are achieved for 1e-9 BER. Although the bit rate increases by 50%, the difference in sensitivity is only 0.5 dB. The BER limitation is inter-symbol interference due to the lack of bandwidth at the 150-Mb/s data rate, while noise boosting is the BER limit at the 100-Mb/s data rate. The eye patterns from 25 Mb/s to 150 Mb/s under the condition of -10-dBm optical power are presented in Fig. 8. Horizontal eye openings above 0.8 UI are guaranteed for all data rates.
IV. CONCLUSION

A 150-Mb/s monolithic optical receiver for POF applications has been designed, discussed, and implemented. The receiver is fully integrated, including the CMOS N-well/P-substrate photodiode. By using the light detector and signal detector, power dissipation during no-light or no-modulation conditions is reduced. The monolithic receiver can successfully detect 150-Mb/s NRZ optical data with -10.4-dBm sensitivity, corresponding to up to a 30-m transmission distance.
Figure and table captions:
[FIG. 1.] POF link system.
[TABLE 1.] POF link system
[FIG. 2.] (a) Cross section and (b) top view of the N-well/P-substrate junction photodiode.
[FIG. 3.] Simplified schematic of the monolithic optical receiver.
[FIG. 4.] Photograph of TO-CAN packaged monolithic optical receiver.
[FIG. 5.] Measurement setup for (a) frequency response and (b) transient response and BER test.
[FIG. 6.] Measured S21 of stand-alone PD and monolithic optical receiver.
[FIG. 7.] Measured BER according to the incident optical power.
[FIG. 8.] Measured eye diagrams at Pin=-10 dBm according to the data rates: (a) 25 Mb/s (b) 50 Mb/s (c) 100 Mb/s and (d) 150 Mb/s. | http://oak.go.kr/central/journallist/journaldetail.do?article_seq=11493
The invention discloses an optical method and device for simultaneously measuring the direction, the sound intensity and the frequency of ultrasonic waves. The optical method comprises the steps that a dynamic ultrasonic phase grating formed by the ultrasonic waves in a liquid medium is lighted by a laser beam in the direction perpendicular to the propagation direction of the ultrasonic waves, and diffracted light forms a diffraction spectrum of the dynamic ultrasonic phase grating at an image surface of a lens; a hybrid photodiode array detector is arranged at the image surface of the lens, and the occurrence direction of the diffraction spectrum, the level 1 diffracted light intensity and the distance between adjacent diffraction spectrums are recorded; and the propagation direction, the sound intensity and the frequency of the measured ultrasonic waves are acquired through data processing. The optical device comprises a light source, an optical fiber, a lens, a lens base, sheet glass, a hybrid photodiode array detector, a detector base, a driving and scanning amplifying circuit and a computer, and is characterized in that a light emitting port of the optical fiber, the lens, the sheet glass and the detector are sequentially arranged, the detector is arranged on an imaging surface of the lens, and the driving and scanning amplifying circuit is connected with the detector and the computer. | |
Hoeven Working to Increase Federal Funding for Comprehensive Flood Protection in the Red River Valley
FARGO, N.D. – Senator John Hoeven today outlined his work to increase funding for comprehensive flood protection in the Red River Valley. This will address the increased costs due to Route B, the changes recommended by the joint task force convened by Governors Doug Burgum and Mark Dayton. The new route and associated delays have added $600 million to the cost of the project, in addition to higher financing costs.
A new funding plan has been requested by the Diversion Authority where the State of North Dakota and the federal government will each bear $300 million of the new costs. Hoeven has been working with administration officials, including Army Corps Chief Lt. Gen. Todd Semonite, Assistant Secretary of the Army for Civil Works R.D. James, Army Corps Deputy Commanding General Scott Spellmon, Army Corps Mississippi Valley Division Commander Maj. Gen. Richard Kaiser and Office of Management and Budget (OMB) Director Mick Mulvaney, to ensure the Army Corps has the authority it needs to provide the increased funding without passing additional legislation.
The Corps has verified to Hoeven that additional Congressional authorization is not required due to the authorization the senator originally secured in the 2014 Water Resources Development Act (WRDA), but the public-private partnership (P3) will need to be renegotiated. Accordingly, Hoeven has secured agreement from the Corps to renegotiate its Project Partnership Agreement (PPA) with the local sponsor for the additional funding.
“Despite the funding requirements of the new route, the fact remains that the residents of the Fargo-Moorhead region need the certainty and security of comprehensive flood protection,” Hoeven said. “That’s why we’ve worked hard to secure continued support from the Army Corps to provide increased funding over the coming years. The steps we’ve taken will help keep this project moving forward and deliver this much-needed flood protection infrastructure for the region.”
As a member of the Senate Energy and Water Development Appropriations Committee, Hoeven is working to maintain strong support for the U.S. Army Corps of Engineers construction account and secure project funding in the coming years. This follows the increased funding Hoeven secured in Fiscal Year 2019 for the Corps’ construction efforts, which helped ensure the Corps included $35 million for construction of flood protection in the Fargo-Moorhead region in its Fiscal Year (FY) 2019 work plan.
In addition, Hoeven secured provisions in this year’s WRDA legislation and the FY2019 Energy and Water funding bill, both of which were signed into law this fall, to move forward with flood protection projects.
WATER RESOURCES DEVELOPMENT ACT
- Resolves easement issue to enable Route B: Hoeven included his provision to resolve easements on land purchased with Hazard Mitigation Grant Program (HMGP) funding. The senator’s provision grants the authority needed to implement Route B, the changes recommended by the joint task force convened by Governors Doug Burgum and Mark Dayton.
FISCAL YEAR 2019 ENERGY AND WATER FUNDING BILL
- Increases the Corps’ construction account by $150 million to ensure priorities like flood protection in the Red River Valley continue to receive funding.
- Implements the Water Infrastructure Finance and Innovation Act (WIFIA) Program to provide states and local governments with low-cost, flexible funding sources when building water infrastructure.
- The legislation directs the Corps to complete a detailed plan for implementing WIFIA and provides up to $6 million to support WIFIA program implementation. This aligns with Hoeven’s legislation to extend the WIFIA program.
- Ensures fair treatment of P3 projects and others that use alternative financing methods during the Corps’ cost-benefit analysis, including flood protection in the Fargo-Moorhead region.
- The bill requires the Corps to develop a policy on how it will evaluate P3s and incorporate them into its budget.
- Hoeven also helped advance a provision in the Senate-passed Financial Services and General Government funding bill ensuring the Corps can fund P3s. | https://www.hoeven.senate.gov/news/news-releases/hoeven-working-to-increase-federal-funding-for-comprehensive-flood-protection-in-the-red-river-valley |
Sketching for Birders
January 10 @ 6:00 pm – 7:30 pm PST
Learn how to draw birds in the field to help you observe more carefully and remember the details more accurately. You do not need to be an artist to enhance your field notes with quick drawings that portray the field marks that you see. These notes can help you learn to identify birds and report rare or unusual sightings. Bird sketches also enhance a nature journal, and can include notes on behavior, habitat use, or interactions with other species. This event is sponsored by the Mesilla Valley Audubon Society.
10 thoughts on “Sketching for Birders”
- Ana Luisa Roque says:
Thank you for the session, today. It was wonderful! And I really enjoyed it. It will help us tremendously! We are just beginning to organize a Nature Journal Club in El Salvador with activities carried out through our Natural History National Museum (Museo de Historia Natural de El Salvador —MUHNES), and I hope we could get people motivated and enthusiastic about this pretty soon. Thank you, again!
- Merrie Potter says:
Will you record this if we can’t attend??
- Kathy Miller says:
Trying to sign in with [email protected] but won’t recognize the email
- Heather Borman says:
It is asking for a password?
- Lynne Portnoy says:
Hi I want to attend. Do I need to register or just Zoom in? | https://johnmuirlaws.com/event/sketching-for-birders/ |
1. Introduction {#sec1}
===============
Soil contamination creates a significant risk to human health. For instance, heavy metals from industrial waste contaminate drinking water, soil, fodder, and food \[[@B1]\]. Also, the large volume of waste and the intense use of chemicals during past decades have resulted in numerous contaminated sites across Europe. Contaminated sites could pose significant environmental hazards for terrestrial and aquatic ecosystems as they are important sources of pollution which may result in ecotoxicological effects \[[@B2]\].
Emissions of hazardous substances from local sources could deteriorate soil and groundwater quality. Management of contaminated sites aims at assessing the adverse effects caused and taking measures to satisfy environmental standards according to current legal requirements. Additionally, the impact of soil contamination to health and more specifically the main epidemiological findings relevant to CS are briefly presented below.
Soils affect human health directly through routes such as ingestion, inhalation, skin contact, and dermal absorption. Some epidemiological examples include geohelminth infection and potentially harmful elements via soil ingestion, cancers caused by the inhalation of fibrous minerals, hookworm disease, and podoconiosis caused by skin contact with soils \[[@B3]\]. Elliott et al. (2001) \[[@B4]\] have found small excess risks of congenital anomalies and low and very low birth weights in populations living near landfill sites.
Soil contamination is mainly located close to waste landfills, industrial/commercial activities diffusing heavy metals, oil industry, military camps, and nuclear power plants. As European society has grown wealthier, it has created more and more rubbish. Each year in the EU, 3 billion tonnes of solid wastes are thrown away (some 90 million tonnes of them are hazardous). This amounts to about 6 tonnes of solid waste for every man, woman, and child (Eurostat, Environmental Data Centre on Waste \[[@B5]\]).
The main anthropogenic sources of heavy metals exist in various industrial point sources, for example, present and former mining activities, foundries, smelters, and diffuse sources such as piping, constituents of products, combustion of by products, and traffic related to industrial and human activities \[[@B6]\].
In the US, the army alone has estimated that over 1.2 million tons of soils have been contaminated with explosives, and the impact of explosives contamination in other countries in the world is of similar magnitude \[[@B7]\]. In recent years, growing concerns about the health and ecological threats posed by manmade chemicals have led to studies of the toxicology of explosives, which have identified toxic and mutagenic effects of the common military explosives and their transformation products \[[@B8]\]. Papp et al. (2002) \[[@B9]\] have studied the significant radioactive contamination of soil around a coal-fired thermal power plant.
Different contaminants have different effects on human health and the environment depending on their properties. The contaminant effect depends on its potential for dispersion, solubility in water or fat, bioavailability, carcinogenicity, and so forth. Chlorinated hydrocarbons (CHCs) are used mainly for the manufacturing of synthetic solvents and insecticides. They are environmental contaminants that bioaccumulate and hence are detected in human tissues. Epidemiological evidence suggests that the increased incidence of a variety of human cancers, such as lymphoma, leukemia, and liver and breast cancers, might be attributed to exposure to these agents \[[@B10]\].
Mineral oil large-scale use and various applications lead in many cases to environmental contamination \[[@B11]\]. Such contamination may be a consequence of petroleum transport, storage and refining, or accidents \[[@B12]\]. From a quantitative perspective, mineral oil is probably the largest contaminant in our body. That humans can tolerate this contaminant without health concerns has not been proven convincingly. The current Editorial of the European Journal of Lipid Science and Technology concludes that this proof either has to be provided or we have to take measures to reduce our exposure (from all sources, including cosmetics and pharmaceuticals) and the environmental contamination.
Polycyclic aromatic hydrocarbons (PAHs) are semivolatile, chemically stable, and hydrophobic organic compounds which are ubiquitous in the environment and good markers of urban activities. PAHs are related with anthropogenic toxic element contamination \[[@B13]\].
Heavy metals have been used by humans for thousands of years. Although several adverse health effects of heavy metals have been known for a long time, exposure to heavy metals continues and is even increasing in some parts of the world, in particular in less developed countries, though emissions have declined in most developed countries over the last 100 years \[[@B14]\]. Any metal (or metalloid) species may be considered a "contaminant" if it occurs where it is unwanted, or in a form or concentration that causes a detrimental human or environmental effect. Metals/metalloids include lead (Pb), cadmium (Cd), mercury (Hg), arsenic (As), chromium (Cr), copper (Cu), selenium (Se), nickel (Ni), silver (Ag), and zinc (Zn). Other less common metallic contaminants include aluminium (Al), cesium (Cs), cobalt (Co), manganese (Mn), molybdenum (Mo), strontium (Sr), and uranium (U) \[[@B15]\].
According to WHO, priority should be given to pollutants on the basis of toxicity, environmental persistence, mobility, and bioaccumulation \[[@B16]\]. Many heavy metals, such as cadmium, arsenic, chromium, and nickel, as well as dioxins and PAHs, are considered to be carcinogenic, based on animal studies or studies of people exposed to high levels \[[@B17]\]. In addition to carcinogenicity, many of these substances can produce other toxic effects (depending on exposure level and duration) on the central nervous system, liver, kidneys, heart, lungs, skin, reproduction, and so forth.
The toxicity and fate of phenolic pollutants in contaminated soils are associated mainly with the oil-shale industry \[[@B18]\]. Phenol has been shown to cause liver and kidney damage, neurotoxic effects, and developmental toxicity in laboratory animals (Environment Agency, 2009).
The most common source of cyanide contamination is former gasworks sites. However, cyanide contamination is also associated with electroplating factories, road salt storage facilities, and gold mining tailings \[[@B19]\]. Cyanide toxicity results from inhibition of cytochrome oxidase, thereby preventing the use of oxygen at the cellular level. The central nervous system is a major target of acute cyanide toxicity, with a short period of stimulation evidenced by rapid breathing, followed by depression, convulsions, paralysis, and possibly death \[[@B20]\].
Benzene, toluene, ethylbenzene, and xylene (BTEX) are classified as hazardous air pollutants (HAPs) \[[@B21]\]. Exposure to HAPs can cause a variety of health problems such as cancerous illnesses, respiratory irritation, and central nervous system damage \[[@B22]\].
The objective of relevant EU policies is to achieve a quality of the environment where the levels of manmade contaminants on sites do not give rise to significant impacts or risks to human health and ecosystems. The most recent developments in soil policy at European level are the introduction of the thematic strategy for the protection of soils \[[@B23]\] and the proposed soil framework directive \[[@B24]\]. Soil contamination is recognised as one of the eight soil threats expressed in the thematic strategy and the proposed directive. As there was no consensus for the establishment of the soil framework directive, legal requirements for the general protection of soil have not been agreed at EU level and only exist individually in most Member States. However, the integrated pollution prevention and control directive \[[@B25]\] requires that operations falling under its scope do not create new soil contamination. Other EU directives such as the water framework directive \[[@B26]\] and the waste directive \[[@B27]\], although not aimed directly at soil protection, provide indirect controls on soil contamination \[[@B28]\]. Notwithstanding these controls, some significant new site contamination still occurs as a result of accidents \[[@B29]\] and illegal actions. While the creation of new contaminated sites is constrained by regulation, a very large number of sites exist with historical contamination that may present unacceptable risks, and these sites require management. One example is the environmental disaster following flooding by red sludge in the Ajka region in Hungary \[[@B30]\]. However, the research and political arenas regarding land contamination no longer consider only a few incidents that lead to severe soil contamination, but rather look at it as a widespread environmental problem.
In 2001, the European Environment Agency (EEA) in cooperation with EEA affiliated countries started to develop a core set of policy relevant indicators, among which the indicator "Progress in the Management of Contaminated Sites" (CSI015) was the only one related to soil. Since then, data collections in relation to this indicator were launched four times by EEA \[[@B31]\], the last one in 2006, with contribution from member countries of the European Environment Information and Observation Network (EIONET) \[[@B32]\]. In the period 2011-2012, the European Soil Data Centre (ESDAC) \[[@B33]\] organized a similar campaign in order to update the CSI015. This indicator quantifies the progress in the management of local contamination, identifies sectors with major contribution to soil contamination, classifies the major contaminants, and finally addresses issues of budgets spent for remediation. The indicator is very important for policy makers as it tracks progress in the management of contaminated sites and the provision of public and private money for remediation. With this indicator, a number of activities causing soil pollution can be clearly identified across Europe. The indicator also supports the implementation of existing legislative and regulatory frameworks (integrated pollution prevention and control directive, landfill directive, water framework directive) as they should result in less new contamination of soil.
The present study presents an overall picture of contaminated sites in Europe and does not focus on individual countries; many other studies, such as that of Ferguson (1999) \[[@B34]\], present inventories of contaminated sites for individual countries. The overall objective of this paper is to give an overview of the current situation of contaminated sites in Europe. Specifically, the study intends to (i) focus on contaminated sites caused by industrial activities, (ii) review the types of sources, and (iii) respond to the main policy questions addressed in the indicator CSI015.
2. Materials and Methods {#sec2}
========================
The study makes an assessment of the data collected through EIONET and then focuses on the data related to contamination as a consequence of industrial activities.
2.1. EIONET-CSI Data {#sec2.1}
--------------------
The contaminated sites data (denoted as EIONET-CSI from now on) were collected and managed by the European Soil Data Centre (ESDAC). The data were collected in 2011-2012 through the EIONET network, which consists of representative organizations from 38 European countries for a number of environmental themes \[[@B35]\]. The appointed organisations for the theme "soil" are lead institutions in the soil domain at national level, and they provide official country data in response to specific soil-related requests from ESDAC.
The geographical coverage of EIONET includes 27 Member States of the European Union together with Iceland, Liechtenstein, Norway, Switzerland, Turkey, and the West Balkan cooperating countries: Albania, Bosnia and Herzegovina, Croatia, the former Yugoslav Republic of Macedonia, Montenegro, and Serbia as well as Kosovo under the UN Security Council Resolution 1244/99. Similar data on contaminated sites were collected in 2001, 2002, 2003, and 2006. The data were collected through a standard questionnaire and then compiled in a centralized database. The questionnaire was designed so that the received data could feed the compilation of the CSI015 indicator. There is no legal obligation for the EIONET member countries to submit data, and their contribution is on a voluntary basis.
2.2. Terms and Definitions {#sec2.2}
--------------------------
In order to minimize the differences in interpretation by individual countries of certain terms used in the questionnaire, ESDAC provided the following definitions according to EEA \[[@B31]\].
- "Contaminated site" (CS) refers to a well-defined area where the presence of soil contamination has been confirmed and this presents a potential risk to humans, water, ecosystems, or other receptors. Risk management measures (e.g., remediation) may be needed depending on the severity of the risk of adverse impacts to receptors under the current or planned use of the site.
- "Potentially contaminated site" (PCS) refers to sites where unacceptable soil contamination is suspected but not verified, and detailed investigations need to be carried out to verify whether there is unacceptable risk of adverse impacts on receptors.
- "Management of contaminated sites" aims to assess and, where necessary, reduce to an acceptable level the risk of adverse impacts on receptors (remediate). The progress in management of CS is traced in 4 management steps starting with preliminary study, continuing with preliminary investigation, followed by site investigation, and concluding with implementation of site remediation (reduction of risk).
An important terminological distinction allows the reader to distinguish between "estimated" and "identified" sites. The questionnaire asked the countries to provide estimations of how many CSs and PCSs may be situated in their territory. Data on estimated CS and PCS are based on studies or expert judgment. The questionnaire also asked for the identified number of CS and PCS. In this case, the countries report data for which they actually possess available information about local soil properties and hydrology.
2.3. Other Datasets {#sec2.3}
-------------------
For a more comprehensive assessment, a number of auxiliary official Eurostat datasets \[[@B35]\] were used such as the countries\' populations, the surface area, the gross domestic product (GDP), and the number of enterprises in the industrial/services sectors. Those datasets are used for developing statistics with parameters that include the surveyed population, the surveyed area, the density of CS and PCS, the contribution (%) of various industrial sectors to contamination, and the proportion of budget spent for management of CS.
2.4. Methodology {#sec2.4}
----------------
The study is based on the data received from the countries that participated in the survey, replying to the questionnaire available in the European Soil Portal \[[@B36]\]. The questionnaire has a user-friendly format as a Microsoft Excel file and contains 5 main sections: "management of contaminated sites," "contribution of polluting activities to local soil contamination," "environmental impacts," "expenditures," and "remediation targets and technologies." Each section includes between 1 and 5 questions requesting the "user" to submit the data for each of the available options. The questionnaire requested numerical values (not classes or vague responses), which allowed aggregations to be made depending on the policy question to be addressed. Two example questions are the following: percentage (%) of sites where risk reduction measures are completed; expenditures in million euro per capita per year. As support, a guidelines document was available with detailed explanations for each of the questions and the possible options, plus example responses based on previous data collection exercises.
Each country, represented by its designated EIONET National Reference Centre for soil, provided its best assessment based on the available data. The data collection campaign was launched in October 2011 and ended in February 2012.
3. Results {#sec3}
==========
Although the questionnaire included other data and information, this paper mainly focuses on the local contamination analysis, the type of contamination (which sectors contribute the most), the distribution of the main contaminants, and the budget spent on remediation. The management of CS will not be analysed in detail as each country follows a different approach concerning the management steps. The analysis is performed for the study area as a whole and not at country level. It should be noted that quite different interpretations of the abovementioned definitions have been applied by individual countries.
3.1. Extent of Local Contamination in Europe {#sec3.1}
--------------------------------------------
Data on soil contamination per country is a necessary input in order to estimate the scale of soil contamination in Europe. The majority of the addressed countries (33 out of 38), corresponding to 80% of the total population, responded with data on the identified number of PCS and CS ([Figure 1(a)](#fig1){ref-type="fig"}). The five missing countries were Bosnia and Herzegovina, Poland, Portugal, Slovenia, and Turkey. According to [Figure 1(a)](#fig1){ref-type="fig"}, around 1,170,000 PCSs have been identified in Europe up to 2011. More than 10% of these, around 127,000, have been identified or confirmed as CSs. The ratio of remediated sites (RSs) to CSs is around 45%, as more than 58,000 CSs have already been remediated ([Figure 1(a)](#fig1){ref-type="fig"}). The data gap for the 5 missing countries can be covered by employing the density of PCS (2.4 PCS/1,000 capita) and CS (2.62 CS/10,000 capita) ([Table 1](#tab1){ref-type="table"}). Applying the average of 2.4 identified PCS per 1,000 capita to the 5 missing countries, the number of identified PCS for the whole of Europe (38 countries) is estimated to be around 1,470,000. Applying the average of 2.62 identified CS per 10,000 capita ([Table 1](#tab1){ref-type="table"}, column (a)) to the missing 5 countries, the number of identified CS rises to around 160,000.
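The gap-filling step above is a simple per-capita scaling. The following Python sketch reproduces it using the figures from Table 1; the script and its variable names are illustrative only and are not part of the EIONET questionnaire or database, and small differences from the published figures come from rounding.

```python
# Figures reported by the 33 responding countries (Table 1, column (a))
identified_pcs = 1_169_649      # identified potentially contaminated sites
identified_cs = 127_475         # identified contaminated sites
surveyed_pop = 487_152_449      # population covered by the 33 countries
total_pop = 612_117_243         # population of all 38 countries

missing_pop = total_pop - surveyed_pop   # population of the 5 non-responding countries

# Observed densities in the surveyed countries
pcs_per_1000 = identified_pcs / (surveyed_pop / 1_000)    # ~2.4 PCS per 1,000 capita
cs_per_10000 = identified_cs / (surveyed_pop / 10_000)    # ~2.62 CS per 10,000 capita

# Apply the same densities to the missing population to fill the gap
europe_pcs = identified_pcs + pcs_per_1000 * missing_pop / 1_000   # ~1,470,000
europe_cs = identified_cs + cs_per_10000 * missing_pop / 10_000    # ~160,000

print(f"Identified PCS, all 38 countries: {europe_pcs:,.0f}")
print(f"Identified CS, all 38 countries:  {europe_cs:,.0f}")
```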
Apart from the identified PCS and CS, countries were asked to provide their estimations for these two figures. A subset of 12 countries out of the 33 participating ones provided estimations of the PCS ([Table 1](#tab1){ref-type="table"}, column (b)). In general, the estimations are greater than the identified numbers. According to their estimations, 740,000 PCSs may exist in their territory, with a density of 4.2 PCS/1,000 capita. Those 12 countries reported 520,000 identified PCSs, which results in a ratio of "*identification to estimation*" for PCS of around 70%. Two types of extrapolation can be performed in order to estimate the total number of PCS. In the first, the average value of 4.2 PCS/1,000 capita is applied to the total population of the 38 countries, and the total number of estimated PCS is then around 2,553,000 ([Figure 1(b)](#fig1){ref-type="fig"}). In the second extrapolation method, the ratio of "*identification to estimation*" for PCS (70%) is applied to the countries which were unable to provide estimations; the approximate number of PCS is then estimated to be 2,087,000.
Another subset of 11 countries (not a subgroup of the previous 12), covering 10% of the total population, provided estimations of CS. They estimated more than 32,000 CSs, with a density of 5.7 CS/10,000 capita ([Table 1](#tab1){ref-type="table"}, column (c)). Those 11 countries reported 10,036 identified CSs, which results in a ratio of "*identification to estimation*" for CS of 30.7%. The first method of extrapolation is to apply the average density to the rest of the population (90%), where data do not exist. According to this estimation, the number of CS in Europe is around 342,000, which accounts for 14% of the total estimated PCS ([Figure 1(b)](#fig1){ref-type="fig"}). In the second extrapolation method, the ratio of "*identification to estimation*" for CS (30.7%) is applied to the whole population, and the estimated number of CS becomes more than 516,000. For comparison, in the last survey of 2006 the estimated number of PCS was around 3 million and the estimated number of CS was around 250,000.
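Both extrapolation methods follow the same pattern: a ratio observed in the estimating subsample is applied to the part of the study area for which no estimate exists. Below is a minimal Python sketch using the PCS figures reported above (the CS case is analogous); the inputs are rounded, so the outputs differ slightly from the published totals, and the variable names are illustrative only.

```python
# Figures from Table 1 and Section 3.1 (PCS case)
total_pop = 612_117_243           # population of all 38 countries
sample_pop = 177_412_672          # the 12 countries that provided PCS estimates
estimated_pcs_sample = 739_968    # PCS estimated by those 12 countries
identified_pcs_sample = 520_000   # PCS identified by the same 12 countries (approx., from the text)
identified_pcs_38 = 1_470_000     # identified PCS extrapolated to all 38 countries (previous step)

# Method 1: apply the estimated density (~4.2 PCS per 1,000 capita) to the whole population
density = estimated_pcs_sample / (sample_pop / 1_000)
pcs_method1 = density * (total_pop / 1_000)            # ~2,553,000

# Method 2: apply the identification-to-estimation ratio (~70%) to the identified total
ratio = identified_pcs_sample / estimated_pcs_sample   # ~0.70
pcs_method2 = identified_pcs_38 / ratio                # ~2,090,000 (paper reports ~2,087,000)

print(f"Method 1: {pcs_method1:,.0f}   Method 2: {pcs_method2:,.0f}")
```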
The high variability of the reported data can be seen in [Figure 2](#fig2){ref-type="fig"}. The large differences in density rates reflect both the situation of PCS per country and how countries interpret the term "potential contamination." The metadata accompanying the received data show that PCSs are understood in different ways. For instance, Luxembourg, Belgium, the Netherlands, and France include potentially polluting activities in their PCS figures, and this is the reason for the high density of PCS in those countries ([Figure 2](#fig2){ref-type="fig"}). Other countries such as Austria, Hungary, and Norway include in their PCS figures only sites where there is evidence of potential contamination. Another factor contributing to this high variability is the granularity of a site: some countries report sites which are important at national level, while others also include small sites such as storage tanks.
3.2. Sectors Contributing Most to Soil Contamination {#sec3.2}
----------------------------------------------------
Soil contamination is the result of various sectors and activities. The countries were asked to allocate a percentage of contribution of each sector to local soil contamination based on the occurrence of incidents. The following seven categories of activities were proposed:
- waste disposal (municipal waste disposal and industrial waste disposal);
- industrial and commercial activities (mining, oil extraction and production, and power plants);
- military (military sites and war affected zones);
- storages (oil storage, obsolete chemicals storage, and other storages);
- transport spills on land (oil spill sites and other hazardous substance spills sites);
- nuclear;
- other sources.
Responses related to contributing sectors were received from 22 countries, corresponding to circa 53% of the total study population. Waste disposal and treatment contribute more than 37% of soil contamination; within this category, municipal waste and industrial waste contribute similar shares. Industrial and commercial activities contribute a 33.3% share, followed by storage (10.5%), while the remaining categories together contribute 19.1%. Nuclear operations contribute only 0.1%, but contamination from major nuclear players (e.g., from nuclear power stations) was not taken into account by some countries. The data cannot be compared to the 2006 survey as the sample of countries that responded is different.
A special focus is given to the industrial and commercial sectors causing soil contamination. The countries were asked to assign a percentage to each specific industrial sector that contributes to soil contamination. The responses of 17 countries, covering 44% of the total study population, suggested that the production sector contributes around 60% of soil contamination, while the service sector has a share of 33% and the mining sector contributes around 7% ([Figure 3](#fig3){ref-type="fig"}).
A closer look at the production sector reveals that the textiles, leather, wood, and paper industries are of minor importance for local soil contamination (circa 5%), while metal industries are most frequently reported as important sources of contamination (13%), followed by the chemical industry (8%), the oil industry (7%), and energy production (7%); together these account for 35% of the production sector's share, while the rest (25%) is distributed across 6 other categories. For the service sector, gasoline stations are the most frequently reported sources of contamination (15%), followed by car service stations (around 6%).
The Eurostat data on the sectoral breakdown of manufacturing (NACE \[[@B37]\]) put the total number of enterprises in the EU at 2.041 million. The Eurostat industrial sectors do not correspond one-to-one with the industrial production sectors considered in the EIONET-CSI questionnaire ([Table 2](#tab2){ref-type="table"}, column (a)). Some grouping of the Eurostat sectors (plus sign in column (c)) has taken place to make the correspondence. Note that the Eurostat data for the mining sector were embedded in the Eurostat category "other manufacturing." From the values in columns (d) and (b), a new value (column (e)) is computed that expresses how many enterprises of an industrial sector contribute to 1 percent of the local contamination coming from that sector. The smaller the number, the more each individual enterprise contributes to industrial contamination. The resulting figures show, for instance, that mining sites are individually heavier polluters compared to other sectors. By contrast, enterprises in the electronics industry pollute less than those in the other sectors shown ([Table 2](#tab2){ref-type="table"}).
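Column (e) of Table 2 is simply column (d) divided by column (b). The short sketch below reproduces it from the published table values; the dictionary structure and abbreviated sector labels are illustrative only.

```python
# (sector contribution to industrial contamination in %, number of enterprises in thousands)
sectors = {
    "Chemical industry":      (8.2,  97.2),
    "Metal working industry": (13.1, 381.2),
    "Textile and leather":    (2.0,  225.4),
    "Wood and paper":         (3.7,  191.8),
    "Food industry":          (5.7,  273.8),
    "Electronic industry":    (1.0,  94.1),
    "Mining sites":           (6.2,  18.2),
}

for name, (share_pct, enterprises_k) in sectors.items():
    # thousands of enterprises accounting for 1% of industrial contamination;
    # the smaller the value, the heavier the average polluter in that sector
    per_percent = enterprises_k / share_pct
    print(f"{name:25s} {per_percent:6.1f}")
```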
3.3. Main Contaminants {#sec3.3}
----------------------
The countries were asked to allocate a percentage for the proposed contaminant categories based on the occurrence of soil contamination. Distinctions were made between contaminants affecting the solid matrix (soil, sludge, and sediments) and the liquid matrix (groundwater, surface waters, and leachate). The following eight categories of contaminants were proposed for both solid and liquid matrices:
- chlorinated hydrocarbons (CHCs);
- mineral oil;
- polycyclic aromatic hydrocarbons (PAHs);
- heavy metals;
- phenols;
- cyanides;
- aromatic hydrocarbons (BTEX: benzene, toluene, ethyl benzene, and xylene);
- others.

Responses were received from 16 countries, corresponding to about 40% of the total study population. The analysis based on these responses is of key importance for research and development, the remediation market, and related industries. For instance, if a specific compound is known to be a major soil contaminant, it may be worthwhile to develop new detection methods (e.g., in situ detection) and more efficient remediation techniques.
The distribution of the contaminants affecting soil is similar to that for groundwater. The main contaminant categories are heavy metals and mineral oil, contributing jointly to around 60% of soil contamination and 53% of groundwater contamination ([Figure 4](#fig4){ref-type="fig"}). By contrast, phenols and cyanides make an insignificant contribution to total contamination. The remaining four categories (BTEX, CHC, PAH, and others) have similar contributions to soil contamination, varying between 8 and 11% and summing to around 40%. For groundwater contamination, their contribution is around 45%, ranging from 6% for PAH to 15% for BTEX. The current distribution is similar to that obtained from the analysis of the 2006 survey results.
3.4. Budget Allocated {#sec3.4}
---------------------
The cost of managing CS is an important element taken into account by policy makers. The questionnaire included sections investigating the estimated annual expenditures, the share of private/public money, and the breakdown of total expenditure. This is a very important aspect, as one of the most criticised issues in the proposed European soil framework directive \[[@B24]\] was the required estimate of the annual cost of managing CS. According to the impact assessment of the proposed directive, the estimate ranged widely, from 2.4 to 17.3 billion euros.
According to the responses of 11 countries covering 23% of the total population (139 million out of the 612 million inhabitants for the total area), 1,483.2 million euros (€) were spent annually for the management of CS in these countries. In absolute terms, this is around 10.7€ per capita or 0.041% of the gross domestic product (GDP) for the 11 countries. The reported data show a small decrease in expenditure for management of CS compared to 2006 (12€ per capita).
If this sample of 11 countries is considered representative for the whole Europe, then the management of contaminated sites can be estimated to be 6,526 million euros (€) per year. Compared to the impact assessment of the proposed soil framework directive, this amount of money is probably a more precise estimate of the cost of the management of all identified CSs (including remediation).
Regarding the share of private/public money, 42% of the total expenditure comes from public budgets, while 58% comes from private investments. Another interesting aspect of the study is the breakdown of the total expenditure for the management of CS over the different management steps. The vast majority (80.6%) is spent on remediation measures, while 15.1% is spent on site investigation and only 4.3% on after-care measures and redevelopment of the sites. When considering the budget spent on remediation and the number of remediated sites (RSs) in the 11 reporting countries, the average amount spent per RS annually is calculated to be around 37.1 thousand euros (€), in a range varying from 7.5 thousand € to 232 thousand € annually. As the remediation of sites lasts more than 1 year, the largest share (40%) of the reported remediation projects falls in the range 50,000 to 500,000€, while a considerable 26.5% of the reported cases fall in the range between 5,000 and 50,000€.
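The budget figures above reduce to three operations: a per-capita rate, a scaling of that rate to the whole study area, and a percentage split over the management steps. The sketch below uses the reported (rounded) inputs, so the outputs differ slightly from the published totals; the variable names are illustrative only.

```python
annual_spend_meur = 1_483.2   # million euro per year, 11 reporting countries
reporting_pop = 139e6         # population of the 11 reporting countries
total_pop = 612e6             # population of the whole study area (38 countries)

per_capita = annual_spend_meur * 1e6 / reporting_pop   # ~10.7 euro per capita per year

# Scale the per-capita rate to the whole study area
europe_meur = per_capita * total_pop / 1e6             # ~6,530 M euro/yr (paper reports 6,526)

# Split of the total expenditure over the management steps
remediation_meur = 0.806 * annual_spend_meur           # ~1,195 M euro/yr on remediation
investigation_meur = 0.151 * annual_spend_meur
aftercare_meur = 0.043 * annual_spend_meur

print(round(per_capita, 1), round(europe_meur), round(remediation_meur))
```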
4. Discussion and Conclusions {#sec4}
=============================
In terms of estimations, around 1,170,000 PCSs have been identified, which is circa 45% of the total estimated PCSs. In addition, around 127,000 CSs have already been identified, which is circa 37% of the total estimated 342,000 CSs. Moreover, around 46% of the total identified CSs have been remediated (58,300 RSs). The identified figures for CS, PCS, and RS are based on reported data from 33 countries, while the estimated CSs and PCSs have been extrapolated based on data from a limited sample (11 or 12 countries).
Notwithstanding the positive outcomes of the EIONET-CSI data collection, it should be noted that the data submitted were not homogeneous, since there are differences in the way that countries interpret the term "contaminated site." As shown in [Figure 2](#fig2){ref-type="fig"}, there is high variability between the data submitted by countries. This variability is explained by the large uncertainty both in terms of methodology and data. Some countries run their own CS management system which may not fit perfectly with the definition of the CSI015 indicator, and this contributes to the methodological uncertainty. Moreover, the reported data are usually based on expert judgement, which includes a high degree of uncertainty. The countries may interpret the data specifications in different ways, and this increases the heterogeneity of the reported data. The data reported for the CSI015 indicator are based on exceedances of limit concentrations of hazardous chemicals. However, common limits are unlikely to be established at the European level since they may be strongly influenced by local soil and geological properties.
An adequate response to the high data variability could be to organise a pan-European training event with the participation of the competent national EIONET authorities, with the objective of applying the same terminology in all countries in subsequent data collections. The heterogeneity of responses can also be decreased if the provided documentation is taken into account.
In general, there are difficulties in obtaining data on soil contamination, but improvement in data availability and data quality over the years can be observed. At this moment, the resulting dataset is the best "picture" that can be achieved based on national data. The EIONET-CSI data collection took place 5 years after the previous one of 2006. This 5-year period between data collections seems to be more appropriate than the 2-year period applied in the past, since the data on CS do not change considerably in such a relatively short time.
The direct and indirect costs to a country of dealing with the problem of CS depend on the amount and characteristics of CS in its territory. Generally, the presence of CS can affect company profits, business confidence, and attractiveness to investors. It may also affect aspects of public health and ecosystem protection. The remediation cost of CS, even if only a very small percentage of GDP, seems to be a major issue, and investments to improve land quality through remediation are not readily made. Countries should weigh the costs of dealing with local land contamination against the benefits to public health, improvement of the environment (e.g., water quality), land regeneration, and sustainable use of soil.
Restrictions set by privacy law in Europe are a major obstacle to the identification and management of land contamination. Status and data on private land are not easily accessible to public authorities, as this may have implications for the land owner. However, the condition of private land affects public health, water quality, and ecosystem services. In cases of proven soil contamination, public authorities could be allowed to intervene or even to raise public awareness. The conflicts between public interest and privacy regarding land, and environmental problems in general, should be resolved on a legal basis.
The EIONET-CSI dataset will be supplemented with heavy metals data at European level. In 2009, 22,000 soil samples were taken in European Union countries during a soil survey named LUCAS \[[@B38]\]. Those soil samples have been analysed for some of the most important soil attributes, such as soil organic carbon, and the results help to better estimate the overall situation in Europe \[[@B39]\]. Currently, these soil samples are being analysed for heavy metals, and the resulting outputs will facilitate a better assessment of soil contamination in the European Union. The LUCAS heavy metals dataset will face the issue of privacy, which can be overcome by applying digital soil mapping to develop interpolated maps. The combination of the LUCAS heavy metals data with EIONET-CSI will be an important step in assessing soil contamination in Europe.
The proposed datasets and the current study can be considered by public health professionals for epidemiological assessments. The study of human exposure pathways is a key issue on contaminated sites, and certainly the integration of EIONET-CSI datasets with epidemiological data would be a very important step forward in this direction. Moreover, as the majority of food is grown in soil, biomonitoring and other research should investigate the pathways and routes from producers to consumers.
The authors confirm that there is no conflict of interests with the networks, organisations, and data centres referred to in the paper. Specifically, ESDAC is the European Soil Data Centre and is an integral part of the Joint Research Centre of the European Commission, to which the authors are affiliated. Moreover, ESDAC is operated by the authors themselves, so there cannot be any conflict of interests whatsoever. Also, the authors have published the paper relevant to ESDAC \[[@B33]\]. The European Environment Information and Observation Network for soil (EIONET-SOIL) is the network of soil organizations officially designated by the European countries that deliver, on request by ESDAC and on a voluntary basis, data on soil-related topics, in this case contaminated sites. Therefore, there cannot be any conflict of interests. Note that the contributing organizations of EIONET-SOIL are explicitly acknowledged in the paper. Also, the authors have published a paper relevant to another data collection (soil organic carbon) in the past \[[@B32]\].
The authors would like to acknowledge the countries (persons and organizations) that contributed to data in the EIONET study: Albania (Loreta Sulovari of the Agency of Environment and Forestry; Erinda Misho of AEF); Austria (Stefan Weihs, Dietmar Mueller, and Sabine Rabl-Berger of the Umweltbundesamt GmbH Environment Agency Austria; Franz Buchebnerof the Bundesministerium für Land-und Forstwirtschaft, Umwelt und Wasserwirtschaft; Sebastian Holub of the Kommunalkredit Public Consulting GmbH); Belgium (Flanders) (Marijke Cardon and Els Gommeren of OVAM); Bosnia and Herzegovina (Hamid Custovic of the University of Sarajevo, Faculty of Agriculture and Food Sciences); Croatia (Andreja Steinberger and Željko Crnojević of the Croatian Environment Agency (CEA)); Cyprus (Chrystalla Stylianou and Neoclis Antoniou of the Department of Environment; Andreas Zissimos of the Geological Survey Department); Estonia (Peep Siim of the Ministry of Environment Water Department Project Bureau); Finland (Teija Haavisto of the Finnish Environment Institute); France (Véronique Antoni, Delphine Maurice, Farid Bouagal, Philippe Bodenez, and Claudine CHOQUET of the French Ministry in charge of ecology; Jean-François Brunet of BRGM; Antonio Bispo of ADEME); Germany (Joerg Frauenstein of the Umweltbundesamt); Hungary (Gabor Hasznos of the Ministry for Rural Development); Ireland (David Smith of the Environmental Protection Agency); Italy (Laura D\'Aprile of ISPRA); Kosovo (Republic of Kosova) (Gani Berisha of the Ministry of Environment and Spatial Planning, Soil Protection Sector; Shkumbin Shala of the Hydrometeorological Institute, Kosova\'s Environmental Protection Agency); Lithuania (Virgilija Gregorauskiene of the Lithuanian Geological Survey); Former Yugoslav Republic of Macedonia (Margareta Cvetkovska of the Macedonian Environmental Information Center, Ministry of Environment and Physical Planning); Malta (Christina Mallia of the Environmental Permitting and Industry Unit of the Malta Environment and Planning Authority); Montenegro (Vesna Novakovic of the Environmental Protection Agency of Montenegro); Netherlands (Versluijs C. W. and Bogte J. J. of the RIVM); Norway (Per Erik Johansen of the Klima- og forurensningsdirektoratet); Poland (Joanna Czajka of the Chief Inspectorate for Environmental Protection); Slovakia (Katarina Paluchová of the Slovak Environmental Agency; Vlasta Jánová of the Ministry of the Environment of the Slovak Republic); Serbia (Dragana Vidojevic of the Ministry of Environment, Mining, and Spatial Planning, Environmental Protection Agency); Spain (Begoña Fabrellas of the Ministerio de Agricultura, Alimentación y Medio Ambiente); Switzerland (Christoph Reusser of the Federal Office for the Environment (FOEN)); United Kingdom (Mark Kibblewhite and Caroline Keay of Cranfield University). Special thanks should be expressed to Gondula Prokop (Environment Agency Austria) who conducted the operation of data collection and analysis on behalf of ESDAC.
Figure 1 {#fig1}
Figure 2 {#fig2}
Figure 3 {#fig3}
Figure 4 {#fig4}
######
Estimated and identified PCS and CS.
| | Identified PCS and CS (a) | Estimated PCS (b) | Estimated CS (c) | Total (d) |
| --- | --- | --- | --- | --- |
| Countries | 33 | 12 | 11 | 38 |
| Surveyed population | 487,152,449 | 177,412,672 | 57,568,148 | 612,117,243 |
| Surveyed surface area (km²) | 4,460,305 | 1,552,984 | 833,188 | 5,772,075 |
| Surveyed share of total population | 79.6% | 29.0% | 9.4% | |
| Surveyed share of total area | 77.3% | 26.9% | 14.4% | |
| PCS | 1,169,649 | 739,968 | | 2,553,000\* |
| PCS/1,000 capita | 2.4 | 4.2 | | |
| CS | 127,475 | | 32,601 | 342,000\* |
| CS/10,000 capita | 2.62 | | 5.7 | |
| Remediated sites (RSs) | 58,336 | | | |
| RS/10,000 capita | 1.20 | | | |

\*Based on extrapolated data.
######
Comparison of sectoral contribution to industrial contamination against the total number of enterprises.
| Industrial/service sector (a) | Sector contribution to industrial contamination (production) (b) | Manufacturing sector (Eurostat) (c) | Number of enterprises (1,000) (d) | Number of enterprises (1,000) contributing to 1% of industrial contamination (e) |
| --- | --- | --- | --- | --- |
| Chemical industry | 8.2% | Chemicals plus rubber and plastic products | 97.2 | 11.9 |
| Metal working industry | 13.1% | Basic metals plus fabricated metal products | 381.2 | 29.1 |
| Textile and leather industry | 2.0% | Textiles plus wearing apparel plus leather | 225.4 | 112.7 |
| Wood and paper industry | 3.7% | Wood and paper | 191.8 | 51.8 |
| Food industry and processing of organic products | 5.7% | Food products plus beverages | 273.8 | 48.0 |
| Electronic industry | 1.0% | Computer, electronic, plus electrical equip. | 94.1 | 94.1 |
| Mining sites | 6.2% | Mining | 18.2 | 2.9 |
| Total | 39.9% | Total | 1,281.7 | |
[^1]: Academic Editor: Piedad Martin-Olmedo
| |
These abstracts are for talks at this event.
NEPLS is a venue for ongoing research, so the abstract and supplemental material associated with each talk is necessarily temporal. The work presented here may be in a state of flux. In all cases, please consult the authors' Web pages for up-to-date information. Please don't refer to these pages as a definitive source.
Nominal Typing for Data Languages in QCert
Jerome Simeon (IBM Research)
Most data languages (e.g., SQL) are small functional languages with rich record operations and structural types. Using those in an object-oriented context can result in complex typing issues. We describe an approach to support nominal typing in data languages through a notion of brands. The resulting type system is simple enough for reasoning and flexible enough to capture many object- oriented idioms including classes and methods, object creation, casting, multiple inheritance and interfaces. We illustrate its use on miniOQL, a SQL- like language for objects and on NRA, a database algebra suitable for optimization. The type system has been formalized in Coq as part of QCert, a certified query compiler.
Joint work with Joshua Auerbach, Martin Hirzel, Louis Mandel, and Avraham Shinnar.
Accepting Blame for Safe Tunneled Exceptions
Yizhou Zhang (Cornell University)
Unhandled exceptions crash programs, so a compile-time check that exceptions are handled should in principle make software more reliable. But designers of some recent languages have argued that the benefits of statically checked exceptions are not worth the costs. We introduce a new statically checked exception mechanism that addresses the problems with existing checked-exception mechanisms. In particular, it interacts well with higher-order functions and other design patterns. The key insight is that whether an exception should be treated as an exception is not a property of its type but rather of the context in which the exception propagates. Statically checked exceptions can be tunneled through code that is oblivious to their presence, but the type system nevertheless checks that these exceptions are handled. Further, exceptions can be tunneled without being accidentally caught, by expanding the space of exception identifiers to identify the exception-handling context. The resulting mechanism is expressive and syntactically light, and can be implemented efficiently. We demonstrate the expressiveness of the mechanism using significant codebases and evaluate its performance. We have implemented this new exception mechanism as part of the new Genus programming language, but the mechanism could equally well be applied to other programming languages.
Joint work with Guido Salvaneschi, Quinn Beightol, Barbara Liskov, and Andrew C. Myers.
Garbology: A Study of How Java Objects Die
Raoul L. Veroy (Tufts University)
How do objects die? In this paper, we present an analysis framework that can precisely characterize the ways in which Java programs dispose of objects. Our goal is to provide data that complements the existing object demographics literature, which is mostly focused on object allocation and lifetime characteristics. A more complete picture of object lifecycles is crucial to developing new garbage collection algorithms that can take advantage of application-specific information. We present a novel technique that uses trace-based simulation augmented with reference counting. Our analysis is able to identify groups of objects that die simultaneously and can compute the precise program point where these events occur. Furthermore, it can determine the specific program actions that cause objects to become unreachable. We classify object deaths in several different ways, and we present empirical results from running our analysis on the Dacapo, SPECjbb2005, and SPECjvm98 benchmarks using traces from the Elephant Tracks tracing tool.
Joint work with Samuel Z. Guyer.
Assessing the Limits of Program-Specific Garbage Collection Performance
Eliot Moss (University of Massachusetts Amherst)
We consider the ultimate limits of program-specific garbage collector performance for real programs. We first characterize the GC schedule optimization problem using Markov Decision Processes (MDPs). Based on this characterization, we develop a method of determining, for a given program run and heap size, an *optimal* schedule of collections for a non-generational collector. We further explore the limits of performance of a *generational* collector, where it is not feasible to search the space of schedules to prove optimality. Still, we show significant improvements with Least Squares Policy Iteration, a reinforcement learning technique for solving MDPs. We demonstrate that there is considerable promise to reduce garbage collection costs by developing program-specific collection policies.
Foundations of type-directed code inference: which types have a unique inhabitant?
Gabriel Scherer (Northeastern University)
Type information can be useful to guess some parts of the program that the user does not wish to write explicitly. This is the basic idea supporting coercions, Haskell's type classes, and Scala's implicits. I study a foundational question underlying this problem: when is it the case that a type has a unique inhabitant? In other words, when is the program fragment to be guessed *fully determined* from the type information? To answer this question, we take ideas from proof search and logic (in particular the notion of 'focusing'), and combine them with ideas about program equivalence in (pure) functional programming. In this talk, I would like to explain the motivation, and give an accessible introduction to 'focusing', a technique from logic used to answer questions from programming.
Multirole Logic as a Foundation for Global Coordination
Hanwen Wu (Boston University)
Session types are protocols describing valid communications among parties. Two-party sessions enjoy duality, in which one party's action is always the dual of the other party's (e.g., send/receive, choose/offer). Correspondences to logics are being actively studied, in which cut reduction is communication and the duality in classical linear logic captures the duality in two-party sessions. However, in multiparty sessions, duality no longer holds, and the two-premiss cut rule is insufficient to express multiparty communication. Several prior studies proposed multi-cut/coherent-cut as a generalization, with coherence as a side condition guarding cuts. Instead of resorting to classical logic, we propose a new form of logic, Multirole Logic, where propositions are not limited to only two interpretations (itself and its negation), but carry multiple interpretations annotated by a set of roles. Such a generalization naturally gives rise to a (complete) cut rule for multiple propositions and a (partial) cut rule that leaves a residual proposition. We prove the admissibility of the cut rule, thus generalizing the celebrated results of Gentzen. We report that our Multirole Logic is much more general and that it provides a foundation for global coordination, including but not limited to multiparty session types.
Joint work with Hongwei Xi.
PLANALYZER: Automatic Analysis of Online Field Experiments
Emma Tosch (University of Massachusetts Amherst)
Online experiments are widely used to evaluate changes to Internet services and inform design and engineering decisions. To manage the complexity of performing experiments at scale, large companies employ frameworks for designing, managing, and logging experimental data. While such systems can help prevent many pitfalls, the design, analysis, and verification of all but the most basic experiments require substantial domain expertise. This paper presents PLANALYZER, a static analysis tool that aids in the verification and analysis of online experiments. PLANALYZER incorporates the rules of causal inference to devise proper estimators for advanced experimental designs. PLANALYZER targets the PlanOut language, though its techniques would be applicable to similar experimental design languages. Its static analyses identify whether a PlanOut program implements any valid experimental contrasts, and aid analysts by enumerating these contrasts.
Probabilistic NetKAT
Steffen Smolka (Cornell University)
NetKAT is a language for modeling, programming, and reasoning about software-defined networks. It comes with a rich theory and automated tools including a sound and complete axiomatic system, a compiler, and a decision procedure for checking qualitative network properties automatically. A recent probabilistic extension of NetKAT holds great promise: it enables programming randomized routing algorithms, modeling probabilistic features such as link failures, and reasoning about quantitative properties such as expected link congestion, probability of packet delivery, etc. While the denotational semantics of probabilistic NetKAT has been worked out, little is known about how to compute in this model algorithmically. There is no decision procedure or compiler. This work aims to develop the theoretical foundations to enable such tools: a theory of approximation and a coalgebraic (automata) theory for probabilistic NetKAT.
This is ongoing work with Nate Foster and Dexter Kozen at Cornell and with Alexandra Silva at University College London.
Toward Compositional Verification of Interruptible OS Kernels and Device Drivers
Newman Wu (Yale University)
An operating system (OS) kernel forms the lowest level of any system software stack. The correctness of the OS kernel is the basis for the correctness of the entire system. Recent efforts have demonstrated the feasibility of building formally verified general-purpose kernels, but it is unclear how to extend their work to verify the functional correctness of device drivers, due to the non-local effects of interrupts. In this paper, we present a novel compositional framework for building certified interruptible OS kernels with device drivers. We provide a general device model that can be instantiated with various hardware devices, and a realistic formal model of interrupts, which can be used to reason about interruptible code. We have realized this framework in the Coq proof assistant. To demonstrate the effectiveness of our new approach, we have successfully extended an existing verified non-interruptible kernel with our framework and turned it into an interruptible kernel with verified device drivers. To the best of our knowledge, this is the first verified interruptible operating system with device drivers.
Joint work with Hao Chen, Zhong Shao, Joshua Lockerman, and Ronghui Gu.
Using Anomaly Detection to Find Bugs in Control Software
Hu Huang (Tufts University)
Modern commercial aircraft are heavily dependent on large software systems for many of their essential functions. However, bugs may cause software failure and endanger the lives of passengers and crew. Software development processes in aviation emphasize near exhaustive testing but not all bugs can be found. Additionally, current bug detection methods are not suitable for bugs that appear during run-time as there exist complex relationships between program variables that are not detectable by assertions or simple invariants. In our work, we make use of a machine learning approach for detecting bugs at run-time. However, we depart from previous work by leveraging program slicing to obtain a set of variables for monitoring. This talk will be on preliminary work that we have done so far on this idea.
Adel: A New Way to Program Microcontrollers
Sam Guyer (Tufts University)
Cheap microcontrollers, such as the Arduino, have become wildly popular for embedded applications ranging from "smart homes" to wearable devices to electronic art. Unfortunately, programming these tiny microprocessors is surprisingly difficult. The central problem is that the applications often consist of logically asynchronous tasks -- for example, blink a light until a button is pushed -- but there is no support for concurrency (e.g., threads). Many of these microprocessors do not have the resources necessary to support an operating system of any kind, so composing multiple tasks requires programmers to implement a kind of ad hoc scheduler that greatly complicates the code. In this talk I will present a new "programming language", called Adel, for programming microcontrollers. Adel provides a limited form of concurrency based on coroutines and is cheap enough for use in even the most bare-bones systems. Adel is not a full programming language -- it is implemented entirely as a set of C macros and requires no changes to the existing Arduino tool chain. Most importantly, it provides compositionality: asynchronous tasks can be composed in a simple and intuitive way without compromising modularity and clarity. | https://nepls.org/Events/29/abstracts.html |
Short answer: English probably works differently than we think it does
Long answer: People use their native language effortlessly provided they don’t have something physically wrong with their brain or any other sort of mental impairment.
Language is an amazingly complex thing. We have adjectives, nouns, pronouns, adverbs, verbs, conjunctions and all sorts of other things. When children grow up, they just hear the language around them and they just pick it up. How they actually do this is still a matter of the most cutting-edge research, after all the years linguistics has been around.
The way people speak tells us something about not only English itself but also how people think. Let’s look at the sentence “We need to move the meeting from 1pm to 2pm”. This makes perfect sense to us, but it actually reveals something about our cognition.
We view a meeting as an object and that it can be moved. But a meeting actually doesn’t exist physically. It is just an agreement amongst some people to meet at a certain time. Time itself is another concept effortlessly handled by the human mind and language but imagine someone with no concept of time. You couldn’t move a meeting because you couldn’t refer to “later”. You could only refer to “now”.
Another way we can see how language reflects how the mind works is how words carve out their own space. Mend means something slightly different to repair. Hurt means something different to inflict pain. Hound and dog are also different. People use words and those that hear them interpret them and use them to try to understand what other people are saying.
They use their understanding of the word to send their own messages and back and forth words go from person to person. The process is not perfect and no word is fixed in meaning but shifts slightly over time.
This is because people interpret words slightly differently as they hear them and use them differently to other people. Over time these slight differences add up, and a word like "silly", which is cognate with German "selig", once meant "blessed". Words change meaning over time because of people. How meanings change over time gives us an insight into the mercurial workings of the human mind.
Past tense forms of words have also changed over time. "Sneaked" used to be the way people made "sneak" into the past tense. Now there is "snuck" because people looked at "stick" and "stuck" and by analogy made "sneak" and "snuck". Constructions formed by analogy like this are all over the place. It's another example of how the human mind processes and uses language.
Language is not immovable, but rather a fluid and ever changing thing. People take in language from around them and instinctively work out the rules of the system they are using. People's idiosyncratic interpretations of words and structures make small changes to words and structures, and language slowly changes over time.
Now, what does this have to do with "me and my friend"? Well, by a certain way of looking at things, "me and my friend" even at the beginning of a sentence is perfectly alright. I know what you are thinking: "I was taught that it has to be 'my friend and I'" and "No one says 'Me went', so you can't say 'Me and my friend went'". There are lots of ways of analysing language. The people who use these arguments are merely using their own line of reasoning and that is perfectly ok. I am merely showing a different way of looking at things.
I have never liked the demonisation by some people of the construction “me and my friend”. As I have tried to make clear in the first part of my article (and by providing many examples) language is an organic entity invented and changed and kept alive by the minds of people in the world. The sentence “That way of speaking is wrong” when speaking of native speakers is simply absurd to me.
Would people look at a penguin and say “That bird should be able to fly. A flightless bird is just wrong” or “That animal has a trunk. No animal should have a trunk. It is just wrong”. I think most people would say that is a silly thing to say. I think it is because many people in literate societies hold up the written word as the best version of their language and end up disliking divergences from that version of the language.
But only about 200 languages in the world are regularly written out of the about 7000 languages in the world. Language is spoken, words are invented, die out, new constructions come in and old constructions get forgotten. The question should not be “Why do people say “me and my friend”?” but “What does the construction “me and my friend” say about English?”
Even after all these centuries of studying language, there is still so much more to be learned. People pick up language effortlessly and speak it effortlessly, yet it is so remarkably complex. It is a bit like walking. You are never taught to walk, you just walk. You aren’t taught to speak, you just speak. Just being able to do something does not always mean you cognitively know how you do it. So my point is, you can actually speak a language perfectly well and still not know how it actually works.
So let’s get to the question at hand, if “me and my friend” is perfectly valid, even at the beginning of a sentence, what does that tell us about English?
When scientists discover a new type of dinosaur in a dig, they might tell the world “this changes everything we know”. Well, I don’t think this construction in English goes that far, but it does fly in the face of what a lot of us have been taught.
So why then has “me and my friend” been so derided? Because it violates a so-called law where all elements in a subject must be in the nominative case. I say so-called because clearly this law is being violated and it is not out of ignorance. Just like the flightless bird violating the idea that all birds fly, this English construction should not be derided but rather it should lead people to ask, “why is English behaving in this way?”
Should a new species be discovered, scientists would immediately ask, “what can the emergence of this species tell us about their environment and about natural selection?”
In this case "me and my friend" is pointing to some trait of English that is a bit different from the languages around it. That construction simply does not appear in other languages. Yet it is popping out of the mouths of many English speakers, which, following from the animal analogy, should tell us that something is going on.
“me and my friend” is what is called a compound subject. The whole construction is considered a subject, but it is made up of a number of nouns. A single noun would just be a subject, but two or more creates a compound subject.
“Ego et rex meus” is a compound subject from Latin meaning “me and my king”, or literally “I and my king”. So in Latin, they clearly follow this rule that all constituents of a compound subject must be in the nominative.
What is this nominative and accusative?
Well, in English we say “I went to the store” but “He gave it to me”. “I” is the nominative form and “me” is the accusative form. When a noun is in subject position, it takes the nominative form and when in object position it should take the object form. So “He saw my friend and me” is fine because “my friend and me” are in object position so they both take the object form.
But if this was the rule then no native speaker would ever say “my friend and me” at the beginning of a sentence. No native speaker EVER says “Me go-ed to store” or “Us is here”. Clearly there are certain patterns that are followed by native speakers when using pronouns. “my friend and me” is an anomaly only if you look at it as an anomaly. It goes against what would appear in another language, but English is not Latin or any other language that would never use the equivalent of “my friend and me”.
When someone knocks on the door, and you ask “Who’s there”, you can reply “Me”. In Swedish though, people say “Det är jag” which literally translated is “It is I”. The fact that people say “me” in response in English tells us that cases behave a bit differently in English.
So why do people say “me and my friend” even at the beginning of a sentence? Because in a compound subject, the role of the compound subject itself (whether it be at the beginning of the sentence or at the end of the sentence) does not dictate the forms needed in the actual compound subject.
When looking at a construction used by native speakers we need an explanation that actually comes up with a reason for something happening and doesn’t just dismiss it as a mistake. English treats pronouns differently than other languages. When a subject such as “I” gets a noun or another pronoun added to it, the rules change. “I” becomes “me and my friend”. This compound subject can then be used anywhere in the sentence, such as “He saw me and my friend”.
In my view, this construction is common enough and consistent enough to be considered a proper part of the language and shouldn’t be looked down on. People who look down on this construction are using the standards of other languages which is never the right approach. Each language has its own history and its own ways of doing things.
But I also understand that we don’t understand. By that I mean that we don’t really understand language very well and our attempts to understand it have sometimes created theories that don’t fit 100% with reality. Trying to fully apply the nominative accusative system in the same way it is used in Latin sometimes caused perfectly natural English to be considered a mistake.
When people are taught a certain way it changes their speech patterns, and in some cases this leads to hypercorrection, where people say “He saw my friend and I”, which actually violates the Latin-derived rule many school teachers teach. But again, language is a part of culture, and teaching is part of culture too. “My friend and I” is just as much a product of human cognition as “my friend and me”, and I won’t spend the rest of this article looking down on “my friend and I” in turn, because that would be a bit hypocritical after telling people not to judge.
At the very least I would like to get people to look at speech coming from native speakers with a bit more of an open mind and not to immediately condemn certain forms as wrong. Language is weird and wonderful, and the more we can have fun with it rather than making it a chore, the more we can begin to discover what language can really do and what it means to us.
One of my great regrets in life is that I have no artistic talent. So when I realised that Alice May, author of Accidental Damage, is both a writer and an artist I had to ask her a bit more about those roles and I’m delighted she agreed to write a guest post for Linda’s Book Bag.
Accidental Damage is available for purchase in e-book and paperback here.
Accidental Damage
If you think the normal school run on a Monday is entertaining you should try doing it from a tent in your back garden surrounded by the jumbled up contents of your entire home. It is vastly more diverting.
Our heroine has survived the sudden collapse of her home – or has she?
Certain events two and a half years ago led her to deliberately destroy an important piece of herself, hiding away all remaining evidence that it ever existed.
What happens when she decides to go looking for it? Does she really deserve to be whole again?
Inspired by a true story, this is an account of one woman’s secret guilt and her journey in search of forgiveness!
Artist vs Author
The Similarities and Differences of the Two Creative Processes
A Guest Post by Alice May
As an exhibiting artist for over a decade before I embarked on my fledgling writing career, I hadn’t thought particularly about the similarities and differences associated with the two creative processes until Linda asked me about it. She wondered if the painting process ever informed the writing or were they two totally separate entities.
From the point of view of my first novel Accidental Damage I would have to say that art has been a massively integral part of the whole development of the main plotline. The story is written retrospectively from a mother’s point of view. She is an artist who is using her painting as a security blanket to help her work through the feelings of guilt she has about her role in events two years previously that led to her and her family (husband and four children) suddenly becoming homeless. In a series of flashbacks we learn exactly how they became homeless and how they coped with the situation. At the same time we see the mother, in the present, reacting to each stage of the remembered story with a new piece of art.
The concept of art as a therapy for emotional and/or psychological trauma is an intrinsic part of the story.
Although Accidental Damage is mainly a work of fiction, it was inspired by a true story (yes, we really did live through a home-collapse disaster) and so the artwork described in the book actually does exist. This fact significantly helped to lend the writing authenticity, as there is a very deep connection between the paintings and the emotional journey that the central character takes throughout the story. It was also a nice touch to be able to use one of those pieces of art for the design of the cover for the book.
On a more theoretical level though, while the development of either a painting or a plotline seems to follow a similar path, there is not often an opportunity for one to actually overlap with the other. This is probably what made writing Accidental Damage such fun.
With the art, I frequently see ideas around me in day to day life that I know I need, but at the time I often don’t know why they are important. For example a particular shape, texture or colour combination might catch my attention. It’s a bit like discovering the pieces of a puzzle. I keep these elements on a mental mood board until I know what I want to do with them. It may be days, weeks or even months before the final piece of my little puzzle presents itself (often most unexpectedly) and then, like a catalyst, this triggers the whole concept to evolve and I find myself running to my easel to start throwing paint around. It gets messy very quickly!
In a similar manner, with my writing I find I am constantly making mental notes. Sometimes real notes too – I am often to be found scribbling frantically in a jotter in strange places. I will acquire character traits, accents, places, colours, music or events and store them away for later. At some point, over time, these different elements coalesce into the plotline and substance of the next book. It is a fascinating process and very exciting as I never quite know what is going to happen next.
In both cases, eventually the painting or the plotline that is inspiring me will take over my brain completely as the whole picture or story starts to develop. All else fades into the background until the piece in progress is finished.
The only real issue I have with it all is the fact that I haven’t yet worked out how to paint and write at the same time. Writing about painting in Accidental Damage was probably about as close to doing that as is possible.
I guess we can’t have everything can we?
About Alice May
She says she is fortunate enough to be married to (probably) the most patient man on the planet and they live in what used to be a ramshackle old cottage in the country, where her conservatory is always festooned with wet washing and her kitchen is full of cake.
Alice loves listening to the radio in the mornings.
You can find out more about Alice on her website. You can follow her on Twitter and you can find her on Facebook.
Melbourne CIO Virtual Executive Summit
10 November, 2021 | 9:00-13:30 AEDT 11 November, 2021 | 9:00-11:45 AEDT
Collaborate with your peers
Come together with your peers virtually to tackle top business challenges through peer-driven content and discussions at the Melbourne CIO Virtual Executive Summit.
Join your peers to discuss the most critical issues impacting CIOs today:
Seizing momentum – influencing and leading the enterprise
Thriving through uncertainty - empowering people and delivering value
Advancing and scaling strategic, outcome-based technologies
Co-Chairs
Justin Davies
Ovato
Chief Information Officer
Melinda Duke
RMIT University
CTO
Tracey Evans
Seek
CIO
Barry Magsanay
Treasury Wine Estates
Global Head of Information Security & Americas IT Services
Matt Mueller
Iluka Resources Limited
Chief Information Officer
What to Expect
Connect with your CIO community through a variety of different session formats at the upcoming Virtual Executive Summit. You'll have the opportunity to listen, engage and create lasting relationships with like-minded peers.
Meet the Speakers
Don't miss this opportunity to meet with CIO practitioners and industry thought leaders who shared their insights on the agenda. Come with questions and get ready to meet new friends in this casual session designed to foster peer connections and collaboration in the Melbourne community.
Agenda
Coming Soon
We are currently working with C-level leaders in your community to build the most timely and relevant agenda. All of our sessions, topics and discussions are driven by CIOs, for CIOs to ensure the most valuable experience.
Melbourne CIO Programme Manager
For inquiries related to this event, please reach out to your dedicated program contact.
Our ecommerce accessibility audits are designed to evaluate how well your website meets the needs of users with disabilities.
Are you concerned that your site might be frustrating or impossible to use for people with disabilities? Have you received a legal letter stating that you are violating accessibility guidelines? We can help you better understand these issues and address them swiftly and properly.
During our accessibility audit, we will analyze your site against the W3C’s Web Content Accessibility Guidelines 2.1 with a conformance goal that meets your needs. Our process includes both automated and manual reviews in accordance with best practices.
Our accessibility audit includes:
- Roadmap of specific recommendations to achieve compliance
- Recommended implementation timeline
- Best practice recommendations for your internal team
- Estimates for getting your site compliant
- Where possible, we will prioritize the most impactful changes for you to implement first
Once our accessibility audit is complete, we’ll help you implement our recommendations, starting with the most impactful changes.
Have a listen to our latest podcast episode discussing the ins and outs of ecommerce accessibility audits.
Cesar Torres is an Assistant Professor of Computer Science & Engineering at The University of Texas at Arlington. As a researcher, Cesar specializes in Human-Computer Interaction (HCI) synthesizing new media and craft theory into the software and hardware design of creative tangible user interfaces. He has received multiple best paper awards at top venues within HCI and was recently awarded the NSF CRII Grant and UTA Interdisciplinary Research Grant. He serves as a regular program committee member for ACM Designing Interactive Systems (DIS), User Interface and Software Technology Symposium (UIST), Symposium on Computational Fabrication (SCF), and Human Factors in Computing Systems (CHI). He holds a Ph.D. in Computer Science from UC Berkeley, and a B.A. in Art Practice and B.S. in Computer Science from Stanford University.
The Hybrid Atelier
Meet the members of the The Hybrid Atelier.
Staff
Ph.D. Students
Akib Zaman is pursuing his Doctoral degree in Computer Science at the University of Texas at Arlington (UTA). He completed his BS in Computer Science from UTA. His research interests lie in the intersection of HCI and Machine Learning. Currently, he is working on audio-based classifiers in the makerspace domain and debugging support tools in the programming domain.
Shreyosi Endow is a second year Computer Engineering Ph.D. student at University of Texas at Arlington. She graduated with a Bachelor's degree in Computer Engineering from University of Texas at Arlington in May 2020. Her research interests are in digital fabrication, IoT environments, toolkit design for hybrid makerspaces, and wearable technology. Personal Website
Nasir Rakib is pursuing his Ph.D. in computer science at UTA. Before enrolling for his Ph.D., he completed his bachelor's in textile engineering from the University of Chittagong, Bangladesh, and his master's in retail management from Texas Tech University. His research interest lies in the application of textiles in the HCI field. More
Masters Students
Jeremy Scidmore earned a BFA at the School of the Art Institute of Chicago and later returned to SAIC to study Arts Administration and Policy. While in Chicago he became committed to the idea of art as activism, developing and advising an array of community-based programs for underserved neighborhoods. His concurrent personal art practice has covered a wide array of glass media (neon, hot glass, kilnforming, glass printing), as well as installation, video, and sound. He continues to balance these studio-based explorations with community activism through arts advocacy and teaching.
Scidmore is currently based in Dallas/Fort Worth where he is Studio Technician and Faculty in the Glass Department at the University of Texas Arlington. Additionally, he has been an instructor, lecturer, and adviser at a range of academic, nonprofit, and for-profit institutions including Google, Solar City, The Crucible, Chicago Public Schools, Street Level Youth Media Group, School of the Art Institute of Chicago, University of Illinois Urbana, Pilchuck Glass School, Pittsburgh Glass Center, Public Glass, Ox-bow School of Arts and Residency, California College of Arts, Bullseye Glass Co., The Art Institute of California, Urban Glass, and North Land Creative Glass (Scotland).
Undergrad Research Studios (UGRS) Fellows
Cherryl Maria Bibin is pursuing her Bachelor of Science in Computer Science Engineering at the University of Texas at Arlington. She is passionate about the applications of Artificial Intelligence and Machine Learning in various domains to improve the quality of human life. Her interests include the applications of mathematics and computer science and the impact they can have on society. She is passionate about using technology to improve healthcare, especially for less privileged communities, by incorporating machine-assisted intelligent systems. She has also developed a mobile application, cAppAble, which was aimed at helping underprivileged people and people with disabilities to get job opportunities by connecting them to NGOs and corporates. She believes that we are preparing for a world where humans and machines will work together and that we must prepare ourselves to embrace this change. She is also a trained Carnatic singer and is very passionate about music. Her hobbies include debating and watching investigative thrillers, and she is a huge fan of Sherlock Holmes.
https://www.linkedin.com/in/cherrylmbibin/
Long Nguyen is a junior pursuing a Bachelor's in Computer Science. He loves anything art and design, focusing especially on graphics design. Currently, Long is exploring expressive typography and art direction. For research, he hopes to inspire new ideas through creative or artistic interactions. In his free time, Long experiments with cooking, play some games, paints, draws, and works out. He loves eating good too.
Thomas is a tinkerer at heart and is currently working on his Bachelors in Computer Engineering. His hobbies and interests include Robotics, Virtual Reality, Music Production, Stage & Light Production, Visualizations, L.E.D. Animation & Applications, 3D printing, Home Renovation, Car Modifications, Woodworking, Cosplay/Costume Weapon & Armor Fabrication, Photography/Videography, Cooking, Hiking, and Camping, to name a few. He has an affinity for Art & Technology and is always conjuring ideas on how to merge both aspects. In his free time he likes to travel, search for exotic delicacies to try, and attend festivals and conventions.
An Nguyen is a Computer Science senior at the University of Texas at Arlington. She is also the Vice President of UTA's Society of Asian Professional Scientists and Engineers. This past summer, she interned at Capital One as a Software Engineering Intern. She is currently exploring the field of UI/UX design, IoT, and robotics. In her free time, An enjoys staying active, practicing for her local church’s dance team, and playing board games.
Andy Vu is a computer science senior, with a research interest in AI, machine learning, and data science. During his free time he enjoys watching films, working out, and reading.
Anthony Gomez is a senior pursuing a Bachelor’s in Biomedical Engineering. Anthony is currently serving as the Public Relations officer in the Society for Hispanic Engineers, where he can make visual graphics for communication to the student body. Anthony’s research interests include biomechanics, data analysis, and wearable technology. In his spare time, Anthony enjoys going for runs, supporting his favorite soccer team and listening to music.
Edmond Doan is currently an undergraduate Senior at The University of Texas at Arlington and his major is Computer Science. His research interests are in Human-Computer Interaction specifically in the areas of UI/UX. Edmond also has interests in blockchain technologies and he spends most of his free time learning about future trends related to NFTs/cryptocurrencies. His personal hobbies include exercising, anime, watching sports (football/basketball), and playing the guitar. Website LinkedIn
Emmanuel Azobor is a senior graduating with a Bachelors in Industrial Engineering.
He is a researcher interested in the intersections between Human-computer interactions and infrastructure. They enjoy studying machine learning, big data, internet of things (IoT), additive manufacturing, and edge computing. Hobbies include mixed martial arts, cooking, and rock climbing.
Kai is a Sophomore pursuing a Bachelor of Science in Computer Science with a minor in Applied Design and Technology. In their free time, they enjoy doing photography, modeling, composing music, filmmaking, and painting. Their research interests lie in the crossover between Computer Science and Fashion Design.
Temitayo is a sophomore pursuing a Bachelor's in Computer Engineering at the University of Texas at Arlington. She is a member of the UTA Honors College, the Alpha Lambda Delta Society, and the UTA Volunteers. She previously worked with the UTA Hybrid Atelier as a participant in the Throwing Ceramics Study. Her research interests lie in augmented tutorials (as the world leans more towards remote learning), the Internet of Things, Artificial Intelligence, Robotics, and Wearable Technologies. In her free time, she enjoys listening to music, drawing, painting, and photography.
Tyler Do is a senior pursuing a BS in Computer Engineering at the University of Texas at Arlington. In his free time, he loves trying new foods, cooking, playing sports, and spending time getting to know new people. In his corporate roles, he previously worked on projects that involved Item Management Systems, Database Administration, and Web Content Development. His interests include Human-Computer Interaction (HCI), Autonomous Robotics, Operational Research, and wearable technologies.
JOAG Job Shadowing Program Flyer
Job shadowing is a professional, career development and exploration activity that offers the opportunity to spend time with a more senior professional currently working in a person’s field of interest. Junior officers who shadow get to observe the day-to-day activities of someone in the current workforce and also get a chance to actively engage with more senior officers. Senior officers are able to offer their unique experience and insight with junior officers in an enriching manner. Job shadowing is a brief commitment that can have a significant impact on participants. Beyond allowing participants to increase their understanding of the responsibilities of a senior officer, it also links senior officers with junior officers with the potential for future mentorship at their own discretion.
Additional information is provided in the JOAG Job Shadowing Program Guidance Document.
Interested in participating in the JOAG Job Shadowing Program? Complete the appropriate form and submit the completed form to [email protected]
If you have questions about the program, please contact LCDR Nancy Tian and LCDR Ruby Tiwari.
Page Last Modified on 9/24/2018
This page may require you to download plug-ins to view all content. Persons with disabilities having problems accessing any PDF or document on this page may call 1-888-225-3302 toll free for assistance.
Kanana: Our full report
Situated in the heart of Botswana's Okavango Delta, adjacent to Moremi Game Reserve, Kanana Camp nestles among towering jackalberry (ebony) trees, knobthorn acacias and sausage trees on the edge of a permanent stretch of channel. Kanana's environment is a mix of forests and open seasonal floodplains, combined with permanent channels and lush flood meadows – and it's this mix which leads to it being able to offer a very full range of activities.
Kanana's main lounge and dining area is arranged in a circular fashion on raised decks around an impressive ancient strangler fig tree which grows up through the middle of the main area. This open-sided structure is essentially split into three sections, including two comfortable seating areas with sofas and a selection of coffee-table books. One of these incorporates the bar area, where guests are invited to help themselves from a sideboard containing a selection of spirits and wines, and a wooden cupboard housing a large fridge. In the middle of these two lounge areas is the dining area with a long dining table – where everybody normally eats together – as well as a tea- and coffee-making station. The central area was recently refurbished in October 2015 and is looking very smart.
To the rear of the central area, steps lead down to a sandy firepit with views out onto a grassy floodplain, where at the time of our last visit the ‘resident’ elephant was feeding. With the arrival of the flood this area fills with water and the water activities are conducted from the jetty here. The firepit is often a popular gathering spot to swap stories after dinner.
Although there is no curio shop per se, a small selection of curios for sale is displayed in two glass-fronted cabinets. Just a short walk from the main area, there is a really nice pool deck with a larger-than-average-size pool for a camp in the Okavango.
The nine tented chalets at Kanana Camp are large, structured tents raised on wooden decks. Spread out along wooden walkways and elephant dung pathways (which are much nicer than they sound!), all overlook the channel or seasonal floodplains in front of camp. Each chalet is constructed around a solid frame of thick wooden beams, around which thick canvas is stretched, giving the feel of a much more substantial wall. We found that although the chalets look simple from the outside, the interiors are spacious, airy and attractive. At the front of each chalet is a shaded deck with two comfortable wooden chairs. The front 'wall' is almost fully meshed, with sliding doors. This, together with the high roof and mesh windows running along either side, lends an open and airy feel to the rooms.
Taking centre stage in each chalet are three-quarter-size twin beds – which can be made into a double on request – beneath a large walk-in mosquito net. On the writing/vanity table is information about the camp and area, and a canvas wardrobe incorporates a small key lock safe and a luggage rack. On the opposite side of the room are a couple of armchairs and a floor-standing fan. Polished hardwood floorboards and colourful oriental rugs add warmth to the room. We particularly loved the thoughtful little touches like the tin of homemade biscuits and the whisky decanter that appeared in the evenings.
The spacious en-suite bathroom is at the back of the chalet, reached through a wooden door. The new glass-fronted walk in shower enclosures give the room a modern and light feel. There is also a flushing toilet and 'his and hers' washbasins, plus a good selection of organic and environmentally friendly complimentary toiletries.
Kanana also has a Sleep Out Deck, where guests can enjoy a night under the dazzling African sky. The deck is a split-level timber platform with extensive views over the surrounding floodplain. On the upper level are a bed and a table and chairs, and on the lower level are a toilet and sink. Near the foot of the platform is an old termite mound where there are a couple of director's chairs, and a cosy fire is lit so you can sit and enjoy the night sounds of the bush. For security, an armed guide is stationed nearby in a separate tent. The Sleep Out Deck is about a fifteen-minute drive from camp and guests wishing to take advantage of it are normally driven out after dinner, then woken up in the morning with tea and coffee and driven back to camp for breakfast.
Activities at Kanana are as varied as the landscape around the camp. They include day and night 4WD game drives, mokoro trips and motorboat excursions, as well as bush walks with an armed guide. On our last visit in November 2015 we did a mokoro trip that was truly memorable. Besides a host of fabulous water birds including pygmy geese, lesser jacana, squacco herons, and kingfishers, we had a lucky close-up sighting of the rare sitatunga antelope. The highlight however was watching a herd of elephants cross and drink from the channel in front of us - seventy in total! On a previous visit, we did a bush walk which also proved excellent and a really interesting way to learn more about the surrounding environment. You can also take a rod out on a boat trip, and try your hand at fishing.
Kanana also has exclusive access to what is one of the Okavango's largest heronries, about 35–40 minutes by boat from the camp. The boats weave and wind their way through a series of papyrus- and reed-lined channels before reaching the breeding site for pink-backed pelicans, yellow-billed and marabou storks, grey herons, ibises and assorted egrets. This remarkable birding spectacle is usually best between mid-July and October. On our visit in November 2015, we were unfortunately unable to access the heronry, as water levels were too low.
When the water levels in the Delta rise each year, the seasonal floodplains and channels around Kanana are usually filled – affecting the concentrations of big game in the area. Although we thought the area very beautiful on our game drive, game (even plains game) was thin on the ground. This matches our observations of the game densities seen from other camps in this private reserve (Nxabega and PomPom) where, between around May and November – the focus is firmly on water-based activities rather than game-viewing.
Our view
The environment around Kanana is particularly beautiful, the camp's guides are generally very good and the water activities are excellent. If you visit between around July and October, we'd be surprised if even those with only a passing interest in birds failed to be impressed by the sheer magnitude and variety of birds at the heronry. However, this isn't a camp for a first-rate game safari between around May and November, when water levels are high.
Botswana expert
Geographics
- Location
- Okavango Delta Safari Reserves, Botswana
- Ideal length of stay
- Two to three nights is usually perfect here. Most visitors will use Kanana for water activities, particularly during the dry season, and combine it with a good camp for big-game and land-based safaris.
When Kanana is combined in the same itinerary with one of its other sister camps – Shinde, Footsteps Across the Delta or Okuti – there may be a slightly reduced rate. Please ask us for more details and whether this might apply to your trip.
- Directions
- The camp is accessed by light aircraft, followed by a ten to fifteen minute transfer from the airstrip.
- Accessible by
- Fly-and-Transfer
Food & drink
- Usual board basis
- Full Board & Activities
- Food quality
- Meals at Kanana are usually sociable affairs around a communal table – although special requests can usually be catered for.
On our most recent stay the food was really delicious! The camp is able to cater for most dietary needs – vegetarian, vegan, coeliacs etc – but they must be informed well in advance.
The day starts with a wake-up call, when tea, coffee, juice or hot chocolate is served to your tent. Breakfast is served before the morning activity. We had a choice of cereals, fresh and stewed fruit, and toast, as well as a full cooked option.
For brunch, after the morning activity, we were served yummy spare ribs, pepperdew quiche, avocado and papaya salad, bean salad, green salad, homemade bread and a cheese platter.
Afternoon tea before departing on the afternoon activity includes a choice of sweet and savoury treats served with iced tea, homemade lemonade, tea and coffee. We loved the caprese tartlets and the chocolate banana cake.
Dinner is generally three courses and once again did not disappoint. For starters we had a very tasty vegetable soup and homemade bread. This was followed by a mouth-watering roast beef, alongside a mixture of roast vegetables, and finished off with lemon tart.
- Dining style
- Group Meals
- Dining locations
- Indoor Dining
- Further dining info, including room service
- There is no room service.
- Drinks included
- Soft drinks, bottled water, spirits, local beers and a selection of (generally) South African wines are included. Imported wines and spirits and champagne cost extra – and may need to be requested in advance.
Special interests
- Family holidays
Kanana has a more relaxed child policy than most of Botswana's camps: at many camps, families with children under 12 years have to book and pay for a private vehicle, but at Kanana this is not required, although families may still choose to do so for greater flexibility.
- See ideas for Family holidays
- Birdwatching
- Kanana offers motorboat access to an enormous nearby heronry. From mid-July onwards, many migratory water birds come to nest, including yellow-billed, open-billed and marabou storks, reed cormorants, pink-backed pelicans, grey herons and sacred ibis.
- See ideas for Birdwatching
- Walking safaris
- Guests at Kanana Camp can do bush walks with an experienced, armed guide. These are usually slow walks, often following old hippo paths through the bush. The guide will explain tracks and signs with a view to giving visitors a deeper understanding of the environment.
- See ideas for Walking safaris
Children
- Attitude towards children
- Kanana has a large family tent with two bedrooms and a shared en-suite bathroom. There is enough space to sleep a family of five.
- Property’s age restrictions
- Kanana has a minimum age of seven years and does not require families to book a private vehicle.
- Special activities & services
- The camp will prepare special meals for children on request.
- Generally recommended for children
Kanana has a more relaxed child policy than most other camps in the Okavango Delta, where families with children under 12 years usually have to book and pay for a private vehicle. Kanana also allows triple rooms, which can make it comparatively economical for a small family. However, because children will generally accompany adults on all activities, we suggest that families with younger children may want to consider booking a private vehicle, which will allow for much greater flexibility.
- Notes
- Both the camp and the pool are unfenced. The camp is also in close proximity to water. Children must be under the constant supervision of their parents at all times.
Our travellers’ wildlife sightings from Kanana
Since mid-2018, many of our travellers who stayed at Kanana have kindly recorded their wildlife sightings and shared them with us.
Communications
- Power supply notes
- There is a single charging point in each room.
The power supply supports the use of hairdryers but only those issued from the camp office.
- Communications
- There is no cellphone reception or WiFi at the camp but there is a guest computer connected to the internet in the central area. Kanana uses radios to communicate with both its head office in Maun and its sister camps.
- TV & radio
- There is no TV or radio.
- Water supply
- Borehole
- Water supply notes
- All the tented rooms have plumbed hot and cold running water for showers as well as flush toilets. Guests are usually given a water bottle on arrival with filtered water, which they are encouraged to top up from the filtered supply in the camp’s main area. Each room is also provided with glasses and a flask of filtered drinking water.
Health & safety
- Malarial protection recommended
- Yes
- Medical care
- All the managers are first-aid trained and there are first-aid kits on site. The closest doctor is in Maun, which is a 25-minute flight. Medical evacuation is available from the camp in case of a serious emergency. Please note that it is only possible to fly out of camp during daylight hours as the bush airstrips do not have any lighting at night.
- Dangerous animals
- High Risk
- Security measures
- Because Kanana is unfenced and wild animals are known to move through, guests are escorted to their rooms when it is dark. There are foghorns in the rooms for use as alarms in an emergency.
- Fire safety
- There are fire extinguishers in all the rooms and common areas, as well as in boats and vehicles.
Activities
4WD Safari
Birdwatching
Boat trip
Fishing
Guided walking safari
Helicopter
Mokoro
Night drive
Extras
- Disabled access
- Not Possible
- Laundry facilities
- A laundry service is included, including undergarments, although washing powder is provided for those who wish to do their own. If weather permits, laundry collected in the morning will be returned on the same day.
- Money
- There is a small key lock safe in each room. There are no exchange facilities at the camp.
- Accepted payment on location
- MasterCard and Visa credit cards are accepted; Diners and Amex are not. Cash payments may be made in the form of South African rand, GB sterling, US dollars, euros and Botswana pula.
Other lodges in Okavango Delta Safari Reserves
Alternative places to stay in this same area.
Kwara Camp
Kwara Camp's private reserve boasts land and water activities year round, with excellent game-viewing opportunities and access to permanent channels of the north-east Okavango Delta.
Little Kwara
Little Kwara is an intimate camp offering enthusiastic guiding on both land- and water-based safaris in an area known for good densities of big game.
Little Vumbura
On a secluded island within a private reserve, Little Vumbura combines superb game viewing with a broad diversity of habitats in a truly picturesque setting.
Shinde Camp
With experienced staff and a wealth of activities, Shinde offers a traditional safari in an exceptionally varied and wildlife-rich environment.
Chitabe Lediba
Chitabe Lediba, in Botswana's southern Okavango Delta, is a small family friendly safari camp; it offers great dry-land safaris and in our experience consistently delivers good game sightings.
Sandibe Safari Lodge
The luxurious Sandibe Okavango Safari Lodge lies in a private concession in the heart of the Okavango Delta, beside Moremi Game Reserve, with superb big-game viewing.
Chitabe Camp
In the southern Okavango Delta, the excellent Chitabe Camp concentrates on dry-land safaris in an area that we've found particularly good for wild dog sightings in recent years.
Splash Camp
Set in the Kwara Reserve, offering superb wildlife viewing year-round, Splash offers both land and water activities led by guides with a particular knack for tracking big game.
Footsteps across the Delta
Small and very rustic, Footsteps across the Delta focuses on walking safaris; it also runs a special children’s programme so is particularly suitable for families.
Tubu Tree Camp
A traditional tented camp with a distinctive tree-house feel, Tubu Tree offers some of the best game viewing in the Jao Reserve.
Nxabega Tented Camp
Nxabega offers a selection of both land- and water-based activities, plus very good guiding, food and service, but game viewing can be somewhat erratic.
Vumbura Plains
Indulgently stylish and luxurious, Vumbura Plains offers superb game viewing and birding on an exceptionally varied private reserve.
Gomoti Plains Camp
Overlooking a tributary of the Gomoti River, Gomoti Plains Camp is a classically designed camp with very comfortable tents in a good game-viewing area.
Jacana Camp
Jacana Camp is a small safari camp with an informal island feel; it is ideal for water-based activities in the Delta and offers excellent birdwatching.
Kwetsani Camp
Deep in the Delta, overlooking a floodplain, Kwetsani Camp is a small, high-end camp with good access to areas for land and water-based activities.
Mapula Lodge
For an affordable yet varied safari encompassing a range of eco-systems, the traditional Mapula Lodge takes a lot of beating.
Duba Plains Camp
Duba Plains Camp is a traditional safari camp, best known for the thrilling lion and buffalo interaction that is often found here in broad daylight.
Baines' Camp
Baines' Camp is a well-run, intimate camp in a pretty part of the Okavango, offering a range of activities and the option to spend a morning walking with elephants.
Stanley's Camp
In a private concession south of Moremi Game Reserve, Stanley's Camp offers 4WD game drives, seasonal water activities and a superb elephant interaction.
Little Tubu
Little Tubu is a new, traditional camp with just three tented chalets and a distinctive tree-house feel. The areas around it can be explored by water and land-based activities year round.
Pom Pom Camp
Pom Pom Camp lies amidst stunning Okavango Delta scenery. Come for idyllic mokoro trips and great birdwatching, and accept that big-game sightings here are a bonus.
Duba Explorers Camp
Intimate and elegant, Duba Explorers Camp promises a firm safari focus in a remote corner of the Okavango, led by a team who value the highest guiding and hosting standards.
Pelo Camp
In a pristine wilderness environment deep in the Okavango Delta, the seasonal Pelo Camp is tented yet comfortable, with activities focusing on excursions by mokoro.
Xaranna
Xaranna is a plush tented camp amongst the idyllic waterways and islands of the Delta. Each air-conditioned tent has a plunge pool. Water activities and pampering are the focus here.
Seba Camp
Seba Camp is a luxury camp in a lovely location that offers the full range of water and land safari activities, depending on the time of year. This camp is particularly suitable for families.
Jao Camp
In a beautiful area with fantastic water activities, Jao combines an idyllic location with high levels of luxury and service, and a top-end spa.
Setari Camp
Setari Camp stands on an island dotted with palm trees, close to the base of the Okavango’s ‘Panhandle’.
Abu Camp
Abu Camp is an exclusive safari camp on the western side of the Botswana's Okavango Delta - offering superb elephant-back safaris and opportunities to walk with them too.
Okavango Walking Safari
The Okavango Delta Walking Safari camps in a secluded Okavango Delta Reserve where there are few roads; the ideal location for a walking trail led by an expert guide.
Eagle Island Lodge
Eagle Island Lodge is a luxurious camp with international-style facilities including air conditioning and intercom in each room; offering water based activities in the Okavango Delta.
Xudum Delta Lodge
Xudum is a beautifully crafted lodge well situated to explore the Delta waterways. Each air-conditioned suite has a private plunge pool and a lofty viewing deck with a sala that doubles as a 'star bed'.
Qorokwe Camp
Luxurious and contemporary, the relatively new Qorokwe Camp is a gem in the Okavango Delta, offering land- and occasionally water-based activities in a prime wildlife area.
I looked at various systems (e.g. Metasight from The Morphix Company) for generating user profiles several years ago. They did this by looking at titles of emails that people sent and received and so deduced the interests and likely skills of that person. As you can imagine there were many data protection concerns at that time and probably still would be now. However, the world is moving this way – Amazon knows what I am interested in because of my activity on its site.
I would recommend that the profiles are stored in one central place, such as Sharepoint profiles, and then other systems look up on this.
Jon Harman
Networks & Learning Lead
Office: Jealott's Hill, Building 89/8
From: sikmleaders@... [mailto:sikmleaders@...]
Sent: 20 March 2015 14:31
To: sikmleaders@...
Subject: [sikmleaders] Enterprise Patterns of User Profiles
Good Morning SIKM!
We have a number of systems that have user profile information. For example, a talent management system will have name, address, image, uid etc., and a CMS such as SharePoint could house the same kind of data, but in many cases a user will have to update this information themselves. I have seen some enterprise service patterns that use LDAP to feed certain data to various systems, but I am curious whether there is a system or capability that is itself an enterprise resource for provisioning enterprise user information. The other question is whether this same system or capability would be able to compose or pull content from other systems to perform some analysis. What are your thoughts or experiences?
Best,
Syngenta Limited, Registered in England No 2710846
Registered Office : Syngenta Limited, European Regional Centre, Priestley Road, Surrey Research Park, Guildford, Surrey, GU2 7YH, United Kingdom
This message may contain confidential information. If you are not the designated recipient, please notify the sender immediately, and delete the original and any copies. Any use of the message by you is prohibited.
I've seen many different types of microphones, and I feel like I should know some of the most important ones before getting any farther. What are the most common mic types and how does one decide which to use?
2 Answers
Dynamic mics use an electromagnet attached to a diaphragm. They're comparatively durable, so they generally can stand up to higher SPL. These are commonly used live, as well as for loud instruments like snare drums and guitar cabinets. A lot of really good ones are inexpensive as well (Shure SM57 and SM58, for example).
Condenser mics use a conductive diaphragm as part of a capacitor. Because of this, they require external power, referred to as phantom power. They are quite sensitive, particularly to high frequencies, but because of this they don't handle as much volume, so you don't usually see these as much up next to drums. But they're splendid for vocals and many instruments. Condensers come in large and small diaphragm variants.
Ribbon mics use a thin, very sensitive foil suspended in a magnetic field. They're extremely sensitive to transients and have a very wide frequency response, but they're expensive and delicate. In my opinion they sound simply amazing for vocals though.
These are the major types. There are exceptions to these of course - dynamics with a particularly wide and smooth frequency response, condensers that handle high SPL, and there's even at least one ribbon that takes phantom power, so while you should know these characteristics, in the end it is best to know the properties the mics you have access to.
Choosing a mic is a matter of fitting the right tool to the task at hand. Make sure you have a mic that can handle the appropriate SPL, that covers the frequencies you need covered, and most importantly, sounds good to you when you try it. Don't just choose one based on numbers, but try it out on whatever source you have. If you can, try a few mics that seem to fit the bill and choose the one you think sounds best.
And this might sound silly, but ask around! See what other people prefer for recording the kinds of things you are recording. Then give 'em a try :)
In general, condenser microphones have a flat, extended frequency response, and are the best choice for far-proximity applications such as choirs and orchestras.
In general, dynamic microphones are good for near proximity, high SPL applications such as drums and individual instruments.
There are also certain microphones that are suitable for specific applications. The Shure SM-57 works particularly well for miking snare drums and electric guitar cabinets, for example, although it is also a great, rugged, inexpensive general purpose mic.
Shure and Audio Technica publish a wealth of excellent background and technical information about microphones and their proper use. If you read these publications and study microphone specification sheets, you will develop an excellent background of knowledge for effectively using the right type of microphones in almost any setting.
Substance and drug abuse is a cultural and social issue that people should address, especially its effects on children. “How could you leave us” is a song in which the artist talks about losing his mother to pills and substance abuse. The song speaks to a larger cultural conversation on drug and substance abuse, which tears a family apart and leaves children orphans. In the song, the mother leaves unexpectedly even after they had waited for her for a long time. The child assumes that pills kill even though he has never tasted them before. Drugs took away the narrator’s mother, so that she could not attend her son’s graduation and bid him congratulations. NF is not the only child who has lost a parent to mental issues and drug abuse; hundreds of thousands of other children also blame the two culprits for the loss of their parents. Substance abuse and drug abuse fail to do justice to children and impact them negatively in both direct and indirect ways. The song shows that the effects of parental substance abuse on children include neglect, social consequences, emotional and behavioral problems, and unstable family systems.
Substance abuse and mental health issues among parents are part of a larger cultural conversation because children always feel neglected by their caregivers. Drug and substance abuse is not easy for anyone to deal with, especially when parenting is involved. Parents who indulge in substance abuse can knowingly or unknowingly neglect their children. When a parent is under the influence of drugs, they might not be able to respond to their children’s physical and emotional needs correctly (Baker et al., 31). One of the significant reasons for neglect is the high financial cost of acquiring drugs, which prevents parents from providing sufficient food, clothing, and housing. The song “How could you leave us” talks about the issue of neglectful parents in detail, thus covering this cultural conversation in society. In the song, the boy complains that the mother would say she is coming for them, but a few minutes later she calls and tells them that she cannot make it. Children who undergo such experiences feel neglected by their parents because those parents have no time for them.
Children whose parents suffer from mental illness and substance abuse suffer social consequences. Children whose parents suffer from substance abuse have to grow up very fast because their caretaker cannot meet their needs. Parents who suffer from substance abuse and mental disorders lack assertiveness and communication skills; therefore, children suffer from poor communication and dysregulation with their caregivers (Lander et al., 194). Children who undergo such experiences feel fear, hate, and shame. Children also fail to form an emotional attachment with other children or parents because of what their parents put them through (Baker et al., 33). In the song, the speaker has no attachment to the lady taking care of him and states that he does not believe her mama because the love is unreal. One can also feel and hear the disdain in the speaker’s voice when he says that he is in a room with a woman he barely knows. Such children experience social and emotional detachment from other people, even those ready to pull them out of their situation by providing a home for them.
Children who grow up with parents suffering from substance abuse and mental health disorders have behavioral and emotional problems. Children who grow up in such a setting do not have self-control or autonomy as they bottle up issues because the parents are not physically or emotionally present for them (Baker et al., 33). Emotional and behavioral problems arise in children who live in homes with addiction and mental health problems, as they have angry outbursts, depression, detachment, and anxiety (Thatcher). Such children find it difficult to express what they are feeling and thinking, thus using their behavior as an indication. For example, Ashley, a 15-year-old female, was asked to go for treatment by her school counselor because of self-injury; her mother was an alcoholic, and Ashley had to clean up after her mess (Lander et al., 196). In the song, the speaker states that people say getting high is fun, but he is not laughing and does not understand anything at the time. The piece explains the emotional abuse that children go through because their parents are drug and substance addicts.
Another larger cultural aspect that the song addresses is the separation of children from their parents and the creation of unstable family systems. Many children who grow up in foster care have to deal with parents suffering from substance abuse or mental health issues. “In families with severe substance abuse issues, child protection agencies might remove the children from the addict’s parents and place them in foster care or the care of a stable relative” (Baker et al., 32). Separating children from their parents can lead to significant trauma even if the child is at risk when in the presence of the parents. Another impact of parental substance abuse is the creation of an unstable family system. As the basic unit of society, families are important institutions that promote the development of a unified and well-functioning society. Parents addicted to substances experience difficulties maintaining a regular routine in the house, leading to behavioral problems in the children. In the song, the speaker explains that the last conversation they had with the mother was her telling him that she is not an addict, but a week later she went back to popping pills. Such inconsistencies lead to unstable families and separation.
Substance abuse and mental health disorders are controversial topics today, especially among parents, and NF tries to explain his experiences to the world. The musical artifact above is the story of a child whose mother was a substance addict. Parenting is not a simple task, as it requires hours of constant attention and affection given to the child by the parent. However, children who grow up in families where the parents are drug addicts have to cope differently and grow a thick skin quite early in their lifetime. For example, parents suffering from drug abuse and mental disorders may intentionally or unintentionally neglect their children. Children in such families have to deal with the absence of their parents, either emotionally or physically. Parents also find it hard to provide basic needs for their children and instead purchase their drugs. Children also go through emotional, physical, and social torture because they have no one to guide them or show them deserved affection and attention; thus, children tend to hide their feelings.
Work Cited
Baker, M., Jacquelin Ford, Brittany Canfield, and Traci Grabb. Identifying, Treating, and Preventing Childhood Trauma in Rural Communities. IGI Global, 2016.
Lander, Laura, Janie Howsare, and Marilyn Byrne. “The Impact of Substance Use Disorders on Families and Children: From Theory to Practice.” Social Work in Public Health, vol. 28, no. 3–4, 2013, pp. 194–205. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3725219/
Thatcher, T. “How Parental Substance Abuse Impacts Kids.” Valley Behavioral Health, 12 Feb. 2020. https://valleycares.com/blog/families-in-crisis-how-parental-substance-abuse-and-mental-health-impacts-kids/
The Department of Homeland Security is preparing to use surveillance drones for the purposes of “public safety,” according to remarks made by DHS Secretary Janet Napolitano during a House hearing yesterday.
Asked by the House Committee on Homeland Security why the DHS is not more involved in overseeing the rollout of unmanned drones domestically, Napolitano responded by pointing out that the federal agency is looking at using the technology for “public safety”.
“With respect to Science and Technology, that directorate, we do have a funded project, I think it’s in California, looking at drones that could be utilized to give us situational awareness in a large public safety [matter] or disaster, such as a forest fire, and how they could give us better information,” she said.
Despite increasing concerns about drones being hacked or used to collect personal information in violation of the Fourth Amendment, DHS officials declined to appear at a July 19 House Homeland Security Oversight, Investigations and Management Subcommittee hearing that sought to establish how the DHS could guarantee privacy rights would be protected.
As we reported earlier this year, the DHS is already using another type of airborne drone surveillance, also utilized to track insurgents in Afghanistan and Iraq, for the purposes of “emergency and non-emergency incidents” within the United States.
The DHS is seeking four contractors to provide “aerial remote sensing” services, using LIDAR (Light Detection And Ranging) technology fitted to drones or manned aircraft that will provide surveillance capability for “homeland security missions,” as well as “management of emergency incidents by Federal Emergency Management Agency (FEMA) regional offices, joint field offices and by state and local government.”
A bill passed by Congress in February paves the way for the use of surveillance drones in US skies on a widespread basis. The FAA predicts that by 2020 there could be up to 30,000 drones in operation nationwide.
US law enforcement bodies are already using drone technology to spy on Americans. In December last year, a Predator B drone was called in to conduct surveillance over a family farm in North Dakota as part of a SWAT raid on the Brossart family, who were suspects in the egregious crime of stealing six missing cows. Local police in this one area have already used the drone on two dozen occasions since June last year.
Last summer, the Department of Homeland Security gave the green light for police departments in the United States to deploy the ShadowHawk mini drone helicopter, which has the ability to tase suspects from above as well as carrying 12-gauge shotguns and grenade launchers. The drone, also used against insurgents in Afghanistan and Iraq, is already being used by the Montgomery County Sheriff’s office in Texas.
*********************
Paul Joseph Watson is the editor and writer for Prison Planet.com. He is the author of Order Out Of Chaos. Watson is also a regular fill-in host for The Alex Jones Show and Infowars Nightly News.
I can't get the test suite to work.
Anyone else having a problem?
I appear to get the exact same answers.
The function template is set up the wrong way by default. You actually have to return a vector with Nopt as the first element and the product as the second. When the test suite calls the function with only one output, the original two-output version returns just Nopt.
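To make that workaround concrete, here is a minimal MATLAB sketch. The function name optimalShape, the brute-force loop, and the assumption that the task is to split the perimeter p into N equal pieces and maximise their product are all illustrative guesses rather than the actual problem template or its intended math — the only point being shown is that both answers come back packed into a single vector, so a single-output call still returns both.

function out = optimalShape(p)
    % Illustrative only: assumes the goal is to split p into N equal
    % pieces and maximise their product (p/N)^N, which may not match
    % the real problem statement.
    N = 1:ceil(p);                 % candidate piece counts (brute force)
    products = (p ./ N) .^ N;      % product of N equal pieces, for each N
    [maxProd, idx] = max(products);
    Nopt = N(idx);
    % Pack both results into ONE output vector: element 1 is Nopt,
    % element 2 is the maximum product, so y = optimalShape(p) hands
    % both values back to the test suite in a single output.
    out = [Nopt, maxProd];
end

Under those assumptions, y = optimalShape(10) returns [4 39.0625]; whatever math you actually use, it is this packing that the single-output call in the test suite reads.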
I have made the problem a little bit more challenging by changing only the test suite.
The purpose is to get a math solution.
The leading solver has already made it.
If the purpose is to get a math solution, why play tricks with how the output needs to be passed and read?
No tricks.
A math solution is always better than a brute force solution.
If you choose brute force, your solution may be limited by the number of iterations.
Sorry, I should have been clearer with my previous comment. Yes, there is a math solution to this problem that does not require brute force. Solutions 231885-231887 are mine, and give the correct (mathematical) answers with no brute forcing at all. However, because of the way the output needs to be passed to the assert function, they are marked as wrong by Cody. Other than the odd way that the output needs to be parsed, this is a great problem.
The test suite has been updated to avoid the problems mentioned by James.
I took a look at the test cases. The 4th test has a result that is expected to be on the order of 1e40. Then it compares that to the solution returned, and tests to see if the absolute difference is less than 1e-4? eps(1e40) is roughly 1e24. So the difference between two numbers of size 1e40 will never be less than 1e-4, unless they are IDENTICAL. Far better would be to test if the relative difference is small.
I updated the problem explanation to be a bit more clear. I also repaired the checks for a valid solution to use a relative test, mainly valuable for one of the cases where an absolute test on the error was a poor choice.
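For readers wondering what the difference looks like in practice, here is a small illustrative snippet (this is not the actual test suite; the numbers and the 1e-4 tolerance are made up for the example):

% Illustrative comparison of absolute vs. relative tolerance checks.
expected = 1.23456789e40;            % reference value of order 1e40
actual   = expected * (1 + 2*eps);   % "correct" answer, off by a couple of ulps

tol = 1e-4;
passAbsolute = abs(actual - expected) < tol;                  % essentially never true at this magnitude
passRelative = abs(actual - expected) < tol * abs(expected);  % true: error is judged relative to the size of the numbers
fprintf('absolute: %d, relative: %d\n', passAbsolute, passRelative);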
Clearly, the best score comes from older solvers not having to deal with the singularity that occurs when p = 5, which was not included in the test suite back then. | https://in.mathworks.com/matlabcentral/cody/problems/1428-find-the-optimal-shape-to-bring-the-maximum-product-by-a-given-perimeter
Tennessee Williams
1911- 1983
Playwright
Esteemed American playwright Tennessee Williams produced some of the classics of 20th century theatre. The Glass Menagerie (1944), A Streetcar Named Desire (1947), and Cat on a Hot Tin Roof (1955) are considered the most important of his two dozen plays.
Among his many accolades were two Pulitzer Prizes (for Streetcar and Cat), along with four Drama Critics Circle Awards. Leading actors such as Paul Newman, Marlon Brando and Elizabeth Taylor achieved great distinction in Williams' work.
His later life was marked by a career decline and depression, alcohol and drug abuse.
Playwright Tennessee Williams was born on March 26, 1911, in Columbus, Mississippi. After college, he moved to New Orleans, a city that would inspire much of his writing. On March 31, 1945, his play, The Glass Menagerie, opened on Broadway and two years later A Streetcar Named Desire earned Williams his first Pulitzer Prize. Many of Williams’ plays have been adapted to film starring screen greats like Marlon Brando and Elizabeth Taylor. Williams died in 1983.
Playwright Tennessee Williams was born Thomas Lanier Williams on March 26, 1911, in Columbus, Mississippi, the second of Cornelius and Edwina Williams’ three children. Raised predominantly by his mother, Williams had a complicated relationship with his father, a demanding salesman who preferred work to parenting.
Williams described his childhood in Mississippi as pleasant and happy. But life changed for him when his family moved to St. Louis, Missouri. The carefree nature of his boyhood was stripped in his new urban home, and as a result Williams turned inward and started to write.
His parents’ marriage certainly didn’t help. Often strained, the Williams home could be a tense place to live. “It was just a wrong marriage,” Williams later wrote. The family situation, however, did offer fuel for the playwright’s art. His mother became the model for the foolish but strong Amanda Wingfield in The Glass Menagerie, while his father represented the aggressive, driving Big Daddy in Cat on a Hot Tin Roof.
In 1929, Williams enrolled at the University of Missouri to study journalism. But he was soon withdrawn from the school by his father, who became incensed when he learned that his son’s girlfriend was also attending the university.
Deeply despondent, Williams retreated home, and at his father’s urging took a job as a sales clerk with a shoe company. The future playwright hated the position, and again he turned to his writing, crafting poems and stories after work. Eventually, however, the depression took its toll and Williams suffered a nervous breakdown.
After recuperating in Memphis, Williams returned to St. Louis, where he connected with several poets studying at Washington University. In 1937 he returned to college, enrolling at the University of Iowa. He graduated the following year.
Tennessee Williams
One of America's greatest playwrights, and certainly the greatest ever from the South, Tennessee Williams wrote fiction and motion picture screenplays, but he is acclaimed primarily for his plays, nearly all of which are set in the South, but which at their best rise above regionalism to approach universal themes.
Thomas Lanier Williams was born in Columbus, Mississippi, on March 26, 1911, the first son and second child of Cornelius Coffin and Edwina Dakin Williams. His mother, the daughter of a minister, was of genteel upbringing, while his father, a shoe salesman, came from a prestigious Tennessee family which included the state's first governor and first senator. The family lived for several years in Clarksdale, Mississippi, before moving to St. Louis in 1918. At the age of 16, he had his first brush with the publishing world when he won third prize and received $5 for an essay, “Can a Good Wife Be a Good Sport?,” in Smart Set. A year later, he published “The Vengeance of Nitocris” in Weird Tales. In 1929, he entered the University of Missouri. His success there was dubious, and in 1931 he began work for a St. Louis shoe company. It was six years later when his first play, Cairo, Shanghai, Bombay, was produced in Memphis, in many respects the true beginning of his literary and stage career.
Building upon the experience he gained with his first production, Williams had two of his plays, Candles to the Sun and The Fugitive Kind, produced by Mummers of St. Louis in 1937. After a brief encounter with enrollment at Washington University, St. Louis, he entered the University of Iowa and graduated in 1938. As the second World War loomed over the horizon, Williams found a bit of fame when he won the Group Theater prize of $100 for American Blues and received a $1,000 grant from the Authors League of America in 1939. Battle of Angels was produced in Boston a year later. Near the close of the war in 1944, what many consider to be his finest play, The Glass Menagerie, had a very successful run in Chicago and a year later burst its way onto Broadway. Containing autobiographical elements from both his days in St. Louis as well as from his family’s past in Mississippi, the play won the New York Drama Critics’ Circle award as the best play of the season. Williams, at the age of 34, had etched an indelible mark among the public and among his peers.
Following the critical acclaim over The Glass Menagerie, over the next eight years he found homes for A Streetcar Named Desire, Summer and Smoke, The Rose Tattoo, and Camino Real on Broadway. Although his reputation on Broadway continued to climb, particularly upon receiving his first Pulitzer Prize in 1948 for Streetcar, Williams reached a larger worldwide public when The Glass Menagerie (1950) and A Streetcar Named Desire (1951) were made into motion pictures. Williams had now achieved a fame few playwrights of his day could equal.
Over the next thirty years, dividing his time between homes in Key West, New Orleans, and New York, his reputation continued to grow and he saw many more of his works produced on Broadway and made into films, including Cat on a Hot Tin Roof (for which he earned a second Pulitzer Prize in 1955), Orpheus Descending, and Night of the Iguana. There is little doubt that as a playwright, fiction writer, poet, and essayist, Williams helped transform the contemporary idea of Southern literature. However, as a Southerner he not only helped to pave the way for other writers, but also helped the South find a strong voice in arenas where before it had only been heard as a whisper. Williams died on February 24, 1983, at the Hotel Elysée in New York City.
Tennessee Williams: A tormented playwright who unzipped his heart
Tennessee Williams – arguably the greatest of American dramatists – would have notched up his 100th birthday on 26 March. He was born Thomas Lanier Williams III in Columbus, Mississippi in 1911. His mother, Edwina, was the daughter of an Episcopalian minister, his father, Cornelius, was a womanising and hard-drinking travelling salesman for a shoe company. History does not record how the birth went, though it is a fair bet that the occasion was more elevated than the master playwright's less than ideally dignified demise some 71 years later.
In February 1983 in a Manhattan hotel room, Williams choked to death from inhaling the plastic cap of a nasal spray dispenser. His gagging reflex had been impaired by drink and drugs. To his righteous detractors – who had long looked askance at this laureate of lost souls and champion of life's undesirables – it must have seemed like roundly retributive poetic justice. The assiduous substance-abuse of the author of such classics as The Glass Menagerie and A Streetcar Named Desire was, by then, the stuff of legend. In his Memoirs (1972), Williams had characterised the 1960s as his "Stoned Age", while Tallulah Bankhead, chum and sometime leading lady, had once quipped, punningly: "Tennessee – you and I are the only constantly High Episcopalians I know."
It is not hard, however, to imagine the playwright's ghost snorting at the grotesque farce of this accidentally emblematic, cautionary ending. Richard Eyre has written of "the drollery [that] runs under all his work like a fast-flowing stream". His sense of humour could be disconcerting. There's the revealing story of the night he went to see Maggie Smith in Ingmar Bergman's 1970 production of Hedda Gabler at the National Theatre. Williams started cackling from the moment she came on and, to the bemusement of cast and audience, kept this up all the way through, climaxing with an enormous roar at the offstage shot in the head. When Smith asked him why, he replied in his Southern drawl, "That poor woman, she's so bored. " But, as Peter Hall has remarked of the event, this was an acutely perceptive laughing fit: "[Tennessee] saw comedy in the blackest things. I think Ibsen would have approved."
Williams had certainly needed this talent for extracting humour from depressing circumstances in the latter phases of his life. By the time of his death in 1983, the man who had bagged a couple of Pulitzer Prizes – for Streetcar in 1947 and for Cat on a Hot Tin Roof in 1955 – had not had a major Broadway hit since Night of the Iguana in 1961. That play – set on the veranda of a bohemian hilltop hotel in Mexico – stages a kind of spiritual one-night stand between one of Williams's archetypal apostates (an end-of-the-tether ex-minister, defrocked for blasphemy and a taste for underage girls) and a New England spinster and itinerant artist who is the ethereal embodiment of "how to live beyond despair and still live".
The piece has the air of valedictory stock-taking and, as Nicholas Wright has written, in the hero's choosing to stay and share his life with the blowsy, wisecracking bacchante who runs the hotel, Williams was predicting his own last two decades of "hedonistic riot fringed with sexy boys". But though they were dogged by depression, drink, drugs and vindictive critics (a review in Time magazine was helpfully headlined "Mistah Williams – he dead"), these were also years of unflagging productivity.
On the occasion of the playwright's centenary, it's worth pausing to reflect on a number of interrelated questions. How have attitudes towards his work changed during the years since his death? Has our sense of his artistic range expanded, given the discoveries that have been made at both ends of his career? And if the Bible is right to propose that "by their fruits ye shall know them", what do we learn about Williams from his spiritual legatees?
Reviewing Peter Hall's production of Orpheus Descending in 1988, Frank Rich, then theatre critic of the New York Times, wrote that: "In death, Tennessee Williams is more often regarded by the American theatre as a tragic icon than as a playwright worthy of further artistic investigation. The reverse is true in London when the Williams canon, neglected by the major companies during the writer's lifetime, is suddenly being rediscovered."
This was to be even truer in the years immediately following, as Richard Eyre masterminded three revelatory revivals at the National, including his own excellent productions of Night of the Iguana and Sweet Bird of Youth. Later, under Trevor Nunn and thanks to the intervention of Vanessa Redgrave, who had retrieved and pressed the claims of this unperformed 1938 script, the NT's premiere production of Not About Nightingales in 1998 showed us Williams, the youthful social protest writer. Sticking up for the solitary, sensitive outcast in a world full of redneck philistines had seemed to be the author's forte, not defending the rights of the abused mass of men, but this powerful drama – based on the real-life case of hunger-striking prisoners during the Depression who were cooked to death in a room full of steaming radiators – brought home how the poet and the protester in him were not at odds.
As Eyre's productions had already underlined and as director Harold Clurman, the most perceptive critic of Williams's work) had long forcefully argued, his Southern Gothic environments steam with social commentary as well as with sex, being distorted places where "lack of cultural nourishment produces bigotry, brutality, madness, and a persistent depression of the human personality".
Now, as the festivities for Williams's 100th birthday get underway, there's a dramatic new twist to the proposition that London takes the lead in the posthumous re-evaluation. At her stylish new venue, the Print Room in Bayswater, Lucy Bailey, who scored a huge hit with a sizzling stage adaptation of Baby Doll (the Williams-scripted movie denounced by Time as "just about the dirtiest American-made motion-picture that has ever been legally exhibited") is gearing up for a fresh assault on Kingdom of Earth, a play that bloodily bombed on Broadway in 1967 and hasn't been seen here in England since the mid-Eighties. Meanwhile, Kilburn's Cock Tavern Theatre, under the enterprising artistic directorship of Adam Spreadbury-Maher, has weighed in with a couple of coups. Tom Erhardt, the agent who is the playwright's literary executor in Europe, was so impressed by the recent Edward Bond play at this address that he has given them the right to present the world premiere of two late Williams plays, one of them such a rarity that it won't be published until the birthday.
I Never Get Dressed Till After Dark on Sundays, a Pirandello-esque play-within-a-play about the dramatist's rocky relationship with the American theatre industry, opened last week. At the end of the month, it will be followed by Gene David Kirk's production of A Cavalier for Milady, the most graphic and gob-smacking of all his snapshots of Rose, the schizophrenic sister whom their mother had lobotomised (behind his back) when she began to make sexual abuse charges against the father, Cornelius.
In various guises that betoken his feelings of guilt (and his fear of going mad himself), Rose haunts his oeuvre from the breakthrough play and his first Broadway hit, Glass Menagerie (1945), where she is incarnated as the crippled, painfully shy Laura Wingfield, who has withdrawn from her mother's heavy-handed match-making into the cocoon of tending her collection of fragile figurines.
There are other recurring personnel in the Williams world. To pick out just three: there is the sacrificial stud, such as Val Xavier, that guitar-playing cross between Christ and Elvis who threatens the rigid patriarchy of a Southern town in Orpheus Descending (1957); there's the woman liberated through the libido, aroused by a sexy hunk – comically so in The Rose Tattoo (1950), where the explosive widow, Serafina delle Rose, drops her mourning weeds for an unconventionally attractive truck driver who is a Sicilian immigrant like the main love of Williams's life, Frank Merlo; and there is the devouring mother, quintessentially embodied in Violet Venable, the wealthy bird-of-prey dowager in Suddenly, Last Summer (1958) who, having used her social pulling power to procure sex for her son on their glamorous trips to Europe, wants to have her niece lobotomised when threatened with exposure.
In his autobiography, Palimpsest, Gore Vidal, who writes about close friends with what can only be described as Olympian detachment, is amusing about the frightful fates that tend to befall Williams's protagonists. Williams had complained that the fight at the end of The City and the Pillar, Vidal's groundbreaking gay novel, was too melodramatic: "That from Tennessee," Vidal writes, with poison-tipped poise, "whose heroes, when not castrated, are eaten alive by small boys in Amalfi, just below where I live. I should note that whenever Suddenly Last Summer appears on Italian television, the local boys find it irresistibly funny."
But, if a Puritan streak can be molten (as opposed to icy), this is what the hedonistic Williams had. The grandson of an Episcopalian minister, he saw himself, as his defrocked priest Shannon does, as "a man of God, on vacation". The tension between his characters enacts his own interior struggles; it's his ambivalence towards them that gives the plays their life – along (crucially) with the luxuriance of their leisurely, undulating Southern speech, which was once beautifully described on these pages by Rhoda Koenig. Reviewing the early rarity Spring Storm (1937/8), brilliantly directed by Laurie Sansom at the Royal and Derngate, Northampton in 2009 and rightly imported by Nick Hytner's National Theatre the following year, she characterised Williams's dialogue as being full of "drawling music that slaps its penultimate syllable against your ear like lazy river water against a boat".
It's true that A Streetcar Named Desire climaxes in a monstrous act of rape and Harold Clurman – who felt that Marlon Brando, in the greatest soiled-vest part of all time, unbalanced the play and film by being too diabolically desirable – was, on one level, extremely shrewd in identifying what kind of social threat Stanley Kowalski represents. He is, Clurman wrote, the "unwitting anti-Christ of our time, the little man who will break the back of any attempt to create a more comprehensive world in which thought and conscience are expected to evolve from the old Adam. His mentality provides the soil for fascism, viewed not as a political movement but as a state of being."
But it's clear that Williams views him and Blanche Dubois, the faded Southern belle who clings to illusions of refinement and is the moth to brother-in-law Stanley's flame, with nearly the same mix of attraction and repulsion as they view one another. A production that gave no validity whatsoever to Stanley's avowal, on the brink of violating her, that: "We've had this date with each other from the beginning!", or denied the audience an unholy sense of corrupt catharsis when the storm breaks, would be untrue to the play.
The two most gifted current beneficiaries of the Williams spirit are, to my mind, Tony Kushner (in whose plays, such as Angels in America, Williams and Brecht seem to mate to rampant and rigorous effect) and the songwriter Rufus Wainwright. The latter seems to me to have the same knack of being simultaneously self-dramatising and drolly self-mocking in his recklessly cruising, crystal-meth-addict days, he had a comparable urge to render himself not only open to experience but dangerously vulnerable to it. And he remains magnanimously arch and witty about the horror he has been through. Take his allusion to the frightening temporary blindness he suffered as a result of crystal meth in the song "Sanssouci": "Who will be at Sanssouci tonight?/The boys that made me lose the blues and then my eyesight". The humorous non-recriminatory balance of that (and stunning use of zeugma) sound like a miraculous out-of-time collaboration between Tennessee Williams and Alexander Pope.
There is, though, something odd and a little dispiriting in the way Time Out chose to honour the Williams centenary principally in its "Gay and Lesbian" section. He is, honourably and admirably, a gay icon but that is not the only thing he is. His work speaks to the outcast and the drag queen in all of us, not least black writers – from Lorraine Hansberry to August Wilson – who have found inspiration in the way his work champions the underdog, sometimes with explicit reference to racial bigotry. This point is taken up by Lucy Bailey who describes Kingdom of Earth, which is set in a farmhouse in the Mississippi Delta during the flooding season, as a "poem on loneliness", bringing together a trio of mismatched misfits – the effete, dying Lot, who likes dressing up as his mother (not without shades of Psycho); his nominal wife, Myrtle, who once had a career in showbiz and is one of Williams's sexed-up life-givers; and Lot's mixed-race malcontent brother, Chicken, who is out to usurp him. The relationship between the two brothers carries echoes of that between Blanche and Stanley with the crucial, complicating difference that Chicken is partly the product of racial prejudice, as he recounts in eloquent reminiscences.
Besides, there was always one implication that Williams hated: "People who say I create transvestite women are full of shit. Frankly. Just full of shit. Personally I like women more than men."
This remark gives you some measure of the outrageously funny revenge he took on his mother, Edwina, in A Cavalier for Milady. Directed by Gene David Kirk, it will be the second of the late, as-yet-unperformed rarities at the Cock Theatre. Williams never forgave his mother for lobotomising Rose, which he regarded as an extreme act of censorship on his sister's wayward sexual nature. He gets his own back on both the repressed Edwina and his critics in Cavalier by turning her into a Park Lane society lady who has, essentially, the appetites and habits of a Seventies gay New Yorker. She isn't, but she might as well be a drag act. At the start, we see her leaving the infantilised Rose-figure with a babysitter while she and her cronies go out cruising in Central Park with studs hired from the eponymous escort agency. Left alone, the masturbating Rose-figure conjures up an apparition of the great ballet dancer Nijinsky who dances for her but, with problems of his own, frustrates her desire for touch in their conversational pas de deux. The ending is breathtaking in its audacious self-reference to the earlier oeuvre. Surreptitiously, the daughter phones the agency and is left holding a candle on the threshold, like a mutinous Laura Wingfield in Glass Menagerie who may get a gentleman caller.
There will be splashy events later in the year (Nicole Kidman and James Franco will appear in Suddenly Last Summer on Broadway in the autumn). But in keeping with London's traditional role of setting the pace in the re-evaluation of Williams, it would be good if the centenary established that the routinely derided later work is sometimes in genuinely imaginative cross-fertilisation with the earlier classics (sending us back to them freshly sensitised) rather than merely parasitic upon them, and that his scrutiny of abiding preoccupations through the Absurdist lens of Ionesco and Beckett could bring hidden things to light. It's an encouraging sign that the Cock Tavern Theatre is well into negotiations for bringing the two unperformed rarities into a West End house in the summer.
A review of Memoirs notoriously claimed that the author may not have opened his heart, but he had certainly opened his fly. Williams knew better than most dramatists the hotline between the groin and the higher seat of the emotions. His productivity right to the end of his life, exemplified in the celebrations here, offers the heartening spectacle of a man who, even when hardly able to stand upright through excess, could still, in Gene David Kirk's lovely description, "sit at the typewriter each morning and unzip his heart".
Tennessee Williams: The Man, The Playwright
As one of the most notable figures in American literature and playwrighting, there is so much information out there about Tennesee Williams. On the flip side, although many of us are familiar with his work, most don’t know about his personal background and from where he drew his inspiration.
As Lyric presents Williams’ classic tale THE GLASS MENAGERIE, opening March 27 at the Plaza Theatre, we took some time to do a little research on this literary genius.
Born Thomas Lanier Williams in Columbus, Mississippi on March 26, 1911 (you’re right–the show opens the day after his birthday), he changed his first name to Tennessee shortly after graduating from the University of Iowa. Like many artists, his early years were filled with struggles to “make it.” It wasn’t until THE GLASS MENAGERIE was produced in 1944 that his work saw critical acclaim. The play won the New York Drama Critics’ Circle Award for Best Play of the Season that very year.
The huge success of his next play, A STREETCAR NAMED DESIRE, in 1947 secured his name among the great playwrights of his time. Between 1948 and 1959, seven more of his plays appeared on Broadway including SUMMER AND SMOKE, THE ROSE TATTOO, CAMINO REAL, CAT ON A HOT TIN ROOF, ORPHEUS DESCENDING, GARDEN DISTRICT and SWEET BIRD OF YOUTH. By 1959 he had won two Pulitzer Prizes, three New York Drama Critics’ Circle Awards, three Donaldson Awards and a Tony Award.
As far as his personal life was concerned, Williams remained close to his sister Rose, whose life inspired the character “Laura” in THE GLASS MENAGERIE. She was diagnosed as schizophrenic as a young adult. During his rise to fame, Williams ran in a gay, New York City social circle that included fellow writer and close friend Donald Windham. The most notable relationship of his life was that with Frank Merlo, an occasional actor, which lasted 14 years.
At the time of his death on February 25, 1983, Williams’ works were not seeing the success of his previous plays. Despite this, the power of his ideas and words continue to inspire, uplift and entertain audiences around the world.
Don’t miss Williams’ autobiographical play, THE GLASS MENAGERIE, at the Plaza Theatre, March 27 through April 13.
Tennessee Williams Biography
Tennessee Williams at age 54 in 1965. Photo by Orland Fernandez.
He was brilliant and prolific, breathing life and passion into such memorable characters as Blanche DuBois and Stanley Kowalski in his critically acclaimed A STREETCAR NAMED DESIRE. And like them, he was troubled and self-destructive, an abuser of alcohol and drugs. He was awarded four Drama Critic Circle Awards, two Pulitzer Prizes and the Presidential Medal of Freedom. He was derided by critics and blacklisted by Roman Catholic Cardinal Spellman, who condemned one of his scripts as “revolting, deplorable, morally repellent, offensive to Christian standards of decency.” He was Tennessee Williams, one of the greatest playwrights in American history.
Born Thomas Lanier Williams in Columbus, Mississippi in 1911, Tennessee was the son of a shoe company executive and a Southern belle. Williams described his childhood in Mississippi as happy and carefree. This sense of belonging and comfort were lost, however, when his family moved to the urban environment of St. Louis, Missouri. It was there he began to look inward, and to write— “because I found life unsatisfactory.” Williams’ early adult years were occupied with attending college at three different universities, a brief stint working at his father’s shoe company, and a move to New Orleans, which began a lifelong love of the city and set the locale for A STREETCAR NAMED DESIRE.
Williams spent a number of years traveling throughout the country and trying to write. His first critical acclaim came in 1944 when THE GLASS MENAGERIE opened in Chicago and went to Broadway. It won the New York Drama Critics’ Circle Award and, as a film, the New York Film Critics’ Circle Award. At the height of his career in the late 1940s and 1950s, Williams worked with the premier artists of the time, most notably Elia Kazan, the director for stage and screen productions of A STREETCAR NAMED DESIRE, and the stage productions of CAMINO REAL, CAT ON A HOT TIN ROOF, and SWEET BIRD OF YOUTH. Kazan also directed Williams’ film BABY DOLL. Like many of his works, BABY DOLL was simultaneously praised and denounced for addressing raw subject matter in a straightforward realistic way.
The 1960s were perhaps the most difficult years for Williams, as he experienced some of his harshest treatment from the press. In 1961 he wrote THE NIGHT OF THE IGUANA, and in 1963, THE MILK TRAIN DOESN’T STOP HERE ANY MORE. His plays, which had long received criticism for openly addressing taboo topics, were finding more and more detractors. Around this time, Williams’ longtime companion, Frank Merlo, died of cancer. Williams began to depend more and more on alcohol and drugs and though he continued to write, completing a book of short stories and another play, he was in a downward spiral. In 1969 he was hospitalized by his brother.
After his release from the hospital in the 1970s, Williams wrote plays, a memoir, poems, short stories and a novel. In 1975 he published MEMOIRS, which detailed his life and discussed his addiction to drugs and alcohol, as well as his homosexuality. In 1980 Williams wrote CLOTHES FOR A SUMMER HOTEL, based on the lives of Zelda and F. Scott Fitzgerald. Only three years later, Tennessee Williams died in a New York City hotel filled with half-finished bottles of wine and pills. It was in this desperation, which Williams had so closely known and so honestly written about, that we can find a great man and an important body of work. His genius was in his honesty and in the perseverance to tell his stories.
Best Playwrights
It has been an arduous task collating a list of best playwrights. However, after careful deliberation, we believe that these playwrights deserve to be regarded as the best playwrights of all time. This list took into account craftsmanship, aesthetic value, originality, contribution to theatre and, of course, subjective favouritism by the StageMilk team (yes this is just our opinion).
We include playwrights from several countries and every period in history from Ancient Greece to modern marvels like Lucy Prebble. Each playwright has written a number of great plays and has offered something truly original to the theatre. We are sure there will be plenty of contention about this list, but we would love to hear your thoughts.
At the end of the day this list has one purpose, to encourage actors to read more plays. You will be a better actor for reading the work of any of these great playwrights. If you are interested in reading plays by any of these playwrights click the link underneath each picture to see a more specific list of each playwright's strongest plays. Enjoy!
Later career
Through the 1970s and 1980s, Williams continued to write for the theater, though he was unable to repeat the success of most of his early years. One of his last plays was Clothes for a Summer Hotel (1980), based on the passionate love affair between the American writer F. Scott Fitzgerald (1896-1940) and his wife, Zelda.
Two collections of Williams's many one-act plays were published: 27 Wagons Full of Cotton (1946) and American Blues (1948). Williams also wrote fiction, including two novels, The Roman Spring of Mrs. Stone (1950) and Moise and the World of Reason (1975). Four volumes of short stories were also published: One Arm and Other Stories (1948), Hard Candy (1954), The Knightly Quest (1969), and Eight Mortal Ladies Possessed (1974). Nine of his plays were made into films, and he wrote one original screenplay, Baby Doll (1956). In his 1975 tell-all memoir, Memoirs, Williams described his own problems with alcohol and drugs and his homosexuality (the attraction to members of the same sex).
Williams died in New York City on February 25, 1983. In 1995, the United States Post Office commemorated Williams by issuing a special edition stamp in his name as part of their Literary Arts Series. For several years, literary enthusiasts have gathered to celebrate the man and his work at the Tennessee Williams Scholars Conference. The annual event, held along with the Tennessee Williams/New Orleans Literary Festival, features educational, theatrical and literary programs. | https://gh.payqar.org/9180-tennessee-williams-1911-1983-playwright-history.html
Get A Propos, Levinas PDF
Rejects Levinas’s argument for the preeminence of ethics in philosophy.
“Imagine listening at a keyhole to a conversation with the task of transcribing it, and the outcome might be a text like the present one.” — from part I: Stagework
In a series of meditations responding to writings by Emmanuel Levinas, David Appelbaum suggests that a faulty grammar warrants Levinas to speak of language in the service of ethics. It is the nature of performance that he mistakes. Appelbaum articulates this flaw by performing in writing the act of the philosophical mind at work. Incorporating the voices of other thinkers, in particular Levinas's contemporaries Jacques Derrida and Maurice Blanchot, sometimes clearly, sometimes indistinctly, Appelbaum creates on these pages a kind of soundstage upon which appear illustrations of what he terms "a rhetorical aesthetic," which would reestablish rhetoric, rules for giving voice, and not ethics, as the proper matrix for understanding the otherness and beyond-being that Levinas seeks in his work.
Connecting aesthetic experience with our experience of nature or with other cultural artifacts, Aesthetics as Phenomenology focuses on what art means for cognition, recognition, and affect: how art changes our everyday disposition or behavior. Günter Figal engages in a penetrating analysis of the moment at which, in our contemplation of a work of art, reaction and thought confront each other.
This is an excellent volume which expands upon the World Phenomenology Institute's recent research: the study of the beautiful intertwining of the skies and the cosmos with the human aims of philosophy, literature and the arts. The relationship of humans to the cosmos is examined through the exploration of phenomenology, metaphysics and the arts.
A groundbreaking exploration of Heidegger and embodiment, from which a radical ethical perspective emerges.
Extra info for A Propos, Levinas
Example text
But art doesn’t explicitly advocate sobering up unto a vigilance alert to the trace of the other. Art may be mindful when language doesn’t hang together nicely or speak properly or determinately. Possibly, art doesn’t play sides in what transpires, namely, a weakened technology unable to fend off the other’s indiscretions as they grow abusively inaudible. ” (EE, 65). It threatens to travesty the themes of consciousness and its objects represented. To bring forth a play of imagery that morphs one thing to the next to the next, wreaking havoc in an orderly narrative, a midsummer night’s dream that fascinates as it dissolves the boundaries between things and their linguistic casements.
Heidegger has the idea that language does not exhaust itself in meaning. Levinas picks up on it and reaffirms the anteriority of such a language, an arche‑language: “[T]he unsayable saying lends itself to the said, to the ancillary indiscretion of the abusive language that divulges or profanes the unsayable” (OB, 44). The language that language speaks (die Sprache spricht) before its use‑value is exploited by a speaker lets itself be reduced to the language spoken, yet with an unexpungeable trace of the originary.
For Levinas, significance lies chiefly in how the I hereby instantiates itself not as pure nominative presence but under an accusation syntacti‑ cally inscribed in the accusative. The performative involves a play, not of an intention but of a non‑intention, somehow enunciated and ready to operate on the Sinngebung, the making‑sense of the situation, and take its linguistical non‑place. Two dimensions of the performative are blurred not only because of the extralinguistical mode, but also because of an imperfect distinction between language usage and action, actus that becomes the difference between illocutionary and perlocutionary acts, constative and performative acts, as defined by Austin.
Soft tissue injuries can broadly be divided into two types – acute and overuse (or chronic). Acute soft tissue injuries involve muscles, tendons and ligaments, and occur in an instant causing sudden pain, disability and swelling. Examples are ankle sprains and torn muscles. Overuse soft tissue injuries, on the other hand, are characterised by a gradual increase in pain and disability and are caused by a gradual overload on tendons and ligaments. Examples are tendonitis, ‘shin splints’ and tennis elbow. Exercises and medication prescribed by your doctor are essential for treating soft tissue injuries. The following guide will set your mind at ease so keep it close by for handy reference.
Acute injuries
The three types of acute soft tissue injuries are:
- Contusions – bleeding into a tissue as a result of a direct blow
- Strains – tearing of a muscle or tendon
- Sprains – tearing of a ligament or joint capsule
The first Step – RICE
The following measures should be taken immediately and for the next 72 hours following an acute injury.
Rest: Stop using the injured site and rest your body generally.
Ice: Apply crushed ice in a moist cloth to the injured site for 15 to 20 minutes and repeat every one to two hours.
Compression: After the ice application, apply a firm rubberised bandage.
Elevation: If possible, keep the injured part above the level of your heart.
Management
Your GP will give advice that will cover the following areas:
Rest
If there is a tissue weakness the injury should be rested completely. This can be done by putting a splint on the injury which will limit your normal activities. Pain is usually a good indicator of whether you are exercising too much. Remember, while rest is important some exercises will speed your recovery.
Medication
You may need anti-inflammatory medication for a few days to provide some pain relief and to treat the tissue inflammation. Your GP will discuss your medication with you.
Exercise
This will involve stretching and strengthening the injured site and your body in general. Look below for some recommendations, although your GP may give you more specific exercises as well.
Overuse injuries
These chronic injuries (which develop slowly) can usually be divided into groups:
- Overloading a muscle, or more commonly, a tendon (tendonitis)
- Sprains to ligaments or joint capsules
Causes
Overuse injuries are caused by doing too much of one activity – this may be too often, too hard or for too long. They can also be caused by doing the activity incorrectly. People vary in their ability to cope with repetitious activity and an overuse injury may represent an inbuilt ‘weakness’ placing that person at risk of repeated injury. An example is flat feet causing excessive foot twisting (or pronation) resulting in Achilles tendonitis or shin splints.
Management
Your GP will give you specific advice that may include:
Rest
Stopping the activity causing the excessive loading of your tissues is vital.
Anti-Inflammatory treatment
This may involve medication (taken by mouth or applied to the skin), applying ice or heat to the injury, or physiotherapy.
Exercise
As there is a tissue weakness, stretching and strengthening exercises will be essential for a complete recovery. Examples and guidelines for these are below but your GP will probably give you more specific ones.
Correcting an underlying cause
This may require a referral to another doctor, podiatrist (as in the case of flat feet), or professional to have your technique analysed. Ask your GP if this is necessary.
stretching exercises
These exercises should involve the site of the injury as well as your entire body. They restore length and some strength to the injured tissue and maintain full function of non-injured parts. Stretching exercises should be done slowly, without pain and without bouncing. Try to hold these stretches for about 5-10 seconds and do 5 sets of them 2 or 3 times a day.
Front Thigh –
Hold on to a wall for support. With knees together, pick up leg and hold the foot towards the buttocks; pull back gently. Repeat on other side.
Calf Stretch –
Place heels on the ground and the balls of the feet on a book, with the legs straight.
Other specific exercises will vary according to the injured site and will be given to you by your GP or physiotherapist.
strengthening exercises
These exercises involve a progressive increase in the resistance you apply and the number of times they are performed (repetitions). These exercises should be done gradually, regularly and usually under supervision. They are designed to strengthen the weakened tissue and maintain your general body strength. Try to do them in 3 or 4 sets of 10-30 repetitions every day.
Calf –Rise up onto toes, hold for 2 seconds then down. With time, add speed, weight, and then use each leg separately.
Ankle –Loop a piece of rubber over the foot and with a fixed heel, do exercises inwards and outwards.
returning to activity
Becoming active again will depend on your injury, so make sure you follow your doctor’s instructions. Generally, this will involve a progressive and gradual return to physical activity, sport and work.
In the case of a lower limb injury, start with walking as early as possible, swimming, cycling and then jogging if appropriate. Do this 3 or 4 times a week for 25 to 40 minutes.
With any injury, it’s vital to maintain some level of fitness throughout your recovery with an exercise such as walking. Allow about 15 minutes to warm up and stretch. Following exercise, allow 5 minutes to cool down and stretch. Remember to slow down or stop if you are experiencing pain.
Exercise tip: Before any strenuous exercise, allow 15 minutes to warm up and stretch muscles. This will help prevent further injuries. | http://completefeet.com.au/soft-tissue-injuries-2 |
Thermodynamic approach to field equations in Lovelock gravity and $f(R)$ gravity revisited
Yan-Gang Miao${}^{1,2}$[^1], Fang-Fang Yuan${}^{1}$[^2], and Zheng-Zheng Zhang${}^{1}$[^3]
${}^{1}$School of Physics, Nankai University, Tianjin 300071, China
${}^{2}$State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, P.O. Box 2735, Beijing 100190, China
Abstract
The first law of thermodynamics at black hole horizons is known to be obtainable from the gravitational field equations. A recent study claims that the contributions at inner horizons should be considered in order to give the conventional first law of black hole thermodynamics. Following this method, we revisit the thermodynamic aspects of field equations in the Lovelock gravity and $f(R)$ gravity by focusing on two typical classes of charged black holes in the two theories.
PACS Number(s): 04.50.Kd, 04.70.Dy
Keywords: Thermodynamics, Field equations, Multiple horizons
Introduction
============
Given the action of a gravitational system, the field equations can be obtained through the variation with respect to a metric. Although the field equations ought to contain all available dynamical information of the system including some thermodynamic properties, it is still surprising that through a quite simple procedure a relevant component of the field equations in a static and spherically symmetric spacetime can be rewritten [@Padmanabhan:2002sha] as the form of the first law of thermodynamics at a black hole horizon. Since then this approach has been applied to various black hole solutions including the ones in the Lovelock gravity [@Paranjape:2006ca] and $f(R)$ gravity [@Akbar:2006mq]. As noted in e.g. ref. [@Kothawala:2009kc], the resulting expression is actually different from the conventional first law of black hole thermodynamics because of the appearance of the $P d V$ term, where $P$ is the radial pressure and $V$ is the volume surrounded by a horizon. It has been shown [@Akbar:2007qg; @Kwon:2013dua] for the BTZ black holes in several gravity theories that the expected first law (with the variation of black hole charges) can be obtained by summing the contributions of the first law of thermodynamics at all black hole horizons.
Based on the idea of ref. [@Kwon:2013dua], we reconsider the thermodynamic aspects of field equations in the Lovelock gravity and $f(R)$ gravity, respectively. In contrast with the general analysis in refs. [@Paranjape:2006ca; @Akbar:2006mq], the procedure of ref. [@Kwon:2013dua] involves the explicit relations between the positions of horizons and black hole charges. Due to the fact that the black hole thermodynamics in the Lovelock gravity and $f(R)$ gravity has been studied intensively, it is interesting and meaningful to compare the results obtained from the idea of ref. [@Kwon:2013dua] with that from the general one. Therefore, we choose two typical classes of charged black holes in the two gravity theories for our investigations.
This paper is organized as follows. In the next section, we demonstrate how to derive the first law of black hole thermodynamics for a particular class of charged Lovelock black holes in the 5-dimensional spacetime. In section 3, we discuss a class of 4-dimensional $f(R)$-Maxwell black holes imposed by a constant curvature scalar, where this class of black holes has four horizons. The conclusion is given in the last section.
Lovelock black holes: 5-dimensional case
========================================
For simplicity, we start with a class of charged black holes in Einstein-Maxwell-Gauss-Bonnet theory [@Wiltshire:1985us; @Charmousis:2008kc]. Obviously, more general static Lovelock black holes can be studied in the same way.
The action we consider here has the following form,
$$S = \frac{1}{16\pi G} \int d^d x \sqrt{-g}\, \left( R - 2\Lambda + \alpha L_{GB} \right) + S_m,$$
where $L_{GB} \equiv R^2 - 4 R_{\mu\nu} R^{\mu\nu} + R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma}$ is the Gauss-Bonnet Lagrangian, the matter part $S_m$ is assumed to be the Maxwell term, and the parameter $\alpha$ is the Gauss-Bonnet coupling.
When the spacetime dimension $d=5$ and the cosmological constant $\Lambda=0$, the corresponding black hole solutions with spherical horizons have been found [@Wiltshire:1985us] to be
$$ds^2 = - V(r)\, dt^2 + \frac{dr^2}{V(r)} + r^2\, d\Omega_3^2, \label{ncet}$$
$$V(r) = 1 + \frac{r^2}{4\alpha}\left( 1 - \sqrt{1 + \frac{16\alpha\mu}{r^4} - \frac{8\alpha q^2}{r^6}} \right), \qquad F_{tr} = \sqrt{\frac{3}{4\pi G}}\,\frac{q}{r^3},$$
where $F_{tr}$ is the nonzero component of electromagnetic tensors and $\mu$ is a parameter related to the black hole mass. The horizon positions are at
$$r_\pm^2 = (\mu - \alpha) \pm \sqrt{(\mu-\alpha)^2 - q^2},$$
from which we can find the following relations:
$$(r_+ r_-)^2 = q^2, \qquad r_+^2 + r_-^2 = 2(\mu - \alpha). \label{lre}$$
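The two relations just quoted follow from a short computation that the text leaves implicit: squaring the horizon condition $V(r_\pm)=0$ with the explicit $V(r)$ above gives a quadratic equation in $r_\pm^2$,
$$r_\pm^4 - 2(\mu - \alpha)\, r_\pm^2 + q^2 = 0,$$
whose product and sum of roots reproduce $(r_+ r_-)^2 = q^2$ and $r_+^2 + r_-^2 = 2(\mu-\alpha)$.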
Based on the analysis of ref. [@Paranjape:2006ca], we arrive at the $(r,r)$ component of field equations at the outer horizon,
$$\frac{V'(r_+)}{r_+}\left( r_+^2 + 4\alpha \right) - 2 = \frac{2}{3}\, r_+^2\, 8\pi G\, P_+, \label{lrr}$$
where $T^{\mu \nu} = F^{\mu\lambda} F^\nu_{\ \lambda} - \frac{1}{4} g^{\mu\nu} F^{\lambda\sigma} F_{\lambda\sigma}$ is the energy-momentum tensor and $P_+ \equiv T^r_{\ r} (r_+)$ is the radial pressure. An analogous equation at the inner horizon can be obtained easily by the replacement of $r_+$ by $r_-$ in eq. (\ref{lrr}).
Since $V(r_+) = 0$, we have $\sqrt{r_+^6 - 8\alpha q^2 + 16\alpha\mu r_+^2} = r_+ (r_+^2 + 4\alpha)$. Considering the relation $(r_+ r_-)^2 = q^2$, we can rewrite the first term of eq. (\ref{lrr}) to be $2 \big(1-\frac{r_-^2}{r_+^2} \big)$. Noting that the volume in this case is $V_+ = \frac{S_3}{4}r_+^4 = \frac{\pi^2}{2} r_+^4$, we multiply by $\frac{3\pi}{8 G} r_+ dr_+$ on both sides of eq. (\ref{lrr}) and get a simpler form,
$$T_+\, dS_+ - dE_+ = P_+\, dV_+,$$
with
$$T_\pm = \frac{r_\pm^2 - r_\mp^2}{2\pi r_\pm \left( r_\pm^2 + 4\alpha \right)}, \qquad S_\pm = \frac{\pi^2}{2G}\, r_\pm^3 \left( 1+\frac{12\alpha}{r_\pm^2} \right), \qquad E_\pm = \frac{3\pi}{8G} \left( r_\pm^2 + 2\alpha \right).$$
This is the expected first law of thermodynamics, where $E_\pm$ coincides with the Misner-Sharp energy inside the horizons. More explicitly, we have the corresponding equations at the outer and inner horizons as
$$\frac{3\pi}{4G}\left( \frac{r_+^2 - r_-^2}{r_+}\, dr_+ - r_+\, dr_+ \right) = P_+\, dV_+,$$
( d r\_- - r\_- d r\_- ) &=& P\_- d V\_-. Following the procedure described in ref. [@Kwon:2013dua], the sum of these two equations reads - (r\_+ dr\_+ + r\_- dr\_-) + T\_+ d S\_+ + ( r\_- - )d r\_- = P\_+ d V\_+ + P\_- d V\_-. \[sum\]
From the relations in eq. (\[lre\]), we have $$\begin{aligned}
d \m - d\a &=& r_+ dr_+ + r_- dr_-, \nonumber \\
q d q &=& r_+ r_- ( r_+ d r_- + r_- d r_+ ). \label{dmaq}\end{aligned}$$ Thus, if we recall the parameters, i.e., the black hole mass $M$, the total electric charge $Q$, and the conjugate potential $\Phi_+$, M = , Q = ()\^ q, \_+ = 3 ()\^ , eq. (\[sum\]) can be rewritten as - d M + d + T\_+ d S\_+ + \_+ d q - ( d r\_+ + d r\_- ) = 2 \^2 ( P\_+ r\_+\^3 d r\_+ + P\_- r\_-\^3 d r\_- ). This gives the motivation for us to introduce the expressions of radial pressures as $P_\pm \equiv T^r_{\ r} (r_\pm) = - \frac{3}{8 \pi G} \frac{r_\mp^2}{r_\pm^4}$. Just like the charged black holes in ref. [@Kwon:2013dua], $P_\pm$ is also proportional to the square of the electromagnetic parameter.
On the other hand, in order to incorporate the variation of a relevant quantity with respect to the coupling $\alpha$, we define a total variational operator as $\wt d \equiv d + d_\alpha$. For example, we have $$\begin{aligned}
\wt d S_+ &\equiv& d S_+ + d_\alpha S_+ = \frac{3\pi^2}{2G}\big( r_+^2 + 4\alpha\big)\, d r_+ + \frac{6\pi^2}{G}\, r_+\, d\alpha ,\\
\wt d E_+ &\equiv& d E_+ + d_\alpha E_+ = \frac{3\pi}{4G}\big( r_+ d r_+ + d\alpha \big).\end{aligned}$$ Now we see that eq. (\[m2\]) turns into the precise form of the first law of black hole thermodynamics, $$- d M + T_+ \wt d S_+ + \Phi_+ d q + \Theta_+ d\alpha = 0. \label{lbh}$$ Here the potential conjugate to the Gauss-Bonnet coupling is $\Theta_+ = \frac{3\pi}{4 G} (1 - 8 \pi r_+ T_+)$ and $T_+ d S_+$ can be rewritten as $\frac{\kappa}{8\pi G} d A_+$. For the sake of aesthetics, one can of course deduce an equivalent formula: $ - \wt d M + T_+ \wt d S_+ + \Phi_+ \wt d q + \Theta_+ \wt d \alpha= 0 $. It is worth mentioning that we can also obtain the first law at the inner horizon by applying the same method.
We note that the extended first law (including variation with respect to the Gauss-Bonnet coupling) for the Lovelock gravity has been derived from first principles in ref. [@Kastor:2010gq] and the result, together with the Smarr formula, has been utilized [@Castro:2013pqa] to investigate some issues related to the universality of the product of horizon areas, where a set of relations between thermodynamic potentials $\Theta_\pm$ has also been obtained.
Here it is worth comparing our treatment with these earlier computations. We have adopted the method of ref. [@Kwon:2013dua] and obtained the same extended first law for Lovelock black holes as ref. [@Castro:2013pqa], although our procedure differs from that of ref. [@Kastor:2010gq]. Moreover, the first law at the inner horizon can be derived analogously. It is therefore interesting that the same goal is reached by two different means.
$f(R)$ black holes: constant curvature case
===========================================
In this section, we turn to the investigation of charged black holes in the $f(R)$ gravity. From the action $$S = \frac{1}{16\pi G}\int d^4 x \sqrt{-g}\, \big( R + f(R) \big) + S_m, \label{fraction}$$ a class of $f(R)$-Maxwell black holes imposed by a constant curvature scalar $R=R_0$ can be obtained [@Moon:2011hq; @Sheykhi:2012zz], $$ds^2 = - N(r) dt^2 + \frac{dr^2}{N(r)} + r^2 \big( d\theta^2 + \sin^2\theta\, d\phi^2 \big),$$ $$N(r) = 1 - \frac{2\mu}{r} + \frac{q^2}{(1+f')\,r^2} - \frac{R_0}{12} r^2 = - \frac{R_0}{12\, r^2}\prod_{i=1}^{4} (r-r_i), \label{fnr}$$ where $F_{tr} \propto q/r^2$ is the only nonzero component of the electromagnetic tensor and $\mu$ is a parameter related to the black hole mass. From the action (eq. (\[fraction\])), we can derive the relevant component of field equations as follows: ( - - r\_i - ) - ( f - R\_0 f’ ) = 8G P\_i. Since $N(r_i) = 0$, we have $$\frac{2\mu}{r_i} = 1 + \frac{q^2}{(1+f')\, r_i^2} - \frac{R_0}{12} r_i^2.$$ Noting that here the volume is $V_i=\frac{4\pi}{3}r_i^3$ and multiplying by $\frac{r_i^2}{2G}dr_i$ on both sides of eq. (\[frr\]), we arrive at the following equation, $$\frac{1+f'}{2G}\Big( 1- \frac{q^2}{(1+f')\, r_i^2} - \frac{R_0}{4} r_i^2 \Big) dr_i- \frac{1+f'}{2G}\, dr_i - \frac{1}{4G}\big( f - R_0 f'\big) r_i^2\, dr_i = P_i d V_i. \label{fr2}$$ Thus, it has the form of the first law of thermodynamics, $$T_i d S_i - d E_i - T_i d \bar S_i = P_i d V_i, \label{ffl}$$ where the black hole parameters are $$T_i = \frac{1}{4\pi r_i}\Big( 1- \frac{q^2}{(1+f')\, r_i^2} - \frac{R_0}{4} r_i^2 \Big), \qquad S_i = \frac{\pi r_i^2}{G}(1+f'), \qquad E_i = \frac{r_i}{2G}(1+f'), \label{tse}$$ where $E_i$ is just the Misner-Sharp energy.
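Before proceeding, one can quickly cross-check the temperature in eq. (\[tse\]) against the metric function: $T_i$ should equal $N'(r_i)/4\pi$ once the mass parameter is eliminated through $N(r_i)=0$. The following SymPy sketch is our own illustrative verification (it uses the constant-curvature solution as written above, with the symbol fprime standing for $f'(R_0)$).

```python
import sympy as sp

r, mu, q, R0, fp = sp.symbols('r mu q R_0 fprime', positive=True)

# Metric function of the constant-curvature f(R)-Maxwell black hole, as written above
N = 1 - 2*mu/r + q**2/((1 + fp)*r**2) - R0*r**2/12

# Eliminate mu with the horizon condition N(r_i) = 0 (N is linear in mu)
mu_h = sp.solve(sp.Eq(N, 0), mu)[0]

# Surface gravity / temperature computed from the metric
T_metric = sp.diff(N, r).subs(mu, mu_h) / (4*sp.pi)

# Temperature quoted in eq. (tse)
T_text = (1 - q**2/((1 + fp)*r**2) - R0*r**2/4) / (4*sp.pi*r)

print(sp.simplify(T_metric - T_text))   # -> 0
```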
As noted in ref. [@Akbar:2006mq], see also ref. [@Eling:2006gr], the additional entropy term[^4] $d \bar S_i$ is the entropy production term of non-equilibrium thermodynamics. That is, when the higher-derivative $f(R)$ term is included in the action, the horizon thermodynamics becomes non-equilibrium. In this case, the entropy balance law needs to be modified [@Eling:2006gr; @Chirco2010] in order to derive the $f(R)$ gravity field equations from the thermodynamical prescription. The extra irreversible entropy production term can be interpreted as a bulk viscosity contribution and has its origin in the nonzero expansion of the null geodesics comprising the horizon. However, through a more general definition of local entropy, reversible spacetime thermodynamics can still be applied [@Elizalde2008] to this non-equilibrium case. The original first law cannot be sustained unless the extra entropy production term $d \bar S_i$ is taken into account to restore the balance. In the Einstein gravity limit, this extra term disappears. On the other hand, if the energy term is redefined as $d \bar E_i=d E_i+\frac{1}{4G} ( f - R_0 f') r_i^2 dr_i$, the first law eq. (\[ffl\]) turns into the following form, $$T_i d S_i - d \bar E_i = P_i d V_i, \label{frpdv}$$ which is exactly the same form as in the equilibrium case.
From the expression of $N(r)$ in eq. (\[fnr\]), we obtain some useful relations as follows: $$\begin{aligned}
\sum_{i=1}^{4} r_i = 0, \qquad \prod_{i=1}^{4} r_i &=& - \frac{12\, q^2}{R_0 (1+f')}, \label{fre1}\\
r_1 r_2 + r_2 r_3 + r_3 r_4 + r_4 r_1 + r_1 r_3 + r_2 r_4 &=& - \frac{12}{R_0}, \label{fre2}\\
r_1 r_2 r_3 + r_2 r_3 r_4 + r_3 r_4 r_1 + r_1 r_2 r_4 &=& - \frac{24\mu}{R_0}, \label{fre3}\\
\sum_{i=1}^{4} r_i^2\, d r_i &=& - \frac{24}{R_0}\, d\mu. \label{fre4}\end{aligned}$$ Eq. (\[fre4\]) can easily be obtained from the helpful formula provided in ref. [@Du:2014gr], where it has been proved that the roots $r_i$ $(i=1, 2, \cdots, m)$ of the polynomial $$a_m r^m + a_{m-1} r^{m-1} + \cdots + a_0 = 0$$ satisfy a simple formula, $$s_n= - \frac{1}{a_m}\sum_{i=0}^{m-1} s_{n-m+i}\, a_i, \label{sn}$$ where $s_n \equiv \sum_{i=1}^m r_i^n$. Note that $s_{n-m+i}=0$ for $n-m+i<0$, and $s_{n-m+i}=n$ for $n-m+i=0$. For our case, the positions of horizons are determined by the equation $N(r)=0$, which can be put into the standard form $$\frac{R_0}{12} r^4 - r^2 + 2\mu r - \frac{q^2}{1+f'} = 0.$$ So we have $a_4=\frac{R_0}{12}$, $a_3=0$, $a_2=-1$, $a_1=2\mu$, and $a_0=-\frac{q^2}{1+f'}$. Substituting these coefficients into eq. (\[sn\]) and setting $n=3$ and $m=4$, we get $$\sum_{i=1}^{4} r_i^3 = - \frac{72\mu}{R_0}.$$ By taking the derivative on both sides of the above equation, we recover eq. (\[fre4\]).
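To make the recursion in eq. (\[sn\]) concrete, the following SymPy sketch (again ours, purely illustrative) implements it for the quartic above with the stated conventions and reproduces $\sum_i r_i^3 = -72\mu/R_0$, from which eq. (\[fre4\]) follows by differentiation.

```python
import sympy as sp

mu, q, R0, fp = sp.symbols('mu q R_0 fprime')   # fp stands for f'(R_0)

# Coefficients of the horizon polynomial:  a4 r^4 + a3 r^3 + a2 r^2 + a1 r + a0 = 0
a = {4: R0/12, 3: 0, 2: -1, 1: 2*mu, 0: -q**2/(1 + fp)}
m = 4

def power_sum(n):
    """s_n = sum_i r_i^n via the recursion in eq. (sn), with the stated
    conventions: s_k = 0 for k < 0 and the k = 0 term replaced by n."""
    total = 0
    for i in range(m):
        k = n - m + i
        if k < 0:
            term = 0
        elif k == 0:
            term = n            # convention stated in the text
        else:
            term = power_sum(k)
        total += term * a[i]
    return sp.cancel(-total / a[m])

print(power_sum(1))   # -> 0             (sum_i r_i, consistent with a3 = 0)
print(power_sum(2))   # -> 24/R_0
print(power_sum(3))   # -> -72*mu/R_0,  so  sum_i r_i^2 dr_i = -(24/R_0) dmu
```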
With this digression finished, we are ready to use eq. (\[fr2\]) and sum the four equations for $i=1, 2, 3, 4$ to obtain a central equation, & &T\_1 d S\_1 - d r\_1 - ( d r\_2 + d r\_3 + d r\_4 )\
& &+ (-) (r\_2\^2 d r\_2 + r\_3\^2 d r\_3 + r\_4\^2 d r\_4 ) - ( f - R\_0 f’ ) \_[i=1]{}\^[4]{} r\_i\^2 d r\_i\
& &= \_[i=1]{}\^[4]{} P\_i d V\_i, where the radial pressure is $P_i \equiv T^r_{\ r} (r_i) = - \frac{q^2}{8\pi G r_i^4}$. Note that the energy-momentum tensor of $f(R)$ black holes takes the same form as that of Lovelock black holes, see its formulation under eq. (\[lrr\]), because both kinds of black holes have the same Maxwell charge. The difference between them lies in metrics, which gives rise to different nonzero components of electromagnetic tensors $F_{tr}$, see eqs. (\[ncet\]) and (\[fnr\]). After regarding $r_1$ as the outermost position of event horizons and making some manipulation, we have T\_1 d S\_1 - ( - r\_1\^2- ) d r\_1 + d = 0.
On the other hand, when $i=1$ eq. (\[dm\]) leads to d = ( 1 - - r\_1\^2 ) d r\_1 + d q. Combining the above two equations, we obtain T\_1 d S\_1 + d q + d = 0. By recalling the black hole parameters, i.e., the electric potential $\Phi_i$ on the $r_i$ horizon and the electric charge $Q$, \_i = , Q = , and modifying the mass parameter, M = (1+f’) M = - , we finally derive the first law of black hole thermodynamics, - d M + T\_1 d S\_1 + \_1 d Q = 0. Only when $f' - \f{2}{R_0} f - 1 = 0$ can we have $\wt M = M$. This requires that $f(R) = C e^{\f{2R}{R_0}} - \f{R_0}{2}$, where $C$ is a constant.
We close with some comments. First, the physical interpretation of the new mass parameter $\wt M$ is unclear at present; without introducing $\wt M$, the expected first law could not be reproduced even if the non-equilibrium part were discarded. Second, it is puzzling[^5] that only two of the five relations given in eqs. (\[fre1\])-(\[fre4\]) have been used in the derivation of the first law. This observation motivates the search for an alternative derivation. Third, due to the restrictions of the method itself, it is unclear how to attach a proper interpretation to the thermodynamic first law at virtual horizons. Finally, the analogous method can be used to study other black holes in $f(R)$ gravity, such as those found in refs. [@delaCruzDombriz:2009et; @Sebastiani:2010kv; @Hendi:2011eg].
Conclusion
==========
In ref. [@Kwon:2013dua] an interesting property was discovered: the contributions of inner horizons should be taken into account when one derives the first law of black hole thermodynamics from the field equations. The essential step is to sum the equations corresponding to the first law of thermodynamics at all horizons. By applying this method, we have studied a 5-dimensional charged Lovelock black hole and a 4-dimensional $f(R)$-Maxwell black hole imposed by a constant curvature scalar. More general black holes in the Lovelock gravity and $f(R)$ gravity can be analyzed similarly.
This work may be extended in the following ways. Firstly, one may attempt to rigorously prove the property found in ref. [@Kwon:2013dua]. In this respect, the investigation of ref. [@Kothawala:2009kc] may be helpful, where the near-horizon symmetries of the Einstein tensor are used to demonstrate the thermodynamic interpretation of the field equations near the horizon. Secondly, it is interesting to generalize this method to the case of a nonzero variation of the cosmological constant. We note that since the work of ref. [@Kastor:2009wy], much effort has been devoted to studying this kind of extended first law, see, for instance, ref. [@Cvetic:2011], and its relevant phase transitions [@Altamirano:2014tva]. Thirdly, as a previous work pointed out [@Son:2013eea], the pressure plays a complementary role in black hole thermodynamics. Curiously, the literature focusing on the thermodynamic volume [@Altamirano:2014tva] involves a term like $V d P$ rather than $P d V$. We note that $P d V$ appears in eqs. (\[sum\]) and (\[frpdv\]) but not in the final expressions of black hole thermodynamics, eqs. (\[lbh\]) and (\[dM\]), and that $V d P$ actually plays no role in the black hole thermodynamics investigated in the present paper. Both terms may have some connection to the method proposed in ref. [@Kwon:2013dua].
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors would like to thank the anonymous referee for the helpful comments that have greatly improved this work. This work was supported in part by the National Natural Science Foundation of China under grant No. 11175090 and by the Ministry of Education of China under grant No. 20120031110027.
[99]{}
T. Padmanabhan, “Classical and quantum thermodynamics of horizons in spherically symmetric space-times,” Class. Quant. Grav. [**19**]{} (2002) 5387 \[arXiv:gr-qc/0204019\].
A. Paranjape, S. Sarkar, and T. Padmanabhan, “Thermodynamic route to field equations in Lanczos-Lovelock gravity,” Phys. Rev. [**D 74**]{} (2006) 104015 \[arXiv:hep-th/0607240\].
M. Akbar and R.-G. Cai, “Thermodynamic behavior of field equations for $f(R)$ gravity,” Phys. Lett. [**B 648**]{} (2007) 243 \[arXiv:gr-qc/0612089\].
D. Kothawala and T. Padmanabhan, “Thermodynamic structure of Lanczos-Lovelock field equations from near-horizon symmetries,” Phys. Rev. [**D 79**]{} (2009) 104020 \[arXiv:0904.0215 \[gr-qc\]\].
M. Akbar, “Thermodynamic interpretation of field equations at horizon of BTZ black hole,” Chin. Phys. Lett. [**24**]{} (2007) 1158 \[arXiv:hep-th/0702029\].

Y. Kwon and S. Nam, “Thermodynamics from field equations for black holes with multiple horizons,” arXiv:1310.4933 \[gr-qc\].
D.L. Wiltshire, “Spherically symmetric solutions of Einstein-Maxwell theory with a Gauss-Bonnet term,” Phys. Lett. [**B 169**]{} (1986) 36.
C. Charmousis, “Higher order gravity theories and their black hole solutions,” Lect. Notes Phys. [**769**]{} (2009) 299 \[arXiv:0805.0568 \[gr-qc\]\].

D. Kastor, S. Ray, and J. Traschen, “Smarr formula and an extended first law for Lovelock gravity,” Class. Quant. Grav. [**27**]{} (2010) 235014 \[arXiv:1005.5053 \[hep-th\]\].

A. Castro, N. Dehmami, G. Giribet, and D. Kastor, “On the universality of inner black hole mechanics and higher curvature gravity,” JHEP [**1307**]{} (2013) 164 \[arXiv:1304.1696 \[hep-th\]\].

T. Moon, Y.S. Myung, and E.J. Son, “$f(R)$ black holes,” Gen. Rel. Grav. [**43**]{} (2011) 3079 \[arXiv:1101.1153 \[gr-qc\]\].

A. Sheykhi, “Higher-dimensional charged $f(R)$ black holes,” Phys. Rev. [**D 86**]{} (2012) 024013 \[arXiv:1209.2960 \[hep-th\]\].
C. Eling, R. Guedens, and T. Jacobson, “Non-equilibrium thermodynamics of spacetime,” Phys. Rev. Lett. [**96**]{} (2006) 121301 \[arXiv:gr-qc/0602001\].

G. Chirco and S. Liberati, “Non-equilibrium thermodynamics of spacetime: the role of gravitational dissipation,” Phys. Rev. [**D 81**]{} (2010) 024016 \[arXiv:0909.4194 \[gr-qc\]\].
E. Elizalde and P.J. Silva, “$F(R)$ gravity equation of state," Phys. Rev. [**D 78**]{} (2008) 061501 \[arXiv:0804.3721 \[hep-th\]\].
Y.-Q. Du and Y. Tian, “The universal property of the entropy sum of black holes in all dimensions,” arXiv:1403.4190 \[gr-qc\].
A. de la Cruz-Dombriz, A. Dobado, and A.L. Maroto, “Black holes in $f(R)$ theories,” Phys. Rev. [**D 80**]{} (2009) 124011 \[Erratum-ibid. [**D 83**]{} (2011) 029903\] \[arXiv:0907.3872 \[gr-qc\]\].

L. Sebastiani and S. Zerbini, “Static spherically symmetric solutions in $f(R)$ gravity,” Eur. Phys. J. [**C 71**]{} (2011) 1591 \[arXiv:1012.5230 \[gr-qc\]\].

S.H. Hendi, “Some exact solutions of $f(R)$ gravity with charged (A)dS black hole interpretation,” Gen. Rel. Grav. [**44**]{} (2012) 835 \[arXiv:1102.0089 \[hep-th\]\].

D. Kastor, S. Ray, and J. Traschen, “Enthalpy and the mechanics of AdS black holes,” Class. Quant. Grav. [**26**]{} (2009) 195011 \[arXiv:0904.2765 \[hep-th\]\].
M. Cvetic, G.W. Gibbons, D. Kubiznak, and C.N. Pope, “Black hole enthalpy and an entropy inequality for the thermodynamic volume," Phys. Rev. [**D 84**]{} (2011) 024037 \[arXiv:1012.2888 \[hep-th\]\].
N. Altamirano, D. Kubiznak, R.B. Mann, and Z. Sherkatghanad, “Thermodynamics of rotating black holes and black rings: phase transitions and thermodynamic volume,” Galaxies [**2**]{} (2014) 89 \[arXiv:1401.2586 \[hep-th\]\].

E.J. Son and W. Kim, “Complementary role of the pressure in the black hole thermodynamics,” Phys. Rev. [**D 87**]{} (2013) 067502 \[arXiv:1303.0491 \[gr-qc\]\].
[^1]: E-mail address: [email protected]
[^2]: [email protected]
[^3]: [email protected]
[^4]: It is easy to get $d \bar S_i = \frac{\pi (f-R_0 f')}{G} \frac{r^3_i}{ 1 - \frac{q^2}{r^2_i (1+f')} - \frac{R_0}{4} r^2_i} dr_i$ when we compare eq. (\[fr2\]) with eq. (\[ffl\]) and consider the expression of $T_i$ in eq. (\[tse\]). This formula coincides with that given by ref. [@Akbar:2006mq].
[^5]: In the case of the 5-dimensional Lovelock black holes with two horizons (see eq. (\[lre\])), both horizons have been utilized to obtain the first law.
For whatever reason, some people seem to have a difficult time differentiating between environments and ecosystems. Environments (i) generally refer to larger areas and a particular type of climate. For example, the Integrated and Adaptive Community Developments that are proposed to be built within the Verde Island Passage will be built in a tropical environment. In this case, the term merely indicates the type of area wherein the construction will take place.
Ecosystems, however, are much smaller areas and incorporate a more limited scope of life and diversity. An ecosystem is further defined as a locale, generally smaller in nature, wherein a limited scope of life forms and/or matter interact. In virtually every location where there is not a single, overwhelming environmental feature such as a desert, there are numerous ecosystems, each with its own unique needs and requirements, and all of that must be taken into account before construction can even be considered, much less started.
The area in and around the Integrated and Adaptive Community Developments and the Verde Island Passage comprises a great many diverse and unique ecosystems. Factors as seemingly inconsequential as a few extra feet of elevation on an island can (and likely will) place you in an entirely unique and separate ecological system.
You may have some of the same insects and some of the same wildlife, but the plants will be different, the soil will be different, and there will be different and unique needs for that particular location. Unfortunately, even the best of intentions sometimes lead to unintended consequences when it comes to environmental conservation, and this is often due to people looking at the big picture without examining all of the individual components closely enough, much less adequately planning and implementing programs that meet the unique needs of the individual ecosystem.
The direct impact on the local environment should be kept to a minimum as well. Not all of the construction can take place underground, and people cannot live forever underground without coming out into the sunlight on occasion (ii). However, the more construction that can take place underground, the more seamlessly and beneficially the tops of the constructs can be blended into the environment and even the local ecosystems.
Homes that are built largely underground tend to be very well insulated and require substantially less in the way of heating and cooling. Furthermore, those portions that do remain above ground can be fitted with natural-looking swimming pools that blend seamlessly into the surrounding environment, or with wet roofs that allow for fish farms, vertical farming, and other options, in addition to providing a great deal more room above ground for parks, ecological preservation areas, and other more environmentally sound uses.
Great lessons can be learned from Disney and its construction in and around Kissimmee, Florida as well. While such a project would never be approved these days, given the need to protect swamps and wetlands, back in the sixties when the swamp lands were being purchased, a great many people believed it to be an inevitably failed venture, if for no other reason than the difficulty of building in the swamps. The solution, however, was both ingenious and informative.
What most people perceive when they wander through the theme parks is that they are walking on terra firma, when in truth they are quite literally walking around on the roofs of a vast underground complex. In fact, back in the seventies, it was rumored that the underground complex was so vast that it comprised three fully incorporated cities. If such an “illusion” can be created for the sake of entertainment, it should certainly be a viable option for the sake of environmental sustainability.
The electrical grid is another area of major (and rightful) concern. Occasional coronal mass ejections have occurred since as far back as the 1850s, and all of the major ones have been very destructive in their own right (iii). While one such CME barely missed the earth in 2012, none has hit the earth in the era of the modern electrical grid, which remains so vulnerable to just such an event. There is little doubt that one will hit in the foreseeable future; it is not a matter of if, but only a matter of when. Both of the major historical CME strikes wreaked havoc in the industrialized world throughout the US and Europe. Imagine the prospect of such an event occurring today.
Aside from the obvious prospect of riots, which would certainly include a great deal of arson, further poisoning our air, there would be an entire generation that has grown up on electronic gadgets going through psychological withdrawal, along with an absolute inability to use cash machines, banking, credit, or other viable mediums of exchange, and the picture just grows increasingly ugly.
While point-of-use electrical power generation and related solutions may not be an ultimate answer, they would certainly decrease the risk of the systemic losses that would otherwise occur. While increasingly diverse and viable technologies are being introduced for such solutions, power storage and batteries remain the current weak link in such a system. However, it should be noted that even here, some progress is at least being made.
Food forests are yet another concept that has multiple purposes, all of which provide a mutual benefit to both the environment and to humanity. It is difficult to comprehend exactly why programs such as this have not been implemented on a large scale if there is really so much concern from the proverbial powers that be about ending hunger among the poor people of the world. Could it possibly be because such solutions are not so easily regulated and taxed? Whatever the reason, the food forests can and should be implemented on a large-scale basis not only out in the more rural areas, but in each and every park, greenbelt and other location within the inner cities where such a natural ecosystem would thrive and provide a host of benefits.
Large-scale food forests produce fruits and vegetables on a virtually constant basis, depending of course on the natural surrounding environment and the availability of fruit-bearing plants and trees. Furthermore, this level of new forestation would greatly reduce the current concerns about rising carbon dioxide levels, as plants thrive on carbon dioxide and “exhale” oxygen in exchange for all of the CO2 that they “inhale”.
These entirely natural ecosystems, created as part of the introduction of the food forests, would also provide viable habitat for birds, squirrels, insects (needed for pollination), and a host of other species, all of which are necessary in a healthy ecological system. Furthermore, organic waste from within the cities could be used for composting and digestion programs in order to create and constantly generate new sources of healthy, productive soils to expand the existing food forests and create new systems at the same time, with seeds that were grown and produced locally and will subsequently grow into plants that thrive exceedingly well in the local conditions (iv).
Underground railroads and streets are actually very common in many large cities. What is to prevent the construction of entire systems underground, with only a limited amount of road surface actually located on the surface of the earth? Cost? What is the cost of the disruption of traffic for road repairs and infrastructure repairs in any large city? Forced-air ventilation systems would certainly seem to be in order, though the carbon monoxide, carbon particulate matter, and other impurities and pollutants could also be more easily filtered as they exit through a contained ventilation system from a series of tunnels.
People tend to forget, in all of the hyperbole about carbon dioxide, that it is only one of many pollutants and, at the same time, a very important part of the symbiotic relationship between plant and animal life that sustains our planet. I am personally much more worried about breathing in the carbon monoxide from my neighbor's car than I am about breathing a little CO2 that he may have inadvertently (or even purposefully) exhaled in my general direction … even if he does have a rather bad case of halitosis. Indeed, the question should not be what is the cost of implementing these roads and byways, but rather, what is the cost of not implementing these or similar solutions.
Land reclamation and land restoration processes are often brought into play, though personally, it may be best not to delve too deeply into such areas. What are the purposes of the deserts of the world? That very relevant little detail remains every bit as hidden as the purpose of oil within the crust of the earth and along the tectonic plates, if in fact it is a naturally occurring abiotic substance.
If the truth were to be told, if the earth and its surrounding atmosphere and innards were an orange, the human race would not even be able to give a comprehensive explanation of the peel … which is roughly equivalent to the area of the earth most relevant to our survival as a species. Peers of some of the founding members have developed a means to naturally bring moisture into desert areas that has resulted in over forty centimeters of arable soil in the middle of the Australian Outback, and while such programs show great promise in some regards, it is not ultimately known how they would directly impact the global environmental system.
The same theory holds true for many efforts to reclaim bays and other ocean-frontage. Again, the long-term environmental impacts are not fully understood to the extent necessary to realistically determine whether these programs are more viable for sustaining or more detrimental to the surrounding ecological systems in place. The current plans under way for the reclamation of Manila Bay would certainly endanger and possibly destroy great swathes of the Verde Island Passage at the tip of the Coral Triangle.
This is, by and large, one of the top three most diverse aquatic ecosystems in existence in the world today, and the runoff from such a project alone could conceivably cause irreparable harm to an environment that could not be replaced in tens of thousands of years at the natural growth rate of coral. Just because these kinds of programs can be implemented does not by any means imply that they should be undertaken.
i ENVIRONMENT: the surroundings or conditions in which a person, animal, or plant lives or operates.
ECOSYSTEM: a biological community of interacting organisms and their physical environment. (in general use) a complex network or interconnected system.
ii There is every indication that people are working, and living to a limited degree in large underground government bases outside of Denver and on the Arkansas and Missouri border in addition to numerous underground government facilities around the globe.
iii A partial list of CME events:

- In 1859, what became known as “The Carrington Event” stemmed from a coronal mass ejection (CME) coming from the sun. The Carrington Event was powerful enough to destroy the telegraph systems throughout Europe and the United States. It may be easy to laugh off the idea of losing the telegraph, but it should be noted that there was no electrical grid in existence at the time … had there been, it would have been completely destroyed as well.

- “The Great Geo-Magnetic Storm of 1921” is considered to be one of the five worst recorded solar storm events. It disrupted communications traffic from the Atlantic coast to the Mississippi River. On May 15, it not only disrupted but knocked out of operation the entire signal and switching system of the New York Central Railroad below 125th Street. This outage was then followed by a fire in the control tower at 57th Street and Park Avenue. The same storm burned out a Swedish telephone station and interfered with telephone, telegraph, and cable traffic over most of Europe.

- 1958 – In the last century there have also been other events, such as the Feb. 11, 1958 solar storm, which resulted in nationwide radio blackouts. According to various reports, auroras were visible in Boston, Seattle, Canada, and Newfoundland. The storm reportedly was so intense over Europe that newspaper reports at the time said there was concern about fires and fear that war had broken out again.

- 1989 – The entire province of Quebec was blacked out by a glancing blow from a passing plasma storm. (The latter portion of the “tail” of the storm was likely the only portion that passed through the atmosphere of the earth … had the storm hit directly, the damage would have been substantially greater.)

- 2012 – A geomagnetic plasma cloud resulting from yet another CME barely missed the earth. This one was deemed to be larger than both the events of 1859 and 1921, which adversely impacted vast swathes of Europe and the US. The estimated damage and financial impact of a direct hit was estimated by the US government to be over two trillion US dollars … but that would not be the worst of it these days.
iv Granted, not all species will thrive in all environments. However, and just as a singular example, one variety of apple tree may grow exceedingly well but not produce the desired type of apple, whereas another apple tree may produce a more viable fruit but does not grow so well in the local environment. Grafting and other principles of biology and ecology can often overcome virtually any similar challenge. Similar methods can also be used to create berry bushes that produce a variety of berries on a single bush in some locations.
Let us know what you think please! | http://freespeechportal.com/index.php/sustainable-development/54-environmental-sustainability-in-systemically-sustainable-developments |
Sheriff’s Department urges Caution with Traffic Increase
A noticeable boom in summer travel has swept across western South Dakota, bringing welcome revenue to the area’s vital tourism industry. Along with this business come all of the unplanned events which are inevitably a part of a traveler’s itinerary.
Last week the Pennington County Courant sat down with Pennington County Deputy Sheriff William Christopherson to discuss what this means for public safety on the I-90 corridor.
Christopherson explained that the increase in visitors is marginal this year, and is especially noticeable for his department on Highway 240 between Wall and Badlands National Park. An early estimate for traffic into the park sits somewhere near 200%, and if the season follows the annual trend next month may be even busier.
With this heavier highway traffic, there has also been a noticeable increase in calls for service, road hazards involving debris, vehicle vs. deer collisions, and motorist assists which usually involve mechanical issues or tire failure.
Deputy Christopherson did not indicate that these issues were more prevalent compared to the usual ratio of motorists passing through, but statistics gathered throughout the summer will likely paint a clearer picture when they are available.
Although the things local drivers can do to stay safe on the road are the same as they usually are, busy road conditions are a good time to remind drivers to be vigilant and alert on the highway. Wearing seatbelts, refraining from using cellular devices while driving, paying close attention to posted speed limits, and exercising caution when driving through construction are all required by law, and abiding by these rules is the best way to stay safe.

Another important tip offered by Christopherson is to call for assistance sooner rather than later if you experience vehicle trouble on the highway. “Most vehicle assists start because we see people pulled over and stop to help. If you feel unsafe, call right away.” Less time on the shoulder leaves less time for unfortunate events to occur. When experiencing car trouble on the highway, pulling well away from the outer line and turning on your hazard lights are also important steps in dealing with the situation safely. | http://philipsd.com/penn-co-courant/sheriff%E2%80%99s-department-urges-caution-traffic-increase
The object of this Code of Ethics is to outline high standards, ethical practices in digital news publishing, and does not constitute any attempt to involve itself in the day to day operations of the publishers — who have complete editorial and content independence.
The basic precepts of the Code of Ethics are to maintain the standards of digital publishing as well as protect and maintain the independence of journalists, content entities and publishers.
If a news report or article is found to contain false or inaccurate information, then, on being approached by the concerned person or party, who identifies himself or herself, provides the correct information, and supplies the required documents or material, the relevant portion of the news report or article should be edited or deleted.

If an entire news report is found to contain false or inaccurate information, the entire article should be deleted.
Members, when they are intermediaries as defined under the Information Technology Act, 2000, follow the grievance redressal mechanism outlined therein and are cognizant of the liabilities and safe harbour protections under Section 79 of the IT Act, 2000. Hence, as relevant, they follow the Information Technology (Intermediary Guidelines) Rules, 2011, including appointing a grievance officer whose contact details are displayed on the website and who acts within 36 hours of receipt of a complaint by an affected person and redresses the complaint within one month of its receipt.
Conduct periodic training and awareness programs with editorial staff about existing laws, including the Constitution of India and the more than 30 laws relating to the media, such as the Indecent Representation of Women (Prohibition) Act, the Copyright Act, the Right to Information Act, relevant provisions of the Indian Penal Code and the CrPC, civil and criminal defamation, IPR, juvenile justice, POCSO, provisions relating to reporting on rape and molestation, harassment in the workplace, caste- or gender-related crime, domestic violence, etc. | https://www.sadhanaweekly.com/Encyc/2022/8/30/The-DNPA-s-Code-Of-Conduct.html
Point-of-care tests (POCTs) are increasingly used in family medicine clinics in the United States. While the diagnostics industry predicts significant growth in the number and scope of POCTs deployed, little is known about clinic-level attitudes towards implementation of these tests. We aimed to explore the attitudes of primary care providers, laboratory staff, and clinic administrative/support staff in order to identify barriers and facilitators to the use of POCTs in family medicine.
Methods
Seven focus groups and four semi-structured interviews were conducted with a total of 52 clinic staff from three family medicine clinics in two US states. Qualitative data from this exploratory study was analyzed using the constant comparison method.
Results
Five themes were identified which included the impact of POCTs on clinical decision-making; perceived inaccuracy of POCTs; impact of POCTs on staff and workflow; perceived patient experience and patient-provider relationship, and issues related to cost, regulation and quality control. Overall, there were mixed attitudes towards use of POCTs. Participants believed the added data provided by POCT may facilitate prompt clinical management, diagnostic certainty and patient-provider communication.
Perceived barriers included inaccuracy of POCT, shortage of clinic staff to support more testing, and uncertainty about their cost-effectiveness.
Conclusions
The potential benefits of using POCTs in family medicine clinics are countered by several barriers. Clinical utility of many POCTs will depend on the extent to which these barriers are addressed. Engagement between clinical researchers, industry, health insurers and the primary care community is important to ensure that POCTs align with clinic and patient needs. | https://bmcprimcare.biomedcentral.com/articles/10.1186/s12875-016-0549-1 |
Dian Fossey- " When you realize the value of all life, you dwell less on what is past and concentrate more on the preservation of the future."
3 Healthy Lies We Tell Ourselves
Are "positive illusions" really that positive?
Posted Nov 13, 2018
"I believe the common denominator of the universe is not harmony, but chaos, hostility, and murder." —Werner Herzog, Grizzly Man (2005)
In psychiatry and psychology, the ability to distinguish reality from fantasy—known as “reality testing”—has traditionally been considered a prerequisite for mental health. Conversely, its impairment is a defining characteristic of psychosis, as exemplified by symptoms such as delusion and hallucination. But in 1988, UCLA psychologist Shelley Taylor challenged this notion with the radical hypothesis that some impairments in reality testing may actually be key to mental health. In a paper written with Jonathan Brown of Southern Methodist University, Taylor outlined the case for “positive illusions,” defined as misbeliefs associated with happiness, the ability to care for others, and the capacity for creative, productive work.1 Put more simply, positive illusions are healthy lies that we tell ourselves.
Thirty years later, positive illusions are now well-recognized (albeit still debated) in psychology, and following Taylor’s original conception, they fall into three general categories:
1. “I’m better than the average person.”
Taylor cited evidence from various studies that individuals tend to regard positive traits as core parts of their identity while discounting negative ones. (It’s worth noting that much of this research is based on college undergraduates responding to surveys, as is often the case in psychology studies.) Most report that they are “better than the average person”—a mathematical contradiction if a trait is normally distributed—with self-appraisals that are inflated compared to how others see them. This cognitive bias has come to be known as the “better than average effect,” the “superiority illusion,” and the “Lake Wobegon Effect” (after Garrison Keillor’s fictional radio-show community in which "all the women are strong, all the men are good-looking, and all the children are above average"). Indeed, we tend to extend the superiority illusion beyond ourselves to our loved ones as well, offering a kind of explanation for how love can be blind, allowing us to overlook the faults and foibles of our romantic interests and our children alike.
Subsequent research by Brown found that the better than average effect is stronger for valued attributes like honesty, kindness, responsibility, intelligence, and competence. The effect also increases following threats to self-worth and is motivated by the desire to feel good about ourselves.2 But in contrast to such common and hard-to-measure personality characteristics, other research has shown evidence for a “worse than average effect” when it comes to rare abilities and difficult tasks, such as computer programming, riding a unicycle, or coping with the death of a loved one.3 This suggests that the better than average effect may sometimes be less about self-aggrandizement than about errors in estimating traits and abilities in others and the way that we interpret the term “average” as a pejorative rather than a statistical norm.
Although the better than average effect has been found to be associated with psychological well-being, there’s also evidence that its benefits might depend on quantity. It should come as no surprise that confidence is correlated with self-esteem—they’re nearly the same thing. And over-confidence might very well lead to perseverance that is predictive of real achievement in some circumstances, such as among children learning new skills, or even superiority, such as among elite athletes. But it should also come as no surprise that the superiority illusion has also been correlated with narcissism, with “self-enhancing” individuals more likely to be rated as condescending, resentful, and defensive.4 (Note that the idea of a continuum of confidence/overconfidence mirrors the finding that narcissism can be both adaptive and maladaptive, depending on degree. See "Just What is a “Narcissist” Anyway?” for more.)
As always, the devil may be in the details. "Self-enhancement," defined as a discrepancy between self-perceptions and others’ impressions, might have more negative effects than overconfidence that’s not as obvious to others.5 Some research has also suggested that self-enhancement might have short-term social benefits through favorable initial impressions that can become more negative and socially harmful in the long run.6
Social psychologist Roy Baumeister has argued for a kind of Goldilocksian “optimal margin” of positive illusions in which too much superiority bias, but also too little, could be associated with less psychological well-being.7 Just the right amount of superiority bias might therefore go a long way, but it’s probably best to keep it to yourself.
2. “I am the master of my fate.”
When the English poet William Ernest Henley was recovering from amputation-sparing surgery on his leg, he penned Invictus, concluding that despite his hardship, “I am the master of my fate, I am the captain of my soul.” Belief in personal control over circumstances that are largely beyond our control represents the second category of positive illusions.
“Locus of control” is the more generic belief in how much personal control we have over life events, whether or not that belief is accurate. This has been studied for well over 60 years, with findings indicating that belief in personal control is associated with positive outcomes of both mental health and physical health. According to researchers like Taylor, it appears that an exaggerated sense of personal control can be beneficial.
As with the better than average effect, however, determining whether one’s locus of control belief is illusory can be difficult to assess and can come up against philosophical challenges, not the least of which involves whether free will actually exists at all. (For more, see "The Neuroscience of Free Will and the Illusion of 'You.'") For example, it has been argued that illusions of control may be able to impact mental and physical health outcomes by leading to the promotion of healthy behaviors. But if that’s the case, then the beliefs aren’t really illusions at all, but examples of “positive thinking” that would be expected to be correlated with other self-reported measures of positive thinking definitional to the larger constructs of psychological and physical well-being alike.
What’s more clear is that helplessness and hopelessness are antitheses of personal control and are often core features of depression. Such helplessness has been hypothesized to result in excess secretion of stress hormones like cortisol, which could worsen symptoms of depression or physical illness, resulting in a downward spiral. Illusions of control might therefore provide a shield against stress, or so the theory goes.
That said, illusions of control can clearly become more harmful when they are more obviously inaccurate and within certain settings. Unwarranted belief in personal control might be helpful for someone coping with cancer, for example, but much less so for a compulsive gambler spending all night in front of a slot machine.
A few years ago, University of California, Berkeley psychologist Paul Piff performed a now well-publicized (but as-yet unpublished) experiment in which study subjects played a “rigged” game of Monopoly that gave disproportionate advantages (e.g., more money) to some players at the start of the game and as it progressed. At the game’s end, the advantaged winners proudly attributed their success to personal skill and superior strategy rather than advantage or even luck. Piff’s research suggests that illusions of control can result in unwarranted self-appraisals for people with inherent financial advantage, leading them to be less empathic toward those who are disadvantaged. Such people might, for example, be more likely to discount the ethical or practical benefits of real-world social programs like welfare or affirmative action. This conclusion suggests that while some illusions of control can result in higher self-ratings of individual happiness or mental health, they might also contribute to interpersonal disregard, with a harmful effect on society as a whole.
3. “The future will be great, especially for me.”
The third category of positive illusions involves overestimating the likelihood that good things will happen to us while underestimating bad outcomes. This view of the future through rose-colored glasses has come to be known as the illusion of “unrealistic optimism” or “optimism bias.” Unrealistic optimism accounts for why many of us believe that we might defy the odds to have a long and happy marriage, or win the Mega Millions lottery.
Like illusions of control, unrealistic optimism is thought to represent a kind of denial that can reduce stress and anxiety and allow us to devote energy to achieving goals. Just as people with depression tend to lack the illusion of control, they also tend to suffer from “depressive realism” in place of the optimism bias.
In the Devil’s Dictionary, Ambrose Bierce defined an optimist as “a proponent of the doctrine that black is white” and a cynic as “a blackguard whose faulty vision sees things as they are, not as they ought to be.” Recognizing the overlap between unrealistic optimism and hope, it’s easy to understand how seeing the world as an optimist instead of a cynic might be associated with mental well-being. Unrealistic optimism may exert a kind of placebo effect on mental health that reflects more about how we feel about the world than how it actually is or will be. (See "The Healing Power of Placebos: Fact or Fiction?” for a discussion of the illusory underpinnings of the placebo effect.)
Still, following Baumeister’s “too much of a good thing” theory of positive illusions, excessive optimism bias can result in a “planning fallacy” that can lead us to engage in dangerous behaviors like smoking, unprotected sex, or texting while driving, despite known risks. But University of Birmingham philosopher Lisa Bortolotti doesn’t think that the effects of positive illusions are determined by their magnitude of reality distortion, so much as the degree to which they promote positive behaviors. This sounds like something of a tautological answer to the question of what makes positive illusions positive, but its proposed mechanism harkens back to Taylor’s original premise that positive illusions may represent “the fuel that drives creativity, motivation, and high aspirations.”8 Bortolotti similarly suggests that certain types of positive illusions like unrealistic optimism are healthy because when “we are optimistic about how competent and efficacious we are, and about how desirable and attainable our goals are…we continue to cherish our goals and pursue them after setbacks.”9 She therefore links unrealistic optimism with locus of control, arguing that the former enhances the latter by supporting our sense of “agency” and promoting resilience when things don’t go as expected. Optimistically-biased illusions, even those that are significantly off-base, can end up being self-fulfilling, “becoming more and more realistic over time.” In contrast, illusions of invulnerability and depressive realism may lead us to give up and view things as hopeless.
Conclusion
In Werner Herzog’s documentary Grizzly Man, eponymous protagonist Timothy Treadwell is portrayed as someone whose self-superiority, illusions of control, and unrealistic optimism ultimately lead him to be done in by the very bears with whom he was trying to commune. Watching the film knowing the outcome from the start, Treadwell appears at best foolhardy and at worst, narcissistically delusional. But viewed from another perspective, Treadwell had managed to survive 12 previous seasons in the Alaskan wilderness, largely alone, before he and his girlfriend succumbed to his “vaulting ambition.” That was a remarkable achievement and one that garnered him some measure of fame before his demise.
While Bortolotti has defended irrational beliefs and even delusional thinking as potentially protective,10 Treadwell’s cautionary tale reminds us that there's often a fine line between misbeliefs that help and self-deception that harms. For every positive illusion, there are 10 other cognitive biases that are more likely to hurt us. How’s that for a depressing reality?
References
1. Taylor SE, Brown JD. Illusion and well-being: a social psychological perspective on mental health. Psychological Bulletin 1988; 103:193-210.
2. Brown JD. Understanding the better than average effect: motives (still matter). Personality and Social Psychology Bulletin 2012: 38:209-219.
3. Moore DA. Not so above average after all: when people believe they are worse than average and its implications for theories of bias in social comparison. Organizational Behavior and Human Decision Processes 2007; 102:42-58.
4. Colvin CR, Block J, Funder DC. Overly positive self-evaluations and personality: negative implications for mental health. Journal of Personality and Social Psychology 1995; 68:1152-1162.
5. Anderson C, Brion S, Moore DA, Kennedy JA. A status-enhancement account of over-confidence. Journal of Personality and Social Psychology 2012; 103:718-735.
6. Robins RW, Beer JS. Positive illusions about the self: short-term benefits and long-term costs. Journal of Personality and Social Psychology 2001; 80:340-352.
7. Baumeister RF. The optimal margin of illusion. Journal of Social and Clinical Psychology 1989; 8:176-189.
8. Taylor SE, Collins RL, Skokan LA, Aspinwall LG. Maintaining positive illusions in the face of negative information: getting the facts without letting them get to you. Journal of Social and Clinical Psychology 1989; 8:114-129.
9. Bortolotti L. Optimism, agency, and success. Ethical Theory and Moral Practice 2018; 21:521-535.
10. Gunn R, Bortolotti L. Can delusions play a protective role? Phenomenology and the Cognitive Sciences 2018; 17:813-833.
no guarantees
Reactive responses
Good and bad stuff happens. How we react either exacerbates or positively furthers the narrative. The best way to prepare for the future is the daily practice of self awareness. | https://rsrc2.psychologytoday.com/us/blog/psych-unseen/201811/3-healthy-lies-we-tell-ourselves |
Objective 1: Selected inbred mouse strains with phenotypic extremes in milk production will be used to: a) identify genomic variants along with intestinal and mammary-expressed genes that differentiate low and high milk production, and b) determine the extent to which genome-driven differences in milk production and mammary gene expression are directly mediated through host-dependent differences in the intestinal and/or mammary tissue microbiome. Subobjective 1A: Sequence the genomes of additional unsequenced strains from our original milk yield cohort and then use this completed lactation phenome genotype data to identify strain-specific private alleles and predict the functional consequences of these variants on genes with the potential to regulate traits defined in the lactation phenome dataset. Subobjective 1B: Combine the lactation phenome dataset with the expanded common variant data from Subobjective 1A to conduct an enhanced joint GWAS of SNPs, INDELs, and SVs, and to subsequently predict the functional consequences of the newly identified variants for lactation. Subobjective 1C: Using a complete 3x3 diallel cross of QSi3, QSi5, and PL/J, determine the contribution of strain dosage, heterosis, parent-of-origin effects, and epistasis to milk production and composition and to mammary gland development during early lactation, and identify mammary epithelial cell and intestinal eGenes on the basis of allelic imbalance. Subobjective 1D: Integrate the set of eGenes discovered in 1C with the set of private and common variants discovered in 1A and 1B, and employ network modeling to predict and test those variant-eGene pairs that are most likely to cause the variation in the lactation phenome traits. Subobjective 1E: Analyze the fecal microbiota along with prolactin and oxytocin in samples obtained from the diallel conducted under Subobjective 1C to determine the contribution of strain dosage, heterosis, parent-of-origin effects, and epistasis to the diversity and richness of the intestinal microbiota, to the abundance of specific taxa, and to neuroendocrine function in mouse strains with a genetic propensity for high or low milk yield.
Approach
Genetic background is known to influence variation in milk production; however, environmental factors also play a role. Advances in high-throughput DNA sequencing technologies have revolutionized the way in which the microbial world is viewed and have led to the concept that the microbiome is a major regulator of normal development and health. The microbiome is regulated by diet but is also under the control of the host genome. In this regard, the full number of host genetic variants associated with lactation-related traits remains to be determined. Differences in milk production are driven by changes in gene expression within organs important to milk synthesis. Additionally, the intestinal microbiome is controlled by the host genome but can directly influence gene expression within the host. We aim to understand how variations in the maternal genome interact with the microbiome to determine lactation success. Whole-genome sequence data from select mouse strains will be used to identify genetic variants that are unique to high or low milk production. These newly identified variants will be functionally linked to milk production and composition, and to lactation-induced intestinal and mammary gene expression, through a specific RNA sequencing test known as allelic imbalance. Strain- and allele-dependent differences in fecal 16S rRNA sequencing reads will associate the variants with the intestinal microbiome. Lastly, maternal microbiome seeding through neonatal cross-fostering will establish the ability of the intestinal microbiome to override the effects of genetic background on lactation-dependent gene expression and milk production.
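As an illustrative aside (not part of the project plan), the allelic imbalance test mentioned above amounts, at its simplest, to asking whether RNA sequencing reads from an F1 animal support the two parental alleles of a gene equally. A minimal sketch of such a test is shown below; the gene names and read counts are hypothetical, and a real analysis would also correct for mapping bias, overdispersion, and multiple testing.

```python
# Minimal, hypothetical sketch of an allelic-imbalance test on F1 RNA-seq data.
# Counts are invented for illustration only.
from scipy.stats import binomtest

# (maternal_allele_reads, paternal_allele_reads) per gene in mammary epithelial cells
allele_counts = {
    "Csn2":  (180, 95),    # hypothetical beta-casein counts
    "Lalba": (150, 142),   # hypothetical alpha-lactalbumin counts
    "Xdh":   (60,  210),   # hypothetical xanthine dehydrogenase counts
}

for gene, (mat, pat) in allele_counts.items():
    total = mat + pat
    result = binomtest(mat, n=total, p=0.5)   # H0: balanced expression of the two alleles
    ratio = mat / total
    flag = "imbalanced" if result.pvalue < 0.05 else "balanced"
    print(f"{gene}: maternal fraction = {ratio:.2f}, p = {result.pvalue:.3g} ({flag})")
```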
Progress Report
This project was recently certified, and work thus far has focused on obtaining whole-genome sequence data from additional lactation phenome mouse strains. We have obtained samples from the first of the three strains we plan to sequence and sent them for sequencing. We have also made arrangements with our collaborator at the University of Sydney to send us a specific mouse strain (QSi3) for our studies.
Window alcoves are typically referred to as bays or bay windows. These are windows that extend beyond the walls of the home, creating a niche or alcove in which furniture, plants, or decorations can be placed.
Bay windows can add light, extra seating and give a room a larger, open feeling. They are especially popular in older homes and apartments, as they are often a feature of the original architecture. Bay windows can also provide a great view of the outside, and can be an enjoyable spot for reading, resting, or other activities.
Contents
- What do you call a sitting area in the window?
- What is another name for a window seat?
- What is a banquette in a house?
- How do you make a Windows Nook?
- Is a window seat in plane?
- How high is a window seat?
- Can you sit in a bay window?
- How do you cover windows with decorations?
- How can I make my windows look good without curtains?
- What can I put up instead of curtains?
- Is it OK not to have curtains?
- What should I put on my windows?
- Do you have to put curtains on all windows in a room?
- Are curtains out of style?
- How much does it cost to add a window seat?
- How much weight can a floating bench hold?
What do you call a sitting area in the window?
A sitting area in the window is often referred to as a window seat or window nook. It is a comfortable spot for you to relax and take in the scenery outdoors. A window seat typically consists of a cushioned seat, often in the shape of a bench, placed along one of the walls of a room below a window.
The seat might be built-in or a stand-alone piece of furniture. It can be further paired with other pieces of furniture such as small coffee tables, chaise lounges, arm chairs, bookcases, or other decorative pieces.
A window seat is the perfect addition to any home and can be an inviting reading corner for the whole family to enjoy.
What is another name for a window seat?
A window seat is also commonly referred to as a bay window, bay seat, or alcove. A window seat is typically a built-in or designated area along a wall within a room that features a windowsill or sill-like recess that’s framed by the walls around it.
Window seats provide additional seating and storage areas, making them popular fixtures in many homes or other buildings that feature large windows.
What is a banquette in a house?
A banquette is a type of seating that is typically used as an additional seating option in a house. It is often positioned along walls in dining rooms, kitchens, or other living spaces, and is typically made up of a bench or an upholstered seat, with a backrest and table in front of it.
This provides both seating and a place to put items such as books, magazines, plates, drinks, and more. Banquettes can either be built in place as part of the room’s structure, or they can be portable and freestanding.
They are available in a variety of sizes, materials, and designs, which can range from traditional to contemporary.
How do you make a Windows Nook?
Creating a Windows Nook is a great way to take your computing experience to the next level. The Nook provides a personalised and secure computing environment, so you will want to ensure that you are taking all the necessary steps to ensure the best outcome.
The first step is to purchase a Nook. Once you have obtained your device, you will then be able to download and install the Nook operating system. This will allow access to the Nook Store where you can purchase apps and other content to extend the functionality of your Nook.
Once you have the basic hardware and operating system set up, you can begin customising your Nook with additional software. There are a variety of third party applications available, such as productivity suites, games, and more.
Many of these require the installation of additional drivers, which can be done by downloading the appropriate package from the Nook support website.
You can further personalise your Nook with custom backgrounds and themes, as well as a variety of widgets to get quick access to common tasks or functions. You can also set up syncing with your other devices so that you can keep all of your data in one place.
Finally, the Nook includes many of the same settings and customization options as a traditional Windows PC, so you can tweak the look and feel to suit your needs and preferences.
With a bit of setup and configuration, you can turn your Nook into a personalised, secure and powerful computing device.
Is a window seat in plane?
Yes, a window seat in a plane is a great choice for a variety of reasons. One of the main advantages of having a window seat is the views of the outside world. Depending on where you’re flying, you may see stunning mountain peaks, vast deserts, or glittering cityscapes.
Additionally, the window seat provides a feeling of increased safety and security because it feels like you’re cocooned inside your own little bubble. Sitting next to the window also means that fellow passengers are not squeezing past your seat on their way up and down the aisle.
Another major benefit of a window seat is that it can provide an extra buffer from smells and noises from other passengers, allowing you to have a more comfortable and peaceful flight. Additionally, the seats are usually a little wider than aisle seats, giving you more room and comfort during the flight.
How high is a window seat?
Window seats vary widely in height and are typically customized for specific window sizes. For a standard-size window that is 36 inches wide, the seat might be 12 to 18 inches in height, depending on the desired look and feel of the room.
In some cases, the window seat can be built higher than this to accommodate a larger window or a more comfortable sitting position. In addition, you may need to adjust the height of the window seat based on your personal preference, such as if you want to be able to look outside from the window seat or choose a seat height that’s easier for you to get in and out of.
Ultimately, it’s important to find the right balance between comfort and practicality when deciding on a window seat height.
Can you sit in a bay window?
Yes, you can sit in a bay window. A bay window is a window space that projects outward from the main walls of a building, forming a bay in a room. Many homes have bay windows, which are often in a living room, bedroom, or dining area.
Sitting in a bay window can be a cozy spot to relax, read a book, or take in the natural beauty outside. Before sitting in a bay window, it is important to take a few safety precautions. You should make sure that the windows are securely locked and, if they open, ascertain that they cannot be opened too far.
If your windows do open, place pillows and blankets on the ledge to ensure that you are comfortable and secure. It is also important to use common sense and take into account your location and potential security risks.
For instance, if you live in a high-traffic or unsafe area, you may want to consider your safety before settling into a bay window.
How do you cover windows with decorations?
Covering windows with decorations can be a fun and easy way to add a personal touch to any room. Depending on the look you want to achieve, there are many different options. You can hang curtains or blinds to provide privacy and control the amount of light coming in, or you can opt for something more decorative like a valance or swag.
If you’re looking for something more permanent, you can frame the window with molding to make a prominent feature of it, or you can stencil it with a unique design. Another option is window film, which makes it easy to add patterned, stained-glass, or mirrored effects to your windows.
In addition to these options, there are plenty of wall decals, stickers, window clings, and other light decorations that can be used to brighten up the look of your windows. With all these choices, there are plenty of ways to bring the window in line with your style, and make it a focal point in the room.
How can I make my windows look good without curtains?
Window treatments other than curtains can still add a lot of style to a room, so here are some ideas for how to make your windows look good without them.
1. Hang decorative window shades, such as roman shades, roller shades, or cellular shades. These window treatments offer light control, insulation, and lots of style.
2. Install shutters to the interior or exterior of your windows. Shutters are not only perfect for traditional, rustic, and farmhouse-style homes, but they can also add a touch of creativity and interest to a room.
3. Install decorative window film to a window or a group of windows. This type of window treatment provides privacy while still allowing light to enter the room. There are many decorative patterns and styles to choose from.
4. Hang a window valance. Valances come in many different materials, styles, and colors that can add a touch of sophistication and glamor to a room.
5. Install window decals. Window decals are perfect for adding a special touch to a window without actually covering it up. This is a great choice for kids’ rooms or any other room that you would like to personalize and make unique.
There are lots of great ways to make your windows look good without the need for curtains. With one of these five ideas you will be sure to add beauty and style to your windows.
What can I put up instead of curtains?
Rather than curtains, there are several alternatives that can be used to achieve the same look and feel of a window with curtains.
One option is to hang shades or shutters, which not only add a stylish touch to the room, but also provide privacy and light control. Cellular or pleated shades are great for insulating the room, and most can be adjusted to allow different levels of light into it.
Wooden shutters come in a variety of styles and materials to match the look of your home.
Another option is to use blinds rather than curtains. Horizontal or vertical blinds are a popular choice for rooms that need good light control. Not only do blinds offer more coverage against sunlight and privacy, but their texture and style can create a modern, chic look.
Sheers are a great option as well. Sheer curtains come in a range of transparency and thickness and work best in rooms that have plenty of natural light but still require a certain degree of privacy.
Sheers are also great for adding a bit of style and softness to the window while still allowing light to filter into the room.
Lastly, you can always opt for privacy film. Privacy window film is made from a vinyl material that is applied directly to the window. It reduces visibility from outside while allowing light to come in, and it comes in various design options that will work with your existing decor.
Is it OK not to have curtains?
Whether or not it is “OK” to not have curtains is ultimately up to the homeowner and their level of comfort with having their windows open for viewing. In some locations, it might be acceptable to leave windows uncovered because privacy is less of a concern, such as when no neighbors or passersby are able to view inside, or depending on the height of the window and the local ordinances.
While going without curtains allows in a great deal of natural light, there may be the potential for privacy concerns, depending on location. In some cases, neighbors or passersby may be able to view into the home, and it is important to consider this when deciding whether or not to have curtains.
It is also important to take into consideration that curtains can offer some insulation benefits in terms of protecting the inside of a home from hot and cold outside temperatures. They can also provide additional darkness at night, helping to block out outside light, and create better sleeping conditions.
The choice ultimately comes down to personal preference, and taking into account any potential risks or benefits. Ultimately, it is up to the homeowner to decide what works best for their needs and wants when it comes to whether or not to have curtains.
What should I put on my windows?
Depending on the purposes and aesthetic you have in mind, you could choose between curtains and blinds, as well as different types of window treatments.
Curtains are often used for a classic, traditional look, and come in various materials, patterns, and colors. Light-filtering curtains can be used to reduce glare or block out light and provide privacy.
Blackout curtains offer total light blockage, making them suitable for bedrooms or for darkening a room for special events. Sheer curtains are semi-transparent and offer privacy as well as light filtration.
Blinds are a versatile window treatment, available in materials like wood, faux wood, aluminum, fabric, and vinyl. Venetian blinds are widely used and come in a range of sizes and styles. They can be tilted to allow light to enter while maintaining privacy.
Vertical blinds are mounted to a track and are best suited for larger windows. Roller and solar shades are available in a variety of colors and fabric textures and have the added convenience of being able to easily roll up and out of the way when not in use.
Window film is another option that can be used to enhance privacy, reduce glare, block UV rays, and reduce energy costs. It is available in a range of styles and textures, including color-tinted, mirrored, textured, and patterned.
You can also choose from various decorative window treatments like shutters and cornices. Shutters come in hinged and sliding varieties and are usually made from wood, aluminum, or vinyl. They are great for adding a classic look and providing privacy.
Cornices are a type of decorative window cover that adds a soft shape to the top of the window and can be used in conjunction with curtains, shades, or blinds.
No matter what type of window treatment you’re looking for, there are a variety of options available to fit your needs and style.
Do you have to put curtains on all windows in a room?
No, you don’t have to put curtains on all the windows in a room. It actually depends on a few factors such as personal preference and the room’s purpose. If you’re concerned about the amount of light coming into the room or need to ensure some degree of privacy, then adding curtains on all windows would be beneficial.
On the other hand, if you want to create a bright and airy atmosphere, then leaving some of the windows curtain-free might be a better option. It all comes down to what look and effect you are aiming for.
Additionally, if the window looks over a nice view, such as a garden or park, then curtains may not be necessary.
Are curtains out of style?
No, curtains are certainly not out of style. Curtains are timeless home accents that can bring warmth, texture, and personality to any decor. That’s why curtains continue to be popular among homeowners who are looking to elevate the look of their living space.
They can add a certain style, be it a traditional, modern or even eclectic look. Furthermore, curtains can help with achieving a unified theme throughout the space and they can also act as a focal point.
Curtains are also highly versatile, ranging from sheer to blackout and from lightweight to heavy-duty fabric. Additionally, curtains provide much needed privacy, and help control sunlight to rooms such as bedrooms and living rooms.
Curtains also offer a variety of colors and patterns that can tie into the color scheme of your interior design. For all of these reasons, curtains are definitely not out of style, and can bring a sense of sophistication to any room.
How much does it cost to add a window seat?
The cost of adding a window seat depends on a variety of factors such as the size of the window, the material and labor costs, and any custom features such as built-in storage or access doors. On average, a basic window seat without any custom features might cost between $400 and $1000, depending on the size of the window, materials used, and labor costs.
If the window seat includes custom features like storage or access doors, the cost could be anywhere from $1,000 to $3,000. The most economical option would be to build a freestanding window seat, as this would require fewer materials and labor hours.
In this case, the total cost might be in the range of $500 to $1,500.
How much weight can a floating bench hold?
The amount of weight a floating bench can support depends on a few factors such as the size of the bench and the type of material used for its construction. Generally, smaller benches will be able to support less weight than larger benches, and plastic or wood constructions will hold less weight than metal constructions.
Additionally, the type of water the bench is placed into can also affect how much weight it can support – for example, a bench placed into salt water will often be able to support less weight than one placed into fresh water.
In general, a small wooden or plastic floating bench should be able to support up to 250 lbs, while a larger metal construction can often hold up to 600 lbs. However, it is important to remember that in order to ensure adequate safety, the overall weight carried on the bench should not exceed the maximum capacity of the bench itself.
This Data Processing Policy sets out how CL Consortium Limited (”we”, “our”, “us”, “the Company”) handle the Personal Data of our customers.
This Data Processing Policy applies to all Personal Data we Process regardless of the media on which that data is stored or whether it relates to past or present customers, clients or website users.
This Data Processing Policy applies to all Company Customers (”you”, “your”).
We recognise that the correct and lawful treatment of Personal Data will maintain confidence in the Company and will provide for successful business operations. Protecting the confidentiality and integrity of Personal Data is a critical responsibility that we take seriously at all times.
Please contact the DPO with any questions about the operation of this Data Processing Policy or the GDPR or if you have any concerns that this Data Processing Policy is not being or has not been followed.
(e) Not kept in a form which permits identification of Customers for longer than is necessary for the purposes for which the data is Processed (Storage Limitation).
(h) Made available to Customers and Customers are allowed to exercise certain rights in relation to their Personal Data (Customers Rights and Requests).
Personal data must be Processed lawfully, fairly and in a transparent manner in relation to the Customer.
We will only collect, Process and share Personal Data fairly and lawfully and for specified purposes. The GDPR restricts our actions regarding Personal Data to specified lawful purposes. These restrictions are not intended to prevent Processing, but ensure that we Process Personal Data fairly and without adversely affecting the Customer.
(e) to pursue our legitimate interests for purposes where they are not overridden by the interests or fundamental rights and freedoms of Customers. The purposes for which we Process Personal Data for legitimate interests need to be set out in applicable Privacy Notices.
Our Controllers will only process Personal Data on the basis of one or more of the lawful bases set out in the GDPR, which include Consent.
A Customer consents to Processing of their Personal Data if they indicate agreement clearly either by a statement or positive action to the Processing. Consent requires affirmative action so silence, pre-ticked boxes or inactivity are unlikely to be sufficient. If Consent is given in a document which deals with other matters, then the Consent must be kept separate from those other matters.
You are easily able to withdraw Consent to Processing at any time and withdrawal will be promptly honoured. Consent may need to be refreshed if we intend to Process Personal Data for a different and incompatible purpose which was not disclosed when you first consented.
Unless we can rely on another legal basis of Processing, Explicit Consent is usually required for Processing Special Categories of Personal Data and Criminal Convictions Data, for Automated Decision-Making and for cross border data transfers. Usually we will be relying on another legal basis (and not require Explicit Consent) to Process most types of Special Categories of Personal Data and Criminal Convictions Data. Where Explicit Consent is required, we will issue a Privacy Notice to you to capture Explicit Consent.
We will need to evidence Consent captured and will keep records of all Consents in accordance with Related Policies and Privacy Guidelines so that the Company can demonstrate compliance with Consent requirements.
The GDPR requires Data Controllers to provide detailed, specific information to you depending on whether the information was collected directly from you or from elsewhere. Such information must be provided through appropriate Privacy Notices which must be concise, transparent, intelligible, easily accessible, and in clear and plain language so that you can easily understand them.
Whenever we collect Personal Data directly from you, we must provide you with all the information required by the GDPR including the identity of the Controller and DPO, how and why we will use, Process, disclose, protect and retain that Personal Data through a Privacy Notice which must be presented when the Customer first provides the Personal Data.
When Personal Data is collected indirectly (for example, from a third party or publicly available source), we will provide you with all the information required by the GDPR as soon as possible after collecting/receiving the data. We will also check that the Personal Data was collected by the third party in accordance with the GDPR and on a basis which contemplates our proposed Processing of that Personal Data.
Personal Data will be collected only for specified, explicit and legitimate purposes. It will not be further Processed in any manner incompatible with those purposes.
We cannot use Personal Data for new, different or incompatible purposes from that disclosed when it was first obtained unless we have informed you of the new purposes and you have Consented where necessary.
We will only Process Personal Data when performing our duties requires it, and will not Process Personal Data for any reason unrelated to those duties.
We will only collect Personal Data that we require for those duties and will not collect excessive data. We will ensure any Personal Data collected is adequate and relevant for the intended purposes.
We will ensure that when Personal Data is no longer needed for specified purposes, it is deleted or anonymised in accordance with the Company’s data retention guidelines.
We will ensure that the Personal Data we use and hold is accurate, complete, kept up to date and relevant to the purpose for which we collected it. We will check the accuracy of any Personal Data at the point of collection and at regular intervals afterwards. We will take all reasonable steps to destroy or amend inaccurate or out-of-date Personal Data.
Personal Data will not be kept in an identifiable form for longer than is necessary for the purposes for which the data is processed.
We will not keep Personal Data in a form which permits the identification of you for longer than needed for the legitimate business purpose or purposes for which we originally collected it including for the purpose of satisfying any legal, accounting or reporting requirements.
We will take all reasonable steps to destroy or erase from our systems all Personal Data that we no longer require in accordance with all the Company’s applicable records retention schedules and policies. This includes requiring third parties to delete such data where applicable.
We will ensure that you are informed of the period for which data is stored and how that period is determined in any applicable Privacy Notice.
Personal Data will be secured by appropriate technical and organisational measures against unauthorised or unlawful Processing, and against accidental loss, destruction or damage.
We will develop, implement and maintain safeguards appropriate to our size, scope and business, our available resources, the amount of Personal Data that we own or maintain on behalf of others and identified risks (including use of encryption and Pseudonymisation where applicable). We will regularly evaluate and test the effectiveness of those safeguards to ensure security of our Processing of Personal Data. We are responsible for protecting the Personal Data we hold. We will implement reasonable and appropriate security measures against unlawful or unauthorised Processing of Personal Data and against the accidental loss of, or damage to, Personal Data. We will exercise particular care in protecting Special Categories of Personal Data and Criminal Convictions Data from loss and unauthorised access, use or disclosure.
We will follow all procedures and technologies we put in place to maintain the security of all Personal Data from the point of collection to the point of destruction. We may only transfer Personal Data to third-party service providers who agree to comply with the required policies and procedures and who agree to put adequate measures in place, as requested.
The GDPR requires Controllers to notify any Personal Data Breach to the applicable regulator and, in certain instances, the Customer.
We have put in place procedures to deal with any suspected Personal Data Breach and will notify you or any applicable regulator where we are legally required to do so.
The GDPR restricts data transfers to countries outside the EEA in order to ensure that the level of data protection afforded to individuals by the GDPR is not undermined. We transfer Personal Data originating in one country across borders when we transmit, send, view or access that data in or to a different country.
(d) the transfer is necessary for one of the other reasons set out in the GDPR including the performance of a contract between us and the Customer, reasons of public interest, to establish, exercise or defend legal claims or to protect the vital interests of the Customer where the Customer is physically or legally incapable of giving Consent and, in some limited cases, for our legitimate interest.
We will verify the identity of an individual requesting data under any of the rights listed above (and will not allow third parties to persuade us into disclosing Personal Data without proper authorisation).
12.1 The Controller will implement appropriate technical and organisational measures in an effective manner, to ensure compliance with data protection principles. The Controller is responsible for, and must be able to demonstrate, compliance with the data protection principles.
We will keep and maintain accurate corporate records reflecting our Processing including records of Customers’ Consents and procedures for obtaining Consents.
These records should include, at a minimum, the name and contact details of the Controller and the DPO, clear descriptions of the Personal Data types, Customer types, Processing activities, Processing purposes, third-party recipients of the Personal Data, Personal Data storage locations, Personal Data transfers, the Personal Data’s retention period and a description of the security measures in place. In order to create such records, data maps should be created which should include the detail set out above together with appropriate data flows.
We will undergo all mandatory data privacy related training and ensure our team undergo similar mandatory training.
We will regularly review all the systems and processes under our control to ensure they comply with this Data Processing Policy, and check that adequate governance controls and resources are in place to ensure proper use and protection of Personal Data.
(d) the risks of varying likelihood and severity for rights and freedoms of Customers posed by the Processing.
If a decision is to be based solely on Automated Processing (including profiling), then Customers must be informed when we first communicate with them of their right to object. This right must be explicitly brought to their attention and presented clearly and separately from other information. Further, suitable measures must be put in place to safeguard the Customer’s rights and freedoms and legitimate interests.
We must also inform you of the logic involved in the decision making or profiling, the significance and the envisaged consequences, and give you the right to request human intervention, express your point of view or challenge the decision.
For example, a Customer’s prior consent is required for electronic direct marketing (for example, by email, text or automated calls). The limited exception for existing customers known as “soft opt in” allows organisations to send marketing texts or emails if they have obtained contact details in the course of a sale to that person, they are marketing similar products or services, and they gave the person an opportunity to opt out of marketing when first collecting the details and in every subsequent message.
The right to object to direct marketing must be explicitly offered to the Customer in an intelligible manner so that it is clearly distinguishable from other information.
A Customer’s objection to direct marketing must be promptly honoured. If a customer opts out at any time, their details should be suppressed as soon as possible. Suppression involves retaining just enough information to ensure that marketing preferences are respected in the future.
We may only share the Personal Data we hold with another employee, agent or representative of our group (which includes our subsidiaries and our ultimate holding company along with its subsidiaries) if the recipient has a job-related need to know the information and the transfer complies with any applicable cross-border transfer restrictions.
We reserve the right to change this Data Processing Policy at any time so please check back regularly to obtain the latest copy of this Data Processing Policy. We last revised this Data Processing Policy on 24th May 2018.
This Data Processing Policy does not override any applicable national data privacy laws and regulations in countries where the Company operates.
Company name: CL Consortium Limited.
Consent: agreement which must be freely given, specific, informed and be an unambiguous indication of the Customer’s wishes by which they, by a statement or by a clear positive action, signifies agreement to the Processing of Personal Data relating to them.
Customer: a living, identified or identifiable individual about whom we hold Personal Data. Customers may be nationals or residents of any country and may have legal rights regarding their Personal Data.
Personal Data: any information identifying a Customer or information relating to a Customer that we can identify (directly or indirectly) from that data alone or in combination with other identifiers we possess or can reasonably access. Personal Data includes Special Categories of Personal Data and Pseudonymised Personal Data but excludes anonymous data or data that has had the identity of an individual permanently removed. Personal data can be factual (for example, a name, email address, location or date of birth) or an opinion about that person's actions or behaviour.
Q:
PHP array functions, difference and merge
I have 2 arrays: colors and favorite colors.
I want to output an array with all the colors, but with the favorite colors on top, keeping the same sort order.
My example is working fine, but I wanted to know if this is the correct (fastest) way to do this.
Thank you
$colors_arr = array("yellow","orange","red","green","blue","purple");
print "<pre>Colors: ";
print_r($colors_arr);
print "</pre>";
$favorite_colors_arr = array("green","blue");
print "<pre>Favorite Colors: ";
print_r($favorite_colors_arr);
print "</pre>";
$normal_colors_arr = array_diff($colors_arr, $favorite_colors_arr);
print "<pre>Colors which are not favorites: ";
print_r($normal_colors_arr);
print "</pre>";
// $sorted_colors_arr = $favorite_colors_arr + $normal_colors_arr;
$sorted_colors_arr = array_merge($favorite_colors_arr, $normal_colors_arr);
print "<pre>All Colors with favorites first: ";
print_r($sorted_colors_arr);
print "</pre>";
output:
Colors: Array
(
[0] => yellow
[1] => orange
[2] => red
[3] => green
[4] => blue
[5] => purple
)
Favorite Colors: Array
(
[0] => green
[1] => blue
)
Colors which are not favorites: Array
(
[0] => yellow
[1] => orange
[2] => red
[5] => purple
)
All Colors with favorites first: Array
(
[0] => green
[1] => blue
[2] => yellow
[3] => orange
[4] => red
[5] => purple
)
A:
You could possibly shorten it to
$sorted_colors_arr = array_unique(array_merge($favorite_colors_arr, $colors_arr));
Note that array_unique() keeps the first occurrence of each value but preserves its key from the merged array, so wrap the expression in array_values() if you need the sequential indices shown in your expected output.
But the Bank of Korea (BOK), which cut its rate by 50 basis points last year, repeated its view from last month that the domestic economy will continue to recover though uncertainties surrounding the growth path have increased.
The BOK also repeated that it is closely monitoring external risks, including changes in the monetary policies of major countries, the financial and economic conditions in China, movements in capital flows, geopolitical risks and the rise in household debt.
In January the BOK lowered its 2016 growth forecast to 3.0 percent from October's forecast of 3.2 percent but at that point the central bank governor said the cut in the forecast did not warrant an easing of monetary policy.
South Korea's economy grew by an annual rate of 3.0 percent in the fourth quarter of 2015, up from 2.7 percent in the third quarter and the BOK estimated full year growth of 2.6 percent.
Consumer price inflation in South Korea eased to 0.8 percent in January from 1.3 percent in December as the impact of higher cigarette prices dropped out of the comparison. Core inflation, which excludes agricultural and petroleum products, fell to 1.7 percent from 2.4 percent.
The BOK forecasts 2016 inflation of 1.4 percent, down from its previous forecast of 1.7 percent, and below its target of 2.0 percent.
The Bank of Korea issued the following statement:
"The Monetary Policy Board of the Bank of Korea decided today to leave the Base Rate unchanged at 1.50% for the intermeeting period.
Based on currently available information the Board considers that the trends of economic recovery in the US and the euro area have weakened somewhat. Economic growth in emerging market countries including China has meanwhile continued to slow. The Board forecasts that the global economy will maintain its recovery going forward, albeit at a moderate pace, centering around advanced economies such as the US, but judges that it will be affected by factors such as financial and economic conditions in China and other emerging market countries, international oil price movements, and global financial market volatility.
Looking at the Korean economy, the trend of decline in exports has expanded and the recovery of domestic demand activities such as consumption has also shown signs of weakening somewhat, while the sentiments of economic agents have been sluggish. On the employment front, as the trend of increase in the number of persons employed expanded in December, the employment-to-population ratio rose compared to that in December the year before while the unemployment rate fell. The Board forecasts that the domestic economy will continue its recovery going forward, centering around domestic demand activities, but in view of external economic conditions judges the uncertainties surrounding the growth path to have increased.
Consumer price inflation fell from 1.3% the month before to 0.8% in January, owing chiefly to the disappearance of the effect of the cigarette price hike, and core inflation excluding agricultural and petroleum product prices also fell to 1.7%, from 2.4% in December. Looking ahead the Board forecasts that consumer price inflation will continue at a low level, due mainly to the declines in international oil prices. In the housing market, the upward trends of sales and leasehold deposit prices slowed in both Seoul and its surrounding areas and the rest of the country.
In the domestic financial markets, influenced by global stock market unrest and by foreigners’ continuing net sales of domestic securities, stock prices have fallen and the Korean won has depreciated against the US dollar. The won has depreciated even more against the Japanese yen than the US dollar, on the strengthening of the yen due to investor preference for safe assets. Long-term market interest rates have fallen, in response mainly to declines in interest rates in major countries and to the movements of domestic economic and price indicators. Bank household lending has sustained a trend of increase at a level substantially exceeding that of recent years, led by mortgage loans.
Looking ahead, while working to sustain the recovery of economic growth, the Board will conduct monetary policy so as to maintain price stability over a medium-term horizon, and pay attention to financial stability. In this process it will closely monitor external risk factors such as any changes in the monetary policies of major countries or in financial and economic conditions in China, the movements of capital flows, geopolitical risks, and the trend of increase in household debt."
China astronauts return after 90 days aboard space station
A trio of Chinese astronauts returned to Earth on Friday after a 90-day stay aboard their nation's first space station, in China's longest crewed mission yet.
Nie Haisheng, Liu Boming, and Tang Hongbo landed in the Shenzhou-12 spaceship just after 1:30 pm (0530 GMT) after having undocked from the space station on Thursday morning.
Spacecraft landed in the Gobi Desert
State broadcaster CCTV showed footage of the spacecraft parachuting to land in the Gobi Desert where it was met by helicopters and off-road vehicles. Minutes later, a crew of technicians began opening the hatch of the capsule, which appeared undamaged.
Details
Astronauts went on two spacewalks, deployed 10-meter mechanical arm
After launching on June 17, mission commander Nie and astronauts Liu and Tang went on two spacewalks, deployed a 10-meter mechanical arm, and had a video call with Communist Party leader Xi Jinping.
Notably, the astronauts also carried out a range of experiments in the space station and sent stunning images of Earth, according to Space.com.
Further details
China has not announced launch date of Shenzhou-13 yet
While few details have been made public by China's military, which runs the space program, astronaut trios are expected to be brought on 90-day missions to the station over the next two years to make it fully functional.
China has not announced the names of the next set of astronauts nor the launch date of Shenzhou-13.
Further details
China has sent 14 astronauts into space since 2003
China has sent 14 astronauts into space since 2003 when it became only the third country after the former Soviet Union and the United States to do so on its own.
China embarked on its own space station program after being excluded from the International Space Station, largely due to the objections by the USA to the Chinese space program's secrecy and military backing.
Further details
Chinese space station is expected to be operational for 10yrs
However, the Chinese space program's chief designer Zhou Jianping had said that foreign astronauts will enter the Chinese space station one day.
The Chinese space station is expected to remain operational for at least 10 years. Meanwhile, the International Space Station is due for retirement in 2024.
Notably, the mission was reportedly viewed as a milestone to mark the 100th anniversary of the Communist Party.
Further details
China has planned 11 space missions for 2021 and 2022
To recall, in April, China had also launched a module of the space station which served as the living quarters for the Shenzhou-12 astronauts.
This was followed by the launch of two cargo spacecraft carrying supplies for them.
Meanwhile, China has planned 11 space missions for 2021 and 2022 to complete the construction of the space station. This includes plans for four manned missions.
Background
==========
Generalisability (external validity) is the extent to which the results of a study can be applied to other populations. The many threats to the external validity of a study\'s results include choice of sampling frame, representativeness of the initial sample, and attrition. These issues were discussed in a previous paper \[[@B1]\], and reporting methods were proposed that would enable the reader to assess - at least qualitatively - the generalisability of results from a cohort or longitudinal study. These reporting methods have since been taken further with the publication of the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) initiative \[[@B2]\].
A common method of assessing generalisability is to compare demographics, health characteristics, and health service variables between a study sample and the population of interest at baseline. Over time, this comparison should be repeated to see if biases are changing. This process relies on data from people who enrol and remain in the cohort, people who enrol and later drop out (using data collected before withdrawal), people who were invited but never participated, and the population of interest (i.e., all people who might have been selected for inclusion in the study). Repeated assessments of generalisability are particularly appropriate in longitudinal cohort studies where drop out is potentially not a random phenomenon (e.g., related to characteristics of people who cease to participate).
Among cohorts of older people, drop out is frequently due to death. This potential source of bias is different from biases that lead to other types of attrition among participants who are still alive.
Relative survival is the ratio of survival that is observed in the study sample in comparison to that of the population from which it was drawn \[[@B3]\]. This method, which was originally developed to measure survival of cancer patients, has not been used previously to assess bias due to deaths in cohort studies. The main purpose of this paper is to explain and illustrate relative survival as a tool for assessing generalisability of results from a cohort of older people among whom death is a potential threat to generalisability.
We illustrate the method using data from women born in 1921-26 who first participated in the Australian Longitudinal Study on Women\'s Health (ALSWH) in 1996. We also consider possible reasons for differences in relative survival using data from the ALSWH, data collected from the reference population in the national five yearly census, and periodic national health surveys.
Materials and methods
=====================
Participants
------------
The ALSWH is a longitudinal study of factors affecting the health and well-being of three national cohorts of women who were born in 1973-78, 1946-51, and 1921-26. The women were selected randomly from the national Medicare health insurance database (which includes all citizens and permanent residents of Australia), with intentional over-sampling of women living in rural and remote areas. In 1996, more than 40 000 women responded to the initial survey; they were reasonably representative of the general population of Australian women in each age group, although compared with data from the 1996 Australian Census there was over-representation of women who were born in Australia, employed and had a university education \[[@B4]\]. More details about the study can be found at <http://www.alswh.org.au>. Ethical clearance for the study was obtained from the Universities of Newcastle and Queensland.
This paper focuses on the 12 432 women in the 1921-26 cohort who participated in the baseline survey in 1996. Although these women had a nominal age range of 70 to 74 years when the sample was selected, 5% of women were aged 75 years. Due to the small number of participants from the Northern Territory, this jurisdiction is not included in the State/Territory comparisons.
Mortality data
--------------
Using personal identifying information provided by the participants, vital status was ascertained by probabilistic linkage to the National Death Index (NDI) for all participants from baseline (1996) to 31 October 2008 \[[@B5]\]. The expected mortality of the study population was ascertained using annual life tables produced by the Australian Bureau of Statistics (ABS) for each State and Territory of Australia \[[@B6]\].
National Health Survey
----------------------
The Australian National Health Survey (NHS) is conducted periodically by trained interviewers from the ABS. In addition to demographic information, the survey provides detailed information about the health status of Australians; their use of health services, facilities, and medications; and health-related aspects of their lifestyle. It consists of a representative sample of residents of private and non-private dwellings in all States and Territories, but excludes special dwellings such as hospitals, institutions, and nursing homes.
The 1995 NHS was conducted during the 12-month period of January 1995 to January 1996. It comprised about 23 800 households, representing approximately 57 600 persons \[[@B7]\]. A total of 894 women aged 70 to 74 years (i.e., women born in 1921-25) participated in the 1995 NHS. Using unit record data supplied by the ABS, these women were compared against the ALSWH cohort participants at the first survey for selected characteristics \[[@B8]\].
Relative survival analysis
--------------------------
Relative survival - the ratio of the survival observed in the study sample to the survival it would be expected to experience - can be calculated from the life table of the population from which the sample was drawn \[[@B3]\]. In this instance, the study sample comprised the ALSWH participants born in 1921-26, and the reference population was all Australian women of the same age and State or Territory of residence. It was assumed that the expected mortality of the study sample during a particular period would be the same as the mortality in the general population of the same sex, age, and State or Territory of residence from which they were drawn. The Ederer II method was used to calculate interval-specific relative survival \[[@B9]\].
Firstly the population (L) at the start of each interval in each birth cohort and the number of deaths (D) and those lost to follow-up (W) during the interval were determined. From this information, both the population at risk (L\') and interval-specific survival (P) were estimated by assuming withdrawals and deaths were evenly distributed over the interval via the formulae:
$$\begin{array}{l}
{\text{L}' = \text{L} - \text{W}/2\text{~and}} \\
{\text{P} = 1 - \text{D}/\text{L}'.} \\
\end{array}$$
The cumulative survival (CP) for a particular interval (i) was then obtained by the cumulative product of the interval-specific survival terms where the initial cumulative survival (CP(0)) was equal to one, and:
$$\text{CP}(\text{i} + 1) = \text{CP}(\text{i}) \cdot \text{P}(\text{i}).$$
The expected interval-specific and expected cumulative survival (P\* and CP\*) were calculated similarly, using expected deaths (D\*) obtained from appropriate life tables. Finally the interval-specific and cumulative relative survival ratios (R and CR) were calculated as the ratios of the observed and expected interval-specific and cumulative survivals:
$$\begin{array}{l}
{\text{R} = \text{P}/\text{P}*\text{~and}} \\
{\text{CR} = \text{CP}/\text{CP}*} \\
\end{array}$$
The effect of oversampling women in rural and remote areas in the ALSWH was accounted for via the use of sampling weights, wherein individual weighted deaths both observed and expected were summed to derive the survival estimates.
Separate analyses were carried out for each State and Territory of residence, each category of the Accessibility/Remoteness Index of Australia (ARIA) classification \[[@B10]\], and initial age in years. ARIA categorises areas as \'highly accessible\', \'accessible\', \'moderately accessible\', \'remote\' and \'very remote\' based on the road distance from the closest service centre. Relative survival was calculated using SAS macros created by Paul Dickman \[[@B11]\].
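To make these calculations concrete, the following minimal sketch implements the interval-specific and cumulative relative survival formulae above in Python (the published analyses used the SAS macros and sampling weights described above; the weights are ignored here and the inputs are illustrative). Feeding it the unweighted counts and rounded expected survival probabilities for the first three intervals of Table 1 reproduces the published ratios to within rounding error.

```python
# Minimal sketch of the Ederer II life-table calculations described above.
# Inputs per interval: number alive at the start (L), deaths (D), withdrawals
# (W), and the expected interval survival (P*) derived from the ABS life
# tables.  Sampling weights are ignored for simplicity.

def relative_survival(L, D, W, P_expected):
    CP, CP_star = 1.0, 1.0          # cumulative observed / expected survival
    out = []
    for l, d, w, p_star in zip(L, D, W, P_expected):
        l_eff = l - w / 2.0         # effective number at risk: L' = L - W/2
        p = 1.0 - d / l_eff         # interval-specific observed survival
        CP *= p
        CP_star *= p_star
        out.append((p / p_star, CP / CP_star))   # (R, CR)
    return out

# Unweighted counts and rounded expected survival for intervals 0-3 of Table 1.
for R, CR in relative_survival(L=[12424, 12289, 12108],
                               D=[135, 181, 195],
                               W=[0, 0, 0],
                               P_expected=[0.981, 0.979, 0.977]):
    print(f"interval-specific R = {R:.3f}, cumulative CR = {CR:.3f}")
```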
Comparison to NHS
-----------------
Selected demographic, health behaviour, and health status characteristics of the ALSWH sample were compared with those of 1995 NHS participants of the same age in order to explore possible reasons for the observed differences in survival. These comparisons were presented as percentages and analysed using the χ^2^ statistic.
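For illustration, one such comparison can be reproduced approximately with a χ^2^ test of a contingency table; the counts below are reconstructed from the percentages and sample sizes reported in Table 3 and are therefore approximate.

```python
# Approximate chi-square comparison of smoking status between the ALSWH
# cohort (n = 12 423) and the 1995 NHS (n = 894).  Counts are reconstructed
# from the rounded percentages in Table 3, so the statistic is illustrative.
from scipy.stats import chi2_contingency

#                  ALSWH    NHS
table = [
    [7715, 578],   # never-smoker
    [3777, 217],   # ex-smoker
    [ 944,  98],   # current smoker
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.1e}")
# Consistent with Table 3, the two distributions differ (p < 0.0001).
```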
Effects of factors associated with mortality
--------------------------------------------
A proportional hazards model was used in order to assess the effects of initial differences in potential factors associated with mortality between the study sample and the population \[[@B12]\]. The model was of the form:
$$\lambda_{\text{i}}(\text{a},\text{s},\text{z}_{\text{i}}(\text{a})) = \lambda^{*}{}_{\text{i}}(\text{a},\text{s})\exp\lbrack\beta'\text{z}_{\text{i}}(\text{a})\rbrack$$
where λ~i~(a, s, z~i~(a)) is the death intensity at age a and State/Territory (s) for the ith individual with covariates z~i~(a), and λ\*~i~(a, s) represents the population mortality at age a for an individual of the same sex and State/Territory as the ith individual in the study who was born in the same year as i. The State and Territory specific life tables \[[@B6]\] were used to obtain values for λ\*~i~(a, s).
A multiplicative model was used because in the more widely used additive model it is assumed that, at all times and for all values of covariates, the mortality in the study sample is always either higher or lower than that of the general population. This assumption was not justifiable in this context. The effect of oversampling in rural areas was accounted for by including place of residence as it was defined in the original sample (urban, rural and remote) as a covariate in the model. Factors considered in this analysis were based on the results of a previous study of the survival of this cohort \[[@B13]\]. The factors included were: age, marital status, country of birth, State or Territory of residence, Accessibility/Remoteness Index (ARIA), education, smoking status, physical activity, body mass index and self rated health. The proportional hazards model was fitted using the SAS PHREG procedure; a hazard ratio of less than one indicates better relative survival. All analyses were performed using SAS version 9.1 \[[@B14]\].
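As a rough sketch of one common alternative way to fit such a multiplicative relative survival model (this is not the PROC PHREG implementation used by the authors, and the file and column names are hypothetical), the follow-up can be split into cells and the observed deaths modelled as Poisson counts, with the log of the expected deaths derived from the population life tables entering as an offset:

```python
# Sketch: multiplicative relative survival model fitted as a Poisson GLM.
# With follow-up grouped into cells, lambda = lambda* x exp(beta'z) implies
# E[deaths] = expected_deaths x exp(beta'z), so log(expected_deaths) enters
# as an offset.  File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

cells = pd.read_csv("grouped_followup.csv")
# expected columns: deaths, exp_deaths (from the ABS life tables),
# plus covariates such as smoking and self_rated_health.

X = pd.get_dummies(cells[["smoking", "self_rated_health"]],
                   drop_first=True, dtype=float)
X = sm.add_constant(X)

fit = sm.GLM(cells["deaths"], X,
             family=sm.families.Poisson(),
             offset=np.log(cells["exp_deaths"])).fit()

# exp(coefficient) is the multiplicative effect on the population hazard;
# exp(intercept) is the reference group's mortality relative to the general
# population (values below one indicate better relative survival).
print(np.exp(fit.params))
```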
Results
=======
Relative survival
-----------------
There were 3661 deaths (29.4%) amongst the 12 432 women born in 1921-26 who participated in the baseline survey of the ALSWH. Over the 12-year period of 1996 to 2008, the ALSWH sample had a relative cumulative survival 9.5% (95% confidence interval, 8.3% - 10.7%) greater than that of their peers in the general Australian population matched for age and State or Territory of residence (Table [1](#T1){ref-type="table"}). The interval-specific relative survival advantage remained relatively constant over the whole period, varying between 0.3% and 1.2% per one-year interval.
######
Life Table Estimates of Survival of the Australian Longitudinal Study on Women\'s Health 1921-26 Cohort Relative to Women in the Australian Population Born in the Same Period and Resident in the Same State or Territory.
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
*Interval*\ *L* *D* *W* *Effective number at risk* *Interval-specific observed survival* *Cumulative observed survival* *Interval-specific expected survival* *Cumulative expected survival* *Interval-specific relative survival\** *Relative cumulative survival\** *Lower 95% CI* *Upper 95% CI*
*(years)*
------------- ------- ----- ----- ---------------------------- --------------------------------------- -------------------------------- --------------------------------------- -------------------------------- ----------------------------------------- ---------------------------------- ---------------- ----------------
0.0 - 1.0 12424 135 0 12424 0.989 0.989 0.981 0.981 1.008 1.008 1.006 1.010
1.0 - 2.0 12289 181 0 12289 0.985 0.975 0.979 0.960 1.007 1.015 1.012 1.018
2.0 - 3.0 12108 195 0 12108 0.984 0.959 0.977 0.938 1.007 1.022 1.018 1.026
3.0 - 4.0 11912 185 0 11912 0.984 0.944 0.975 0.914 1.010 1.032 1.028 1.037
4.0 - 5.0 11727 237 0 11727 0.980 0.925 0.973 0.890 1.007 1.040 1.034 1.045
5.0 - 6.0 11490 226 0 11490 0.980 0.907 0.971 0.864 1.010 1.050 1.044 1.056
6.0 - 7.0 11264 282 0 11264 0.975 0.884 0.968 0.836 1.007 1.058 1.051 1.064
7.0 - 8.0 10981 359 0 10981 0.967 0.855 0.964 0.806 1.003 1.061 1.054 1.069
8.0 - 9.0 10623 379 0 10623 0.964 0.825 0.961 0.774 1.004 1.066 1.057 1.074
9.0 - 10.0 10244 361 0 10244 0.965 0.795 0.956 0.740 1.009 1.075 1.066 1.085
10.0 - 11.0 9882 422 0 9882 0.957 0.761 0.951 0.704 1.006 1.082 1.071 1.093
11.0 - 12.0 9460 398 707 9106 0.956 0.728 0.945 0.665 1.012 1.095 1.083 1.107
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
where:
L = Number alive at start of interval;
D = Deaths during interval;
W = Lost to follow-up during each 12 month interval;
Effective number at risk = persons at risk during the interval, accounting for withdrawals;
Interval-specific observed survival = proportion of persons at risk who survive to the end of the interval;
Cumulative observed survival = proportion of persons at risk at the start of the study period who survive to the end of the period, accounting for withdrawals;
Interval-specific expected survival = proportion of persons at risk who are expected to survive to the end of the interval;
Cumulative expected survival = proportion of persons at risk at the start of the study period who are expected to survive to the end of the period, accounting for withdrawals;
Interval-specific relative survival = ratio of observed interval-specific observed survival to expected survival;
Relative cumulative survival = ratio of observed cumulative survival to expected cumulative survival;
\* Relative survival ratios greater than one indicate that the study sample has lower total mortality than the population from which it is drawn, while ratios less than one indicate higher mortality;
ALSWH participants had significantly better survival than the general population in all jurisdictions, with their relative cumulative survival advantage ranging from 6% in South Australia to 23% in the Australian Capital Territory (Table [2](#T2){ref-type="table"}). Their relative cumulative survival was consistently higher than for the general population across all ARIA groups, although in the remote/very remote areas the difference was not statistically significant (Table [2](#T2){ref-type="table"}). The relative survival advantage of ALSWH participants increased with initial age: among those assessed in 1996, a 6% advantage among women aged 70 approached 22% among women aged 75 (Table [2](#T2){ref-type="table"}).
######
Life Table Estimates of Relative Cumulative Survival Over the Period 1996-2008 Among Participants in the Australian Longitudinal Study on Women\'s Health 1921-26 Cohort by State or Territory of Residence, Accessibility/Remoteness Index (ARIA), and Age at Baseline (1996).
Relative Cumulative Survival 95% Confidence Limits
------------------------------------------- ------------------------------ ----------------------- -------
**State or Territory of Residence**
Australian Capital Territory 1.232 1.113 1.321
New South Wales 1.108 1.087 1.127
Queensland 1.085 1.054 1.113
South Australia 1.063 1.024 1.099
Tasmania 1.106 1.024 1.179
Victoria 1.086 1.063 1.109
Western Australia 1.109 1.068 1.146
**Accessibility/Remoteness Index (ARIA)**
Highly Accessible 1.104 1.091 1.116
Accessible 1.042 1.004 1.079
Moderately Accessible 1.064 0.997 1.124
Remote, Very Remote 1.068 0.939 1.173
**Age at Survey 1 (years)**
70 1.064 1.039 1.087
71 1.060 1.036 1.083
72 1.094 1.068 1.119
73 1.109 1.079 1.138
74 1.129 1.095 1.161
75 1.215 1.145 1.280
Comparison of ALSWH cohort to NHS sample at baseline
----------------------------------------------------
The comparison of the ALSWH 1921-26 cohort at baseline (1996) with the 1995 NHS for selected socio-demographic characteristics and health related behaviours is shown in Table [3](#T3){ref-type="table"}. The ALSWH cohort was less likely than their NHS counterparts to be widowed, more likely to be married, and more likely to have a tertiary education. There were more women born in \'Other English speaking background (ESB)\' countries in the ALSWH cohort and fewer born in Europe. Among health related behaviours, ALSWH participants were less likely to be current smokers, report fair or poor self-rated health, and report that their health limited their ability to exercise (walk 100 metres).
######
Comparison of Selected Characteristics Between the Australian Longitudinal Study on Women\'s Health 1921-26 Cohort and the 1995 National Health Survey.
------------------------------------------------------------------------------------
ALSWH\ NHS\ *P* value
N = 12423 N = 894
--------------------------------------------------- ----------- --------- ----------
***Smoking Status***
*Never-smoker* 62.1 64.6 \<0.0001
*Ex-smoker* 30.4 24.3
*Current smoker* 7.6 11.0
***Marital Status***
*Partnered* 55.6 49.6 \<0.0001
*Separated/Divorced* 6.3 5.6
*Widowed* 34.8 42.4
*Never married* 3.2 2.4
***Country of Birth***
*Australian born* 73.5 74.2 0.05
*Other English Speaking* 13.6 10.9
*Europe* 10.1 12.0
*Asia* 1.8 1.4
*Other* 1.0 1.6
***Highest Educational Qualification***
*No Higher Qualification* 84.0 79.3 \<0.0001
*Trade/Apprentice Certificate/Diploma* 11.7 16.7
*University* 4.2 2.7
*Inadequately described* 1.2
***Body Mass Index (BMI) Group***
*Underweight, BMI \< 18.5* 3.2 4.2 0.07
*Healthy weight, 18.5 ≤ BMI \< 25* 50.4 52.1
*Overweight, 25 ≤ BMI \< 30* 33.1 29.1
*Obese, 30 ≤ BMI* 13.2 14.5
***Self Rated Health***
*Excellent* 6.4 7.6 \<0.0001
*Very good* 26.2 23.2
*Good* 39.4 34.4
*Fair* 23.6 24.3
*Poor* 4.3 10.5
***Does Your Health Limit You in Walking 100 m***
*Limited a lot* 7.1 11.3 \<0.0001
*Limited a little* 15.4 19.7
*Not limited* 77.4 68.9
***State or Territory of Residence***
*New South Wales* 34.9 35.8 0.969
*Australian Capital Territory* 1.1 0.8
*Queensland* 16.3 16.6
*South Australia* 10.2 10.0
*Tasmania* 2.8 2.5
*Victoria* 26.0 25.9
*Western Australia* 8.5 8.5
------------------------------------------------------------------------------------
Effects of factors on mortality (relative survival)
---------------------------------------------------
The results of the multiplicative relative survival model are shown in Table [4](#T4){ref-type="table"}. Relative survival was significantly associated with initial age, country of birth, State or Territory of residence, marital status, body mass index (BMI), smoking status, physical activity, and self rated health. For example, age at baseline was positively associated with better relative survival, implying that - even after adjusting for major risk factors - older members of the cohort were healthier than their younger counterparts in the general population.
######
Multivariate Analysis of Relative Survival of the Australian Longitudinal Study on Women's Health 1921-26 Cohort, 1996-2008.

| Variable                                      | Hazard Ratio* | Lower 95% CL | Upper 95% CL |
|-----------------------------------------------|---------------|--------------|--------------|
| ***Smoking Status***                          |               |              |              |
| *Never-smoker (Ref)*                          | 1.00          |              |              |
| *Ex-smoker*                                   | 1.27          | 1.16         | 1.38         |
| *Current smoker*                              | 1.84          | 1.61         | 2.10         |
| ***Marital Status***                          |               |              |              |
| *Partnered (Ref)*                             | 1.00          |              |              |
| *Separated/Divorced*                          | 1.09          | 0.92         | 1.30         |
| *Widowed*                                     | 1.10          | 1.01         | 1.20         |
| *Never married*                               | 1.36          | 1.08         | 1.72         |
| ***Country of Birth***                        |               |              |              |
| *Australian born (Ref)*                       | 1.00          |              |              |
| *Other English Speaking*                      | 1.02          | 0.90         | 1.15         |
| *Europe*                                      | 0.85          | 0.73         | 1.00         |
| *Asia*                                        | 0.64          | 0.42         | 0.97         |
| *Other*                                       | 0.38          | 0.20         | 0.74         |
| ***Highest Educational Qualification***       |               |              |              |
| *No Higher Qualification (Ref)*               | 1.00          |              |              |
| *Trade/Apprentice Certificate/Diploma*        | 0.97          | 0.85         | 1.11         |
| *University*                                  | 0.86          | 0.68         | 1.08         |
| ***Body Mass Index (BMI) Group***             |               |              |              |
| *Underweight, BMI < 18.5*                     | 1.66          | 1.39         | 1.97         |
| *Healthy weight, 18.5 ≤ BMI < 25 (Ref)*       | 1.00          |              |              |
| *Overweight, 25 ≤ BMI < 30*                   | 0.83          | 0.76         | 0.91         |
| *Obese, 30 ≤ BMI*                             | 0.96          | 0.85         | 1.08         |
| ***Self Rated Health***                       |               |              |              |
| *Excellent (Ref)*                             | 1.00          |              |              |
| *Very good*                                   | 1.10          | 0.88         | 1.37         |
| *Good*                                        | 1.42          | 1.14         | 1.75         |
| *Fair*                                        | 2.42          | 1.95         | 3.00         |
| *Poor*                                        | 4.86          | 3.80         | 6.22         |
| ***Physical Activity***                       |               |              |              |
| *None*                                        | 1.60          | 1.44         | 1.78         |
| *Low*                                         | 1.04          | 0.93         | 1.17         |
| *Moderate (Ref)*                              | 1.00          |              |              |
| *High*                                        | 1.04          | 0.89         | 1.20         |
| ***State or Territory of Residence***         |               |              |              |
| *New South Wales (Ref)*                       | 1.00          |              |              |
| *Australian Capital Territory*                | 0.88          | 0.51         | 1.53         |
| *Queensland*                                  | 1.09          | 0.97         | 1.22         |
| *South Australia*                             | 1.09          | 0.94         | 1.26         |
| *Tasmania*                                    | 1.06          | 0.86         | 1.30         |
| *Victoria*                                    | 1.13          | 1.01         | 1.25         |
| *Western Australia*                           | 1.07          | 0.90         | 1.26         |
| ***Accessibility/Remoteness Index (ARIA)***   |               |              |              |
| *1. Highly Accessible (Ref)*                  | 1.00          |              |              |
| *2. Accessible*                               | 1.10          | 0.99         | 1.21         |
| *3. Moderately Accessible*                    | 1.18          | 1.00         | 1.39         |
| *4. Remote, Very Remote*                      | 1.18          | 0.85         | 1.64         |
| ***Age at Survey 1 (years)***                 |               |              |              |
| *70 (Ref)*                                    | 1.00          |              |              |
| *71*                                          | 1.07          | 0.93         | 1.23         |
| *72*                                          | 0.98          | 0.85         | 1.12         |
| *73*                                          | 0.92          | 0.80         | 1.06         |
| *74*                                          | 0.91          | 0.79         | 1.05         |
| *75*                                          | 0.73          | 0.59         | 0.90         |
\* Hazard ratios greater than one indicate higher mortality relative to the reference category, while ratios less than one indicate lower mortality.
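One common way to fit a multiplicative relative survival model of the kind summarised above is Poisson regression of observed deaths with the log of expected deaths (derived from population life tables) as an offset, so that covariate effects multiply the population hazard. The sketch below illustrates the idea with statsmodels; the data frame, variable names and figures are assumptions for illustration, not the model code or data used in the study.

```python
# Sketch: multiplicative relative survival via Poisson regression with an offset.
# Assumes a person-period data set with observed deaths, expected deaths (from
# life tables) and covariate indicators, one row per covariate pattern.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "deaths":          [12, 30, 9, 25],
    "expected_deaths": [10.0, 22.0, 8.5, 15.0],
    "current_smoker":  [0, 0, 1, 1],
    "widowed":         [0, 1, 0, 1],
})

# log(mu) = log(expected_deaths) + X*beta, so exp(beta) is a hazard ratio
# relative to the reference category (values above 1 indicate excess mortality).
model = smf.glm(
    "deaths ~ current_smoker + widowed",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["expected_deaths"]),
).fit()
print(np.exp(model.params))        # hazard ratios
print(np.exp(model.conf_int()))    # 95% confidence limits
```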
Discussion
==========
In the 12 years (1996-2008) under consideration, the ALSWH 1921-26 cohort had significantly better survival than the general population. This better relative survival was consistent across all jurisdictions for the duration of the study. There was also no indication that the survival of the sample converged to that of the general population over time. This result was unexpected because most sample groups tend to become more like the general population over time (or, even if they differ at baseline, the effect of that initial difference becomes less important). Other longitudinal studies have found the mortality of sampled respondents and non-respondents converging over time \[[@B15],[@B16]\].
That the ALSWH 1921-26 cohort had significantly better survival was expected, as participants were self-selected from an initial sample of randomly selected Australian women. Of the 39 000 women initially selected from the Medicare database, 12 432 responded \[[@B17]\]. It would be expected that older women - in particular those with better health ('the healthy volunteer') or who were more interested in their health - would be more likely to participate in such a survey. Similar effects have been observed in other longitudinal studies \[[@B15],[@B16],[@B18]\].
Consistent with previous analysis comparing the ALSWH cohort to census data \[[@B19]\], systematic differences were observed between the ALSWH 1921-26 cohort and the participants of the 1995 National Health Survey. A previously published study of survival among the 1921-26 cohort showed that self rated health was a strong predictor of long term survival \[[@B13]\]. Other variables found to be associated with survival in the current study (e.g., marital status, country of birth, smoking, physical activity and BMI) were also associated with survival in the previous study. This study found significant differences between the ALSWH cohort and the NHS participants with respect to several of these variables, namely self rated health, smoking status, marital status, and country of birth. Given the magnitude and direction of these differences, these factors could explain a major portion of the observed survival advantage in the ALSWH cohort. For example, if the distribution of self rated health observed in the NHS were applied to the ALSWH sample, then the cumulative mortality of the ALSWH cohort as a whole would be increased by 10%, thereby reducing the survival advantage observed in the ALSWH cohort by about half.
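The 'what if' calculation described above, in which the NHS distribution of self rated health is applied to the ALSWH cohort, is a simple direct standardisation. The sketch below shows the arithmetic; the two distributions come from the comparison table, while the category-specific cumulative mortality values are hypothetical placeholders rather than figures from the study.

```python
# Direct standardisation sketch: reweight category-specific cumulative mortality
# by the NHS distribution of self rated health instead of the ALSWH distribution.

alswh_dist = {"Excellent": 0.064, "Very good": 0.262, "Good": 0.394,
              "Fair": 0.236, "Poor": 0.043}      # ALSWH column of the table
nhs_dist   = {"Excellent": 0.076, "Very good": 0.232, "Good": 0.344,
              "Fair": 0.243, "Poor": 0.105}      # NHS column of the table

cum_mortality = {"Excellent": 0.15, "Very good": 0.17, "Good": 0.22,
                 "Fair": 0.33, "Poor": 0.55}     # hypothetical, per category

observed   = sum(alswh_dist[k] * cum_mortality[k] for k in alswh_dist)
reweighted = sum(nhs_dist[k] * cum_mortality[k] for k in nhs_dist)
print(f"relative increase in cumulative mortality: {reweighted / observed - 1:.1%}")
```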
Limitations
-----------
An alternative possible reason for the observed difference in survival between the ALSWH sample and NHS participants could be incomplete ascertainment of deaths in the study population. Death information was obtained both from linkage to the NDI and from notification by family or carers of participants. A previous paper examining linkage of the study population to the NDI in 1998 showed that such a linkage identified 95% of the deaths \[[@B5]\]; however, this has not been reassessed since that time, and it is possible that this capture rate has worsened over the last decade. Other studies of the accuracy of the NDI found false negative rates ranging from 3% to 11% (compared to 5% in the ALSWH study) \[[@B20]-[@B22]\]. If the false negative rate is as high as 10%, this could account for about half the difference observed in this study. However, such a high false negative rate is considered unlikely because NDI linkage is supplemented by other information, particularly data obtained at the time of the triennial ALSWH surveys. Still, the systematic difference in survival observed over the study period suggests some under-ascertainment of deaths may have occurred.
Another limitation to the use of relative survival as a tool for assessing generalisability is the categorisation of available life tables. For Australia, life tables are available by age, sex and State/Territory of residence only; it would be useful if they were available for other factors such as smoking status. Indeed, if other population data on survival were available with stratification by other variables (e.g. from other cohort studies), then the relative survival approach with weighting by strata would be feasible.
Conclusion
==========
This study has shown that relative survival can be a useful and relatively easily obtained measure of generalisability (external validity) in a longitudinal study of the health of a population-based sample, particularly when participants are of advanced years. The advantage of this method is that, through a single measure, it can indicate the degree to which a study sample corresponds to the general population with respect to health status. Along with the other comparisons of the study population to census and survey data, this measure provides information relevant to the reporting requirements of the STROBE statement \[[@B2]\]. In the case of the ALSWH cohort studied here, it seems likely that most of the difference in their survival over that of the general population is attributable to the better health of the sample at baseline.
Implications
------------
It is essential that any future analysis of this cohort considers the results of this investigation, but whether and how any adjustments are made will depend on the objectives of the future work. If the analysis involves examining the associations of various factors with some outcome, then it may be sufficient to control for the factors that were found to be associated with improved survival. On the other hand, if population estimates are required, then it would be necessary to employ some type of weighting scheme involving these factors, as well as the weights that account for the deliberate oversampling in rural areas. For example, if one were estimating the population prevalence of diabetes in older women, then it may be necessary to use weights for area of residence and perhaps other factors such as self rated health.
Competing interests
===================
The authors declared no conflict of interest, and no funding source was involved in the creation of this manuscript.
Authors' contributions
=======================
RH conceptualized the study, conducted the analysis, and prepared the manuscript; LT and AD assisted with the initial design, methods used in the analysis, and drafting of the manuscript. All authors have read and approved the final manuscript.
Acknowledgements
================
The research on which this paper is based was conducted as part of the Australian Longitudinal Study on Women's Health, The University of Newcastle and The University of Queensland. We are grateful to the Australian Government Department of Health and Ageing for funding and to the women who provided the survey data.
| |
the Department of Modern Languages,
Faculty of Management and Human Resource Development,
Universiti Teknologi Malaysia
ACKNOWLEDGEMENTS
In the name of Allah the Most Gracious, the Most Merciful
Throughout the preparation of this dissertation, I owed much to my supervisor, Dr. Salbiah Seliman, whose support, advice and encouragement have been the backbone of the research. Thank you so much; only Allah knows how much I am indebted to you.
My thanks and appreciation also go to my wonderful and supportive English Panel in SM Teknik Kota Tinggi, for their never-ending help and sharing of ideas. Not forgetting, heartfelt thanks to my friends, En. Hasni, En. Shamsulkamal, Puan Zailila, Puan Zanariah and Cik Nojiza; thank you for your support.
Very special thanks go to my friends in UTM, Sazuliana, Marsyiana, Masitah, Norhaliza and Siti Adibah, who were there to support me through ups and downs. To Firdaus, thank you for everything. You always have faith in me and you are always at my side through thick and thin. Thanks for being such a wonderful person in my life.
ABSTRACT
ABSTRAK
TABLE OF CONTENTS
Declaration ii
Acknowledgement iii
Abstract iv
Abstrak v
Table of Contents vi
List of Figures ix
List of Tables x
List of Abbreviations xi
List of Appendices xii
1.0 INTRODUCTION
1.1 Introduction 1
1.2 Background of Study 2
1.3 Statement of Problem 4
1.4 Objectives of the Study 5
1.5 Research Questions 6
1.6 Scope of Study 6
1.7 Significance of the Study 7
2.0 REVIEW OF LITERATURE
2.1 Introduction 8
2.2.2 The Concept of and Importance of Conferencing 12
2.2.3 Different Types of Feedback 14
2.2.4 Different Types of Conferencing 16
2.2.5 Problems Arising in Giving Feedback and Conferencing 19
2.3 Related Research to the Use of Feedback in Writing 22
3.0 METHODOLOGY
3.1 Introduction 28
3.2 Research Design 29
3.3 Population 33
3.4 Sampling Design 34
3.5 Subject 36
3.6 Instruments 37
3.6.1 Writing Task 37
3.6.2 Conference Questions 40
3.6.3 Questionnaires 40
3.6.4 Interview Questions 41
3.7 Procedure of Data Collection 42
3.7.1 Preliminary Study 44
3.7.2 Preparation of Instruments 45
3.7.3 Piloting of Instruments 46
3.7.4 Improvement of Instruments 47
3.7.5 Fieldwork 49
4.0 FINDINGS AND DISCUSSION
4.1 Introduction 58
4.2 Types of Feedback Given to the Respondents 59
4.2.1 Feedback on Content 59
4.2.2 Feedback on Form 65
4.3 The Respondents’ Responses to Teacher’s Written
Feedback 72
4.4 The Respondents’ Responses to the Teacher’s
Conferencing Session 76
5.0 CONCLUSION AND RECOMMENDATIONS
5.1 Introduction 82
5.2 Conclusion 83
5.3 Limitations 84
5.4 Recommendations 85
5.4.1 Pedagogical Implications 85
5.4.2 Suggestion for Further Research 87
References 88
LIST OF FIGURES
Figure Title Page
1 The Research Design Used in the Study 31
LIST OF TABLES
Table Title
Page
1 Marks Obtained by the Respondents for ESL
Writing in January Test 36
2 Feedback on Content Received by the Respondents 61
3 Feedback on Form Received by the Respondents 66
LIST OF ABBREVIATIONS
L1 First Language
L2 Second Language
LIST OF APPENDICES
Appendix Table Page
A Writing Task 92
B Guidelines for Teachers in Commenting on
Essay Drafts 93
C Assessment Scale for Written Work 94
D Marking Code Used to Assess Writing 96
E Conference Questions 97
F Questionnaires 98
G List of Interview Questions 101
H A Sample Transcription of the Interview Session 102
I A Sample Transcription of the Conferencing
CHAPTER I
INTRODUCTION
1.1 Introduction
1.2 Background of the Study
There are several ways to think about errors in writing in light of what we know about second language acquisition and about how texts, context and the writing process interact with each other. As mentioned, ESL students’ writing generally contains varying degrees of grammatical and rhetorical errors. Such errors are especially common among ESL writers who have a lot of ideas but not enough language to express what they want to say in a comprehensible way.
The ability to write well is not naturally acquired; it is learned through practice and experience. Writing also involves composing, which implies the ability to convey information. The introduction of the process approach to writing helps students better understand the writing process, and this approach eventually helps them build their own strategies in writing. As stated by Flower (1981), by using the process approach, students have more time on their hands to discover their reading strategies and to consider feedback from teachers. As stated by Zamel (1983), “By studying what it is our students do in their writing, we can learn from them what they still need to be taught”. That is one major reason why teachers’ feedback is crucial in helping students improve their writing.
1.3 Statement of Problem
Some of the issues raised regarding the lack of confidence in ESL writing are connected to the teacher and the methodology used in teaching writing. The methodology is an important factor in the teaching process. In the debate on the decline of the English language, much discussion has fallen on teaching methodology (Star, Nov. 8, 2000). It has been suggested that the teaching methods selected have not been effective in delivering the lessons to the students.
However, their feedback on form and content is often vague, contradictory, unsystematic and inconsistent. This leads to various reactions from students, including confusion, frustration and neglect of the comments. Hence, to help students improve their writing, both the teacher and the methodology must be looked into.
This research proposes the use of feedback, defined “…as an input from a reader to a writer with the effect of providing information to the writer for revision” (Siti Hamin, 2001), and conferencing, “where students are invited to further develop their stories, to add more information, to include descriptive language” (Taylor, 1994, cited in Jarvis, 2002).
1.4 Objectives of the Study
1. To find out whether the use of written feedback helps students to improve their writing.
2. To investigate whether the use of conferencing helps students to improve their writing.
1.5 Research Questions
1. What are the types of feedback given to students?
3. What are the students’ responses towards the teacher’s conferencing sessions?
1.6 Scope of the Study
This study focuses on secondary school students from a suburban school in Johore. In addition, the margin of improvement is not the main focus of the research; the focus is only on the students’ reactions to the teacher’s feedback and conferencing sessions.
Encoding predicted outcome and acquired value in orbitofrontal cortex during cue sampling depends upon input from basolateral amygdala.
Certain goal-directed behaviors depend critically upon interactions between orbitofrontal cortex (OFC) and basolateral amygdala (ABL). Here we describe direct neurophysiological evidence of this cooperative function. We recorded from OFC in intact and ABL-lesioned rats learning odor discrimination problems. As rats learned these problems, we found that lesioned rats exhibited marked changes in the information represented in OFC during odor cue sampling. Lesioned rats had fewer cue-selective neurons in OFC after learning; the cue-selective population in lesioned rats did not include neurons that were also responsive in anticipation of the predicted outcome; and the cue-activated representations that remained in lesioned rats were less associative and more often bound to cue identity. The results provide a neural substrate for representing acquired value and features of the predicted outcome during cue sampling, disruption of which could account for deficits in goal-directed behavior after damage to this system.
| |
When art and design faculty aren’t busy teaching students art techniques, they are often creating their own work. USI is home to many gifted artists who have been exhibited worldwide. In this series we asked a member of the USI art faculty to talk about a favorite work they created and the meaning or influence behind it.
We kick this series off with Rob Millard-Mendez, the chair of the Art and Design Department and associate professor of art at USI. His work has been shown in 510 exhibitions, both internationally and in all 50 states. His art has been featured in Sculpture Magazine, American Craft Magazine and numerous books, and his pieces have found homes in more than 75 private and public collections.
STYLE:
“My style is fairly eclectic in that I’m working with all kinds of different materials,” said Millard-Mendez. “There’s not one signature material that is decidedly my own. My style is also kind of chunky and inspired by American folk art. It’s meant to be sarcastic and funny. I like to play with the levels of craft in art, so my work ranges from meticulously crafted items to more bare works that show the process of the creation.”
A FAVORITE PIECE:
Rob Millard-Mendez, The Antediluvian Plan of the Manikin Men (from the Popol Vuh) wood, paint, 19"h x 22"w x 16"d, 2015.
STORY BEHIND THIS WORK:
“The Popol Vuh is a Mayan creation myth,” said Millard-Mendez. “The story in a nutshell is that the gods created a population to worship them, and they make a couple of attempts. One of the attempts is to make manikin men out of wood and resin, but what happens is these manikin men aren’t smart enough or don’t care enough to worship them. The gods then send a flood to destroy them.”
He had initially intended to paint and finish the whole piece, but after a suggestion from his wife, Nancy Raen-Mendez, instructor in art, he decided on the “naked style.” The “naked style” was an approach used by another artist Millard-Mendez followed, George Lopez, a southwestern carver of saints and angels who often went with this unfinished approach.
In Millard-Mendez’s piece, you can find glue marks and pencil lines marking measurements. This style allows viewers a glimpse into the process of creating the work. He also utilized a variety of species of wood. The entire piece is made from repurposed material including mahogany that he has kept from scraps used during his days as a contractor in Massachusetts, nearly 20 years ago.
CURRENT PROJECTS:
Rob Millard-Mendez, The Poet's Dilemma, wood, steel, paint, glass eye, pencils, paper, mousetraps, 36"h x 13"w x 11"d, 2017
He recently completed a piece called “The Poet’s Dilemma,” an interactive piece featuring a cyclops bird with wings made of mice traps, legs and talons made of pencils mounted on a base with a hand crank.
He is currently working on a piece for the USI Faculty Show held in the McCutchan Art Center and Pace Galleries, about the famous philosopher, Immanuel Kant. Millard-Mendez describes the piece as “being a commentary about Kant’s bridge between realism and idealism and the negotiation of both.”
More of Millard-Mendez’s work can be found on his website.
[Good Practice of Clinical Physiology Examination for Patient Safety with a Team-Based Approach: Quality Practice in Ultrasonographic Examination].
For the safety of patient care, a team-based approach has been advocated as an effective measure. In clinical physiology examination, we have been making efforts to promote good practice for patient safety based on such an approach in Tokai University Hospital, as represented by quality practice in ultrasonographic examination. The entire process of ultrasonographic examination can be divided into three parts: pre-examination, examination, and post-examination processes. In each process of the examination, specific quality issues must be considered, eventually ensuring the quality and safety of patient care. A laboratory physician is responsible for not only quality assurance of examination, diagnosis, and reporting, but also patient safety. A laboratory physician can play a key role in all aspects of patient safety related to each process of the examination by taking a leadership role in the team-based approach.
| |
In this lesson, we briefly talked about the difference between risks and rewards. We learned that the 10 year Federal Note is a risk-free investment that provides a marginal return. We know that in follow-on lessons, we're going to use the 10 year note as our baseline value to relatively compare the value of other investments.
When we assess the amount of risk that's associated with an investment, we learned about three factors that make an investment risky.
1. Debt. We learned that as a company increases the amount of debt (or leverage) it uses, the result is typically diminishing returns. By avoiding investments that carry a lot of debt, you'll mitigate the risks associated with any investment.
2. Price. Although investors might have the opportunity to purchase a really great business, we learned that the price at which they purchase the asset can actually result in a poor investment. We know that the price is what we pay and that value is what we get. This idea is at the heart of a value based investing approach.
3. Knowledge. One of the hardest things for an investor to do is to admit that they don't know all the facts. Although this may prove challenging, the faster an investor can identify their lack of knowledge or inability to properly account for all the variables, the less risk they'll assume in any investment.
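Since the 10 year note is the baseline throughout this course, a quick way to make the comparison concrete is to discount an investment's expected cash flows at the 10 year rate and see whether the asking price still leaves a margin of safety. The numbers below are made up for illustration and the 10 year yield changes daily, so treat this as a sketch of the idea rather than a valuation tool.

```python
# Sketch: compare an investment's asking price with its value discounted at the
# 10-year note yield (the risk-free baseline). All inputs are hypothetical.

def discounted_value(cash_flows, discount_rate):
    """Present value of a list of future annual cash flows."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

ten_year_yield = 0.025                       # assume a 2.5% yield on the 10-year note
expected_cash_flows = [80] * 9 + [1080]      # e.g. $80/yr, plus a $1,000 sale in year 10
asking_price = 1400

value_at_baseline = discounted_value(expected_cash_flows, ten_year_yield)
print(f"value at the risk-free rate: ${value_at_baseline:,.0f}")
print("margin of safety" if asking_price < value_at_baseline else "price exceeds value")
```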
Course Index
- What is Value Investing?
- Value a Small Business like Warren Buffett
- What is a Balance Sheet and Margin of Safety
- What is a Share
- (PE) Finding Basic Stock Terms
- Warren Buffett Stock Basics
- What is a Bond
- What are the components of a bond
- Value a Bond and Calculate Yield to Maturity (YTM)
- What is the Stock Market
- Stock Market Crash and Market Bubbles
- What is the Fed
- What is Financial Risk
- What is Inflation
- What is the S&P Rating
- What is a Yield Curve
- How to use a Bond Calculator
- Warren Buffett's Four Rules to Investing
- Warren Buffett's 1st Rule - What is the Current Ratio and the Debt to Equity Ratio
- Warren Buffett's 2nd Rule - Understanding Capital Gains Tax
- Warren Buffett's 3rd rule - A stock must be stable and understandable
- Warren Buffett Intrinsic Value Calculation - Rule 4
- What is Preferred Stock
- Calculate Yield to Call and How to buy Preferred Stock
- Calculate Book Value with Preferred Stock
- What is Income Investing
- What is a Cash Flow Statement
- How to read a cash flow statement
- When to sell stock like Warren Buffett
- What is Return On Equity - Warren Buffett's Favorite Number
- (PE) Return on Equity Practical Exercise
- What is Stock Volume
- How to calculate stock terms
- How to use a stock screener
- What is Goodwill on a Balance Sheet
- Warren Bufett's Owner's Earnings Calculation
- Warren Buffett DCF Intrinsic Value Calculator
Course Description
This course will teach you how to invest in stocks and bonds like Warren Buffett. It is highly recommended that you take all the lessons in order.
Download Preston's 1-page checklist for finding great stock picks: http://buffettsbooks.com/checklist
Preston Pysh is the #1 selling Amazon author of two books on Warren Buffett. The books can be found at the following location:
http://www.amazon.com/gp/product/0982967624/ref=as_li_tl?ie=...
http://www.amazon.com/gp/product/1939370159/ref=as_li_tl?ie=... | https://cosmolearning.org/video-lectures/what-financial-risk/ |
We consider the problem of assigning flights to baggage belts in the baggage reclaim area of an airport. The problem is originated by a real-life application in Copenhagen airport. The objective is to construct a robust schedule taking passenger and airline preferences into account. We consider a number of business and fairness constraints, avoiding congestions, and ensuring a good passenger flow. Robustness of the solutions is achieved by matching the delivery time with the expected arrival time of passengers, and by adding buffer time between two flights scheduled on the same belt. We denote this problem as the Baggage Belt Assignment Problem (BBAP). We first derive a general Integer Linear Programming (ILP) formulation for the problem. Then, we propose a Branch-and-Price (B&P) algorithm based on a reformulation of the ILP model tackled by Column Generation. Our approach relies on an effective dynamic programming algorithm for handling the pricing problems. We tested the proposed algorithm on a set of real-life data from Copenhagen airport as well as on a set of instances inspired by the real data. Our B&P scheme outperforms a commercial solver launched on the ILP formulation of the problem and is effective in delivering high quality solutions in limited computational times, making it possible its use in daily operations in medium-sized and large airports. | https://orbit.dtu.dk/en/publications/the-baggage-belt-assignment-problem |
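The abstract is not accompanied by code here, but the core assignment structure of the BBAP can be illustrated with a toy integer program: binary variables assign each flight to exactly one belt, and a no-overlap constraint enforces a buffer between flights sharing a belt. The sketch below uses the PuLP modelling library with invented data and a simplified objective; it only gives a flavour of the full formulation, which also covers passenger flow, fairness and congestion constraints and is solved by Branch-and-Price rather than a generic solver.

```python
# Toy sketch of a baggage-belt assignment ILP (hypothetical data, simplified model).
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

flights = {"F1": (10, 40), "F2": (20, 50), "F3": (55, 85)}   # (start, end) minutes
belts = ["B1", "B2"]
buffer_min = 10                       # required gap between two flights on one belt
preference_cost = {("F1", "B1"): 0, ("F1", "B2"): 2,
                   ("F2", "B1"): 1, ("F2", "B2"): 0,
                   ("F3", "B1"): 0, ("F3", "B2"): 1}

prob = LpProblem("baggage_belt_assignment", LpMinimize)
x = {(f, b): LpVariable(f"x_{f}_{b}", cat=LpBinary) for f in flights for b in belts}

# Simplified objective: minimise total preference cost of the assignment.
prob += lpSum(preference_cost[f, b] * x[f, b] for f in flights for b in belts)

# Each flight gets exactly one belt.
for f in flights:
    prob += lpSum(x[f, b] for b in belts) == 1

# Flights whose delivery windows (plus buffer) overlap cannot share a belt.
def overlaps(f, g):
    s1, e1 = flights[f]
    s2, e2 = flights[g]
    return s1 < e2 + buffer_min and s2 < e1 + buffer_min

for f in flights:
    for g in flights:
        if f < g and overlaps(f, g):
            for b in belts:
                prob += x[f, b] + x[g, b] <= 1

prob.solve()
print({f: next(b for b in belts if x[f, b].value() >= 0.5) for f in flights})
```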
How rural Hills Elementary boosted student test scores and became a Blue Ribbon School
Lisa TeBockhorst echoes wisdom often shared among educators: the teacher is one of the most pivotal factors in student achievement.
What's important is that they actually believe it, she says.
"Rather than excusing why the students aren't achieving, it's more that mindset of, I've got them for seven hours a day — how can I impact them in the most critical way?" said TeBockhorst, a longtime principal at Hills Elementary.
Spanning the length of just a few city blocks, Hills is a rural community of fewer than 1,000 people south of Iowa City. It's also home to Hills Elementary, which was nationally recognized this year for boosting academic achievement among its diverse class of students.
Hills is one of 325 schools to win a National Blue Ribbon School award for 2021. About 81% of students learning English at the school are making progress toward proficiency in reading, writing, speaking and listening, compared with a state average of 61%, according to data from 2019.
Hills was listed as "targeted" by the state's department of education in 2018 because its ELL students were scoring among the lowest 5% of schools in the state in some areas. The next year, the students met the target.
In addition, 74% of Hills students overall are showing growth in math, compared with 50% statewide.
More than one in four kids at the school, or 27%, are in the process of learning English. Spanish is their most commonly spoken first language. Across the Iowa City district, about 12% of students are Hispanic; at Hills, it's 38%.
About three in four students at Hills qualify for free or reduced-price lunch, at 77%. That's also far above the districtwide average of 39% in 2018-19.
How did this happen?
Hills took an individual approach to learning: educators closely tracked students' test scores and found ways to give them additional practice when they needed it. Teachers also kept an eye out for their day-to-day emotional well-being, and stepped in to help if they learned through behavior screening tools that something could be wrong.
Just 128 schools in Iowa have been awarded a Blue Ribbon since the program began nearly four decades ago, including fellow Iowa City Community School District schools Weber Elementary in 2005 and Southeast Junior High in 1983.
TeBockhorst, who took a job in a different school district this school year, says it's a rewarding feeling. She was at Hills for nine years.
"When I made that choice to be an educator, it was to unlock possibilities for kids. To help them be the best that they can be in their areas," she said. "But it's also a very proud moment, not necessarily for me; I am proud of our staff and our students and the support from the community and the families."
Hills educators use data to track progress, know when to intervene
At Hills, learning is planned out, purposeful and intentional.
Educators monitor kids' test scores over time with an eye for which skills, specifically, they needed support on. Along with "screener" tests, students are given diagnostic tests on specific skills — for example, in reading, they are tested for understanding of semantics, phonics and words with multiple syllables.
Using the data, teachers then figure out how to intervene.
If data show a student isn’t making progress, a teacher or instructional coach will reassess their strategies for helping the student: Had they chosen the wrong type of academic "intervention"? Was it being taught the right way?
The additional help could come in the form of one-on-one or small group practice with a teacher.
"(For example) Maybe their correct words per minute is not what it should be. So we would look at, what's causing their breakdown in fluency? Is it because they're having trouble blending? Or is it because they just need more practice on certain skills of reading fluently, like pausing or breaking apart words?" TeBockhorst explained.
For ELL students, the school puts a focus on vocabulary and what's called "realia," a way of helping kids learn concepts that are new to them. For example, some students at Hills may have never seen snow before. So when the word comes up in a story, teachers will find ways to model what it means and make it more than just a concept in order to reinforce the meaning.
Because of the wide array of learning needs kids bring with them to Hills, the school also makes use of "class wide interventions," or additional practice for all students for skills they could improve on. That's part of a broader focus on what the district calls its "multi-tiered system of supports."
It might sound like a simple solution, TeBockhorst said, but educators also talked a lot about making the most of their time. They used "time audits" to ensure kids were getting enough instruction during the day in literacy and math, for example.
Another part of their work was on "not watering down the grade level standard," TeBockhorst said, "but scaffolding it up so kids can reach that benchmark."
Diane Schumacher, executive director of teaching and learning for Iowa City schools, confirmed that the educational tools being used at Hills are also present across the district. She can't say what worked particularly well there, she said via email, "other than a strong commitment to all of them."
Also essential at Hills: Monitoring students' emotional well-being
Research shows that the optimum learning environment is one where students feel safe and cared for. It's the school's job to partner with families to make that happen, including offering them resources and communicating openly about their students, TeBockhorst said.
At Hills, teachers track student data using a tool called mySABERS, looking at indicators about kids' emotional well-being over time. The tool was administered for students in second through eighth grade for the first time last school year across ICCSD.
Data from mySABERS can be used to trigger class-wide interventions — for example, spending a few extra minutes talking with the whole class about school procedures and routines, conflict resolution or how to regulate emotions.
Hills also holds 30-minute morning meetings, another social/emotional learning strategy being used in the district. They could involve a greeting, sharing about the day and specific skill lessons. A teacher might ask: "Can you share an example of when you saw someone feeling really happy, and how did you know?"
Older students also begin the day with a screener called Closegap. It asks them to identify if they were feeling tired, hungry, happy or sad, among other measures of well-being.
The data gives teachers a reason to see if the student needs extra help or, in some cases, connect them with a counselor or student family advocate.
"It's really framing that mindset around, we want you to feel safe to take a chance. Because that's how we're going to get better at what you do," TeBockhorst said. | |
On Tuesday, March 1 at 9 p.m. the Travel Channel airs Zimmern’s return to our area as part of his second series “Bizarre Foods: Delicious Destinations.”
The Duck & Bunny and the East Side Win I HEART PROVIDENCE 2013 Culinary Competition. Published on:
On Tuesday, February 5th, the fifth annual I HEART PROVIDENCE was held at Providence City Hall. This year’s event included a culinary competition with the theme East Side vs. West Side. East Side chefs Nemo Bolin of Cook & Brown Public House, Brandy Schwalbe of The Duck & Bunny, and Paul Jalaf of Vanity “battled” against [….]
I HEART PROVIDENCE Hosts Culinary Competition Between East Side and West Side Chefs, Tuesday, February 5, 2013. Published on:
The Canadian Electricity Association (CEA) is a leader in the conversation about gender equality in the electricity sector. As a way to continuously re-emphasize our commitment to NRCan’s Equal by 30 campaign, and to shedding a light on the importance of inspiring and educating other women in this sector, we have produced a short series of conversations with CEA’s Women in Leadership.
Janice Garcia, Corporate Secretary and Director of Membership and Sustainability at CEA, talked to us about what it means to be a woman in the electricity sector, the opportunities it presents and the importance of mentorship. Working in the electricity sector for 15 years, Janice offers a unique perspective on this topic.
- What initially drew you to the electricity sector?
I’ve been working in the electricity sector for almost 15 years now. I started my career working at BC Hydro—which happens to be a long-term CEA member—in an Indigenous Relations and Negotiations role. I was always aware of BC Hydro’s reputation as a top employer in the Province. The time I spent working at this company was incredibly rewarding and I learned a tremendous amount of lessons from the people on my team and from the Indigenous leaders we consulted with, about the electricity sector, the way it operates, the opportunities and the challenges it presents.
- What is the greatest opportunity that exists for women in this sector?
This sector offers a tremendous opportunity for women in many paths such as EITs, power workers, skilled trades and executive management roles. The industry as a whole is continuously focusing on providing workplaces that represent the diversity of the communities they operate in and serve.
- What has been the most rewarding aspect about working in the sector?
Some of the most rewarding aspects of my experience in this sector would be the relationships I have built, the diverse opportunities available to contribute to strategic discussions, to voice my opinion and provide my input, all within the common good of Canadians. For instance, witnessing the way that many industry initiatives and projects can transform Indigenous communities through training and development has been exceedingly inspiring.
- What is something young women should look for in a mentor?
I strongly believe in the power of mentorship and that every young person should seek a mentor. A mentor should be someone who takes the time to get to know and understand you, someone who is honest, trustworthy and can help you see the best in yourself. It’s also crucial that your chosen mentor understands your personal and professional goals and help you establish a plan to achieve them. Most people would be extremely flattered if asked to be a mentor, so if you have someone in mind, I would strongly encourage you to approach them. | https://electricity.ca/blog/ceas-women-in-leadership/ |
When the United Kingdom released its coronavirus app in early May on the Isle of Wight, Health Secretary Matt Hancock said people testing the digital tracing tool were "at the forefront of helping Britain get back on her feet."
What a difference almost two months makes.
On Thursday, London announced it had postponed the countrywide launch of its coronavirus app so that it could be overhauled to use technology provided by Google and Apple. The U-turn follows more than two months of technical glitches, questions about the app's effectiveness, and doubts over whether people would even download it in the first place.
"We have agreed to share our own innovative work on estimating distance between app users with Google and Apple," Dido Harding, who chairs the U.K. government's test and trace program, and Matthew Gould, chief executive of NHSX, the innovation unit of the country's health service, said in a statement. "Our ambition is to develop an app which will enable anyone with a smartphone to engage with every aspect of the NHS Test and Trace service."
Harding and Gould did not give a date for when the country's revamped app would be released, though officials said the fall would be the most likely time frame.
The decision represents a blow for Britain's efforts to show that it is at the forefront of tackling the global pandemic.
Unlike other countries like Germany, and many U.S. states, London had decided initially not to work with Google and Apple, which would only allow access to their mobile phone technology to government apps that stored sensitive data on people's mobile devices.
British officials, along with their counterparts in France, had balked at the American tech giants' demands that data should remain decentralized. They said it was preferable to collect people's information into one central server so that researchers could better analyze the spread of the disease. (Paris released its app in early June, though it has so far only been downloaded by a fraction of the country's population.)
But now, amid ongoing technical problems, which have resulted in months of delays before the British app can be released nationwide, the U.K. has decided to fall in line with other countries and work with Google and Apple directly on its coronavirus digital tracing tool.
Coronavirus apps use a device's bluetooth mobile technology to determine if someone has been in close contact with another person infected with the virus, so that people can be informed if they need to isolate themselves.
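As a purely illustrative aside (this is not the NHS app's or Apple and Google's actual algorithm), the core idea can be reduced to checking whether the Bluetooth signal implies sustained closeness. A toy sketch:

```python
# Illustrative only: a naive exposure check from Bluetooth sightings.
# Real apps use calibrated attenuation models and privacy-preserving key exchange.

RSSI_THRESHOLD = -65      # rough stand-in for "within about two metres"
MIN_MINUTES = 15          # rough stand-in for "long enough to matter"

def likely_exposure(sightings):
    """sightings: list of (minute, rssi) readings for one anonymous contact."""
    close_minutes = {minute for minute, rssi in sightings if rssi >= RSSI_THRESHOLD}
    return len(close_minutes) >= MIN_MINUTES

readings = [(m, -60) for m in range(20)]       # 20 minutes of strong signal
print(likely_exposure(readings))               # True -> notify the user
```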
British officials acknowledged that the country’s standalone app had not been accurate in identifying if people had been in contact with someone who had the coronavirus.
On devices using Google's Android operating system, for instance, the U.K. digital tool was 75 percent effective. But on Apple devices, that figure fell to a mere 4 percent. Coronavirus apps that relied on technology provided by the tech giants were accurate in 99 percent of instances, U.K. officials noted.
Despite the setback, London may not be as far behind others in using smartphone apps to identify who has been infected with COVID-19.
Other countries that have released their own apps have faced problems with not enough people downloading the digital tracing tools. Norway, for instance, paused its own app this week after only 14 percent of the population had signed up.
Privacy campaigners also have warned against government surveillance by the back door if officials are allowed to use people’s sensitive health data for purposes other than to combat COVID-19.
Security experts have similarly raised questions about using bluetooth to identify who has been in touch with those infected by the coronavirus.
Developers behind Italy's app said it is highly likely that these tools would lead to both false positives and negatives because of the inaccuracy of the mobile device technology. British officials also said that Google and Apple's solution is still not accurate enough to tell who has been in proximity with someone suffering from the virus, though countries like Germany, Spain and Ireland are still using their technology for their national apps.
For now, the U.K. plans to rely on people to conduct so-called contact tracing, or calling those infected with the virus to determine whom they have been in contact with. That analogue system has had its own difficulties, with many people who had been contacted either not answering the tracers' calls or declining to say with whom they had been in contact.
Currently, the U.K. has the third highest global death toll, at 42,200 people, behind the U.S. and Brazil, according to figures from Johns Hopkins University. | https://www.politico.eu/article/uk-changes-course-on-coronavirus-app/?utm_source=RSS_Feed&utm_medium=RSS&utm_campaign=RSS_Syndication |
Although English and Spanish both have the voiceless stops /ptk/, they differ in VOT; English has long-lag voiceless stops and Spanish has short-lag. This difference means that native English-speaking learners of Spanish are likely to transfer the long voice lag typical of their first language (L1) to Spanish voiceless stops. This study measured the VOT of 20 native English-speaking learners of Spanish, each with a length of residence (LOR) in a Spanish-speaking country of almost 2 years. The study participants were found to produce voiceless stops intermediate to the averages of their L1 (American English) and L2 (Spanish), with some speakers producing voiceless stops with the range observed for Spanish. A significant main effect on VOT was found for all the variables of linguistic context tested: place of articulation, word-initial vs. -internal position, stress, preceding segment and following segment. A significant main effect was also found for speech style, percentage of communication done in Spanish with native Spanish speakers while abroad, years of formal L2 instruction prior to stay abroad, and time spent each week speaking Spanish with native speakers since their return home. While the extra-linguistic variables are correlated with more target-like VOT, the amount of communication done in the L2 with other native English L2 learners of Spanish was correlated with longer VOTs, i.e. less target-like VOTs, possibly due to reinforcement of L1 transfer habits.
Crane, Mary Williams, "Acquisition of Spanish Voiceless Stops in Extended Stays Abroad" (2011). All Theses and Dissertations. 2707. | https://scholarsarchive.byu.edu/etd/2707/ |
WhatsApp will not delete any account for not accepting its new privacy update, but users not agreeing to the controversial terms after “several weeks” will not be able to access their chat list, and eventually, will not be able to answer incoming phone or video calls over the app.
It, however, did not divulge the timelines set for these reminders. Explaining the course of action after 'persistent' reminders are sent to users, WhatsApp said: "At that time, you'll encounter limited functionality on WhatsApp until you accept the updates. This will not happen to all users at the same time. You won't be able to access your chat list, but you can still answer incoming phone and video calls."
"If you have notifications enabled, you can tap on them to read or respond to a message or call back a missed phone or video call," it said. The messaging platform said that after a few weeks of limited functionality, users who still haven't accepted the terms won't be able to receive incoming calls or notifications, and WhatsApp will stop sending messages and calls to their phones.
WhatsApp said it won’t delete the users’ accounts if they haven’t accepted the update but highlighted that its existing policy related to inactive users will apply. WhatsApp accounts are generally deleted after 120 days of inactivity, wherein inactivity refers to users not connecting to the messaging platform.
While the company did not respond to specific queries about these reminders, how long they will run and other details, a WhatsApp spokesperson said: "We'll continue to provide reminders to those users within WhatsApp in the weeks to come. We've spent the last several months providing more information about our update to users around the world."
"In that time, the majority of people who have received it have accepted the update and WhatsApp continues to grow. However, for those that have not yet had a chance to do so, their accounts will not be deleted or lose functionality on May 15," the spokesperson said.
Scientists around the world are racing to find the best medicine for COVID-19
The new coronavirus pandemic has been going on for three months, but it is still not clear which drug will fight the virus, The Verge reported. As the public health crisis has grown, the scientific community has been looking for answers at an unprecedented rate. When the new coronavirus raged in China in January and February, researchers and doctors quickly set up dozens of clinical trials to test existing drugs against COVID-19, the disease caused by the new coronavirus. But so far, studies in China have not yielded enough data to provide a definitive answer.
"We commend researchers around the world for working together to systematically evaluate experimental therapies," WHO Director-General Tedros Adhanom Ghebreyesus said in a press release. "Multiple small trials using different methods may not provide us with the clear and strong evidence we need about which treatments help to save lives."
To gain "clear and strong evidence", WHO is conducting a multi-country clinical trial to test four drug therapies for COVID-19: the experimental antiviral drug remdesivir; the antimalarial drug chloroquine (or the related hydroxychloroquine); the two HIV drugs lopinavir and ritonavir; and the same two HIV drugs combined with the anti-inflammatory interferon beta.
The trial will be flexible, and treatments can be added or removed over time. That makes it similar to an adaptive trial that the U.S. National Institute of Allergy and Infectious Diseases began in February, which was originally designed to test remdesivir but could be extended to other drugs. The United States is not currently involved in the WHO trial.
Hundreds of other clinical trials are under way, and other teams are continuing to test the drugs chosen by the WHO. Below is a breakdown of some of the drugs that researchers are studying.
Chloroquine and hydroxychloroquine
Studies have found that hydroxychloroquine and the related chloroquine can prevent the new coronavirus from infecting cells in the laboratory, and some evidence suggests they may help patients with COVID-19. Scientists have experience with the drug because it has been used against malaria for decades. "It's a known drug," said Caleb Skipper, a postdoctoral researcher in infectious diseases at the University of Minnesota, who is conducting smaller trials of the drug. Scattered laboratory data over the past few years have shown that the drug has some antiviral activity.
Skipper’s trial is looking at whether hydroxychloroquine can prevent people exposed to the virus from going on to develop serious disease. The team hopes to recruit high-risk health care providers exposed to the virus to participate in the trials.
Skipper says the goal is to get the drug into a person's system as soon as possible. "Especially for viruses, the earlier your ability to suppress virus replication, the better your condition will be." If a drug works, it is more likely to work in the early stages of the disease, he said. "If you can find someone early and provide treatment, there will be much less early virus replication."
Skipper said the available evidence on hydroxychloroquine points in the right direction, but all research on the drug is still at a very early stage. “There is a long way to go before it proves effective,” he said. “
Despite the limited evidence available, public figures, including Elon Musk and Trump, are promoting the message that hydroxychloroquine and chloroquine are the solution for the new coronavirus. "I feel good about it," Mr. Trump said at a news conference on Friday. "It's just a feeling of mine. You know, I'm a smart guy. I feel good. You'll see it soon."
Similar hype has led to a surge in demand for drugs, and manufacturers are increasing production. Two Nigerians have been poisoned by overuse of the drug after Trump said chloroquine could cure COVID-19. Those who use it for other diseases, such as lupus, are struggling to get their usual supply.
It is clear that there is still no conclusive evidence that chloroquine can treat COVID-19. And, as scientists know, treatments that look promising on the basis of anecdotes or "feelings" usually do not pan out: most clinical trials fail, a pattern they have seen repeatedly as work on coronavirus treatments intensifies.
Lopinavir/Ritonavir
In February, doctors in Thailand said they had cured patients with COVID-19 pneumonia by combining lopinavir/ritonavir with anti-flu drugs. WHO is testing this drug combination in its trial, as well as in combination with the anti-inflammatory interferon beta, which is naturally produced in the human body and helps protect against viruses. The combination appeared to help patients during the SARS and MERS outbreaks.
But a clinical trial of the two drugs in China has just found that COVID-19 patients who took them did not improve faster than those who did not.
The study, published this week, focused on a group of 199 seriously ill patients, which may help explain why the drugs appeared ineffective: they were given to people who were already seriously ill. But Timothy Sheahan, a coronavirus expert and an assistant professor at the Gillings School of Global Public Health at the University of North Carolina at Chapel Hill, said he wasn't surprised the drugs didn't work. "We've done the work on that particular drug," he said. "The fact that it failed is exactly the same as what we have done in the past."
Remdesivir
The antiviral drug remdesivir was originally developed to treat Ebola, but later studies have shown that it can also block MERS and SARS in cells. Laboratory tests have shown that it can suppress the new coronavirus in cells as well.
There is also anecdotal evidence that remdesivir can help COVID-19 patients, but this does not guarantee that clinical trials will show it is better than a placebo. That's why the data collected through the WHO trial, the adaptive trial in the U.S. and other studies are so important: before the drug is used on a large scale, doctors must make sure it is actually effective.
Other drugs
One drug that is not part of the WHO trial has also drawn attention: some researchers and reports suggest that clinical trial results show efficacy against the new coronavirus. Although data from these trials have not yet been released, Japan is studying the drug more closely. Based on the drug's antiviral activity in cells, Sheahan said he would be surprised if it ultimately works. He says it doesn't work against MERS in cells, and MERS is similar to the new coronavirus.
In addition, some pharmaceutical companies are looking to repurpose anti-inflammatory drugs to relieve lung inflammation in critically ill COVID-19 patients, while others are working to turn the antibodies people develop after infection with the virus into manufactured treatments.
Clinical trials take time to collect data properly, so definitive evidence is unlikely before next month or later. In the meantime, patients have received these drugs through programs that allow doctors to request experimental drugs in certain circumstances, and through off-label prescribing, in which a doctor prescribes an approved drug for an unapproved use. However, clinical trials still need to run in parallel so that, once the best course of action is determined, patients can be treated on the basis of evidence.
Where we are today
We are pleased to announce that we’ve completed the first round of update reboots as of the evening of Thur Jan 11th. These reboots consisted of updated kernels with Kernel Page Table Isolation (KPTI) and CPU firmware (microcode) updates for a handful of our production systems, namely Intel Haswell, Broadwell, Skylake architectures.
In our last update, we detailed that there will likely be multiple reboots over a period of time in order to update CPU firmware (microcode). At this time, we have not cemented a timeline for these updates other than our original note of 2-4 weeks out. The current limitation here is a series of updates from our industry peers indicating there are reliability issues with some of the currently available microcode updates.
We are coordinating with our OEM vendor Dell and industry peers to ensure we are balancing confidence in the stability of microcode updates with the security considerations of the Meltdown & Spectre vulnerabilities.
What was the downtime impact?
Our team scheduled and executed thousands of reboots in the last week, and a great many systems returned online without incident. It is never easy organizing these kinds of operations, and they always uncover a mix of software and hardware issues.
We had an incidence rate of 1.27%, which we defined as systems that did not return online without intervention. Our internal tracking recorded each one of these incidences at the point at which they were handed off to our data center operations team. We subsequently validated patches against these systems along with a full audit to scope any systems that we might have missed updating. Subsequently, we had a little under two dozen systems that were either missed, skipped or required additional work to patch that were part of Thursday, January 7th's maintenance window.
The average downtime per-system through Saturday, January 6th was 8 minutes 39 seconds. The longest downtime (including systems with incidence events) was 2 hours 19 minutes. There was a single system that had returned online after an extended fsck with a corrupted data volume. The volume in question on this system was restored from our continuous data protection backups and was, given the restore window, effectively offline for 3 hours 31 minutes.
The average downtime per-system from Sunday, January 7th to Thursday, January 11th was 2 minutes 51 seconds. The longest downtime (including systems with incidence events) was 41 minutes.
The tangible reduction in downtime and incidence rate on and after Sunday, January 7th was the result of improvements in our procedures and the introduction of kexec. The kexec resource allowed our team to complete kernel upgrades without fully power cycling our servers, avoiding lengthy BIOS/POST boot delays.
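The per-window figures above come down to simple aggregation over per-system downtime records. A rough sketch of that bookkeeping, with invented sample records rather than our monitoring data, is shown below:

```python
# Sketch: aggregate per-system reboot records into the metrics reported above.
# The sample records are invented; real data comes from our monitoring pipeline.

records = [
    {"host": "web01", "downtime_s": 512,  "needed_intervention": False},
    {"host": "web02", "downtime_s": 171,  "needed_intervention": False},
    {"host": "db01",  "downtime_s": 8340, "needed_intervention": True},
]

total = len(records)
avg_s = sum(r["downtime_s"] for r in records) / total
longest = max(records, key=lambda r: r["downtime_s"])
incidence_rate = sum(r["needed_intervention"] for r in records) / total

print(f"average downtime: {int(avg_s // 60)}m {int(avg_s % 60)}s")
print(f"longest downtime: {longest['host']} at {longest['downtime_s'] / 60:.0f} minutes")
print(f"incidence rate: {incidence_rate:.2%}")
```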
What is the performance impact?
The topic of performance, outside of downtime itself, is the most frequent inquiry our customer service team has been receiving regarding the updates. This is understandable, but much of the media attention, which claims broad performance reductions, is a bit misleading.
We’ve thoroughly tested our platform for the workloads important to our customers. These tests consisted of measuring the performance pre and post kernel updates against PHP execution time, FPM threading, static content requests to Apache and requests per-second to Varnish, Redis and MySQL. We’ve observed a negligible but measurable performance impact averaging about 5%.
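As a rough illustration of that style of before/after measurement, one might time a batch of requests against a service and compare the averages from runs taken before and after the kernel update. The endpoint and request count below are placeholders, not our production benchmark harness; deltas gathered this way across the PHP, Varnish, Redis and MySQL workloads are what averaged out to the roughly 5% figure above.

```python
# Sketch of a before/after latency probe (placeholder URL and request count).
# Run once pre-update and once post-update, then compare the printed averages.
import time
import urllib.request

URL = "http://app.example.internal/health"   # placeholder endpoint
N_REQUESTS = 200

def average_latency_ms(url, n):
    total = 0.0
    for _ in range(n):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        total += time.perf_counter() - start
    return 1000.0 * total / n

if __name__ == "__main__":
    print(f"average latency: {average_latency_ms(URL, N_REQUESTS):.2f} ms")
```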
That said, there is a caveat here. We have found that systems and resources that are heavily loaded toward their upper performance limits are significantly impacted post-updates. We have very few systems within our infrastructure that we would describe as anywhere near upper performance limits (overloaded). This is due to very strict user density limits on how our servers are filled and equally strict hardware build requirements across all our product lines.
We break out our infrastructure by logical region boundaries, similar to other providers. We have been closely monitoring metrics on a per-region (e.g. us-midwest-1) and per-role (e.g. cluster load balancer) basis. Our data reinforced over the last week that we have not observed any broad performance impact.
Below are graphs for our two largest regions, us-midwest-1 and uk-south-1. These graphs represent the overall Load Average (a broad measure of system utilization) and CPU Idle Time (a measure of CPU utilization) for all systems in the respective regions. The time span on these graphs is from January 1st 2018 00:00:00 UTC through January 8th 2018 12:00:00 UTC. The red vertical lines indicate the point at which reboots were conducted in the respective regions.
us-midwest-1
uk-south-1
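For readers who want to reproduce these two metrics on their own hosts, both Load Average and CPU Idle Time can be sampled directly from /proc on Linux. The sketch below reads them for a single machine; region-wide aggregation and graphing are handled by a separate monitoring stack and are out of scope here.

```python
# Sketch: sample the two metrics graphed above -- load average and CPU idle
# time -- straight from /proc on a single Linux host.
import time

def load_average():
    with open("/proc/loadavg") as f:
        one, five, fifteen = f.read().split()[:3]
    return float(one), float(five), float(fifteen)

def cpu_idle_percent(interval=1.0):
    def snapshot():
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        return sum(fields), fields[3]          # total jiffies, idle jiffies
    total1, idle1 = snapshot()
    time.sleep(interval)
    total2, idle2 = snapshot()
    return 100.0 * (idle2 - idle1) / (total2 - total1)

if __name__ == "__main__":
    print("load average (1/5/15 min):", load_average())
    print("cpu idle %:", round(cpu_idle_percent(), 1))
```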
Next Steps
The next steps for our team consist of monitoring for continued performance impacts and assessing CPU firmware (microcode) updates.
Performance
We will continue monitoring for performance impacts, and we are continuing to thoroughly assess, on a daily basis, the performance of our platform against the KPTI kernel updates. If at any time we discover that performance impacts are deeper than already stated, we will issue an update with details.
CPU Firmware (microcode) Updates
Our teams are monitoring the status of CPU firmware (microcode) updates broadly across the technology space and preparing to validate the reliability and performance of those updates. When we have a confidence level that we feel balances reliability, performance and security in those updates, we will announce our next round of maintenance reboots.
We will attempt to use ‘kexec’ during the CPU firmware (microcode) updates, if they can be applied with the Linux kernel microcode loader, to minimize downtime. A process around these updates is already being worked on.
When we have more information available and more accurate timelines for continued updates, we will send out notifications accordingly as we begin to schedule emergency maintenance windows.
We appreciate your understanding and patience as we complete this process. If you have any questions or concerns, please reach out to our Support team via https://portal.nexcess.net.
Our earlier Meltdown & Spectre Vulnerability posts can be found at: | https://blog.nexcess.net/2018/01/14/update-2-nexcess-response-to-side-channel-speculative-execution-meltdown-spectre-vulnerabilities/ |
To practice Kundalini Meditation, knowledge of the chakras is essential. There are six chakras along the Sushumna Nadi, leading up to the final chakra, the Sahasrara Chakra.
Chakras are storage places for subtle and vital energy.
Chakras are also centres of consciousness with specific tones of awareness and bliss.
Chakras have corresponding centres in the spinal cord and nerve plexuses of the gross physical body, with which they are closely related.
The location of chakras and their corresponding centres in the physical body are –
1. Muladhara Chakra (Root Chakra) – at the lower end of the spinal column, corresponding to the sacral plexus.
2. Swadhisthana Chakra (Sacral Chakra) – in the region of the genital organs, corresponding to the prostatic plexus.
3. Manipura Chakra (Solar Plexus) – at the navel, corresponding to the solar plexus.
4. Anahata (Heart Chakra) – at the heart, corresponding to the cardiac plexus.
5. Vishuddha (Throat Chakra) – in the throat region, corresponding to the laryngeal plexus.
6. Ajna (Third Eye Chakra) – between the eyebrows, trikuta, corresponding to the cavernous plexus.
7. Sahasrara (Crown Chakra) – at the crown of the head, corresponding to the pineal gland.
During Kundalini Meditation,
Each chakra is visualised as a lotus with a certain number of petals.
The number of petals is determined by the number and position of the nadis that emanate from the chakra and give it the appearance of a lotus.
Petals In Each Chakra
Muladhara – 4 petals.
Swadhisthana – 6 petals.
Manipura – 10 petals.
Anahata – 12 petals.
Vishuddha – 16 petals.
Ajna – 2 petals.
Sahasrara – 1000 petals.
Associated with each petal is one of the fifty Sanskrit letters, representing the vibration produced on it by the kundalini as it passes through the chakra. These sounds exist in latent form, and when manifested as vibrations on the nadis, can be felt during concentration.
Besides petals and sound vibration, each chakra has its geometric form representing a specific power, as well as its colour, function, element, presiding deity and bija, or mystic vibration.
Colour Of Different Chakras.
Muladhara – Red
Swadhisthana – Orange
Manipura – Yellow
Anahata – Green
Vishuddha – Blue
Ajna – Indigo / Purple
Sahasrara – Purple / Purplish white
Element Of Each Chakra
Muladhara – Earth
Swadhisthana – Water
Manipura – Fire
Anahata – Air
Vishuddha – Ether
Ajna – Light
Sahasrara – Thought
Function Of Each Chakra
Muladhara – Safety, grounding, right to live
Swadhisthana – Emotions, creativity, sexuality
Manipura – Will, social self, power
Anahata – Compassion, love, integration
Vishuddha – Personal truth, etheric, expression
Ajna – Extrasensory perception, intuition, inspiration
Sahasrara – Wisdom, transcendence, universality
Presiding Deity Of Each Chakra
Muladhara – Lord Ganesha (Removal of obstacles)
Swadhisthana – Lord Brahma is god of creative energy, represents the creative energies of the Sacral Chakra.
Manipura – Vishnu is the great preserver and peace bringer who brings positive emotions through the Stomach Chakra.
Anahata – Rudra is the unpredictable god of the weather who brings sudden change and exerts his power over the Heart Chakra.
Vishuddha – Isvara, an aspect of Shiva as ruler of the universe, is the presiding god of the Throat Chakra.
Ajna – Paramashiva is an aspect of Shiva as the supreme self, the highest development of humanity before uniting with the divine in the Crown Chakra.
Sahasrara – Shiva resides in the Crown Chakra as the bringer of liberation and ecstasy.
When attempting to locate the chakras from the back, one moves his concentration directly upward along the spinal cord, from chakra to chakra.
If approaching from the front, one moves from the base of the spine up to the navel, the heart, the throat, etc.
At all times the consciousness is kept internalised and receptive to experiencing the inner vibrations indicating an energy centre. In all exercises, a comfortable meditative posture should be assumed, a straight spine is essential.
One should focus on each chakra while chanting Om or any other mantra at different pitches. Fixing the concentration on the Muladhara chakra, Om is chanted at the lowest pitch. Then, moving up the spinal cord to the area of each successive centre, the pitch is raised higher each time.
When the Kundalini is awakened, it does not proceed directly to the Sahasrara unless one is an exceptionally pure yogi. It must be moved up from one chakra to another, and a great deal of concentration and patience is required.
The speed at which the Kundalini is aroused depends upon the aspirant’s purity, stage of evolution, dispassion, purification of the psychic nerves and vital sheath, and yearning for liberation.
Learn Kundalini meditation techniques from Vivek? | https://www.growwithvivek.com/a-beginners-guide-to-chakras-for-kundalini-meditation/ |
Objectives The sense of smell is important as a warning system, in social communication and in guiding food intake. Impairment is common, and cases are increasing following COVID-19. Olfactory dysfunction may lead to decreased quality of life. There are several established ways to assess olfaction including the "Sniffin' Sticks" which are a validated test for healthy and diseased populations. Methods The odor threshold is traditionally determined using a single staircase procedure, with narrow or wide step. We investigated a Bayesian adaptive algorithm (QUEST) to estimate olfactory threshold in a hyposmic population compared with a healthy control group. Thresholds were measured using the three procedures in two sessions (Test and Retest). Results All the tested methods showed considerable overlap in both groups: there was a positive correlation between the QUEST procedure and classic staircase method (r = 0.88), and high test-retest reliability for all three methods used (Sniffin' Sticks narrow: r = 0.81;Sniffin' Sticks wide: r = 0.95;QUEST: r = 0.80). Conclusions Results from these approaches exhibit considerable overlap with all of them being suitable for clinical use. An advantage of the QUEST method can be the defined number of trials needed to determine an odor threshold.
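For readers unfamiliar with these procedures, the sketch below illustrates the general idea of an adaptive 1-up/2-down staircase for threshold estimation against a simulated observer. It is a generic illustration only, not the validated "Sniffin' Sticks" protocol or the QUEST implementation used in the study; the number of dilution levels, the reversal rule and the psychometric slope are assumptions.

```python
# Generic 1-up/2-down staircase sketch against a simulated observer.
# Not the Sniffin' Sticks or QUEST procedure described above.
import math
import random

def detected(level, true_threshold=8.0, slope=1.5):
    """Simulated observer: detection probability rises with concentration level."""
    p = 1.0 / (1.0 + math.exp(-(level - true_threshold) / slope))
    return random.random() < p

def staircase(n_levels=16, n_reversals=7, max_trials=200):
    level, direction = 1, +1            # start at the weakest dilution, moving up
    hits, reversals = 0, []
    for _ in range(max_trials):         # trial cap guards this sketch
        if len(reversals) >= n_reversals:
            break
        if detected(level):
            hits += 1
            step = -1 if hits == 2 else 0   # two hits in a row -> weaker odorant
            if hits == 2:
                hits = 0
        else:
            hits, step = 0, +1              # a miss -> stronger odorant
        if step:
            if step != direction:           # direction change counts as a reversal
                reversals.append(level)
                direction = step
            level = min(max(level + step, 1), n_levels)
    last = reversals[-4:]                   # threshold: mean of last four reversals
    return sum(last) / max(len(last), 1)

if __name__ == "__main__":
    print(round(staircase(), 2))
```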
ABSTRACT
The current COVID-19 or Sars-CoV-2 pandemic increased awareness of hyposmia or anosmia, as this can be an accompanying symptom. In mild cases, anosmia without rhinorrhea can be the only presenting symptom of this infection. Timely identification can lead to early detection of otherwise asymptomatic carriers. History taking and essential clinical assessment with appropriate protective measures can be performed in patients in whom COVID-19 is suspected. Patients with anosmia without nasal obstruction should be considered COVID-19 suspect and this should initiate testing or self-isolation. As for treatment of hyposmia or anosmia, the authors do not advise treatment with systemic corticosteroids in patients with COVID-19. Based on expert opinion, nasal corticosteroids can be considered, with a preference for spray formulation. Patients who were already using topical or inhalation corticosteroids for proven pre-existing disease (such as asthma and/or allergy) should be advised to continue their maintenance therapy. ENT (Ear Nose Throat) focus on hyposmia and anosmia should be continued, to gain additional knowledge of the disease mechanisms of COVID-19 and improve follow-up, not only on the pneumological aspects but also to evaluate the impact on quality of life of potentially long-term side effects caused by anosmia.
ABSTRACT
BACKGROUND: This study aimed to examine whether omega-3 supplementation would support olfactory recovery among postviral olfactory dysfunction patients. METHODOLOGY: Patients with postviral olfactory dysfunction were included in this non-blinded, prospective pilot study. Structured medical history was taken from the patients, including the following: age, sex, history of COVID-19 infection, and duration of symptoms. Patients were randomly assigned to receive olfactory training only (control group) versus olfactory training with omega-3 supplementation (treatment group). All patients exposed themselves twice a day to four odours (phenyl ethyl alcohol [rose], eucalyptol [eucalyptus], citronellal [lemon], and eugenol [cloves]). Olfactory function was measured before and after training using 'Sniffin' Sticks', comprised of tests for odour threshold, discrimination, and identification. The average interval between olfactory tests was 3 months. RESULTS: Fifty-eight patients were included in the study, 25 men and 33 women. Generally, an improvement in olfactory scores was observed. Compared to the control group, the improvement in odour thresholds was more pronounced in the omega-3 group. Age, sex, and duration of symptoms had no effect on olfactory scores among both control and treatment groups. CONCLUSION: Overall, the present results indicate that omega-3 supplementation may be an option for adjunct therapy with olfactory training in patients with postviral olfactory dysfunction.
Subject(s)COVID-19 , Olfaction Disorders , Dietary Supplements , Female , Humans , Male , Odorants , Olfaction Disorders/diagnosis , Olfaction Disorders/etiology , Olfaction Disorders/therapy , Pilot Projects , Prospective Studies , Sensory Thresholds , Smell
ABSTRACT
The olfactory bulb (OB) plays a key role in olfactory processing;its volume is important for diagnosis, prognosis and treatment of patients with olfactory loss, e.g. due to a Covid-19 infection, neurodegenerative diseases or other causes. So far, measurements of OB volume have been limited to quantification of manually segmented OBs, which makes its application in large scale clinical studies infeasible. The aim of this study was to evaluate the potential of our previously developed automatic OB segmentation method for clinical measurements of OB volume. The method employs convolutional neural networks that localize the OBs and subsequently automatically segment them (Noothout et al., 2021). In previous work, we showed that this method accurately segmented the OBs resulting in a Dice coefficient above 0.8 and average symmetrical surface distance below 0.24 mm. Volumes determined from manual and automatic segmentations were highly correlated (r=0.79, p<0.001) and the method was able to recognize the absence of an OB. Here, we included MRI scans of 181 patients with olfactory loss from the Dutch Smell and Taste Center. OB volumes were computed from automatic segmentations as described above. Using a multiple linear regression model, OB volumes were related to clinical outcome measures. Age, duration and etiology of olfactory loss, and olfactory ability significantly predicted OB volume (F(5, 172) = 11.348, p<0.001, R2 = .248). The results demonstrate that our previously described method for automatic segmentation and quantification of the OB can be applied in both research and clinical populations. Its use may lead to more insight in and application of the OB in diagnosis, prognosis and treatment of olfactory loss. We aim to extend our research to other populations of patients with olfactory loss.
ABSTRACT
The corona pandemic made it painfully clear to a broader public that there are limited options for the treatment of olfactory loss. Hence, the title of the symposium is provocative. Having said this, major advances in the understanding of olfactory loss have been made during the last 20 years. Several options for treatment have been investigated, so that their possibilities and limitations are now clearer. The symposium will almost exclusively include presentations from medical doctors who see patients with olfactory loss on a daily basis. Speakers come from the USA, the UK and France, and all of them are widely recognized researchers. First, Katie Whitcroft from London will talk about corticosteroids, which are the most frequently used drugs in the treatment of olfactory loss. Vijay Ramakrishnan from Aurora will deal with the nasal microbiome, which may play a major role in olfactory loss. Andrew Lane from Baltimore will then talk about the most recent advances in the understanding of the mechanisms of olfactory loss associated with inflammatory conditions - which are the cause of approximately 2/3 of all olfactory disorders, apart from aging. Finally, Moustafa Bensafi from Lyon will shed light on current developments in new therapeutic options including olfactory implants.
ABSTRACT
This manuscript aims to provide an overview of the etiology and diagnosis of olfactory and gustatory disorders. Not only are they common with about 5% of the population affected, but olfactory and gustatory disorders have recently gained attention in light of the rising SARS-CoV2 pandemic: sudden loss of smell and/or taste is regarded as one of the cardinal symptoms. Furthermore, in the early diagnostics of neurodegenerative diseases, olfactory disorders are of great importance. Patients with olfactory dysfunction often show signs of depression. The impact of olfactory/gustatory disorders is thus considerable, but therapeutic options are unfortunately still limited. Following a description of the etiology, the diagnostic and therapeutic options are discussed on the basis of current literature. Potential future treatments are also addressed, e.g. autologous mucosal grafts or olfactory implants.
Subject(s)COVID-19 , Olfaction Disorders , Humans , Olfaction Disorders/diagnosis , Olfaction Disorders/etiology , Olfaction Disorders/therapy , SARS-CoV-2 , Smell , Taste Disorders/diagnosis , Taste Disorders/etiology , Taste Disorders/therapy
ABSTRACT
BACKGROUND: Using an age and gender matched-pair case-control study, we aimed to estimate the long-term prevalence of psychophysical olfactory, gustatory, and chemesthesis impairment at least one year after SARS-CoV-2 infection, considering the background of chemosensory dysfunction in the non-COVID-19 population. METHODOLOGY: This case-controlled study included 100 patients who were home-isolated for mildly symptomatic COVID-19 between March and April 2020. One control regularly tested for SARS-CoV-2 infection and always tested negative was matched to each case according to gender and age. Chemosensory function was investigated by a comprehensive psychophysical evaluation including ortho- and retronasal olfaction and an extensive assessment of gustatory function. Differences in chemosensory parameters were evaluated through either Fisher's exact test or the Kruskal-Wallis test. RESULTS: The psychophysical assessment of chemosensory function took place after a median of 401 days from the first SARS-CoV-2 positive swab. The evaluation of orthonasal smell identified 46% and 10% of cases and controls, respectively, having olfactory dysfunction, with 7% of COVID-19 cases being functionally anosmic. Testing of gustatory function revealed 27% of cases versus 10% of controls showing gustatory impairment. Nasal trigeminal sensitivity was significantly lower in cases compared to controls. Persistent chemosensory impairment was associated with emotional distress and depression. CONCLUSION: More than one year after the onset of COVID-19, cases exhibited an excess of olfactory, gustatory, and chemesthesis disturbances compared to matched-pair controls, with these symptoms being associated with emotional distress and depression.
Subject(s)COVID-19 , Olfaction Disorders , Case-Control Studies , Follow-Up Studies , Humans , Olfaction Disorders/epidemiology , Olfaction Disorders/etiology , Prevalence , SARS-CoV-2 , Smell , Taste Disorders/epidemiology , Taste Disorders/etiology
ABSTRACT
Introduction The worldwide outbreak of COVID-19 is progressing rapidly and represents a challenge for public health systems and hospital facilities. Surgical wards have shut down in order to increase the capacity of intensive care units and free up physicians. Nevertheless, patients with COVID-19 will need surgical procedures in the future as well. Methods We report on our first experience, with respect to the expenditure of time, of a patient with a confirmed SARS-CoV-2 infection in our clinic undergoing explantation of a port catheter due to a catheter infection. Results Normally, the explantation of a port catheter is a routinely performed surgical procedure, and the amount of work is low under usual conditions. Nevertheless, COVID-19-positive patients may be an exception. In this case the duration of the surgical procedure was 125 minutes, whereas the mean duration of the last five such procedures was only 50 minutes. Furthermore, postoperative ward rounds take longer due to the additional personal protection procedures required. Conclusion Patients with a SARS-CoV-2 infection will be a challenge for surgical disciplines too, as the logistical and hygiene overhead is higher. Therefore, surgical capacities may be limited in the future.
ABSTRACT
Taste disorders, impacting well-being and physical health, can be caused by many etiologies including the use of medication. Recently, taste disturbance is also considered as one of the predominant symptoms of COVID-19 although its pathogenesis requires further research. Localized taste disorders may be overlooked considering that whole-mouth taste perception is insured through several mechanisms. Individuals often fail to discern taste from flavor, and interviews/surveys are insufficient to properly assess taste function. Hence, various taste assessment methods have been developed. Among them, psychophysical methods are most widely applied in a clinical context. Less-biased electrophysiological, imaging, or morphological methods are used to a much lesser degree. Overall, more research is needed in the field of taste.
ABSTRACT
Olfactory disorders may be temporary or permanent and can have various causes. Currently, many COVID-19 patients report a reduced or complete loss of olfactory function. A wide range of treatment options have been investigated in the past, such as olfactory training, acupuncture, medical therapy, transcranial magnetic stimulation, or surgical excision of olfactory epithelium, e.g., in severe qualitative smell disorders. The development of a bioelectric nose, e.g., in connection with direct electrical stimulation or transplantation of olfactory epithelium or stem cells, represent treatment options of the future. The basis of these developments and the state of knowledge is discussed in the following work.
Subject(s)COVID-19 , Olfaction Disorders , Electric Stimulation , Humans , Olfactory Mucosa , SARS-CoV-2 , Smell , Stem Cell Transplantation
ABSTRACT
This is a correction notice for article bjz034 (DOI: https://doi.org/10.1093/chemse/bjaa034), published on 22 May 2020. Due to an error in the script used to create subsections of Figure 1, there was both a shift in the intensity data and an erroneous calculation of error bars in all panels. Figure 1 and the accompanying figure legend have been revised to show the correct levels and error bars. This script error only affected visualization of the data in Figure 1 and did not impact the reported data or conclusions. © The Author(s) 2020. Published by Oxford University Press. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
ABSTRACT
Anosmia constitutes a prominent symptom of COVID-19. However, anosmia is also a common symptom of acute colds of various origins. In contrast to an acute cold, it appears from several questionnaire-based studies that in the context of COVID-19 infection, anosmia is the main rhinological symptom and is usually not associated with other rhinological symptoms such as rhinorrhoea or nasal obstruction. Until now, no study has directly compared smell and taste function between COVID-19 patients and patients with other causes of upper respiratory tract infection (URTI) using valid and reliable psychophysical tests. In this study, we aimed to objectively assess and compare olfactory and gustatory functions in 10 COVID-19 patients (PCR diagnosed, assessed on average 2 weeks after infection), 10 acute cold (AC) patients (assessed before the COVID-19 outbreak) and 10 healthy controls, matched for age and sex. Smell performance was assessed using the extended "Sniffin' Sticks" test battery (4), while taste function was assessed using "taste strips" (5). Receiver Operating Characteristic (ROC) curves were built to probe olfactory and gustatory scores in terms of their discrimination between COVID-19 and AC patients. Our results suggest that mechanisms of COVID-19 related olfactory dysfunction are different from those seen in an AC and may reflect, at least to some extent, a specific involvement at the level of central nervous system in some COVID-19 patients. In the future, studies to assess the prevalence of persistent anosmia and neuroanatomical changes on MRI correlated to chemosensory function, will be useful to understand these mechanisms.
Subject(s)COVID-19/complications , Common Cold/complications , Olfaction Disorders , Humans , Olfaction Disorders/diagnosis , Olfaction Disorders/etiology , Smell
ABSTRACT
BACKGROUND: This is a report on the high incidence of olfactory dysfunction in COVID-19 patients in the first cohort of COVID-19 patients in Germany (Webasto cluster). METHODS: Loss of sense of smell and/or taste was reported by 26 of 63 COVID-19 patients (41%), whereas only 31% of the patients experiencing hyposmia had simultaneous symptoms of rhinitis. Smell tests were performed in 14 of these patients and taste tests in 10. The measurements were conducted in a patient care setting in an early COVID-19 cohort. RESULTS: An olfactory disorder was present in 10/14 patients, before as well as after nasal decongestion. In 2 of these patients, hyposmia was the leading or only symptom of SARS-CoV2 infection. All tested patients reported recovery of smell and/or taste within 8 to 23 days. CONCLUSION: The data imply that a) COVID-19 can lead to hyposmia in a relevant number of patients, the incidence was approximately 30% in this cohort; b) in most cases, the olfactory disturbance was not associated with nasal obstruction, thus indicating a possible neurogenic origin; and c) the olfactory disorder largely resolved within 1-3 weeks after the onset of COVID-19 symptoms. There were no indications of an increased incidence of dysgeusia. These early data may help in the interpretation of COVID-19-associated hyposmia as well as in the counseling of patients, given the temporary nature of hyposmia observed in this study. Furthermore, according to the current experience, hyposmia without rhinitic obstruction can be the leading or even the only symptom of a SARS-CoV2 infection. | https://search.bvsalud.org/global-literature-on-novel-coronavirus-2019-ncov/?lang=en&q=au:%22Hummel,%20T.%22 |
For a translation to do justice to the Quran and capture its elegance and vigor, it has to be accurate, smooth, eloquent, and accessible. Unlike most popular translations, The Clear Quran masterfully passes on all counts.
Thousands of hours have been put into this work over the last few years to guarantee accuracy, clarity, eloquence, and flow. To achieve accuracy, the translator has made use of the greatest and most celebrated works of old and contemporary tafsir (Quran commentaries), and shared the work with several Imams in North America for feedback and insight. For clarity, every effort has been made to select easy to understand words and phrases that reflect the beauty, flow, and power of the original text. Along with informative footnotes and surah (chapter) introductions, verses have been grouped and titled based on their themes for a better understanding of the chapters, their main concepts, and internal coherence. Thanks to the dedicated team of scholars, editors, and proofreaders, we believe that the "Clear Quran Translation" is the finest English translation of the Final Revelation.
This translation has been officially approved by Al-Azhar University and endorsed by ISNA (Islamic Society Of North America) and the Canadian Council of Imams.
Sample Translations from 'The Clear Quran Translation'
View PDF of Surah Yusuf Translation from Clear Quran Translation
'Clear Quran Translation' versus other popular English Translations
Verse 21:87
Other Translations: And [remember] him of the great fish when he went off in wrath, thinking that We had no power over him!
The Clear Quran: And ˹remember˺ when the Man of the Whale stormed off ˹from his city˺ in a rage, thinking We would not restrain him.
Arabic: وَذَا النُّونِ إِذ ذَّهَبَ مُغَاضِبًا فَظَنَّ أَن لَّن نَّقْدِرَ عَلَيْهِ

Verse 6:9
Other Translations: And if We had made him an angel, We would have made him [appear as] a man, and We would have covered them with that in which they cover themselves.
The Clear Quran: And if We had sent an angel, We would have certainly made it ˹assume the form of˺ a man—leaving them more confused than they already are.
Arabic: وَلَوْ جَعَلْنَاهُ مَلَكًا لَجَعَلْنَاهُ رَجُلًا وَلَلَبَسْنَا عَلَيْهِمْ مَا يَلْبِسُونَ

Verse 53:27
Other Translations: Those who believe not in the Hereafter, name the angels with female names.
The Clear Quran: Indeed, those who do not believe in the Hereafter, label angels as female.
Arabic: إِنَّ الَّذِينَ لَا يُؤْمِنُونَ بِالْآخِرَةِ لَيُسَمُّونَ الْمَلَائِكَةَ تَسْمِيَةَ الْأُنثَىٰ

Verse 3:106-107
Other Translations: On the Day when faces are whitened and faces are blackened. As for those whose faces are blackened: ´What! Did you become kafir after having had iman? Taste the punishment for your kufr!´ As for those whose faces are whitened, they are in Allah´s mercy, remaining in it timelessly, for ever.
The Clear Quran: On that Day some faces will be bright while others gloomy. To the gloomy-faced it will be said, “Did you disbelieve after having believed? So taste the punishment for your disbelief.” As for the bright-faced, they will be in Allah’s mercy, where they will remain forever.
Arabic: يَوْمَ تَبْيَضُّ وُجُوهٌ وَتَسْوَدُّ وُجُوهٌ فَأَمَّا الَّذِينَ اسْوَدَّتْ وُجُوهُهُمْ أَكَفَرْتُم بَعْدَ إِيمَانِكُمْ فَذُوقُوا الْعَذَابَ بِمَا كُنتُمْ تَكْفُرُونَ * وَأَمَّا الَّذِينَ ابْيَضَّتْ وُجُوهُهُمْ فَفِي رَحْمَةِ اللَّهِ هُمْ فِيهَا خَالِدُونَ

Verse 94:5-6
Other Translations: So surely with difficulty comes ease. Surely with difficulty comes ease.
The Clear Quran: So, surely with hardship comes ease. Surely with ˹that˺ hardship comes ˹more˺ ease.
Arabic: فَإِنَّ مَعَ الْعُسْرِ يُسْرًا * إِنَّ مَعَ الْعُسْرِ يُسْرًا

Verse 12:33
Other Translations: He (imploring God) said: "My Lord! Prison is dearer to me than what they bid me to.
The Clear Quran: Joseph prayed, “My Lord! I would rather be in jail than do what they invite me to.
Arabic: قَالَ رَبِّ السِّجْنُ أَحَبُّ إِلَيَّ مِمَّا يَدْعُونَنِي إِلَيْهِ
BOOK REVIEW BY IMAM OMAR SULEIMAN
(AL-MAGHRIB INSTITUTE)
"Although countless English translations of the Quran have been produced over the last few decades, many translators did not have the necessary qualifications to undertake this mighty task, rendering the Quran inaccessible to some and misunderstood by others. The Clear Quran—which is noted for clarity, accuracy, eloquence, and flow — is indeed a scholarly and timely work that reflects the beauty and relevance of Islam. I highly recommend Dr. Mustafa Khattab's translation."
BOOK REVIEW BY HAFIZ ABDULLAH MUHAMMAD
(POST-GRADUATE ISLAMIC STUDIES FROM SOAS UNIVERSITY OF LONDON AND AL-AZHAR UNIVERSITY)
I have a special interest in English translations of the Qur’an published from 1649 to date. Spanning over three decades, I have collected around 80 complete English translations from different parts of the world. I was also privileged to be in communication with some of the translators, including Professor MAS Abdel Haleem under whose supervision I wrote a Master’s paper on "Qur’an Translations" way back in 1999. I am so delighted to receive today one of the latest translations of the Qur’an published from Canada. The Clear Qur’an is the most user-friendly translation I have ever found and I will use it as a handy reference for many months, if not years, to come In-sha-Allah.
BOOK REVIEW BY DR. ABDUR RAHEEM KIDWAI
(PROFESSOR AT ALIGARH UNIVERSITY, AUTHOR OF THE BOOK 'TRANSLATING THE UNTRANSLATABLE - A CRITICAL GUIDE TO 60 ENGLISH TRANSLATIONS OF QURAN')
"Among the Muslim translators, Dr. Khattab stands out for displaying a thorough understanding of the needs of readers. His translation therefore is most likely to win a wide acclaim."
Title: Clear Quran : Thematic English Translation (Arabic - English Parallel Edition)
Author: Dr. Mustafa Khattab
Publisher: Furqaan Institute Of Quranic Education - USA
Pages: 566
Binding: Paperback With Leather Cover
With two decades of experience in Islamic translation, Dr. Mustafa Khattab is an authority on interpreting the Quran. He was a member of the first team that translated the Ramadan night prayers (Tarawîḥ) live from the Masjid Al-Haram (Makkah) and Masjid-e-Nabwi (Madeenah) during 2002-2005. He memorized the entire Quran at a young age, and later obtained a professional ijâzah in the Ḥafṣ style of recitation with a chain of narrators going all the way to Prophet Muḥammad (ﷺ). Dr. Khattab received his Ph.D., M.A., and B.A. in Islamic Studies in English with Honors from Al-Azhar University’s Faculty of Languages & Translation. | https://www.tadabburbooks.com/products/clear-quran-thematic-english-translation-pb-edition |
Thank you for visiting my site. It is a joy for me to share my love of art and painting.
I couldn't imagine a day without looking at the world through an artist's eyes.
I am amazed at the peace that painting brings me and the satisfaction of knowing that I can share my vision of God's world through my art. I hope you enjoy my work and please don't hesitate to contact me.
Favoring a loose impressionistic style, Susan's preferred subject matter is landscapes, seascapes, animal life and nature in general. She finds that being an artist allows her the blessing of being a student of God's amazing creation. | https://www.susanjenkinsfineart.com/index.html |
Hippias Of Elis, (flourished 5th century bc, Elis, in the Peloponnese, Greece), Sophist philosopher who contributed significantly to mathematics by discovering the quadratrix, a special curve he may have used to trisect an angle.
A man of great versatility, with an assurance characteristic of the later Sophists, Hippias lectured on poetry, grammar, history, politics, archaeology, mathematics, and astronomy. His vast literary output included elegies and tragedies besides technical treatises in prose. He is credited with an excellent work on Homer, collections of Greek and foreign literature, and archaeological treatises; but nothing remains except a few fragments. He is depicted in Plato’s Protagoras, and two of Plato’s minor dialogues are named after him. | https://www.britannica.com/biography/Hippias-of-Elis |
MOSCOW (AP) — Nearly 40,000 years old and in surprisingly good shape, the carcass of a woolly mammoth has gone on display in Moscow.
The scientists who found the teenage mammoth in 2010 in Russia's far north region of Yakutia have named it Yuka. The carcass had gone on display in Japan and Taiwan before it was exhibited in Moscow Tuesday.
Albert Protopopov, a researcher from Yakutia, said Yuka's carcass bore traces indicating that humans hunted for mammoths during the Ice Age. The young mammoth, aged between six and nine years old when it died, also had injuries left by an encounter with a predator, he said.
Protopopov told The Associated Press that Yuka is an estimated 38,000 years old, while other researchers have put its age at about 39,000.
Yuka was pulled out of permafrost in a spectacular condition, its soft tissues and reddish fur well preserved. Even most of its brain is intact, offering scientists a rare opportunity to study it.
Up to 4 meters (13 feet) in height and 10 tons in weight, mammoths once ranged from Russia and northern China to Europe and most of North America before they were driven to extinction by humans and a changing climate.
Woolly mammoths are thought to have died out around 10,000 years ago, although scientists think small groups of them lived longer in Alaska and on islands off Siberia.
Researchers have deciphered much of the woolly mammoth's genetic code from their hair, and some believe it would be possible to clone them, if living cells are found.
Protopopov, though, was skeptical about that. "It is not possible to find living cells as they don't survive after tens of thousands years," he said. | https://townhall.com/news/world/2014/10/28/nicely-preserved-mammoth-carcass-shown-in-moscow-n1911121 |
Wibalin® is a high-strength, durable, non-woven book covering material, made from selected ECF pulps from sustainable forests. WIBALIN® is REACH compliant and produced in accordance with ISO14001 systems. WIBALIN® is completely non-toxic, biodegradable, recyclable and complies with “Safety of Toys Regulation EN 71 Part 3 1995”.
Cased books, Book endpapers , Lever arch files, Ring binders, Stationery items, Pocket diaries, Desk diaries, Albums, Saving books, Promotion materials, Displays and Boxes. | http://www.pentamapan.co.id/product/wibalin-fine-linen-flute-satina/ |
A research library of key reports on the sports and physical activity sector from a variety of sources.
About This Report:
It has been an absolute privilege to chair the Fan Led Review of Football Governance working alongside an exceptional panel and a brilliant team of officials. Since the Review began, triggered by the European Super League (ESL) debacle, the Review team heard over one hundred hours of evidence from passionate fans, club leaders, interest groups, football authorities, financial experts and many others who engage day in and day out with football.
Related Reports:
Parklife Football Hubs National Programme
Wed, 26 Oct 2016
The FA
This document sets out the details of the programme, funding process and the journey to deliver a new sustainable model for grassroots football hubs nationwide.
Leadership Insights On Governance In Sport
Sat, 23 Mar 2013
M INC
The business case for good governance in sport, and the case for the Scottish Football Association on how to enable modernisation of governance standards.
Fit for the Future: The Health of Wellbeing and Leisure Services
Mon, 06 Jun 2022
DCN
This report was commissioned by the District Councils' Network, with the aim of evidencing the health economic value of their members' leisure and wellbeing services, and the further impact they could potentially have on reducing health inequalities. It includes estimates of the potential impact of increasing...
PE & School Sport: The Annual Report 2022
Mon, 06 Jun 2022
Youth Sport Trust
This report outlines the current state of PE, school sport and physical activity in England and the issues and challenges facing young people today.
There is a wealth of research and insight which informs our understanding of the importance of activity in children's lives and their engagement in PE and...
Levelling Up White Paper
Wed, 09 Feb 2022
HM Government
Levelling up is not about making every part of the UK the same, or pitting one part of the country against another. Nor does it mean dampening down the success of more prosperous areas. Indeed, by extending opportunity across the UK we can relieve pressures on public services, housing and green fields...
Government response to DCMS Select Committee report on concussion in sport
Wed, 09 Feb 2022
DCMS
This report outlines the government's approach to reducing the risks associated with concussion and head injuries in sport. This work has been developed in parallel with, and greatly benefiting from, the work of the House of Commons' Digital, Culture, Media and Sport Select Committee inquiry into concussion...
Active Lives Children & Young People Survey
Thu, 09 Dec 2021
Sport England
Sport England's Active Lives Children & Young People Survey is the most comprehensive study of activity levels among children and young people aged 5-16 in England. The annual statistics provide detailed insight & understanding around their sport and physical activity habits
Diversity in Sport Governance Survey 2020
Tue, 07 Dec 2021
Sport England & UK Sport
Following the first Leadership Audit conducted in 2018, once again in 2020 board members in organisations funded by us or UK Sport were asked to complete a survey.
Available to download below, the latest Audit captures data related to gender, ethnicity, disability, LGBT+, sexuality and educational background....
The Impact of major Sporting Events
Wed, 01 Dec 2021
UK Sport
Major sport events in the UK could deliver up to £4billion of soft power, trade and investment benefits in the next decade, according to a new report commissioned by UK Sport and the City of London Corporation.
Commissioned in 2020, the findings of UK Sport and the City of London Corporation's report...
Ukactive Partners With Alliance Leisure To Release Active Families Report
Mon, 22 Nov 2021
Alliance Leisure
Alliance Leisure is proud to announce the launch of Active Families, an exploration of the vital role that family life plays in a child's exposure to physical activity and the positive contribution purpose built facilities can play on the journey. The report has been compiled in partnership with ukactive... | https://www.sportsthinktank.com/research,129305.html |
All ebooks currently held on the MyiLibrary platform will be migrating to ProQuest Ebook Central on Wednesday 21st March 2018.
This upgrade will bring a number of additional benefits, including improved citation and note-making tools. However, before the migration, please take note of the following:
- Any Bookshelves you have created within MyiLibrary will not be transferred. Please note the contents of any Bookshelves, to allow you to quickly recreate them in Ebook Central.
- Any saved bookmarks or highlighting will not migrate to Ebook Central.
- If you have added notes to any ebooks, they will not automatically transfer, but can be preserved by following these instructions.
Please contact if you have any questions about these changes. | http://blogs.exeter.ac.uk/librarynews/blog/2018/02/27/myilibrary-ebook-changes/ |
Orbit - LoFi Ambient is a trinity of ambient sound collections covering an expansive range of emotion, each impressed by the dreamy quality for which lofi is adored. Immersed in analog tape, stompbox effects, and modular and standalone synthesis, these dusky sound abstractions are ideal for any cinematic or creative production.
Dark features shadowy cinematic textures, rolling out intriguing drones and building long, evolving passages.
Light unfurls modular magic to deliver saturated, nostalgic textures in intimate layers that form more complete parts.
Melodic gravitates towards the musical, accentuating melodic elements played with vintage and classic synthesizers in a relaxed fashion that somehow always seems familiar. | https://sonalsystem.com/collections/eurorack/products/orbit-lofi-ambient-for-morphagene |
Himal Innovative Development and Research Pvt. Ltd. (HIDR) is a company established by experienced, capable and energetic professionals in the field of management and rights-based development, especially targeting women, children, and marginalised people and communities in Nepal. Since its establishment in 2014 in Kathmandu, Nepal, HIDR has provided multi-disciplinary services. Following a period of significant expansion in research, advocacy, policy analysis, and community empowerment services, it is now one of the growing consultancy firms led by established professional women across various sectors of human development. The HIDR team is a well-established name in the Nepalese consulting business in the areas of study, research, evaluation and publication on the social, economic, educational, cultural, political and overall human rights issues of women, children and marginalized communities.
It also provides management-consulting services to other agencies on issues of human resource management and capacity building. These consultancy services are undertaken either by HIDR itself or in association with other local consulting firms. HIDR is a learning organization that also looks forward to opportunities to associate with international organizations and experts.
Objectives
- Joint advocacy through engaging with members of the Constituent Assembly and members of the Legislative Parliament in warranting a rights-friendly new constitution.
- Reforming government policy and legislation with a greater child rights perspective.
- Working with various levels of policy makers including the government authorities and political parties in Nepal.
- Mapping of current political development, engaging in various levels of advocacy works: planning, strategizing, relation with political leaders, working with CA members and collective joint work with various networks related to child rights, human rights, and women rights along with influential civil society leaders from different discipline.
- Contributions to the constitution making process for securing child rights in the new constitution, through sensitization and capacity building on the child rights issues.
- To work together with different stakeholders including CA members, policy makers, civil society, and other key stakeholders in warranting the child rights friendly constitution of Nepal and other necessary laws and policies. | http://hidrnepal.com/index.php/about-us/introduction |
First thing’s first: Like everything else in the universe, your data is going to die eventually.
Surprised? I don’t blame you. In this digital day and age—in which the internet never forgets, and technology is advancing at an uncomfortably fast rate—it’s easy to forget that nothing actually lasts forever.
Personal photos, music playlists, medical documents, school records… There are plenty of important files we have on our computers that, while we do not access them on the regular, we’d be gutted to lose all the same.
Lest such a tragedy occur, why not consider data archiving?
I’m not talking about simply putting the files on a USB or sticking them into the cloud here. Oh no. Long-term storage requires much more research, preparation, and maintenance than your run-of-the-mill backup plan.
And to save you the headache of having to do all of that yourself, I’ve prepared a handy little guide for you!
By the time you’re done reading it, you’ll have all the information you need to put together a foolproof preservation plan for all of your most precious data.
Archive vs. Backup
Before I say anything further, I want to make this clear: backing up data is not the same as archiving it.
While backup is meant to last a few years, data archiving is meant to last a few decades (minimum!). Due to these two very different goals, very different approaches need to be taken.
What works for backup typically does not work for archiving, and what works for archiving typically does not work for backup. As such, making a distinction between the two is essential when considering your methods of digital data storage.
Backup is primarily used for disaster recovery. You know, in case you spill your morning coffee on your laptop, or your hard drive decides to up and end it all. It functions so that should such disaster strike, you’ll have all your files tucked safely offsite in the cloud and the external hard drive in your work desk (if you follow the 3-2-1 backup strategy that is! You do, don’t you?).
Backup is for files you would need to retrieve soon as possible if something goes awry with your computer. Files that you regularly update, and that you couldn’t go more than a few days without.
Archiving is nothing like that.
Archive storage is for inactive data. Data that you typically write or upload only once, and that you read or need only from time to time. Think primarily personal data such as old family photos, birth certificates, your master’s thesis, and the like.
In a nutshell, it’s important data that you want—and sometimes need—to hold onto, but that you don’t need present on your computer all the time.
So don’t keep it there!
By employing an archiving strategy, all (or most) of your inactive data can be moved out of your systems and safely stored away, and the performance of your active data can be optimized. It’s a win-win!
Storage Options
There’s a whole slew of long term data storage options, and each has a laundry list of pros and cons. Which one you choose depends on your needs, how much data you’re storing, what kind of data you’re storing, and so on, but the overall goal is the same: You want something that is cost-effective and that allows you to easily access your data when needed.
Tape
I know what you’re thinking.
“Who on earth uses tape storage anymore? I thought that went the way of the dinosaurs decades ago.”
Surprise! Magnetic tape storage is still alive and well. In fact, it’s even enjoying a bit of a resurgence in popularity as of late, with more and more people and businesses opting to use the retro storage medium thanks to its durability and reliability. Even Google uses it!
And why wouldn’t they? Magnetic tapes can store massive amounts of data, can be recorded over and repeatedly reused, are budget friendly, and are proven to last ages!
There are some cons to using magnetic tapes, of course. Unlike more modern technology, which employs random access memory, a tape is strictly sequential. Also, its long retrieval times leave something to be desired, and data quality tends to take a nosedive around the 15-year mark.
In short, as long as you don’t store your tapes next to a giant box of magnets, tapes are your best bet for archiving data for the long run.
Average cost: $30 per tape
Average storage capacity: 6 TB per tape
Retrieval speed: 1 TB per hour per tape drive
Power efficiency: Consumes no electricity
Lifespan: 10 to 20 years
Optical media
If you’re unfamiliar with the term, optical media are discs that have been both written and read with laser technology, such as CD and DVD.
One con to using optical media as your archival medium is that it’s one that is most susceptible to being gravely affected by environmental factors—such as exposure to light, dust, heat and pressure—and overuse. It is also extremely prone to damage due to the little amount of protection that it gets from the coating on its readable surface. I mean, who here hasn’t scratched a CD?
That said, if optical media is burned properly, stored in jewel cases, and treated with care, it is a good low-cost archiving option that is easy to use and doesn’t take up much storage space.
Just remember that when it comes to CDs and DVDs, you get what you pay for. Avoid cheap bulk buys that come in stacks or “cakes”, which tend to be of poor quality. Instead, spring for special discs made by big manufacturers specifically for archiving.
Average cost: $2.80 per CD; $3.10 per DVD
Average storage capacity: 700 MB per CD; 4.7 GB per DVD
Retrieval speed: 7.8 MB per second per CD; 10.56 MB per second per DVD
Power efficiency: Consumes no electricity
Lifespan: Manufacturers boast that optical media made specifically for archiving can last anywhere from 30 to over 300 years, but given the fact that the technology has only been available for a little over 30 years, it’s difficult to confirm. I’d take a more conservative guess of 10 years for a recorded piece of optical media—longer if properly stored and handled!
Hard disk drive
The cloud is on the rise, but hard disk drives are still the go-to backup solution for those who prefer having a physical copy of their files as a preventive measure against data loss.
While better suited to backup of active data, a hard disk drive can still work as a short-term solution for archival storage. Emphasis on the ‘short-term’.
Why? Because hard disk drives are designed to be a temporary medium, with most having a lifespan of a mere four years.
You could certainly try to unplug your hard drive and store it in a dust-free and temperature-controlled area, but you’re playing with fire, as stored drives are notorious for not spinning up once reconnected.
And I’m not talking about trying to store it for a decade here—I’m talking about attempting to store it for even just a year.
My advice? If there is data you want to archive, put it on your hard drive, but check it regularly for data degradation and spin up, and look for a more durable, long-lasting solution as soon as you can.
Average cost: $0.06 per GB
Average storage capacity: 1 to 3 TB
Retrieval speed: It depends on the model and interface. Older SATA drives are limited to 1.5 Gb per second, while newer SATA III models are rated at 6 Gb per second; actual sustained read speeds are typically in the 100 to 200 MB per second range.
Power efficiency: Consumes no electricity
Lifespan: Three to four years
Cloud
I know that online storage is all the rage right now, and it’s a great option for regular backup, but I would not recommend it for archiving.
Why you ask?
Because storing your archive data in the cloud is risky—more so than any other option on this list.
One, you don’t have physical access to the hardware on which your precious files are stored, which means that need to trust your chosen service’s ability to properly test and maintain its servers.
Two, you’re entirely reliant on the financial health of the service you’re using. If, by some misfortune, the service goes under, you have absolutely no guarantee that you’ll be able to retrieve your data.
That said, with services such as iDrive, Microsoft Azure and Amazon beginning to introduce archival storage options and put sufficient security measures and guarantees in place, the cloud is becoming a more and more attractive long-term data storage solution.
Average cost: Usually somewhere between $0.25 and $12 per GB per month, depending on the cloud provider
Average storage capacity: Again, it depends on what kind of cloud storage you choose. Online storage services like Dropbox usually ranges between 100 to 500 GB. Providers such as Backblaze, however, have a staggering range of price plans and storage allowances, with some even offering unlimited space (though 1 to 2 TB is more common).
Retrieval speed: Instant
Power efficiency: None, as it’s stored offsite
Lifespan: That’s up to your cloud provider, now isn’t it?
Long-Term Storage Tips
Now I’ve got you up to speed on the different kinds of storage mediums, but the planning doesn’t end there! I’ve got a couple more tips for you to ensure your personal data archiving plan runs smooth as silk.
Be picky about what you archive.
Doing so will help to prevent overloading your chosen medium and to reduce overall cost.
Besides the obvious, such as health records and school degrees, limit your other archival data to things that are important but that you need only in case of emergencies, as well as things that you’d like to preserve for future generations.
Directly engage in the curating of your data.
Proper archiving requires maintenance, upkeep, and regular testing. Take care to check your hard drive disks and CDs for scratches or degradation, as well as the files themselves for readability.
Also, remember to take note of the age of your chosen storage option, and switch to a newer version when it’s nearing the end of its life expectancy.
As I mentioned at the beginning of this post, nothing lasts forever. There is always the risk of technological failure or corruption. But, if you curate properly, you maximize your chances of your data staying readable and accessible 10, 20, and even 50 years or more down the road!
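One concrete way to make that regular testing less tedious is to keep a checksum manifest alongside the archive and re-verify it on a schedule; if a stored file ever fails its checksum, you know it is time to restore it from another copy. A minimal sketch follows, where the folder and manifest names are arbitrary choices, not a standard.

```python
# Minimal fixity-checking sketch: build a SHA-256 manifest for an archive
# folder, then re-run later to detect silent corruption. Paths are arbitrary.
import hashlib
import json
import pathlib
import sys

ARCHIVE = pathlib.Path("archive")          # folder holding the archived files
MANIFEST = pathlib.Path("manifest.json")   # checksum manifest stored alongside

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build():
    manifest = {str(p.relative_to(ARCHIVE)): sha256(p)
                for p in ARCHIVE.rglob("*") if p.is_file()}
    MANIFEST.write_text(json.dumps(manifest, indent=2))

def verify():
    manifest = json.loads(MANIFEST.read_text())
    bad = []
    for name, digest in manifest.items():
        path = ARCHIVE / name
        if not path.is_file() or sha256(path) != digest:
            bad.append(name)
    print("all files intact" if not bad else f"corrupted or missing: {bad}")

if __name__ == "__main__":
    build() if "--build" in sys.argv else verify()
```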
Use more than one archive medium.
Putting all of your eggs in one basket is never a good idea.
Each storage solution mentioned above risks going belly-up—sometimes without warning—so instead of relying on just one, choose two or three.
You’ll sleep better at night knowing that even if, say, your offsite cloud provider is hit by disaster, your irreplaceable files will still be safe and sound on onsite data tape.
Pay attention to new storage and archival options.
You never know when your current method will go out-of-date, so it’s best to be aware of new options on the market and take full advantage of them.
Are you one of those tried-and-true people who don’t think such a thing is necessary?
Remember these?
Yeah, I thought so.
File format is key.
File formats such as ‘.docx’ or ‘.dot’ are program-specific, which means that if the program that supports them has ceased to exist when you try to open your archived data decades down the road, you may not be able to.
To be on the safe side, save things as raw files or as open documents. That way, even if your data’s current form becomes obsolete, your data itself won’t! | https://www.bestbackups.com/blog/7153/everything-you-need-to-know-about-personal-data-archiving/ |
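Along the same lines, it can help to periodically scan the archive for program-specific formats and flag them for conversion to plainer ones. Here is a small sketch; the extension list is only an example, not an authoritative set.

```python
# Sketch: flag archived files whose formats are tied to specific programs.
# The extension list is illustrative only; adjust it to your own archive.
import pathlib
from collections import Counter

RISKY = {".docx", ".doc", ".dot", ".xlsx", ".ppt", ".pptx", ".psd", ".indd"}
ARCHIVE = pathlib.Path("archive")   # placeholder path to your archive folder

def audit(root):
    counts = Counter(p.suffix.lower() for p in root.rglob("*") if p.is_file())
    for ext, n in sorted(counts.items()):
        tag = "consider converting" if ext in RISKY else "ok for now"
        print(f"{ext or '(no extension)'}: {n} file(s) - {tag}")

if __name__ == "__main__":
    audit(ARCHIVE)
```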
FIELD OF THE INVENTION
BACKGROUND
SUMMARY
DETAILED DESCRIPTION INCLUDING BEST MODE
INDUSTRIAL APPLICABILITY
The present invention relates generally to computer systems and, in particular, to hardware processors that implement virtual computing machines.
Java™ is a well known object-oriented programming language which was developed by Sun Microsystems™. Java™ has increased in popularity in recent times, particularly on the Internet, since Java™ is simple, distributed, and portable across platforms and operating systems.
Most conventional programming languages use a compiler to translate the source code of a program into machine code or processor instructions, which are native to a central processing unit (CPU) of a particular operating system. However, once translated, the program will only execute on that particular operating system. In order for the program to be executed on a different operating system, the original source code must be recompiled for the CPU of this different operating system.
Java™ programs are typically compiled for a Java™ Virtual Machine. A Java™ Virtual Machine is an abstract computer that executes the compiled Java programs. The Java™ Virtual Machine is referred to as ‘virtual’ since it is implemented in software on a ‘real’ hardware platform and operating system. Accordingly, the Java™ Virtual Machine needs to be implemented on a particular platform for compiled Java™ programs to be executed on that platform.
The Java™ Virtual Machine sits between the compiled Java program and the underlying hardware platform and operating system. The portability of the Java™ programming language is provided largely by the Java™ Virtual Machine, since compiled Java™ programs run on the Java™ Virtual Machine, independent of whatever may be underneath the Java™ Virtual Machine.
In contrast to conventional programming languages, Java™ programs are compiled into a form called Java™ bytecodes. The Java™ Virtual Machine executes these Java™ bytecodes. So Java™ bytecodes essentially form the machine language of the Java™ Virtual Machine. The Java™ Virtual Machine comprises a Java™ compiler that reads Java™ language source (e.g., in the form of .java files) and translates the source into Java™ bytecodes.
A stream of bytecodes is seen as a sequence of instructions by the Java™ Virtual Machine. Each of these instructions comprises a one-byte opcode and zero or more operands. The opcode indicates to the Java™ Virtual Machine what action to take. Immediately following the opcode may be other information (e.g., operands), if the Java™ Virtual Machine requires such information to perform the particular action.
Each bytecode instruction has a corresponding mnemonic. These mnemonics essentially form the assembly language for the Java™ Virtual Machine. For example, one of the Java™ instructions causes the Java™ Virtual Machine to push a zero onto a Java™ stack. This instruction has the mnemonic ‘iconst_0’, and its bytecode value is 0x03. The iconst_0 instruction does not require any operands.
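To make the bytecode format described above concrete, the following is a minimal sketch, in C, of a software loop that steps through such a stream of one-byte opcodes and their operands. The handful of opcode values shown (iconst_0, bipush, iadd) are genuine JVM values, but the loop itself is only an illustration and not part of the claimed hardware; a hardware implementation would replace the switch statement with the programmable lookup table described later in this document.

#include <stdint.h>
#include <stdio.h>

/* A few illustrative JVM opcode values (not the complete instruction set). */
#define OP_ICONST_0 0x03   /* push the int constant 0                */
#define OP_BIPUSH   0x10   /* push a one-byte operand, sign-extended */
#define OP_IADD     0x60   /* pop two ints, push their sum           */

/* Walk a bytecode stream: fetch a one-byte opcode, then consume whatever
 * operand bytes that opcode requires, as described above.               */
static void walk_bytecodes(const uint8_t *code, size_t len)
{
    size_t pc = 0;                          /* program counter */
    while (pc < len) {
        uint8_t op = code[pc++];            /* fetch the opcode */
        switch (op) {
        case OP_ICONST_0:
            printf("%zu: iconst_0\n", pc - 1);
            break;
        case OP_BIPUSH:
            printf("%zu: bipush %d\n", pc - 1, (int8_t)code[pc]);
            pc += 1;                        /* one operand byte */
            break;
        case OP_IADD:
            printf("%zu: iadd\n", pc - 1);
            break;
        default:
            printf("%zu: unhandled opcode 0x%02x\n", pc - 1, op);
            return;
        }
    }
}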
The virtual hardware of the Java™ Virtual Machine comprises four basic parts: registers, a stack, a garbage-collected heap, and a method area. These parts are abstract, just like the Java™ Virtual Machine they compose, but they must exist in some form in every Java™ Virtual Machine implementation.
The Java™ Virtual Machine can address up to four gigabytes of memory, with each memory location containing one byte. Each register in the Java™ Virtual Machine stores one 32-bit address. The stack, the heap, and the method area are positioned somewhere within the four gigabytes of addressable memory depending on the particular implementation of the Java™ Virtual Machine.
A word in the Java™ Virtual Machine is 32 bits. The Java™ Virtual Machine also has a small number of primitive data types (e.g., byte (8 bits), int (32 bits) and float (32 bits)). These types conveniently map to the types available to a Java™ programmer.
The method area contains bytecodes. As such, the method area is aligned on byte boundaries. The Java™ stack and heap are aligned on word (32-bit) boundaries.
The Java™ Virtual Machine has a program counter and several other general registers that manage the Java™ stack. The Java™ Virtual Machine has only a small number of registers since the bytecode instructions of the Java™ Virtual Machine operate primarily on the Java™ stack. Such a stack-based design allows the instruction set of the Java™ Virtual Machine and the implementation thereof to be small.
As described above, the Java™ Virtual Machine uses a Java™ program counter to maintain where in memory the Java™ Virtual Machine is executing instructions. Other registers point to various parts of the stack frame of a currently executing method. The stack frame of an executing method stores the state (e.g., local variables (LV) and intermediate results of calculations, etc.) for a particular invocation of the method.
As described above, the method area contains the Java™ bytecodes. The program counter always stores the address of some byte in the method area. After a bytecode instruction has been executed, the program counter will contain the address of the next instruction to be executed by the Java™ Virtual Machine. Following execution of an instruction, the Java™ Virtual Machine typically sets the program counter to the address of the instruction that immediately follows the previous one.
The parameters for and results of bytecode instructions are stored in the Java™ stack. The Java™ stack is also used to pass parameters to and return values from methods. Further, the Java™ stack stores the state of each method invocation, where the state of a method invocation is called the method's stack frame, as described above.
The objects of a Java™ program reside in the heap of the Java™ Virtual Machine. Any time memory is allocated with a new operator, the allocated memory comes from the heap. Allocated memory is not able to be freed directly using the Java™ programming language. Instead, the runtime environment maintains the references to each object in the heap. The runtime environment may then automatically free the memory occupied by objects that are no longer referenced.
The Java™ Virtual Machine also comprises a Java™ bytecode interpreter. The Java™ bytecode interpreter converts bytecodes into machine code or processor instructions that are native to a particular CPU. For example, a request to establish a socket connection to a remote machine will involve an operating system call, and different operating systems handle sockets in different ways. The Java™ Virtual Machine handles the socket translation, so that the operating system and CPU architecture on which Java™ programs are running are irrelevant to the program itself.
However, the execution of Java™ programs is relatively slow compared to some programs coded according to a conventional programming language, because of the need for the Java™ bytecodes of the programs to be processed and translated by the Java™ Virtual Machine. For example, for a Java™ program executing on a particular CPU, the CPU must firstly execute the Java™ Virtual Machine to translate the Java™ bytecodes of the program into native instructions. These native instructions must then be executed by the CPU. The translation of the bytecodes into native instructions causes a bottleneck in the execution of the Java™ programs.
The execution of Java™ programs as described above may be compared to a conventional program being executed by a CPU for which the conventional program has been compiled. In this instance, the processor must merely execute the native instructions for the conventional program.
Specialised interpreters have been used to increase the execution speed of the Java™ Virtual Machine and accordingly increase the execution speed of a Java™ program. However, these specialised interpreters often result in both a compile overhead and an additional memory overhead for an operating system in which they are being used. As a result, the use of Java™ has been limited in low memory and low energy consumption implementations.
Another known method of increasing the execution speed of Java™ programs is through the use of a hardware Java™ accelerator such as that disclosed by U.S. Pat. No. 6,332,215 to Patel, et al. This hardware Java™ accelerator implements portions of the Java™ virtual machine in hardware in order to accelerate the operation of an operating system generating Java™ bytecodes. The hardware Java™ accelerator of U.S. Pat. No. 6,332,215 also translates bytecodes into native processor instructions. However, one disadvantage of the hardware Java™ accelerator of U.S. Pat. No. 6,332,215 is that it requires the use of multiple hardware Java™ registers. These hardware Java™ registers are required to store Java™ register files defined in the Java™ virtual machine. The register files contain the state of the Java™ virtual machine and are updated after each bytecode is executed. The need for such multiple hardware Java™ registers complicates the hardware necessary to execute the Java™ programs.
Another hardware Java™ accelerator is that disclosed by U.S. Pat. No. 6,965,984 to Seal, et al. However, the hardware Java™ accelerator of U.S. Pat. No. 6,965,984 is only designed for use with central processing units produced by a company called ARM Limited of Cambridge, England and the instruction set of such ARM central processing units.
Thus, a need clearly exists for an improved and more efficient means of increasing the execution speed of Java™ programs.
It is an object of the present invention to substantially overcome, or at least ameliorate, one or more disadvantages of existing arrangements.
The present invention generally relates to a hardware Java™ bytecode unit for use in translating Java™ bytecodes into native instructions for a particular central processing unit (CPU). The hardware Java™ bytecode unit increases the processing speed of Java™ bytecodes compared to Java™ Virtual Machines implemented purely in software, by using a programmable lookup table to perform the translation.
The hardware Java™ bytecode unit of the present invention minimises hardware complications by converting stack-based Java™ bytecodes into register-based native instructions for a particular CPU using an original CPU register file for all stack operations.
According to one aspect of the present invention there is provided a system comprising:
a central processing unit for use in executing RISC instructions; and
a hardware unit associated with the central processing unit, the hardware unit being configured for translating stack-based instructions into RISC instructions for execution by said central processing unit, wherein the translation is performed using a programmable lookup table.
According to another aspect of the present invention there is provided a system comprising:
a central processing unit for use in executing RISC instructions, said central processing unit comprising a CPU register file; and
a hardware unit associated with the central processing unit, the hardware unit being configured for translating stack-based instructions into RISC instructions using an operand stack configured within the CPU register file, wherein the operand stack is managed by the hardware unit and is used for performing the stack operations necessary in performing said translations.
According to still another aspect of the present invention there is provided a method of translating a stack-based instruction into RISC instructions for execution by a central processing unit, said method comprising the steps of:
downloading the stack-based instruction to a hardware unit associated with the central processing unit;
matching the stack-based instruction to one or more RISC instructions stored in a programmable lookup table, using the hardware unit; and
executing the one or more RISC instructions using the central processing unit.
According to still another aspect of the present invention there is provided an apparatus comprising:
a central processing unit for use in executing RISC instructions; and a hardware unit associated with the central processing unit, the hardware unit being configured for translating stack-based instructions into RISC instructions for execution by said central processing unit, wherein the translation is performed using a programmable lookup table to match stack-based instructions to one or more RISC instructions stored in the programmable lookup table. Other aspects of the invention are also disclosed.
Where reference is made in any one or more of the accompanying drawings to steps and/or features, which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.
It is to be noted that the discussions contained in the “Background” section and that above relating to prior art arrangements relate to discussions of documents or devices which form public knowledge through their respective publication and/or use. Such should not be interpreted as a representation by the present inventor(s) or patent applicant that such documents or devices in any way form part of the common general knowledge in the art.
FIG. 1 shows a hardware Java™ bytecode unit 100 connected to a RISC CPU 102, in accordance with one embodiment of the present invention. The hardware Java™ bytecode unit 100 generates RISC instructions to be executed by the CPU 102, which may be a generic register-based CPU. The principles of the hardware Java™ bytecode unit 100 are not limited to the Java™ programming language. The hardware Java™ bytecode unit 100 may be used with any stack-based language that is to be converted to register-based native instructions. The hardware Java™ bytecode unit 100 may also be used with any programming language which is executed by a virtual machine similar to the Java™ virtual machine.
The hardware Java™ bytecode unit 100 increases the processing speed of Java™ bytecodes compared to Java™ Virtual Machines implemented purely in software, by using a programmable lookup table to perform the translation. Further, the hardware Java™ bytecode unit 100 of the present invention minimises necessary hardware by translating stack-based Java™ bytecodes into register-based RISC instructions for the CPU 102 using a CPU register file for all stack operations.
The CPU register file is used to store general registers defined for a Java™ virtual machine being executed by the CPU 102. The CPU register file is also used to store special registers used by the hardware Java™ bytecode unit 100. In accordance with the preferred embodiment, the CPU register file is used by the CPU 102 both when executing RISC instructions native to the CPU 102 (i.e., when the CPU 102 is operating in “native mode”) and when the hardware Java™ bytecode unit 100 is translating stack-based Java™ bytecodes into register-based RISC instructions (i.e., when the CPU 102 is operating in “Java™ mode”).
The special registers used by the hardware Java™ bytecode unit 100 of the preferred embodiment are not the same as the general registers which are typically operated on by the CPU 102 in executing RISC instructions. The special registers stored in the CPU register file include a Java™ program count (jpc) register, a Java™ stack pointer (jsp) register, a local variable frame pointer (lvfp) register, a number of arguments and local variables (narg_nlocal) register, an upper limit of jsp (jspul) register, a lower limit of jsp (jspll) register, a thread counter (threadcnt) register, a virtual Java™ stack pointer (vjsp) register and a register indicating the number of stack registers used (used). Each of the general and special registers stored in the CPU register file is updated after each bytecode is translated by the hardware Java™ bytecode unit 100. The jpc (or program counter) register keeps track of where in memory the Java™ Virtual Machine should be executing instructions. The other registers will be described in detail below.
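For readers who find a concrete data layout easier to follow, the special registers listed above can be pictured as the C structure below. This is only an illustrative model: the 32-bit field widths are an assumption based on the 32-bit word size of the Java™ Virtual Machine described earlier, not a definition of the actual hardware registers.

#include <stdint.h>

/* Illustrative model of the special registers named above.  Widths are
 * assumed to be 32 bits, matching the JVM word size described earlier. */
struct special_registers {
    uint32_t jpc;          /* Java program counter                       */
    uint32_t jsp;          /* Java stack pointer (top of the Java stack) */
    uint32_t lvfp;         /* local variable frame pointer               */
    uint32_t narg_nlocal;  /* no. of arguments (31:16) / locals (15:0)   */
    uint32_t jspul;        /* upper limit of jsp                         */
    uint32_t jspll;        /* lower limit of jsp                         */
    uint32_t threadcnt;    /* thread counter                             */
    uint32_t vjsp;         /* virtual Java stack pointer                 */
    uint32_t used;         /* number of stack registers used             */
};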
The CPU register file also stores the Java™ stack. As described above, the Java™ stack is used to keep track of the state of each method invocation, where the state of a method invocation is represented by a Java™ stack frame. The jsp and lvfp registers point to different parts of a current Java™ stack frame. As seen in FIG. 3, there are four sections in a Java™ stack frame 300 of the Java™ virtual machine being executed by the CPU 102, according to the preferred embodiment. The four sections include the operand stack (OS) 301, a context information (CI) section 303, a local variables (LV) section 305 and an arguments (ARG) section 307.
The local variables (LV) section 305 contains all the local variables (i.e., up to a number of local variables, nlocals) being used by the current method invocation. These variables are allocated upon the current method being invoked.
The execution of bytecodes may cause pushing of elements onto, or popping of elements from, the operand stack (OS) 301. The operand stack (OS) 301 is used as a work space by bytecodes. The parameters for bytecodes being executed are placed in the operand stack 301, and the results of bytecode instructions are found in the operand stack 301. The top of the operand stack 301 is pointed to by the jsp register. The operand stack (OS) 301 of the currently executing method is always the topmost stack section, and the jsp register therefore always points to the top of the entire Java™ stack. The lvfp register points to the beginning of the current Java™ stack frame.
The arguments (ARG) section 307 is used for parameter passing from an invoker method (i.e., up to a number of arguments, nargs) to the invoked method (i.e., the method being invoked by the invoker method). Once the invocation of a method is completed, the arguments are treated as local variables inside the invoked method.
The context information (CI) section 303 is used to store all of the information required to return to the previous method.
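A rough C model of the four frame sections may also help. The model below only records where each section begins within the Java™ stack, since the section sizes depend on the method being invoked; the ordering of the lower sections is an assumption (the text only fixes the operand stack as the topmost section), so this is a sketch rather than a definition of the actual frame layout.

#include <stdint.h>

/* Illustrative model of one Java stack frame.  Section sizes depend on
 * the method (nargs arguments, nlocals locals), so the model records
 * base offsets into the Java stack rather than fixed-size arrays.      */
struct stack_frame_layout {
    uint32_t arg_base;   /* start of the arguments (ARG) section          */
    uint32_t lv_base;    /* start of the local variables (LV) section     */
    uint32_t ci_base;    /* start of the context information (CI) section */
    uint32_t os_base;    /* start of the operand stack (OS); the jsp      */
                         /* register points at its topmost element        */
};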
A portion of the general purpose registers in the CPU register file is used as a buffer for the current stack frame of the Java™ stack. This buffer is referred to as the Java™ register stack. The Java™ register stack keeps only the registers in the stack frame associated with the currently executing method. Upon invocation of a method and its subsequent return, spill and fill operations, as will be described in detail below, are performed to ensure that the Java™ register stack contains only the current stack frame.
FIG. 4 shows the mapping of the Java™ stack 400 and the Java™ register stack 401. A portion (e.g., 403) of the Java™ register stack is reserved for the buffering of the operand stack (OS) 301. A further portion (e.g., 405) of the Java™ register stack is reserved for the local variables (LV) section 305 and the arguments section (ARG) 307 of the current stack frame. A still further portion (e.g., 407) of the Java™ register stack is reserved for the context information (CI) section 303 of the current stack frame 300. As seen in FIG. 4, the virtual Java™ stack pointer (vjsp) register points to the top of the Java™ register stack. Further, the used register indicates the number of registers used in buffering of the operand stack (OS) 301, the context information (CI) section 303 and the local variables (LV) section 305.
As seen in FIG. 5, there are five words, CI0, CI1, CI2, CI3 and CI4, stored in the context information (CI) section 303 of the current stack frame 300. Four of the words, CI1, CI2, CI3 and CI4, are used to store the information in a context information (CI) section of a previous Java™ stack frame (e.g., stack frame 309 of FIG. 3). The word CI1 stores the value of the lvfp register of the previous Java™ stack frame. The word CI2 stores the number of arguments and local variables (narg_nlocal) of the previous Java™ stack frame. The word CI3 stores the jpc of the previous Java™ stack frame. The word CI4 stores the Java™ Constant Pool Base Pointer (CPB) of the previous Java™ stack frame. The remaining word, CI0, stores a reference to the current stack frame (i.e., stack frame 300) associated with the current method. The word CI0 is used for synchronisation checking and to keep track of the method running in each stack frame.
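The five CI words can likewise be written down as a small structure; the field names mirror the $ci0–$ci4 aliases used later in Table 1, and the 32-bit widths are again an assumption made only for illustration.

#include <stdint.h>

/* Illustrative model of the five context information (CI) words.
 * CI0 identifies the method that owns the frame; CI1-CI4 save the
 * caller's state so that it can be restored on method return.     */
struct context_info {
    uint32_t ci0_current_method;   /* reference to the current method     */
    uint32_t ci1_prev_lvfp;        /* caller's local variable frame ptr   */
    uint32_t ci2_prev_narg_nlocal; /* caller's narg_nlocal word           */
    uint32_t ci3_prev_jpc;         /* caller's Java program counter       */
    uint32_t ci4_prev_cpb;         /* caller's constant pool base pointer */
};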
Table 1, below, shows the general registers used when the CPU 102 is operating in Java™ mode (i.e., when the hardware Java™ bytecode unit 100 is translating stack-based Java™ bytecodes into register-based RISC instructions):
TABLE 1
Register Number   Alias    Usage
$r0               $0       Ties to zero
$r1–$r22          $vn      Buffer of elements (OS, LOCAL, ARG) in current frame
$r23              $ci0     Context information - current method ptr
$r24              $ci1     Context information - previous lvfp
$r25              $ci2     Context information - previous narg_nlocal
$r26              $ci3     Context information - previous jpc
$r27              $ci4     Context information - previous cpb
$r28              $jsp     Java Stack Pointer (in case of spilling and filling)
$r29              $nsp     Native Stack Pointer
$r30              $cpb     Constant Pool Base Pointer
$r31              (none)   Stores the return address back to Native mode
The bytecode unit has nine special registers which are also stored in the CPU register file and are used for managing the Java™ stack stored in the CPU register file. The CPU can access these special registers using load-store instructions. The special registers of the bytecode unit are described in Table 2, below:
TABLE 2
Index   Register       Description
1       $jpc           The Java PC
2       $jsp           The Java Stack Pointer
3       $lvfp          The Local Variable Frame Pointer
4       $narg_nlocal   The No. of args (31:16) and the No. of local (15:0)
5       $jspul         The upper limit of jsp
6       $jspll         The lower limit of jsp
7       $threadcnt     The thread counter
8       $vjsp          The Virtual Java Stack Pointer
9       $used          The No. of stack registers used
The hardware Java™ bytecode unit 100 uses a RISC instruction set look-up table for translating Java™ bytecodes into native instructions for execution by the CPU 102. The look-up table stores the RISC instruction set used by the CPU 102. To translate a particular Java™ bytecode into one or more RISC instructions, the hardware Java™ bytecode unit 100 uses the particular Java™ bytecode as an index into the look-up table. The Java™ bytecode unit 100 matches the particular Java™ bytecode to one or more RISC instructions stored in the look-up table. The matched RISC instructions may then be executed by the CPU 102. The instruction set look-up table is programmable and may be updated during runtime to improve performance and functionality of the hardware Java™ bytecode unit 100.
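As a rough software model of this table-driven translation, the sketch below treats the one-byte opcode as a direct index into a 256-entry table whose entries hold one or more encoded RISC instructions. The entry format, the assumed bound of four instructions per bytecode and the zero return value for a non-translatable bytecode are illustrative assumptions only; the actual table organisation is hardware-specific.

#include <stdint.h>
#include <stddef.h>

#define MAX_RISC_PER_BYTECODE 4     /* assumed bound, for illustration only */

/* One entry of the programmable lookup table: the RISC instruction(s)
 * that a single Java bytecode translates into.                          */
struct lut_entry {
    uint32_t risc[MAX_RISC_PER_BYTECODE];   /* encoded RISC instructions    */
    uint8_t  count;                         /* number of valid instructions */
};

/* Indexed directly by the one-byte opcode.  Because the table is
 * RAM-backed it can be reprogrammed at run time, as the text notes.     */
static struct lut_entry lookup_table[256];

/* Translate one bytecode by matching it against the table.  A count of
 * zero models a non-translatable bytecode, which would cause a switch
 * back to the software Java Virtual Machine.                            */
static size_t translate_bytecode(uint8_t opcode, uint32_t *out, size_t max)
{
    const struct lut_entry *entry = &lookup_table[opcode];
    if (entry->count == 0 || entry->count > max)
        return 0;
    for (size_t i = 0; i < entry->count; i++)
        out[i] = entry->risc[i];
    return entry->count;
}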
The CPU is executing a typical RISC CPU pipeline. In accordance with such a RISC CPU pipeline, the CPU comprises an instruction cache , a multiplexer , an instruction fetch unit , a multiplexer , an instruction dispatch unit , and an integer unit . When operating in native mode, the instruction fetch unit of the CPU fetches one or more native RISC instructions (per clock cycle) from the instruction cache , via an internal bus . The instruction fetch unit accesses the instruction cache by sending an instruction address to the instruction cache via an internal bus and the multiplexer . The RISC instructions are typically fetched into an instruction queue (not shown) incorporated within the instruction fetch unit . The instruction fetch unit sends the RISC instructions to the instruction dispatch unit , via the multiplexer and internal buses and . The instruction dispatch unit decodes the RISC instructions before dispatching the RISC instructions to the integer unit via an internal bus .
The integer unit 108 may be a fixed-point arithmetic logic unit (ALU) that performs all integer maths including instruction address calculations and executes the RISC instruction. The integer unit 108 may perform integer and floating-point load-address calculations, integer and floating-point store-address calculations, integer and floating-point load-data operations and integer store-data operations in accordance with the RISC instruction received from the instruction dispatch unit 107. The integer unit 108 performs these calculations and operations using the operand stack (OS) 301 stored in the CPU register file. The integer unit 108 accesses the operand stack (OS) 301 stored in the CPU register file via the hardware bus 127, which is referred to as a “Register Load/Store” bus, as seen in FIG. 1. For example, the integer unit 108 may use the bus 127 for programming the hardware Java™ bytecode unit 100 special registers (e.g., jpc) (as shown in Table 2) stored in the CPU register file. Further, the integer unit 108 may use the bus 127 for accessing the Java™ stack 400 in order to determine the status of the hardware Java™ bytecode unit 100 during any bytecode translation or mode switching operation. The general registers (as shown in Table 1) stored in the CPU register file will also be updated based on the RISC instruction executed by the integer unit 108, via the bus 127.
As seen in FIG. 1, hardware bus 125 is referred to as a “Branch controls” bus. The hardware Java™ bytecode unit 100 is configured to perform branching and has branch capability. As such, the hardware Java™ bytecode unit 100 pre-translates speculative bytecode instructions before knowing branch results. The hardware Java™ bytecode unit 100 accesses branch results from the integer unit 108 for a particular branch and may use the branch results to correct a target address and invalidate instructions, if necessary.
The CPU 102 also executes the Java™ virtual machine which is responsible for interpreting any Java™ bytecodes fetched from the instruction cache 103. In accordance with the embodiment of FIG. 1, the hardware Java™ bytecode unit 100 implements at least part of the Java™ Virtual Machine in hardware. The hardware Java™ bytecode unit 100 increases the speed of processing of Java™ bytecodes. The hardware Java™ bytecode unit 100 at least partially performs the translation of the Java™ bytecodes into native RISC instructions for the CPU 102.
As seen in FIG. 1, the hardware Java™ bytecode unit 100 shares the instruction cache 103 with the instruction fetch unit 105 using the multiplexer 104. The hardware Java™ bytecode unit 100 also shares the instruction dispatch unit 107 with the instruction fetch unit 105 using the multiplexer 106. Instructions from the instruction cache 103 may be supplied to either the instruction fetch unit 105, as described above, or to the hardware Java™ bytecode unit 100, via the internal bus 109.
When the CPU is initially “powered on”, the CPU is in “native mode” and the multiplexers and are set to bypass the hardware Java™ bytecode unit . In the native mode, the CPU executes native RISC instructions supplied to the instruction fetch unit via the bus . The instruction fetch unit accesses the instruction cache by sending an instruction address referencing a RISC instruction to the instruction cache via the internal buses , and the multiplexer .
If the instruction cache 103 contains a Java™ bytecode, then the Java™ Virtual Machine being executed by the CPU 102 switches the CPU 102 to Java™ mode. In this instance, the Java™ Virtual Machine initialises the special and general registers stored in the CPU register file and sends a “load/store” to the hardware Java™ bytecode unit 100. The Java™ Virtual Machine also sends a “change mode” instruction down the RISC CPU pipeline of the CPU 102 upon switching the CPU 102 to Java™ mode. The change mode instruction results in a signal being sent to the multiplexer 104, via a bus 122. This signal switches the multiplexer 104 so that the hardware Java™ bytecode unit 100 may access the Java™ bytecode stored in the instruction cache 103. The change mode instruction also results in a signal being sent to the multiplexer 106, via a bus 123, which switches the multiplexer 106 so that RISC instructions output from the hardware Java™ bytecode unit 100 are supplied to the instruction dispatch unit 107, via a bus 129. In order to access the Java™ bytecode in the instruction cache, the bytecode unit 100 sends an instruction address referencing the Java™ bytecode to the instruction cache via a bus 113, the multiplexer 104 and an internal bus 115. The instruction cache 103 supplies the Java™ bytecode referenced by the instruction address to the bytecode unit 100 via the internal bus 109. The instruction fetch unit 105 is essentially disabled when the CPU is in Java™ mode.
In this instance, the hardware Java™ bytecode unit converts the Java™ bytecode into a RISC instruction by using the Java™ bytecode as an index into a programmable lookup table stored in the Java™ bytecode unit . As described above, the programmable lookup table stores the RISC instruction set used by the CPU . The RISC instruction is supplied to the instruction dispatch unit by the hardware Java™ bytecode unit via an internal bus and the multiplexer . The instruction dispatch unit decodes the RISC instruction and dispatches the decoded instruction to the integer unit . The integer unit may perform integer and floating-point load-address calculations, integer and floating-point store-address calculations, integer and floating-point load-data operations and integer store-data operations in accordance with the RISC instruction received from the instruction dispatch unit . The integer unit performs these calculations and operations using the operand stack (OS) stored in the CPU register file. As described above, the integer unit accesses the operand stack (OS) stored in the CPU register file via the hardware bus . Further, the integer unit may use the bus for accessing the Java™ stack in order to determine the status of the hardware Java™ bytecode unit during any bytecode translation or mode switching operation. The general registers (as shown in Table 1) stored in the CPU register file will also be updated, via the bus , based on the RISC instruction received from the instruction dispatch unit .
The hardware Java™ bytecode unit 100 increases the processing speed of the Java™ Virtual Machine being executed by the CPU 102, allowing existing native language legacy applications and development tools to be used. Typically, a RISC CPU executing a Java™ Virtual Machine would not be able to access such legacy applications.
In another embodiment, the hardware Java™ bytecode unit 100 may be incorporated into a central processing unit such as the CPU 102. In such an embodiment, the translation of Java™ bytecodes into native RISC instructions for the CPU 102 may be performed by a hardware Java™ bytecode sub-unit of the CPU 102.
FIG. 2 shows details of one embodiment of the hardware Java™ bytecode unit 100. As seen in FIG. 2, the bytecode unit 100 comprises a branch unit 201, a bytecode buffer 202, a bytecode folder 203, a stack management unit 204, a stack control instructions generation unit 205, bytecode RAM 206, a bytecode translator 207 and a multiplexer 208.
When the CPU is in Java™ mode, the bytecode unit fetches bytecodes from the instruction cache . In order to access the instruction cache , the branch unit sends an instruction address to the instruction cache via the hardware bus , the multiplexer and the internal bus . The instruction cache supplies a Java™ bytecode referenced by the instruction address to the bytecode buffer via the bus . In the preferred embodiment, the bytecode buffer may store up to sixteen Java™ bytecodes in an instruction queue.
A Java™ bytecode stored in the bytecode buffer 202 is sent to the bytecode folder 203, via an internal bus 209. The bytecode folder 203 matches the Java™ bytecode to an operation code (op-code) using op-code pattern matching and sends the op-code to the stack management unit 204 via an internal bus 210. The bytecode folder 203 may combine several of the Java™ bytecodes stored in the bytecode buffer 202 into a single RISC op-code.
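This combining step, often called folding, can be illustrated with a small pattern matcher. The particular pattern below (add two locals and store the result) and the internal folded op-code value are purely hypothetical assumptions made for illustration, although the opcode constants themselves are genuine JVM values.

#include <stdint.h>
#include <stddef.h>

#define OP_ILOAD_0  0x1a    /* push local variable 0        */
#define OP_ILOAD_1  0x1b    /* push local variable 1        */
#define OP_IADD     0x60    /* pop two ints, push their sum */
#define OP_ISTORE_2 0x3d    /* pop into local variable 2    */

#define FOLDED_ADD_LOCALS 0x100     /* hypothetical internal op-code */

/* Try to fold a short bytecode sequence into one internal operation.
 * Returns the number of bytecodes consumed, or 0 if no fold applies. */
static size_t try_fold(const uint8_t *code, size_t len, unsigned *folded_op)
{
    if (len >= 4 &&
        code[0] == OP_ILOAD_0 && code[1] == OP_ILOAD_1 &&
        code[2] == OP_IADD    && code[3] == OP_ISTORE_2) {
        *folded_op = FOLDED_ADD_LOCALS;    /* local2 = local0 + local1 */
        return 4;
    }
    return 0;                              /* translate bytecodes one by one */
}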
The stack management unit 204 uses the op-code received from the bytecode folder 203 to generate RISC instruction parameters which are supplied to the bytecode translator 207 via an internal bus 211. The stack management unit 204 also provides update values for various stack pointers (i.e., the Java™ stack pointer (jsp) register and the virtual Java™ stack pointer (vjsp) register). These update values are sent to the stack control instruction generation unit 205, which generates stack control instructions for the operand stack (OS) 301 stored in the CPU register file.
The bytecode folder also sends the op-code to the bytecode translator via the internal bus . The bytecode translator translates the op-code received from the bytecode folder and the RISC instruction parameters received from the stack management unit into a RISC instruction native to the CPU . The bytecode translator uses a programmable instruction set lookup table stored in the bytecode RAM to determine the RISC instruction. As described above, the look-up table stores the RISC instruction set used by the CPU . In translating the op-code, the bytecode translator provides an address to the instruction set lookup table stored in the bytecode RAM via an internal bus . This address indicates the location in the bytecode RAM of the native RISC instruction for the CPU . Accordingly, the address provided by the bytecode translator forms the index, as described above, into the look-up table.
The RISC instruction determined by the bytecode translator is sent to the instruction dispatch unit of the CPU , together with the stack control instructions generated by the stack control instruction generation unit , via the multiplexer , the multiplexer , and the buses and . As described above, the instruction dispatch unit decodes the RISC instruction before dispatching the RISC instruction to the integer unit for execution, via the internal bus . The integer unit may then perform integer and floating-point load-address calculations, integer and floating-point store-address calculations, integer and floating-point load-data operations and integer store-data operations in accordance with the RISC instruction received from the instruction dispatch unit . The integer unit performs these calculations and operations using the operand stack (OS) stored in the CPU register file according to the stack control instructions generated by the stack control generation unit . As described above, the integer unit accesses the operand stack (OS) stored in the CPU register file via the hardware bus . Further, the integer unit may use the bus for accessing the Java™ stack in order to determine the status of the hardware Java™ bytecode unit during any bytecode translation or mode switching operation. The general registers (as shown in Table 1) and also the special registers (as shown in Table 2) stored in the CPU register file will be updated based on the executed RISC instruction received from the instruction dispatch unit .
If the bytecode translator receives a non-translatable bytecode from the bytecode folder , the bytecode translator generates the change mode instruction, which is sent to the CPU . Upon receiving the change mode instruction, the multiplexers and of the CPU are switched to native mode, via signals on the buses and , allowing the instruction fetch unit to access the instruction cache in order to fetch the non-translatable bytecode from the instruction cache . This non-translatable bytecode may then be executed by the Java™ Virtual Machine being executed by the CPU .
As described above, the instruction set look-up table is programmable and may be updated during runtime to improve performance and functionality of the hardware Java™ bytecode unit 100. The look-up table may be programmed by a programmer, for example, using an external interface 119 as seen in FIG. 1. The external interface communicates with the hardware Java™ bytecode unit 100 via a bus 121. The look-up table may be updated at run-time for different application usage. For example, debug instructions may be inserted by the programmer using the external interface 119 in order to “code trace” as known to those skilled in the relevant art. As another example, certain bytecodes may be optimised for performance purposes if the CPU 102 predetermines that not all of the security features of the bytecodes are required to execute the bytecodes. Still further, the look-up table may be modified for different central processing units having different issue capability, for example, for central processing units configured to issue multiple instructions in a single cycle. The hardware Java™ bytecode unit 100 may be integrated with single or multi-issue central processing units with configurable numbers of instruction ports.
The stack control instructions for the Java™ stack generated by the stack control instruction generation unit 205 are sent to the CPU 102 via the multiplexer 208 and the multiplexer 106. The CPU register file, the register stack 401 and the Java™ stack 400 are updated based on the stack control instructions. In particular, the state of the Java™ virtual machine being executed by the CPU 102 and the pointer to the top of the operand stack (OS) 301 are updated based on the stack control instructions.
The register stack 401 stored in the CPU register file acts as a circular buffer for the Java™ stack 400. The Java™ stack 400 grows and shrinks during execution of the Java™ Virtual Machine as Java™ bytecodes are translated into register-based RISC instructions for the CPU 102. Due to the limited number of registers in the register stack 401, data needs to be moved out of the register stack 401 to the RAM 206 (i.e., the data is “spilled”) and data needs to be brought back from the RAM 206 (i.e., the register stack 401 is “filled”).
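The spill and fill traffic between the register stack and memory can be modelled in software as follows. The buffer size of 22 matches the $r1–$r22 range in Table 1, while the linear shifting is a deliberate simplification of what the text describes as a circular buffer; treat this as a sketch under those assumptions rather than the actual hardware behaviour.

#include <stdint.h>

#define REG_STACK_SLOTS 22   /* $r1-$r22 buffer the current frame (Table 1) */

struct register_stack {
    uint32_t regs[REG_STACK_SLOTS];  /* buffered top of the Java stack  */
    uint32_t used;                   /* number of buffered elements     */
    uint32_t *jsp;                   /* top of the in-memory Java stack */
};

/* Spill: move the deepest buffered element out to the in-memory Java
 * stack, freeing a register (e.g. before a method invocation).        */
static void spill_one(struct register_stack *rs)
{
    if (rs->used == 0)
        return;
    *rs->jsp++ = rs->regs[0];                   /* write back to memory   */
    for (uint32_t i = 1; i < rs->used; i++)     /* simplification: a real */
        rs->regs[i - 1] = rs->regs[i];          /* circular buffer would  */
    rs->used--;                                 /* just move a pointer    */
}

/* Fill: bring the most recently spilled element back into the buffer
 * (e.g. when a bytecode needs an operand that is no longer buffered).  */
static void fill_one(struct register_stack *rs)
{
    if (rs->used == REG_STACK_SLOTS)
        return;
    for (uint32_t i = rs->used; i > 0; i--)
        rs->regs[i] = rs->regs[i - 1];
    rs->regs[0] = *--rs->jsp;                   /* read back from memory  */
    rs->used++;
}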
Under certain conditions, the stack management unit 204 interrupts normal bytecode translation and sends instructions for stack management to the bytecode translator 207. In particular, the hardware Java™ bytecode unit 100 performs automatic spilling and filling of the Java™ stack 400 to and from the bytecode RAM 206 using load and store instructions generated by the stack management unit 204 during the translation of Java™ bytecodes into register-based RISC instructions for the CPU 102. These load and store instructions are sent to the bytecode translator 207 via an internal bus 211.
Normal bytecode translation will be interrupted and spilling will occur under the following conditions:
(i) when the translation of a bytecode requires more free general or special registers;
(ii) upon the CPU 102 being switched from native mode to Java™ mode, where all used registers of the CPU register file including the context information (CI) are spilled;
(iii) before method invocation;
(iv) upon method invocation, the allocation of local variables requires more free registers; and
(v) after method invocation, the register stack spills data until only elements in the current stack frame are stored in the register stack.
Normal bytecode translation will be interrupted and filling will occur under the following conditions:
(i) a bytecode currently being translated requires access to operand stack elements which are not stored in the CPU register file;
(ii) upon the CPU 102 being switched from native mode to Java™ mode, the elements, including the context information, for a current stack frame are filled;
(iii) after method return, the elements, including context information, for a current stack frame are filled.
The translation of stack-based Java™ bytecodes into register-based RISC instructions using the hardware Java™ bytecode unit 100 will now be described with reference to an example Java™ bytecode, “iadd”. The op-code for iadd is 0x60. The bytecode iadd processes two integer operands at the top of the register stack (e.g., 401) stored in the CPU register file (other types of operands are illegal and would cause the bytecode translation to fail). Both operands are popped from the operand stack (OS) (e.g., 301) of the register stack stored in the CPU register file and the integer sum of both operands is pushed back onto the register stack. In order to translate the iadd bytecode into register-based RISC instructions, the CPU 102 switches the hardware Java™ bytecode unit 100 to Java™ mode. In Java™ mode, the bytecode unit fetches the iadd bytecode from the instruction cache. In order to access the instruction cache, the branch unit 201 sends an instruction address for the iadd bytecode to the instruction cache via the hardware bus 113, the multiplexer 104 and the internal bus 115. The instruction cache 103 supplies the iadd bytecode to the bytecode buffer 202 via the bus 109.
The iadd bytecode stored in the bytecode buffer 202 is sent to the bytecode folder 203, via an internal bus 209. The bytecode folder 203 matches the iadd bytecode to the op-code, 0x60, using op-code pattern matching and sends the op-code 0x60 to the stack management unit 204 via an internal bus 210. The stack management unit 204 uses the op-code 0x60 received from the bytecode folder 203 to generate RISC instruction parameters including the RISC opcode for “add”, and register indices for two source registers (e.g., register vjsp-1 and register vjsp-2, as seen in FIG. 6(a)) and one destination register (e.g., register vjsp-1, as seen in FIG. 6(b)). Other RISC instruction parameters may be generated by the stack management unit 204 for other bytecodes. The RISC instruction parameters generated by the stack management unit 204 are combined into a complete RISC instruction, which is supplied to the bytecode translator 207 via an internal bus 211. The stack management unit 204 also provides update values for various stack pointers including the virtual Java™ stack pointer (vjsp) and the Java™ stack pointer (jsp). These stack pointers are updated as follows:
vjsp = vjsp − 1 (i)
jsp = jsp − 1 (ii)
These update values are sent to the stack control instruction generation unit 205, which generates stack control instructions for the operand stack (OS) of the register stack stored in the CPU register file.
The bytecode folder also sends the op-code 0x60 to the bytecode translator via the internal bus . The bytecode translator translates the op-code 0x60 received from the bytecode folder and the RISC instruction parameters received from the stack management unit into a RISC instruction native to the CPU . The bytecode translator uses the programmable instruction set lookup table stored in the bytecode RAM to determine the RISC instruction. As described above, the look-up table stores the RISC instruction set used by the CPU . The RISC instruction in the programmable instruction set lookup table corresponding to the op-code 0x60 is “add $(vjsp−2), $(vjsp−1), $(vjsp−2)”. In translating the op-code, the bytecode translator provides an address to the instruction set lookup table stored in the bytecode RAM via an internal bus . This address indicates the location in the bytecode RAM of the native RISC instruction “add $(vjsp−2), $(vjsp−1), $(vjsp−2)”, for the CPU .
The RISC instruction “add $(vjsp−2), $(vjsp−1), $(vjsp−2)” determined by the bytecode translator 207 is sent to the instruction dispatch unit 107 of the CPU 102, together with the stack control instructions (i.e., vjsp = vjsp − 1 and jsp = jsp − 1) generated by the stack control instruction generation unit 205, via the multiplexer 208, the multiplexer 106, and the buses 129 and 215. The instruction dispatch unit 107 decodes the RISC instruction “add $(vjsp−2), $(vjsp−1), $(vjsp−2)” before dispatching the RISC instruction to the integer unit 108 for execution, via the internal bus 111. The integer unit 108 may then perform integer and floating-point load-address calculations, integer and floating-point store-address calculations, integer and floating-point load-data operations and integer store-data operations in accordance with the RISC instruction “add $(vjsp−2), $(vjsp−1), $(vjsp−2)”. The integer unit 108 performs these calculations and operations using the operand stack (OS) stored in the CPU register file according to the stack control instructions generated by the stack control instruction generation unit 205. The general registers and also the special registers, as described above, stored in the CPU register file will be updated based on the executed RISC instruction. In particular, the register representing the number of stack registers used (i.e., $used) and the Java™ program counter (jpc) are updated as follows:
used = used − 1 (i)
jpc = jpc + 1 (ii)
FIG. 6(a) shows the register stack 401 (stored in the CPU register file) prior to the translation of the iadd bytecode in accordance with the above example. As seen in FIG. 6(a), register vjsp-1 is one of the source registers and has a local variable LV(n+1) stored in the register. Further, the register vjsp-2 is the other one of the source registers and has a local variable LV(n) stored in the register. The number of registers used (i.e., $used) is equal to four (4). FIG. 6(b) shows the register stack (stored in the CPU register file) after the translation of the iadd bytecode in accordance with the above example. As seen in FIG. 6(b), register vjsp-1 is the destination register and has a local variable (LV(n+1)+LV(n)) stored in the register. Further, the number of registers used (i.e., $used) is equal to three (3).
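Putting the iadd example together, the following C sketch models what the emitted RISC instruction and the register updates accomplish. The register-file indexing is purely illustrative, and the program counter advance assumes the single-byte iadd bytecode; this is a software model of the example above, not a description of the hardware datapath itself.

#include <stdint.h>

/* Software model of the iadd translation described above. */
struct java_mode_state {
    uint32_t regs[32];   /* register stack entries, indexed via vjsp      */
    uint32_t vjsp;       /* virtual Java stack pointer (top of the stack) */
    uint32_t jsp;        /* Java stack pointer                            */
    uint32_t used;       /* number of stack registers used                */
    uint32_t jpc;        /* Java program counter                          */
};

static void model_iadd(struct java_mode_state *s)
{
    /* Effect of "add $(vjsp-2), $(vjsp-1), $(vjsp-2)": the sum replaces
     * the deeper of the two source operands.                            */
    s->regs[s->vjsp - 2] = s->regs[s->vjsp - 1] + s->regs[s->vjsp - 2];

    s->vjsp -= 1;   /* two operands popped, one result pushed    */
    s->jsp  -= 1;
    s->used -= 1;   /* e.g. four buffered registers become three */
    s->jpc  += 1;   /* step past the single iadd bytecode        */
}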
It is apparent from the above that the arrangements described are applicable to the computer and data processing industries.
The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.
In the context of this specification, the word “comprising” means “including principally but not necessarily solely” or “having” or “including”, and not “consisting only of”. Variations of the word “comprising”, such as “comprise” and “comprises” have correspondingly varied meanings.
BRIEF DESCRIPTION OF THE DRAWINGS
Some aspects of the prior art and one or more embodiments of the present invention will now be described with reference to the drawings and appendices, in which:
FIG. 1 shows a hardware Java™ bytecode unit connected to a reduced instruction set computer (RISC) CPU, in accordance with one embodiment of the present invention;
FIG. 2 shows details of one embodiment of the hardware Java™ bytecode unit of FIG. 1;
FIG. 3 shows the sections in a Java™ stack frame;
FIG. 4 shows the mapping of the Java™ stack to a Java™ register stack;
FIG. 5 shows five words stored in a context information (CI) section of a stack frame;
FIG. 6(a) shows the Java™ register stack prior to translation of an iadd bytecode; and
FIG. 6(b) shows the Java™ register stack of FIG. 6(a) after the translation of the iadd bytecode.
Legal case studies in education
Many of the other Big Ideas in Legal Education are compatible with the case study method: capstone experiences could be designed with case study curricula, and case studies can be adapted for online learning. Programs like the teaching hospital, the co-op, and the corps would only improve with more experiential opportunities.
Effective Class Action Lawsuits in Special Education
Free CE Live continuing education online pharmacy, pharmacists, pharmacy technicians, nurses, Pharmacy Law Case Studies: Institutional Pharmacy Practice. Program Type. OnDemand. Credits. 1 Contact Hours The program closes with a case-based discussion of emerging trends in institutional pharmacy litigation. Rating. Handouts
Effective Class Action Lawsuits in Special Education
The Journal of Cases in Educational Leadership The University Council for Education Administration sponsors this journal in an ongoing effort to improve administrative preparation. This journal is a member of Call for Pedagogical Case Submission to the Journal of Case Studies in Educational Leadership. More. Next slide All Issues
Effective Class Action Lawsuits in Special Education
Education Week's blogs include The School Law Blog; see its Law & Courts coverage. Mark Walsh is a contributing writer to Education Week. He has covered legal issues in education for more than two decades.
A case study is a story about something unique, special, or interesting. Stories can be about individuals, organizations, processes, programs, neighborhoods, institutions, and even events. The case study gives the story behind the result by capturing what happened.
Collection of case studies on examples of good practice in
FindLaw provides Case Summaries Supreme Court Cases Summary, all thirteen U. S. Circuit Courts of Appeals, and select state supreme and appellate courts Case Summaries Not a Legal Professional?
Journal of Legal Studies Education - Wiley Online Library
The product can either be a student study of a new case or a student analysis of pre-existing case studies towards a particular goal. Appropriate Content Areas: Initially common in law, business, engineering, teacher, and medical education, it can be modified to most curriculum.
Legal and Ethical Case Studies - representing all counselors
Special Education Law Case Studies: A Review from Practitioners, by David Bateman.
Open access academic research from top universities on the subject of Education Law. a case involving the strip search of a thirteen-year-old girl at an Arizona middle school. Thus, the Court has now decided four cases regarding public school students Fourth Amendment rights while at school and the time is ripe to take stock of this
The primary purpose in producing this collection of case studies in Teacher Education is provide a mechanism for sharing good practice. One hopes that this will afford colleagues an opportunity to reflect on the ideas which lie behind this good practice.
Legal education in the United States generally refers to a graduate degree, such was not the case with England because of the English rejection of Roman Law. Although Oxford did teach canonical law, its importance was always superior to civil law in that institution. Academic masters degrees in legal studies are available, such as the
The case is viewed by many as the first federal court decision to strike down segregation in K-12 education, and it helped lay the groundwork for the legal attack on racial segregation that led to
Caselaw: Cases and Codes - FindLaw Caselaw
The district and school disciplinary process that governs disciplinary actions of all students in school communities is the same process governing special education students. Discipline should be progressive and fair given the severity of the incident. Here. well take a look at a specific case scenario.
Case Studies Effectiveness Studies. Connect Effectiveness Study. Business Law. Cincinnati State Technical and Community College. Business Statistics. Angelo State University including the innate abilities and prior education of the students participating, as well as differences among instructors and their pedagogies.
Characteristics of Effective Class Action Lawsuits in Special Education: An Examination of Case Studies. At the time of this studys publication, the purported systems change lawsuit had cost 19 million dollars and had not improved the availability of FAPE in any significant way. Angel G. v. TEA
Diversity Issues in American Colleges and Universities: Case studies for higher education and student affairs professionals. It brings to light students personal beliefs and identities that conflict with those of the institution as well as those that conflict with the beliefs of other students.
Summer Reading: Legal Educations 9 Big Ideas, Part 3
Case Studies Customer success stories Illinois Education Association chooses Legal Files for its matterdocumentemail management tools. Read More. Legal Aid of North Carolina chooses Legal Files for full function case management and automated Read More. Legal Files for Law Firms. Hagens Berman.
The purpose of this research is to analyze inclusion of sustainable development principles in relevant legislation as normative concepts. For the purpose of this research, Constitution of the Republic of Kosovo and 22 laws are analyzed regarding the level of the incorporation of sustainable development principles within their provisions and field of regulation. The legal basis for sustainable development lies in the Constitution of Kosovo under the Chapter on Economic Relations promoting wellbeing for all citizens encouraging a sustainable economic development. The research identified nine laws including more than one of the principles, six laws that treat sustainable development in very general terms, and seven laws that do not include anything regarding the principles, although their scope has significant impacts in sustainable economic, social, and environmental development of the country. The most prominent principle was sustainable use of resources; however, principles pertaining to the use of local resources and social justice were not found in any of current legal provisions. Laws on environmental protection, nature conservation and water include the most sustainable development principles. However, although the current legislation might be considered satisfactory regarding the extent to which it includes sustainable development principles, Kosovo is far from a sustainability path. This is related to the post-war situation of the country, unclear economic development directions, weak law enforcement, and poor cooperation of horizontal and vertical lines of public authorities and government agencies.
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | https://www.ejosdr.com/article/principles-of-sustainable-development-as-norms-of-the-current-legislative-framework-in-kosovo-5878 |
How SDSLabs works - 2018
SDSLabs is roughly run as two cells (programming/design) that collaborate with each other on everything. This blog post will take a look at the various tools, technologies, and applications that we use on a daily basis. It is our hope that some other groups in the campus might be interested in this. We’re open to the inquiry about any of this, and you can reach us anytime at chat.sdslabs.co or [email protected].
Mailing List: Our mailing list, like most other groups’, runs on Google Groups. All notifications regarding discussion of projects or general meetings are sent over the mailing list. We didn’t want to disturb our alumni over our regular meetings, so we now have separate groups for current members and the alumni.
Facebook Group: Like everyone else, we too have a private Facebook group too. Unlike most other groups, we hardly ever use it. We find the Facebook notification system terrible, and the comments system broken for serious discussion. Hence we limit discussion on Facebook to bakar (random chit-chat) and taking jibes at each other. We also keep a document with our internal lingo here.
Slack: As a software development team, having around 50 members who work on various projects, a central communication platform is a must. This is our primary means of communication. Every member is aware of the discussion going on on every project and thus can give inputs which are equally valued. We are completely transparent regarding our working, and encourage everyone to communicate only in public channels. As a student group, it is not feasible for us to pay the hefty monthly fees for Slack premium, but the free version has almost all the features we need, be it the option to add a number of integrations, or infinite members.
Hubot: We built ourselves a bot around hubot, written on CoffeeScript and hosted on Heroku, to serve our daily needs: plus-plus (a positive reinforcement for doing something great, and a primary driver of work in Labs), keeping track of embarrassing aliases, information about any member, knowing which people are in lab at any moment, and much more. Recently, we added a score counter for the number of messages sent by every member in a week to encourage members to interact on a daily basis. The one with the highest gets a plus-plus. The bot is highly customizable, and only a script file needs to be added for a new feature.
Presence: It is a tool that is primarily aimed at increasing people’s presence in Labs. We maintain a competitive board wherein we keep the score as the number of hours spent by a person in the lab. It also helps our bot determine who all are present in the lab at any given moment (we also have a spycam, just in case). It works by identifying users based on the MAC addresses of their devices, which are requested using an arp scan. A script logs in to our TP-Link router’s portal and fetches the MAC addresses; for other routers, since they cannot provide MAC addresses of connected devices natively, we had to patch the router firmware to do so. This MAC address scan is a cron job which runs every minute, and every five minutes the leaderboard is updated. As for the spycam, we have another cron job set up which takes pictures of the lab every 15 mins, or whenever requested by the bot.
Watchdog: Managing people's access to servers used to be one hell of a job: asking for their public key, copying it over to the server, and deleting it when access needed to be revoked. So we built ourselves a little (yet powerful) tool to ease the process. Whenever someone tries to log in to a server, Watchdog checks a repository to see whether that person has access to that particular user on that particular server, and grants or denies access accordingly. Anyone seeking access to a server opens a pull request on that repository, and access is granted after it is approved and merged. A simple commit is all we need to grant or revoke access. We also added a feature that keeps track of who is accessing what and at which level of privilege, and all of that information is posted by a bot to a channel on our Slack, so it is out in the open what everyone is up to. The whole idea of Watchdog is based on how Linux uses PAM modules for authentication: we built our custom modules and added them to the predefined sudo, su, and sshd configurations.
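Conceptually, the check Watchdog performs on every login attempt looks something like this simplified sketch. It is a stand-in rather than our PAM module: the access map lives in a hypothetical access.json inside the access-control repository, and the Slack notification is replaced by a print.

```python
# Illustrative sketch of the access-check idea behind Watchdog.
import json
from pathlib import Path

# Hypothetical file kept in the access-control repository; a merged pull
# request edits this file to grant or revoke access.
ACCESS_FILE = Path("access.json")

def is_allowed(person: str, server: str, account: str) -> bool:
    """Return True iff `person` may log in as `account` on `server`."""
    access_map = json.loads(ACCESS_FILE.read_text())
    return person in access_map.get(server, {}).get(account, [])

def audit(person: str, server: str, account: str, allowed: bool) -> None:
    """Stand-in for the Slack notification: report who accessed what."""
    verdict = "GRANTED" if allowed else "DENIED"
    print(f"[watchdog] {verdict}: {person} -> {account}@{server}")

if __name__ == "__main__":
    # Write a small example access map so the sketch is self-contained.
    ACCESS_FILE.write_text(json.dumps(
        {"web-server": {"root": ["alice"], "deploy": ["alice", "bob"]}}
    ))
    decision = is_allowed("alice", "web-server", "root")
    audit("alice", "web-server", "root", decision)
```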
Dropbox: Our designers primarily share their work through Dropbox. All our designs, past and present, are stored there, and we also use it to share documents. A Facebook feature for sharing Dropbox files within Facebook groups was actually a great boon for us, since we use both products together.
GitHub: We host all our repositories on GitHub. We have unlimited private repositories, and though we encourage open source, we prefer to keep some of our applications private. We extensively use GitHub features such as issue labels, milestones, and projects to keep track of the work needed for a project to be shipped on time and with perfection. We go by one rule: master should always be deployable.
Trello: We use Trello boards to keep track of the current status of each project. Since we started doing this, we have seen our projects move towards completion noticeably faster.
Workflowy: We use a custom account at Workflowy with a shared list to manage lots of things easily. We find that Workflowy's list system is an excellent place to chalk out ideas and hold brainstorming sessions in writing. Workflowy keeps track of most of our administration-related material, such as tenders, management contacts, and events of various subgroups. A daily log of our Workflowy changes is forwarded to our Google Group so everyone is kept in the loop about any changes made there. | https://blog.sdslabs.co/2018/12/how-sdslabs-works
A memorial service for University of Iowa Professor Emeritus of Sculpture Julius Schmidt is scheduled for 4-6 p.m. Friday, Aug. 11 in the Kirkwood Room, 515 Kirkwood Ave. in Iowa City, next to Lensing Funeral & Cremation. The service begins at 4 p.m. and will be followed by a reception.
Schmidt, whose contributions to the art world and to the countless students he mentored garnered him the unofficial title “grandfather of cast iron sculpture,” passed away June 20 at the age of 94.
Schmidt’s family encourages UI faculty and staff who knew him and who plan to attend the memorial to share brief memories during the short service.
Schmidt accepted a position as head of the UI graduate sculpture department in 1970 and he remained here until his retirement in 1993.
He mainly worked in cast iron and bronze, and his work reflected influences of ancient cultures, natural forms, and the machinery of the modern age. Synthesizing these elements, his sculptures were an exploration of the dichotomy between the natural and the mechanical. It was both an ancient and futuristic vision, as well as a representation of man’s place within a technological world.
Schmidt first introduced iron casting into the academic environment in the early 1960s when he was teaching at Cranbrook Academy of Art. Prior to the 1960s, most artists had to rely on commercial foundries to cast their sculpture. According to Schmidt, “entrusting the casting to foundry men, the sculptor never learned all he should about materials and processes and thus the range of his imagination and achievement was inhibited.” Through Schmidt’s drive for innovative techniques, such as adapting from industry the core sand process of mold making, combined with his intense dedication to research and his knowledge about materials and processes, he succeeded in putting the cast metal process directly into the hands of the sculptor.
In 1998, Schmidt received the Outstanding Educator Award from the International Sculpture Center. He received numerous other awards and honors, including being one of the artists represented in the 1959 exhibition, Sixteen Americans, at the Museum of Modern Art and receiving a Guggenheim Fellowship in 1964.
More about Schmidt’s life is available at http://www.lensingfuneral.com/obituaries/Julius-Schmidt?obId=1966405#/obituaryInfo. | https://clas.uiowa.edu/news/memorial-service-innovative-sculpture-professor-julius-schmidt-set-aug-11 |
National Institute for Health and Clinical Excellence
- "NICE" redirects here. For other uses, see NICE (disambiguation).
The National Institute for Health and Clinical Excellence (NICE) is a special health authority of the English National Health Service (NHS), serving both the English and the Welsh NHS. It was set up as the National Institute for Clinical Excellence in 1999, and on 1 April 2005 it joined with the Health Development Agency to become the new National Institute for Health and Clinical Excellence (still abbreviated as NICE).
NICE publishes guidance in three areas: the use of health technologies within the NHS (such as the use of new and existing medicines, treatments, and procedures); clinical practice (guidance on the appropriate treatment and care of people with specific diseases and conditions); and guidance for public-sector workers on health promotion and ill-health avoidance. These appraisals are based primarily on evaluations of efficacy and cost-effectiveness in various circumstances.
NICE was established in an attempt to defuse the so-called postcode lottery of healthcare in England and Wales, where the treatments available depended upon the NHS primary care trust area in which the patient happened to live. It has since acquired a high reputation internationally as a role model for the development of clinical guidelines. One aspect of this is the explicit determination of cost-benefit boundaries for certain technologies that it assesses. NICE also plays an important role in pioneering technology assessment in other healthcare systems through NICE International, established in May 2008 to help cultivate links with foreign governments.
Technology appraisals
Since January 2005 the NHS in England and Wales has been legally obliged to provide funding for medicines and treatments recommended by NICE's technology appraisal board. This was at least in part as a result of well-publicised postcode lottery anomalies in which certain less-common treatments were funded in some parts of the UK but not in others due to local decision making in the NHS.
Before an appraisal, the Advisory Committee on Topic Selection (ACTS) draws up a list of potential topics of clinical significance for appraisal. The Secretary of State for Health or the Welsh Assembly must then refer any technology so that the appraisal process can be formally initiated. Once this has been done NICE works with the Department of Health to draw up the scope of the appraisal.
NICE then invites consultee and commentator organisations to take part in the appraisal. Consultee organisations include patient groups, organisations representing health care professionals, and the manufacturers of the product undergoing appraisal; they submit evidence during the appraisal and comment on the appraisal documents. Commentator organisations include the manufacturers of products to which the product undergoing appraisal is being compared. They comment on the documents that have been submitted and drawn up but do not submit information themselves.
An independent academic centre then draws together and analyses all of the published information on the technology under appraisal and prepares an assessment report. This can be commented on by the Consultees and Commentators. Comments are then taken into account and changes made to the assessment report to produce an evaluation report. An independent Appraisal Committee then looks at the evaluation report, hears spoken testimony from clinical experts, patient groups and carers. They take their testimony into account and draw up a document known as the 'appraisal consultation document'. This is sent to all consultees and commentators who are then able to make further comments. Once these comments have been taken into account the final document is drawn up called the 'final appraisal determination'. This is submitted to NICE for approval.
The process aims to be fully independent of government and lobbying power, basing decisions fully on clinical and cost-effectiveness. There have been concerns that lobbying by pharmaceutical companies to mobilise media attention and influence public opinion are attempts to influence the decision-making process. A fast-track assessment system has been introduced to reach decisions where there is most pressure for a conclusion.
Clinical guidelines
NICE carries out assessments of the most appropriate treatment regimes for different diseases. These must take into account both the desired medical outcomes (i.e. the best possible result for the patient) and economic arguments regarding different treatments.
NICE has set up several National Collaborating Centres, bringing together expertise from the royal medical colleges, professional bodies, and patient/carer organisations, which draw up the guidelines. The centres are the National Collaborating Centre for Cancer, the National Clinical Guidelines Centre for Acute and Chronic Conditions, the National Collaborating Centre for Women and Children's Health, and the National Collaborating Centre for Mental Health.
The National Collaborating Centre then appoints a Guideline Development Group whose job it is to work on the development of the clinical guideline. This group consists of medical professionals, representatives of patient and carer groups and technical experts. They work together to assess the evidence for the guideline topic (e.g. clinical trials of competing products) before preparing a draft guideline.
There are then two consultation periods in which stakeholder organisations are able to comment on the draft guideline. After the second consultation period, an independent Guideline Review Panel reviews the guideline and stakeholder comments and ensures that these comments have been taken into account.
The Guideline Development Group then finalises the recommendations and the National Collaboration Centre produces the final guideline. This is submitted to NICE who then formally approve the guideline and issues this guidance to the NHS.
Cost effectiveness
As with any system financing health care, the NHS has a limited budget and a vast number of potential spending options. Choices must be made as to how this limited budget is spent, by comparing cost effectiveness in terms of the health quality gained for the money spent. By choosing to spend the finite NHS budget on those treatment options that provide the most efficient results, society can ensure it does not lose out on possible health gains by spending on inefficient treatments while neglecting those that are more efficient.
NICE attempts to assess the cost-effectiveness of potential expenditures within the NHS to assess whether or not they represent 'better value' for money than treatments that would be neglected if the expenditure took place. It assesses the cost effectiveness of new treatments by analysing the cost and benefit of the proposed treatment relative to the next best treatment that is currently in use.
Quality-adjusted life years
NICE utilises the quality-adjusted life year (QALY) to measure the health benefits delivered by a given treatment regime. By comparing the present value (see discounting) of expected QALY flows with and without treatment, or relative to another treatment, the net (or relative) health benefit of the treatment can be derived. Combined with the relative cost of treatment, this information is used to form an incremental cost-effectiveness ratio (ICER), which allows comparison of the suggested expenditure against current resource use at the margin (the cost-effectiveness threshold).
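Written out in notation (the symbols here are introduced purely for illustration and are not NICE's own), the ratio compares the extra cost of a treatment with the extra QALYs it delivers:

```latex
\mathrm{ICER}
  = \frac{C_{\text{new}} - C_{\text{comparator}}}
         {\mathrm{QALY}_{\text{new}} - \mathrm{QALY}_{\text{comparator}}}
```

Roughly speaking, an intervention is considered cost effective when this ratio falls below the threshold discussed next.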
As a general rule, NICE accepts as cost effective those interventions with an incremental cost-effectiveness ratio of less than £20,000 per QALY, and holds that there should be increasingly strong reasons for accepting as cost effective interventions with an incremental cost-effectiveness ratio above a threshold of £30,000 per QALY.
Over the years, there has been great controversy over what value this threshold should be set at. Initially there was no fixed number, but the appraisal teams converged on a consensus of about £30,000. However, in November 2008 Alan Johnson, the then Secretary of State, announced that for end-of-life cancer drugs the threshold could be increased above £30,000.
The first drug to go through the new process was lenalidomide, whose ICER was £43,800.
Cost per quality-adjusted life year gained
The following example from NICE explains the QALY principle and the application of the cost per QALY calculation.
A patient has a life-threatening condition and is expected to live on average for 1 year receiving the current best treatment, which costs the NHS £3,000. A new drug becomes available that will extend the life of the patient by three months and improve his or her quality of life, but the new treatment will cost the NHS more than three times as much, at £10,000. Patients score their perceived quality of life on a scale from 0 to 1, with 0 being worst possible health and 1 being best possible health. On the standard treatment, quality of life is rated with a score of 0.4, but it improves to 0.6 with the new treatment. Patients on the new treatment live on average an extra 3 months, so 1.25 years in total. The quality-adjusted life gained is the product of life span and quality rating with the new treatment, less the same calculation for the old treatment, i.e. (1.25 x 0.6) less (1.0 x 0.4) = 0.35 QALY. The marginal cost of the new treatment to deliver this extra gain is £7,000, so the cost per quality-adjusted life year gained is £7,000/0.35, or £20,000. This is within the £20,000-£30,000 ceiling range for NHS funding, so the NHS will fund the new treatment without charge to the patient.
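The arithmetic of this worked example can be reproduced in a few lines of Python. This is a sketch for illustration only; the figures are those of the example above, and the function is not a NICE tool.

```python
# Reproduce NICE's worked example: cost per QALY gained for the new drug.
def icer(cost_new, cost_old, years_new, utility_new, years_old, utility_old):
    """Incremental cost-effectiveness ratio in pounds per QALY gained."""
    qaly_gain = years_new * utility_new - years_old * utility_old
    return (cost_new - cost_old) / qaly_gain

cost_per_qaly = icer(
    cost_new=10_000, cost_old=3_000,
    years_new=1.25, utility_new=0.6,   # new drug: 15 months at quality 0.6
    years_old=1.0,  utility_old=0.4,   # standard care: 12 months at quality 0.4
)
print(f"£{cost_per_qaly:,.0f} per QALY gained")  # £20,000, within the threshold range
```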
If the patient were expected to live only one month extra instead of three, then NICE would issue a recommendation not to fund. The patient's Primary Care Trust could still decide to fund the new treatment, but if not, the patient would have two choices: take the free NHS standard treatment, or pay out of pocket to obtain the new treatment from a different health care provider. If the person has a private health insurance policy, he or she could check whether the private insurer will fund the new treatment. About 8% of the population has some private health insurance through an employer or trade association, and 2% pay for it from their own resources.
Basis of recommendations
Theoretically, it might be possible to draw up a table of all possible treatments sorted by increasing cost per quality-adjusted life year gained. Treatments with the lowest cost per QALY gained would appear at the top of the table, deliver the most benefit per pound spent, and be the easiest to justify funding. Those where the delivered benefit is low and the cost is high would appear at the bottom of the list. Decision makers would, theoretically, work down the table, adopting the most cost-effective services first. The point at which the NHS budget is exhausted would reveal the shadow price: the threshold lying between the cost per QALY gained of the last service that is funded and that of the next most cost-effective service that is not funded.
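As an illustration of this hypothetical exercise, the sketch below sorts invented treatments by cost per QALY gained and funds them until a notional budget runs out; the ratio of the last funded item plays the role of the shadow price. None of the figures are real NHS data.

```python
# Sketch of the hypothetical "league table" exercise described above.
def fund_until_exhausted(treatments, budget):
    """treatments: list of (name, total_cost, cost_per_qaly).
    Returns the funded names and the shadow price (cost per QALY of the
    last treatment funded before the budget ran out)."""
    funded, shadow_price = [], None
    for name, total_cost, cost_per_qaly in sorted(treatments, key=lambda t: t[2]):
        if total_cost > budget:
            break  # budget exhausted: everything further down goes unfunded
        budget -= total_cost
        funded.append(name)
        shadow_price = cost_per_qaly
    return funded, shadow_price

# Invented figures, purely for illustration.
example = [
    ("hip replacement",  40_000_000,  3_000),
    ("statins",          60_000_000,  8_000),
    ("new cancer drug",  90_000_000, 45_000),
]
funded, threshold = fund_until_exhausted(example, budget=120_000_000)
print(funded, f"shadow price of about £{threshold:,} per QALY")
```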
In practice this exercise is not done, but an assumed shadow price has been used by NICE for many years in its assessments to determine which treatments the NHS should and should not fund. NICE states that for drugs the cost per QALY should not normally exceed £30,000, but that there is no hard threshold, though research has suggested that the threshold actually applied is "somewhat higher", in the range of £35,000-£40,000.
The House of Commons Health Select Committee, in its report on NICE, stated in 2008 that "the (...) cost-per-QALY it uses to decide whether a treatment is cost-effective is of serious concern. The threshold it employs is not based on empirical research and is not directly related to the NHS budget, nor is it at the same level as that used by Primary Care Trusts (PCTs) in providing treatments not assessed by NICE, which tends to be lower. Some witnesses, including patient organisations and pharmaceutical companies, thought NICE should be more generous in the cost per QALY threshold it uses, and should approve more products. On the other hand, some PCTs struggle to implement NICE guidance at the current threshold and other witnesses argued that a lower level should be used. However, there are many uncertainties about the thresholds used by PCTs." It went on to recommend that "an independent body should determine the threshold used when making judgements of the value of drugs to the NHS."
Criticism
The work that NICE is involved in attracts the attention of many groups, including doctors, the pharmaceutical industry, and patients. NICE is often associated with controversy, because the need to make decisions at a national level can conflict with what is (or is believed to be) in the best interests of an individual patient. From an individual's perspective it can sometimes seem that NICE is denying access to certain treatments, but this is not so: patients are able to get access to the treatment but may have to contribute to the cost. For example, approved cancer drugs and treatments such as radiotherapy and chemotherapy are funded by the NHS without any financial contribution from the patient, but certain cancer drugs not approved by NICE on grounds of cost are available only if the patient is prepared to pay a co-payment to make up the difference between the NICE-assessed value and the actual cost. Where NICE has approved a treatment, the NHS must fund it. But not all treatments have been assessed by NICE, and those treatments are usually dependent on local NHS decision making. For example, the NHS usually pays for several rounds of treatment for fertility problems, but because NICE has not assessed them, some PCTs may cap the number of rounds, and the patient may then have to pay privately to continue with fertility treatments beyond the capped level.
NICE has been criticised for being too slow to reach decisions. On one occasion, the Royal National Institute of Blind People said it was outraged over NICE's delayed decision on further guidance for two drugs for wet AMD that were already approved for use in the NHS. However, the Department of Health said that it had "made it clear to PCTs that funding for treatments should not be withheld simply because guidance from NICE is unavailable".
Some of the more controversial NICE decisions have concerned donepezil, galantamine, rivastigmine (review) and memantine for the treatment of Alzheimer's disease and bevacizumab, sorafenib, sunitinib and temsirolimus for renal cell carcinoma. All these are drugs with a high cost per treatment and NICE has either rejected or restricted their use in the NHS on the grounds that they are not cost-effective.
A Conservative shadow minister once criticised NICE for spending more on communications than on assessments. In its defence, NICE said the majority of its communications budget was spent informing doctors about which drugs had been approved and about new treatment guidelines, and that the actual cost of assessing new drugs for the NHS includes money spent on NICE's behalf by the Department of Health. When this was added to NICE's own costs, the total cost of the technology appraisal programme far outstripped the cost of NICE communications.
See also
- List of similar organizations in the United Kingdom
- Scottish Medicines Consortium which assesses all new medicinal products in the UK within 8 weeks of marketing
- National Institute for Health Research
- Social Care Institute for Excellence
- Scottish Intercollegiate Guidelines Network, which has produced treatment guidelines since 1995 on over 120 conditions
References
- ^ "The National Institute for Clinical Excellence (Establishment and Constitution) Order 1999" (Press release). Office of Public Sector Information. 1999-02-02. http://www.opsi.gov.uk/si/si1999/uksi_19990220_en.pdf. Retrieved 2009-09-18.
- ^ "The National Institute for Clinical Excellence (Establishment and Constitution) Amendment Order 2005" (Press release). Office of Public Sector Information. 2005-03-07. http://www.opsi.gov.uk/si/si2005/20050497.htm. Retrieved 2009-09-18.
- ^ "The Special Health Authorities Abolition Order 2005" (Press release). Office of Public Sector Information. 2005-03-07. http://www.opsi.gov.uk/si/si2005/20050502.htm. Retrieved 2009-09-18.
- ^ About NICE
- ^ Schlander, Michael (2007). Health Technology Assessments by the National Institute for Health and Clinical Excellence. New York: Springer Science+Business Media. pp. 245. ISBN 978-0-387-71995-5. http://www.springer.com/public+health/book/978-0-387-71995-5. Retrieved 2008-11-13.
- ^ Cheng, Tsung-Mei (2009-09-15). "Nice approach". Financial Times. http://www.ft.com/cms/s/0/5be67610-9ce2-11de-ab58-00144feabdc0,s01=1.html?nclick_check=1. Retrieved 2009-09-18.
- ^ Sorenson, C; Drummond, M; Kanavos, P; McGuire, A. National Institute for Health and Clinical Excellence (NICE): How does it work and what are the implications for the U.S.?. National Pharmaceutical Council. http://www.scribd.com/doc/8737637/National-Institute-for-Health-and-Clinical-Excellence-NICE-How-Does-it-Work-and-What-are-the-Implications-for-the-US-Executive-Summary. Retrieved 2009-09-18.
- ^ Berg, Sanchia (2006-06-09). "Herceptin: Was patient power key?". BBC News. http://news.bbc.co.uk/1/hi/health/5063352.stm. Retrieved 2008-11-13.
- ^ NICE website - Collaborating Centres
- ^ a b Methods for the Economic Evaluation of Health Care Programmes, Drummond et al (2005)
- ^ NICE guidance, 2008
- ^ NICE Guideline Manual: Incorporating health economics in guidelines and assessing resource impact
- ^ Boseley, S, Sparrow, A. Johnson lifts NHS ban on top-up treatment. Guardian Newspaper. http://www.guardian.co.uk/politics/2008/nov/04/nhs-health-cancer-topup-treatment. Retrieved 2011-05-13.
- ^ Appraisal Committee. Final appraisal determination: Lenalidomide for the treatment of multiple myeloma in people who have received at least one prior therapy. NICE. http://www.nice.org.uk/nicemedia/live/11937/43868/43868.pdf. Retrieved 2011-05-13.
- ^ Measuring eefectiveness and quality effectiveness - the QALY National Institute for clinical effectiveness
- ^ a b Measuring effectiveness and cost effectiveness: the QALY
- ^ Devlin, N; Parkin D (pdf). Does NICE have a cost effectiveness threshold and what other factors influence its decisions? A discrete choice analysis.. City University, London. http://www.city.ac.uk/economics/dps/discussion_papers/0301.pdf. Retrieved 2008-11-13.
- ^ House of Commons Health Committee: National Institute for Health and Clinical Excellence - First Report of Session 2007-08
- ^ NHS fertility treatment
- ^ (Press release). Royal National Institute of Blind People. 2007-08-08. http://www.rnib.org.uk/aboutus/mediacentre/mediareleases/media2007/Pages/mediarelease14jun2007.aspx. Retrieved 2008-11-13.
- ^ Drug watchdog NICE 'spends more on 'spin' than tests on new treatments'
| https://enacademic.com/dic.nsf/enwiki/242269
E-book trend doesn’t lessen joy of reading
They are easily downloadable, cheap — if not free — and highly portable for those settling in on a long plane ride. And though no one’s quite sure when they first arrived, e-books have confounded the publishing world since they first reached mainstream success in 2007, when Amazon unveiled its Kindle e-reader.
For bibliophiles, the changing literary landscape poses a series of uncomfortable questions about the future of reading. What happens when a reader's digital library is erased? How are starving authors expected to support themselves? What will happen to the relationship between the reader and the printed page?
As technology brings more of the written word to the digital screen, these frightening questions warrant well-considered answers.
On Monday, author Scott Turow published a well-written article in The New York Times titled “The Slow Death of the American Author.” Turow, who is the president of the Authors Guild and expects to publish his 10th original novel this year, is perhaps one of the most qualified writers to explore the drawbacks of e-publishing and does so with remarkable poetic style.
Like several writers wary of e-book popularity, Turow cites the reasons why literary culture is so important in America — and it’s not so he can preserve his own status as a bestselling author.
“Authors practice one of the few professions directly protected in the Constitution, which instructs Congress ‘to promote the progress of Science and the useful Arts by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries,’” Turow writes. “The idea is that a diverse literary culture, created by authors whose livelihoods, and thus independence, can’t be threatened, is essential to democracy. That culture is now at risk.”
Turow goes on to describe the lessening value of copyright protections for published works, growing trends in book piracy and the problematic nature of e-book lending in public libraries, all to publicize the negative effects that the electronic reading market has had on the lives of authors. Though none of his arguments are new per se, Turow does a fantastic job of compiling viable concerns that fans of the printed page might have after the recent surge in e-book sales.
Still, what’s most interesting about Turow’s article and recent exposés on the horrors of digital reading can be boiled down to a single question: Will technological advancements affect the personal interaction between a reader and a beloved novel?
According to a 2013 article on Publishing Unleashed, e-books accounted for 22 percent of the publishing market by the second quarter of 2012. Upon a first read, this might not seem like much, but according to that same article, this statistic marks a 49.5 percent increase in adult book sales from January 2011 to January 2012, which is quite a large jump considering that e-books hit a major boom just six years ago. In response to such rapidly growing numbers, PricewaterhouseCoopers' Global Entertainment and Media Outlook told paidContent that it expects e-books to make up 50 percent of the U.S. trade book market by 2016. It would seem that, like it or not, e-books are here to stay.
To be honest, I’m just as skeptical of e-books as the next person. I make a point of visiting Barnes & Noble at least every other month, buying all of my required readings in a print format and laughing at my Kindle-loving friends whenever the battery goes dead on their e-readers.
But at the same time, I’m also inspired by the durability of reading in American and Western culture. Though e-books might seem like the death of the novel as we know it, it’s important to remember that trends in literature and publishing have always been unstable. Mega bookstores, such as Barnes & Noble or Borders (which went out of business on Sept. 18, 2011), only appeared in the early ’80s, despite how old and familiar they might seem to today’s generation. And before these companies, there were smaller, family-owned book shops, which fizzled out in the wake of such grand competition.
Amazon, which was founded in 1994 and currently holds the largest share of the e-book market, only represents the latest development in reading trends. Because of Amazon's ability to sell used books for less than a dollar, the company quickly became the largest online bookseller before diversifying and selling other products. Now, five years after its launch of the first successful e-reader, Amazon is giving major publishers a run for their money. But don't fear, readers. This too shall pass.
I’m not saying that e-readers are a passing trend or that Amazon hasn’t permanently changed the publishing industry, but we should keep in mind that books have always adapted to meet the needs of the reading public. The publication of written stories has evolved from the stone tablet and the papyrus scroll, to the handwritten manuscript and, finally, to the modern printed — or digital — novel. Though we’re unsure of where books might be moving next, the evolution of publishing will most certainly be in the best interest of those who love reading. And as we navigate this evolution, issues like copyright and the rights of the author will fall into place.
So for now, just settle back into the bathtub with your favorite paperback or curl up in an armchair to download the latest New York Times bestseller. After all, what’s most important is the intimate relationship between readers and the stories that have the potential to change their lives.
Carrie Ruth Moore is a sophomore majoring in English. Her column “Cover to Cover” runs Thursdays. | https://dailytrojan.com/2013/04/10/e-book-trend-doesnt-lessen-joy-of-reading/ |
Russian Book Union vice president Leonid Palko discusses how the Russian book market is developing in terms of book sales, formats, and genres.
The Russian book publishing industry is steadily developing, despite ongoing challenges, many of them covered here at Publishing Perspectives. Recently there’s been a concern that even as consumers may have more money to spend on books, there appears to be a drop in the national interest in reading, a worry evolving now for some years there as in other world markets where digital entertainment competes for users’ time, funds, and interest.
In Russia, the development of the book business is supported by the state, which in the last several years has significantly expanded its aid to both publishers and booksellers. However, more is needed from Moscow, publishing players say, to improve the current outlook for the industry.
Leonid Palko is managing vice president with the Russian Book Union, a public association that brings together leading publishers in Russia with distributors and printing houses.
Publishing Perspectives has spoken with Palko about how the state of the industry looks to him and his colleagues today and where they see the likeliest paths forward in a complex and shifting market.
Publishing Perspectives: Let’s start with that important question of Russian consumers’ interest level in reading. What genres do you see now as the most popular?
LP: The three biggest categories in our market are fiction, children's books, and educational literature. Normally, books in these three categories accounted for 75 percent of the turnover in our market.
But in 2018, the balance in the market changed significantly, reflected in a notable decline in the shares held by both fiction and educational literature. The children's books sector remained stable, essentially at the same level as in 2017.
So Russian publishers this year are watching these trends and starting to revise their catalogues and investment priorities for the next several years.
PP: Do you think the market may face more consolidation in years to come, particularly as these trends in reading play out? We know the positions of the current majors—Eksmo-AST, Prosveshenie—are really strong. And both of those companies appear to be continuing robust expansions in the domestic market.
LP: We see heavy consolidation to be less likely going forward. The share of our top-five publishers in Russia is currently estimated at 22 percent in terms of title output and 46 percent in units sold in the marketplace. There are no signs of any unfair competitive practices being used by these companies. Large publishers simply have more opportunities for development than their smaller competitors.
And we see comparable situations in other world markets. For example, in the UK, the local market is also dominated by several leading players among which are Pan Macmillan, Penguin Random House, HarperCollins, Cambridge University Press, and Oxford University Press. But as we see it, that doesn’t create competitive obstacles and problems for the development of the market or for independent publishing there.
PP: How do you assess the current contribution of the state to the industry? Do you think what Moscow is doing is sufficient?
LP: The current level of state support for the industry is sufficient.
Russia implements a federal program to stimulate book publishing in the country. Thanks to this approach, the government has supported the publication of more than 4,600 titles since 2012. Most of these are academic collections of works, such as The Great Russian Encyclopedia, children’s and fiction books, scientific and university publications.
It wouldn’t have been possible to publish many of these titles without this state support provided in recent years.
PP: In the Soviet era, there was a system of search-and-support to discover and develop young, talented authors. Does the same initiative today exist in Russia? How difficult can it be for younger debut writers to be published here in Russia?
LP: Such an initiative still exists, yes, and we see young, talented Russian authors every year getting more opportunities to improve and unlock their potential.
Since the beginning of this century, the All-Russian Forum of Young Writers of Russia has taken place annually in the Moscow region. So far, the forum has helped discover recognized writers including Zakhar Prilepin, Sergey Shargunov, Alisa Ganieva, Alexander Snegirev, Denis Gutsko, Alexey Ivanov, and Roman Senchin.
PP: Which drivers do you think may be the most important in stimulating growth in the Russian book business in the next year?
LP: At the moment, we see the book market in a significant transition to online formats. And at the same time, more growth in our national bookstore chains can happen only if more stores are opened.
Online sales are a major growth factor in the market, pressuring physical retail. Last year, book sales online grew by 18 percent, year over year. By comparison, sales in traditional bookstore chains increased by 13 percent, while sales of independent booksellers fell by 5 percent.
And in digital formats, we see ebooks continuing to be a major driver in the Russian book market for the next several years. We also see audiobooks and self-publishing beginning to pick up speed in their development.
One of the distinctive market trends we’re also watching closely is a growing interest among Russian readers in higher-priced books, copies they’ll keep and value. The latest statistics show that in the past, books priced at 600 rubles and above (US$9.25) accounted for about 10 percent of sales.
But today, we see books in this price range accounting for 20 percent or more of market action, and we think this will grow.
More from Publishing Perspectives on the Russian publishing industry and market is here. | https://publishingperspectives.com/2019/04/interview-leonid-palko-russian-book-union-2019-distinctive-market-trends/ |
Green Bay will offer free high-speed internet in parks where neighborhoods are short on broadband
GREEN BAY - Up to four Green Bay parks will become high-speed internet hotspots under a new city plan for federal pandemic relief money.
City staff members want to use $253,000 of CARES Act community development block grant funding to extend the city's high-speed internet service to several parks, each in a low- or moderate-income neighborhood of Green Bay where roughly one in every five households does not have an internet connection.
The first four parks up for consideration are Seymour Park on Ashland Avenue, Eastman Park in the Olde North neighborhood, Navarino Park on South Jackson Street and St. John's Park in the Downtown Green Bay neighborhood.
At Seymour Park, high-speed access would be a benefit to kids who use the park in summer, people who are homeless and job seekers for whom a computer and internet connection are necessities, said Miriah Kelley, a Seymour Park Neighborhood Association board member.
"I think it could help," Kelley said. "It's almost impossible to apply for a job with a paper application."
Smartphones provided to students often have a limited number of minutes which are quickly consumed, she said, making a free hotspot more attractive.
The grant would not be enough to cover the physical costs to build out internet cable to the four parks, said Mike Hronek, the city of Green Bay's IT administrator. But Nsight Telservices Chief Innovation Officer Brighid Riordan, a member of the city's Redevelopment Authority which discussed the plan Tuesday, offered to collaborate with the city to help cut costs and maximize the number of locations reached.
"We have a lot of fiber in the ground. There’s more than one outcome we can have if we work together," Riordan said.
RELATED:Wisconsin program to cover internet bills for those at risk of housing instability, homelessness
Collaborating with Nsight would help with a longer-term goal of extending internet service to more parks throughout the city, Hronek said. He said extending internet access to the parks would also enable the city to install security cameras in the parks for public safety purposes.
Navarino Neighborhood Association residents have endorsed extending internet to parks so security cameras can be installed to help combat drug dealing and other safety issues in the park, said Kayla Branam, president of the association. She's not as supportive of the free, public internet access.
"That raises red flags for me," Branam said. "A bright, sunny park is not the best place to do schoolwork. I worry it will attract people who loiter and do nothing."
She said Navarino residents already can tap free internet at the Brown County Central Library.
The four parks were selected in consultation with Green Bay Area Public School District staff members and other nonprofit service providers, who helped identify the areas where the need for broadband access is greatest, said Will Peters, a neighborhood development specialist with the city of Green Bay. The grant program's only requirement was that the parks be located in low- and moderate-income neighborhoods.
The hotspot would be set up in each park's pavilion or shelter building. There is no shelter in Navarino Park, though, so the hotspot would be erected on a pole near the playground. The hotspots will not be able to cover the whole parks, but would be close enough to parking lots that people could connect to the internet while sitting in their vehicles.
"This isn’t the answer, the silver bullet to solve our access to broadband citywide, but we’re looking at it to help, to add a few extra spots to increase access for families and individuals who may not have access at home," Peters said.
Almost one in four households in those neighborhoods does not have an internet connection, and barely half have a service that can provide a download speed of 25 megabits per second, the current federal standard for broadband internet service.
The coronavirus pandemic forced businesses and schools to rely on online options for employees and students for much of the past year and a half. The moves brought more attention to the need for high-speed internet access, especially in rural areas where access to broadband needed for virtual conferencing often involves data caps and higher prices.
The pandemic also exposed a specific need for internet access among families with school-age children. Green Bay Area Public Schools, for example, spent $2.1 million in 2020 to provide internet hotspots, computers and software licenses to help students shift to virtual learning.
RELATED:Northeast Wisconsin races to improve rural broadband after pandemic exposes 'horrible' internet speeds
RELATED:What SpaceX rocket launches have to do with high-speed internet in rural Wisconsin, plus your guide to options
The city's use of the money would be permitted under the Essential Frontline Employee Relief Program, created to provide quality of life improvements for low- and moderate-income families.
Green Bay's Redevelopment Authority approved the funding allocation Tuesday and it will go to City Council for final approval. The board also gave the city leeway to adjust the specific parks based on where Nsight has resources to offer, though the parks still must be in low- and moderate-income areas.
The plan would use existing internet pipelines owned by the city and Brown County to extend internet service most of the way to the parks in question. The city would install new underground conduit to cover the remaining distance to the parks.
Contact Jeff Bollier at (920) 431-8387 or [email protected]. Follow him on Twitter at @GBstreetwise. | https://www.greenbaypressgazette.com/story/news/2021/08/11/green-bay-offer-free-high-speed-internet-parks-via-cares-act/5537293001/ |