The Defense of Yavin 4
Type: Objective
Faction: Light Rebel Alliance
Yavin 4.
Interrupt: When you play a Vehicle unit, discard X cards from your hand to reduce its cost by X (to a minimum of 1).
Health: 5
Resources Generated: 1
Block Number: 8 - 1 of 6
Set: Core
Number: 0138
Illustrator: Ralph McQuarrie
Block Stats:
# Units: 2
Total Cost: 9
Average Cost: 1.8
Total Force Icons: 6
Average Force Icons: 1.2
6 Comments
What happens if this effect is countered by "It's Worse" (34-6)? Can the LS player trigger it again by spending extra cards? And if not (or if they choose not to) what happens with the resource (or resources) that were spent to attempt to play the unit?
Q.1. Can the LS player trigger it again by spending extra cards?
Rulebook, page 24:
"A card’s interrupt effect may only be resolved once per triggering condition."
It's Worse:
"Interrupt: When another Interrupt effect is executed, cancel its effects."
----
Q.2. What happens with the resource (or resources) that were spent to attempt to play the unit?
Interrupts resolve before their triggering effect. Resources would not be generated to play the vehicle until after The Defense of Yavin 4's interrupt is cancelled or resolved.
If the effect is cancelled and the LS player is unable to generate enough resources to play the unit, the unit returns to hand and is not played.
Rulebook, page 16:
"When a player wishes to play a card or is required to spend resources to execute an ability, he first reveals the card or designates the desired ability to the opponent. Then he generates the required number of resources from resource-providing cards in his play area to pay for the cost. After doing so, he plays the card or executes the designated ability. If a player cannot generate enough resources to pay the resource cost, the card cannot be played (it returns to his hand) nor the designated ability executed."
BakaMatt has it exactly correct. Additionally, if It's Worse is used to cancel DoY4, the discarded cards remain discarded and if the LS player can pay to finish playing the vehicle they must do so.
What if the LS player has two of these objectives in play? Would discarding 1 card reduce the cost by 2 resources? Had some discussion around this situation in our last game and was curious as to what others thought. I was leaning toward it not doubling the effect since it was an interrupt and not a constant. Appreciate input/feedback from others.
Your reasoning is correct. Even if the LS player has two copies in play, each discarded card would only reduce the cost of the vehicle by 1. Even so, it might be worth having two copies of the objective out so your opponent has to destroy both to disable the effect.
Thank you for confirming my position. | http://www.cardgamedb.com/index.php/starwars/star-wars-card-spoilers/_/core/the-defense-of-yavin-4-core-8-1 |
The Royal College of Physicians library holds more than 100 volumes stolen from John Dee during his lifetime, the largest single collection of Dee’s books in the world.
Our exhibition ‘Scholar, courtier, magician: the lost library of John Dee’ ran from 18 January 2016 to 28 July 2016.
John Dee (1527–1609) was one of Tudor England’s most extraordinary and enigmatic figures – a Renaissance polymath, with interests in almost all branches of learning. He served Elizabeth I at court, advised navigators on trade routes to the ‘New World’, travelled throughout Europe and studied ancient history, astronomy, cryptography and mathematics. He is also known for his passion for mystical subjects, including astrology, alchemy and the world of angels.
Dee built, and lost, one of the greatest private libraries of 16th century England. He claimed to own over 3,000 books and 1,000 manuscripts, which he kept at his home in Mortlake near London, on the River Thames.
The authors and subjects of Dee’s books are wide-ranging, and reflect his extraordinary breadth of knowledge and expertise. They include diverse topics such as mathematics, natural history, music, astronomy, military history, cryptography, ancient history and alchemy.
These books give us an extraordinary insight into Dee’s interests and beliefs – often in his own words – through his hand-written illustrations and annotations. The books are identified as belonging to Dee by these annotations, by Dee’s distinctive signature and by evidence from both Dee’s and the RCP’s library catalogues. Details of all the books at the RCP believed to have been Dee’s are available in the library catalogue and in the handlist to the collection and exhibition.
While Dee travelled to Europe in the 1580s, he entrusted the care of his library and laboratories to his brother-in-law Nicholas Fromond. But according to Dee, he ‘unduely sold it presently upon my departure, or caused it to be carried away’. Dee was devastated by the loss of his library. He later recovered some items, but many remained lost.
We know that a large number of Dee’s books came into the possession of Nicholas Saunder. Little is known about Saunder, or whether he personally stole Dee’s books. He may have been a former pupil of Dee’s; the presence of multiple copies of some books in Dee’s library catalogue suggests that Dee kept additional copies for pupils. Saunder must have known that his books once belonged to Dee, because he repeatedly tried to erase or overwrite Dee’s signature with his own. Given that several books have part of the title page missing, we can also assume that Saunder probably cut and tore signatures from some books.
Saunder’s collections later passed to Henry Pierrepont, the Marquis of Dorchester: a devoted book collector. Dorchester’s family presented his entire library to the RCP after his death in 1680, where this exceptional collection of early printed books remains today.
Read more about John Dee's books on the library, archive and museum blog and in an article from the RCP magazine Commentary. | https://www.rcplondon.ac.uk/node/1541 |
Some Windows functions are reserved for the command line, also known as the Command Prompt or CMD. There are a few useful Windows commands that can help you become a network expert. Do you want to know them?
6 of the Most Useful Windows CMD Commands to Become a Network Expert
Therefore, we have compiled the 6 most useful Windows CMD commands that everyone should know and use on their computers.
Whether you use the Command Prompt (CMD) or the new PowerShell (ISE), it is very likely that there are dozens of commands we do not know about that can be tremendously useful in certain situations. As it is not possible to cover all the possibilities of this tool, we have selected six quite useful ones, all related to networks.
All the functions that we will use through these commands work in Windows XP, Windows Vista, Windows 8, Windows 8.1 and Windows 10. To open the command line, we must go to Start > Run and type CMD, or use Cortana to find CMD. We can also right-click on the Start menu to open Windows PowerShell, the new alternative from Microsoft.
6 Most Useful Windows CMD Commands
The commands selected for this collection are as follows.
IPConfig – Find your IP address in a simple way
We can locate our IP address in different ways, such as in the Control Panel. However, a simple ipconfig at the command prompt quickly returns the information we need. In addition, we will also see the default gateway, which serves as the access address of the router. For more details, we can use ipconfig /all.
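For example, typed directly at the prompt (the exact output varies from machine to machine):

```
ipconfig
ipconfig /all
```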
IPConfig /flushdns – Resolves browsing problems
When we change the DNS of our connection, the changes may not be applied immediately. To force them, we have to “clean” (flush) the DNS cache currently stored on the computer. In addition, this is a very useful function for solving navigation problems in certain cases.
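A one-line example; no special privileges are needed, and Windows prints a short confirmation once the DNS resolver cache has been flushed:

```
ipconfig /flushdns
```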
Ping and tracert – more options to solve navigation problems
If the above option can serve us in certain cases, these two commands can save our lives many times over. Thanks to them, we will be able to identify network problems. First, we have the ping command, which we have to accompany with an IP address or the URL of a website. This will cause the destination server to respond, indicating latency and lost packets.
For its part, the tracert command “traces” a path between our connection and the destination server. Again, we have to accompany the command with an IP address or the URL of a web page. Quickly, we will see where our connection goes and how long it takes to “reach” each stop. If it does not arrive, we can detect where the fault lies.
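For instance, using a well-known public DNS address and a reserved example domain (swap in whichever targets you are actually diagnosing):

```
ping 8.8.8.8
tracert example.com
```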
Netstat -an – list of connections and ports used
The netstat command is especially interesting for showing different statistics about our network, and we can add certain switches to get more specific information. One of the most interesting is netstat -an, which shows all active connections on our computer and all the ports that are being used.
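Run it as shown below; on modern versions of Windows the extra -o switch also lists the owning process ID of each connection:

```
netstat -an
netstat -ano
```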
Nslookup – Find the IP address behind a domain
When we enter a URL in the browser, it is responsible, along with the DNS servers, for locating the IP that corresponds to this address to allow us access to that web page. With the command prompt, we can do that ourselves by entering the nslookup command together with an Internet address. We can also do the opposite operation (a reverse lookup) by entering the command together with an IP address.
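Two illustrative lookups; the domain is a reserved example name, and the IP address is simply a placeholder for whatever address you want to resolve in reverse:

```
nslookup example.com
nslookup 8.8.8.8
```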
Telnet – Connect to Telnet servers
The Windows Telnet client is not installed by default, and it is possible that at some point we will need to access one of these servers. Although it is true that we can install it from the Control Panel, we can use the telnet command from the Command Prompt to quickly and easily access one of these servers, without needing to install any third-party software.
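If the client is missing, it can be enabled from an elevated prompt and then used directly. The DISM feature name below is the standard one on recent Windows versions, and the host and port are placeholders:

```
dism /online /Enable-Feature /FeatureName:TelnetClient
telnet mail.example.com 25
```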
These are the 6 Windows CMD commands every Windows user should know to become a network expert. I hope this article helps you learn some new commands. If you are already aware of these commands, share your experiences with our readers.
Would you like to add more useful Windows commands to this list? Then let us know in the comment section below. We will review them and add them to our list in the next article update.
Also, if you have any queries regarding any of the commands, let us know in the comment section below. We will get back to you to solve all your queries as soon as possible.
Which Windows CMD command do you use the most? | https://www.alltop9.com/6-useful-windows-cmd-commands-become-network-expert/ |
In October 2015 the US administration abandoned its efforts to build up a new rebel force inside Syria to combat the Islamic State, acknowledging the failure of its $500 million campaign to train thousands of fighters and announcing that it will instead use the money to provide ammunition and some weapons for groups already engaged in the battle. The decision to change the policy was made after mounting evidence that the training mission had resulted in no more than a handful of American-trained fighters.
The Pentagon spent 384 million dollars of the initially planned $500 million program on the preparation of 150 fighters, instead of the almost 3,000 militants it originally planned to train. At that point, US officials declared the program a bitter failure and shut it down, without ever mentioning that the Pentagon had spent over 2 million dollars per fighter trained.
Since then it has changed tack and started backing alternative groups. In southern Syria the US launched a new project – the New Syrian Army (NSA), a Sunni rebel group aligned with the Free Syria Army (FSA) and mainly made up of locals from Syria’s Deir ez-Zor Governorate. With a strength of a few hundred fighters, it has received training in Jordan, as well as arms from the US and UK. Furthermore, the US-led coalition provided air and artillery support.
On July 4, the US-backed New Syrian Army suffered another crippling defeat as a result of a massive Islamic State (IS) attack at Bir Mahrutha near the Syria-Jordan border.
This is the second setback in a row right after the US-trained force was defeated at Al-Bukamal on the Iraqi border.
On June 28, the NSA launched the al-Bukamal offensive, also known as Operation Day of Wrath. Al-Bukamal, just a few miles from the Iraqi frontier, is a key gateway city on the border between Syria and Iraq where the Euphrates River crosses the frontier. In 2014 it was captured by IS to effectively erase the border between Syria and Iraq. Losing it would be a huge symbolic and strategic blow to the Islamic State group.
The Pentagon-trained counterterrorism force dispatched 200 of its 300 fighters to the area. The advance was aided by anti-IS elements inside the city. Islamic State fighters encircled the rebels in a surprise ambush. They reportedly inflicted heavy casualties on the NSA forces seizing satellite communications equipment and weapons. It’s hard to say if the group will exist as a coherent force after such a rout.
It was logical to assume that such a large operation conducted by trained troops with cutting-edge equipment and surprise on their side was well prepared. Thorough planning was expected to be based on reliable intelligence and extensive logistic support. Evidently, it was not the case.
It’s hard to understand how such an attack, with all the advantages on the side of the NSA, could end in disaster. It will go down as one of the most striking defeats ever suffered by an American-backed Syrian force. In a broader sense, it shows one more time that the US military has failed in one of the training programs it runs in support of fighting the Islamic State. The crushing defeat represents yet another failure by the US to create an effective anti-IS Arab force in Syria. Earlier training missions had also gone awry.
It’s not only Syria. After disbanding the Iraqi military in the wake of the 2003 invasion, the US spent more than $25 billion through fiscal year 2012 to build a new force. Yet several Iraqi divisions collapsed under Islamic State attacks in 2014 and 2015, with soldiers shedding their weapons and uniforms and fleeing the battlefield. In the battle of Mosul (June 2014) around 1,500 IS fighters defeated 30,000 Iraqi troops.
In Yemen, American-trained troops and counterterrorism forces crumbled against attacks by Houthi rebels who wound up overrunning the capital in 2014 forcing the government into exile. The battle is now being fought mostly via a Saudi-led air campaign, which is hardly a success story.
In Afghanistan, the United States has spent about $65 billion to build the army and police. In October 2015 US-backed Afghan security forces suffered a setback in Kunduz.
Today thousands of Afghan Army, police and militia defenders display poor performance against the Taliban force, which is much smaller in numbers.
In northwest Africa, the United States has spent more than $600 million to combat Islamist militancy, with training programs stretching from Morocco to Chad. American officials once heralded Mali’s military as an exemplary partner. But in 2012, battle-hardened Islamist fighters returned from combat in Libya to rout the military, including units trained by United States Special Forces. That defeat, followed by a coup led by an American-trained officer, Capt. Amadou Haya Sanogo, astounded US commanders.
French, United Nations and European Union forces now carry out training and security missions in Mali.
The American government has invested nearly $1 billion in the overall strategy in Somalia. But even with the gains, the Shabab militants have been able to carry out bombings in Mogadishu, the capital, and in neighboring countries.
Tens of billions of dollars spent by the US in recent years to train security forces across the Middle East, North Africa and elsewhere have not succeeded in transforming local fighters into effective, long-term militaries. It calls into question the effectiveness of the American conflict management policy. «Our track record at building security forces over the past 15 years is miserable», said Karl W. Eikenberry, a former military commander and United States ambassador in Afghanistan.
The US cooperation with Kurds in the northern part of Syria has its limits, while all the attempts to form a capable Sunni Arab fighting force have failed. It means, the policy aimed at bringing to power a pro-American puppet regime with its military trained by US instructors and armed with US-made weapons is questionable at best. It just does not work, neither in Syria, nor in Iraq, nor in any other country. US officials should acknowledge these realities.
At the same time, a broader regional coalition could be a powerful tool against the Islamic State. True, the United States still has a strong military presence in the area, as well as strong ties to the Kurds. But it also has weak points, such as poor intelligence in Syria and a failed military training program for the Syrian opposition, a troubled relationship with the ineffective Iraqi government and few links with Iran. Russia has the leverage in Syria that the US lacks: military partnership with the Syrian government and its forces operating on the ground, working ties with other actors like the Iranian government, and an intelligence-sharing agreement with Iraq, Syria, and Iran, that could well include Iranian allies like Hezbollah.
Working together, the US and Russia could take advantage of their respective ties with the regional actors. It is worth mentioning Henry A. Kissinger’s Primakov Lecture at the Gorchakov Fund in Moscow in February 2016, where he emphasized that «Today threats more frequently arise from the disintegration of state power and the growing number of ungoverned territories. This spreading power vacuum cannot be dealt with by any state, no matter how powerful, on an exclusively national basis. It requires sustained cooperation between the United States and Russia, and other major powers».
Perhaps, Syria will never be the same country we knew for the past seventy years. It will have to be put together again in a totally new way. This can only result from negotiations among the various Syrian players (minus IS, Jabhat al-Nusra and some other extremist groups), with the assistance of the international community, including the US and Russia. | https://www.globalresearch.ca/us-mission-to-train-syrian-opposition-forces-goes-awry/5534974 |
1. Field of the Invention
The present invention relates to an automobile on-board and/or portable telephone system in which the number of channels can be increased easily.
2. Description of the Related Art
In recent years, automobile on-board and/or portable telephone systems of the code division multiple access (CDMA) type have been developed for practical use as described in a paper “On the System Design Aspects of Code Division Multiple Access (CDMA) Applied to Digital Cellular and Personal Communications Networks”, May 19-22, 1991, IEEE Vehicular Technology Conference. A conventional example of the construction of the automobile on-board and/or portable telephone system of the CDMA type is shown in FIG. 3. In the Figure, reference numeral 1 designates units at the transmitter side such as a base station and 2 units at the receiver side such as an automobile on-board telephone or a portable telephone. Denoted by reference numerals 3, 4 and 5 are information input lines which are provided, in the units at the transmitter side 1, in correspondence to channel numbers assigned to individual users and to which information from the individual users is inputted, the information input lines 3, 4 and 5 corresponding to channel numbers #1, #2 and #3, respectively. Reference numerals 6, 7 and 8 designate spread modulators connected to the information input lines 3, 4 and 5, respectively, and operative to perform spread processings in accordance with spread codes corresponding to the individual channel numbers, and reference numeral 9 designates a combiner for synthesis and transmission of spread signals of a plurality of users. Denoted by reference numeral 10 is a despreader adapted to perform, in the units at the receiver side 2, a despread processing in accordance with a spread code of a channel assigned to each user. In the units at the transmitter side 1, the spread modulators 6, 7 and 8 are supplied with parameters W1(t), W2(t) and Wm(t) representative of orthogonal spread codes, respectively, and a parameter PN(t) representative of a pseudo-random noise series, and the orthogonal spread codes are multiplied by the pseudo-random noise series to produce spread codes corresponding to the individual channels and spread processings are carried out in accordance with the spread codes. In the following description, the pseudo-random noise series is referred to as the “PN” series. In the units at the receiver side 2, each equipment has a despreader 10 and when the channel number of the units at the receiver side 2 shown in FIG. 3 is #i, that despreader 10 is supplied with a parameter Wi(t) representative of an orthogonal spread code and the parameter PN(t) representative of the PN series to perform a despread processing in accordance with a spread code corresponding to that channel. To perform the spread and despread processings as above, spread codes as exemplified in FIG. 4 are used inside a certain cell in correspondence to channel numbers assigned to individual users.
In the automobile on-board and/or portable telephone system constructed as above, when user information is inputted from each information input line 3, 4 or 5 at a predetermined information transmission bit rate, for example, B(bps), a spread processing is carried out, in the units at the transmitter side 1, by the spread modulator 6, 7 or 8 in accordance with a spread code corresponding to a channel number assigned to a user of interest and then spread signals of a plurality of users are combined in the combiner 9 and transmitted. On the other hand, when a combined spread signal is received in the units at the receiver side 2, the combined spread signal is subjected to a despread processing by the despreader 10 in accordance with a spread code of a channel number assigned to each user to reproduce the information at the information transmission bit rate B(bps) and the reproduced information is delivered out through an information output line 11.
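As an illustration only, here is a toy numpy sketch of the spread-and-despread round trip described above. It is not the patent's circuitry: the 8-chip code values, the spreading factor and the single-symbol payload are all invented for the example.

```python
import numpy as np

# Toy spreading factor of 8; the system described above uses 64-chip Walsh codes.
walsh_i = np.array([1, -1, 1, -1, 1, -1, 1, -1])  # orthogonal code Wi(t), values assumed
rng = np.random.default_rng(0)
pn = rng.choice([-1, 1], size=8)                  # pseudo-random noise series PN(t)

symbol = 1                                        # one user information symbol
spread = symbol * walsh_i * pn                    # spread processing in the modulator

# Despread: multiply by the same Wi(t) and PN(t) again, then integrate over the chips.
recovered = np.sum(spread * walsh_i * pn) / len(walsh_i)
print(recovered)                                  # 1.0, the original symbol
```

Because every chip of Wi(t) and PN(t) is +/-1, multiplying by them twice cancels out, which is why the integration recovers the desired symbol while other users' multiplexed signals average toward zero as interference.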
Waveforms are changed as shown in FIGS. 5 to 7 when a signal representative of user information received at a certain information transmission bit rate is subjected to a spread processing, transmitted and then subjected to a despread processing. The user information inputted from the information input line 3, 4 or 5 has the form of a spectrum signal 12 having a bandwidth of B and a power spectrum density of P. When this spectrum signal 12 undergoes a spread processing in the spread modulator 6, 7 or 8, power in the bandwidth B is spread to a spread bandwidth S of a spread multiplexed spectrum on a link path as shown in FIG. 6 to provide a spread signal 13 shown therein. Since the spread modulators 6, 7 and 8 correspond to channel numbers assigned to the individual users and the spread codes are set to different values in correspondence to the respective channel numbers as shown in FIG. 4, the spread signal 13 differs from channel to channel to assume a multiplexed structure. FIG. 6 shows an example of a 4-channel spread multiplexed spectrum.
When the spread signal 13 as above is subjected to a despread processing in the units at the receiver side 2, the despread processing is carried out in the units at the receiver side 2 under the condition that the orthogonal spread code is Wi(t) and the PN series is PN(t) and consequently, of the 4-channel spread multiplexed spectrum, a spread signal of a channel corresponding to this spread code, that is, the power of a desired wave, is again concentrated in the bandwidth B and multiplexed signals of the other users (for three channels) remain spread waveforms which exist as interference waves. Then when the multiplexed spectrum is filtered to pass the band B in the units at the receiver side 2, there results a desired wave 14 subject to the despread and a spectrum of interference wave 15. As long as the ratio between power of the desired wave 14 and power of the interference wave 15, that is, the signal to interference ratio (SIR), can be maintained at a predetermined value, the necessary quality of communication can be maintained.
Also, when B=9600, that is, the information transmission bit rate is 9600 bps, a maximum of 64 channels can be set within a range in which the SIR can be maintained at a predetermined value from the viewpoint of coping with the interference, and there is an example of an automobile on-board and/or portable telephone system using 64 kinds of Walsh codes as the orthogonal spread codes.
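For reference, a family of 64 mutually orthogonal +/-1 codes such as the Walsh set mentioned above can be generated from a Hadamard matrix. This is a minimal sketch (the row ordering differs from the standard Walsh indexing, which does not affect orthogonality):

```python
import numpy as np
from scipy.linalg import hadamard

W = hadamard(64)  # 64x64 matrix of +/-1 entries; its rows serve as 64 orthogonal codes
# Distinct rows are orthogonal, so W @ W.T equals 64 times the identity matrix.
assert np.array_equal(W @ W.T, 64 * np.eye(64, dtype=int))
```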
In the aforementioned conventional automobile on-board and/or portable telephone system, however, the maximum number of channels of the outbound link path (a link path bound from the base station to an automobile on-board telephone or a portable telephone) in one cell is limited to the number of orthogonal spread codes (assumed to be m) and, for example, even if a voice signal coding (coding/decoding) unit having a rate which is half the presently existing rate becomes applicable in the future in the field of communication, there will be a disadvantage in that the subscriber capacity cannot be increased because of a shortage of the number of assigned codes or series, in spite of the fact that link paths in excess of m channels could be set up in one cell from the viewpoint of the necessary SIR and the requisite quality could be maintained for performing communication.
More specifically, in the case where the information transmission bit rate is, for example, halved, the bandwidth becomes B/2 in a signal spectrum 16 of user information as shown in FIG. 8 and when this spectrum signal 16 having a power spectrum density of Po is subjected to a spread processing by the spread modulator 6, 7 or 8, power inside the bandwidth B/2 is spread to a spread bandwidth S of a spread multiplexed spectrum on a link path of FIG. 9 and there results a spread signal 17 as shown in FIG. 9. Since, as described previously, the spread modulators 6, 7 and 8 are set with spread code values which are different for different channel numbers, the spread signal 17 differs for the individual channels and has a multiplexed structure. FIG. 9 shows an example of a 7-channel spread multiplexed spectrum.
When the spread signal 17 is subjected to a despread processing in the units at the receiver side 2, the despreader 10 performs the despread processing in accordance with Wi(t) representative of the orthogonal spread code and PN(t) representative of the PN series and consequently, of the 7-channel spread multiplexed spectrum, a spread signal corresponding to this spread code, that is, the power of a desired wave 18, is again concentrated to the bandwidth B/2 and the multiplexed signals of the other users (for 6 channels) remain spread waveforms which exist as interference waves 19. Then when the multiplexed spectrum is filtered to pass the band B in the units at the receiver side 2, there results a desired wave 18 subject to the despread and a spectrum of interference waves 19 as shown in FIG. 10. As long as the ratio between power of the desired wave 18 and power of the interference waves 19, that is, the signal to interference ratio (SIR), can be maintained at a predetermined value, the necessary quality of communication can be maintained. In this case of half rate, since the SIR can be maintained at a predetermined value, the number of the interference waves 19 for maintaining the necessary communication quality can be increased to a value which is twice the presently existing one. For simplicity of explanation, the number of multiplexed channels is small in the example (presently existing) of FIGS. 5 to 7 and the example (in the future) of FIGS. 8 to 10, but actually the number of multiplexed channels is large (presently, 64 channels) and the number of multiplexed channels can be increased approximately twofold (in this case, amounting to 128 channels). Accordingly, if the capacity of subscribers is not increased but is left as the existing one, then the automobile on-board and/or portable telephone system will be used wastefully.
| |
Let’s split this question into three main parts, each one of the most important revelations in modern physics (‘Quantum’, ‘Gravity’ and ‘Antimatter’), before getting to grips with the whole question.
Part 1: Quantum
Quantum Mechanics is all about physics at extremely small scales. Physicists like Niels Bohr and Max Planck started to realize that atoms didn’t seem to behave the way scientists like Isaac Newton or Albert Einstein predicted they should. When we look at very small things (for example atoms, or parts of atoms like electrons) they don’t behave as we would expect in classical physics. Imagine you are throwing a tennis ball against a wall: in classical physics we can use Newton’s laws to predict what will happen. We know the ball will bounce off the wall, and we can even model how it will travel if we know the ball's position and its momentum, which is mass x velocity* (we can improve our model even further if we make some assumptions about the elasticity of the ball, friction and air resistance).
Snapshot from a video showing Quantum Tunnelling and how it links to smell; see the video made by students at UCL’s Phys Film Makers: https://www.youtube.com/watch?v=i_w9SXTYCkc&list=PLzzIa5YfF-iIu7cpc_Ib4BvUFL79QRLzM
Now when we start looking at atoms and smaller, something strange starts to happen. A German physicist called Werner Heisenberg calculated that you can only know -exactly- the position -or- the momentum of an atom, never both at once. So if you know an atom's position exactly, you can’t know exactly what its momentum will be; this is called the Heisenberg Uncertainty Principle. Even more weirdly, if you know the momentum of an atom (which is akin to its speed, internal energy or temperature), then you don’t know exactly where it is! That’s like our ball being thrown against a wall and, instead of bouncing off the surface, being a bit inside and a bit outside of the wall.
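In symbols, the Uncertainty Principle bounds the product of the uncertainty in position (Δx) and the uncertainty in momentum (Δp) from below by the reduced Planck constant ħ:

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```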
Quantum Mechanics tells us the probability of finding the ball in a variety of locations. For all but the smallest of objects this doesn’t matter, so our tennis ball can be modelled using Classical (Newtonian) Mechanics, but as objects get smaller, down to the size of an atom, we have to take quantum effects into consideration. This isn’t just theory: the process of fusion that powers all the stars including our Sun wouldn’t be possible without this probability effect**, known as Quantum Tunnelling, which allows hydrogen atoms to get close enough to fuse and release the energy required to heat the star. A universe without quantum effects would be very different, and it seems unlikely life would be around to ask these questions in the first place.
Notes
* Velocity is a measure of speed, but it is a vector, so it has magnitude and direction. Speed is a scalar: it only has one value, the magnitude, and no direction is given.
** This probability factor, and not being able to determine things exactly no matter how good your measurements are, upset many scientists including Albert Einstein who suggested [The Universe] “does not play dice”. Paraphrased here so as to not bring in extra misconceptions that Einstein said he never intended.
Read Part 2 which is all about Gravity... | https://www.ucl.ac.uk/culture-online/ask-expert/your-questions-answered/pt1-could-quantum-gravity-particles-be-made-antimatter-and-travel |
Every year, millions of tons of solid waste are generated in households around the world and disposed of in council and municipal landfill sites. Today, a new generation of Waste to Energy (WTE) conversion technologies is emerging which holds the potential of creating clean renewable energy from solid and liquid waste materials.
The energy content available from combustion of solid waste represents a significant “alternative energy” supply to help reduce our use of, and dependency on, conventional fossil fuels. “Waste” by definition includes any waste materials other than liquids and gases, and any non-usable by-product generated as a result of a process or production that is no longer deemed valuable and is therefore normally discarded.
Such waste materials typically originate either from the residential community as municipal solid waste or from commercial, light-industrial, and agricultural activities. Wastes generated from heavy industrial manufacturing and the chemical and medical industries are typically classified as hazardous wastes and are generally not used for conversion to energy.
Traditionally the term “waste to energy” has generally referred to the practice of incineration of waste products either by burning the rubbish in the back yard or by large industrial incinerators to produce heat.
The category of waste to energy broadly describes any of a number of processes or technologies in which a useful byproduct (energy) is recovered from an otherwise unusable source.
Waste to energy technologies physically convert waste matter into more useful forms of fuel that can be used to supply energy. These waste to energy processes include thermal conversions such as combustion (incineration), pyrolysis and gasification, biological treatments such as anaerobic digestion and fermentation, and various combinations of the above.
For example, solid wastes can be converted into biomass wood pellets and along with gasified waste co-fired with fossil fuel coal in an existing conventional coal fired power station.
Energy can be derived from waste in a number of ways. Waste to energy processes include waste that has been treated and made into a solid fuel for incineration to produce heat and steam; waste that has been converted into biogas or syngas from both organic and inorganic sources; and biological technologies, in which bacterial fermentation is used to digest organic wastes and yield fuel.
Below are some of the more common ways in which waste is converted in energy.
- Combustion – This is by far the oldest, most common and well-proven thermal process using a wide variety of waste fuels. Municipal and household waste is directly combusted in large waste to energy incinerators as a fuel with minimal processing known as mass burning.
The earliest waste combustion systems were simple incinerators which produced heat and carbon dioxide, along with a variety of other pollutants, and had no energy recovery capabilities. Today the heat energy generated from the combustion process is used to turn water into steam, which is then used to power steam-turbine generators to produce electricity.
Most modern waste incinerators now incorporate heat recovery systems and air-pollution control systems. The mass burn process burns the waste virtually as it is received thereby eliminating the need to process the material prior to burning except for the removal of oversized items and obvious non-combustible metallic materials.
The problem with this mass burn approach is that, after combustion, the incinerator's ash and other pollutant removal systems must be capable of disposing of every bit of the combusted material coming out of the incinerator, at the same rate and volume as it is going in.
- Gasification – Note that gasification of waste materials is not the same as incineration. Incineration is the burning of waste fuels in an oxygen-rich environment, whereas gasification is the conversion of waste materials that takes place in the presence of limited amounts of oxygen.
Gasification is a thermochemical process that converts solid wastes into a mixture of combustible gases. Steam or the oxygen in the air is reacted at high temperature with the available carbon in the waste material to produce gases such as carbon monoxide, hydrogen and methane (see the simplified overall reaction below). The gasification process produces a syngas (hydrogen and carbon monoxide) which is used for generating electric power.
Whereas the incineration of waste to energy converts the fuel waste into energy directly on-site, thermal gasification of the waste materials allows the production of a gaseous fuel which can be easily collected and transported.
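The simplified overall reaction referred to above, steam reacting with hot carbon in the waste, is the classic water-gas reaction; real gasifiers involve several competing reactions alongside it:

```latex
\mathrm{C} + \mathrm{H_2O} \;\rightarrow\; \mathrm{CO} + \mathrm{H_2}
```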
- Pyrolysis – This is also a thermal process, similar to gasification above, which involves the thermal degradation of organic waste in the absence of free oxygen to produce combustible gases. In other words, pyrolysis uses heat to break down organic materials in the absence of oxygen.
Materials suitable for pyrolysis processing include coal, animal and human waste, food scraps, paper, cardboard, plastics and rubber. The pyrolytic process produces oil which can be used as a synthetic bio-diesel fuel or refined to produce other useful products.
Although pyrolysis technology has been around for a long time, its application to biomass and waste materials is a relatively recent development as pyrolytic products are more refined and therefore can be used with greater efficiency.
A common byproduct of pyrolysis is a kind of fine-grained bio-charcoal called “biochar”, which retains most of the carbon and nutrients contained in biomass so can be used as a soil enhancement to increase soil productivity.
- Anaerobic Digestion – Landfilling is still the primary method of disposal of municipal solid waste and if left undisturbed, landfill waste produces significant amounts of gaseous byproducts, consisting mainly of carbon dioxide (CO2) and methane (natural gas, CH4). This landfill gas or “biogas” is produced by the anaerobic (oxygen-free) digestion of organic matter.
Anaerobic digestion to produce biogas can either occur naturally producing a landfill gas, or inside a controlled environment like a biogas digester. A digester is a warmed, sealed, airless container where bacteria ferment an organic material such as liquid and semi-liquid slurries, animal wastes and manures in oxygen-free conditions to produce biogas.
The main advantage of anaerobic digestion for converting waste to energy fuel is that it deals with “wet waste” which normally may be difficult to dispose of. The amount of biogas produced is limited by the size of the digester tank, so is largely used as a fuel for small-scale operations, such as farms, where enough energy can be produced to run the farm. The biogas produced can be burned in a conventional gas boiler to produce heat or as fuel in a gas engine to generate electricity or fuel some of the farm vehicles.
- Fermentation – Fermentation uses various microorganisms and yeasts to produce liquid ethanol, a type of alcohol, from biomass and biowaste materials. The conversion of waste to energy by fermentation requires a series of chemical reactions to produce the ethanol biofuel. The first reaction is called hydrolysis, which converts organic materials into sugars. The sugars are then fermented to make dilute ethanol (the overall equation is sketched below), which is then further distilled to produce a biofuel-grade ethanol (ethyl alcohol).
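The overall equation sketched here is the standard conversion of glucose to ethanol and carbon dioxide, a simplification of the full biochemical pathway:

```latex
\mathrm{C_6H_{12}O_6} \;\rightarrow\; 2\,\mathrm{C_2H_5OH} + 2\,\mathrm{CO_2}
```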
With increasing global consumerism and population growth leading to an increase in the levels of waste materials we produce, waste management technologies are now helping to find alternatives to landfills by recycling or reusing as much of the mixed general waste as possible, in an attempt to turn waste into energy and waste into fuels.
Rather than sending residual wastes direct to landfill, advanced conversion technologies coupled with advanced pollution control systems can be employed to convert these calorific materials into clean energy.
Advanced waste-to-energy technologies can be used to produce biogas (methane and carbon dioxide), syngas (hydrogen and carbon monoxide), liquid biofuels (ethanol and biodiesel), or pure hydrogen. Just as oil, coal and gas are used as fuels in a fossil fuel fired power stations, these alternative biofuels can also be converted into electricity.
Converting waste to energy has many advantages: it not only reduces the amount of landfill dumping, but also reduces the greenhouse-gas emissions and pollution we pump into the atmosphere each and every year, as well as our dependence on non-renewable fossil fuels.
However, many waste to energy conversion technologies are designed to handle only one or a few types of waste, and it can be difficult to fully separate the different types of waste materials.
Advances in non-incineration conversion technologies and methods like pyrolysis and thermal gasification are providing ways of generating clean energy from waste materials that avoid many of the pollution concerns around conventional incineration and combustion.
Today, we have the technologies and options available to us to separate the bio-waste which should be recycled, from the waste that can be used as a valuable and future energy source turning waste and other renewable waste fuels into clean energy. | https://www.alternative-energy-tutorials.com/biomass/waste-to-energy-conversion.html |
The invention belongs to the field of ball processing devices, and particularly relates to a small robot suitable for picking up scattered balls, which comprises a bottom plate, a driving assembly and a grabbing cylinder, wherein the driving assembly is arranged at the bottom of the bottom plate; the top side of the bottom plate is fixedly connected with a mechanical arm and a collecting basket; the grabbing cylinder is mounted at the top of the mechanical arm; and a first cavity and a second cavity are formed in the grabbing cylinder. Through the arrangement of the bottom plate, the first motor, the driving assembly, the mechanical arm, the grabbing cylinder, the first gear, the second gear, a worm, a rack and a grabbing block, during use the device is driven by the driving assembly to move around a sports venue, the grabbing cylinder is driven by the mechanical arm to move, and the grabbing block is then driven by the first motor to grab ball articles; picking up of ball articles is thus realized. The device replaces manual picking, so the working intensity of workers can be reduced, the workload can be reduced, and workers can conveniently clean a sports venue. | |
9 comments:
If it was truly dead then you were wise to take it down-it was awfully close to your house. However--bird poop is part of living anywhere--even in the city--now you won't hear the chirping birds anymore--how sad.
Oh we still have the chirping of the birds as we have other trees in our backyard. That was just one that we took down. In fact, robins and sparrows love nesting at our patio, so yeah, the birds are pretty much around us.
Such a nice neighbor! I like living and seeing green in my surroundings, but during fall, I dread having to sweep the dead leaves on the ground.
Oh that is one favorite thing that we do during fall, sweeping the leaves and playing in them lol. Yeah, we are blessed to have great neighbors that treat us like family!
Dead trees down here are a hangout for ants and termites. In a storm it might have taken down your nice fence. Glad that you got that out of your way. I see you still have trees for the birdies.
You are right on the money Sherrie, I forgot to mention that the tree had a termite house in the middle of it; that's why I was glad that we took it down. I think that was the reason why it died, the termites took over the life of the apple tree.
As it was a dead tree, no harm done in my eyes... at least now you will have a poop-free garden ;-)
Absolutely Agata. Besides, the birds use the fence and the other trees in our backyard to hang out.
If the tree was dead, no biggie; it should come down. Since it's so close to the house, it could do real damage if for some reason it fell over.
| https://www.mycountryroads.com/2014/07/chopped-down-apple-tree.html |
Ghost ships, lost cities, unsolved crimes and mysteries, ancient mythology, disappeared explorers - if it’s a historical mystery, it’s going under the microscope.
History has always been fascinating, but more fascinating than what we know is what we don’t. In this podcast Ashleigh Stiles looks into famous and obscure episodes of folklore, unexplained circumstances, and unsolved mysteries from around the world and throughout time.
Taking a modern lens to these ancient puzzles, she uses all the evidence and findings to try and piece together what really happened, or as close a picture as we can get, to both entertain the listener, and inspire interest in these lesser-known chapters of the human experience. So sit back and listen along, and join us in pondering the great unsolved questions of history.
EPISODE 37: WE BUILT THIS CITY (PART 2)
Way down deep in the jungle, there lies a city of such unimaginable splendour that to even conceive of it is to dream one’s wildest dreams. There also lies a wealth of danger, tropical diseases, wild animals, and an endless sea of rainforest.
Would you brave the latter to find the former? Or, dream and speculate about what treasures might be hiding away? How about both? | https://www.wizardradio.co.uk/news/demystified-we-built-this-city-part-2 |
JOB PURPOSE:
As a Lead UX Researcher, this position will drive the complete research process and collaborate closely with cross-functional counterparts in Product Management, Design, Web Development, Marketing and Engineering to influence key strategy and execution decisions. The research will be instrumental in defining next-generation products, spearheading a customer-centered decision-making process, and developing the most useful and usable product and service experiences for a global audience.
This position will conduct independent research, analyze real user behavior and work with User Interface Designers, Product Managers and Engineers to improve our product and service features and develop new ones. The work spans across all products and services, and this position is encouraged to identify new opportunities for user experience improvements. The position will strive to connect users to Caterpillar products and services, and make that connection intuitive and accessible.
JOB DUTIES:
Partner with design, product / services and marketing colleagues to understand business needs and design appropriate research studies to generate focused insights.
Design and conduct user research studies, including surveys, in-depth interviews, field studies, focus groups, and concept/prototype testing.
Foster collaboration with designers, product and service managers, marketers and sales to translate insights into actionable recommendations that inform execution.
Work in partnership with other researchers and organizations (e.g., data analytics) to create comprehensive and coordinated research strategies that triangulate data from different sources/methods to develop recommendations.
Clearly deliver research findings with strong recommendations.
Collect and analyze user behavior through server logs, online experiments (A/B testing), benchmark studies, lab studies, surveys, etc., to understand how users experience online searches.
Work with designers, product and service managers, engineers, and research managers to prioritize research opportunities in a fast-paced, rapidly changing environment.
Identify and prioritize research questions based on analysis of existing product and service designs, business needs, project goals and risks.
Communicate results and opportunities in clear, concise, and compelling ways to all levels of management.
Drive impact, as measured by changes to the product and service design and development roadmap.
The Marketing & Digital Division is leading Caterpillar and our industry with the utilization of data analytics, innovation, digital channels and techniques to drive profit, create more efficient revenue growth, unlock new revenue streams and provide more durable competitive advantages for the business through unexpectedly great experiences for our customers.
BACKGROUND/EXPERIENCE:
Required Qualifications:
A Bachelor's degree in Market Research, Anthropology, Psychology, Sociology, Cognitive Science, Information Science or related field or equivalent practical experience is required.
Requires 4 to 6 years of relevant work experience, including experience integrating user research into product and service designs and design practices.
A strong portfolio demonstrating past work experience, deliverables, and the ability to extract actionable insights from qualitative data.
Desirable Qualifications:
A balanced understanding of strategy, research, and interaction is a must for the role.
Must be able to thrive in a fast-paced, collaborative, team-oriented, cross-functional environment.
Experience conducting customer research in an applied setting.
Strong command of quantitative, behavioral analysis and statistics.
Experience using standard market research tools.
Strong in both physical and digital customer research.
Excellent working knowledge of statistics and the principles of experiment design is desired.
Excellent interpersonal, communication, and collaboration skills.
Ability to create compelling presentations and confidently convey insights and design solutions through storytelling across all levels.
Familiarity with quantitative methods is preferred.
Flexibility/adaptability (the ability to change direction based upon team and stakeholder consensus) is desirable.
The preferred location is Peoria, IL, Chicago, IL or Denver, CO, but remote work within the United States is available for the right candidate.
Compensation & Benefits:
Base salary for this role ranges from $104,112 to $156,168. Actual salary will be based on experience. The total rewards package, beyond base salary includes:
Annual incentive bonus plan
Medical, dental and vision coverage starting day 1
Paid time off plan (Vacation, Holiday, Volunteer, Etc.)
401(K) Savings Plan including company match
Health savings account (HSA)
Flexible spending accounts (FSA)
Short and long term disability coverage
Life insurance
Paid parental leave
Healthy Lifestyle Programs
Employee Assistance Programs
Voluntary Benefits and Employee Discounts (Ex: Accident, Identity Theft Protection)
Career Development
Subject to annual eligibility and incentive plan guidelines
EEO/AA Employer. All qualified individuals - Including minorities, females, veterans and individuals with disabilities - are encouraged to apply.
Not ready to apply? Submit your information to our Talent Network here . | https://dailyremote.com/remote-job/sr-ux-researcher-remote-2886223 |
The navy uniform that became a symbol of America’s service in the wars in Iraq and Afghanistan has reached the end of its run.
The Marine Corps has replaced its iconic blue and white navy uniform with a blue and yellow army uniform that is lighter, more casual and with a new coat of arms that was unveiled Wednesday.
The navy blue uniform has been worn since the 1970s and is worn by about 30,000 U.N. personnel and 1,000 civilians, the Marine Corps said.
The military has worn the blue and gold navy uniform for decades.
But in March, the Army decided to wear a blue, white and red navy uniform to honor veterans of the Vietnam War.
“The blue and red uniform will be worn for now, but the Army has not yet decided what it will wear next,” said Brig. Gen. Tom Stengel, the commander of Marine Corps Personnel.
The new blue and green uniform, which will be unveiled Wednesday at the Marine Air Warfare Center in Kaneohe Bay, Hawaii, will be used by Marine Corps personnel who are deployed to support missions overseas and for those who are on active duty.
The Army has also decided to retire its blue and silver Navy uniform.
A similar blue and black uniform was worn by the Marines in 2011 for the first time since World War II.
A blue and brown uniform was retired in 2016, and the red and gold Marine Corps uniform was adopted by the Navy in 2014.
The blue and gray Army uniform is lighter than the navy blue and is available in black, silver and bronze.
The changes come amid a growing shortage of military uniforms.
The U.C.L.A. Military Department is recommending that the Army and Marine Corps use the same uniform to be worn at all times, as they have been doing for years.
But that recommendation was made in a joint memo with the Office of the Secretary of the Army.
“We have to find a way to do it,” said Col. Ryan McKeon, a spokesman for the Marine Army, which oversees the military’s military uniforms and gear.
The two service branches also have different uniform standards for the Navy, which is required to wear the Navy uniform only when conducting exercises or in the presence of other service members, such as Marines in the field.
But the Navy has not made the changes required for use on the job and does not currently have uniforms for deployment.
The Navy has already started wearing navy-blue and navy-white uniforms for Marine Corps officers and enlisted personnel, which it will start wearing next year.
The uniforms will be available in navy blue, navy-gray, black, gold, silver, and bronze, McKeon said.
“At this point, there is no plan to change the uniforms that we have for the foreseeable future,” McKeon said.
He said the Navy will begin using the navy-black uniform when the military is deployed to a combat zone in a few months.
A navy-grey uniform was introduced for the Marines last year.
A separate blue and navy uniform was also introduced in 2016.
But Navy leaders were not sure if the Navy would start wearing navy blue again in the future. | https://dongphuclinhdung.com/when-the-u-s-military-uniforms-go-out-the-window/ |
The Worldbuilding Structures Worksheet is used to record the values of the structures and substructures of your world. For each substructure, circle the number from 1 to 5 and whether the value is trending or stable. If it’s trending, circle the appropriate directional arrow. For example, if Government Presence is a 3 trending to a 4 in your world, circle the number 3 and the arrow pointing in the direction of number 4.
In the early stages of the worldbuilding project, it’s useful for everyone in the group to have their own copy of the structures’ numeric values and a place to jot down a flurry of notes and ideas. Transferring the values from the card deck to the worksheet also saves valuable tabletop space as you continue to work on your project.
The concepts of structures and substructures are covered in depth in Chapters 5 and 10 in Collaborative Worldbuilding for Writers and Gamers and as Appendix A.
The Framework Worksheet has three boxes: one is for recording the world’s scope and perspective; one is for writing down major historical events that shaped the world, and the largest box across the bottom of the page is for marking those historical events on a sequential timeline.
These terms and concepts are expanded upon in Chapters 4 and 9 of Collaborative Worldbuilding for Writers and Gamers and as Appendix B.
The Scope and Schedule Worksheet helps keep your worldbuilding project on track and on schedule as you transition to populating the catalog with people, places, and things. Essentially, it’s a spreadsheet used to record which contributor agreed to create which entries, and the date by which the contributors agree to complete them. It also ensures that your world will have a diverse set of entries.
Scope and schedule are discussed more in Chapters 9 and 10 of Collaborative Worldbuilding for Writers and Gamers and as Appendix C. | https://www.collaborativeworldbuilding.com/resources/worldbuilding-worksheets/ |
Course: Dessert
Cuisine: American
Servings: 24
Calories: 258 kcal
Author: Adapted from MyCakeSchool.com
Ingredients
Cupcake:
- 1 box yellow cake mix
- eggs, oil, and water as indicated on the box
- 1 teaspoon vanilla extract
- 2 tablespoons mayonnaise
- 1 cup crushed sandwich cookies (I used Bud's Best Cookies)
Buttercream:
- 2 sticks softened butter
- 8 cups powdered sugar
- 2 teaspoons vanilla extract
- 1/3 cup milk
- pinch of salt
Cookie Truffle:
- 1 8-oz block cream cheese
- 2 6-oz bags sandwich cookies, crushed (I used Bud's Best Cookies)
- 1 package of chocolate candy melts
- Sprinkles
Instructions
For the cupcakes: Preheat oven to 325 degrees F.
In a mixing bowl, sift cake mix.
Add ingredients listed on the box (eggs, water, oil).
Add mayonnaise and vanilla and 1 cup crushed cookies.
Mix in an electric mixer for 2 minutes.
Prepare muffin tin with cupcake liners. Fill liners 3/4 full with the mixture.
Bake for approximately 12-15 minutes.
For buttercream: Mix the softened butter until smooth in an electric mixer.
Add 4 cups of the powdered sugar and milk.
Mix at medium speed for 3-4 minutes. Add remaining sugar and vanilla extract.
Mix for 4-5 minutes.
Cookie Truffles:
Finely crush cookies. You can put them in a food processor or blender, or even place them in a plastic bag and crush with a rolling pin.
Mix cream cheese and crushed cookies until completely combined.
Line a cookie sheet with wax paper or parchment paper.
Roll the mixture into about 40 balls and place on cookie sheet.
Put cookie balls in freezer for about 20 minutes or until firm.
Melt the chocolate in microwave in 30 second increments.
Place cookie balls in the chocolate and remove with a fork. Place back on cookie sheet to harden.
Place in refrigerator to let chocolate completely firm.
Frost the cupcakes with buttercream using a knife. Dip frosting covered cupcake in bowl full of sprinkles.
Place truffle on top of cupcake.
Notes
You can freeze buttercream in air tight container for up to 3 months.
Nutrition
Calories: 258 kcal | Carbohydrates: 58 g | Protein: 2 g | Fat: 1 g | Cholesterol: 2 mg | Sodium: 241 mg | Potassium: 44 mg | Sugar: 49 g | Vitamin A: 10 IU | Calcium: 87 mg | Iron:
Hurricane is a natural disaster with far-reaching consequences. It takes away the lives of millions of people and causes damage to almost all of human creation. It can cause extensive damage to coastlines and several hundred miles inland due to heavy rainfall.
Floods and flying debris often play havoc in the lives of people living along coastal areas. Slow-moving hurricanes produce heavy rains in mountainous regions. Landslides and mud-slides can occur due to excessive rain, and chances of flash floods also increase with heavy rainfall. Below are some interesting facts about hurricanes.
What is a Hurricane?
A hurricane is a form of tropical cyclone or severe tropical storm that occurs in the eastern Pacific Ocean, the Caribbean Sea, southern Atlantic Ocean and the Gulf of Mexico. All coastal areas around the Gulf of Mexico and Atlantic are prone to hurricanes.
Tropical cyclones are low-pressure systems with organized thunderstorm activity that rotate counterclockwise in the Northern Hemisphere. When the winds of a tropical cyclone are less than 38 mph, it is called a tropical depression. When the winds reach between 39-73 mph, it is classified as a tropical storm. However, when the winds reach 74 mph or more, it is classified as a hurricane.
50+ Breathtaking Facts About Hurricanes
Fact 1: The term ‘hurricane’ is derived from Taino Native American word ‘hurucane,’ which means the evil spirit of the wind.
Fact 2: The first deliberate flight into a hurricane took place in 1943, during World War II.
Fact 3: A hurricane is a tropical storm whose winds reach 74 miles per hour or higher.
Fact 4: Hurricanes are the only weather disasters that are each given their own name.
Fact 5: Hurricanes first form in a warm, moist atmosphere by swirling above tropical ocean water.
Fact 6: The center of the hurricane, which is the ‘Eye,’ can be as wide as 32 kilometers. The weather in this center (the eye) is usually calm with low winds.
Fact 7: The ‘Eye Wall’ is the ring of clouds and thunderstorms occurring closely around the eye. It experiences the storm's most violent winds and extremely heavy rains.
Fact 8: A huge hurricane can release energy equivalent to 10 atomic bombs per second.
Fact 9: Hurricanes also produce mild tornadoes, which can last up to a few minutes.
Fact 10: Hurricanes that move slowly are likely to produce more rains causing more damage by flooding than fast-moving hurricanes.
Fact 11: Hurricane Floyd, which was barely a Category 1 hurricane, destroyed 19 million trees and caused damage of more than a billion dollars.
Fact 12: Many people die in hurricanes because of the rising seawater that enters the mainland, instantly killing people.
Fact 13: Hurricanes in the Pacific Ocean are known as typhoons.
Fact 14: In the Indian Ocean, they are typically known as tropical cyclones.
Fact 15: The year 1933 had the most named storms on record with 21. In 2005, that record was broken when the National Hurricane Center identified 28 storms. 1933 is now second, and 1995 is third with 19 tropical storms.
Fact 16: The year 2005 saw the most hurricanes ever to form in a single Atlantic season, with 15.
Fact 17: The least number of tropical storms happened in 1983, when just four storms formed. In 1982, just two hurricanes formed, making it the year with the fewest hurricanes since 1968.
Fact 18: The first person to give names to hurricanes was a weather forecaster from Australia named C. Wragge in the 1900s.
Fact 19: The first hurricane of the year is given a name beginning with the letter “A.”
Fact 20: Hurricanes are named because it’s much easier to remember the name of a storm than using latitude and longitude. The tracking becomes easy. It also helps prevent confusion when there is more than one tropical storm or hurricane occurring at the same time.
Fact 21: The National Hurricane Center was the first organization that started assigning ‘female’ names to the hurricanes in 1953. However, they stopped this practice in 1978.
Fact 22: In 1979, men’s names were included on the list. The names are in alphabetical order, excluding the letters Q, U, X, Y and Z.
Fact 23: Today, the list includes English, Spanish and French names because these languages are most commonly used by the countries in the Atlantic Basin. There are six lists of names. Each list is used in rotation every six years.
Fact 24: Since all of the traditional names had been used for 2005, the last six named storms were called “Alpha,” “Beta,” “Gamma,” “Delta,” “Epsilon,” and “Zeta,” the first six letters of the Greek alphabet.
Fact 25: A name is retired when the storm caused so many deaths or so much destruction that it becomes insensitive to use the name again. The World Meteorological Organization is in charge of retiring hurricane names and choosing new names.
Fact 26: The headline-making hurricanes of 2004 — Charley, Frances, Ivan and Jeanne have all been retired. They will be replaced by Colin, Fiona, Igor, and Julia when the list is used again.
Fact 27: The names of costliest hurricanes include Katrina, Maria, Irma, Harvey, Sandy and Andrew.
Fact 28: Hurricane Katrina is one of the costliest category 5 type hurricanes, which has caused damage over $100 billion.
Fact 29: Hurricanes mostly occur from June to November when seas are the warmest and most humid, forming conducive weather for the hurricanes to build up.
Fact 30: In the Atlantic Ocean, hurricanes begin from 1st June, and in the Pacific, they start in mid-May. Both end together towards the end of November.
Fact 31: An average hurricane season based on data from 1968 to 2003 brings 10.6 tropical storms. Six of those become hurricanes and two become major hurricanes, meaning category 3 or greater.
Fact 32: Planet Jupiter has a storm, the Great Red Spot, that appears as a red dot and has been spinning for more than 300 years. This storm is bigger than the earth itself.
Fact 33: Hurricanes are powerful enough to produce winds that travel at 160 miles per hour.
Fact 34: Hurricanes can have a diameter of 600-800 kilometers.
Fact 35: 90% of the deaths that occur during hurricanes are caused by the floods created by the storm.
Fact 36: A cyclone that struck Bangladesh in 1970 took away the lives of up to one million people; it is regarded as the deadliest tropical cyclone on record in terms of loss of life.
Fact 37: A hurricane can produce more than 2 million trillion gallons of rain per day.
Fact 38: In years with an El Niño, fewer tropical storms and hurricanes appear because vertical shear increases during El Niño years. The vertical shear can prevent tropical cyclones from forming and becoming intense.
Fact 39: In years with La Niña (opposite of El Niño), researchers have found that there are chances of an increased number of hurricanes and an increased chance that the United States and the Caribbean will experience hurricanes.
Fact 40: Upon entering land, hurricanes bring strong winds, heavy rains and waves strong enough to wash away entire cityscapes. This rise of water is known as a storm surge.
Fact 41: Florida is hit by at least 40% of the hurricanes that occur in America.
Fact 42: Hurricanes are differentiated from tropical storms by their wind speeds. Tropical storms carry winds of 39-73 miles per hour, while hurricane winds are at least 74 miles per hour.
Fact 43: The Saffir-Simpson Hurricane Scale defines hurricane strength by categories. Hurricanes are categorized into 5 types, depending upon their wind speed and their capacity to cause damage. The wind speed of the 5 categories is as follows.
- Category 1- 74 to 95 miles per hour
- Category 2- 96-110 miles per hour
- Category 3- 111-129 miles per hour
- Category 4- 130-156 miles per hour
- Category 5- Most dangerous. Above 157 miles per hour.
Fact 44: A Category 1 storm is the weakest hurricane, with winds of 74-95 mph; a Category 5 hurricane is the strongest, with winds of 157 mph or greater.
Fact 45: The deadliest hurricane in U.S. history is the Category 4 hurricane that struck Galveston, Texas in 1900. Around 8,000 people were killed by 15-foot waves carrying winds that traveled at 130 miles per hour.
Fact 46: It is believed that hurricanes have killed approximately 1.9 million people over the past 200 years.
Fact 47: More Category 5 hurricanes occurred in the years 2000-2009 than in any other decade, with eight. These include Isabel (2003), Ivan (2004), Emily (2005), Katrina (2005), Rita (2005), Wilma (2005), Dean (2007), and Felix (2007).
Fact 48: Stronger hurricanes can reach 40,000 to 50,000 feet up into the sky.
Fact 49: Hurricanes need Coriolis Force to form, which is very weak at the Equator, and this is the reason that they cannot form near the Equator.
Fact 50: The Southern Hemisphere typically experiences about half the number of hurricanes as the Northern Hemisphere each year.
Fact 51: A typical hurricane needs (a) light upper-level winds, (b) warm water (at-least 80º F) and (c) pre-existing conditions with thunderstorms to form.
Fact 52: Cape Verde-type hurricanes are those Atlantic basin tropical cyclones that develop into tropical storms fairly close (<1000km or so) to the Cape Verde Islands and then become hurricanes before reaching the Caribbean.
Fact 53: It is a myth that opening windows will help equalize pressure in your house when a hurricane approaches. Your windows should be boarded up with plywood or shutters. If your windows remain open, it will just bring a lot of rain into your house and flying debris into your home, too. Don’t waste time taping your windows, either. It won’t help prevent hurricane damage.
Fact 54: 2020 Atlantic hurricane season activity is projected to be extremely active, according to the team at Colorado State University (CSU), with 24 named storms (including the nine named storms that already formed as of July 4), 12 hurricanes, and 5 major hurricanes, with above-normal probability for major hurricanes making landfall along the continental United States coastline and in the Caribbean.
Fact 55: Hurricanes kill more people than any other type of storm.
Hurricanes are natural disasters and, unfortunately, cannot be controlled by man. However, we can prepare ourselves and take precautions to safeguard lives and property against such calamities.
The world's largest swimming pool, The Crystal Lagoon at the San Alfonso Del Mar Resort, is bigger than 20 Olympic-sized pools put together.
The huge pool is set halfway up the country's Pacific coast, in the city of Algarrobo, Chile, about 100 kilometers west of the capital Santiago. The pool holds the Guinness World Record for the largest and deepest pool.
The man-made Crystal Lagoon opened in December 2006 after nearly five years of construction work. According to early estimates, the total cost of constructing the pool was about US$3.5 million, but more recent, and perhaps more accurate, estimates put the total construction cost at about US$1.5 to 2 billion, with about US$4 million a year in maintenance costs.
The pool, developed by Chilean company Crystal Lagoons, stretches for more than a half-mile and is filled with 66 million gallons of crystal clear seawater which is obtained directly from the Pacific Ocean via filtration.
The pool is filled with the help of a computer-controlled suction and filtration system that draws water in from the sea at one end and pumps it out at the other.
The Crystal Lagoon is also the world's deepest at 115 feet and people can also engage themselves in various activities from boating to snorkeling. | https://www.ibtimes.co.in/worlds-largest-and-deepest-pool-at-chile039s-san-alfonso-del-mar-resort-photos-344873 |
The projects aim at the formation, manipulation, and analysis of three-dimensional lipid membrane structures on micro- and nano-structured platforms. The goal is to develop a novel methodology to design and create simple artificial cells and cell organelles, bio-hybrid cells, and bio-mimicking membrane networks, which could be an entirely novel tool for cell analysis, and promises fascinating prospects for cell manipulation, biotechnology, pharmacy and material sciences. The basis of the projects is formed by an unconventional concept that involves two current cutting-edge fabrication technologies, i.e. the so-called top-down and bottom-up approaches. The combination of the two approaches, with respect to both engineering methods and biological applications, opens the door to overcoming current limitations in the creation of complex soft matter objects in micro- and nanometre dimensions. The key method is a recently developed micro-extrusion process. It relies, on the one hand, on the ability of the lipid molecules to self-assemble (“bottom-up”). On the other hand, photolithography processes (“top-down”) are utilized to fabricate microchips, in which shape transformation, handling and analysis of the lipid structures are performed. The proposed engineering process will enable, for the first time, precise design of the composition, size and morphology of complex membrane structures. It will provide the requirements to design an artificial cell of reasonable complexity (“bottom-up”). One main emphasis is the creation of unique bio-hybrid systems, in which artificial membrane structures are connected to living cells, or in which natural membranes of cells are integrated within artificial systems (“top-down”). This highly interdisciplinary study will further include fundamental studies on membrane properties, engineering aspects to generate novel soft-matter devices, and the development of analytical methods and lipid sensors based on micro- and nanostructured chips.
"In general, there are two approaches to writing repetitive algorithms. One uses loops; the other uses recursion. Recursion is a repetitive process in which a function calls itself. Both approaches provide repetition, and either can be converted to the other's approach."3 Iteration is one of the categories of control structures. It allows for the processing of some action zero to many times. Iteration is also known as looping and repetition. The math term "to iterate" means to perform the statement parts of the loop. Many problems/tasks require the use of repetitive algorithms. With most programming languages this can be done with either:
- looping control structures, specifically the for loop (an iterative approach)
- recursive calling of a function
Using repetitive algorithms as the solution method occurs in many mathematically oriented problems. These include factorial, Fibonacci numbers, and the Towers of Hanoi problem. Solutions to these problems are often only presented in terms of using the recursive method. However, "... you should understand the two major limitations of recursion. First, recursive solutions may involve extensive overhead because they use function calls. Second, each time you make a call you use up some of your memory allocation. If the recursion is deep that is, if there is a large number of recursive calls then you may run out of memory. Both the factorial and Fibonacci numbers solutions are better developed iteratively." 1
Understanding how recursion or the iterative approaches work will be left to others. They are usually covered in detail as part of studying data structures. Our goal in covering them is to:
- Provide you with a definition of recursion
- Introduce the alternate solution approach of iteration
The following demonstration program shows both solutions for 8! (eight factorial). | http://www.opentextbooks.org.hk/ditatopic/5828 |
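The demonstration program itself is not reproduced in this excerpt, so here is a minimal stand-in, written in Python since the text does not specify a language; both functions compute 8! = 40320.

```python
def factorial_iterative(n):
    """Iterative solution: a simple counting loop, no call-stack growth."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_recursive(n):
    """Recursive solution: the function calls itself until the base case."""
    if n <= 1:          # base case stops the recursion
        return 1
    return n * factorial_recursive(n - 1)

print(factorial_iterative(8))  # 40320
print(factorial_recursive(8))  # 40320
```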
Advance Bid Deadline: October 12, 2021
Lot No. 284
Pat (Lapez) Rogers
American 20th C.
"My Bougainvillea Bouquet"
acrylic on canvas, 2004, signed on the front as well as on verso along with the title and dimensions.
- Condition: In generally very good condition, minor rubbing at the edges.
- Size: 50 x 40 in. (125 x 100 cm.)
Framing: Sold framed. Note: Shipping not available unless special wooden crating is authorized by buyer.
Estimates*: US $200-400 | Euro 170-340 | UK pound 146-292 | Swiss Fr 184-368
* Estimates do not include 25% buyer's premium (see Terms and Conditions). NR means no reserve (minimum bid $50 unless otherwise posted). Estimates in other currencies are based on conversion rates of Euro 0.85, British Pound 0.73, Swiss Fr 0.92. All conversion values are approximate, with the final cost determined in dollars.
A risk management plan documents the processes, tools and procedures that will be used to manage and control potential risks that could have a negative impact on a specific project. The plan also works to minimize and eliminate the impact of those events, as well as of subsequent changes that could occur within the project as a result of the risks. Risk management is an ongoing process throughout the span of the project and is an important part of project management.
The issues that could have arisen had the team not created a risk plan could have been detrimental to the success of the project. The variables within the Satellite Development Project (the manufacture of components, integration of systems, working with subcontractors, tests, and other areas) made the project full of risk (Kloppenborg, 2012). One issue that could have developed without a risk plan is the number of unknown variables within the project increasing as the project matured. Another issue is the risk of not being ready to respond to unplanned events. A risk management plan provides a practical way to handle project risks. With a risk plan, the Satellite Development Project was able to prepare a response to risks if and when they occurred.
Major impacts of risk that the team needs to understand for the project to be successful are process timing and risk finances. The satellite development project was on a tight timeline for production and required frequent risk reevaluations because the project was high risk. This has a major impact on the project because high-risk events are likely to cause a significant increase in the budget, disruption of the schedule or performance problems. Risk finances deal with funds set aside specifically for risk within a project. Limited finances available for projects can create a downfall for the organization as well as unexpected cuts.
Risk management plans protect the value of a project by decreasing the probability, impact and occurrence of risks to the project. Risk plans help the organization to excel by providing reliable, timely, and current information on risk. This information systematically addresses the risks of the project: each risk is identified, and the degree of impact on the schedule, scope, cost, and quality of the project is prioritized. By prioritizing risks, project members are able to respond to risks and plan for risks quickly. This course of action saves time, money, and resources. The goal is to identify resources to protect and obstacles to overcome.
The project manager’s responsibility is to assist project members and all appropriate stakeholders in identifying and documenting known and unforeseen risks. As project manager, the recommendation for ensuring the project meets the identified critical path is to develop a risk plan and criteria to evaluate and prioritize risks; from there, methods for implementation are researched and compared. The recommendation would also involve an integration tool that allows all members of the project to actively engage in sharing new risks, providing input for potential solutions to each risk, as well as viewing and providing feedback on other project members’ entries. This begins with a description of the project followed by the probability of risk occurrences. Next, schedule, scope, quality and cost impacts are assessed to determine the length of time a risk factor could impact the schedule, the impact the risk will have on the project budget, and envisioned accomplishments, as well as determining whether the risk will affect the quality of work.
To determine the level of risk management appropriate for a project, there should be risk plans in place for unexpected risks that may occur as well as a determination of the impact of possible and predicted future risks. The process includes the following steps:
Identify risks that could cause project failure
Transfer all risks to proper stakeholders and members
Prioritize and rank identified risks in order of impact on the project
Rate and calculate the risk of high significance, probability, and controllability
Narrow the project scope to the most crucial risks and plan how to minimize impact if occurred
Assign each project member a specific risk to eliminate the most crucial risks
Monitor and review risks
The appropriate level of risk management is determined by the organization's and team's attention to project execution in order to deliver impactful projects.
If the team working on the satellite development project were a virtual team whose members were unable to meet in person, the expected impact on the project would be much the same. In the satellite development project, project members would utilize a database system that each member could use individually to log risks. Every other month, the project team would hold a risk management review, in which each risk would be discussed and any decisions on actions would be made. Both of these methods can still be used in different ways to maintain and achieve the project's goals in both planning and execution. Instead of meeting in person every month to review and analyze risks, project members can hold an online session that includes only the appropriate stakeholders and project members. Each person is still able to discuss any areas of the project that might have been impacted by a risk, or provide potential ideas for deferring, transferring, mitigating, or accepting the risk. The team will continue to determine virtually, through live video sessions, whether a risk decision needs to be elevated. Also, the integrated database would still be accessible for members to review, add, and discuss all risks.
This invention relates to packet switching (or cell switching), in particular methods for allocating requests for switching from one of the inputs of a packet switch to one of the outputs of the packet switch and methods of fabric allocation within a packet switch.
A. Hung et al., "ATM input-buffered switches with the guaranteed-rate property," Proc. IEEE ISCC '98, Athens, July 1998, pp. 331-335.
M. Collier, "High-speed cell-level path allocation in a three-stage ATM switch," 0-7803-1820-X/94, IEEE, 1994.
Input-buffered cell switches and packet routers are potentially the highest possible bandwidth switches for any given fabric and memory technologies, but such devices require scheduling algorithms to resolve input and output contentions. Two approaches to packet or cell scheduling exist (see, for example, the papers by Hung et al. and Collier cited above). The first approach applies at the connection-level, where bandwidth guarantees are required. A suitable algorithm must satisfy two conditions for this; firstly it must ensure no overbooking for all of the input ports and the output ports, and secondly the fabric arbitration problem must be solved by allocating all the requests for time slots in the frame.
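To make the first condition concrete, the sketch below checks a frame of requests for overbooking. It is an illustration only, not from the cited papers; the matrix layout, function name and frame capacity are assumptions.

```python
def no_overbooking(requests, slots_per_frame):
    """requests[i][j] = number of time slots requested from input i to output j.
    Returns True if no input row or output column exceeds the frame capacity."""
    row_ok = all(sum(row) <= slots_per_frame for row in requests)
    col_ok = all(sum(col) <= slots_per_frame for col in zip(*requests))
    return row_ok and col_ok

# Example: 3x3 request matrix, 4 slots per frame
reqs = [[2, 1, 0],
        [1, 2, 1],
        [0, 1, 3]]
print(no_overbooking(reqs, 4))  # True: every row and column sums to <= 4
```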
In Collier's paper, the use of a large number of processors to implement path allocation is described, in which each of the processors deals with paths through just one middle-stage switch in each sequential iteration of the routing algorithm. The present invention seeks to obviate and/or mitigate the limitations of this prior art routing algorithm by reducing the number of processors required to implement path allocation.
According to a first aspect of the invention there is provided a method of handling connection requests in a packet switch by identifying available time slot(s) in said packet switch in order to route a packet from an input port to a designated output port, the method comprising the steps of:

logically combining the status of each time slot with regard to the input port and with regard to the output port to generate the status of the time slot with regard to the given input port-output port pair;

logically combining in pairs the status of input port-output port pairs to determine whether one of the input port-output port pairs is available;

computing the paths for the connections between each input port-output port pair until an available input port-output port pair has been identified; and

collecting a plurality of said connection requests which are handled together in parallel by simultaneously computing paths for the connections between each requested input port-output port pair in parallel.
In one embodiment, a plurality of connection requests are collected which are handled together by computing the paths for the connections between each input port-output port pair in parallel by simultaneously performing said step of logically combining in pairs the status of input port-output port pairs until an available input port-output port pair is identified by a selection process.
In one embodiment, a plurality of connection requests are collected which are handled together by computing the paths for the connections between each input port-output port pair in parallel by processing the status information for each input-output pair generated in the step of logically combining the status of each time slot in a logical concentrator so that the available input port-output port pair(s) are ordered hierarchically.
In one embodiment, the method further comprises allocating connection requests by performing the steps of: establishing switch connection request data at each input port; processing the switch request data for each input port to generate connection request data for each input port-output port pairing; comparing the number of connection requests from each input port and to each output port with the maximum connection request capacity of each input port and each output port; and allocating all connection requests for those input-output pairs where the total number of connection requests is less than or equal to the maximum connection request capacity of each input port and each output port; reducing the number of connection requests for those input-output pairs where the total number of connection requests is greater than the maximum connection request capacity of each input port and each output port such that the number of requests is less than or equal to the maximum request capacity of each input port and each output port; allocating the remaining connection requests and handling the allocated connection requests using the identified available time-slots.
In one embodiment, the method further comprises allocating connection requests by performing the steps of: (a) establishing switch connection request data at each input port; (b) processing the switch connection request data for each input port to generate connection request data for each input port-output port pairing; (c) allocating a first switch connection request from each of the input port-output port pairing connection request data, the connection requests being allocated only if the maximum connection request capacity of the respective output port has not been reached; (d) allocating further switch connection requests by the iterative application of step (c) until the maximum connection request capacity of each output port has been reached; and handling the allocated connection requests using the identified available time-slots.
In one embodiment, the method further comprises allocating connection requests by performing the steps of: (a) establishing switch connection request data at each input port; (b) processing the switch connection request data for each input port to generate connection request data for each input port-output port pairing; (c) identifying a first switch connection request from each of the input port-output port pairing request data; (d) identifying further switch connection requests by the iterative application of step (c) until all of the switch connection request data has been identified; (e) subject to the maximum request connection capacity of each input port and each output port, allocating all of the identified switch connection requests; (f) reserving unallocated switch connection requests for use in the next phase of switch connection request allocation; and handling the allocated connection requests using the identified available time-slots.
According to another aspect of the invention a packet switch comprises means arranged to implement steps in any one of the previous method aspects.
The invention will now be described with reference to the following figures, in which:

Figure 1 is a schematic depiction of a three stage switch;
Figure 2 is a schematic depiction of an apparatus according to the present invention;
Figure 3 is a schematic depiction of a half-matrix concentrator according to the present invention;
Figure 4 is a schematic depiction of a concentrator according to the present invention;
Figure 5 is a depiction of a (8,8) concentrator according to the present invention;
Figure 6 is a schematic depiction of a concentrator according to a second embodiment of the present invention;
Figure 7 is a schematic depiction of a (7,4) concentrator according to the second embodiment of the present invention;
Figure 8 is a schematic depiction of a (8,8) concentrator according to a third embodiment of the present invention;
Figure 9 is a schematic depiction of a (8,8) concentrator according to a fourth embodiment of the present invention;
Figure 10 is a schematic depiction of an apparatus for determining sequentially the address of a common middle-stage switch;
Figure 11 is a schematic depiction of an apparatus for preventing the overbooking of output ports according to the present invention; and
Figure 12 is a schematic depiction of a processor for use in determining in parallel the address of a common middle-stage switch according to the present invention.
Figure 1 shows a three-stage switch 100 which comprises a plurality of first stage switches 110, a plurality of middle stage switches 120 and a plurality of third stage switches 130. Each of the first stage switches is connected to all of the middle stage switches, and all of the middle stage switches are connected to each of the third stage switches so that any first stage switch can be connected to any third stage switch, via any of the middle stage switches. The three-stage switch 100 has p first stage switches 110 and p third stage switches 130, each of which is connected to n ports (input ports for the first stage switches 110 and output ports for the third stage switches 130). The total capacity of the three-stage switch is N input ports and N output ports, where N = n x p. The number of middle stage switches is m, where for a non-blocking switch m = 2n-1. Each first stage switch has m outlets to the middle stage switches, one outlet being connected to each middle stage switch. Similarly, each third stage switch has m inlets from the middle stage switches, one inlet being connected from each middle stage switch. (Although the following discussion assumes the use of a non-blocking switch, the present invention may also be used with lower values of m, as long as m remains greater than or equal to n.)
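As a quick illustration of these relationships, the following sketch computes N and m from n and p. It is a back-of-the-envelope aid for the reader, not part of the patent; the function name and the example values are assumptions.

```python
def clos_parameters(n, p):
    """Dimension a symmetric three-stage Clos switch.
    n = ports per outer-stage switch, p = number of outer-stage switches."""
    N = n * p          # total number of input (and output) ports
    m = 2 * n - 1      # middle-stage switches needed for a strictly non-blocking fabric
    return N, m

N, m = clos_parameters(n=16, p=32)
print(N, m)   # 512 31
```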
The inventor has had the insight that techniques that can be applied to the setting-up of connections between the input ports and the output ports of circuit-based switches may also be applied to packet-based (or cell-based) switches.
If a single call processor is employed to control connection set-ups in the symmetric three-stage packet switch of Figure 1, the maximum number of processing steps required to find a free, common middle-stage switch through which to connect one of the N input ports i to one of the N output ports j is essentially the number of outlets and inlets, m, on the first and third stage switches A_i and C_j respectively. In the prior art it is known to compare sequentially the statuses of every outlet/inlet pair attached to the same middle-stage switch, stepping through the middle-stage switches, until the first pair is found that are both unused. To make N connections across the switch therefore requires a maximum of O(Nm) processing steps. For a strictly non-blocking Clos switch with m = 2n-1, and for which n ≈ (N/2)^(1/2) to minimise the number of matrix crosspoints needed to interconnect a total of N ports, this results in O(N^(3/2)) computing steps.
Figure 2a shows an apparatus 200 embodying a binary logic tree that allows the above processing to be performed sequentially in order to find the first free, common middle-stage switch more quickly. The logic tree comprises outlet status array 210 and inlet status array 220, each status array having m elements. The outlet status array 210 contains the status of the m outlets connected to one of the first stage switches A_i, and the inlet status array 220 contains the status of the m inlets connected to one of the third stage switches C_j. A '0' entry in the array indicates that the outlet (or inlet) is in use and a '1' entry in the array indicates that the outlet (or inlet) is available to be used. The two status arrays are logically compared using an array of m AND gates 230, such that a logic '1' output is produced only when both an outlet and its corresponding inlet are free. The statuses of all outlet and inlet pairs are made available simultaneously using parallel logic gates.
One of the resulting free inlet/outlet pairs (there may be only one such pair) is selected by comparing the outputs from the AND gate array 230 using the array of binary comparison elements 240. Figure 2b shows the structure of such a binary comparison element 240 and Figure 2c shows the truth table for the binary comparison element 240. If only one of the inlet/outlet pairs is available, then the binary comparison element 240 will pass this availability data and the addresses of the available inlet and outlet pair (the addresses may be passed either sequentially or in parallel, in the form of log₂m bits) to the next stage of comparison using a binary comparison element 240. If both sets of inlet/outlet pairs are free, then, for example with the comparison element and truth table shown in Figures 2b & 2c, the inlet/outlet pair on the uppermost input of the binary comparison element is chosen as the output. Thus, after log₂m stages of comparisons, the address of a free inlet/outlet pair is switched to the output of the binary tree 200, which should be the 'uppermost' free pair in the two status arrays 210 & 220.
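In software the same selection can be modelled as a pairwise tournament over the ANDed status arrays. The sketch below is a behavioural model only (the hardware evaluates each level of comparisons in parallel), and the data layout and names are assumptions.

```python
def first_free_middle_switch(outlet_free, inlet_free):
    """Model of the binary tree: AND the two status arrays, then compare
    entries pairwise, always preferring the uppermost free pair.
    Each tree node carries (available?, middle-switch address)."""
    nodes = [(o and i, addr) for addr, (o, i) in enumerate(zip(outlet_free, inlet_free))]
    while len(nodes) > 1:
        if len(nodes) % 2:                      # pad odd levels with a busy entry
            nodes.append((False, None))
        # each comparison element passes the upper input if it is free
        nodes = [a if a[0] else b for a, b in zip(nodes[0::2], nodes[1::2])]
    return nodes[0][1] if nodes[0][0] else None  # None: no common free switch

outlets = [False, True, True, False, True]
inlets  = [False, False, True, True, True]
print(first_free_middle_switch(outlets, inlets))  # 2 (uppermost common free pair)
```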
Assuming that the address bits are switched in a parallel bus, the number of computing steps needed to find the first free middle-stage switch is simply the number of binary tree stages, i.e. log₂(2m). Thus, the total computing time for N connections is O(Nlog₂(2m)), which for the above assumptions results in O(Nlog₂N) computing steps.
As an alternative to the above method it is possible to handle connection requests in parallel rather than sequentially, so that the paths for N connections can be computed simultaneously. A number of connection requests could be collected, up to the full throughput of the switch N, which could then be processed together. In this approach, all the connections wanted between a pair of first and third stage switches, e.g. A_i and C_j in Figure 1, are computed simultaneously. Every first stage switch has a processor associated with it (alternatively they could be associated with the third stage switches) so there are N/n (or p) outer-stage processors. Each of these processors must interrogate each of the third stage switches in turn (N/n [or p] of them), and for each one find up to n free common inlet/outlet pairs to middle-stage switches, depending on the precise number of connections required between that pair of switches, A_i/C_j. To do this it must interrogate the status of all m middle-stage switches. Beginning, for example, with A_i/C_i, it will be possible for every pair of first and third stage switches A_i/C_i to compute their set of common, free middle-stage switches at the same time, and this parallel processing holds for all A_i/C_j with j = i. When all these parallel computations have been completed, the first and third stage switch pairings are cycled around by one, so that all A_i/C_{j+1} switch pairings are computed in parallel. This continues until all N/n first stage switches have been paired with all N/n third stage switches, taking N/n steps. For each pairing of first and third stage switches, it is necessary to compute up to n separate connections.
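The cycling of pairings can be pictured as a round-robin schedule. The following sketch simply enumerates which (A_i, C_j) pairs would be processed together in each of the N/n iterations, under the assumption of the offset-by-one cycling just described; it is illustrative only.

```python
def pairing_schedule(p):
    """Yield, for each of the p iterations, the list of (first-stage, third-stage)
    switch pairs whose connections are computed in parallel."""
    for step in range(p):
        yield [(i, (i + step) % p) for i in range(p)]

for step, pairs in enumerate(pairing_schedule(4)):
    print(step, pairs)
# step 0 pairs every A_i with C_i, step 1 with C_{i+1}, and so on
```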
Once all of the possible connections between a given first stage switch and a given third stage switch have been established, one of these connections will be selected and the connection made between the first stage switch and the third stage switch.
Figure 3a shows a concentrator 300 that can compute these connections in parallel, packing the addresses of all free, common middle-stage switches together onto its output lines. Concentrator 300 comprises outlet status array 310, inlet status array 320, AND gate array 330 and an array of binary comparison elements 340. Figure 3b shows the structure of the binary comparison element 340 and Figure 3c shows the truth table for the binary comparison element 340. The outlet status array 310 and the inlet status array 320 each have m elements, the elements having a '1' status to indicate that the associated middle stage switch is available and a '0' status to indicate that the associated middle stage switch is in use. The two status arrays 310 & 320 are logically compared using the array of m AND gates 330, such that a logic '1' output is produced only when both an outlet and its corresponding inlet are free. The binary comparison elements 340 process the outputs of the AND gate array 330 so that all of the '1' logic states are grouped together on the uppermost output lines, along with the address of the associated middle-stage switch. The apparatus of Figure 3 will guarantee to find up to n free switches in 3n-3 stages, using 3n(n-1)/2 comparison elements. The total number of computing steps that results is O(N) and the number of comparison elements required is O(N^(3/2)) using the previous optimum switch design. This allows all N connections through a 3-stage packet switch to be computed in at most linear time.
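Behaviourally, the concentrator packs the addresses of free common middle-stage switches onto its uppermost outputs. A software model of that input/output relationship, ignoring the internal stages and using assumed names, might look like this:

```python
def concentrate_free_switches(outlet_free, inlet_free, n):
    """Return up to n addresses of middle-stage switches that are free
    from both the first-stage and the third-stage switch, packed in order."""
    common = [addr for addr, (o, i) in enumerate(zip(outlet_free, inlet_free)) if o and i]
    return common[:n]   # the uppermost free addresses, as on the top output lines

outlets = [True, False, True, True, False, True, False]   # m = 2n-1 = 7 links
inlets  = [False, False, True, True, True, True, False]
print(concentrate_free_switches(outlets, inlets, n=4))    # [2, 3, 5]
```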
Figure 4 shows an alternative to the concentrator shown in Figure 3. The multi-stage concentrator 400 shown in Figure 4 has outlet status array 310, inlet status array 320 and AND gate array 330, as shown in Figure 3. The outputs of the m AND gates are fed into an (m,m) concentrator 490, which comprises m/2 binary comparison elements 340 (as shown in Figures 3b & 3c), two (m/2, m/2) concentrators 450, and an mxm rearrangeable switching network 460. Figure 5 shows how the (m/2, m/2) concentrator 450 and the mxm rearrangeable switching network 460 may be constructed using a plurality of binary comparison elements 340 for m=8. The concentrator 490 has an iterative structure, consisting of three parts. The first is a single input stage of comparison elements 340, which feed the second part, which consists of two (m/2,m/2) concentrators 450, i.e. two half-concentrators. These concentrate any logic ones on their inputs to their upper output lines. This is essential for feeding the final part of the (m,m) concentrator 490, which consists of binary comparison elements 340 configured in the form of an mxm rearrangeable switching network 460 (or permutation network). The overall effect of this particular example of an (m,m) concentrator is to pack the free, common middle-stage switch addresses into the uppermost output lines of the concentrator. Figure 5 shows an example of an (8,8) concentrator, which for simplicity just shows how the logic input and output ports a, b, e and g of each comparison element are connected, i.e. the address bus has been omitted.
In general the number of stages required in the concentrator can be shown to be O((log₂m)²), and it uses O(m(log₂m)²) comparison elements. The total number of computing steps is O(N^(1/2)(log₂N)²) for an optimally designed 3-stage Clos switch, and the total number of processing elements is O((Nm/n)(log₂m)²) = O(N(log₂N)²).
Figure 6 depicts an alternative embodiment of the present invention, comprising an alternative design for the (m,m) concentrator 690, which uses log₂(m/2) stages of comparison elements as a merging network 660 for the third part of the concentrator 600, instead of an mxm rearrangeable switching network 460. The remainder of the multistage concentrator 600 is as the multistage concentrator 400 shown in Figure 4. Analysis indicates that merging networks require a significantly lower number of binary comparison elements 340 than re-arrangeable switching networks and also require fewer stages of comparison elements for the concentrator design.
It can be shown that the number of stages needed to construct a concentrator increases as a series,

S(m,m) = Σ_{i=1}^{log₂m} i,

which for log₂m terms gives S(m,m) = ((log₂m)² + log₂m)/2, and the number of binary comparison elements in the (m,m) concentrator is O(m(log₂m)²). The total number of computing steps is O((N/2n)·(log₂(2n))²), which for an optimum 3-stage packet switch with n ≈ (N/2)^(1/2) is O(N^(1/2)(log₂N)²). The total number of comparison elements is O((N/2)·(log₂n)²), which for an optimum 3-stage packet switch with n ≈ (N/2)^(1/2) is O(N(log₂N)²).
Figure 7 shows a refinement on the structure of an (8,8) concentrator for the strictly non-blocking 3-stage switch. Because m = 2n-1, m is always odd, so the concentrator actually requires only 7 inputs and 4 outputs (i.e. n=4). The 4 outputs could all come from the outputs of the top half-concentrator, or could be distributed between the top and bottom half-concentrators, but never all through the bottom one (as it only has three inputs). Consequently, the top half-concentrator must have a full (((m+1)/2),((m+1)/2)) structure, i.e. (4,4), but the bottom one has one less input and, in this case, two fewer outputs, and so it need only be a (3,2) concentrator. Figure 7 shows the resulting (7,4) concentrator 900 structure, which comprises 3 binary comparison elements 340, a (4,4) concentrator 910, a (3,2) concentrator 920 and a 2 stage merging network 930. The (4,4) concentrator 910, (3,2) concentrator 920 and the 2 stage merging network 930 are all formed from a plurality of binary comparison elements 340.
If the log₂(m/2) logic stages of the merging network part of a concentrator were to be replaced by a single logic step, then the iterative procedure for constructing large concentrators out of smaller concentrators would no longer grow as O((log₂m)²), but as O(log₂m), i.e. in a similar fashion to the iterative procedure for constructing large permutation networks. Figure 8a shows an example of the structure of an (8,8) concentrator. The concentrator 1000 comprises four binary comparison elements 1010, two (4,4) half concentrators 1020 and a two stage merging network 1030 formed from five switching elements 1030A, 1030B, 1030C, 1030D & 1030E. Figure 8b shows the structure of binary comparison element 1010 and Figure 8c shows the truth table of the binary comparison element 1010. Binary comparison element 1010 comprises an OR gate 1011, a 2x2 switch 1012 and an AND gate 1013. Inputs a and b carry data regarding the availability of middle stage switches, whilst inputs c and d carry the address of the respective middle stage switch (Figure 8a shows only the data buses and omits the address buses for the sake of clarity). Each (4,4) half concentrator 1020 comprises 5 binary comparison elements 1010 configured as shown in Figure 8a. The two half-concentrators produce eight data outputs 1021-1028, with 1021 being the uppermost output and 1028 the lowermost output as shown in Figure 8a. The two stage merging network 1030 is formed from five switching elements 1030A-1030E. Figure 8d shows the structure of switching elements 1030A-1030E and Figure 8e shows the truth table of the switching elements 1030A-1030E. Switching elements 1030A-1030E comprise a NOT gate 1031, and two 2x2 switches 1032 & 1033. Inputs a and b carry data regarding the availability of middle stage switches, whilst inputs c and d carry the address of the respective middle stage switch. The control signals applied to the NOT gate of switching elements 1030A-1030E are taken from the data outputs of the half-concentrators.
The concentrator 1000 differs from the previous (8,8) concentrator structures by using some of the half-concentrator outputs as control inputs for the two stages of switching elements in the merging network. The switching elements are no longer used as logic elements, but the control inputs from the half-concentrators set the states of the 2x2 switches. These can all be set simultaneously by their control inputs, so there is only one logic "toggle" delay for all switches. The half-concentrator output logic states and addresses now propagate through the previously set switches, incurring no more "toggle" delays through the multiple stages of the merging network, but only propagation delays and any bandwidth narrowing through each switch. If the logic and switch elements are implemented electronically, e.g. with transistors, where propagation delays may be small due to chip integration, benefits may be gained if bandwidth narrowing per switch stage is less than the equivalent inverse "toggle" delay of a logic gate. If all-optical interferometric switches are considered for the implementation technology, the bandwidth narrowing can be extremely small per stage (e.g. around 10^12 Hz bandwidth is possible per stage using psec pulses), while the "toggle" rate may be far lower (e.g. 10 Gbit/s). This apparently enormous benefit of all-optical switching will of course be offset by the relatively long propagation delays, due to lower levels of integration in optics, but these propagation delays will decrease as optical integration technology advances.
Figure 8a shows all the possible permutations of logic states at the half-concentrator outputs 1021-1028. There are far fewer permutations at this location in the concentrator than at its overall inputs. At the half-concentrator outputs, the logic 1's representing free, middle-stage switch addresses are concentrated or packed into the uppermost outputs of each of the two half-concentrators. Now these two sets of 1's must still be packed together, side by side, but there can be outputs of the top half-concentrator, between these two sets of 1's, that are in the 0 logic state. However, because the 1's from the top half-concentrator are packed together, this means that any 0's are also packed together. So each of the m/2 - 1 top half-concentrator outputs that could be separating logic 1's can be used to control one of the merging network switches. Let us start by considering the data output 1024. When this is in the 0 state, we want to switch all possible 1's in outputs 1025, 1026 and 1027 up by one output, i.e. to outputs 1024, 1025 and 1026. This is achieved by controlling switches 1030B and 1030E by output 1024, such that they are switched to the crossed state when output 1024 is in the 0 state. When the logic 0 from output 1024 now propagates through switch 1030B, it will be routed out of the way, closing the gap between the two sets of 1's if there is only one logic 0 on output 1024. But if there is also a logic 0 on output 1023, from the permutations shown in Figure 8, we therefore need at most to switch outputs 1025 and 1026 up to outputs 1023 and 1024. Output 1023 must therefore control switches 1030A and 1030D. Switch 1030B has already been taken care of (controlled) by output 1024's logic 0. The fact that switch 1030E is also crossed does not matter, since when there are two logic 0's, in outputs 1023 and 1024, it will simply swap over the logic 0's on its outputs, which makes no difference. The two logic 0's are now removed from between the sets of logic 1's. If output 1022 is also a logic 0, so there are three logic 0's together on outputs 1022, 1023 and 1024, then we need to do no more than switch output 1025 to output 1022. So output 1022 should control switch 1030C. All other switch settings 1030A, 1030B, 1030D and 1030E don't matter. In this way three logic 0's will be removed, enabling the two 1's to be together on outputs 1021 and 1022. There is one additional problem: when output 1024 is in the 1 state, all switches 1030A - 1030E will be in the through state, and so the crossed wiring pattern between lines 1024 and 1025 will cause a 0 on output 1025 to be raised to output 1024 (thus creating a new gap between the logic 1s). There is only one permutation out of the 15 where this happens. This is solved, when output 1025 is in the 0 state, by using this output as a second control input to switch 1030D, in order to allow the logic 1 from output 1025 to be raised back to output 1024. Switch 1030D will be in the crossed state if either output 1023 or output 1025 are in the 0 state.
The number of stages of switching elements remains as above, but the number of logic steps can be reduced. Let us assume the number of logic steps for a (4,4) concentrator is 3 (as before). The number of steps for an (8,8) concentrator will be S(8,8) = 1 + S(4,4) + 1 = 2 + S(4,4), because a concentrator requires two half-concentrators sandwiched between a left-hand logic stage and a right-hand merging network. Similarly, S(16,16) = 1 + S(8,8) + 1 = 2 + 2 + S(4,4) and S(32,32) = 1 + S(16,16) + 1 = 2 + 2 + 2 + S(4,4), so S(m,m) = 2log₂(m/4) + S(4,4) = 2log₂m - 1. (If additional regeneration were required after every merging network within a concentrator, then S(m,m) = 3log₂m - 3 stages.) The total number of computing steps required is O((2N/n)log₂n), which for an optimum 3-stage packet switch with n ≈ (N/2)^(1/2) is O(N^(1/2)log₂N). The total number of logic and switching elements is the same as when the merging networks are used as multiple stages of logic elements, i.e. O(N(log₂N)²).
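The recurrence for the stage count is easy to check numerically; a small sketch, assuming S(4,4) = 3 as stated in the text:

```python
def stages(m):
    """Logic steps for an (m,m) concentrator with single-step merging:
    S(4,4) = 3 and S(m,m) = 2 + S(m/2,m/2), i.e. 2*log2(m) - 1."""
    return 3 if m == 4 else 2 + stages(m // 2)

for m in (4, 8, 16, 32):
    print(m, stages(m))   # 3, 5, 7, 9 -- matching 2*log2(m) - 1
```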
By careful design of the link pattern between the half-concentrators and the merging network, it is possible to reduce the logic complexity to such an extent that it becomes possible to implement the merging network with just a single stage of switches and/or gates, controlled by the logic states of the half-concentrator output links. This ensures an O(log₂m) overall growth rate for the number of stages of both logic and switch elements in a concentrator. There is more than one way of achieving this, but the embodiment shown in Figure 9 and described below is one example of such a solution. For an (8,8) concentrator, the half-concentrator output link permutations shown in Figure 8a are examined, and ranked in order of the leftmost position that a logic 1 can appear in a link, over all the permutations. The order is 1021, 1022, 1025, 1023, 1024, 1026, 1027, 1028. Figure 9a shows the link pattern between half-concentrator outputs and the merging network re-arranged in this order, for an (8,8) concentrator 1100. The concentrator 1100 has the same structure as the concentrator of Figure 8, except that the two stage merging network 1030 is replaced by a single-stage merging network 1130 which comprises elements 1130A-1130G, where elements 1130D, 1130E & 1130F are on/off gates and the other elements 1130A-1130C & 1130G are 1x2 switches. Figure 9b shows the control of the elements 1130A-1130G by the data links. Each element 1130A-G is controlled by only one link, except the gate that prevents a logic 1 on link 6 from exiting on that output port when the gates connecting it to either port 4 or port 5 are "on". It is evident from the permutations of 1s that many of the permutations (6 of them for an (8,8) concentrator) still have gaps which need to be closed up by the 1s below them. The 1s to be used for this are chosen with as simple a rule as possible. Every link that can possess a gap has logic to decide whether the particular permutation extant is currently producing a gap in that link. This can simply be achieved by recognising a 0 state in the link itself, and a 1 state in the link above it, i.e. NOT(link_i) AND (link_{i-1}). When this occurs, the 1s below the link are switched upwards appropriately to fill the gap. But for larger concentrators than (8,8) this will have the added complication that more than one link can have a gap (as defined above) within the same permutation, and so a decision must be made as to which link should control the raising of 1s below it. It should be the uppermost link having a gap. To decide which is the uppermost link with a gap, all links with gaps that could have links with gaps above them must also be logically controlled by those higher links, so that they can be disabled by the higher links (with gaps) from controlling switches or gates. This is not required in the (8,8) concentrator example shown in Figure 9a. The numbers in brackets on the links indicate the output ports to which the links may need to be switched. Most need to be switched only to one other output port, and only link 6 needs to be switched to two possible other output ports. With this single-stage merging network, the number of concentrator stages is again S(m,m) = 2log₂m - 1, and the total number of computing steps is again O(N^(1/2)log₂N) for an optimum 3-stage packet switch with n ≈ (N/2)^(1/2). The number of logic and switching elements depends on the particular concentrator structure. The one described above requires O(m²/8) gates and/or switches in the merging network, and hence O(m²/8) for a complete concentrator. So, overall there would be O(N^(3/2)) for an optimum 3-stage packet switch with n ≈ (N/2)^(1/2).
Other concentrator designs are known (e.g. Ted H. Szymanski, "Design principles for practical self-routing nonblocking switching networks with O(N·logN) bit-complexity," IEEE Trans. on Computers, vol. 46, no. 10, pp. 1057-1069, 1997; and Joseph Y. Hui, Switching and Traffic Theory for Integrated Broadband Networks, Kluwer Academic Publishers, 1990, Chapter 4) and these concentrator structures may also be used with the above parallel path search algorithm to achieve the same number of computing steps, i.e. O(N^(1/2)log₂N).
Figure 10

Connection requests in a 3-stage switch are conventionally handled sequentially. Before path-searching is performed on a new connection request, it is first established whether the output port is free and willing to accept the connection. This can simply be achieved by interrogating the status of the desired output port. Since there are N such ports, this needs only a decoder of log₂N stages. For N sequential connection requests, this would take O(Nlog₂N) steps. There follows a possible implementation for minimising the number of processing steps needed to ensure no overbooking when all N requests are processed in parallel, as in the method of the present invention.
Joseph Y. Hui, Switching and Traffic Theory for Integrated Broadband Networks, Kluwer Academic Publishers, 1990, Chapter 6.

Figure 11 shows an apparatus for determining the address of available middle-stage switches. The apparatus comprises a number of shift registers 1310, a decoder 1320 and an array 1330 containing a list of output port statuses. If each output port is represented by a bit of information representing its status (free or connected), then a simple list of N bits is needed. To establish whether a requested output port is free, one simply has to switch a request bit to the desired memory location in the list and, if the port is free, allow the request bit through to provide a time sequence of accepted requests. log₂N shift registers 1310 would clock out the log₂N output address bits in parallel to the stages of the decoder 1320. A single clock bit would then be routed through to the requested list location 1330. By staggering the shift registers, each clock bit would be switched by successive address bits of the same request at each decoder stage. This allows all N requests to be processed in O(N) steps. The successful request bits allowed through the status list (using simple logic in each list location) are connected into a single sequential stream of acceptance bits; the time location of these identifies the specific requests. In essence this method allows connection requests to be processed as a sequential series of bits, rather than of address words. The processing of N connection requests in this way reduces the number of processing steps needed to ensure no overbooking in a 3-stage switch by a factor O(log₂N), i.e. from O(Nlog₂N) to O(N), with respect to the conventional sequential processing of connection requests. No overbooking can therefore be ensured using far fewer steps (O(N)) than the conventional path-searching algorithm (O(N^(3/2))). This method has the same complexity as the polling technique described by Hui, and suffers from a similar drawback in that earlier cell requests have an unfair advantage in seizing the output port. The "probe-acknowledge-send" method of resolving output port conflicts described by Hui requires fewer steps (O((log₂N)^2)) by employing a Batcher sorting network, but requires O(N(log₂N)^2) components.
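A behavioural sketch of this bit-serial acceptance scheme in Python (the data layout and names are my assumptions; the staggered shift registers and the decoder are abstracted into a simple loop):

def resolve_overbooking(requests, n_ports):
    # requests[i] is the output port requested by request i. A single
    # bit is routed to the requested port's location in the status
    # list; the first bit to arrive claims the port, and the time slot
    # of each accepted bit identifies the corresponding request.
    port_free = [True] * n_ports
    accepted = []
    for req_id, port in enumerate(requests):  # one request per clock period
        if port_free[port]:
            port_free[port] = False
            accepted.append(req_id)
    return accepted

print(resolve_overbooking([3, 1, 3, 0], n_ports=4))  # -> [0, 1, 3]

Once the decoder pipeline is full, each request costs a constant number of clock periods, which is where the O(N) total comes from; the model also reproduces the noted drawback that earlier requests win any tie for an output port.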
There are two steps in the path-search algorithm, both of which are needed in order to decide how every individual switch of the 3-stage network must connect its inputs to its outputs, according to the present invention. In the first step, every first-stage switch has a 1xn matrix of connection requests, representing the requested destination of each input port. Each entry contains at least the log₂N-bit destination address, and possibly also the source address, as well as perhaps some other parameter values. Furthermore, every first-stage switch also has an associated list giving the statuses of its m = 2n−1 output ports, which records which middle-stage switches are seized and which are free from that first-stage switch. Every third-stage switch has a similar status list, which provides the statuses of all the links to the middle-stage switches from that third-stage switch. At the end of the algorithm, the first-stage matrix will also hold the addresses of the middle-stage switches allocated to each connection, and the first- and third-stage lists will hold the addresses of the connections using those middle-stage switches. Overall there are N/n iterations and in each one, the matrix and list of each first-stage switch are paired with the list of one third-stage switch, such that any first- or third-stage switch is involved in only one pairing.
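A sketch of one such pairing (assumptions: a connection request is reduced to a bare identifier, the status lists are plain Boolean lists, and the AND array and the two concentrators, which operate in parallel in the hardware, are modelled sequentially):

def allocate_pair(requests, first_free, third_free):
    # AND the two status lists to find the middle-stage switches that are
    # free at both the first-stage and the third-stage switch of this pairing.
    common = [f and t for f, t in zip(first_free, third_free)]
    free_middles = [i for i, ok in enumerate(common) if ok]  # 'concentrated'
    allocation = {}
    # Allocate the concentrated requests to the concentrated free middle
    # switches one for one, up to n connections per pairing.
    for req, mid in zip(requests, free_middles):
        allocation[req] = mid
        first_free[mid] = third_free[mid] = False  # seize the middle switch
    return allocation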
Figure 12 shows the structure of each of the N/n processors 1400 that are required. Each processor 1400 comprises a first-stage connection request matrix 1410, a first-stage status list 1420, a third-stage status list 1430, a first array of AND gates 1440, a plurality of shift registers 1450, a first concentrator 1460, a second concentrator 1465 and a second array of AND gates 1445. The first-stage connection request matrix 1410 is connected to the first concentrator 1460. The outputs of the first-stage status list 1420 and the third-stage status list 1430 are logically combined in the first array of AND gates 1440, which has m AND gates: the first element of the first-stage status list 1420 is combined with the first element of the third-stage status list 1430 in the first AND gate of the array 1440, and so on for all m elements of the two status lists and the array of AND gates. The outputs of the AND gate array 1440 are concentrated using the second concentrator 1465 such that all common, free middle-stage switches are connected to at most m adjacent links 1475. At the same time the contents of the first-stage connection request matrix 1410, i.e. the first-stage connection requests wishing to connect between that particular pair of first- and third-stage switches, are passed through the first concentrator 1460, such that the connection requests are concentrated onto at most n adjacent links 1470. Both of the concentrators 1460 and 1465 can operate bidirectionally, or possess two sets of switches/gates, in order to provide two separate contra-directional paths. In each of the other processors, the concentrators are simultaneously concentrating another of the pairings.
The second array of AND gates 1445, comprising n AND gates, enables the required number of connection requests (up to n) to allow address information to pass in each direction via switches or gates 1490 as follows, while the established routes through the concentrators are held in place. The first-stage connection request addresses are routed to the first- and third-stage status lists, where they are stored in the appropriate middle-stage switch locations. The log₂N address bits can be transmitted sequentially, i.e. pipelined. The state of one additional bit location can be changed to signify the seizure of that middle-stage switch from the first-stage switch. In the opposite direction, preferably simultaneously, the addresses of the common, free middle-stage switches are routed through to the first-stage connection request locations and stored there. The middle-stage switch addresses can be simply generated by clocking them out sequentially from the respective one of the plurality of shift registers 1450.
The first-stage and third-stage switch pairings can now be cycled to the next set of pairings. This can be simply achieved by transmitting the first-stage matrix and list to the next adjacent processor. Alternatively, transmission requirements would be lower if instead only the third-stage list were transmitted to the adjacent processor; the first processor would transmit its third-stage list to the (N/n−1)th processor. The iterations continue until all first-stage and third-stage switches have been paired together. At the end of this first path-searching step, all first- and third-stage switches have sufficient information to determine which of their output ports to connect to each of their input ports.
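Building on the allocate_pair sketch above, the whole schedule can be modelled with index arithmetic, the physical transmission of third-stage lists between processors being replaced by re-indexing (names and data layout are again mine):

def run_iterations(requests, m, k):
    # k = N/n first-stage and third-stage switches; requests is a list of
    # (request_id, first_stage, third_stage) tuples.
    first_free = [[True] * m for _ in range(k)]
    third_free = [[True] * m for _ in range(k)]
    allocation = {}
    for t in range(k):                        # N/n iterations
        for a in range(k):                    # all k pairings run in parallel
            c = (a + t) % k                   # rotation of the third-stage lists
            pending = [rid for rid, fa, tc in requests if fa == a and tc == c]
            allocation.update(allocate_pair(pending, first_free[a], third_free[c]))
    return allocation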
Assuming that the concentrators have the structure described by Szymanski (op. cit.), each status list concentrator has O(3log₂m) stages and each connection request concentrator has O(3log₂N) stages. For all N/n iterations, therefore, the middle-stage switch addresses take O((N/n)(3log₂m + 3log₂N + log₂m)) steps to be completed, and the connection request addresses take O((N/n)(3log₂m + 4log₂N)) steps. The latter is the larger, and therefore represents the number of computing steps needed to allocate middle-stage switches to all connections in the first- and third-stage switches.
In the second step of the path-search algorithm, the 1xN/n connection matrix for each middle-stage switch is computed. This is very simple, because the first-stage status lists already contain all the information. Using N links between the first-stage lists and a set of middle-stage connection matrix data memories, all destination port address bits can be transmitted sequentially in only log₂N steps, which is much faster than the first step. Of course this could be slowed down, if desired, by using fewer links with more steps. The middle-stage switches then have all the address information necessary to connect their input ports to their output ports.
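The second step only inverts information that the first step has already gathered; a minimal sketch, reusing the request tuples and the allocation dictionary from the sketches above:

def middle_stage_matrices(allocation, requests):
    # For each middle-stage switch, list which first-stage switch must be
    # connected to which third-stage switch through it.
    matrices = {}
    for rid, a, c in requests:
        mid = allocation.get(rid)
        if mid is not None:
            matrices.setdefault(mid, []).append((a, c))
    return matrices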
It is also possible to compute up to n separate connections for any Ai/Cj pair of first- and third-stage switches by using a single processor to interrogate the status of each common middle-stage switch in turn (m steps to go through all possible middle-stage switches). Although the overall number of computing steps is increased to O(N) again, the number of processors required is greatly reduced to just N/n, i.e. O(N^(1/2)). The advantage of the present invention is that the fastest parallel algorithm found for ensuring no overbooking in a 3-stage packet switch takes O(N) time using O(Nlog₂N) components, which is a lower component count than the existing "probe-acknowledge-send" method of resolving output port conflicts, which requires fewer steps (O((log₂N)^2)) but uses more components (O(N(log₂N)^2)).
A range of computing times has been found for solving the path-searching problem, the fastest of which takes sub-linear time O(N^(1/2)log₂N) using O(Nlog₂N) components by employing a new, highly parallel processing algorithm making use of existing multi-stage concentrator designs. Although the known parallel looping algorithm is potentially faster, requiring either O((log₂N)^3) computing steps using the same number O(Nlog₂N) of components, or O((log₂N)^2) computing steps using O(N^2) components, the use of Szymanski's self-routeing switch structure in the former may only be asymptotically useful for large N, and full interconnection in the latter may require far too much hardware.
The present invention provides a method whereby third-stage switches can be identified in order to route a packet from a desired input port to a desired output port. It should be understood that the method of the present invention is compatible with all manner of switch fabrics and methods of avoiding fabric contention. However, it is preferred that the fabric contention avoidance method described in the applicant's co-pending United Kingdom patent application GB0006084.8 (the contents of which are hereby incorporated by reference) is implemented.
It will be understood that the algorithms and structures described above are equally applicable to optical, electronic or opto-electronic switches. The person skilled in the art will readily appreciate that the invention also includes different combinations of Boolean logic and logic elements that achieve the same, or similar, results as those described above, e.g. replacing AND gates with NAND gates and changing the '0' and '1' signals appropriately. Although the above description has assumed that the three-stage switch is strictly non-blocking, i.e. m = 2n−1, the invention is still applicable to those three-stage switches having lower values of m.
CAMBRIDGE | Volpe Center Redevelopment | FT | FLOORS |
#1
Posted Apr 11, 2015, 5:28 PM
scalziand
Mortaaaaaaaaar!
Join Date: Aug 2007
Location: Naugatuck, CT/Worcester,MA
Posts: 3,491
Quote:
So many Volpe unknowns, but councillors prioritize housing, open space (and tower)
By Marc Levy
Tuesday, April 7, 2015
Landmark tower
One way to put more onto a site constrained by a large park: build higher around it.
Councillor Leland Cheung’s suggestion to build high also painted a tower as a way to ensure people around the world better identified Cambridge and Kendall Square as distinct from Boston.
“If there’s ever been a site in Cambridge for a tall building that says ‘That’s where Cambridge is,’ I think this is it. [The John Hancock Tower] is 800-and-something feet tall? Let’s go over 1,000,” Cheung said, “and say this is where Cambridge is, this is the heart of innovation in Massachusetts.”
Other councillors didn’t seem entirely sure how serious Cheung was, with Kelley saying 1,000 feet was too high but Mayor David Maher acknowledging that “if there’s a place for height in this city, this is probably the place.”
http://www.cambridgeday.com/2015/04/...ace-and-tower/
It's probably a bit early to make this thread, but eventually there will be a significant development here. | http://forum.skyscraperpage.com/showpost.php?s=296eb4b9be99e9b746e201982f7b4f36&p=6986670&postcount=1 |
subphylum uniramia examples
Subphylum Crustacea • the only group of arthropods that is primarily aquatic • examples: crayfish, lobster, shrimp, barnacles (e.g. Balanus), isopods (pill bugs), krill and crabs (but not horseshoe crabs). The Crustacea are a subphylum of arthropods with around 67,000 described species; the term crustacean derives from the Latin crusta, meaning crust or hard shell, and in some references the crustaceans are classified at the class level. Their body forms and ecologies are diverse. Distinguishing features include two pairs of maxillae (usually) and a calcified cuticular exoskeleton covering the body. Crustaceans belong to the phylum Arthropoda, along with insects, arachnids, millipedes, centipedes and the fossil trilobites, but occupy their own subphylum.

Subphylum Uniramia (uni = one, ramus = branch): a subphylum or phylum of arthropods having unbranched (uniramous) limbs, comprising the insects and myriapods, and formerly (in Manton's original scheme) also the onychophorans. Manton's classification divided the arthropods into a three-phyla polyphyletic group, with phylum Uniramia including the Hexapoda (insects), Myriapoda (centipedes and millipedes) and the Onychophora (velvet worms); this was later redefined so that the Uniramia are strictly "true" arthropods with exoskeletons and jointed appendages. The related name Atelocerata (phylum Arthropoda) denotes a subphylum containing those classes formerly assigned to the subphylum Uniramia. Almost all uniramians are terrestrial or aquatic (freshwater); respiration is by tracheae; and whereas the other groups have a marine origin, the Uniramia appear to have evolved on land. In subphylum Uniramia the body is divided into three parts (head, thorax and abdomen), there is a labrum and a labium, and the mouthparts may be highly modified among insect taxa to perform a variety of feeding functions. The head of class Insecta has one pair of antennae, compound eyes and simple eyes, a mandible and two pairs of maxillae, one pair of which is fused to form a lower lip or labium. Simple eyes (ocelli) sense light and dark, while compound eyes have multiple lenses (up to 2,000) and can see images from multiple angles. Insects (class Insecta, Hexapoda) are the largest class of phylum Arthropoda, with about 1 million species (three-fourths of all animals) divided into 29 orders; insects are the most successful form of life on the planet, making up more than half of all living things on Earth, and some experts propose that there are more than 10 million insect species. The Uniramia thus contains the flourishing Insecta.

Subphylum Myriapoda contains four classes: Chilopoda (centipedes), Symphyla, Diplopoda (millipedes) and Pauropoda. Although the name suggests myriad legs, the number of legs varies from 10 to 750. In centipedes, long legs allow for faster movement. In class Diplopoda the trunk segments are diplosegmentic (formed by the fusion of two segments during embryonal development); each diplosegment bears two pairs of legs and two pairs of spiracles, so millipedes have two pairs of legs per apparent segment and a body that is round in cross-section.

For comparison, chelicerates (Chelicerata) are a group of arthropods that includes harvestmen, scorpions, mites, spiders, horseshoe crabs, sea spiders and ticks; there are about 77,000 living species, and chelicerates have two body segments (tagmata) and six pairs of appendages. Subphylum Trilobitomorpha (class Trilobita) comprises the fossil trilobites, which possess a characteristic dorsal plate. Seafood that is not mollusc or fish is generally arthropod, and there are three great groups (subphyla or superclasses) of living arthropods: Crustacea, Uniramia and Chelicerata. Arthropod taxonomy is currently under review. (References: Manton, S. M. 1973; Brusca, R.C., Invertebrates, Sinauer.)
| https://www.theheirhunters.co.uk/masterchef-canada-bjbhxnx/archive.php?582bc7=subphylum-uniramia-examples
Glimåkra reeds are an economical choice and are lightweight, made in the traditional way with cord wrappings on a wooden spine. The flexible dents are made with thinner steel than the American reeds, which can be an advantage in sleying the reed (especially fine reeds), and for ease of repairing broken warp ends since you can easily open the dent with a fingernail. If we don't have what you need in stock, you will need to wait until our next shipment comes in (4–12 weeks).
Gowdey reeds are made here at home in the USA, only one state away from us in Rhode Island. Their sturdy epoxy-bound construction makes them a durable choice. This is an advantage if you get dent damage with the lighter-weight reeds. They can tolerate hard use and are especially recommended for rugs, weaving with no temple, or anything else heavy duty, but can be used to weave any fabric. | http://store.vavstuga.com/category/looms-reeds.html |
This easy onion soup is a classic French recipe with a crisp bread and ooey-gooey cheese topping. In this version, condensed beef broth gives the soup its bold flavor. If the broth tastes too strong for your taste, simply add some water. Or use a good-quality homemade or store-bought beef stock, preferably low sodium. You can always taste and add extra salt before the soup is ready to serve.

A great-tasting onion soup takes time, so avoid making it if you are in a hurry. Caramelizing onions to a deep, rich, flavorful brown can take 45 minutes or longer. While it takes some time, the soup is easy to fix with just five ingredients and is delicious and satisfying. This is an excellent soup to serve for lunch, and it's hearty enough to serve for dinner with a tossed green salad or Caesar salad.
If your soup bowls can stand the heat from the preheated broiler—made with sturdy stoneware, cast iron, or porcelain—you can broil the bread and cheese directly on the hot soup.
Ingredients
- 3 tablespoons unsalted butter
- 4 medium onions, sliced lengthwise into 1/4-inch pieces (about 6 cups)
- 2 (10 1/2-ounce) cans condensed beef broth
- 2 cans water (about 2 3/4 cups)
- 4 slices French bread (approximately 1 inch thick)
- 1/2 cup coarsely grated Gruyere cheese, more for serving
- Garnish: Chives, sliced
Steps to Make It
-
Gather the ingredients.
-
In a large non-stick skillet or saute pan over medium-low heat, melt the butter. Sauté the sliced onions for about 30 to 45 minutes, or until very soft and golden brown in color. Stir them frequently. If they are browning too quickly or scorching, turn the heat to low. It takes time to get that sweet caramelization, so don't try to rush it.
-
In a medium saucepan or Dutch oven, combine the beef broth with the cooked onion and bring to a boil. Reduce the heat to low and cover the pan. Simmer for 25 to 30 minutes.
-
Meanwhile, position a rack 4-inches from the broiler. Heat the broiler and place the bread on a foil-lined baking sheet. Toast until golden, 30 to 45 seconds.
-
Remove from the oven, flip the bread, and divide the cheese between the slices. Return to the oven and broil until the cheese is melted, 20 to 30 seconds.
-
To serve, pour the onion soup into four individual soup bowls. Float a slice of toasted French bread, cheese side up, in each bowl and sprinkle with extra cheese. Garnish with chives, if using.
-
Serve and enjoy!
Glass Bakeware Warning
Do not use glass bakeware when broiling or when a recipe calls to add liquid to a hot pan, as glass may explode. Even if it states oven-safe or heat resistant, tempered glass products can, and do, break occasionally.
Tips
- It takes time to caramelize onions to a rich, sweet, golden brown, so it shouldn't be rushed. Keep the heat low or medium-low and turn the onions frequently until they are soft and evenly browned. They should not be crusty or burnt.
- French onion soup may be frozen—without the bread and cheese—for up to three months in an airtight container.
- To make it in advance, prepare the onions and soup a day or two before you plan to serve it. Prepare the bread and cheese just before serving as you reheat the soup.
Recipe Variations
- If you don't have Gruyére cheese or can't find it locally, you may substitute with another type of Swiss cheese such as compté, Emmental, Beaufort, or Jarlsberg. Mozzarella will melt but lacks the flavor of Gruyére. Mild cheddar won't melt as nicely as Gruyere or mozzarella, but it is a good budget-friendly alternative.
- For more complex flavor, add a splash of dry sherry or Marsala wine to the soup about five minutes before it's ready.
- As soon as the French bread is toasted, rub each slice with the cut side of a clove of garlic before topping with cheese. Or spread the bread lightly with garlic butter.
- Taste and add a dash of Worcestershire sauce to the finished soup.
- Thyme goes well with onions. Add 2 to 3 sprigs of thyme to the soup when you add the broth; remove the thyme sprigs before serving. Alternatively, add about 1/2 teaspoon of dried thyme to the soup.
Given a binary tree, print all root-to-leaf paths
For the below example tree, all root-to-leaf paths are:
10 –> 8 –> 3
10 –> 8 –> 5
10 –> 2 –> 2
Algorithm:
Use an array path to store the current root-to-leaf path. Traverse from the root to all leaves in a top-down fashion. While traversing, store the data of all nodes on the current path in the array path. When a leaf node is reached, print the path array.
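A Python 3 implementation of the algorithm described above (a sketch; the class and helper names are illustrative):

class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def print_paths(root, path=None):
    # Store the data of every node on the current root-to-leaf path;
    # print the path whenever a leaf is reached.
    if path is None:
        path = []
    if root is None:
        return
    path.append(root.data)
    if root.left is None and root.right is None:
        print(" ".join(str(d) for d in path))
    else:
        print_paths(root.left, path)
        print_paths(root.right, path)
    path.pop()  # backtrack before returning to the parent

# Example tree from the article
root = Node(10)
root.left, root.right = Node(8), Node(2)
root.left.left, root.left.right = Node(3), Node(5)
root.right.left = Node(2)
print_paths(root)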
Output:
10 8 3
10 8 5
10 2 2
Time Complexity: O(n²), where n is the number of nodes.
References:
http://cslibrary.stanford.edu/110/BinaryTrees.html
Please write comments if you find any bug in above codes/algorithms, or find other ways to solve the same problem.
| https://www.geeksforgeeks.org/given-a-binary-tree-print-all-root-to-leaf-paths/?ref=rp
Learning how to draw a palm tree can help you if you are drawing a beach, scenery, a house, or any other illustration related to nature. It instantly evokes the beach and sandy environments. Across the world's habitats there are some 2,600 varieties of palm tree, big and small. They are found not only near seashores but also on plains and plateaus and in rainforest and desert areas. They are a source of natural beauty, food, and fiber for the world.

The palm tree is also part of the flags of South Carolina, Florida, and Saudi Arabia, which shows its importance to human life. In many cultures it is used as a symbol of victory, abundance, fertility, and peace. So learn this super easy palm tree drawing and enjoy it.
How to draw Palm Tree:
Step 1: Draw 2 long parallel lines depicting the long trunk of the tree. Keep the lines close together at the top and a little apart at the bottom, near the base.

Step 2: Now start drawing the upper part. For this step, draw 5 lines from the trunk in different directions. Don't make them straight; keep them curvy, as they form the base for the leaves.

Step 3: Start drawing the leaves of the tree. Around the first line, draw a curve from tip to bottom on both sides, then repeat the curve on the second line. Just keep in mind that these curves should be of random shape and size.

Step 4: Continue this leaf curve on the 3rd and 5th lines, drawing a few pointed tips. The 5th line overlaps the trunk.

Step 5: In this step, change the angle of the leaves. Draw a single curve for these leaves, and also draw some more leaf strands in between the already-drawn leaves.

Step 6: To give the leaves shape, draw small triangles all along the borders of each leaf. These will work as guidelines for the further drawing.

Step 7: At the point where the leaves emerge, draw overlapping circles depicting the palm fruit.

Step 8: Erase the guidelines drawn in Step 6: erase the parts of the border lying outside the triangles to get a jagged look, then darken the outlines.

Step 9: Draw flat horizontal lines across the trunk.

Step 10: Color the leaves of the tree green, and use brown for the trunk and fruit.
How to draw cartoon Palm Tree for kids:
Step 1: Start your drawing with the trunk. Draw 2 long curved lines forming a flattened triangular shape, open at the top and closed with a curve at the bottom.

Step 2: Next, draw small curves all along the trunk; this will give the trunk a realistic look.

Step 3: At the base of the trunk, draw roots with many random curvy lines.

Step 4: On top of the trunk, draw 3 circular shapes to add fruits. Drawing these fruits will close the opening of the trunk.

Step 5: Lastly, a palm tree is incomplete without leaves. Draw 2 curved lines from the fruits to form a single leaf, and do the same for the other leaves. Make the borders of these leaves jagged. Draw 5 to 8 pointed leaves on your tree.
Your palm tree drawing is complete in these easy and simple steps.
How to draw a Palm Tree step by step

Below are a few more step-by-step guides for drawing a palm tree. These are easy to draw, as they follow a simple pattern. You need paper and a pencil; just start with the given tutorial.
Illustration 1:
Illustration 2: | https://howtodrawa.org/palm-tree/ |
The Autonomous Vehicle Technology Expo 2019 will take place at Messe Stuttgart from 21 to 23 May. Around 90 companies will present their autonomous driving products and services.

X2E GmbH will be there with a booth presenting its high-bandwidth data loggers and accessories. In addition, Andreas Ehrle, Head of Research & Development at X2E, will give a keynote address at the Autonomous Vehicle Test & Development Symposium: "High-bandwidth data logging solution for autonomous driving".

NOTE: The event will take place in parallel with the Automotive Testing Expo, the world's largest exhibition of vehicle testing and development technologies. There, too, we will be represented with a booth for you in Hall 10, Stand 1136.
We are looking forward to your visit! | https://x2e.de/en/fairs-events/autonomous-vehicle-technology-expo-may-21-23-2019 |
Biologist & Writer
Life is the most extraordinary phenomenon in the known universe; but how does it work? Even in this age of cloning and synthetic biology, the remarkable truth remains: nobody has ever made anything living entirely out of dead material. Life remains the only way to make life. Are we missing a vital ingredient in its creation?
Like Richard Dawkins' The Selfish Gene, which provided a new perspective on how evolution works, Life on the Edge alters our understanding of life's dynamics. Bringing together first-hand experience of science at the cutting edge with unparalleled gifts of exposition and explanation, Jim Al-Khalili and Johnjoe McFadden reveal the hitherto missing ingredient to be quantum mechanics and the strange phenomena that lie at the heart of this most mysterious of sciences. Drawing on recent ground-breaking experiments around the world, they show how photosynthesis relies on subatomic particles existing in many places at once, while inside enzymes, those workhorses of life that make every molecule within our cells, particles vanish from one point in space and instantly materialize in another.
Each chapter in Life on the Edge opens with an engaging example that illustrates one of life’s puzzles – How do migrating birds know where to go? How do we really smell the scent of a rose? How do our genes manage to copy themselves with such precision? – and then reveals how quantum mechanics delivers its answer. Guiding the reader through the maze of rapidly unfolding discovery, Al-Khalili and McFadden communicate vividly the excitement of this explosive new field of quantum biology, with its potentially revolutionary applications, and also offer insights into the biggest puzzle of all: what is life? As they brilliantly demonstrate here, life lives on the quantum edge.
Reviews:
“Life on the Edge’ gives the clearest account I’ve ever read of the possible ways in which the very small events of the quantum world can affect the world of middle-sized living creatures like us. With great vividness and clarity it shows how our world is tinged, even saturated, with the weirdness of the quantum.” (Philip Pullman)
“Hugely ambitious … the skill of the writing provides the uplift to keep us aloft as we fly through the strange and spectacular terra incognita of genuinely new science.” (Tom Whipple The Times)
“Coherence is just one of the complex phenomena that Jim Al-Khalili and Johnjoe McFadden set out to teach the reader. They succeed by using delightfully revealing analogies and similes, some borrowed from their prior work, that make slippery concepts sit still for study.” (The Economist)
“This thrilling book represents an overview of a field that barely exists. Its argument is that there really is life, at a subatomic level, where indescribably small events have powerful effects on human and animal behaviour. ” (Nicholas Blincoe, The Daily Telegraph)
“… as Jim Al-Khalili and Johnjoe McFadden show in their groundbreaking book, evidence is accumulating that life uses quantum effects for processes ranging from bird navigation and plant photosynthesis to the way enzymes carry out biochemical reactions.” (Clive Cookson, The Financial Times)
““Life on the Edge” is a fascinating and thought-provoking book that combines solid science, reasonable extrapolation from the known into the unknown, and plausible speculation to give an accessible overview of a revolutionary transformation in our understanding of the living world. I will certainly look at robins with more respect in future.” (John Gribbin, The Wall Street Journal)
“This illuminating account of an important new field is a wonderfully educative read.” (A C Grayling)
“Physicist Jim Al-Khalili and molecular biologist Johnjoe McFadden explore this extraordinary realm with cogency and wit.” (Barbara Kiser, Nature Magazine)
Hot news!!! Life on the Edge has been selected as one of the ‘Books of the Year’ for 2014 by The Economist, The Financial Times and The Independent!
US edition of Life on the Edge now published!
Even hotter news!!!! Life on the Edge has just been nominated for the 2015 Royal Society Winton Prize!!!
but … the prize went to Adventures in the Anthropocene, by Gaia Vince.
We didn't win, but we had a terrific evening. Here's a video of the event:
and here’s all the shortlisted authors
and a nice collection of video animations for each book
and Life on the Edge has just been nominated for 2015 PhysicsWorld Book of the Year!
and Life on the Edge made it to the Wall Street Journal’s best non-fiction books for 2015 list! | https://johnjoemcfadden.co.uk/books/life-on-the-edge/ |
Abstract: Karyotypic analysis and genomic copy number analysis with single nucleotide polymorphism (SNP)-based microarrays were compared with regard to the detection of recurrent genomic imbalances in 20 clear cell renal cell carcinomas (ccRCCs). Genomic imbalances were identified in 19 of 20 tumors by DNA copy number analysis and in 15 tumors by classical cytogenetics. A statistically significant correlation was observed between the number of genomic imbalances and tumor stage. The most common genomic imbalances were loss of 3p and gain of 5q. Other recurrent genomic imbalances seen in at least 15% of tumors included losses of 1p32.3-p33, 6q23.1-qter and 14q and gain of chromosome 7. The SNP-based arrays revealed losses of 3p in 16 of 20 tumors, with the highest frequency being at 3p21.31-p22.1 and 3p24.3-p25.3, the latter encompassing the VHL locus. One other tumor showed uniparental disomy of chromosome 3. Thus, altogether loss of 3p was identified in 17 of 20 (85%) cases. Fourteen tumors showed both overlapping losses of 3p and overlapping gains of 5q, and the karyotypic assessment performed in parallel revealed that these imbalances arose via unbalanced 3;5 translocations. Among the latter, there were common regions of loss at 3p21.3-pter and gain at 5q34-qter. These data suggest that DNA copy number analysis will supplant karyotypic analysis of tumor types such as ccRCC that are characterized by recurrent genomic imbalances, rather than balanced rearrangements. These findings also suggest that the 5q duplication/3p deficiency resulting from unbalanced 3;5 translocations conveys a proliferative advantage of particular importance in ccRCC tumorigenesis.
Pei J, Feder MM, Al-Saleem T, Liu Z, Liu A, Hudes GR, Uzzo RG, Testa JR. Genes Chromosomes Cancer. 2010 Jul;49(7):610-619. doi: 10.1002/gcc.20771. Supported in part by NIH/NCI grant P30 CA006927.
Policy development and review is guided by policy CH.BP - Framework for Policy Development and Review.
The Board's Policy Review Committee assists the Board in ensuring that all policies are developed in accordance with the Framework for Policy Development and oversees the comprehensive review of policy. The Board may also pass a motion to provide further direction to the Policy Review Committee, for example to review a specific policy or create a new one.
- how controversial is the topic?
- what is the impact on student learning?
- how will staff and stakeholders be affected?
- is the new or revised policy a significant change?
The policy AA.BP - Stakeholder Relations provides direction for all consultations. Plans for consultation are provided to the Board of Trustees when draft policies are brought forward to Board for first consideration.
All policies are considered three times by the Board before final approval.
At first consideration - the proposed policy is recommended to the Board for approval to seek stakeholder review. The Board may make suggestions for changes or amendments prior to approval for stakeholder review.
After first consideration - the draft policy is revised as necessary and circulated for stakeholder review. The Administration will collect and provide an analysis of stakeholder input for the Policy Review Committee to consider when proposing changes to the policy.
Second consideration - the draft policy is reviewed by the Board, and amended as needed.
Note: second and third consideration may be done at the same Board meeting as the Board determines.
Third consideration - the draft policy is recommended for approval by the Board.
After approval - the Administration monitors and reports back to the Board of Trustees on implementation.
*ADDENDUM: The minimum seven-year review cycle for existing policy shall be suspended for the duration of the Full Review of Board Policy Project.
Last week’s frosty temperatures and windy conditions certainly brought down a lot of leaves. If we leave them to overwinter on the lawn, fallen leaves can cause problems as they build up and pack down, encouraging fungus and organisms that can cause damage.
Raking leaves by hand can be a stimulating experience, best accomplished on a non-windy day. I find it helpful to use a tarp and garden cart, and to pack down the leaves before I transport them to the compost pile. Combining grass clippings and food wastes with leaves in the compost helps speed leaf decomposition to develop a finished product sooner.
Using a mulching-type lawnmower (without a leaf bag) enables chopped-up leaves to sift down into the lawn, adding valuable nutrients to the soil. You may need repeated passes to reduce the leaves to sufficiently small particles.
Running a mower equipped with a leaf-catcher bag over the fallen leaves makes the leaves easier to collect and also chops them up. I spread 3-6” of chopped-up leaves as mulch around trees and shrubs, covering the ground where my cut-back herbaceous perennial garden now looks so naked. This layer of leaves is nutritionally beneficial, also creating a mulch that discourages weed germination, insulates and helps moderate winter soil temperature fluctuations.
These days it seems to be general practice to remove newly-fallen leaves and old mulch from landscaped beds down to bare soil, and then apply fresh mulch. I personally support removing weeds and unsightly debris, but prefer leaving already-decomposing organic matter in place to add “life” to the soil. It seems so illogical to remove one type of organics, only to replace it with another, particularly when the existing matter is working just fine. If you want that fresh-mulch appearance, it’s better to wait until spring. Even then, I prefer fluffing existing mulch, topping with a light layer of new.
Now that plants have dropped their leaves, it’s easier to spot undesirables that have moved-in where they’re not wanted. In my garden Vinca, Pachysandra and ivy have a way of expanding their territory, seriously crowding my shrubs. This is the time to trim them back before they become really unmanageable. It’s also easy to identify truly invasive plants like bittersweet, buckthorn (Rhamnus), multiflora rose, Virginia creeper and poison ivy. Physically pulling out their roots rather than trying to control them chemically is the right way to manage them.
November and early December are invigorating times to work outdoors and enjoy the crisp air, particularly if we are accomplishing an important task. And if we consider fallen leaves to be a resource rather than a burden, it certainly helps make this work more gratifying.
CDOT, in close coordination with Commerce City and Adams County, is now conducting the Vasquez Boulevard I-270 to 64th Avenue National Environmental Policy Act (NEPA) and Design project. This project aims to improve traffic operations and safety at and between the 60th Avenue and 62nd Avenue intersections with Vasquez Boulevard.
Project Facts
- Cost: Approximately $24M
- Contractor: TBD
- Timeline: August 2019 to 2023 (planning and design)
- Location:Commerce City in Adams County
CDOT conducted an alternatives development and evaluation process in 2020, considering alternatives carried forward from earlier study and newly developed alternatives. This process included extensive coordination with Commerce City and Adams County staff and elected officials. Preliminary recommendations have been made.
Coordination is currently occurring with potentially impacted property owners to gather feedback related to refinements to minimize impacts before preliminary design begins. This project will also include NEPA documentation and final design of the proposed project. The improvements are planned to be constructed starting in early to mid-2024, depending on funding availability. The majority of funding has been identified. Public and stakeholder coordination will occur throughout the process. Additional information can be found below and through the Resources links on the left side of this page.
The Vasquez Boulevard I-270 to 64th Avenue NEPA and Design project began in August 2019 and is expected to be completed in late 2023.
Construction is anticipated to begin in early- to mid-2024, depending on the availability of funding. | https://codot.gov/projects/vasquez-improvements-i270-to-64th |
Presidential: Head of state/government ('president') is popularly elected* for fixed term. Ministers can be dismissed by president. If assembly can dismiss any executive, it's only by supermajority. Popularly elected equals direct election (including two-round, plurality, or presidential elections fused with parliamentary elections), or election by electoral college directly elected for that purpose alone.
Semi-presidential (premier-presidential): Head of state ('president') is popularly elected. President has some part in appointing the head of government ('prime minister') but the prime minister, and the cabinet, is formally only removable by a majority in an assembly.
Semi-presidential (president-parliamentary): Head of state ('president') is popularly elected. Head of government ('prime minister') is appointed by president and removable by both president and a majority in an assembly. This includes countries that lack a prime minister (and may be described as quasi-semi-presidential), if the ministers appointed by the president are individually removable by the assembly (Uruguay, Colombia, Maldives, Afghanistan, Iran).
Monarchy: Hereditary or elective monarch holds executive power and appoints the government. If there is an assembly, it can only remove ministers by supermajority (or, even if it does have the power to remove ministers by majority vote, it has not yet asserted this power or established its primary role in forming the government, which would have made it parliamentary).
Assembly-independent: Head of state/government or government elected by assembly through majority (or ex officio electoral college of elected officials) for fixed term. If assembly can dismiss government, it's only by supermajority. In some cases, head of state/government election by representative assembly requires supermajority. If this is not achieved, remaining candidates go to final round of election by an electoral college composed of other officials. | http://constitutionnet.org/vl/item/executive-types-graphic-illustration |
Quantitative phase microscopy by digital holography is a good candidate for high-speed, high-precision profilometry. Multi-wavelength optical phase unwrapping avoids the difficulties of numerical unwrapping methods and can generate surface topographic images with large axial range and high axial resolution. But the large axial range is accompanied by proportionately large noise. An iterative process utilizing holograms acquired with a series of wavelengths is shown to be effective in reducing the noise to a few micrometers even over an axial range of several millimeters. An alternative approach, shifting the illumination angle instead of using multiple laser sources, provides multiple effective wavelengths from a single laser, greatly reducing system complexity and providing great flexibility in wavelength selection. Experiments are performed demonstrating the basic processes of multi-wavelength digital holography (MWDH) and multi-angle digital holography (MADH). Example images are presented for surface profiles of various types of surface structures. The methods have potential for versatile, high-performance surface profilometry, with a compact optical system and straightforward processing algorithms.
Published online: 06 May 2022, doi: 10.37188/lam.2022.026
Three-dimensional (3D) printing, also known as additive manufacturing (AM), has undergone a phase of rapid development in the fabrication of customizable and high-precision parts. Thanks to the advancements in 3D printing technologies, it is now a reality to print cells, growth factors, and various biocompatible materials altogether into arbitrarily complex 3D scaffolds with a high degree of structural and functional similarity to the native tissue environment. Additionally, with overwhelming advantages in molding efficiency, resolution, and a wide selection of applicable materials, optical 3D printing methods have undoubtedly become the most suitable approach for scaffold fabrication in tissue engineering (TE). In this paper, we first provide a comprehensive and up-to-date review of current optical 3D printing methods for scaffold fabrication, including traditional extrusion-based processes, selective laser sintering, stereolithography, and two-photon polymerization. Specifically, we review the optical design, materials, and representative applications, followed by a fabrication performance comparison. Important metrics include fabrication precision, rate, materials, and application scenarios. Finally, we summarize and compare the advantages and disadvantages of each technique to guide readers in the optics and TE communities to select the most fitting printing approach under different application scenarios.
Published online: 01 May 2022, doi: 10.37188/lam.2022.014
Imaging through random media continues to be a challenging problem of crucial importance in a wide range of fields of science and technology, ranging from telescopic imaging through atmospheric turbulence in astronomy to microscopic imaging through scattering tissues in biology. To meet the scope of this anniversary issue in holography, this review places a special focus on holographic techniques and their unique functionality, which play a pivotal role in imaging through random media. This review comprises two parts. The first part is intended to be a mini tutorial in which we first identify the true nature of the problems encountered in imaging through random media. We then explain through a methodological analysis how unique functions of holography can be exploited to provide practical solutions to problems. The second part introduces specific examples of experimental implementations for different principles of holographic techniques, along with their performance results, which were taken from some of our recent work.
Published online: 01 May 2022, doi: 10.37188/lam.2022.024
Coded aperture imaging (CAI) is a technique to image three-dimensional scenes with special controlled abilities. In this review, we survey several recently proposed techniques to control the parameters of CAI by engineering the aperture of the system. The prime architectures of these indirect methods of imaging are reviewed. For each design, we mention the relevant application of the CAI recorders and summarize this overview with a general perspective on this research topic.
Holography in the invisible. From the thermal infrared to the terahertz waves: outstanding applications and fundamental limits
Published online: 11 April 2022, doi: 10.37188/lam.2022.022
Since its invention, holography has mostly been applied at visible wavelengths in a variety of applications. Specifically, non-destructive testing of manufactured objects was a driver for developing holographic methods and all related ones based on speckle pattern recording. One substantial limitation of holographic non-destructive testing is the setup stability requirement, which is directly related to the laser wavelength. This observation has driven work for the past 15 years on developing holography at wavelengths much longer than visible ones. In this paper, we first review research carried out in the infrared, mostly digital holography at thermal infrared wavelengths around 10 micrometers. We discuss the advantages of using such wavelengths and show different examples of applications. In non-destructive testing, large wavelengths allow using digital holography in perturbed environments on large objects and measuring large deformations, typical of the aerospace domain. Other astonishing applications, such as reconstructing scenes through smoke and flames, have been proposed. Moving further along the spectrum, digital holography with so-called terahertz waves (up to 3 millimeters in wavelength) has also been studied. The main advantage here is that these waves easily penetrate some materials. Therefore, one can envisage terahertz digital holography to reconstruct the amplitude and phase of visually opaque objects. We review some cases in which terahertz digital holography has shown potential in biomedical and industrial applications. We also address some fundamental bottlenecks that prevent fully benefiting from the advantages of digital holography when increasing the wavelength.
Published online: 02 April 2022, doi: 10.37188/lam.2022.017
Optically transmissive and reflective objects may have varying surface profiles, which translate to arbitrary phase profiles for light either transmitted through or reflected from the object. For high-throughput applications, resolving arbitrary phases and absolute heights is a key problem. To extend the ability to measure absolute phase jumps in existing 3D imaging techniques, the dual-wavelength concept, proposed in the late 1800s, has been developed over the last few decades. By adopting an extra wavelength in measurements, a synthetic wavelength, usually larger than each of the single wavelengths, can be simulated to extract large phase or height variations, from the micron level to the tens-of-centimeters scale. We review a brief history of the developments in the dual-wavelength technique and present the methodology of this technique for using the phase difference and/or the phase sum. Various applications of the dual-wavelength technique are discussed, including height feature extraction from the micron scale to the centimeter scale in holography and interferometry, single-shot dual-wavelength digital holography for high-speed imaging, nanometer height resolution with the fringe subdivision method, and applications in other novel phase imaging techniques and optical modalities. The noise sources for dual-wavelength techniques for phase imaging and 3D topography are discussed, and potential ways to reduce or remove the noise are mentioned.
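As a point of reference (a standard relation, not specific to this paper): two wavelengths λ1 and λ2 produce a synthetic wavelength Λ = λ1·λ2 / |λ1 − λ2|. For example, λ1 = 633 nm and λ2 = 532 nm give Λ ≈ 3.33 µm, extending the unambiguous phase-measurement range in proportion.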
Contributions of holography to the advancement of interferometric measurements of surface topography
Published online: 02 April 2022, doi: 10.37188/lam.2022.007
Two major fields of study in optics—holography and interferometry—have developed at times independently and at other times together. The two methods share the principle of holistically recording as an intensity pattern the magnitude and phase distribution of a light wave, but they can differ significantly in how these recordings are formed and interpreted. Here we review seven specific developments, ranging from data acquisition to fundamental imaging theory in three dimensions, that illustrate the synergistic developments of holography and interferometry. A clear trend emerges, of increasing reliance of these two fields on a common trajectory of enhancements and improvements.
Published online: 30 March 2022, doi: 10.37188/lam.2022.013
With the explosive growth of mathematical optimization and computing hardware, deep neural networks (DNNs) have become tremendously powerful tools for solving many challenging problems in various fields, ranging from decision making to computational imaging and holography. In this manuscript, I focus on the fruitful interactions between DNNs and holography. On the one hand, DNNs have been demonstrated to be particularly proficient for holographic reconstruction and computer-generated holography in almost every aspect. On the other hand, holography is an enabling tool for the optical implementation of DNNs, owing to its capability for interconnection and light-speed processing in parallel. The purpose of this article is to give a comprehensive literature review of the recent progress of deep holography, an emerging interdisciplinary research field that is mutually inspired by holography and DNNs. I first give a brief overview of the basic theory and architectures of DNNs, and then discuss some of the most important progress in deep holography. I hope that the present unified exposition will stimulate further development in this promising and exciting field of research.
Published online: 14 March 2022, doi: 10.37188/lam.2022.004
Fluorescent nanomaterials have long been recognized as essential contributors to the advancement of material technologies. Over the years, the rapid expansion of this massive selection of materials has led to the emergence of systems with tunable and unique fluorescent properties, occupying pivotal roles across niche areas in imaging, photonics, micro-encryption, and steganographic applications. In recent years, research interest in the translation of laser-based operations towards the production and modulation of nanomaterial fluorescence has been reignited, owing to its ease of operation and low cost. In this paper, we summarize the assortment of laser operations for the fabrication, modification, and spatial positioning of various fluorescent nanomaterials, ranging from metallic nanoparticles, carbon dots, and 2D ultrathin films to wide-bandgap nanomaterials and upconversion nanocrystals. In addition, we evaluate the importance of laser-modified fluorescence for various applications and offer our perspective on the role of laser-based techniques in the forthcoming advancement of nanomaterials.
55 Years of Holographic Non-Destructive Testing and Experimental Stress Analysis: Is there still Progress to be expected?
Published online: 10 March 2022, doi: 10.37188/lam.2022.008
Holographic methods for non-destructive testing, shape measurement, and experimental stress analysis have shown themselves to be versatile tools for the solution of many inspection problems. Their main advantages are their non-contact nature, their non-destructive and areal working principle, fast response, and high sensitivity, resolution and precision. In contrast to conventional optical techniques such as classical interferometry, the holographic principle of wavefront storage and reconstruction makes it possible to investigate objects with rough surfaces. Consequently, the response of various classes of products to operational or artificial load can be examined very elegantly. The paper looks back at the history of holographic metrology, honors the inventors of the main principles, discusses criteria for the selection of a proper inspection method, and shows exemplary applications. However, the main focus is on modern developments inspired by the rapid technological progress in sensing technology and digitization, on current applications, and on future challenges.
The Gender Impact of Social Security Reform compares the gendered outcomes of social security systems in Chile, Argentina, and Mexico, and presents empirical findings from Eastern and Central European transition economies as well as several OECD countries. Women’s positions have improved relative to men in countries where joint pensions have been required, widows who have worked can keep the joint pension in addition to their own benefit, the public benefit has been targeted toward low earners, and women’s retirement age has been raised to equality with that of men. The Gender Impact of Social Security Reform will force economists and policy makers to reexamine the design features that enable social security systems to achieve desirable gender outcomes.
216 pages | 23 line drawings, 66 tables | 6 x 9 | © 2008
Economics and Business: Economics--Development, Growth, Planning, Economics--Government Finance, Economics--International and Comparative, Health Economics
Reviews
“This book provides a detailed analysis of how men and women are faring under the new pension systems in three Latin American countries with recent reforms. The cross-national nature of the study, along with the way in which the authors conduct their analysis and discuss the results, will allow readers to draw many useful conclusions about how policy choices affect pension outcomes for women and men. The Gender Impact of Social Security Reform has important lessons for analysts, policy makers, and interested lay people in all countries that have enacted or are considering enacting reforms.”
Courtney Coile, Wellesley College
“Changes in longevity and fertility, and persistent increases in healthcare costs, are driving us rapidly toward solvency crises in our entitlement programs. As we confront necessities for reform, experience from other countries can suggest options and evidence of what may work. The Gender Impact of Social Security Reform contributes very knowledgeable and in-depth discussions of Social Security reform in Chile, Argentina, and Mexico—paying particular attention to the changing needs and economic roles of women.”
John Laitner, director of the Michigan Retirement Research Center and professor of economics at the University of Michigan
Table of Contents
Acknowledgments
Introduction
One / Why Do Social Security Systems and Social Security Reforms Have a Gender Impact?
Two / Living Arrangements and Standards of Elderly Men and Women
Three / How Do We Measure the Impact of Social Security Systems and Reforms?
Four / Chile
Five / Argentina
Six / Mexico
Seven / Gender Issues in Social Security Reforms of Other Regions
Eight / Design Features That Determine Gender Outcomes
Nine / Conclusion
Appendixes
Notes
References
GDP requires that medicines are obtained from the licensed supply chain and are consistently stored, transported and handled under suitable conditions. Medicines & Healthcare Products Regulatory Agency
Good Distribution Practice outlines a set of procedures to be followed by any EU company involved in the pharmaceutical industry. The guidelines were originally published by the European Council in 1992 within European Directive 92/25/EC. Since that initial publication they have been amended repeatedly, and European Directive 92/25/EC has evolved into European Directive 2001/83/EC. Within it, Articles 84 and 85b(3) focus specifically on medicines for human consumption, and these are regarded as the core of Good Distribution Practice within the telematics industry.
The most recent amendment formed part of European Directive 2012/26/EU, which extended the guidelines to cover pharmacovigilance: the monitoring of drugs after they have been licensed for use, in order to evaluate previously unreported adverse reactions. Such reactions could arise from insufficient management of the medicine during transportation; if the conditions in the vehicle chamber are inadequate, the characteristics of the drug may change.
Good Distribution Practice applies to every pharmaceutical-associated firm within the EU, and the amount of information on offer can therefore appear overwhelming. However, as mentioned previously, it is Articles 84 and 85b(3) that concern the transportation of medical products. The key principles outlined within these include:
Products and shipment containers should be secured to prevent or provide evidence of unauthorised access.
Pharmaceutical products should be stored and transported in accordance with procedures such that appropriate environmental conditions are maintained. For example, a Cold Chain Solution must be used for temperature sensitive products, often referred to as thermolabile.
The required storage conditions for pharmaceutical products should be maintained within acceptable limits during transportation. If a temperature excursion is noted by the entity responsible for shipping, it must be reported to both the distributor and the recipient (a minimal monitoring sketch follows this list).
Where special conditions are required during transportation that differ from or narrow the given environmental conditions, these should be stated by the manufacturer on the labels. According to the guidelines, these requirements must be monitored and recorded.
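As a rough illustration of the excursion-reporting principle above, the following Python sketch flags readings outside a labelled storage band and notifies both required parties. The 2-8 °C band, the Reading layout and the notify callback are assumptions made for illustration; they are not drawn from the directive's text or from any vendor's API.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reading:
    timestamp: datetime
    celsius: float

def find_excursions(readings, low=2.0, high=8.0):
    """Return readings outside the labelled range; 2-8 C is a common
    cold-chain band, but the real limits come from the product label."""
    return [r for r in readings if not (low <= r.celsius <= high)]

def report(excursions, notify):
    # GDP: the entity responsible for shipping must report excursions
    # to both the distributor and the recipient.
    for party in ("distributor", "recipient"):
        for r in excursions:
            notify(party, f"{r.timestamp.isoformat()}: {r.celsius:.1f} C out of range")

data = [Reading(datetime(2018, 12, 17, 9, 0), 4.1),
        Reading(datetime(2018, 12, 17, 9, 15), 9.3)]
report(find_excursions(data), lambda party, msg: print(party, "<-", msg))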
In short, it is the responsibility of the supplying wholesale distributor to protect medicinal products against breakage, adulteration and theft, and to ensure that temperature conditions are maintained within acceptable limits during transport. GDP is effectively a quality assurance system, covering requirements for the purchase, receipt, storage and export of drugs intended for human consumption.
GDP is monitored within the UK by the Medicines and Healthcare products Regulatory Agency (MHRA). The MHRA carries out inspections to check whether manufacturing and distribution sites comply with the GDP guidelines. Companies are inspected on application for a manufacturer or wholesale dealer licence, and then periodically, with little or no notification, based on risk assessments.
Seven Telematics offer a selection of high-quality products that can ensure GDP compliance during the transportation process, even in the most remote locations. The Transcan® package will play a pivotal role in ensuring that the mandatory environmental conditions are maintained. The SevenEye system will be key in sustaining a secure forensic audit trail, which is vital when proving 'due diligence' during legal proceedings. Alarm functions within the system will alert fleet managers, as well as the driver, when there is a deviation from the essential temperature range. This will allow immediate action to be taken, minimising any potential spoilage losses. Full CANbus integration alongside a sophisticated KPI reporting suite will enable pharmaceutical fleets to be managed faultlessly.
Whenever something goes terribly wrong, the first thing ministers do is to call in a judge. The judiciary’s reputation for fearless fact-finding offers the best hope of restoring public confidence. But how bad does it have to get for the government to recruit a serving lord justice of appeal and no fewer than 15 former judges from across the UK?
Sir Adrian Fulford (pictured) and his team of judicial commissioners – as the retired senior judges are called – have been appointed under the Investigatory Powers Act 2016 to review each ministerial warrant issued under the act. They started work last week at an office in central London. On their shoulders will rest the public’s trust in the entire secret state – the security and intelligence services, the police and other law enforcement agencies that use covert powers to protect us all.
Ministers have power to issue warrants for targeted interception of communications – what used to be called wiretapping – and for ‘equipment interference’, effectively computer hacking. In a ‘double-lock’ introduced by the new legislation, commissioners must then decide whether to approve those warrants.
‘They will say “yes” or “no” and perhaps occasionally “maybe”,’ Fulford told me. ‘There will be around four judicial commissioners on duty at any one time and they will be considering everything from applications to hold significant quantities of bulk data through to highly particular requests to bug, for example, a notorious armed robber in the lead-up to the gang’s latest heist.’
Fulford holds the post of investigatory powers commissioner. His office, known as IPCO, is expecting around 3,000 applications a year for interception warrants and a further 4,000 applications from agencies seeking authorisations for property interference, intrusive surveillance and similar powers. Fulford provides general guidance for the 15 commissioners as well as hearing appeals when they refuse warrants.
What test will IPCO use when deciding whether to approve a warrant? Section 23 of the Investigatory Powers Act says the commissioner must review the necessity and proportionality tests applied by the minister who granted the warrant. In doing so, the commissioner ‘must apply the same principles as would be applied by a court on an application for judicial review’.
But judicial review, at least when the courts are applying principles associated with the Wednesbury case, has traditionally been more concerned with the mechanics of a decision than with its merits. When the new act was first proposed, I recall suggesting to Home Office officials that tests such as irrationality were not much of a safeguard.
That message seems to have been heeded. The commissioners would not be able to maintain public confidence if they applied a test of Wednesbury reasonableness. So Fulford has published a detailed advisory notice making it clear that the commissioners will apply the tests of necessity and proportionality, just as the courts would in a judicial review application based on human rights law or EU law.
In difficult or important cases, a commissioner may decide to take technical or legal advice before deciding whether to approve a warrant. IPCO has its own technology advisory panel headed by the leading statistician Sir Bernard Silverman, a former chief scientific adviser to the Home Office. Its standing counsel is Tom Hickman of Blackstone Chambers.
With the benefit of their advice, it is possible that a commissioner will be better informed about the capabilities of a device or a technique than the person who issued the warrant. Section 2 of the 2016 act requires ministers to consider whether information could be obtained by ‘other less intrusive means’ – for example, not monitoring people who happen to be in the same place as a target.
So what happens if the commissioner comes across something that might have persuaded the minister to take a different approach? Fulford’s answer is an interesting one and shows how different this process is from that of a court.
‘We couldn’t end up in the position where, because of additional information which we had obtained, we were approving a warrant that the minister would have refused,’ he said. And he would even send the papers back to the minister for reconsideration if new information fortified the case for a warrant. The commissioners’ role is to review decisions taken by ministers, not to take those decisions themselves.
After Edward Snowden leaked classified information from the US National Security Agency in 2013, its UK counterparts realised that they must be much more open about what they do – even though the details must remain confidential. ‘In the post-Snowden world,’ Fulford said, ‘security and law enforcement agencies can no longer claim to be allowed to work in the shadows regardless of whether particular details of their work actually need to be kept under wraps.’
By casting light on those shadows, Fulford and his commissioners are allowing that vital work to continue. | https://www.lawgazette.co.uk/commentary-and-opinion/public-trust-in-the-post-snowden-secret-state/5066698.article |
On Wednesday, the Federal Reserve is all but certain to raise interest rates, a move the US President does not like. The dollar is slightly weaker but trading within a narrow fluctuation range. Events on the equity market are having only a small impact on the currency market. Core inflation in Poland is in line with expectations.
Will the dollar appreciate on Wednesday?
There are two more days until the decision on interest rates in the USA (a practically certain 0.25-percentage-point increase), and US President Donald Trump has already warmed up the atmosphere. In a Twitter post this afternoon, he urged the Federal Reserve (Fed) not to raise interest rates, citing the strong dollar and low inflation.
This is not the first time that Trump has put pressure on the Fed, so the market's response was very limited. On Wednesday, Fed chairman Jerome Powell will hold a press conference, giving him the chance to emphasise the independence of the US central bank, which should support the dollar regardless of the new macroeconomic projections.
Quotations of the main currency pair, i.e. euro/dollar, were slightly (about 0.3%) above Friday's level, but stayed within a very narrow fluctuation range. Wednesday's events (the Fed's publications and press conference) may provide the dollar with a trigger that pulls it out of its current stagnation.
Depending on the direction of the move (with slightly higher chances of dollar appreciation), it will have a significant impact on the whole zloty basket, not only on the USD/PLN pair. If the US currency appreciates significantly, the zloty could come under heavy pressure. Today, however, the zloty's quotations were relatively stable: the slight weakening of the dollar translated into equally slight gains for the zloty against the basic currencies. Core inflation, published by the National Bank of Poland, did not disappoint either. The 0.7% increase in prices, excluding energy and food, turned out to be in line with the market consensus. This should support the zloty's stability until the end of the day.
In the afternoon, there are no major events planned or macroeconomic data scheduled for publication, which increases the probability that the main currencies will continue to trade within limited fluctuation ranges. During the European session we observed drops in the main indexes, which deepened further as trading opened on the main US markets. The Dow Jones index of the 30 largest companies fell to its lowest level since May, while the technology-focused Nasdaq fell to its lowest level since April, as did the S&P 500.
Recently, we have been observing a somewhat greater separation between events on the equity and currency markets. Geopolitical risks and trade tensions significantly increase equity-market volatility, but the final impact of these events is difficult to translate into currencies. As a result, the zloty is also more resistant to equity-market depreciation, and we should not observe significant weakening.
Tomorrow's preview
Tuesday should be another relatively peaceful day on the currency market. The calendar of important macroeconomic publications is practically empty, and the market should focus on Wednesday's events related to the Federal Reserve.
The zloty is currently in good condition, although this is mainly due to the lack of dollar appreciation. At 10:00 a.m. the Polish Central Statistical Office (GUS) will publish data on average wages and employment in the enterprise sector. The median of market expectations points to growth of 7.2 and 3.0 per cent per year, respectively.
A high rate of wage growth has been maintained in Poland for a long time: the last time it was below 6% (on a yearly basis) was in mid-2017. In theory, this high pace of wage growth should support the return of inflation to its target, although other factors are currently keeping the inflation rate relatively low. This is a process we are observing not only in Poland but also in the eurozone. The Monetary Policy Council remains very keen to raise interest rates, and a single GUS reading will not change much in this respect. However, it may give the zloty some support in case of dollar appreciation.
With a minimum of 10% chromium content, stainless steel is known for corrosion resistance. It's typically not as strong as steel and can be more difficult to machine.
Grade 5 is the strongest of all the titanium alloys thanks to its higher aluminum and vanadium content. It offers a versatile mix of good corrosion resistance, weldability, and formability. It's often used for turbine blades, fasteners, and spacer rings.
These nickel-iron-cobalt alloy rods expand at the same rate as glass when heated. They offer better machinability than Invar 36 and are often used anywhere a dependable glass-to-metal seal is required, such as in diodes and microwave tubes.
Use clear plastic for windows, instrument covers, display cases, and other applications where visibility is essential.
Garolite G-7 withstands temperatures up to 425° F—higher than any other grade of Garolite. While not as strong as Garolite G-9, it offers better arc resistance in dry environments.
Often used as a lightweight alternative to metal and wood, fiberglass is widely used in electrical and structural applications. | https://www.mcmaster.com/shafts/hardness-rating~hard/ |
32 Kan.App. 2d 82 (2003)
79 P.3d 211
STATE OF KANSAS, Appellee,
v.
TROY A. PERCIVAL, Appellant.
No. 89,498.
Court of Appeals of Kansas.
Opinion filed November 21, 2003.
*84 Carl F.A. Maughan, of Law Offices of Carl Fredrick Alexander Maughan L.L.C., of Wichita, for appellants.
Lesley A. Isherwood, assistant district attorney, Nola Foulston, district attorney, and Phill Kline, attorney general, for appellee.
Before ELLIOTT, P.J., MALONE, J., and ROGG, S.J.
MALONE, J.
Troy A. Percival appeals his jury conviction of aggravated robbery. He raises numerous claims of error, including the improper admission of evidence of his prior convictions, improper jury instructions, and insufficiency of the evidence. We conclude that Troy received a fair trial and affirm his conviction.
On December 27, 2001, at approximately 12:20 a.m., Jennifer Scott heard someone punching in numbers to enter the security door at the Comfort Inn in Wichita, Kansas. Scott was working as the motel's night auditor and believed the person attempting to enter was a coworker. However, a masked man with a meat cleaver came into the office, pushed Scott back into the motel's counter, shoved the cleaver in her face, and verbally threatened her. The man was "covered from head to toe in dark clothing." After shoving Scott, the man went directly to the security camera and struck it with the meat cleaver. Scott pled for him not to hurt her. The man *85 went to the cash drawer, took money, and left with approximately $132.
Scott called 911 and the motel's general manager, Teresa Helm. Scott told Helm that she believed Steven Percival, one of Helm's sons, had committed the robbery. Helm is the mother of Steven and Troy Percival. Both Steven and Troy had previously worked at the Comfort Inn. Helm instructed Scott to tell the police everything Scott observed.
The police responded quickly to Scott's call. Scott gave a description of the perpetrator, his clothing, and other details of the event to an officer. Meanwhile, other officers were dispatched to look for the suspect in the area.
Two officers stopped Steven and Troy in a car for running a stop sign near the Comfort Inn at approximately 12:26 a.m., 6 minutes after the robbery. Both Steven and Troy are white males, approximately 6 feet in height, and weigh 150-170 pounds, all characteristics which matched Scott's description. Steven was driving. There was currency on the floorboard at Troy's feet. A ski mask and gloves were on the middle console. Troy was sweaty, swearing, and belligerent.
Officers transported Steven and Troy separately to a Coastal Mart near the Comfort Inn for a show-up. During the transport, Steven told Officer Dean that Troy threw clothing Troy had worn during the robbery out the car window before they had stopped. Steven also stated that Troy had thrown away a meat cleaver Troy used in the robbery. Dean radioed Officer McKee who went to the location described by Steven. McKee found a blue sweatshirt, a white stocking cap, a blue bandana, and a pair of black gloves at the site Steven designated. McKee did not find the meat cleaver.
Troy, in his ride to the Coastal Mart, apparently said, "I've been in the pen." He was wearing an ankle bracelet and stated he was on parole.
Scott identified Troy as the perpetrator at the show-up. She identified him by his pants and his voice. She did not, however, recognize the shirt Troy had on because it was light in color. Steven wore light colored pants. This identification occurred at approximately 1:44 a.m.
*86 Helm came to the Coastal Mart after checking in at the Comfort Inn. She confronted Troy. She would later testify Troy had access to the security door; Steven did not.
Steven and Troy were charged with aggravated robbery in violation of K.S.A. 21-3427. Steven pled guilty to a reduced charge of robbery and was sentenced.
Around February 1, 2002, David Paiva, a maintenance man at the Comfort Inn, found a meat cleaver while picking up trash. He showed it to Scott. She informed him that the police had already found the meat cleaver used in the robbery, so Paiva sharpened the meat cleaver and left it in the motel's maintenance office. Shortly before Troy's preliminary hearing, Helm learned about the discovery of the meat cleaver and turned it over to the police.
At the May 29, 2002, trial, Troy testified that he and Steven had smoked cocaine on the night of the robbery. According to Troy, Steven bought the cocaine with someone else's money and needed to replace the money. Troy contended he dropped Steven off at the motel to earn money from a man by prostitution. According to Troy, Steven paged him shortly thereafter and Troy went back to the Comfort Inn. Troy testified that Steven opened the driver's door, yelled at Troy to let him drive, and threw money at him as Troy was moving into the passenger's seat. Troy denied any knowledge of the robbery until being stopped by the officers.
Steven testified that he and Troy smoked crack on the night of the robbery and wanted more but had no money. Troy then decided to rob the Comfort Inn because he still had a passkey. Steven described Troy getting a blue sweatshirt, blue bandanna, and a meat cleaver from his home before they left for the Comfort Inn. Steven kept the car running while Troy went inside the motel. Several minutes later, Troy came out running. Troy jumped into the backseat and, while Steven was driving, Troy threw some clothes out the window. Troy climbed over the seat and was in the passenger seat when Steven ran the stop sign and was stopped by the officers.
Scott testified and identified Troy as the person who committed the robbery. Helm also testified for the State. She confirmed that Troy had called the motel a week before the robbery and asked *87 the clerk if she would look the other way if he came and took money from the cash drawer.
The jury convicted Troy of aggravated robbery. He was sentenced to 233 months in prison. Troy timely appealed.
Evidence of prior convictions
Troy first claims the trial court erred by allowing the State to question Troy about his past convictions. Troy argues this evidence violated the order in limine and K.S.A. 60-421.
Generally, an appellate court's standard of review regarding a trial court's admission of evidence, subject to exclusionary rules, is abuse of discretion. State v. Jenkins, 272 Kan. 1366, 1378, 39 P.3d 47 (2002). However, this issue also involves interpretation of K.S.A. 60-421. Interpretation of a statute is a question of law, and an appellate court's review is unlimited. An appellate court is not bound by the district court's interpretation of a statute. State v. Maass, 275 Kan. 328, 330, 64 P.3d 382 (2003). In State v. Johnson, 21 Kan. App. 2d 576, 578, 907 P.2d 144, rev. denied 258 Kan. 861 (1995), a similar issue was raised. This court stated: "Because this issue involves an interpretation of K.S.A. 60-421, our standard of review is unlimited."
Prior to trial, Troy filed a motion in limine to suppress evidence of his statements made in the patrol car during transport to the Coastal Mart. These statements were that Troy had "been in the pen" and that he was on parole. At the hearing, defense counsel stated Troy also wanted "to keep out any 60-455 evidence" and "prior criminal activity." The motion was granted.
At trial, Troy referred to Steven as "shady." The prosecutor asked, "Are you saying that you aren't anything like your brother?" Eventually, after much quibbling, the prosecutor asked:
"Q. What's your image?
"A. My image?
"Q. Yeah.
"A. I'm a twenty-four-year-old male . . . getting ready to get married. That's my image.
"Q. You've got some crimes for false statement [or] dishonesty; don't you?
"A. Crimes for some false statements?"
*88 Counsel objected and a bench conference was held. Counsel stated the questioning was a violation of the order in limine. The district court responded:
"COURT: I disagree. If there's he takes the stand and testifies under oath, his credibility can be challenged with crimes of dishonesty and false statement, 60-421.
"[DEFENSE]: I strongly disagree with you, Judge, because I don't believe that you can impeach him with prior bad acts unless, of course, he's saying I've never committed any bad prior acts.
. . . .
"COURT: . . . I agree prior bad acts, but if they are convictions of dishonesties or false statement, they are admissible, and it's proper impeachment."
The court then took a short recess. Immediately after the recess, the prosecutor again began questioning Troy about his convictions of dishonesty and false statement. There was no objection at that time. Troy ultimately disclosed he had a conviction of criminal deprivation of property. The State also asked about a theft, but Troy made no admission to this charge. The prosecutor failed to impeach by inquiring further or by producing any abstracts of conviction. There was no further questioning about Troy's criminal record throughout the trial.
The State initially argues that this issue was not preserved for appeal because Troy failed to make a contemporaneous objection at trial. Although Troy originally objected to the evidence of his prior convictions when the question was first asked by the prosecutor, Troy failed to renew the objection when the questioning continued immediately after the short recess.
K.S.A. 60-404 states that a judgment shall not be reversed by reason of the erroneous admission of evidence "unless there appears of record objection to the evidence timely interposed and so stated as to make clear the specific ground of objection." Here, Troy clearly objected to the evidence of his prior convictions when the question was initially asked by the prosecutor, and the trial court overruled the objection. The trial court took a short recess, and then the same line of questioning was continued by the prosecutor. Although Troy failed to renew his objection after the recess, we believe that he substantially complied with the requirement of *89 K.S.A. 60-404 and that the purpose of the statute has certainly been met in this case. Accordingly, we will review the merits of this issue.
K.S.A. 60-421 states:
"Evidence of the conviction of a witness for a crime not involving dishonesty or false statement shall be inadmissible for the purpose of impairing his or her credibility. If the witness be the accused in a criminal proceeding, no evidence of his or her conviction of a crime shall be admissible for the sole purpose of impairing his or her credibility unless the witness has first introduced evidence admissible solely for the purpose of supporting his or her credibility."
According to the statute, the credibility of a witness can generally be impeached with evidence of a conviction of a crime involving dishonesty or false statement. However, this rule does not apply to the accused in a criminal proceeding. Under K.S.A. 60-421, a criminal defendant can only be questioned about convictions involving dishonesty or false statement if the defendant "has first introduced evidence admissible solely for the purpose of supporting his or her credibility."
The Kansas Supreme Court has "established that a criminal defendant does not place his or her credibility in issue, as contemplated by K.S.A. 60-421, merely by taking the witness stand." See State v. Harris, 215 Kan. 649, 651, 527 P.2d 949 (1974). In State v. Smith, 28 Kan. App. 2d 56, 11 P.3d 520 (2000), the court specifically stated that the previous convictions of dishonesty or false statements are barred when the witness is the defendant in a criminal case "unless the defendant `opens the door' by introducing evidence of his credibility." (Emphasis added.) 28 Kan. App. 2d at 62 (citing State v. Logan, 236 Kan. 79, 83, 689 P.2d 778 [1984]); see also State v. Johnson, 21 Kan. App. 2d 576, 578-79, 907 P.2d 144, rev. denied 258 Kan. 861 (1995) (The court decided the defendant emphasized his truthfulness without any solicitation from the State, so he opened the door to being cross-examined about his previous convictions involving dishonesty or false statements.).
Here, the State argues that Troy "opened the door" to the evidence of his prior convictions by stating that Troy was not like his brother and that he had an image of "a 24-year-old male . . . getting ready to get married." We disagree. Troy's statement did not support his credibility or emphasize his truthfulness. *90 More importantly, the remarks were clearly solicited by the State. We conclude there was no basis for the prosecutor to question Troy about his prior convictions. This evidence was admitted in violation of K.S.A. 60-421 and the court's order in limine.
We must now decide if this error is grounds to reverse Troy's conviction of aggravated robbery. "When reviewing the erroneous admission or exclusion of evidence, the error is harmless if no substantial right of the defendant is involved. [Citation omitted.]" State v. Albright, 271 Kan. 546, 556, 24 P.3d 103 (2001). "`Where the evidence of guilt is of such direct and overwhelming nature that it can be said that evidence erroneously admitted or excluded in violation of a constitutional or statutory right could not have affected the result of the trial, such admission or exclusion is harmless. [Citation omitted.]'" State v. Jamison, 269 Kan. 564, 570, 7 P.3d 1204 (2000). To determine whether a trial error is harmless error or prejudicial error, each case must be scrutinized and viewed in the light of the trial record as a whole, not on each isolated incident viewed by itself. State v. Navarro, 272 Kan. 573, 584, 35 P.3d 802 (2001).
We first consider the fact that the admission of improper evidence in this case was relatively isolated. Troy disclosed that he had a prior conviction of criminal deprivation of property. The prosecutor asked about a theft conviction, but Troy denied this charge. After this limited questioning, the prosecutor moved on to a different issue. The prosecutor did not compound the error by offering to introduce any abstracts of conviction. There was no further questioning about Troy's criminal record during the course of the 3-day trial. Also, the prosecutor never brought out that Troy was on parole at the time of the robbery.
The evidence that Troy at least assisted in the aggravated robbery was overwhelming. Troy was arrested with Steven in the car near the Comfort Inn a few minutes after the robbery. There was currency on the floor board at Troy's feet. A ski mask and gloves were on the middle console. Troy was sweaty, swearing, and belligerent. Scott identified Troy as the perpetrator at the show-up shortly after the robbery. She identified him by his pants and his voice. Although Troy's shirt was different than originally described *91 by Scott, the evidence indicated that Troy had earlier thrown his dark shirt out the car window.
Primarily, Troy was implicated by his own family members. Steven testified that Troy committed the robbery while Steven waited in the car. Although Troy tried to blame the robbery on Steven, the evidence in the case was much stronger against Troy. Helm testified that Troy had access to the security door and Steven did not. Helm also testified that her son, Troy, had discussed robbing the motel a week earlier.
We conclude that the erroneous admission of evidence was harmless. Considering the trial record as a whole, we do not believe that the error affected Troy's substantial rights. We conclude beyond a reasonable doubt that the error did not affect the result of the trial and, accordingly, the error is not grounds for reversal.
Instruction on lesser included offense
Troy next claims the trial court erred by failing to give a jury instruction for simple robbery. Troy was charged with aggravated robbery pursuant to K.S.A. 21-3427, the taking of property by force or threat of bodily harm while "armed with a dangerous weapon, to-wit: a meat cleaver." Troy contends the meat cleaver was used as a tool to disable the security camera and not as a weapon. Thus, Troy claims that the jury should also have been allowed to consider whether he was guilty of simple robbery.
A criminal defendant has a right to an instruction on all lesser included offenses supported by the evidence as long as (1) the evidence, when viewed in the light most favorable to the defendant's theory, would justify a jury verdict in accord with that theory and (2) the evidence at trial does not exclude a theory of guilt on the lesser offense. State v. Williams, 268 Kan. 1, 15, 988 P.2d 722 (1999). An instruction on a lesser included offense is not proper if from the evidence the jury could not reasonably convict the accused of the lesser offense. State v. Robinson, 261 Kan. 865, 883, 934 P.2d 38 (1997).
Whether a robber is "armed with a dangerous weapon" for aggravated robbery is determined from the victim's point of view. An object can be a dangerous weapon if intended by the user to convince *92 the victim that it is a dangerous weapon and the victim reasonably believes it is a dangerous weapon. State v. Colbert, 244 Kan. 422, 425-26, 769 P.2d 1168 (1989).
Scott testified at trial that the perpetrator pushed her back into the counter with a meat cleaver in his hand. He told her to "shut the fuck up . . ." and "that he would kill [her]." Scott felt threatened and pleaded with the perpetrator not to hurt her. She testified she was "freaking out." Clearly, the perpetrator committed robbery with a dangerous weapon. The fact that he also used the meat cleaver to strike the security camera does not transform the weapon into a tool.
There was no evidence to support a verdict for simple robbery. Furthermore, Troy's theory of defense was that Steven committed the robbery. If the jury believed Troy, they would have acquitted him. Troy's argument for a lesser included instruction on simple robbery is without merit.
Accomplice and informant instructions
Next, Troy claims the trial court erred by not giving an accomplice instruction. There was no request at trial for the instruction.
"`When reviewing challenges to jury instructions, we are required to consider all the instructions together, read as a whole, and not to isolate any one instruction. If all the instructions properly and fairly state the law as applied to the facts of the case, and a jury could not reasonably have been misled by them, the instructions do not constitute reversible error even if they are in some way erroneous.' [Citations omitted.]" State v. Peterson, 273 Kan. 217, 221, 42 P.3d 137 (2002).
"Unless the instruction is clearly erroneous, no party may assign as error the giving or failure to give an instruction unless he or she objects thereto before the jury retires to consider its verdict, stating distinctly the matter to which he or she objects and the grounds of his or her objection. Opportunity shall be given to make objections out of the hearing of the jury. K.S.A. 2001 Supp. 22-3414(3). Instructions are clearly erroneous only if the reviewing court is firmly convinced there is a real possibility that the jury would have rendered a different verdict if the error had not occurred. [Citations omitted.]" State v. Davis, 275 Kan. 107, 115, 61 P.3d 701 (2003).
PIK Crim. 3d 52.18 states:
"An accomplice witness is one who testifies that he was involved in the commission of the crime with which the defendant is charged. You should consider with caution the testimony of an accomplice."
*93 Although the trial court did not give the accomplice instruction, the trial court gave the general credibility of a witness instruction at PIK Crim. 3d 52.09, which states:
"It is for you to determine the weight and credit to be given the testimony of each witness. You have a right to use common knowledge and experience in regard to the matter about which a witness has testified."
During cross-examination, defense counsel attempted to impeach Steven as being the person who robbed and threatened Scott. Counsel stressed his past convictions, his drug use, and his earlier threats to others. Counsel also attempted to impeach Steven's credibility with allegations that Steven committed prostitution and that he used another's money to buy drugs on the night of the robbery. Counsel suggested Steven testified against Troy to obtain a deal from the State. After this cross-examination, the jury was certainly aware of Steven's potential lack of credibility. There is no reason to believe that the jury failed to use "common knowledge and experience" in assessing Steven's testimony.
Furthermore, Steven's testimony was corroborated by Scott, the officers, and Helm. All the evidence presented by the State pointed to the conclusion that Troy committed the robbery inside the motel. Even though Steven's testimony was corroborated, it would have been the better practice for the trial court to give the accomplice instruction. See PIK Crim. 3d 52.18, Notes on Use. However, considering the instructions as a whole and the fact that the accomplice instruction was not requested, we conclude that the trial court's failure to give the instruction was not clearly erroneous.
Troy also argues the jury should have received a cautionary instruction regarding the testimony of an informant. There was no request at trial for the instruction.
PIK Crim. 3d 52.18-A states:
"You should consider with caution the testimony of an informant who, in exchange for benefits from the State, acts as an agent for the State in obtaining evidence against a defendant, if that testimony is not supported by other evidence."
This instruction was not indicated because Steven was not an informant. He did not act as an agent for the State. The definition *94 of an informant does not include a person who gives information only after being interviewed by police officers or gives information during the course of an investigation. State v. Abel, 261 Kan. 331, 336, 932 P.2d 952 (1997). Also, Steven's testimony at trial was substantially corroborated.
Admission of meat cleaver
Next, Troy claims the trial court erred by admitting into evidence the meat cleaver found at the Comfort Inn approximately 5 weeks after the robbery. At trial, counsel objected because the blade had been sharpened after it was found and may have looked more menacing than during the robbery. As we previously indicated, the admission of evidence lies within the sound discretion of the trial court. Jenkins, 272 Kan. at 1378.
The State correctly laid the foundation for where and how the meat cleaver was found. Additionally, the jury heard that the blade had been sharpened and why. The fact that the meat cleaver was not discovered until later and that its condition was changed goes to the weight of the evidence, not its admissibility. Physical evidence, unless it is clearly irrelevant, should be admitted for such weight and effect as the jury sees fit to give it. State v. Whitesell, 270 Kan. 259, 277, 13 P.3d 887 (2000). Therefore, the court did not abuse its discretion by admitting the meat cleaver into evidence.
On appeal, Troy argues there was no link to the meat cleaver admitted into evidence and the one used in the robbery, making the evidence irrelevant. This was not the objection at trial. A party may not object at trial to the admission of evidence on one ground and then on appeal argue a different objection. State v. Bryant, 272 Kan. 1204, 1208, 38 P.3d 661 (2002). Clearly, a meat cleaver found on the grounds of the motel after the robbery was not totally coincidental. The trial court did not act arbitrarily, fancifully, or unreasonably by admitting the meat cleaver into evidence.
Aiding and abetting instruction
Next, Troy claims the trial court erred by giving an aiding and *95 abetting instruction. The trial court gave the jury the following instruction found at PIK Crim. 3d 54.05:
"A person who, either before or during its commission, intentionally aids or abets another to commit a crime with intent to promote or assist in its commission is criminally responsible for the crime committed regardless of the extent of the defendant's participation, if any, in the actual commission of the crime."
Troy objected to this instruction at trial. On appeal, he asserts the State did not produce any evidence which indicated Troy may have aided Steven in the robbery. Therefore, Troy argues it was error to include this instruction.
Steven admitted to participating in the robbery and testified Troy planned and actually went into the motel. Troy's description, however, was that he was not involved and Steven tricked him into waiting with the getaway car. The jury could have believed various scenarios given by the evidence. A juror could have concluded that Troy drove Steven to the motel to commit the robbery, waited in the car, and then assisted with the getaway.
"If, from the totality of the evidence, a jury reasonably could conclude that the defendant aided and abetted another in the commission of the crime, then it is appropriate to instruct the jury on aiding and abetting. [Citation omitted.]" State v. Pennington, 254 Kan. 757, 764, 869 P.2d 624 (1994).
The State charged both Steven and Troy with aggravated robbery. Both men were found in the car with the money. The aiding and abetting instruction was included based on law applicable to the facts of this case. The instruction was supported by substantial competent evidence presented in the case.
Allen instruction
Next, Troy claims the trial court erred in giving what is known as an Allen instruction to the jury. An Allen instruction is based upon the holding in Allen v. United States, 164 U.S. 492, 41 L. Ed. 528, 17 S. Ct. 154 (1896). Troy did not object at trial to the court's inclusion of the instruction. As previously stated, we consider this issue based upon a clearly erroneous standard of review.
The instruction given by the trial court was taken verbatim from PIK Crim. 3d 68.12. Troy asserts the Kansas Supreme Court has voiced disapproval of the Allen instruction and has urged caution *96 in its application because of its potential coercive effect. Although there is some truth to this claim, Troy fails to put the Supreme Court's concerns in the proper context.
The potential coercive effect of an Allen instruction in any case depends largely on when the instruction is given to the jury. The instruction is disapproved if given after the jury has begun deliberations, but the instruction is approved if given prior to deliberations. This view was expressed in State v. Struzik, 269 Kan. 95, 109, 5 P.3d 502 (2000), wherein the court stated:
"This court's reasoning for continued disapproval of a deadlock instruction given after the jury has begun deliberations is that such an instruction could be coercive or exert undue pressure on the jury to reach a verdict. One of the primary concerns with an Allen-type instruction has always been its timing. When the instruction is given before jury deliberations, some of the questions as to its coercive effect are removed."
See also State v. Poole, 252 Kan. 108, 114, 843 P.2d 689 (1992) ("If the instruction is given prior to deliberation, `all question with regard to the coercive effect of the same would be removed.'"); State v. Hall, 220 Kan. 712, 719, 556 P.2d 413 (1976) ("The danger in giving an intimidating or coercive instruction arises when a jury has reported its failure to agree on a verdict. Under such circumstances a coercive instruction might induce a jury to return a verdict which they would not otherwise have reached.")
Here, the trial court gave the instruction prior to the jury retiring for deliberations. Therefore, it cannot be said the instruction had such a coercive effect as to require reversal of Troy's conviction. The giving of the Allen instruction was not clearly erroneous.
Sufficiency of the evidence
Next, Troy claims there was insufficient evidence to convict him of aggravated robbery. This argument fails.
"When the sufficiency of the evidence is challenged in a criminal case, the standard of review is whether, after review of all the evidence, viewed in the light most favorable to the prosecution, the appellate court is convinced that a rational factfinder could have found the defendant guilty beyond a reasonable doubt." State v. Beach, 275 Kan. 603, Syl. ¶ 2, 67 P.3d 121 (2003).
*97 We have already recited the sufficiency of the evidence in applying the harmless error standard to the evidence of Troy's prior convictions. Troy would have this court believe his version of the robbery and the events surrounding it, but this court does not reweigh evidence or pass on the credibility of witnesses. State v. Saiz, 269 Kan. 657, 664, 7 P.3d 1214 (2000). The evidence in this case was sufficient to convict Troy of aggravated robbery.
Troy raises other issues including the failure to give a unanimity instruction and cumulative error. We have reviewed the record and, without further elaboration, find these arguments to be without merit. "This, like many criminal trials, was a difficult one, but as we have often said, an accused is entitled to a fair trial, not a perfect one. State v. Chandler, 252 Kan. 797, Syl. ¶ 3, 850 P.2d 803 (1993)." State v. Broyles, 272 Kan. 823, 842, 36 P.3d 259 (2001).
Affirmed.
When Breathing Isn’t Easy: Acute and Chronic Bronchitis
Breathing is fundamental to life, and usually effortless. Each breath delivers fresh oxygen to your bloodstream and helps remove carbon dioxide (a byproduct of breathing) from your body. The air you inhale makes its way to your lungs through your windpipe or trachea, then through a branching network of smaller passageways called bronchi, or bronchial tubes.
But when your bronchial tubes become irritated or inflamed, a condition called bronchitis, breathing becomes more complicated. Bronchitis increases mucus production and tightens muscles that can make breathing difficult and cause coughing, wheezing and chest pain.
Bronchitis comes in two main varieties: acute (short-term) and chronic (ongoing).
Acute Bronchitis
Acute bronchitis, also known as a chest cold, is typically caused by a viral infection, most often the same viruses that cause colds and the flu. Viruses can be spread through the air (if someone carrying the virus coughs in your direction, for instance), and they can be passed through physical contact when an infected person who has not washed well shakes your hand.
Acute bronchitis can also result from a bacterial infection. And your risk of contracting acute bronchitis can increase if you’re exposed to tobacco smoke (even secondhand), dust, fumes and air pollution.
Symptoms and diagnosis: Most of us are familiar with the symptoms of acute bronchitis. It usually lasts up to two weeks and can involve a good deal of coughing and nose-blowing, chills, slight fever, headache, chest soreness, achiness, fatigue, sore throat, watery eyes and wheezing. Your health care provider can usually diagnose acute bronchitis through a physical exam and your description of symptoms. Your doctor may order other tests, such as chest X-rays, blood gas studies, sputum testing, and pulmonary (lung) function tests, to rule out other potential causes (pneumonia, for instance).
Treatment: Acute bronchitis may last one or two weeks. Depending on the cause, your doctor may prescribe antibiotics or, in severe cases, steroids to help reduce the inflammation in your airways. Your health care provider will usually focus on helping you manage your symptoms. Treatments may include:
- Taking cough medicine, pain relievers, and fever reducers such as acetaminophen or ibuprofen.
- Humidifying the air at home
- Avoiding cigarette smoke or stopping smoking
- Drinking plenty of fluids
- Prescription antibiotics
Chronic Bronchitis
Like acute bronchitis, chronic bronchitis is an inflammation of the bronchial tubes. But acute bronchitis is relatively short-lived, while the chronic variety is long-lasting. The main cause of chronic bronchitis, smoking, can constantly irritate and inflame the bronchial tubes. Air pollution or other environmental factors, common in some work settings, may also play a role.
Sometimes chronic bronchitis can occur in combination with other lung problems, including:
- Asthma
- Pulmonary emphysema
- Scarring of the lungs (pulmonary fibrosis)
- Sinusitis
- Tuberculosis
- Upper respiratory infections
Symptoms and diagnosis: The effects of chronic bronchitis can vary, but the most common symptoms include a persistent cough (that sometimes brings up mucus), wheezing and chest discomfort. Additional symptoms may include:
- A bluish tint to fingernails, lips and skin due to lowered oxygen levels
- Swollen feet
- Heart failure
- Shortness of breath with activity
For bronchitis to be diagnosed as chronic, you must have a cough and mucus most days for at least three months a year, for two years in a row. In addition to a physical exam, your health care provider will likely order tests to assess your condition and rule out other problems. These tests may include the following:
- Spirometry: to see how well your lungs are working
- Arterial blood gas: to check the levels of oxygen and carbon dioxide in your blood
- Pulse oximetry: another way to measure oxygen in the blood
- Chest x-rays: to view your lungs and other internal tissues
- CT scan: to gather more detailed images of your internal organs and tissues
Treatment: The treatment of chronic bronchitis will include many of the same symptom-management remedies as those used for acute bronchitis, with the possible addition of:
- Bronchodilators: to help open your airways and clear up mucus
- Steroids: to help reduce inflammation in the bronchial tubes
- Oxygen therapy: to help you breathe easier and increase your body's oxygen supply
- Pulmonary rehabilitation: to help you live more comfortably and stay active
Practice Good Self Care
If you smoke, one of the best steps you can take to lower your risk of bronchitis — and benefit your overall health — is to quit smoking. Ask your doctor or health care provider to recommend programs and provide tips on how to stop.
Also, avoid other lung irritants, such as secondhand smoke, dust and air pollution.
Finally, practice good hygiene. Wash your hands regularly to lower your risk of contracting a bronchial infection from someone else. | https://www.premierhealth.com/your-health/articles/women-wisdom-wellness-/when-breathing-isn-t-easy-acute-and-chronic-bronchitis |
The famed Shirai sisters take on Hiroyo Matsumoto and Mayu Iwatani. This match is bound to be a bout of friendly competition, as Mayu and Io are the current Goddesses Of Stardom Title holders. Still, friendly or not, this match will surely prove the universal truth: blood is thicker than water.
#SupportWomensWrestling is more than a trending topic on Twitter: it's a movement. ClickWrestle is committed to making it easier for fans to support women's wrestling, one match at a time.
The present invention relates to the field of digital scanning systems, and more particularly to a system and method for installing, configuring and operating a digital scanner.
Digital scanners are commonly used to digitize documents for use in home and office computer systems. Although the physical characteristics of scanners vary, the initial setup and basic operation of many scanners is essentially the same. Initially, the user connects the scanner to a computer and installs scanner software that is provided with the scanner on one or more diskettes. The user then launches the scanner software, for example, by clicking an icon that represents the scanner software, by typing a command to invoke the software or, in some scanners, by presenting a document to be scanned to the scan head where the document is sensed and the scanning device signals the computer to invoke the scanner software.
Many popular imaging and document processing software applications are equipped with a TWAIN interface capable of interacting with a TWAIN data source supplied by a hardware vendor. Through selection of the appropriate data source and invocation thereof, a user is able to scan an image directly into a target application program.
When executed, the scanner software typically prompts the user to indicate the type of document to scan (e.g., text or picture). After the user indicates the type of document to scan, the user is then prompted to enter scan parameters for the scanning operation. For example, if a picture is to be scanned, the user is usually prompted to indicate whether the picture is a color image, shaded image (i.e., a grayscale image) or a purely black and white image (sometimes called a line drawing). The user may also be prompted to indicate a desired resolution for the scanned image. If a text scan is indicated by the user, the user may be prompted to enter scan parameters such as the size and quality of the text, whether to maintain the scanned text in columns and so forth.
Thus, for many scanning devices, the user must complete a series of tasks simply to acquire an image. Using at least one embodiment of the present invention, a user may achieve the acquisition of an image by simply hooking up scanner hardware, installing scanner software and pressing a button intuitively placed on the scanner itself.
A method and apparatus for scanning a document are disclosed. A request is received to generate a digitized representation of a document. Previously stored configuration information is inspected to identify an application program associated with the request and a scan setting associated with the application program. The digitized representation of the document is generated according to the scan setting.
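As a rough sketch of that claim (not the patent's actual implementation), the Python below shows a stored-configuration lookup driving a scan; the names (CONFIG, handle_scan_request, digitize, deliver) and the schema are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ScanSetting:
    mode: str            # e.g. "color", "grayscale", "lineart"
    resolution_dpi: int

# Previously stored configuration: maps a request (e.g. a scanner button) to
# the target application program and the scan setting associated with it.
# The schema and names are illustrative; the patent does not define them.
CONFIG = {
    "button_1": ("photo_editor", ScanSetting("color", 300)),
    "button_2": ("ocr_suite", ScanSetting("grayscale", 200)),
}

def digitize(setting: ScanSetting) -> bytes:
    """Placeholder for driver-level acquisition (e.g. via a TWAIN data source)."""
    return f"<{setting.mode} scan @ {setting.resolution_dpi} dpi>".encode()

def deliver(image: bytes, app: str) -> None:
    print(f"routing {len(image)}-byte image to {app}")

def handle_scan_request(request_id: str) -> None:
    """Receive a request, inspect stored configuration to identify the
    application and its scan setting, then scan according to that setting."""
    app, setting = CONFIG[request_id]
    deliver(digitize(setting), app)

handle_scan_request("button_1")  # one button press -> configured scan -> target app
```

The point of the design is that the button press itself carries no parameters; everything about the scan is recovered from configuration stored ahead of time.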
Other features and advantages of the invention will be apparent from the accompanying drawings and from the detailed description that follows below.
Hire Technologies Inc. (OTCMKTS:HIRRF) saw a large increase in short interest in May. As of May 14th, there was short interest totalling 1,300 shares, an increase of 62.5% from the April 29th total of 800 shares. Based on an average trading volume of 0 shares, the short-interest ratio is currently ∞ days.
Shares of Hire Technologies stock opened at $0.34 on Monday. Hire Technologies has a 1 year low of $0.34 and a 1 year high of $0.64.
Separately, Eight Capital assumed coverage on shares of Hire Technologies in a report on Wednesday, April 14th. They set a “buy” rating on the stock.
Hire Technologies Company Profile
Hire Technologies Inc, through its subsidiaries, provides human resources services in Canada. The company offers temporary and permanent placement services. It also acquires information technology, staffing, and HR consulting firms. In addition, the company provides cross-selling opportunities, access to proprietary operational tools, and a centralized back-office system.
The Homes (Fitness for Human Habitation) Bill was presented to the House of Commons for its first reading on the 19th July 2017. Now, the Bill has reached the report stage, which takes place on 26th November 2018. This is where MPs can suggest amendments to the Bill or new parts they think should be added.
Currently, landlords are not required to ensure that their rented properties are free of hazards that could harm the wellbeing of the occupants. An offence is committed only in certain circumstances, where the landlord fails to comply with a local authority's enforcement notice under the Housing Act 2004. As it stands, tenants do not have the ability to take direct legal action against the landlord to resolve a defective property. This has left landlords able to rent out potentially dangerous properties.
The Bill itself aims to achieve the following:
- To increase the quality of private and social rental properties
- Landlords will be under obligation to ensure that the property is in good condition
- Tenants will have the right to take legal action against their Landlord if these conditions are not met
- Applying any Category 1 hazard to section 10 of the 1985 Act
We're pleased to hear the Government is not opposing such a bill and is now driving to have it put through. If it succeeds, we hope to see private and social tenants gaining the right to safe and secure homes for themselves and their families.
What this means for Landlords
With this Bill making progress through the House of Commons, it will then be passed to the House of Lords, and it appears likely that it will be passed and enforced in full.
The responsibility will fall heavily on landlords' shoulders. Tenants will hold more power to take action and ensure that problems in the property are seen to and amended, and landlords will face more serious consequences.
These duties will sit alongside the current responsibility of ensuring a property meets the housing "fitness" standard, which includes:
- Repair
- Stability
- Freedom From Damp
- Internal Arrangement
- Natural Lighting
- Ventilation
- Water Supply
- Drainage and sanitary conveniences
- Facilities for preparation and cooking of food and for the disposal of wastewater
"Category 1" hazards will now also fall under the landlord's responsibility. "Category 1" hazards are those defined in the "Housing Health and Safety Rating System" (HHSRS) created by the Housing Act 2004.
What are Category 1 hazards?
A Category 1 hazard is a hazard that poses a serious threat to the health or safety of people occupying or visiting the property.
Examples include:
- Exposed wiring or overloaded electrical sockets
- Dangerous or broken boiler
- Bedrooms that are very cold
- Leaking roof
- Mould on the walls or ceiling
- Rats or other pest or vermin infestation
- Broken steps at the top of the stairs
- Lack of security due to badly-fitting external doors or problems with locks
There will be a grand total of 29 prescribed matters that the landlord will be responsible for keeping in order. Furthermore, smoke alarms, carbon monoxide detectors and electrical safety testing will also be included.
When a landlord fails to fix a problem that renders the dwelling unfit, they may be at risk of having their landlord licence revoked. Tenants will no longer require a detailed assessment carried out by a local authority; instead, the matter can be taken directly to the courts if no agreement is reached between landlord and tenant.
The landlord will not be obliged to address issues, in respect of section 11 of the Housing Act 2004, such as:
- Any works for which the tenant is liable as a result of his or her failure to use the dwelling in a tenant-like manner, or
- Any works to any item or fixture that the tenant is entitled to remove from the dwelling.
How regular property inspections can help
The new responsibilities of landlords will make regular property inspections, such as inventories, check-ins, interim property visits and check-outs, vital aspects throughout a tenancy.
A landlord or property manager who takes full advantage of these services will have a far smoother process of keeping the property up to scratch. The inventory clerk's word is not final, but it will help raise awareness of issues in the property of the kind mentioned in the new Bill.
How? It’s simple:
- High-quality photographic evidence
- A full and thorough snapshot of the property during the inventory stage will provide the landlord with high-quality photographic proof of the property's condition at that point in time.
Following this up with a check-in will continue the trail of evidence of the property's condition at the time the tenants move in (bear in mind the property should be fit for human habitation by this point!). If there are any issues, the report will highlight them and bring them to the attention of the property manager and/or landlord.
With regular property inspections during the tenancy period, any major changes or damage to the property will be photographically documented too. An interim property inspection also aims to make the landlord or letting agent aware of any developing problems such as mould issues, structural damage or overcrowding in the property.
- An experienced eye
- Our clerks are trained to look out for:
- Damp and Mould
- Evidence of more occupiers
- Damage to gas items
- Electrical appliances and powerpoints
- Smoke Alarms and Carbon Monoxide Detectors
Although this bill has not yet come into force, we feel it is always important that landlords and property managers harness the full benefits of independent inventory reporting. As mentioned above, you will end up with a full snapshot of the property's condition before, during and at the end of the tenancy period. Key action points and issues will be highlighted to you immediately, enabling quick and efficient action to maintain the property you manage or rent out.
BOILER BOIL OUTS
For a safe and efficient start-up of steam generating equipment, it is recommended to remove any organic matter from the internal surfaces. Construction oil and grease compounds may cause foaming and reduce heat transfer on the tubes, which can lead to tube failure. As a preventive measure, boiler boil outs are recommended prior to start-up activities. This is another alkaline method of cleaning steam generating systems.
COMMISSIONING BOILER/CONDENSER CLEANING
Commissioning boiler cleaning is highly recommended for any steam generating system. The cleaning is performed by filling the boiler with a water-based chemical solution, which is heated either by starting the boiler itself or by using steam from an external boiler. While conditions are maintained above the boiling point, the chemical concentration and oil content are monitored for a set period of time, or until the oil content drops to a predetermined level. | http://www.visualchem.in/boiler-condenser-cleaning/
The invention provides an atmosphere lamp with a luminous edge, which comprises a lamp shell; a light guide plate and a luminous lamp source for emitting light towards the inner side of the light guide plate are arranged on the lamp shell, a shading decorative plate is arranged on the outer side of the light guide plate, and the light guide plate comprises a light-transmitting edge part arranged around the edge of the peripheral side of the shading decorative plate. Through the structural arrangement of the shading decorative plate and the light-transmitting edge part, when the light-emitting lamp source projects light into the light guide plate and the lamp is viewed from the outside, the shading decorative plate does not emit light, while the light-transmitting edge part arranged around the edge of the circumferential side of the shading decorative plate emits light. The atmosphere lamp thus forms a unique impression of illumination at the edge of the lamp shell, so that a distinctive lighting atmosphere can be created, a good illumination feeling can be effectively given to a user, and the use requirements of the user are met.
Actuarial fees fall for smaller schemes over 2013
Actuarial fees for small schemes have fallen over 2013 as competition for their business rises, Kim Gubler Consulting (KGC) finds.
In its Annual Actuarial Fee survey, KGC found there was an overall increase in fees for larger schemes.
For schemes of 5,000 lives, the average annual actuarial fee rose from around £25,000 in 2012 to approximately £30,000 this year, although costs did not rise above those seen in 2010, a triennial valuation year.
Bigger schemes, of 10,000 lives, saw a similar increase, from just under £35,000 to just over, although fees did not break the £40,000 mark seen in 2010.
However, fees continued to fall this year among 2,000-life schemes, where a gradual decrease from 2010's average annual actuarial fee of £20,000 continued.
KGC said it is competition among providers that has kept fees below 2010's levels, despite this year being a triennial valuation year.
Costs per member remained highest in smaller schemes, demonstrating the benefits of scale in actuarial services.
KGC analysed the average annual actuarial unit cost per member (UCM) among schemes of different sizes.
It found that in 2013 the UCM for a 2,000-life scheme was 168% higher than for a 10,000-life scheme, while the UCM for a 5,000-life scheme was 67% higher.
However, this is an improvement for smaller schemes on 2012's figures, when the UCM was 187% higher for a 2,000-life scheme, and 50% higher for a 5,000-life scheme, than for a 10,000-life scheme.
Despite average costs falling among smaller schemes, both per member and overall, the gap between the highest and lowest pricing grew over 2013, the survey found.
Among 5,000-life schemes the difference between the highest and lowest fees for triennial valuations increased to £45,000 over this year, which is an 8% increase on 2012's figures.
In 10,000-life schemes, the difference in cost between the highest and lowest increased by 4% over 2013 to £53,000, although this represents a 19% fall on 2011's figures.
KGC research analyst Hayley Mudge said: "While many respondents claimed the competitive nature of the actuarial market was the most important factor in their pricing strategy, the growing gap between the highest and lowest pricing does seem at odds with this opinion.
"KGC looks forward to comparing the results with next year's survey and would like to thank all of the participants for their assistance in providing such comprehensive information."
To gather its data, KGC surveyed 19 actuarial firms on the charges they levy on schemes of 200, 500, 1,000, 2,000, 5,000 and 10,000 lives.
| https://www.professionalpensions.com/news/2318200/actuarial-fees-fall-schemes-2013
Holiday Version of The Paper Chef
Since I am on vacation during the week of July 4th, the Paper Chef is going to a very different schedule. In particular since the first Friday of the month is in fact July 1st and since I will not be able to blog at all (I suspect) for the following week, I am going to make this an extended, leisurely Paper Chef. Ingredient nominations are now open and final ingredients will be picked on Friday but you will have until Monday July 11th at noon to post an entry. Ten days, not three. As a result, if the ingredients happen to be a bit wild or a bit bizarre, we will go with them anyway.
The current ingredient list is:
Red wine, cream, cheddar cheese, quinoa, butter, asparagus and cured, aged ham (country not city), lemongrass, spinach, vinegar, sweet potatoes and marshmallows.
You cannot nominate eggs, buttermilk, honey or dates. Anything else goes – nominate away!
11 Comments
-
I nominate:
Dried Chiles
Citrus
1 food stuff from a convenience store – Canned/bottled/jarred.
An item from a neighbor's garden without consent.
Biggles
-
“An item from a neighbor's garden without consent.”
LOL!
okay, and seriously now, here is my nomination: sausage
-
Dr. B – “Item from a neighbor’s garden without consent” – Already done (They have a plum tree)!
But seriously, Owen, can I nominate dried fruit this time around?
Maybe with the extended deadline I’ll be able to enter… finally!
-
Olives!
-
berries
-
potatoes (not sweet)
-
aubergine/eggplant
thyme
star anise
-
summer squash
blueberries
figs
-
An item from a neighbor's garden without consent, such that you need to sneak about after the sun has gone down.
biggles
-
how about…edible flowers (in the recipe)? not for garnishing. Well, you could use them for garnishing IF they are in the recipe 🙂
-
Shrimp or scallops. | http://www.tomatilla.com/2005/06/holiday-version-of-the-paper-chef/ |
The accreditation is given by the Society of Cardiovascular Patient Care (SCPC), an international not-for-profit organization that focuses on transforming cardiovascular care by assisting facilities in creating communities of excellence that bring together quality, cost and patient satisfaction.
Hospitals that have received SCPC accreditation also emphasize the importance of standardized diagnostic and treatment programs which provide more efficient and effective evaluation as well as more appropriate and rapid treatment of patients with chest pain and other heart attack symptoms. Additionally, they serve as a point of entry into the healthcare system to evaluate and treat other medical problems, and they help to promote a healthier lifestyle in an attempt to reduce the risk factors for heart attack.
To become an Accredited Chest Pain Center, St. Joseph’s engaged in rigorous evaluation by SCPC for its ability to assess, diagnose, and treat patients who may be experiencing a heart attack. To the community, this means that processes are in place that meet strict criteria aimed at: | https://www.dignityhealth.org/arizona/locations/stjosephs/about-us/press-center/press-releases/st-josephs-hospital-and-medical-center-achieves-new-status-as-accredited-chest-pain-center |
Updates in the Treatment of Pulmonary Arterial Hypertension - Episode 9
Charles Burger, MD: Prostaglandins have been implicated in the pathophysiology of pulmonary arterial hypertension, specifically prostacyclin, or prostaglandin I2. It's a very potent vasodilator. It is also what we call an anti-proliferative agent: it keeps the cells that line the pulmonary arterioles and the cells in the walls of the pulmonary arterioles under control, so that they don't act in a way that narrows the lumen and increases pulmonary vascular resistance. The first drug actually approved for pulmonary arterial hypertension was an infusion prostacyclin called epoprostenol, and that's really the only agent for which we have a confirmed survival advantage, because there was no other treatment to compare it to when the original study was done. Subsequent therapies don't have that advantage, because it wasn't ethical to have a placebo-controlled arm.
Even in today’s expert guidelines for patients with severe class IV disease, moving to infusion prostanoids—which include epoprostenol and treprostinil—is the recommended treatment intervention. For patients with less severe disease, there’s an option to use other therapies. And, over time, the ability to deliver the prostanoids, the prostacyclin medications, has evolved from infusion to inhaled options and now oral preparations. So, in that one category of medications that target the prostaglandin pathway, we have oral, inhaled, and infusion options. It’s getting very complex in terms of the decision making about which of these to choose in which patient. And the science just isn’t there quite yet to make firm recommendations.
One should appreciate, however, that if you are on infusion therapy, that clearly is the one that has the documented improvement in survival. Using it in class IV disease is very appropriate. Second, you have zero order kinetics, so there’s a constant level in the bloodstream, and, therefore, treatment effect on the pulmonary circulation as opposed to if you’re delivering it by inhaled or oral where you have peaks and valleys of the medication. And you may not have coverage during nighttime, for example, depending on which of these you choose. So, it’s unknown as to whether inhaled and oral would ultimately be equivalent to an infusion prostanoid. There’s no information that says yes or no. I think the majority of the pulmonary arterial hypertension experts at this point would err on the side of the infusion therapy in a severe patient, and use inhaled or oral in a more moderately ill patient, perhaps in combination with the other classes of medications.
The infusion prostanoids are delivered by one of two routes. One option is to have an indwelling central venous catheter, so it’s delivered intravenously, connected to an infusion pump where the drug is prepared and installed in the pump, and run at a certain dosing rate. A second option is subcutaneous infusion. That applies to treprostinil, and the drug is 100% bioavailable when infused subcutaneously, again, connected to a line and an infusion pump with a dose that’s prepared and run at a particular dosing rate. The intravenous route has, of course, the indwelling line which runs some infection rate, albeit low. And if the line gets infected, obviously the chances of it progressing to a bacteremia or a systemic infection is a very real situation that needs to be monitored by the patient and the clinician. Because, that can be life-threatening if not detected early and treated appropriately.
The subcutaneous infusion does not have the risk of what we call line-related bacteremia, or line-related sepsis, that the intravenous infusion does. But it does create a fair amount of inflammation and pain at the infusion site. There are specific ways of managing that, but every patient who infuses the drug subcutaneously will have something in the way of a side effect at the infusion site, even if it's just a modest amount of redness or swelling. That can range all the way to serious pain or even an abscess underneath the skin. So, there are specific complications of these infusion medications that probably warrant, in almost all cases, that the patient be managed in an accredited pulmonary hypertension center where you have expert clinicians. And, more importantly, coordinators who are available on the phone to be able to respond to the patient when they call, and if they have an indwelling catheter and they're having a fever, to direct them to the physician's office for evaluation. Or if they're infusing it subcutaneously and they're having inflammation or signs of infection that require early evaluation or intervention, then that's done by an expert who has experience in managing these problems rather than in a general medical setting.
If you’re comparing infusion prostanoid therapy to the inhaled preparations, there are several important differences. With the infusion therapies, you have a constant dose that’s being provided systemically, to which the pulmonary arterial circulation and the right heart is exposed. You have a high degree of confidence that the dose that you’re providing, either intravenously or subcutaneously, is absorbed and is meeting your target blood level for action on the disease state. Secondly, it’s zero order kinetics, so there are no peaks and troughs of the medication. The exposure to the medication doesn’t vary whether you’re awake and taking the medication as it would with an inhaled preparation. It’s 24 hours of the same dose, same exposure.
With the inhaled preparations, of course, it requires a delivery system. And the medication is prepared and put in the delivery system, typically a proprietary delivery system or nebulizer in some cases. The medication is then delivered to the respiratory tract, some of which has to be hitting the oral pharynx, or the upper GI tract, or the larger airways, and perhaps not getting into the lower airways where the drug is active. It’s not clear how much might be absorbed systemically versus active locally in the pulmonary arterial circulation. There are peaks and troughs, so you have a high peak level right after you’ve administered it and then the drug is metabolized. And then, there’s no coverage during nighttime, because you aren’t delivering the drug while the patient is asleep. So, there’s a gap where the trough level of the medication is persistent beyond what you would anticipate would be the best for the patient in terms of therapeutic exposure. And then, you have issues potentially of irritating the respiratory tract, causing cough and wheezing that could also affect how much of the drug you’re delivering into the system.
Lastly, the infusion prostanoids really do not have a ceiling on dose. So, if a patient isn’t doing as well as anticipated, you can increase the infusion level of the medication really without a whole lot of concern about having some maximum dose beyond which you cannot prescribe the medication with some rare exceptions. The inhaled preparations of prostanoids have ceilings on the maximum amount of drug you can deliver, which is limited by the preparation of the drug and the delivery system. | https://www.ajmc.com/view/prostacyclins-in-pah-selecting-among-the-agents |
What Is Industrial Pollution?
Last Updated Mar 26, 2020 1:17:41 AM ET
Industrial pollution is the contamination of the environment by businesses, particularly plants and factories, that dump waste products into the air and water. Industrial waste is one of the largest contributors to the global pollution problem endangering people and the environment.
Many dangerous pollutants, by-products of manufacturing, enter the air and water, risking health and lives. Common pollutants include carbon monoxide, formaldehyde, mercury and lead. Waste released into the water systems, including medical waste, kills river and ocean life. Cities are particularly at risk for the direct effects of industrial pollution, but the ultimate results filter down throughout the environment.
| https://www.reference.com/science/industrial-pollution-6a7e87c309600910
Hawaii County government won’t approve short-term vacation rentals where homeowner’s associations prohibit them, the Board of Appeals ruled unanimously Friday in denying an appeal from a Keauhou View Estates property owner.
Alaska resident Lynn Allingham, president of Nortecca Properties Inc., had purchased the foreclosed property in 2016 on an online auction site and rehabilitated it for a vacation rental. She paid $442,334 for the three-bedroom, two-bath home on a quarter-acre lot, county records show.
But when she applied to be grandfathered in after the county began requiring STVRs to be registered, her application was denied on the grounds the homeowner’s association had a restrictive covenant prohibiting rentals of less than 30 days.
Allingham contended the county is not in the position to enforce private entities’ declarations of covenants, conditions and restrictions.
In addition, she said, the subdivision declaration filed with the state Bureau of Conveyances was not referenced in the title report, so she was unaware of it, and she took "umbrage" at the county's assertion that she misrepresented that fact in her application.
Allingham’s attorney Barbara Franklin said the county’s interpretation of a defective rule allows the director to “bootstrap” a rule that says the county can’t invalidate homeowner association restrictions into “something more than it really is.”
“The director stepped outside his authority,” Franklin said. “The county is not Keauhou View Estates. … It is the job of the owner’s association to enforce their own covenant.”
Planning Director Zendo Kern, defending decisions made by the previous administration, said the county code is clear that the county will not invalidate covenants, conditions and restrictions imposed by private property regimes.
“I think it’s pretty clear when you read this code. If you have a CC&R on your property it is honored,” Kern said. “The moment we start overriding CC&Rs, we’re running into very dangerous territory.”
Kern said he took “slight almost offense” to some of Allingham’s arguments, saying the appellant and her attorney were probably smarter than he was, but he had common sense.
“I know we’re on an island in the middle of the Pacific, but I don’t think we’re that country,” Kern said.
Kern said the county denied two other STVR applications in the subdivision for the same reason. The short-term vacation rental law in general has been controversial, he said. There have been more appeals of STVR cases than appeals in the whole prior history of the county, he said.
Deputy Corporation Counsel Jean Campbell said the county would “open up a can of lawsuits” if the STVR was approved, and could also be sued itself. | https://www.kaumakani.com/county-honors-hoa-restrictions-board-of-appeals-upholds-planning-director-on-vacation-rental-issue/ |
Master Student Required (Genetic Engineering/Molecular Biology)
The INM – Leibniz-Institut für Neue Materialien is looking for a MASTER STUDENT (at least 6 months, full-time) who would be interested in doing the thesis in the research group of Bioprogrammable Materials.
Major duties/responsibilities:
- Genetic engineering of stimuli-responsive production of a therapeutic protein in B. subtilis and lactic acid bacteria by designing and incorporating various genetic modules in plasmids and bacterial chromosomes.
- Analysis and optimization of release profiles and bioactivity of the therapeutic protein from the modified bacteria. This could also involve in vitro experiments with mammalian cells.
- Encapsulation of bacteria in hydrogels and analysis of their performance.
- Possible participation in other research activities.
The Bioprogrammable Materials group works at the intersection of Synthetic Biology and Biomaterials. The major focus of the group is on the development of materials with genetically programmed functionalities capable of stimuli-responsive long-term drug release, manipulation of cell behavior, and biosensing. Synthetic Biology employs the use of DNA manipulation tools to engineer genetic circuits that can be used to develop microbes and proteins with smart functionalities. These genetically programmed entities are then incorporated with polymeric matrices to create composite materials with dynamic and smart functionalities for biomedical applications.
Potential candidates should be undertaking a Master’s degree in biotechnology, biochemistry, microbiology or related fields, with practical experience in bacterial genetic engineering. Candidates should be self-motivated with good interpersonal and communication skills in English for working in a multi-national environment.
The INM is an equal-opportunity employer with a certified family-friendly policy. We promote professional opportunities for women and strongly encourage them to apply. Severely disabled applicants with equal qualification and aptitude will be given preferential consideration.
The deadline for submission is December 20th, 2021. The starting date would ideally be some time in November (some flexibility). Interested candidates should submit their application (in PDF format) by email to Dr. Shrikrishnan Sankaran (Group leader; [email protected]) and Marc Blanch Asensio (PhD student; [email protected]). The application should include a motivation letter, CV, academic transcripts and contact details of at least one reference.
More information about our research can be found here. | https://www.leibniz-inm.de/stellenangebot/master-student-required-genetic-engineering-molecular-biology/ |
BCW Food Products Inc. in Dallas is facing $66,900 in fines for three safety violations after a worker's left arm was amputated by an industrial screw conveyor while he was cleaning the inside of a packaging machine, said officials at the Occupational Safety and Health Administration (OSHA).
“This is the second time in less than a year that BCW Food Products has failed to comply with OSHA’s regulations which safeguard lockout and tagout equipment energy sources. These energy sources can easily expose workers to amputation, as they did in this case,” said Stephen Boyd, OSHA’s area director in Dallas. “Had the employer followed OSHA standards, this incident could have been prevented. Employers must take their responsibilities under the law seriously.”
OSHA's Dallas Area Office began its investigation in February at the Denton Drive facility. It cited the employer with one willful violation for failing to ensure lockout or tagout devices were affixed by authorized workers to each of the energy-isolating devices. A willful violation is one committed with intentional, knowing or voluntary disregard for the law's requirements, or with plain indifference to worker safety and health.
The repeat violation was for failing to indicate the identity of the worker who applied the lockout and tagout devices. A repeat violation exists when an employer previously faced citations for the same or a similar violation of a standard, regulation, rule or order at any other facility in federal enforcement states within the last five years. The company faced a similar violation in November 2012.
The serious violation was for failing to train and ensure workers understood the purpose and function of the energy control program; the company also did not ensure workers acquired the knowledge and skills required for the safe application, usage and removal of the energy controls. A serious violation occurs when there is substantial probability that death or serious physical harm could result from a hazard about which the employer knew or should have known.
BCW Food Products, a manufacturing company that specializes in custom mixes, bases and concentrates, has three manufacturing facilities and warehouses in Texas, Arkansas, Louisiana, Colorado, Kansas, Utah and Illinois.
| http://www.isssource.com/safety-alert-food-firm-faces-fines/
GRAFENWOEHR, Germany — As the holidays approach, all DOD personnel should be aware of active travel restrictions and travel alerts in Europe.
You and your family members can receive real-time emergency messages through the USAG Bavaria’s AtHoc Mass Warning and Notification System. The system works through a client on your computer and can send crisis communication messages to your email, home phone, iOS and Android phones, as well as text messages.
Here is the most comprehensive and up-to-date list.
Worldwide
The U.S. Dept. of State has issued a worldwide travel alert in effect until Feb. 24. The travel alert warns that militants with al-Qa’ida, Boko Haram, and other terrorist groups continue to plan terrorist attacks in multiple regions. Travelers should register through the State Department’s Smart Traveler Enrollment Program (STEP), which ensures the State Department can contact you in the event of an emergency.
Italy
On Nov. 18, the U.S. Embassy in Rome issued a security message for U.S. citizens that the following locations have been identified as potential targets of terrorist attacks in Rome and Milan.
Turkey
U.S. European Command travel restrictions to Turkey remain in place for all DoD personnel, including service members, civilians, contractors and family members. Both unofficial and official travel — including ship-to-shore travel from cruise ships — is prohibited. Any exceptions require approval from the first general officer or SES in an individual’s chain of command. | https://www.bavariannews.com/blog/2015/11/25/latest-travel-restrictions-travel-alerts-and-security-messages/ |
Why Middle Eastern Jewish Refugees Are Key to Understanding and Resolving the Israel-Palestine Conflict
27th November 2018 @ 6:00 pm - 7:00 pm
Jews lived continuously in the Middle East and North Africa for almost 3,000 years. But in just 50 years, indigenous communities outside Palestine almost totally disappeared as more than 99 percent of the Jewish population fled. Until the mass exodus of Christians and Yazidis, the post-1948 displacement of more than 850,000 Jewish refugees was the largest movement of non-Muslims from the Arab Middle East and North Africa. Yet, it is denied, falsified or dismissed.
The issue of the Jewish refugees also foreshadowed the flight of other non-Muslim minorities from the region. Does this point to dysfunction in Arab society and an inability to tolerate the ‘other’? And as Jewish refugees from Arab and Muslim lands and their descendants now form over half the Jews of Israel, what implications does this important factor hold for a future peace settlement with the Palestinians?
On the 27th November, The Henry Jackson Society was delighted to host Lyn Julius for a discussion on her latest book, ‘Uprooted: How 3000 Years of Jewish Civilisation in the Arab World Vanished Overnight’. Lyn, the British-born daughter of Iraqi-Jewish refugees, is a journalist specialising in Jews from Arab lands. She founded Harif, the UK Association of Jews from the Middle East and North Africa, in 2005 in order to raise awareness of the history and culture of these Jewish people. The talk centered around the forced exile of the Middle East's Jewish community.
Lyn, who is of Mizrahi origin, began her talk by describing her personal motivations for writing her latest book. Numerous members of her family had resided in Baghdad up until the early 1940s. However, facing significant persecution from the Iraqi government, including the termination of jobs, state-organized and state-sponsored violence, confiscation of private property and regular attacks, her family, like many others, was forced to flee Iraq. Lyn went on to draw attention to the fact that the phenomenon of Jewish expulsion from the Middle East has received little attention in the academic world.
Lyn developed her talk by describing the destruction of Jewish heritage that followed the expulsions, which functioned as a form of cultural erasure. Cemeteries were bulldozed, synagogues were converted to mosques and the Jewish quarters of various cities were looted and burned. The sole exception is Morocco, where, Lyn claimed, the primary motivation for preserving the Jewish quarter is to boost Morocco's ever-important tourism industry.
Reverting to the treatment of Jews preceding their expulsion, Lyn made the important point that co-habitation between Jews and Muslims did not entail co-existence. Jews were a Dhimmi status by which they faced severe religious, legal and theological discrimination. Following the collapse of the Ottoman Empire, this discrimination emanated from two major movements – Arab Nationalism and Islamism. Arab nationalism perceived Jews as alien to the creation of their new state. This being a nationalism of blood and soil which marginalized minorities and excluded non-Muslims. Similarly, Islamism perceived the Jews as a relic from the past, a group defeated by the prophet Muhammed at Khaybar who were too stubborn to convert. Islamism also sought to re-establish the caliphate which clashed with the Zionist movement’s goal of establishing a Jewish State.
Furthermore, the driver of the Islamist movement was the Palestinian leader Haj Amin al-Husseini, the Grand Mufti of Jerusalem. The Mufti openly supported the Nazis, appearing on German radio to propagate anti-Semitism, and demanded a holocaust in the Middle East upon Hitler's victory in Europe. He even helped create a division of Bosnian Muslims for the SS. He was found guilty of war crimes following the conclusion of World War 2, but escaped to Egypt, where he continued to stoke tension and anti-Semitism.
Finally, Lyn explained why her book made such an important contribution to peace. She highlighted the difficulty of achieving peace while the "right of return" remains a key stipulation demanded by Palestinian negotiators. This return, by which hundreds of thousands of Palestinians who lost their homes following the creation of Israel demand a right to return to their properties, would involve the cessation of Israel as a Jewish state and would likely lead to civil war. Yet the fact that exponentially larger numbers of Jews were ethnically cleansed throughout the Middle East suggests that the episode should be considered a "population transfer", an event which occurred in at least five separate states at that time. As such, the "right of return" should not be accepted or allowed to be used as a bargaining chip. Moreover, the mass expulsion of Jews and their successful integration in Israel provides an important precedent, one which many Arab states would do well to follow for their Palestinian minorities.
Alternate Hydro Energy Centre (AHEC) was set up at the Institute by the Ministry of Non-Conventional Energy Sources, Govt. of India in the year 1982 for imparting training and undertaking R&D work in the field of small hydro-power and other renewable/non-conventional energy sources. The Centre has well equipped laboratories viz., Hydro-mechanical Systems, Computer Aided Design and Information Systems, Control Systems and Bio-mass & Eco-systems. The centre handles consultancy and sponsored projects in the field of small hydro power development and other renewable energy sources and organises short-term training courses to train personnel to design, operate and maintain such power generating systems. The centre offers a Master of Technology (M.Tech.) course in Alternate Hydro Energy Systems. Ph. D. programmes are also offered by AHEC in the field of Alternate Hydro Energy.
The Centre undertakes the preparation of detailed project reports, engineering designs, techno-economic analyses, field execution of small hydro projects, refurbishment of power houses, and development of biomass and solar energy systems. Environment and energy auditing of process and allied industries and environmental impact assessment of small hydro projects are also handled by the Centre as diversified activities. AHEC has expertise in GIS-based natural resources mapping and planning.
TRAINING on GENDER and HYDROPOWER (PART II)
In the last five years, ICH has increased its engagement with a range of gender programmes across all regions. ICH has focused on training and strengthening human capital, providing management tools to incorporate these principles into strategic and operational... | https://ich.no/alternate-hydro-energy-centre-ahec/ |
Sydney-based comedian Jack Gow will showcase his unique coming-of-age story over eight nights at the Melbourne Fringe Festival. Jack, described as “a growing force in Australian comedy” (Broadsheet), enjoyed a sold-out debut season last year at the MFF and earned high praise from reviewers and audiences alike. The talented wordsmith’s “wry, gentle storytelling” (Sydney Morning Herald) is characterised by hilarious personal anecdotes imbued with dark pathos.
Exploring the idiosyncrasies of growing up as an outsider in small-town country Australia, Jack’s show touches on identity politics, notions of traditional masculinity and the extreme lengths individuals go to try to belong. His style has been described as “an anxious, apologetic eloquence that takes the everyday and makes it quietly marvellous” (★★★★ The Music), and he has been lauded as “one of the finest emerging comedians in the country” (Sydney Comedy Festival).
A multiple The Moth StorySLAM winner and two-time The Moth Sydney GrandSLAM runner-up, Jack writes regular comedic pieces for ABC News Digital and his stories have appeared on Radio National, the Story Club podcast, and he is a former contributing writer and performer on The Checkout (ABC TV).
| https://www.eventfinda.com.au/2019/just-small-town-boy-by-jack-gow-melbourne-fringe/melbourne/carlton
The invention pertains to digital data processing and, more particularly, to automated methods and apparatus for managing and routing work. The invention has application, by way of non-limiting example, in call service centers and in other applications requiring routing and/or assignment of tasks to resources.
Work can be thought of, by way of non-limiting example, as consisting of individual work items that are subject to a workflow that solves a particular problem. A resource is a person, system or a piece of equipment, by way of further example, that has a capacity for work. Intelligent routing, assignment and/or work management (sometimes referred to below collectively as “routing”) of work items to resources is a critically important problem in today's large and complex business environments.
Regardless of the specifics, routing problems share the following characteristics: there may be a large number of tasks (e.g., many call service center customers waiting in queues for service); workflows are often complex and may not be highly differentiated; available resources typically vary greatly in level of skill, and the more skilled or apt resources are typically scarce. The bottom line in many business applications, at least, is that customers expect fast, efficient service, so routing decisions have to be good and have to be made quickly. They also have to be effectively managed in light of evolving deadlines and circumstances.
Computer based systems for assigning work to resources are well known in the art. Such systems include discrete-parts manufacturing scheduling systems, batch process scheduling systems, optimization systems for matching energy producers with consumers, and call center workflow routers. Simple systems of this type consider one work item at a time; they take the next work item from a queue, search for a resource that is capable of performing the work, and make the assignment.
The advantages of such a simple system are that it is easy to implement, and that it makes fast decisions. The main drawback of such a system is that it can easily make bad decisions. The resource assigned in this simple way may be better utilized if it were assigned to a work item further back in the queue. Thus, more sophisticated systems consider multiple work items at the same time. Assignments are made taking into account costs and capacities of resources, so that the cheapest resources are used whenever possible. This results in significantly better decisions, than those from the most simple system. However, there are still problems that more sophisticated systems in the prior art do not properly address.
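To make the contrast concrete, here is a minimal Python sketch; the work items, skills, cost figures and function names are invented for illustration and are not taken from the patent:

```python
import copy

# Invented sample data: each work item needs one skill; each resource offers
# a set of skills at a cost per item and has limited capacity ("free" slots).
WORK_QUEUE = [
    {"id": "W1", "skill": "billing"},
    {"id": "W2", "skill": "tech"},
]
RESOURCES = [
    {"id": "R1", "skills": {"billing", "tech"}, "cost": 5.0, "free": 1},
    {"id": "R2", "skills": {"billing"}, "cost": 2.0, "free": 1},
]

def simple_route(items: list, resources: list) -> dict:
    """One-at-a-time router: the next item gets the first capable resource."""
    assignment = {}
    for item in items:
        for r in resources:
            if item["skill"] in r["skills"] and r["free"] > 0:
                r["free"] -= 1
                assignment[item["id"]] = r["id"]
                break
    return assignment

def batch_route(items: list, resources: list) -> dict:
    """Consider queued items together and prefer the cheapest capable
    resource, so scarce multi-skilled resources are kept available."""
    assignment = {}
    for item in items:
        capable = [r for r in resources
                   if item["skill"] in r["skills"] and r["free"] > 0]
        if capable:
            best = min(capable, key=lambda r: r["cost"])
            best["free"] -= 1
            assignment[item["id"]] = best["id"]
    return assignment

# The routers mutate capacity, so give each its own copy of the resource pool.
print(simple_route(WORK_QUEUE, copy.deepcopy(RESOURCES)))  # {'W1': 'R1'} - W2 stranded
print(batch_route(WORK_QUEUE, copy.deepcopy(RESOURCES)))   # {'W1': 'R2', 'W2': 'R1'}
```

On this sample data the one-at-a-time router burns the scarce multi-skilled resource R1 on a billing item and strands the tech item, while the cost-aware router keeps R1 free for the work only it can do, which is exactly the failure mode described above.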
An object of this invention is to provide improved methods and systems for routing (and/or assigning) items to resources.
A further object is to provide such methods and systems for managing a pool of assigned items to pursue continued optimizations.
A related object is to provide such methods and systems as facilitate the ongoing management, e.g., reassignment, of items as deadlines and other service levels are passed.
Another object of the invention is to provide such methods and apparatus for service level driven skills-based routing.
Another more particular object of the invention is to provide such methods and systems as achieve optimal assignment of work items to resources.
Still another object of the invention is to provide such methods and systems as can be applied in a range of applications, business-related or otherwise.
Still other objects are to provide such methods and systems as can be implemented on a variety of platforms and without undue expense or resource consumption.
Thousands rush daily to view the Mona Lisa by Leonardo da Vinci and pass judgment on the painting. To a child the painting is simply the picture of a woman who is not smiling, but to most art critics it is one of the greatest portraits of all time. Viewers are able to form their own opinions, resulting in a motley of perspectives on a single item. Chinua Achebe's tragic novel, Things Fall Apart, has a hero who, through the perspectives of various critics, can be interpreted as both a positive and negative leader through his morals and wealth. When the power struggle begins in the novel, a Marxist view of the events shows the reasons why the protagonist is in power and how he loses that power. Through the emotions of the protagonist, a psychoanalyst sees his motivation and the psychological effects power has on his behavior. Viewing the protagonist's leadership through various techniques creates a dual perspective of leadership and how not all leaders are necessarily good leaders.
At first glance, a Marxist critic approves of Okonkwo's power because of his wealth but changes vantage points as the protagonist gradually loses his power. Achebe's tragic hero, Okonkwo, is initially seen as a product of his Ibo culture, raised in the belief that through displays of wealth and grandeur "he would return with a flourish" and gain power (171). On page 32 of Joseph Badaraco's article, "Question of Character", he claims Okonkwo's beliefs and ethics were "shaped by the traditions and practices of his people", which causes him to have narrow views. The environment the protagonist is raised in shapes his leadership qualities, which help his ability to lead the Ibo people successfully based upon their morals. Okonkwo's wealth and power "rested on solid personal achievements" rather than earthly possessions, similarly making him a viable leader in Umuofian society (7). The people of Umuofia live in a society which values "what he (Okonkwo) has done and who he is" rather than money, which shifts the power from the wealthy to those who deserve power (Badaraco 37). Although Okonkwo uses his achievements to gain his leadership role, he uses both his skill and wealth to maintain that power, upholding the Marxist view of the bourgeoisie controlling society. However, Okonkwo's firm grasp on power quickly fades when he "lost the chance to lead his warlike clan" after inadvertently killing a clansman (171). Although Okonkwo believes he can reclaim his status with hard work, eventually he "becomes a leader without followers", alienating himself further from others (Badaraco 32). Once Okonkwo experiences his downfall, he is unable to reclaim his leadership role because he does not possess the main necessity to lead, which is money. As a result of Okonkwo's untimely financial demise, he loses his title of a successful leader because of his loss of economic status and despite his original acts of bravery and valor.
Although to a Marxist critic,... | https://brightkite.com/essay-on/leading-by-example-or-wealth |
For Men in the Cities, Robert Longo set up his camera on the rooftop of his apartment and threw a variety of objects at his friends, capturing their aggressive reactions in these remarkable photographs, created between 1977 and 1983. The jerks and spasms of Longo's subjects, sharply dressed in business attire, have an elegance and grace that is entirely unexpected; protective reactions and exaggerated gestures have been turned into effortless and authentic choreography, a ballet of falls and stumbles, leaps and trips. The movements are fresh and vital, full of energy and life, even while they portray a sense of agony. They document an essence of human motion, boiled down to pure expression. This work later became the inspiration for his iconic Men in the Cities series of large-scale, monochrome charcoal and graphite drawings. | http://www.adamsongallery.com/artists/longo-men-in-the-cities |
action or surrender. The great Goddess Isis told me that there is great strength in surrender, and great power in simplicity. For these times we are in, I am burning away that which is unreal, to unfold the glorious creation that I AM.
I am finding so many symbols and stories to remind me of that truth.
Here is one that ignites my fire.
For the Great Goddess Pythia, daughter of Gaia, may you live and breathe through me.
The Delphic Oracle, who was the High Priestess of the ancient world, has been silent for fifteen hundred years now. Nobody in the ancient world wrote a history of this priesthood, although the philosopher Pythagoras was witness to the oracle. Many knew some of her personal names. Her title was Pythia, Dragon Priestess of the Earth.
Many writers recorded some of her oracles, which were her words. All know her first motto, "Know Thyself". | https://www.marierosesrt.com/single-post/2017/05/29/dragon-goddess |
The statistics Purchases and Sales by Firms is calculated from firms' VAT reports and shows the development in most standard industrial groupings in Denmark. Purchases and sales are calculated for the industries in "Dansk Branchekode DB07".
Covid-19: In March 2020, the calculation of missing values was supplemented with already published figures for e.g. retail turnover and industrial production and revenue, as well as new experimental data sources such as companies' electricity consumption.
The values Total sales and Total purchases are a measure of turnover. The calculation is based on the VAT reports made by the firms. The statistics contains information on Domestic sales, Total sales, Domestic purchases and Total purchases.
The purpose of the statistics is to monitor business trends and economic activity in Denmark through information on purchases and sales as reported by enterprises covered by the Danish VAT system.
In January 2001 the statistics became monthly, and the title changed to Purchases and Sales by Firms. In 2012 the statistics was revised, with new series from January 2009 reflecting adjustments and changes in calculation methods and distributions. Quarterly data are now available for the detailed industry groups. Monthly data are published for the 19-, 36- and 127-groupings, and also for the 'NYT'-grouping, which is the 10-grouping supplemented with more detailed information about the largest industry groups. For industry groupings, please refer to "Dansk Branchekode" (DB07).
The standard industry groupings are based on Danish Industrial Classifications, Dansk Branchekode 2007. 'Dansk Branchekode' is the National version of the EU nomenclature NACE rev. 2.
The statistics covers all standard industrial groupings but lacks information on purchases and sales in those industry groups and activities that are exempt from VAT.
Enterprise: Legal business unit with commercial purpose.
Firms.
Enterprises with annual turnover of more than DKK 50.000 or enterprises who have voluntarily been registered for VAT.
Denmark excluding the Faroe Islands and Greenland. The statistics also covers firms without a Danish address if the Danish tax authority "SKAT" has given the firm an "SE" number.
2009-
Not relevant for this statistics.
Mio. DKK.
Calendar month.
Monthly.
Section 6 of the Act on Statistics Denmark, as subsequently amended by Act no. 599 of 22 June 2000.
Council Regulation (EC) No 1165/98 of 19 May 1998 concerning short-term statistics.
There is no response burden as the data are supplied by the Customs and Tax Authorities.
However, in the period 2001 to 2011, Statistics Denmark occasionally carried out an annual survey among some 260 joint declarers (units reporting for two or more corporations) in order to ascertain the percentage share of each corporation covered by a joint declaration; participation in this survey was voluntary. From 2012 this split-up has been made on the basis of administrative information, and the response burden is zero.
For detailed information on concepts and definitions relating to the Danish VAT system please refer to the homepage of the Danish tax authority SKAT. | https://www.dst.dk/en/Statistik/dokumentation/documentationofstatistics/purchases-and-sales-by-enterprises/statistical-presentation |
The invention relates to a method for measuring the dynamic elastic modulus E and damping ratio ξ of wood and wood-based composite materials. An acceleration sensor converts the mechanical quantity of interest (acceleration) into electrical signals; after filtering and amplification by a signal-conditioning instrument, the data are collected to measure the fundamental frequency f1 of a cantilever-beam specimen carrying a concentrated mass. The dynamic elastic modulus is then computed from the defined relation between the material's elastic modulus and the fundamental frequency f1 read from the frequency-domain plot. The amplitudes A1, A2, ..., An are read from the attenuating oscillation curve in the time-domain plot, from which the logarithmic decrement δ and the damping ratio ξ are computed. The invention has the advantages that the actual dynamic behaviour of wood and wood-based composites is objectively reflected; the dynamic elastic modulus and damping ratio are obtained simultaneously; the measuring time is short and the data are complete, accurate and reliable; and the fundamental frequency f1 and damping characteristics can be obtained from the frequency-domain and time-domain plots of the measured samples for the relevant analysis. | |
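As an illustration of the time-domain step described above, the standard textbook relations for the logarithmic decrement and damping ratio, plus a cantilever-modulus formula that assumes a massless beam with a concentrated tip mass, can be sketched as follows. This is our own example, not code or formulas taken from the patent itself.

```python
import math

def log_decrement(amplitudes):
    """delta = (1/n) * ln(A1 / A_(n+1)) from n+1 successive peak amplitudes."""
    n = len(amplitudes) - 1
    return math.log(amplitudes[0] / amplitudes[-1]) / n

def damping_ratio(delta):
    """xi = delta / sqrt(4*pi^2 + delta^2)."""
    return delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)

def dynamic_modulus(f1, tip_mass, length, inertia):
    """E from the fundamental frequency of an idealized cantilever with a
    concentrated tip mass (beam mass neglected):
    f1 = (1/(2*pi)) * sqrt(3*E*I / (m*L^3))  =>  E = (2*pi*f1)^2 * m * L^3 / (3*I)."""
    return (2.0 * math.pi * f1) ** 2 * tip_mass * length ** 3 / (3.0 * inertia)
```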
PÄS Gallery presents Artifacts and Rituals, a two-person sculpture show featuring the work of Jon St. Amant and Evan Everest.
St. Amant’s work reveals the two major religious factions in his allegorical cartoon ninja series, “Nowhereland”. Identical in philosophy but differing in aesthetics, the two faiths, Green Mountianity and Red Treeism, clash in a satirical propaganda war.
Everest’s work references disparate concepts, philosophies, and spiritual systems to create pseudo-ritualistic drawings, paintings, and sculptures that are reminiscent of a past or future civilization whose rites of passage, rituals and customs are at once vaguely familiar and impenetrable.
The opening will take place during the Downtown Fullerton Art Walk on Friday, May 4th, and will run until May 26, 2012.
| http://www.2pas.org/blog/2012/04/artifacts-and-rituals-sculpture-show/ |
Background {#Sec1}
==========
Advances in genomic technologies and the availability of single nucleotide polymorphism (SNP) markers have enabled genome-wide studies of the effect of selection in cattle \[[@CR1],[@CR2]\]. Selection signals that result from environmental or anthropogenic pressures help us understand the processes that have led to breed formation. These studies are usually conducted with a "top-down" approach \[[@CR3]\], from genotype to phenotype, whereby genomic data are statistically analysed to detect traces/marks/signs of directional selection. In analyses that aim at identifying selection signatures, the phenotype is considered in its broadest sense: breed, production aptitude or even adaptation to a specific environment. This approach holds the potential to investigate traits that are very expensive, difficult and sometimes impossible to study with classical GWAS (genome-wide association study) approaches, such as tolerance to extreme climates or various feeding and husbandry systems, resilience to diseases, etc. Therefore, results from these studies are complementary to those from GWAS for investigating the molecular mechanisms that underlie important biological processes \[[@CR4]\]. Many methods have been proposed to scan for selection signatures at the genomic level \[[@CR5]\] by analysing either within- or across-breeds patterns of diversity by comparing allele or haplotype frequencies and sizes, alleles that are segregating or fixed in populations, and to preferentially detect recent or ancient selection events \[[@CR5]-[@CR7]\]. Different methods have different sensitivities and robustness, *e.g.* they may be influenced to a different extent by marker ascertainment bias and uneven distribution of recombination hotspots along the genome.
Extended haplotype homozygosity (EHH), a method that identifies long-range haplotypes, was developed by Sabeti et al. \[[@CR8]\] for applications in human genetics and has been applied to many animal species, including cattle \[[@CR2],[@CR9],[@CR10]\]. Under a neutral evolution model, changes in allele frequencies are assumed to be driven only by genetic drift. In this scenario, a new variant will require many generations to reach a high frequency in the population, and the surrounding linkage disequilibrium (LD) will decay due to recombination events \[[@CR11]\]. Conversely, in the case of positive selection, a rapid rise in frequency of a beneficial mutation in a relatively few generations will preserve the original haplotype structure (core haplotype), since the number of recombination events would be limited. Therefore, based on EHH, a positive selection signature is defined as a region characterized by strong and long-range LD and having an allele within an uncommonly high frequency haplotype.
The EHH method detects genomic regions that are candidates for having undergone recent selection and, unlike integrated haplotype score (iHS) \[[@CR12]\], does not require the definition of ancestral alleles. In addition, it is suited to the analysis of SNP data, because it is less sensitive to ascertainment bias than other methods \[[@CR4]\]. However, EHH is likely to generate a large number of false positive and false negative results, due to heterogeneous recombination rates along the genome \[[@CR2]\]. An additional drawback that is shared by all selection signature methods, including EHH, is the challenge of robust inference, *e.g.* the ability to distinguish between true and spurious signals \[[@CR13]\].
To partially account for these limitations, Sabeti et al. \[[@CR8]\] developed the relative extended haplotype homozygosity (rEHH) method, which applies an empirical approach to assess the significance of signals. The rEHH of a core haplotype (*i.e.* short region in strong LD along the genome) is compared with the EHH value of other haplotypes at the same locus of the core haplotype, using these as a control for local variation in recombination rates. Therefore, it only identifies genomic regions, which carry variants under selection that are still segregating in the population. Although EHH and rEHH methods were developed for human population studies, they have been successfully applied to livestock species, such as pig \[[@CR14]\] and cattle \[[@CR2]\].
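To make the statistic concrete, the following is a minimal sketch of an EHH computation from phased haplotypes; the data layout and function are our own illustration, not the implementation used by Sweep v1.1. rEHH then follows as the EHH of the core haplotype of interest relative to the EHH of the other haplotypes at the same locus.

```python
from itertools import combinations

def ehh(haplotypes, core, end):
    """EHH at marker index `end` (exclusive), extending downstream of the core.

    haplotypes : list of tuples of alleles, one tuple per phased chromosome
    core       : slice object covering the core-haplotype markers
    end        : marker index bounding the extended interval
    """
    # Group chromosomes by the core haplotype they carry
    groups = {}
    for h in haplotypes:
        groups.setdefault(h[core], []).append(h)

    extended = slice(core.start, end)  # for upstream EHH, move core.start instead
    result = {}
    for allele, chroms in groups.items():
        pairs = list(combinations(chroms, 2))
        if not pairs:
            result[allele] = 0.0
            continue
        # Fraction of chromosome pairs identical over the whole extended interval
        identical = sum(a[extended] == b[extended] for a, b in pairs)
        result[allele] = identical / len(pairs)
    return result
```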
After domestication, which occurred about 10 000 years ago in the Fertile Crescent, taurine cattle colonized Europe and Africa and were selected to satisfy different human needs \[[@CR15]\]. During the last century, anthropogenic pressure has led to the formation of hundreds of specialized breeds that are adapted to different environmental conditions and linked to local traditions, constituting a gene pool relevant for conservation \[[@CR16]\]. Some of these breeds have experienced strong artificial selection for dairy, beef, or both production specializations \[[@CR17]\]. The present study uses the rEHH method to identify signals of recent directional selection in dairy and beef production, using five Italian dairy, beef and dual purpose cattle breeds. We focused on significant core haplotypes that are shared by breeds selected for the same production type. Finally, we identified positional candidate genes within the genomic regions under selection and investigated their biological role.
Methods {#Sec2}
=======
Animals sampled and genotyping {#Sec3}
------------------------------
A total of 4311 bulls from five Italian dairy, beef, and dual purpose breeds were genotyped with the Illumina BovineSNP50 BeadChip v.1 (Illumina, San Diego, CA), by combining genotyping efforts of two Italian projects ("SelMol" and "Prozoo"). The dataset included 101 replicates and 773 sire-son pairs, used for downstream quality checking of the data produced. The genotypes of 2954 dairy (2179 Italian Holstein and 775 Italian Brown), 864 beef (485 Marchigiana and 379 Piedmontese) and 493 dual purpose (Italian Simmental) bulls were available. Data quality control (QC) was performed in two steps: first on animals, independently in each breed, by applying the same filters and thresholds, and then on markers, across all individuals in the dataset. The first step excluded individuals with unexpectedly high (≥0.2%) Mendelian errors for father-son pairs and individuals with low call rates (≤95%). The second step excluded: (i) SNPs with more than 2.5% missing values in the whole dataset or completely missing in one breed; (ii) SNPs with a minor allele frequency less than 5%; and (iii) SNPs that were located on the sex chromosomes or for which chromosome assignment or physical position was lacking.
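The text does not state which software applied these filters. Purely as an illustration, the individual- and marker-level thresholds can be expressed on a 0/1/2 genotype matrix as follows; this is our own sketch, and real pipelines would typically use dedicated tools.

```python
import numpy as np

def qc_filter(geno, ind_call_min=0.95, snp_miss_max=0.025, maf_min=0.05):
    """Two-step QC as described in the text.

    geno: (individuals x SNPs) matrix of 0/1/2 allele counts,
    with np.nan marking missing calls.
    """
    miss = np.isnan(geno)
    # Step 1: drop individuals with call rate below 95%
    keep_ind = miss.mean(axis=1) <= 1.0 - ind_call_min
    geno, miss = geno[keep_ind], miss[keep_ind]
    # Step 2: drop SNPs with too many missing calls or MAF below 5%
    freq = np.nanmean(geno, axis=0) / 2.0          # allele frequency per SNP
    maf = np.minimum(freq, 1.0 - freq)
    keep_snp = (miss.mean(axis=0) <= snp_miss_max) & (maf >= maf_min)
    return geno[:, keep_snp], keep_ind, keep_snp
```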
Estimation of rEHH {#Sec4}
------------------
Haplotypes were obtained by fastPHASE using the default options \[[@CR18]\], and runs were performed separately by breed and chromosome. Pedigree information for all bulls was provided by breed associations, and was used to filter out direct relatives (in father-son pairs, the son was maintained in the dataset and the father removed) and over-represented families (a maximum of five randomly chosen individuals per half-sib family was allowed). The final dataset containing these "less-related" animals is referred to as the "non-redundant" dataset and was used to calculate the within-breed pair-wise LD. The r^2^ statistic for all pairs of markers was calculated using PLINK v.1.0.7 \[[@CR19]\]. The decay of LD was estimated by averaging r^2^ values as a function of marker distance, up to 1 Mb.
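As a sketch of this LD-decay summary, pairwise r^2^ values (e.g. from a PLINK pairwise LD report) can be averaged within distance bins up to 1 Mb. The 50-kb bin width below is our own choice for illustration; the paper does not specify one.

```python
import numpy as np

def ld_decay(r2, pos_a, pos_b, max_dist=1_000_000, bin_size=50_000):
    """Average pairwise r2 as a function of inter-marker distance (bp)."""
    r2 = np.asarray(r2, dtype=float)
    dist = np.abs(np.asarray(pos_b) - np.asarray(pos_a))
    keep = dist <= max_dist
    bins = dist[keep] // bin_size
    r2 = r2[keep]
    # Mean r2 per distance bin, keyed by the bin's lower bound in bp
    return {int(b) * bin_size: float(r2[bins == b].mean()) for b in np.unique(bins)}
```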
To test if population structure influenced rEHH detection, we repeated the whole Italian Holstein dataset analysis (*i.e.* the "redundant" dataset comprising father-son pairs and all available half-sibs per family) and focused on genes or gene clusters that are well known to be under recent selection in cattle (*i.e.* "control regions"). In particular, we focused on the *casein* gene cluster, the polled locus and on two coat colour genes (*MC1R* and *KIT* \[[@CR2],[@CR13]\]).
EHH and rEHH were calculated by Sweep v.1.1 \[[@CR8]\]. Some default program settings had to be modified to adapt the analysis to the bovine genome. Specifically, local recombination rates between SNPs were approximated to 1 cM per Mb. EHH and rEHH calculations were performed by breed and chromosome, using automatic haplotype core selection with default options, *i.e.* considering the longest non-overlapping haplotype cores and limiting haplotype cores to at least three and no more than 20 SNPs, as in Qanbari et al. \[[@CR2]\]. To set an (empirical) rEHH significance threshold, we first split rEHH values into 20 bins with a frequency range of 5% each, then log-transformed within-bin values to achieve normality, and finally considered as significant those core haplotypes with a *p*-value below 0.05. Although EHH and rEHH values were obtained for all core haplotypes, only those with a frequency greater than 25% were retained for further analyses.
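The empirical-threshold procedure can be sketched as follows; treating the standardized within-bin log(rEHH) values as approximately normal and taking the upper-tail probability is our reading of the description above, not code from Sweep v1.1.

```python
import numpy as np
from scipy import stats

def rehh_pvalues(rehh, freq, n_bins=20):
    """p-values for rEHH scores, binned by core haplotype frequency."""
    rehh = np.asarray(rehh, dtype=float)
    freq = np.asarray(freq, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    which = np.digitize(freq, edges[1:-1])      # frequency bin per haplotype
    pvals = np.full(rehh.shape, np.nan)
    for b in range(n_bins):
        idx = which == b
        if idx.sum() < 2:
            continue
        logs = np.log(rehh[idx])                # log-transform for normality
        z = (logs - logs.mean()) / logs.std(ddof=1)
        pvals[idx] = stats.norm.sf(z)           # one-sided: unusually high rEHH
    return pvals
```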
Breed grouping according to production type {#Sec5}
-------------------------------------------
Regions under putative selection for dairy and beef production were identified from significant core haplotypes that shared one or more SNPs in at least two breeds with the same production type. The dual purpose Italian Simmental was included in both dairy (Italian Holstein and Italian Brown) and beef breeds (Piedmontese and Marchigiana), since this breed potentially possesses haplotypes that have been selected for both production types. All downstream analyses were performed separately for the dairy and beef breeds.
Detection and annotation of candidate genes {#Sec6}
-------------------------------------------
The genomic coordinates (in bp) of the regions shared by dairy or by beef breeds were used as inputs to retrieve gene information and annotation from the Biomart web interface (<http://www.ensembl.org/biomart/martview>). The resulting gene set was then used as input for a canonical pathway analysis by examining the functional relationships among the resulting genes using Ingenuity Pathway Analysis tool version 8.0 (IPA; Ingenuity® Systems, Inc, Redwood City, CA; <http://www.ingenuity.com>), coupled with a detailed examination of the literature. IPA operates with a proprietary knowledge database, providing pathway analysis for several species, including cattle. For IPA analysis, Fisher's exact test following a Benjamini and Hochberg correction for multiple-testing was used to estimate the significance of each biological function.
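IPA itself is proprietary, but the statistic described (Fisher's exact test with a Benjamini-Hochberg correction) can be reproduced for any gene-set collection. The sketch below assumes hypothetical candidate-gene and pathway lists; it is not the IPA implementation.

```python
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

def pathway_enrichment(candidates, pathways, background_size):
    """BH-corrected Fisher's exact test for each pathway.

    candidates     : iterable of candidate gene symbols
    pathways       : dict mapping pathway name -> iterable of member genes
    background_size: total number of annotated genes considered
    """
    cand = set(candidates)
    names, pvals = [], []
    for name, members in pathways.items():
        members = set(members)
        in_path = len(cand & members)           # candidates inside the pathway
        out_path = len(cand) - in_path          # candidates outside it
        rest_path = len(members) - in_path      # pathway genes not selected
        rest = background_size - in_path - out_path - rest_path
        _, p = fisher_exact([[in_path, out_path], [rest_path, rest]],
                            alternative="greater")
        names.append(name)
        pvals.append(p)
    _, fdr, _, _ = multipletests(pvals, method="fdr_bh")
    return dict(zip(names, fdr))
```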
Results {#Sec7}
=======
Quality control of the dataset {#Sec8}
------------------------------
Reproducibility observed from analysis of 101 replicates in the whole dataset was greater than 99.8%. After the two quality control steps, 105 individuals and 9730 SNPs were removed. After phasing, 1292 additional individuals were removed to reduce the large number of sib-families present in the redundant dataset. The final dataset contained 44 271 SNPs and 1132, 514, 393, 410 and 364 individuals from Italian Holstein, Italian Brown, Italian Simmental, Marchigiana and Piedmontese breeds, respectively (Table [1](#Tab1){ref-type="table"}).

Table 1 **Number of animals genotyped before and after quality control**

| Breed | Total genotyped | ED - 5% misAN | ED - REPL | ED - MEND | Cleaned |
|-------|-----------------|---------------|-----------|-----------|---------|
| HOL   | 2179            | 40            | 31        | 5         | 2093    |
| BRW   | 775             | 6             | 16        | 4         | 749     |
| SIM   | 493             | 6             | 6         | 2         | 479     |
| MAR   | 485             | 37            | 38        | -         | 410     |
| PIE   | 379             | 5             | 10        | -         | 364     |

Total number of animals genotyped and number of animals removed after quality control analysis; HOL = Holstein, BRW = Italian Brown, SIM = Simmental, MAR = Marchigiana, PIE = Piedmontese; ED - 5% misAN = number of animals excluded with call rates < 95%; ED - REPL = number of animals excluded because they are replicates; ED - MEND = number of animals excluded with Mendelian errors > 0.2%.
Assessing the effect of population structure in control regions {#Sec9}
---------------------------------------------------------------
Comparison of rEHH at the four selected control regions (*casein* gene cluster, polled locus, *MC1R* and *KIT* genes) in the redundant versus the non-redundant dataset indicated that population structure has an influence on the detection of selection sweeps. In fact, in the redundant dataset, Sweep v1.1 detected only one significant core haplotype, at the *casein* gene cluster, while in the non-redundant dataset significant haplotypes were also found at the polled locus and at the *MC1R* gene. No significant signal was detected at the *KIT* gene in either dataset (Table [2](#Tab2){ref-type="table"}). Plots of EHH vs. distance for the two most frequent haplotypes around the genes with significant rEHH (*casein* gene cluster, polled locus and *MC1R*) are in Figure [1](#Fig1){ref-type="fig"}. All subsequent analyses were conducted on the non-redundant dataset which, although smaller, proved more informative than the entire dataset.

Table 2 **Comparison of rEHH signals in candidate regions in the redundant and non-redundant datasets**

| Candidate region | BTA | Pos (bp) | Core haplotype range | CH Frq (redundant) | rEHH -log(*P*) up/down^1^ (redundant) | CH Frq (non-redundant) | rEHH -log(*P*) up/down^1^ (non-redundant) |
|---|---|---|---|---|---|---|---|
| Polled locus | 1 | 1981154 | 1897418-1981154 | H1: 0.79 | 0.93/0.37 | H1: 0.78 | 1.67\*/1.74\* |
| *MC1R* | 18 | 13657912 | 13317720-14007505 | H1: 0.69 | 1.20/0.96 | H1: 0.69 | 0.77/1.67\* |
| KIT_BOVIN | 6 | 72821175 | 72504921-72821175 | H1: 0.54 | 0.30/0.30 | H1: 0.54 | 0.33/0.48 |
| Casein cluster | 6 | 88427760 | 88350095-88452829 | H1: 0.47 | 0.94/1.39\* | H1: 0.46 | 1.48\*/1.33\* |
| Casein cluster | 6 | 88427760 | 88350095-88452829 | H2: 0.32 | 0.02/0.03 | H2: 0.33 | 0.02/0.03 |

rEHH in candidate regions in the datasets uncorrected (redundant) and corrected (non-redundant) for population structure; BTA = *Bos taurus* autosome; CH Frq = core haplotype frequency; ^1^up-stream (left) and down-stream (right) of the core haplotype; \*significant rEHH (*p* < 0.05).

Figure 1 **EHH decay over distance (1) and bifurcation plots (2) in the Italian Holstein non-redundant dataset.** **(a.1)**, **(b.1)** and **(c.1)** show the decay of haplotype homozygosity as a function of distance for the two most frequent core haplotypes. **(a.2)**, **(b.2)** and **(c.2)** show haplotype bifurcation diagrams for the two most frequent core haplotypes at the three control regions with significant rEHH in our study, i.e. **(a)** the polled locus, **(b)** the *MC1R* gene and **(c)** the *casein* gene cluster.
Detection of selection signatures {#Sec10}
---------------------------------
In total, 17 363, 17 801, 14 837, 13 814, and 12 747 core haplotypes with a frequency greater than 25% were detected in the Italian Holstein, Italian Brown, Italian Simmental, Marchigiana and Piedmontese breeds, respectively. The genome-wide distributions of *p* values for rEHH for each breed are in Additional file [1](#MOESM1){ref-type="media"} \[See Additional file [1](#MOESM1){ref-type="media"}: Figures S1, S2, S3, S4 and S5\]. In total 838, 866, 740, 692 and 613 core haplotypes were found to be significant (*p* values ≤ 0.05) in the aforementioned breeds. Table [3](#Tab3){ref-type="table"} shows the distribution of total and significant core haplotypes per chromosome and breed.

Table 3 **Distribution of core haplotypes**

| BTA | HOL Hap | HOL Sign. | BRW Hap | BRW Sign. | SIM Hap | SIM Sign. | MAR Hap | MAR Sign. | PIE Hap | PIE Sign. |
|-----|---------|-----------|---------|-----------|---------|-----------|---------|-----------|---------|-----------|
| 1 | 1238 | 66 | 1228 | 68 | 1062 | 49 | 988 | 51 | 907 | 46 |
| 2 | 998 | 47 | 1040 | 52 | 830 | 44 | 822 | 42 | 735 | 33 |
| 3 | 913 | 48 | 1039 | 62 | 780 | 46 | 721 | 34 | 702 | 34 |
| 4 | 869 | 46 | 868 | 50 | 786 | 41 | 725 | 39 | 697 | 39 |
| 5 | 696 | 32 | 680 | 29 | 619 | 30 | 543 | 27 | 488 | 22 |
| 6 | 891 | 51 | 928 | 40 | 820 | 45 | 770 | 38 | 729 | 42 |
| 7 | 761 | 37 | 807 | 46 | 631 | 39 | 624 | 33 | 661 | 27 |
| 8 | 835 | 39 | 863 | 37 | 735 | 35 | 710 | 32 | 637 | 31 |
| 9 | 738 | 37 | 727 | 41 | 649 | 37 | 532 | 22 | 575 | 30 |
| 10 | 685 | 36 | 722 | 34 | 577 | 25 | 620 | 26 | 498 | 25 |
| 11 | 816 | 39 | 758 | 34 | 686 | 30 | 637 | 33 | 586 | 25 |
| 12 | 486 | 24 | 511 | 18 | 477 | 21 | 437 | 25 | 405 | 22 |
| 13 | 675 | 32 | 602 | 19 | 477 | 18 | 539 | 35 | 424 | 19 |
| 14 | 554 | 17 | 563 | 25 | 531 | 29 | 423 | 22 | 402 | 19 |
| 15 | 571 | 31 | 581 | 29 | 468 | 23 | 446 | 13 | 399 | 19 |
| 16 | 535 | 28 | 602 | 29 | 504 | 25 | 444 | 19 | 380 | 15 |
| 17 | 524 | 27 | 585 | 29 | 540 | 31 | 429 | 22 | 389 | 16 |
| 18 | 413 | 18 | 511 | 19 | 356 | 14 | 346 | 10 | 252 | 12 |
| 19 | 424 | 22 | 436 | 23 | 349 | 16 | 310 | 18 | 295 | 15 |
| 20 | 578 | 35 | 522 | 33 | 467 | 23 | 387 | 23 | 384 | 17 |
| 21 | 458 | 23 | 478 | 30 | 356 | 18 | 315 | 21 | 312 | 13 |
| 22 | 444 | 24 | 451 | 19 | 349 | 21 | 355 | 18 | 316 | 13 |
| 23 | 325 | 13 | 315 | 16 | 198 | 10 | 186 | 9 | 182 | 8 |
| 24 | 408 | 11 | 449 | 20 | 396 | 17 | 299 | 23 | 306 | 16 |
| 25 | 273 | 12 | 271 | 11 | 205 | 10 | 246 | 12 | 214 | 6 |
| 26 | 379 | 7 | 375 | 14 | 305 | 12 | 275 | 15 | 290 | 16 |
| 27 | 284 | 12 | 281 | 15 | 235 | 11 | 268 | 12 | 190 | 10 |
| 28 | 277 | 9 | 298 | 10 | 201 | 10 | 202 | 9 | 183 | 11 |
| 29 | 315 | 15 | 310 | 13 | 248 | 10 | 215 | 9 | 209 | 12 |
| TOT | 17363 | 838 | 17801 | 866 | 14837 | 740 | 13814 | 692 | 12747 | 613 |

Distribution of total and significant core haplotypes (up- and down-stream) per chromosome and breed; BTA = *Bos taurus* autosomes; Hap = number of detected core haplotypes with a frequency in the breed higher than 25%; Sign. = number of significant core haplotypes at *p* ≤ 0.05; HOL = Italian Holstein; BRW = Italian Brown; SIM = Italian Simmental; MAR = Marchigiana; PIE = Piedmontese.
Comparison to previously reported data {#Sec11}
--------------------------------------
A number of studies have searched for selection sweeps in Holstein \[[@CR2]\], Brown \[[@CR20]\] and Simmental \[[@CR21]\] cattle. Since different methods are expected to identify different signatures, comparison with previous results is limited to those using the same method and breed(s) as in our study. There is currently only one study that reported rEHH results in (German) Holstein-Friesian cattle \[[@CR2]\]. The number of core haplotypes found in our study in the candidate regions was lower than that in Qanbari et al. \[[@CR2]\] (Table [4](#Tab4){ref-type="table"}). For Holsteins, the two most significant candidate regions in both studies agreed (*casein* gene cluster and *somatostatin* (*SST*) gene), although it is impossible to determine if the haplotype under selection is the same, since this information was not provided in \[[@CR2]\]. However, other genes considered significant in Qanbari et al. \[[@CR2]\] (with *p*-values ranging from 0.04 to 0.10) were not significant in our study. When using the same loose significance threshold as in Qanbari et al. \[[@CR2]\] (*p*-values ≤ 0.10), the *casein* gene cluster in this study was identified in Italian Brown (-log~10~ *p*-value = 1.09) and the *SST* gene in Italian Simmental (-log~10~ *p*-value = 1.26).

Table 4 **rEHH values in the candidate gene regions studied in** \[[@CR2]\]

| Candidate gene | BTA | Closest SNP (bp) | Breed | CH range | CH freq | rEHH -log(p) up/down |
|---|---|---|---|---|---|---|
| *DGAT1* | 14 | 444 963 | Holstein | 236 653-443 936 | H1: 0.57 | -/0.11 |
| | | | Brown | 443 936-763 332 | H1: 0.42 | 0.13/0.007 |
| Casein cluster | 6 | 88 391 612 | Holstein | 88 350 098-88 452 835 | H1: 0.46 | 1.48\*/1.33\* |
| | | | Holstein | 88 350 098-88 452 835 | H2: 0.33 | 0.18/0.30 |
| | | | Brown | 88 326 012-88 452 835 | H2: 0.68 | 0.80/1.09 |
| | | | Simmental | 88 350 098-88 452 835 | H1: 0.44 | 0.37/0.22 |
| | | | Simmental | 88 350 098-88 452 835 | H3: 0.34 | 0.22/0.45 |
| *GH* | 19 | 49 652 377 | - | - | - | - |
| *GHR* | 20 | 33 908 597 | - | - | - | - |
| *SST* | 18 | 81 376 956 | Holstein | 81 283 585-81 376 961 | H1: 0.36 | 2.00\*\*/1.89\*\* |
| | | | Holstein | 81 283 585-81 376 961 | H2: 0.29 | 0.063/0.084 |
| | | | Simmental | 81 318 451-81 376 961 | H1: 0.42 | 1.26/0.53 |
| | | | Simmental | 81 318 451-81 376 961 | H2: 0.42 | 0.06/0.27 |
| *IGF-1* | 5 | 71 169 823 | - | - | - | - |
| *ABCG2* | 6 | 37 374 911 | Brown | 37 317 020-38 256 889 | H1: 0.44 | 0.31/0.27 |
| | | | Brown | 37 317 020-38 256 889 | H1: 0.40 | 0.29/0.31 |
| *Leptin* | 4 | 95 715 500 | - | - | - | - |
| *LPR* | 3 | 85 569 203 | Holstein | 85 497 108-85 594 551 | H1: 0.47 | 0.91/0.72 |
| | | | Holstein | 85 497 108-85 594 551 | H2: 0.41 | 0.27/0.25 |
| | | | Brown | 85 497 108-85 794 693 | H1: 0.68 | 0.80/0.63 |
| | | | Simmental | 85 497 108-85 794 693 | H1: 0.63 | 0.02/0.05 |
| *PIT-1* | 1 | 35 756 434 | - | - | - | - |

Cand gene = candidate gene; BTA = *Bos taurus* autosomes; CH = core haplotype; freq = frequency; "-" = no core haplotype reported for that breed/region. \**p* < 0.05, \*\**p* < 0.01.
Shared signatures between breeds {#Sec12}
--------------------------------
Significant core haplotypes were aligned across breeds to identify those that were shared by dairy or beef breeds. Since breeds can be considered as independent sets of observations, shared signatures are more likely to represent real effects rather than false positives. A total of 123 significant core haplotypes (2.2% of the genome), with an average length 216 932 bp, were shared by at least two dairy breeds \[See Additional file [2](#MOESM2){ref-type="media"}: Table S1\]. For beef breeds, 142 core haplotypes (1.7% of the genome) were shared by at least two breeds, with an average length of 190 994 bp. Only 82 and 87 of the shared core haplotypes for dairy and beef breeds, respectively, contained genes. These were considered as positional candidate genes under positive selection and were further investigated.
Gene set annotation and pathway analysis {#Sec13}
----------------------------------------
A total of 244 and 232 annotated genes fell within the regions under selection in dairy and beef breeds, respectively (Table [5](#Tab5){ref-type="table"} and \[See Additional file [2](#MOESM2){ref-type="media"}: Table S1\]). Among these, eight genes were shared by all three dairy breeds and 11 by all three beef breeds (see Figure [2](#Fig2){ref-type="fig"} as an example).

Table 5 **Statistics on common significant core haplotypes in dairy and beef breeds**

| BTA^1^ | Sign.CH^2^ (dairy) | Nb genes^3^ (dairy) | Sum CH size^4^ (dairy) | Avg. CH size^5^ (dairy) | Sign.CH^2^ (beef) | Nb genes^3^ (beef) | Sum CH size^4^ (beef) | Avg. CH size^5^ (beef) |
|---|---|---|---|---|---|---|---|---|
| 1 | 7 | 11 | 1521608 | 138328 | 7 | 13 | 2263213 | 174093 |
| 2 | 5 | 9 | 2216685 | 246298 | 8 | 11 | 2658549 | 241686 |
| 3 | 2 | 5 | 1619336 | 323867 | 7 | 21 | 3755847 | 178850 |
| 4 | 7 | 21 | 5956755 | 283655 | 6 | 11 | 1761288 | 160117 |
| 5 | 4 | 17 | 4506119 | 265066 | 5 | 11 | 1251127 | 113739 |
| 6 | 2 | 4 | 1467738 | 366934 | 5 | 12 | 5149692 | 429141 |
| 7 | 6 | 30 | 4842969 | 161432 | 3 | 28 | 11889191 | 424614 |
| 8 | 0 | 0 | 0 | 0 | 4 | 12 | 3512215 | 292685 |
| 9 | 4 | 7 | 1554863 | 222123 | 2 | 6 | 1106768 | 184461 |
| 10 | 4 | 17 | 8839762 | 519986 | 2 | 3 | 573138 | 191046 |
| 11 | 6 | 8 | 1841802 | 230225 | 1 | 1 | 100439 | 100439 |
| 12 | 2 | 8 | 4244133 | 530517 | 1 | 1 | 536461 | 536461 |
| 13 | 3 | 8 | 1933026 | 241628 | 2 | 8 | 985376 | 123172 |
| 14 | 0 | 0 | 0 | 0 | 3 | 9 | 692397 | 76933 |
| 15 | 5 | 9 | 1415405 | 157267 | 2 | 2 | 189094 | 94547 |
| 16 | 3 | 6 | 1302506 | 217084 | 4 | 6 | 905719 | 150953 |
| 17 | 3 | 4 | 471046 | 117762 | 2 | 9 | 1551010 | 172334 |
| 18 | 0 | 0 | 0 | 0 | 3 | 23 | 2961718 | 128770 |
| 19 | 3 | 23 | 5744143 | 249745 | 2 | 2 | 116938 | 58469 |
| 20 | 1 | 2 | 148886 | 74443 | 2 | 4 | 627971 | 156993 |
| 21 | 3 | 7 | 1930206 | 275744 | 1 | 5 | 1150760 | 230152 |
| 22 | 3 | 5 | 620674 | 124135 | 3 | 10 | 2411751 | 241175 |
| 23 | 0 | 0 | 0 | 0 | 1 | 1 | 168421 | 168421 |
| 24 | 2 | 18 | 9166004 | 509222 | 2 | 4 | 803770 | 200942 |
| 25 | 2 | 11 | 2016602 | 183327 | 1 | 3 | 329199 | 109733 |
| 26 | 2 | 4 | 466134 | 116534 | 4 | 7 | 776080 | 110869 |
| 27 | 1 | 1 | 631792 | 631792 | 2 | 3 | 1004717 | 334906 |
| 28 | 0 | 0 | 0 | 0 | 1 | 1 | 93464 | 93464 |
| 29 | 2 | 9 | 935209 | 103912 | 1 | 5 | 798300 | 159660 |
| TOT | 82 | 244 | 65393403 | 216932 | 87 | 232 | 50024613 | 190994 |

^1^ *Bos taurus* autosomes; ^2^number of significant core haplotypes (*P* < 0.05); ^3^number of genes identified in the significant regions; ^4^sum of significant core haplotype sizes, in bp; ^5^average size of significant core haplotypes, in bp.

Figure 2 **Genomic location of the selection signatures shared among the studied breeds.** **(a)** Genes in Ensembl tracks are displayed as red boxes; core haplotypes and SNPs are coloured in orange (Marchigiana; MAR), in purple (Piedmontese; PIE) and pink (Simmental; SIM). **(b)** Genes in Ensembl tracks are displayed as red boxes; core haplotypes and SNPs are coloured in blue (Holstein; HOL), in green (Italian Brown; BRW) and pink (Simmental; SIM).
All identified genes were submitted to pathway analyses. The most interesting genes for dairy breeds were *breast cancer anti-estrogen resistance 3* (*BARC3*) and *pituitary glutaminyl cyclase* (*QPCT*), which are directly connected with the metabolism of the mammary gland \[[@CR22],[@CR23]\]. *Solute carrier family 2, member 5* (*SLC2A5*) facilitates glucose/fructose transport \[[@CR24]\], and *zeta-chain (TCR) associated protein kinase 70 kDa* (*ZAP70*) plays a critical role in T-cell signalling \[[@CR25]\]. Calpain is another important complex that, together with *calpain-3* (*CAPN3),* mediates epithelial-cell death during mammary gland involution \[[@CR26]\]. Furthermore, RAS guanyl nucleotide-releasing protein (RASGRP1) activates the Erk/MAP kinase cascade, regulates the development of T- and B-cells, homeostasis and differentiation, and is involved in regulation of breast cancer cells \[[@CR27]-[@CR29]\].
*Chondroitin sulfate proteoglycan 4* (*CSPG4*) and *snurportin-1* (*SNUPN*) are the most interesting genes that were shared among all beef cattle breeds investigated. *CSPG4* is related to meat tenderness, while *SNUPN* is an imprinted gene that has an important role in embryo development and is involved in human muscle atrophy \[[@CR30]\].
A total of six and nine statistically significant canonical pathways (FDR ≤ 0.05; -log~10~(FDR) ≥ 1.3) were identified using IPA for dairy and beef breeds, respectively (Figure [3](#Fig3){ref-type="fig"} and \[See Additional file [3](#MOESM3){ref-type="media"}: Table S2\]). For the dairy breeds, the most significant canonical pathway was purine metabolism (-log~10~(FDR) = 2.6), which supports the highly synthetic processes in the mammary epithelium \[See Additional file [4](#MOESM4){ref-type="media"}: Figure S6\]. In beef breeds, *ephrin receptor* signalling (-log~10~(FDR) = 2.7) was the most significant canonical pathway \[See Additional file [5](#MOESM5){ref-type="media"}: Figure S7\]. Among other functions, the *ephrin receptor* is known to promote muscle progenitor cell migration before mitotic activation \[[@CR31]\]. All other canonical pathways are reported in Table S2 \[See Additional file [3](#MOESM3){ref-type="media"}: Table S2\].

Figure 3 **Bar plot of statistically significant canonical pathways.** *P*-values were corrected for multiple-testing using the Benjamini-Hochberg method and are presented in the graph as -log(*p*-value). The bar represents the percentage of genes in a given pathway that meet the cut-off criteria within the total number of molecules that belong to the function. **(a)** Bar plot of statistically significant canonical pathways in dairy cattle breeds. **(b)** Bar plot of statistically significant canonical pathways in beef cattle breeds.
Discussion {#Sec14}
==========
In this study, the genotypes of more than 4000 bulls from five Italian breeds were analysed for putative dairy and beef selection signatures. Strict data quality control was applied to reduce possible sources of bias from genotyping errors and population structure. In particular, the confounding effect of population structure was investigated by replicating part of the analyses without excluding a large number of close relatives and without balancing family members in the dataset. Assessment of the effect of population structure on rEHH results was restricted to four control regions that are known to be under selection in Italian Holstein, namely the *casein* gene cluster, the polled locus, and the *MC1R* and *KIT* genes. This breed was selected for two reasons: (i) according to our data it is a highly structured breed and (ii) it allowed us to compare our results with a previous study \[[@CR2]\]. Although analyses conducted on both redundant and non-redundant datasets identified rEHH signals in these regions, the non-redundant dataset produced five significant rEHH signals, compared to only one in the redundant dataset (Table [2](#Tab2){ref-type="table"}). These results highlight the confounding effect of the presence of close relatives in the dataset and, consequently, the improved ability to detect a significant selection signature when correcting for population structure.
Due to pedigree links, population stratification rather than selection leads to an over-representation of haplotypes that are present in large families (*e.g.* sires that pass half of their genetic material to their sons). For this reason, for the full analyses, all sire-son pairs were removed after haplotype phasing (retaining only the sons), and half-sib families were restricted to a maximum of five randomly chosen individuals, to reduce family over-representation. This threshold was a compromise between limiting haplotype redundancy and retaining sufficient information to detect signals; reducing half-sib families to only one individual (which would have been the most rigorous choice), would have led to an excessive reduction of the dataset. The progeny-tested Italian bulls analysed in this study are highly related, especially those of the dairy type, and if the most stringent threshold had been applied, 82% of the Italian Holstein individuals would have been removed.
The three significant control regions (Table [2](#Tab2){ref-type="table"}) in the non-redundant dataset showed a slightly different EHH decay over distance, as shown in the bifurcation plots of Figure [1](#Fig1){ref-type="fig"}. Here, EHH values are reported, since they are graphically easier to interpret. The two most frequent haplotypes for the polled locus showed a similar EHH pattern: high values (i.e. \~1) close to the core haplotype and a rapid decay to 0.2 at \~1 cM down- and up-stream (Figure [1](#Fig1){ref-type="fig"}.a). The second haplotype, however, was excluded from this analysis, since its frequency was lower than the threshold that was set (\<25%). Interestingly, the cumulative frequency of these two haplotypes in the whole Italian Holstein population was 99%, e.g. nearly all individuals carried these two (core) haplotypes. A similar pattern was observed for the two most frequent haplotypes in the *MC1R* gene (Figure [1](#Fig1){ref-type="fig"}b). In contrast to the polled locus, high EHH values (e.g. \> 0.5) were maintained at distances of more than 200 kb up- and down-stream from the core haplotype for the *MC1R* gene, which potentially indicates more recent and strong selection. A more conserved haplotype was particularly evident for the *casein* gene cluster (Figure [1](#Fig1){ref-type="fig"}c), with EHH values greater than 0.6 at distances of more than \~1 Mb up- and down-stream from the core haplotype. Interestingly, similar values (both in terms of haplotype frequency and EHH) were reported in \[[@CR2]\].
We also compared results for all candidate regions investigated in \[[@CR2]\]; our results only partially overlapped with those reported by Qanbari et al. \[[@CR2]\]. Common signals were found at the *casein* gene cluster (see above) and the *SST* gene, while Qanbari et al. \[[@CR2]\] found significant signals also in other regions. These inconsistencies may be due to the presence of different sires in the analyses, different dataset sizes, or to the close-relative reduction procedure that we adopted to decrease the effect of population structure and consequent bias. However, poor agreement across studies is similarly observed in human studies and is often due to: (i) use of different within- and between-populations statistics that potentially identify selection signatures with different characteristics (ancient/recent, segregating/fixed, under directional/balancing selection), (ii) high rate of false positive/negative results, and (iii) different ways of accounting for population structure and background selection \[[@CR32]\]. In a recent study, Mancini et al. \[[@CR33]\] estimated the fixation index (Fst) in the same populations investigated here and identified signals that do not overlap with those reported here. Although at least a partial overlap was expected, this could be explained by the intrinsic differences between the Fst and rEHH methods. By comparing two populations (or groups of populations), Fst is much more efficient in capturing large allele frequency differences between breeds and thus identifies "outlier" SNPs that are fixed or close to fixation for opposite alleles. This means that the identified signals are usually markers that have been differentially selected for a relatively large number of generations (e.g. "old" selection). Conversely, rEHH identifies long haplotypes that segregate at high frequency in the population, and thus are, by definition, recent.
The number of total and significant core haplotypes identified by the Sweep software was highest for the Italian Brown and lowest for the Piedmontese breed. Since rEHH methods rely heavily on population LD, the average LD at different genomic distances was estimated for each breed (Figure [4](#Fig4){ref-type="fig"}). Although values of LD based on sequence data decay at shorter distances than the values presented here \[[@CR34]\], this analysis highlighted a general positive correlation between the level of LD over distance and the number of total and significant core haplotypes found. However, considering that rEHH is a relative measure, the larger number of significant core haplotypes identified for dairy breeds was likely due to the higher selective pressure (and thus a higher local LD at specific loci) in dairy compared to beef breeds.

Figure 4 **Multi-breed average linkage disequilibrium against physical distance (in kb).** Marchigiana (blue stars), Piedmontese (green filled triangles) and Italian Simmental (red diamonds) breeds show a lower persistence of LD over distance than Italian Holstein (black filled circles) and Italian Brown (orange triangles) breeds.
Significance tests used to detect selection signatures should measure the probability of a statistic being an outlier value compared to its expected distribution under a neutral model. However, no reliable neutral model has so far been developed for cattle because of the complexity of the demographic history of this species \[[@CR13]\]. As a consequence, empirical rather than model-based significance tests are generally used to detect selection signatures. Accordingly, we considered as outliers those values falling in the upper 5% tail of the rEHH distribution. We kept the within-breed significance threshold loose, without correcting it for multiple-testing, but considered only signals shared by two or more independent breeds of the same production type.
The parallel comparison of results from independent analyses of different breeds allowed us to achieve two objectives: (i) identification of putative regions under (recent) selection in breeds with different production purposes, which was the main objective of this study, and (ii) reduction of the rate of false positives, since the multi-breed analyses served as internal controls. Since the rEHH method does not consider phenotypic information, a significant signal might arise because: (i) the core haplotype is actually under selective pressure or (ii) the result is a false positive, *i.e.* caused by chance, population structure or some other driving force. However, even in an unrealistic scenario with no false positives, a proportion of the signals will be selection signatures caused by selective pressure on traits other than dairy or beef production. This is because only a few dairy and beef breeds were analysed, and the breeds share a number of traits that are not directly related to dairy or beef production, such as coat colour and polled/horned status. Even considering this limitation, to our knowledge, this is the first multi-breed study in dairy and beef cattle that applies such a strategy to reduce the rate of false positives, at the cost of a possible loss of information due to higher false negative rates. Significant signals shared by dairy and beef breeds were used for downstream gene annotation and pathway analyses on positional candidate genes to investigate the biological processes behind the genomic signals. Only the most significant pathways for dairy and beef breeds will be discussed in detail in the following.
Dairy breeds {#Sec15}
------------
Putative signals of selection were found in regions that contain the *BARC3* and *QPCT* genes, and these were shared among all three dairy breeds. To date, neither of these genes has been studied in cattle. However, human studies have shown that these genes are linked to mammary gland metabolism and calcium regulation. *BARC3* is involved in integrin-mediated cell adhesion and signaling, which is required for mammary gland development and function \[[@CR23]\]. *QPCT* is associated with low radial bone mineral density (BMD) in adult women \[[@CR22]\]. Another interesting candidate gene is *SLC2A5*, which acts as a fructose transporter in the intestine and has a significant role in the energy balance of dairy cows \[[@CR35]\]. The detection of this gene in a dairy cattle-specific candidate region is surprising, since, in theory, there should be little need for transporting glucose and/or fructose in the ruminant intestinal tract because simple carbohydrates are degraded into volatile fatty acids (VFA) in the rumen \[[@CR36]\]. However, it is known that large amounts of starch bypass the rumen in cows fed diets that are rich in cereal grains \[[@CR24]\]. This bypassed starch needs to be digested in the small intestine and then absorbed, to avoid high levels of glucose in the large intestine.
We detected candidate regions that contained the *calpain complex* and *calpain-3* (*CAPN3*) genes in the dairy breeds, as reported by Utsunomiya et al. \[[@CR7]\]. Although calpain is known to be involved in postmortem meat tenderization, it is also related to dairy metabolism, since muscle breakdown promoted by *calpain* provides an energy source for milk production especially at the beginning of lactation \[[@CR37]\]. In addition, as reported by Wilde et al. \[[@CR26]\], the *calpain-calpastatin* system is related to the programmed cell death of alveolar secretory epithelial cells during lactation. The *Zap70* gene encodes a cytoplasmic protein tyrosine kinase that is related to the immune system and plays a central role in T-cell responses, as a component of the T-cell receptor \[[@CR38]\]. Bonnefont et al. \[[@CR25]\] reported that the *Zap70* gene was up-regulated in somatic cells present in the milk of sheep infected by *Staphylococcus aureus* and *Staphylococcus epidermidis*, which suggests an association with mastitis resistance.
Purine metabolism was the most significant canonical pathway in dairy breeds \[See Additional file [4](#MOESM4){ref-type="media"}: Figure S6\]. In a gene expression analysis of human breast milk fat globules, Maningat et al. \[[@CR39]\] also identified purine metabolism as the most significant pathway. Synthesis and breakdown of purines are essential in the tissue metabolism of many organisms, and in particular in that of the mammary gland during lactation.
Another interesting canonical pathway was endothelin signalling (Figure [3](#Fig3){ref-type="fig"}). Endothelin functions as a vasoconstrictor and is secreted by endothelial cells \[[@CR40]\]. Acosta et al. \[[@CR41]\] reported that in cattle, endothelins are involved in the follicular production of prostaglandins and the regulation of steroidogenesis in the mature follicle. In a recent study, Puglisi et al. \[[@CR42]\] confirmed that endothelins, in particular *EDNRA* (a potential biomarker for fertility in cows) and *endothelin-converting enzyme 1*, are involved in a reproductive disorder in cows.
Beef breeds {#Sec16}
-----------
A suggestive signature of selection in all three beef breeds was found in the region of the *CSPG4* gene, which belongs to the *chondroitin sulfate proteoglycan* (*CSPG*) gene family. CSPG are proteoglycans that consist of a protein core and a chondroitin sulfate side chain. They are known to be structural components of a variety of tissues, including muscle, and to play key roles in neural development and glial scar formation. They are involved in cellular processes, such as cell adhesion, cell growth, receptor binding, cell migration, and interactions with other extracellular matrix constituents.
Many studies have reported the role of proteoglycans in the determination of meat texture of several bovine muscles \[[@CR43]\]. Dubost et al. \[[@CR44]\] highlighted a direct role of proteoglycans in cooked meat juiciness. Another putative signal of selection was found on the *RB1-inducible coiled-coil 1* (*RB1CC1*) gene. This gene plays a crucial role in muscular differentiation and its activation is essential for myogenic differentiation \[[@CR45]\]. The *monoacylglycerol acyltransferase* (*MGAT3*) gene catalyses the synthesis of diacylglycerol (DAG) using 2-monoacylglycerol and fatty acyl coenzyme A. This enzymatic reaction is fundamental for the absorption of dietary fat in the small intestine. In a study on five Chinese cattle breeds, Sun et al. \[[@CR46]\] reported that the *MGAT3* gene is associated with growth traits. The *cold inducible RNA binding protein* (*CIRPB*) gene may be part of a compensatory mechanism in muscles that undergo atrophy. It preserves muscle tissue mass during cold-shock responses, aging and disease \[[@CR47]\]. *SNUPN* is an imprinted gene that is expressed monoallelically, depending on its parental origin. *SNUPN* plays important roles in embryo survival and postnatal growth regulation \[[@CR48],[@CR49]\]. Ephrin receptor signalling was the top canonical pathway identified by IPA and has interesting biological roles for meat production \[See Additional file [5](#MOESM5){ref-type="media"}: Figure S7\]. Indeed, this pathway is important for muscle tissue growth and regeneration by participating in the correct positioning and formation of the neural muscular junction \[[@CR31]\].
Conclusions {#Sec17}
===========
In this study, we analysed candidate selection signatures at the genome-wide level in five Italian cattle breeds. Then, we used a multi-breed approach to identify the genomic regions shared among cattle breeds selected for dairy or beef production. This approach increased the potential of pin-pointing regions of the genome that play important roles in economically relevant traits. Moreover, gene annotation and pathway analyses were used to describe the gene functions in the regions potentially under recent positive selection.
Specifically, dairy cattle genes that are likely to be under directional selection are related to feeding adaptation (increasing levels of starch in the diet), mammary gland metabolism and resistance to mastitis, while putative regions under selection in beef cattle are related to animal growth, meat texture and juiciness. Considering that annotation for the bovine genome is not as accurate as for the human genome, the biological interpretation of selection signatures can be derived based only on genes that are located near candidate regions. Moreover, novel information in humans suggests that many selected variations are not located within genes and coding regions, but in regulatory sites that have been identified within the ENCODE project \[[@CR32]\]. These may control the expression of entire genomic regions or genes located at a relevant distance from the selected site, making biological interpretation more complex.
Future studies using denser SNP chips or whole-genome sequencing that provide information not subjected to ascertainment bias \[[@CR34]\], may increase the resolution of our analysis and, together with increasing knowledge on the control of gene expression, should validate our results.
Additional files {#Sec18}
================
Additional file 1: Figures S1, S2, S3, S4 and S5. Genome-wide maps of *p*-values for core haplotypes for the Italian Holstein, Italian Brown, Italian Simmental, Marchigiana and Piedmontese breeds, respectively. The file is a .zip compressed document including five .tiff images (Figures S1 to S5) that show the genome-wide map of *p*-values for core haplotypes with a frequency higher than 0.25 for the Italian Holstein (HOL), Italian Brown (BRW), Italian Simmental (SIM), Marchigiana (MAR) and Piedmontese (PIE) breeds. Dashed lines represent the cut-off level of 0.01.

Additional file 2: Table S1. Significant core haplotypes and genes shared between dairy and beef cattle breeds. Significant core haplotypes (*p*-value ≤ 0.05; haplotype frequency ≥ 0.25) shared among dairy and beef cattle breeds and the genes that intersect with them. The file is a .xls document with four sheets: the first two list the significant core haplotypes for dairy and beef cattle breeds, and the other two list the genes intersecting with those core haplotypes.

Additional file 3: Table S2. Ranking of canonical pathways in dairy or beef cattle breeds, with the list of corresponding gene symbols, ratio and -log10 of the *p*-values for each canonical pathway.

Additional file 4: Figure S6. Genes detected under recent positive selection in dairy cattle and involved in the purine metabolism canonical pathway. Nodes in red correspond to genes identified in core haplotypes that overlap in all three breeds of each production type, whereas those in green depict overlapping core haplotypes in at least two of those breeds.

Additional file 5: Figure S7. Genes detected under recent positive selection in beef cattle and involved in the ephrin receptor canonical pathway. Nodes in red correspond to genes identified in core haplotypes that overlap in all three breeds of each production type, whereas those in green depict overlapping core haplotypes in at least two of those breeds.
Lorenzo Bomba and Ezequiel L Nicolazzi contributed equally to this work.
**Competing interests**
The authors declare that they have no competing interests.
**Authors' contributions**
PAM, AV, AS, LB and ELN and RN contributed to the design of the study. LB, ELN, MM, GM and FB generated the data and performed data analyses. LB and ELN drafted the manuscript. PAM, LB, ELN, MM and RN interpreted the results and contributed to the editing of the manuscript. All authors read and approved the final manuscript.
**Acknowledgements**

ANAFI (Italian Association of Holstein Friesian Breeders), ANAPRI (Italian Association Simmental Breeders), ANABORAPI (National Association of Piedmontese Breeders), ANABIC (Association of Italian Beef Cattle Breeders) and ANARB (Italian Brown Cattle Breeders Association) are acknowledged for providing the biological samples and the necessary information for this work. The authors are also grateful to the "SelMol" project (MiPAAF, Ministero delle Politiche Agricole, Alimentari e Forestali) and the "ProZoo" project (Regione Lombardia--DG Agricoltura, Fondazione Cariplo, Fondazione Banca Popolare di Lodi) for funding. The funders had no role in the design of the study, data collection and analysis, decision to publish, or preparation of the manuscript.
| |
By: Paulina Mormol
Pages: 1–10 (10 total)

Displaying Labeled Quantitative Data
By: Marcin Kozak
Pages: 11–20 (10 total)

On the Way to Online Communication. On the Content and the Language of “Nowy Akapit” Magazine
By: Klaudia Kuraś-Szczepanek
Pages: 21–31 (11 total)

Mass Automated Internet Analysis and Cyberspace Transparency
By: Marek Robak
Pages: 32–44 (13 total)

How Facebook Polarizes Public Debate in Poland - Polish Filter Bubble
By: Zofia Sawicka
Pages: 45–52 (8 total)

The Kardashian Moment: Hashtag, Selfie and the Broken Internet
By: Mariusz Pisarski
Pages: 53–61 (9 total)

Organisational Communication in the Age of Artificial Intelligence Development. Opportunities and Threats
By: Monika Kaczmarek-Śliwińska
Pages: 62–68 (7 total)

FOMO, Brands and Consumers – about the Reactions of Polish Internet Users to the Activities of Brands in Social Media (Based on CAWI Representative Research)
By: Anna Jupowicz-Ginalska
Pages: 69–84 (16 total)

Smartphone and Tablet in the Everyday Life of Preschool Children. Impact and Educational Options in the Opinion of Parents and Teachers of Kindergarten
By: Aleksandra Gralczyk
Pages: 85–102 (18 total)

Use of Digital Technologies and Social Media and the Social Relationships of Teenagers
By: Małgorzata Chmielewska and Mariusz Z. Jędrzejko
Pages: 103–110 (8 total)

Internet and Mobile Applications in Work Life and Private Life of Digital Marketers - Methodology of Research
By: Łukasz Bis, Mark Bannatyne, Kamil Radomski, and Krystian Szejerka
Pages: 111–116 (6 total)

Journal: Social Communication (Open Access)
Online ISSN: 2450-7563
First Published: 16 Apr 2015
Language: English
Publisher: | https://content.sciendo.com/view/journals/sc/5/2/sc.5.issue-2.xml |
High precision temperature measurement with TIDA-01526

High Accuracy AC Analog Input Module for Voltage & Current measurement using High Resolution Precision ADC for Protection Relay
Welcome to the world of Power Systems. This Training session covers Quick Introduction to Power Systems and need for protection relay, Protection Relay Modular Architecture, AC Analog input Module (AIM), Key Specifications, Time and Frequency domain analysis, Coherent, Simultaneous and over sampling, Selection of ADC and other key components and TI solutions. Design details for TI design TIDA-00834 and Links to TI designs customer can refer when designing AIM.

Getting Started with the ADS7042 Ultra-Low Power Data Acquisition BoosterPack

Getting Best Performance From Your GSPS and RF Sampling ADC Designs

Get Your Clocks in Sync: Software Setup

Get Your Clocks in Sync: Hardware Setup

Get Your Clocks in Sync for JESD204B Data Converters

Frequency and Sample Rate Planning: Understanding Sampling, Nyquist zones, Harmonics and Spurious Performance in High-Speed ADCs

Flexible interface (PRU-ICSS) for data acquisition using multiple ADCs

Extending JESD204B Link on Low Cost Substrates

EOS and ESD on ADC

Engineer It: Why Not A DC/DC Converter?

Engineer It: What is ADC PSR?

Engineer It: How to do a ratiometric configuration of an RTD sensor application

Engineer It: How to Design the Best DC/DC Power Supply

Engineer It- How to select a precision DAC

Engineer It- How to get data sheet values from your SAR ADC

Electrical Fundamentals and Need for power systems protection

Designing with Delta-Sigma ADCs: System design considerations to optimize performance
Delta-sigma analog-to-digital converters (ADCs) are oversampling converters typically used in applications requiring higher resolution. However, ADCs do not work by themselves. In fact, they require several key components around them, including a front-end amplifier, a voltage reference, a clock source, power supplies, and a good layout. Many devices integrate these features together with the ADC to offer a complete system solution, which simplifies the design for customers and minimizes board space.

Designing a Multi-Channel 4-20mA Analog Input Module | https://training.ti.com/search-catalog/field_language/EN/categories/analog-digital-converters-adcs?keywords=&start&end&page=5&sort=title&%3Bamp%3Border=asc&%3Bamp%3Bkeyword_op=AND&%3Bamp%3Bkeywords=&%3Bamp%3Bstart=&%3Bamp%3Bend=&%3Bamp%3BDCMP=training&%3Bamp%3BHQS=training-wireless&%3Bamp%3Bqt-search_filters=3&%3Bqt-search_filters=0 |
According to this blog piece from Eric J. Conn, an attorney with Epstein Becker Green, OSHA already has its target list of the kinds of violations it will be looking for in workplace inspections for the remainder of the year. In the healthcare field, expect the usual suspects – fall hazards, emergency exits, and hazard communication.
Read the entire blog below:
5 issues OSHA will target in remainder of 2013
Watchdog report says OSHA to blame for high rate of healthcare worker injuries
A Washington, D.C.-based worker advocacy group says OSHA needs to step up its game when it comes to protecting healthcare workers, who suffer one of the nation’s highest rates of workplace injuries.
Among other things, Public Citizen, in a report released in July, claims that while healthcare workers outnumber construction workers by a ratio of 2-to-1, OSHA performs 20 times more inspections of construction work sites than of healthcare facilities. Blaming budgetary constraints, the group says more inspections could catch hazardous workplace conditions and prevent some of these injuries.
Read the entire report below:
“Health Care Workers Unprotected” report by Public Citizen
OSHA awards $10 million in health and safety training grants
OSHA has awarded $10.1 million in health and safety training grants to 70 organizations, including non-profits, labor unions, faith-based organizations, and colleges. The Susan Harwood Training Grant Program gives one-year grants to organizations for education and training programs that help workers identify workplace hazards, create injury-prevention programs, and understand their rights and responsibilities as employees.
Healthcare workers near top of the list for getting hurt on the job.
Needles everywhere. Dangerous germs. Blood, urine and vomit. Users high on bath salts.
And, of course, lots of heavy lifting.
You have to be brave to face the hazards of working in health care.
Not something you didn’t know. Read the entire article below:
http://www.uticaod.com/features/x1655337490/Health-care-workers-near-top-of-the-list-for-getting-hurt-on-job
Study: Healthcare infections cost $10 billion a year
A recent study from researchers at Harvard Medical School and Brigham and Women’s Hospital in Boston found that the five most common healthcare-related infections cost the U.S. healthcare system almost $10 billion per year.
The study, published in JAMA Internal Medicine, examined data collected from 1986 to April 2013 from the published literature, and found that central line-associated bloodstream infections are the most costly on a per-case basis, at $45,814, representing about 34% of the total number of HAIs in the U.S. Ventilator-associated pneumonia was second, at $40,144. Rounding out the list were surgical site infections at $20,785, C. difficile infections at $11,285, and catheter-associated urinary tract infections at $896.
Preventing needlesticks could save $1 billion a year
Despite federal mandates put in place 13 years ago to protect healthcare workers from needlesticks, workers are still being stuck at an alarming rate. By some estimates, some 600,000 workers in medical clinics suffer needlesticks and other sharps-related injuries every year.
Safe in Common [SIC], a Lewisberry, PA-based non-profit advocate for healthcare worker safety, released statistics in late August estimating that 1,000 healthcare workers in the U.S. are stuck by a needle every day.
And those are expensive needlesticks. SIC estimates sharps injuries cost the U.S. healthcare system up to $1 billion for laboratory testing fees, counseling, and costs related to post-exposure follow-ups. This amounts to an estimated $3,042 per victim each year, according to CDC estimates.
“These completely preventable injuries, needless cost burdens on the healthcare system and psychological trauma inflicted on personnel is startling when safer equipment and smarter work practices are available to personnel across the healthcare spectrum,” says Safe in Common Chairperson Mary Foley, PhD, RN. “At a time when healthcare personnel are forced to wear Kevlar gloves to protect their hands from needlesticks, we’re highlighting the costs of ignoring safety-engineered devices to avoid these needless injuries.”
Up to 30 percent of GI scopes found to be infection hazards
A recent report released by the Association for Professionals in Infection Control and Epidemiology (APIC) chronicled a study in which up to 30 percent of scopes used in gastrointestinal procedures such as colonoscopies and endoscopies in U.S. hospitals and clinics were found to be contaminated to the point of being an infection hazard, let alone a major OSHA violation.
Because of the intimate exposure of the scopes to a part of the human body that naturally contains waste products such as fecal matter and bacteria, the scopes need to be specifically and carefully cleaned and disinfected in an industry-accepted six-step process that starts with manual cleaning at the bedside and ends with disinfection in a high-level disinfectant solution.
But proper cleaning assumes that busy clinics and hospitals have staff properly trained in the disinfection of the scopes, some of which have very specific instructions depending on the model of the device. In addition, some infection control experts say that, more often than not, procedures are not being followed.
“This has become a champion pet peeve of mine,” says Kathy Rooker, owner of Columbus Healthcare & Safety Consultants in Canal Winchester, Ohio. “I have seen an employee cleaning a laryngoscope with a hand wipe and the patient was waiting in the room for them. If I was [a patient], I would request a doctor who would allow me to observe the cleaning process.”
Workplace Fatalities down in 2012
New statistics released by the U.S. Department of Labor show an overall decrease in workplace fatalities in 2012 compared with a year earlier.
An Aug. 22 statement from Secretary of Labor Thomas E. Perez showed that 4,383 workers died in 2012, 310 fewer than the 4,693 fatalities in 2011.
Texas saw the most worker deaths, at 531, with Rhode Island logging only 8 deaths in 2012. Most of the worker deaths reported involved motor vehicle accidents.
About 106 fatalities were reported in the healthcare industry, representing about 2 percent of total U.S. workplace fatalities. A majority of those deaths were among healthcare practitioners and technical occupations, and were caused by falls or roadway accidents. | http://blogs.hcpro.com/osha/2013/09/ |
The last message of a dying alien world.
A new technology offering unlimited possibilities with catastrophic consequences.
A 22nd Century collision between an ecological revolution and the fading remnants of corporate power.
Exiled CEO, Scientist, Warrior, Monk, and Corporate Clone: thrown together on an odyssey circling the Ten Directions of the solar system and of the human spirit.
Can they find redemption for mistakes made lifetimes ago in a galaxy far away?
| https://bookshop.org/books/ten-directions/9781945604195 |
We are working together to empower individuals, strengthen families and enable communities. Our members include elementary, secondary, post-secondary and extension educators, administrators, and other professionals in the government, business and nonprofit sectors. Our members also include students preparing for a career in the field of family & consumer sciences.
AAFCS is one of the oldest professional societies in the United States. Founded in 1909, our purpose is to improve the quality and standards of individual and family life through programs that educate, influence public policy, disseminate information and publish research findings.
Association for Career and Technical Education
Link: http://www.acteonline.org/
The Association for Career and Technical Education is the largest national education association dedicated to the advancement of education that prepares youth and adults for careers.
ACTE is committed to enhancing the job performance and satisfaction of its members; to increasing public awareness and appreciation for career and technical programs; and to assuring growth in local, state and federal funding for these programs by communicating and working with legislators and government leaders.
Business Professionals of America
Link: http://www.indianabpa.org/
Business Professionals of America is the leading CTSO (Career Technical Student Organization) for students pursuing careers in business management, office administration, information technology and other related career fields. Indiana BPA has close to 2,700 members in over 90 chapters.
BPA is a "co-curricular" organization that supports business and information technology educators by offering curriculum based on national standards. Resources and materials are available on-line and designed to be customized to a school's program.
The Workplace Skills Assessment Program (WSAP) prepares students to succeed and assesses real-world business skills and problem solving abilities in finance, management, IT and computer applications. Visit the national BPA web site for additional information.
Mission Statement: The mission of Business Professionals of America is to contribute to the preparation of a world-class workforce through the advancement of leadership, citizenship, academic, and technological skills.
Vision: At Business Professionals of America, we are committed to developing the best possible career and technical education organization for students in the United States. The measure of our success will be the perception that alumni of Business Professionals of America are highly competent and skilled workforce professionals who enable business and industry to maintain the economic vitality and high quality of life associated with our celebrated United States of America.
Career and Technical Education
Link: http://www.doe.in.gov/cte/partners-and-professional-associations
Many professional organizations and partners play an important role in delivering Indiana's Career and Technical Education programs by helping teachers keep up to date with content and best practices.
Engineering/Technology Educators of Indiana
Link: http://www.indianaetei.com/
The Engineering/Technology Educators of Indiana (E/TEI) is the premier professional organization for K-12 and post-secondary engineering and technology educators. E/TEI is closely aligned with the International Technology and Engineering Educators Association (ITEEA), the leading international professional organization. E/TEI also works diligently with the Indiana Department of Education (IDOE), Indiana Department of Workforce Development (IDWD), Indiana Association of Career and Technical Educators (IACTE), and the Indiana Career and Technical Education Districts (IACTED) to provide the best opportunities for its members.
E/TEI members and leadership team continue to be involved at the local, state, national, and international level.
Future Business Leaders of America
Link: http://www.fbla-pbl.org/
The largest and oldest business student organization in the world! A quarter of a million high school and middle school students, college and university students, faculty, educators, administrators, and business professionals have chosen to be members of the premier business education association preparing students for careers in business.
Future Educators Association
Link: http://www.futureeducators.org/
The Future Educators Association offers programs designed to help students develop the leadership skills necessary for a successful career in education. Among these programs are the FEA Honor Society, which recognizes students and chapters that have met high standards for academic and personal achievement, and national student officer positions, which provide students with opportunities for input during event and program development.
Future Health Professionals
Link: http://www.indianahosa.org/
Serving future health professionals since 1976, HOSA was created with the idea of providing students opportunities to develop as leaders and future employees. With over 140,000 members across the nation, it is safe to say that HOSA has met its mission! HOSA creates driven, determined student leaders who are excited about healthcare and all that HOSA has to offer. Above all, HOSA is a tool vital to the success of students, teachers, and health professionals. HOSA is 100% healthcare and connects all hubs of the healthcare field. One experience ignites another, creating a chain reaction between those who teach, learn, and do.
HOSA operates as an integral component of the health science education curriculum. Through its network of state and local chapters, HOSA provides powerful instructional tools, recognition, leadership, networking, scholarships, and connections to the healthcare industry to thousands of members across the United States.
Through the HOSA Competitive Events Program, members can compete in teams or as individuals in over 55 different events related to all aspects of the health care industry. HOSA integrates into the Health Science Technology Education curriculum to develop and recognize smart, dedicated, and passionate future health professionals.
Hack Michiana
Link: http://thebranchsb.com/tag/hack-michiana/
Meeting Time: Second Thursday of the Month, 5:30 pm
Meeting Place: The Branch, 105 E. Jefferson Blvd., Suite 500, South Bend, IN 46601
574-514-3285
Description: Partnership with Code for America and the City of South Bend to bring useful datasets and programs to the citizens of Michiana
Hoosier Association of Science Teachers inc.
Link: http://www.hasti.org/
The purpose of HASTI is the advancement, stimulation, extension, improvement, and coordination of science education in all fields of science at all educational levels. (HASTI Founders, 1969)
Michiana is part of their Districts 1 & 2.
Indiana Academy of Science
Link: http://www.indianaacademyofscience.org/Default.aspx
The Indiana Academy of Science is a professional membership organization of Indiana scientists. Founded in 1885, it is a non-profit organization dedicated to promoting scientific research and diffusing scientific information; to encouraging communication and cooperation among scientists and to improving education in the sciences.
Indiana Association of School Broadcasters
Link: http://iasbonline.org/
The organization conducts a yearly contest in which students can compete to be named best in the state in several different broadcast-related categories.
If you are at a school that is working with audio and video and you are not already a member please contact the IASB board.
Indiana Business Education Association
Link: http://ind-ibea.org/index.htm
· Provide a vision of business education for the future
· Provide career awareness and career exploration activities
· Continue to integrate Indiana Standards into the business curriculum
· Assist students to prepare for new careers and opportunities in our technology-rich society
· Emphasize the development of soft skill across the business education curriculum
· Use technology as a teaching tool and a critical component of the curriculum
· Promote a school-based and work-based environment for learning
· Continue to enhance partnerships with business
NBEA Scholarship Opportunities: Each year, NBEA awards two $1,000 scholarships to individuals pursuing continuing education or graduate study in business education. The scholarships recognize and support educators who give evidence of leadership and scholarship potential in the field of business education. The NBEA Scholarship Program provides financial assistance to outstanding individuals for the purpose of continuing their study and professional development in business education.
Indiana Science Olympiad
Link: https://www.indianascienceolympiad.org/
Science Olympiad is a non-profit organization dedicated to improving the quality of K-12 science education, increasing male, female, and minority interest in science, creating a technologically literate workforce, and providing recognition for outstanding achievement by both students and teachers. These goals are achieved by participating in Science Olympiad tournaments and non-competitive events, incorporating Science Olympiad into classroom curriculum, and attending teacher training institutes. Indiana Science Olympiad is a state-level organization which operates independently of Science Olympiad. Because of the history and nature of Science Olympiad, the function of the two entities, Science Olympiad and Indiana Science Olympiad, is interdependent.
America's Most Exciting Team Science Competition
For the past 27 years, Science Olympiad has led a revolution in science education. What began as a grassroots assembly of science teachers is now one of the premier science competitions in the nation, providing rigorous, standards-based challenges to nearly 6,000 teams in 49 states. Science Olympiad's ever-changing lineup of events in all STEM disciplines provides a variety of career choices and exposure to practicing scientists and mentors. Recognized as a model program by the National Governors Association Center for Best Practices, Science Olympiad is committed to increasing global competitiveness for the next generation of scientists.
National Science Education Standards
All Science Olympiad events are aligned with current National Science Standards set by the National Research Council. Teachers seeking curriculum resources that illustrate standards in action have found success with Science Olympiad because it emphasizes the close relationship between teaching and assessment. Science Olympiad highlights many of the elements of Teaching Standards, Assessment Standards, Program Standards, and Science Education System Standards.
John J. Reilly Center
Link: http://reilly.nd.edu/
We integrate advancement of science and technology, adherence to ethical norms that uphold human dignity, and development of sound policy for the common good. Located at Notre Dame.
History
The University’s John J. Reilly Center for Science, Technology, and Values was established in 1985. It is named for the father of an alumnus whose gift created the initial endowment for the center. The center’s first academic initiative, an undergraduate minor Program in Science, Technology, and Values, was launched in 1986 with the aid of a three-year start-up grant from the National Endowment for the Humanities. Since then the Reilly Center has received external financial support for many programs and activities from the National Science Foundation as well as the NEH. In addition, the Lilly Endowment, the Hazen Endowment for Excellence, the Templeton Foundation, and the former GTE Foundation have also generously supported the work of the center.
Mission
We integrate advancement of science and technology, adherence to ethical norms that uphold human dignity, and development of sound policy for the common good. We bring different disciplinary perspectives to the examination of conceptual, ethical, and policy issues related to science and technology. We pursue our mission through education, research, and outreach in a Catholic context.
Michiana Fast Forward
Link: http://www.MichianaFastForward.com
Michiana Fast Forward was formed to:
Thank you for stopping by to explore and be sure to sign-up for the Hyperdrive Newsletter and join us on Social Media to find out about new tech development and events. Join us on the Hyperdrive into the Future...
Mr. Mug - Michiana Users Group
Link: http://www.mrmug.org/default.htm
Meeting Time: Third Thursday of the Month, Noon
Meeting Place: CIBER, 4100 Edison Lakes Pkwy, Mishawaka, IN 46545
574-247-4867
Description: Monthly educational lunch forums for IT professionals in the Michiana area.
Pfeil Innovation Center
Link: http://wakeupandsmelltheinnovation.com/
The Pfeil Innovation Center in South Bend, Indiana is helping for-profit and non-profit organizations develop Innovation as a core competency. Named in honor of Indiana entrepreneur, innovation pioneer and visionary businessman Richard J. Pfeil, the Center opened March 2011, and to date, more than 500 enthusiastic thinkers from over 60 future-minded organizations are inspired to innovate daily after attending one of the Center's two-day Innovation Leadership Immersions.
Project Lead the Way
Link: http://www.pltw.org/indiana-university-purdue-university-indianapolis-iupui
Indiana University-Purdue University Indianapolis (IUPUI) is an urban research and academic health sciences campus, with 22 schools and academic units granting degrees in more than 200 programs from both Indiana University and Purdue University. IUPUI was created in 1969 as a partnership between Indiana and Purdue universities, with IU as the managing partner. IUPUI is IU’s home campus for programs in medicine, law, dentistry, nursing, health and rehabilitation science, and social work and extends its program offerings through IUPU Columbus. With more than 29,000 students, IUPUI is the third-largest campus in the state of Indiana.
IUPUI offers college credit, 100-level Biology, to any high school student who applies, passes the national assessment at a stanine of 6 or higher, and matriculates to IUPUI. Stanines of 6 or 7 qualify for a B, while stanines of 8 or 9 qualify for an A.
There are three PLTW Biomedical Sciences IUPUI coursework certification track options at IUPUI. The cost to the student will be $25 per credit hour ($75 for the course).
Science Education Foundation of Indiana Inc.
Link: http://www.sefi.org/index.html
SEFI is a not-for-profit organization whose purpose is to encourage and assist young people to become scientists and engineers and to practice their professions in Indiana. The membership of the Board of Directors is composed of volunteers from industry, the not-for-profit sector, and academia who are committed to enhancing science education in Indiana. Our mission statement is embodied in deliverable outcomes.
Specifically, the youth of our state are encouraged and assisted through the following activities.
Although SEFI is not a household word in Indiana, we are well acquainted with the over 1,000 Indiana students who were sent by our organization to the International Science Fair over a 30-year period. A recent survey of previous fair alumni revealed that many of these young adults are practicing their professions here in Indiana. SEFI is a solid foundation investing in Indiana's future. Being a 501(c)(3) not-for-profit Indiana corporation, SEFI maintains a financial structure which supports the International fair travel of the 26 state finalists, their teachers, and Regional Science Fair Directors. The trust department at Bank One, Indianapolis, Indiana provides professional investment management for our portfolio. Our mission is also dependent upon a continuing corporate and individual philanthropy program to support the growing Hoosier Science and Engineering Fair and the achievement programs. Reaching our goals will strengthen science education in Indiana, and create linkages and opportunities for Indiana companies.
Society for Science & the Public
Link: https://www.societyforscience.org/
Society for Science & the Public (SSP) is a nonprofit 501(c) (3) membership organization dedicated to public engagement in scientific research and education. Our vision is to promote the understanding and appreciation of science and the vital role it plays in human advancement: to inform, educate, and inspire.
Since 1921, SSP (formerly known as Science Service) has conveyed the excitement of science and research directly to the public through its award-winning publications and world-class science education competitions.
In 2013, SSP launched a new website that unifies our award-winning publications with our science education and competition programs. Our new flagship website is one of a series of bold new steps we are taking across the organization to better fulfill SSP’s mission.
ISEF Society for Science & the Public
Link: https://student.societyforscience.org/intel-isef
The Intel International Science and Engineering Fair (Intel ISEF) is the world’s largest international pre-college science competition, providing an annual forum for more than 1,600 high school students from over 70 countries, regions, and territories to showcase their independent research and compete for more than $4 million in awards.
Today, millions of students worldwide compete each year in local and school-sponsored science fairs; the winners of these events go on to participate in SSP-affiliated regional and state fairs from which the best win the opportunity to attend Intel ISEF.
Intel ISEF unites these top young scientific minds, showcasing their talents on an international stage, where doctoral level scientists review and judge their work.
SSP partners with Intel—along with dozens of other corporate, academic, government and science-focused sponsors—who provide the support and awards for Intel ISEF.
Intel ISEF is hosted each year in a different city (Los Angeles, Pittsburgh, and Phoenix through 2019). The Local Arrangements Committees from each city partner with SSP and Intel to provide support for the event, including the recruitment of hundreds of volunteers and judges and the organization of an education outreach day in which more than 3,000 middle and high school students visit.
UPCOMING DATES AND LOCATIONS FOR INTEL ISEF
Los Angeles, California, May 11-16, 2014
Pittsburgh, Pennsylvania, May 10-15, 2015
Phoenix, Arizona, May 8-13, 2016
Los Angeles, California, May 14-19, 2017
Pittsburgh, Pennsylvania, May 13-18, 2018
Phoenix, Arizona, May 12-17, 2019
Local Contact Person: Mr. Glen Cook
Skills USA Indiana
Link: http://www.skillsusaindiana.org/
SkillsUSA is an applied method of instruction for preparing America’s high performance workers enrolled in public career and technical programs.
It provides quality education experiences for students in leadership, teamwork, citizenship and character development. It builds and reinforces self-confidence, work attitudes and communications skills. It emphasizes total quality at work—high ethical standards, superior work skills, life-long education, and pride in the dignity of work. SkillsUSA also promotes understanding of the free-enterprise system and involvement in community service.
Startup Weekend
Startup Weekend is a global grassroots movement of active and empowered entrepreneurs who are learning the basics of founding startups and launching successful ventures. It is the largest community of passionate entrepreneurs with over a thousand past events in 110+ countries around the world.
The non-profit organization is based in Seattle, Washington, but Startup Weekend organizers and facilitators can be found in cities around the world. From Mongolia to South Africa to London to Brazil, people around the globe are coming together for weekend-long workshops to pitch ideas, form teams, and start companies.
All Startup Weekend events follow the same basic model: anyone is welcome to pitch their startup idea and receive feedback from their peers. Teams organically form around the top ideas (as determined by popular vote) and then it’s a 54 hour frenzy of business model creation, coding, designing, and market validation. The weekends culminate with presentations in front of local entrepreneurial leaders with another opportunity for critical feedback.
Whether entrepreneurs found companies, find a co-founder, meet someone new, or learn a skill far outside their usual 9-to-5, everyone is guaranteed to leave the event better prepared to navigate the chaotic but fun world of startups. If you want to put yourself in the shoes of an entrepreneur, register now for the best weekend of your life!
Tableau Users Group - Northern Indiana
Link: www.aunalytics.com/
Meetings are organized by Aunalytics. Please contact them to be added to the mailing list and for more information. www.tableau.com/
Technology Student Association
LInk: http://www.tsaweb.org/
Who Are TSA Members?
The Technology Student Association (TSA) is the only student organization devoted exclusively to the needs of students interested in technology. Open to students enrolled in or who have completed technology education courses, TSA’s membership includes over 190,000 middle and high school students in over 2,000 schools spanning 49 states. TSA is supported by educators, parents and business leaders who believe in the need for a technologically literate society. Members learn through exciting competitive events, leadership opportunities and much more. The diversity of activities makes TSA a positive experience for every student. From engineers to business managers, our alumni credit TSA with a positive influence on their lives.
Chapters
TSA chapters take the study of STEM (science, technology, engineering, mathematics) beyond the classroom and give students the chance to pursue academic challenges among friends with similar goals and interests. Together, chapter members work on competitive events, attend conferences on the state and national levels and have a good time raising funds to get there. Chapter organization develops leadership, as members may become officers within their state and then run nationally. Our chapters are committed to a national service project and are among the most service-oriented groups in the community. | https://www.michianafastforward.com/tech-organizations.html |
---
abstract: 'Motivated by recent work on stochastic gradient descent methods, we develop two stochastic variants of greedy algorithms for possibly non-convex optimization problems with sparsity constraints. We prove linear convergence[^1] in expectation to the solution within a specified tolerance. This generalized framework applies to problems such as sparse signal recovery in compressed sensing, low-rank matrix recovery, and covariance matrix estimation, giving methods with provable convergence guarantees that often outperform their deterministic counterparts. We also analyze the settings where gradients and projections can only be computed approximately, and prove the methods are robust to these approximations. We include many numerical experiments which align with the theoretical analysis and demonstrate these improvements in several different settings.'
author:
- 'Nam Nguyen, Deanna Needell and Tina Woolf'
bibliography:
- 'all\_references.bib'
title: Linear Convergence of Stochastic Iterative Greedy Algorithms with Sparse Constraints
---
Introduction {#sec::Intro}
============
Over the last decade, the problem of high-dimensional data inference from limited observations has received significant consideration, with many applications arising from signal processing, computer vision, and machine learning. In these problems, the data often lies in a space of hundreds of thousands or even millions of dimensions, while the number of collected samples is substantially smaller. Exploiting the fact that data arising in real world applications often has very low intrinsic complexity and dimensionality, such as sparsity and low-rank structure, recently developed statistical models have been shown to perform accurate estimation and inference. These models often require solving the following optimization with the constraint that the model parameter is sparse: $$\label{opt::general loss with sparse vector constraint}
\min_w F(w) \quad\quad \text{subject to} \quad {\left\|w\right\|}_0 \leq k.$$ Here, $F(w)$ is the objective function that measures the model discrepancy, ${\left\|w\right\|}_0$ is the $\ell_0$-norm that counts the number of non-zero elements of $w$, and $k$ is a parameter that controls the sparsity of $w$.
In this paper, we study a more unified optimization that can be applied to a broader class of sparse models. First, we define a more general notion of sparsity. Given the set ${\mathcal{D}} = \{d_1,d_2,... \}$ consisting of vectors or matrices $d_i$, which we call atoms, we say that the model parameter is sparse if it can be described as a combination of only a few elements from the atomic set ${\mathcal{D}}$. Specifically, let $w \in {\mathbb{R}}^n$ be represented as $$w = \sum_{i=1}^k \alpha_i d_i, \quad\quad d_i \in {\mathcal{D}},$$ where $\alpha_i$ are called coefficients of $w$; then, $w$ is called sparse with respect to ${\mathcal{D}}$ if $k$ is relatively small compared to the ambient dimension $n$. Here, ${\mathcal{D}}$ could be a finite set (e.g. ${\mathcal{D}} = \{ e_i\}_{i=1}^n$ where the $e_i$’s are canonical basis vectors in Euclidean space), or ${\mathcal{D}}$ could be infinite (e.g. ${\mathcal{D}} = \{ u_i v_i^* \}_{i=1}^{\infty}$ where the $u_i v_i^*$’s are unit-norm rank-one matrices). This notion is general enough to handle many important sparse models such as group sparsity and low rankness (see [@CRPW_2012_J], [@NCT_gradMP_2013_J] for some examples).
Our focus in this paper is to develop algorithms for the following optimization: $$\label{opt::general min}
\min_w \underbrace{\frac{1}{M}\sum_{i=1}^M f_i (w) }_{F(w)} \quad\text{subject to} \quad {\left\|w\right\|}_{0,{\mathcal{D}}} \leq k,$$ where $f_i(w)$’s, $w \in {\mathbb{R}}^n$, are smooth functions which can be *non-convex*; ${\left\|w\right\|}_{0,{\mathcal{D}}}$ is defined as the norm that captures the sparsity level of $w$. In particular, ${\left\|w\right\|}_{0,{\mathcal{D}}}$ is the smallest number of atoms in ${\mathcal{D}}$ such that $w$ can be represented by them: $${\left\|w\right\|}_{0,{\mathcal{D}}} = \min_k \{k: w = \sum_{i \in T} \alpha_i d_i \quad\text{with} \quad |T| = k \}.$$ Also in (\[opt::general min\]), $k$ is a user-defined parameter that controls the sparsity of the model. The formulation (\[opt::general min\]) arises in many signal processing and machine learning problems, for instance, compressed sensing (e.g. [@Donoho_CS_2006_J], [@CRT_CS_2004_J]), Lasso ([@Tibshirani_Lasso_1996_J]), sparse logistic regression, and sparse graphical model estimation (e.g. [@YL_2006_J]). In the following, we provide some examples to demonstrate the generality of the optimization (\[opt::general min\]).
1\) *Compressed sensing:* The goal is to recover a signal $w^{\star}$ from the set of observations $y_i = {\left<a_i,w^{\star}\right>} + \epsilon_i$ for $i=1,...,m$. Assuming that the unknown signal $w^{\star}$ is sparse, we minimize the following to recover $w^{\star}$: $$\min_{w \in {\mathbb{R}}^n} \frac{1}{m} \sum_{i=1}^m (y_i - {\left<a_i, w\right>})^2 \quad\text{subject to} \quad {\left\|w\right\|}_0 \leq k.$$ In this problem, the set ${\mathcal{D}}$ consists of the $n$ canonical basis vectors of Euclidean space. This problem can be seen as a special case of (\[opt::general min\]) with $f_i(w) = (y_i - {\left<a_i,w\right>})^2$ and $M = m$. An alternative way to write the above objective function is $$\frac{1}{m} \sum_{i=1}^m (y_i - {\left<a_i, w\right>})^2 = \frac{1}{M} \sum_{j=1}^M \frac{1}{b} \left( \sum_{i=(j-1)b+1}^{jb} (y_i - {\left<a_i,w\right>})^2 \right),$$ where $M = m/b$. Thus, we can treat each function $f_j(w)$ as $f_j(w) = \frac{1}{b} \sum_{i=(j-1)b+1}^{jb} (y_i - {\left<a_i,w\right>})^2$. In this setting, each $f_j(w)$ accounts for a collection (or [*block*]{}) of observations of size $b$, rather than only one observation. This setting will be useful later for our proposed stochastic algorithms.
2\) *Matrix recovery:* Given $m$ observations $y_i = {\left<A_i,W^{\star}\right>} + \epsilon_i$ for $i=1,..,m$ where the unknown matrix $W^{\star} \in {\mathbb{R}}^{d_1 \times d_2}$ is assumed low-rank, we need to recover the original matrix $W^{\star}$. To do so, we perform the following minimization: $$\min_{W \in {\mathbb{R}}^{d_1 \times d_2}} \frac{1}{m} \sum_{i=1}^m (y_i - {\left<A_i, W\right>})^2 \quad\text{subject to} \quad \operatorname*{rank}(W) \leq k.$$ In this problem, the set ${\mathcal{D}}$ consists of infinitely many unit-normed rank-one matrices and the functions $f_i(W) = (y_i - {\left<A_i, W\right>})^2$. We can also write functions in the block form $f_i(W) = \frac{1}{b} \sum (y_i - {\left<A_i, W\right>})^2$ as above.
3\) *Covariance matrix estimation:* Let $x$ be a Gaussian random vector of size $n$ with covariance matrix $W^{\star}$. The goal is to estimate $W^{\star}$ from $m$ independent copies $x_1,...,x_m$ of $x$. A useful way to estimate $W^{\star}$ is to minimize the negative log-likelihood subject to a sparsity constraint on the precision matrix $\Sigma^{\star} = (W^{\star})^{-1}$. The sparsity of $\Sigma^{\star}$ encourages independence between entries of $x$. The minimization formula is as follows: $$\min_{\Sigma} \frac{1}{m} \sum_{i=1}^m {\left<x_i x_i^T, \Sigma\right>} - \log \det \Sigma \quad\text{subject to} \quad {\left\|\Sigma_{\text{off}}\right\|}_0 \leq k,$$ where $\Sigma_{\text{off}}$ is the matrix $\Sigma$ with diagonal elements set to zero. In this problem, ${\mathcal{D}}$ is the finite collection of unit-normed $n \times n$ matrices $\{ e_i e^*_j \}$ and the functions are $f_i(\Sigma) = {\left<x_ix_i^T,\Sigma\right>}- \log \det \Sigma$, so that $F(\Sigma) = \frac{1}{m}\sum_{i=1}^m f_i(\Sigma)$ matches the objective above.
Our paper is organized as follows. In the remainder of Section \[sec::Intro\] we discuss related work in the literature and highlight our contributions; we also describe notations used throughout the paper and assumptions employed to analyze the algorithms. We present our stochastic algorithms in Sections \[sec::StoIHT\] and \[sec::StoGradMP\], where we theoretically show the linear convergence rate of the algorithms and include a detailed discussion. In Section \[sec::inexact StoIHT and StoGradMP\], we explore various extensions of the two proposed algorithms and also provide the theoretical result regarding the convergence rate. We apply our main theoretical results in Section \[sec::Estimates\], in the context of sparse linear regression and low-rank matrix recovery. In Section \[sec::experiments\], we demonstrate several numerical simulations to validate the efficiency of the proposed methods and compare them with existing deterministic algorithms. Our conclusions are given in Section \[sec::conclusion\]. We reserve Section \[sec::proofs\] for our theoretical analysis.
Related work and our contribution
---------------------------------
Sparse estimation has a long history, and during its development there have been many great ideas along with efficient algorithms for solving (at least approximately) the optimization problem (\[opt::general min\]). We sketch here some main lines, which are by no means exhaustive.
**Convex relaxation.** Optimization based techniques arose as a natural convex relaxation to the problem of sparse recovery (\[opt::general min\]). There is now a massive amount of work in the field of Compressive Sensing and statistics [@candes2006compressive; @CSwebpage] that demonstrates these methods can accurately recover sparse signals from a small number of noisy linear measurements. Given noisy measurements $y = Aw^{\star} + e$, one can solve the $\ell_1$-minimization problem $$\hat{w} = \operatorname*{argmin}_w \|w\|_1 \quad\text{such that}\quad \|Aw - y\|_2 \leq \varepsilon,$$ where $\varepsilon$ is an upper bound on the noise $\|e\|_2 \leq \varepsilon$. Candès, Romberg and Tao [@CT_CSdecoding_2005_J; @CRT_Stability_2006a_J] prove that under a deterministic condition on the matrix $A$, this method accurately recovers the signal, $$\label{recov}
\|w^\star - \hat{w}\|_2 \leq C\left(\varepsilon + \frac{\|w^\star - w^\star_k\|_1}{\sqrt{k}}\right),$$ where $C$ is an absolute constant and $w^\star_k$ denotes the best $k$-term approximation of $w^\star$, consisting of its $k$ largest entries in magnitude. The deterministic condition is called the Restricted Isometry Property (RIP) [@CT_CSdecoding_2005_J] and requires that the matrix $A$ behave nicely on sparse vectors: $$(1-\delta)\|x\|_2^2 \leq \|Ax\|_2^2 \leq (1+\delta)\|x\|_2^2 \quad\text{for all $k$-sparse vectors $x$},$$ for some small enough $\delta < 1$.
The convex approach is also extended beyond the quadratic objectives. In particular, the convex relaxation of the optimization (\[opt::general min\]) is the following: $$\label{opt::convex relaxation}
\min_w F(w) + \lambda {\left\|w\right\|},$$ where the regularization ${\left\|w\right\|}$ is used to promote sparsity; for instance, it can be the $\ell_1$ norm (vector case) or the nuclear norm (matrix case). Many methods have been developed to solve these problems, including interior point methods and first-order iterative methods such as (proximal) gradient descent and coordinate gradient descent (e.g. [@kim2007interior; @FNW07:Gradient-Projection; @DDM04:Iterative-Thresholding]). The theoretical analyses of these algorithms have also been studied, with either linear or sublinear rates of convergence depending on the assumptions imposed on the function $F(w)$. In particular, a sublinear convergence rate is obtained if $F(w)$ is a convex and smooth function, whereas a linear convergence rate is achieved when $F(w)$ is smooth and strongly convex. For problems such as compressed sensing, although the loss function $F(w)$ does not possess the strong convexity property, experiments still show linear convergence behavior of the gradient descent method. In the recent work [@ANW_2012_J], the authors develop theory to explain this behavior. They prove that as long as the function $F(w)$ obeys restricted strong convexity and restricted smoothness, properties similar to the RIP, the gradient descent algorithm obtains a linear rate.
**Greedy pursuits.** More in line with our work are greedy approaches. These algorithms reconstruct the signal by identifying elements of the support iteratively. Once an accurate support set is located, a simple least-squares problem recovers the signal accurately. Greedy algorithms like Orthogonal Matching Pursuit (OMP) [@Tropp_OMP_2004_J] and Regularized OMP (ROMP) [@NV_2010_J] offer a much faster runtime than the convex relaxation approaches but lack comparably strong recovery guarantees. Recent work on greedy methods like Compressive Sampling Matching Pursuit (CoSaMP) and Iterative Hard Thresholding (IHT) offers both the advantage of a fast runtime and essentially the same recovery guarantees as (\[recov\]) (e.g. [@NT_CoSaMP_2010_J; @BD_IHT_2009_J; @zhang2011sparse; @Fourcard_2012_RIP]). However, these algorithms have only been applied to problems in compressed sensing where the least-squares loss is used to measure the discrepancy. There certainly exist many loss functions that are commonly used in statistical machine learning and do not exhibit quadratic structure, such as the log-likelihood loss. Therefore, it is necessary to develop efficient algorithms to solve (\[opt::general min\]).
There are several methods proposed to solve special instances of (\[opt::general min\]). [@SSZ_2010_J] and [@SGS_2011_C] propose the forward selection method for sparse vector and low-rank matrix recovery. The method selects each nonzero entry or each rank-one matrix in an iterative fashion. [@YY_2012_C] generalizes this algorithm to the more general dictionary ${\mathcal{D}}$. [@Zhang_2011_J] proposes the forward-backward method in which an atom can be added or removed from the set, depending on how much it contributes to decreasing the loss function. [@JJR_2011_C] extends this algorithm beyond the quadratic loss studied in [@Zhang_2011_J]. [@BRB_GraSP_2013_J] extends the CoSaMP algorithm to a more general loss function. Very recently, [@NCT_gradMP_2013_J] further generalizes CoSaMP and proposes the Gradient Matching Pursuit (GradMP) algorithm to solve (\[opt::general min\]). This is perhaps the first greedy algorithm for (\[opt::general min\]) - the very general form of sparse recovery. They show that under a restricted convexity assumption on the objective function, the algorithm converges linearly to the optimal solution. This desirable property is also possessed by CoSaMP. We note that other algorithms have also been extended to the setting of sparsity in an arbitrary ${\mathcal{D}}$, but only for the quadratic loss setting; see e.g. [@davenport2012signal; @giryes2012rip; @giryes2014greedy; @giryes2013greedy].
We outline the GradMP method here, since it will be used as motivation for the work we propose. GradMP [@NCT_gradMP_2013_J] is a generalization of the CoSaMP [@NT_CoSaMP_2010_J] that solves a wider class of sparse reconstruction problems. Like OMP, these methods consist of four main steps: i) form a signal proxy, ii) select a set of large entries of the proxy, iii) use those as the support estimation and estimate the signal via least-squares, and iv) prune the estimation and repeat. Methods like OMP and CoSaMP use the proxy $A^*(y-Aw^t)$; more general methods like GradMP use the gradient $\nabla F(w^t)$ (see [@NCT_gradMP_2013_J] for details). The analysis of GradMP depends on the restricted strong convexity and restricted strong smoothness properties as in Definitions \[def:rsc\] and \[def:rss\] below, the first of which is motivated by a similar property introduced in [@NRWY_2010_C]. Under these assumptions, the authors prove linear convergence to the noise floor.
The IHT, another algorithm that motivates our work, is a simple method that begins with an estimation $w^0 = 0$ and computes the next estimation using the recursion $$w^{t+1} = H_k(w^t + A^*(y - Aw^t)),$$ where $H_k$ is the thresholding operator that sets all but the largest (in magnitude) $k$ coefficients of its argument to zero. Blumensath and Davies [@BD_IHT_2009_J] prove that under the RIP, IHT provides a recovery bound comparable to (\[recov\]). [@JMD_SVP_2010_C] extends IHT to matrix recovery. Very recently, [@YLZ_2014_C] proposes the Gradient Hard Thresholding Pursuit (GraHTP), an extension of IHT to solve a special vector case of (\[opt::general min\]).
**Stochastic convex optimization.** Methods for stochastic convex optimization have been developed in a large body of related but somewhat independent work. We discuss only a few here which motivated our work, and refer the reader to e.g. [@spall2005introduction; @boyd2004convex] for a more complete survey. Stochastic Gradient Descent (SGD) aims to minimize a convex objective function using unbiased stochastic gradient estimates, typically of the form $\nabla f_i(w)$ where $i$ is chosen stochastically. For the optimization (\[opt::general min\]) with no constraint, this can be summarized concisely by the update rule $$w^{t+1} = w^t - \alpha \nabla f_i(w^t),$$ for some step size $\alpha$. For smooth objective functions $F(w)$, classical results demonstrate a $1/t$ convergence rate with respect to the objective difference $F(w^t) - F(w^\star)$. In the strongly convex case, Bach and Moulines [@bach2011] improve this convergence to a linear rate, depending on the average squared condition number of the system. Recently, Needell et al. draw on connections to the Kaczmarz method (see [@Kac37:Angenaeherte-Aufloesung; @SV09:Randomized-Kaczmarz] and references therein), and improve this to a linear dependence on the uniform condition number [@needell2013stochastic]. Another line of work is Stochastic Coordinate Descent (SCD), beginning with the work of [@Nesterov_SCD_2012_J]. An extension to minimization of the composite functions in (\[opt::convex relaxation\]) is described in [@RT_BSCD_2011_J].
**Contribution.** In this paper, we exploit ideas from IHT [@BD_IHT_2009_J], CoSaMP [@NT_CoSaMP_2010_J] and GradMP [@NCT_gradMP_2013_J] as well as the recent results in stochastic optimization [@SV09:Randomized-Kaczmarz; @needell2013stochastic], and propose two new algorithms to solve (\[opt::general min\]). The IHT and CoSaMP algorithms have been remarkably popular in the signal processing community due to their simplicity and computational efficiency in recovering sparse signals from incomplete linear measurements. However, these algorithms are mostly used to solve problems in which the objective function is quadratic and it would be beneficial to extend the algorithmic ideas to the more general objective function.
We propose in this paper stochastic versions of the IHT and GradMP algorithms, which we term Stochastic IHT (StoIHT) and Stochastic GradMP (StoGradMP). These algorithms possess favorable properties toward large scale problems:
- The algorithms do not need to compute the full gradient of $F(w)$. Instead, at each iteration, they only sample one index $i \in [M]=\{1,2,...,M \}$ and compute the gradient of the associated $f_i(w)$. This is particularly efficient in large scale settings in which the gradient computation is often prohibitively expensive.
- The algorithms do not need to perform an optimal projection at each iteration, as required by IHT and CoSaMP. An approximate projection is generally sufficient to guarantee linear convergence, while the algorithms enjoy a significant computational improvement.
- Under the restricted strong convexity assumption of $F(w)$ and the restricted strong smoothness assumption of $f_i(w)$ (defined below), the two proposed algorithms are guaranteed to converge linearly to the optimal solution.
- The algorithms and proofs can be extended further to consider other variants such as inexact gradient computations and inexact estimation.
Notations and assumptions
-------------------------
**Notation:** For a set $\Omega$, let $|\Omega|$ denote its cardinality and $\Omega^c$ denote its complement. We will write $D$ as the matrix whose columns consist of elements of ${\mathcal{D}}$, and denote $D_\Omega$ as the submatrix obtained by extracting the columns of $D$ corresponding to the indices in $\Omega$. We denote by ${\mathcal{R}}(D_{\Omega})$ the space spanned by columns of the matrix $D_{\Omega}$. Also denote by ${\mathcal{P}}_{\Omega} w$ the orthogonal projection of $w$ onto ${\mathcal{R}}(D_{\Omega})$. Given a vector $w \in {\mathbb{R}}^n$ that can be decomposed as $w = \sum_{i \in \Omega} \alpha_i d_i$, we say that the support of $w$ with respect to ${\mathcal{D}}$ is $\Omega$, denoted by $\operatorname{supp}_{{\mathcal{D}}}(w) = \Omega$. We denote by $[M]$ the set $\{1,2,...,M \}$. We also define ${\mathbb{E}}_i$ as the expectation with respect to $i$ where $i$ is drawn randomly from the set $[M]$. For a matrix $A$, we use conventional notation: ${\left\|A\right\|}$ and ${\left\|A\right\|}_F$ are the spectral and Frobenius norms of the matrix $A$. For the linear operator ${\mathcal{A}}: W \in {\mathbb{R}}^{n_1 \times n_2} \rightarrow y\in {\mathbb{R}}^m$, ${\mathcal{A}}^*: y \in {\mathbb{R}}^m \rightarrow W \in {\mathbb{R}}^{n_1 \times n_2}$ is the adjoint of ${\mathcal{A}}$.
Denote $F(w) \triangleq \frac{1}{M}\sum_{i=1}^M f_i (w)$ and let $p(1), ..., p(M)$ be the probability distribution of an index $i$ selected at random from the set $[M]$. Note that $\sum_{i=1}^M p(i) =1$. Another important observation is that if we select an index $i$ from the set $[M]$ with probability $p(i)$, then $${\mathbb{E}}_i \frac{1}{Mp(i)} f_i(w) = F(w) \quad\text{and} \quad {\mathbb{E}}_i \frac{1}{Mp(i)} \nabla f_i(w) = \nabla F(w),$$ where the expectation is with respect to the index $i$.
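The reweighting by $1/(Mp(i))$ is precisely what keeps the sampled gradient unbiased under non-uniform sampling. The following quick Monte Carlo sanity check of the second identity, using toy quadratics $f_i(w) = {\left<a_i,w\right>}^2$, is our own illustration and not part of the original development:

```python
import numpy as np

rng = np.random.default_rng(0)
M, n = 5, 3
A = rng.standard_normal((M, n))
w = rng.standard_normal(n)
# gradient of f_i(w) = <a_i, w>^2 is 2 <a_i, w> a_i
grads = np.array([2 * (A[i] @ w) * A[i] for i in range(M)])
p = rng.random(M); p /= p.sum()          # an arbitrary distribution on [M]

full_grad = grads.mean(axis=0)           # grad F = (1/M) sum_i grad f_i
idx = rng.choice(M, size=200_000, p=p)   # draw i with probability p(i)
mc_grad = (grads[idx] / (M * p[idx, None])).mean(axis=0)
print(np.max(np.abs(mc_grad - full_grad)))  # near zero: the estimator is unbiased
```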
Define $\text{approx}_k (w,\eta)$ as the operator that constructs a set $\Gamma$ of cardinality $k$ such that $$\label{eqt::approximation definition}
{\left\|{\mathcal{P}}_{\Gamma}w- w\right\|}_2 \leq \eta{\left\|w - w_k\right\|}_2,$$ where $w_k$ is the best $k$-sparse approximation of $w$ with respect to the dictionary ${\mathcal{D}}$, that is, $w_k = \operatorname*{argmin}_{y \in D_{\Gamma}, |\Gamma| \leq k} {\left\|w-y\right\|}_2$. Put another way, denote $$\Gamma^* = \operatorname*{argmin}_{|\Gamma| \leq k} {\left\|w - {\mathcal{P}}_{\Gamma} w\right\|}_2.$$ Then, we require that $$\label{eqt::approximation 1st requirement}
{\left\|w - {\mathcal{P}}_{\Gamma} w\right\|}_2 \leq \eta {\left\|w - {\mathcal{P}}_{\Gamma^*} w\right\|}_2.$$ An immediate consequence is the following inequality: $$\label{eqt::approximation consequence}
{\left\|w - {\mathcal{P}}_{\Gamma} w\right\|}_2 \leq \eta {\left\|w - {\mathcal{P}}_{R} w\right\|}_2$$ for any set $R$ of at most $k$ atoms of ${\mathcal{D}}$. This follows because ${\left\|w - {\mathcal{P}}_{\Gamma^*} w\right\|}_2 \leq {\left\|w - {\mathcal{P}}_{R} w\right\|}_2$. In addition, taking the square on both sides of the above inequality and manipulating yields $${\left\|{\mathcal{P}}_R w\right\|}_2^2 \leq \frac{1}{\eta^2} {\left\|{\mathcal{P}}_{\Gamma} w\right\|}_2^2 + \frac{\eta^2-1}{\eta^2} {\left\|w\right\|}^2_2 = {\left\|{\mathcal{P}}_{\Gamma} w\right\|}_2^2 + \frac{\eta^2-1}{\eta^2} {\left\|{\mathcal{P}}_{\Gamma^c} w\right\|}^2_2.$$ Taking the square root gives us an important inequality for our analysis later: for any set $R$ of at most $k$ atoms of ${\mathcal{D}}$, $$\label{eqt::approximation consequence 2}
{\left\|{\mathcal{P}}_R w\right\|}_2 \leq {\left\|{\mathcal{P}}_{\Gamma} w\right\|}_2 + \sqrt{\frac{\eta^2-1}{\eta^2}} {\left\|{\mathcal{P}}_{\Gamma^c} w\right\|}_2.$$
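For intuition, when ${\mathcal{D}}$ is the canonical basis $\{e_i\}_{i=1}^n$, the exact operator $\text{approx}_k(\cdot, 1)$ is plain hard thresholding. A minimal NumPy sketch of this special case (our illustration; the function names are hypothetical):

```python
import numpy as np

def approx_k(w, k):
    """Support of the best k-term approximation in the canonical basis
    (an exact instance of approx_k, i.e. eta = 1)."""
    return np.argsort(np.abs(w))[-k:]

def proj(w, support):
    """Orthogonal projection P_Gamma w onto span{e_j : j in support}."""
    out = np.zeros_like(w)
    out[support] = w[support]
    return out

w = np.array([0.1, -3.0, 0.5, 2.0, -0.2])
Gamma = approx_k(w, 2)
print(Gamma, proj(w, Gamma))  # support {3, 1} and [0., -3., 0., 2., 0.]
```

Approximate instances with $\eta > 1$ arise when the selection is carried out by a faster, inexact routine (e.g. a randomized SVD in the low-rank case).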
**Assumptions:** Before describing the two algorithms in the next section, we provide assumptions for the functions $f_i(w)$ as well as $F(w)$. The first assumption requires that $F(w)$ is restricted strongly convex with respect to the set ${\mathcal{D}}$. Although we do not require $F(w)$ to be globally convex, it is necessary that $F(w)$ is convex in certain directions to guarantee the linear convergence of our proposed algorithms. The intuition is that our greedy algorithms only move along certain directions in seeking the optimal solution. Thus, a global convexity assumption is not necessary.
\[${\mathcal{D}}$-restricted strong convexity (${\mathcal{D}}$-RSC)\]\[def:rsc\] The function $F(w)$ satisfies the ${\mathcal{D}}$-RSC if there exists a positive constant $\rho^-_{k}$ such that $$\label{eqt::D-RSC}
F(w') - F(w) - {\left<\nabla F(w), w' - w\right>} \geq \frac{\rho^-_k}{2} {\left\|w'-w\right\|}_2^2,$$ for all vectors $w$ and $w'$ of size $n$ such that $|\operatorname{supp}_{{\mathcal{D}}}(w) \cup \operatorname{supp}_{{\mathcal{D}}}(w')| \leq k$.
We notice that the left-hand side of the above inequality relates to the Hessian matrix of $F(w)$ (provided $F(w)$ is smooth) and the assumption essentially implies the positive definiteness of the $k \times k$ Hessian submatrices. We emphasize that this assumption is much weaker than a strong convexity assumption imposed on the full $n$-dimensional space, where the latter implies the positive definiteness of the full Hessian matrix. In fact, when $k=n$, $F(w)$ is a strongly convex function with parameter $\rho^-_{k}$, and when $\rho^-_{k} = 0$, $F(w)$ is a convex function. We also highlight that the ${\mathcal{D}}$-RSC assumption is particularly relevant when studying statistical estimation problems in the high-dimensional setting. In this setting, the number of observations is often much less than the dimension of the model parameter and therefore, the Hessian matrix of the loss function $F(w)$ used to measure the data fidelity is highly ill-posed.
In addition, we require that $f_i(w)$ satisfies the so-called ${\mathcal{D}}$-restricted strong smoothness which is defined as follows:
\[${\mathcal{D}}$-restricted strong smoothness (${\mathcal{D}}$-RSS)\]\[def:rss\] The function $f_i(w)$ satisfies the ${\mathcal{D}}$-RSS if there exists a positive constant $\rho^+_{k}(i)$ such that $${\left\|\nabla f_i(w') - \nabla f_i(w)\right\|}_2 \leq \rho^+_{k} (i) {\left\|w'-w\right\|}_2$$ for all vectors $w$ and $w'$ of size $n$ such that $|\operatorname{supp}_{{\mathcal{D}}}(w) \cup \operatorname{supp}_{{\mathcal{D}}}(w')| \leq k$.
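As a concrete instance of Definition \[def:rss\] (a worked example of ours, not taken from the source), consider the block least-squares functions $f_j(w) = \frac{1}{b}\sum_{i=(j-1)b+1}^{jb} (y_i - {\left<a_i,w\right>})^2$ from the compressed sensing example, and write $A_j$ for the $b \times n$ matrix whose rows are the corresponding $a_i$. Then $$\nabla f_j(w') - \nabla f_j(w) = \frac{2}{b} A_j^T A_j (w' - w),$$ and if $w'-w$ is supported on a set $\Omega$ with $|\Omega| \leq k$, then ${\left\|A_j(w'-w)\right\|}_2 \leq {\left\|A_{j,\Omega}\right\|} {\left\|w'-w\right\|}_2$. Hence a crude admissible constant is $\rho^+_{k}(j) \leq \frac{2}{b}{\left\|A_j\right\|} \max_{|\Omega| \leq k} {\left\|A_{j,\Omega}\right\|}$; sharper values follow from the restricted singular values of $A_j$.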
Variants of these two assumptions have been used to study the convergence of the projected gradient descent algorithm [@ANW_2012_J]. In fact, the names restricted strong convexity and restricted strong smoothness are adopted from [@ANW_2012_J].
In this paper, we assume that the functions $f_i(w)$ satisfy ${\mathcal{D}}$-RSS with constants $\rho^+_k(i)$ for all $i = 1,...,M$ and $F(w)$ satisfies ${\mathcal{D}}$-RSC with constant $\rho^-_k$. The following quantities will be used extensively throughout the paper: $$\label{eqt::parameters}
\alpha_{k} \triangleq \max_i \frac{\rho^+_{k}(i)}{Mp(i)}, \quad \rho^+_{k} \triangleq \max_i \rho^+_{k}(i), \quad \text{and} \quad\overline{\rho}^+_k \triangleq \frac{1}{M} \sum_{i=1}^M \rho^+_k(i).$$
Stochastic Iterative Hard Thresholding (StoIHT) {#sec::StoIHT}
===============================================
In this section, we describe the Stochastic Iterative Hard Thresholding (StoIHT) algorithm to solve (\[opt::general min\]). The algorithm is provided in Algorithm \[alg:StoIHT\]. At each iteration, the algorithm performs the following standard steps:
- Select an index $i$ from the set $[M]$ with probability $p(i)$.
- Compute the gradient associated with the index just selected and move the solution along the gradient direction.
- Project the solution onto the constraint space via the approx operator defined in (\[eqt::approximation definition\]).
Ideally, we would like to compute the exact projection onto the constraint space, or equivalently the best $k$-sparse approximation of $b^t$ with respect to ${\mathcal{D}}$. However, the exact projection is often hard to evaluate or computationally expensive in many problems. Consider, for example, the large-scale matrix recovery problem, where computing the best matrix approximation would require an intensive Singular Value Decomposition (SVD), which often costs ${\mathcal{O}}(k m n)$, where $m$ and $n$ are the matrix dimensions. On the other hand, recent linear algebraic advances allow computing an approximate SVD in only ${\mathcal{O}}(k^2 \max\{m,n\})$. Thus, approximate projections can yield a significant computational gain in each iteration. Of course, the price paid for fast approximate projections is a slower convergence rate. In Theorem \[thm::StoIHT\] we will show this trade-off.
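To illustrate this trade-off in the matrix setting before stating the algorithm, here is a minimal NumPy sketch of ours (not from the original paper): the exact rank-$k$ projection via a truncated SVD, where substituting a randomized, approximate SVD would realize an approximate projection with $\eta > 1$:

```python
import numpy as np

def project_rank_k(B, k):
    """Exact rank-k projection of B via a truncated SVD (eta = 1).
    A randomized/approximate SVD here would give an approximate
    projection (eta > 1), trading per-iteration accuracy for speed."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

B = np.random.default_rng(0).standard_normal((50, 40))
print(np.linalg.matrix_rank(project_rank_k(B, 5)))  # 5
```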
**input:** $k$, $\gamma$, $\eta$, $p(i)$, and stopping criterion **initialize:** $w^0$ and $t=0$
---------------- ------------------------------------------------------------
**randomize:** select an index $i_t$ from $[M]$ with probability $p(i_t)$
**proxy:** $b^t = w^t - \frac{\gamma}{Mp(i_t)} \nabla f_{i_t} (w^t)$
**identify:** $\Gamma^t = \text{approx}_k (b^t, \eta)$
**estimate:** $w^{t+1} = {\mathcal{P}}_{\Gamma^t} (b^t)$
$t = t+1$
---------------- ------------------------------------------------------------
**output:** $\hat{w} = w^t$
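To make these steps concrete, below is a minimal NumPy sketch of StoIHT specialized to the sparse linear regression setting of Section \[sec::Estimates\], assuming uniform sampling $p(i) = 1/M$ and the exact projection $\eta = 1$ (so that $\text{approx}_k$ reduces to hard thresholding onto the $k$ largest-magnitude entries); the function name and default arguments are ours, not part of the algorithm specification.

```python
import numpy as np

def sto_iht(A, y, k, b, gamma=1.0, n_iter=500, seed=0):
    """StoIHT sketch for min_w (1/2m)||y - Aw||_2^2 s.t. ||w||_0 <= k,
    with F(w) split into M = m/b blocks; under uniform sampling the
    proxy scaling gamma / (M p(i)) reduces to gamma."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    M = m // b                                       # number of blocks
    w = np.zeros(n)
    for _ in range(n_iter):
        i = rng.integers(M)                          # randomize
        rows = slice(i * b, (i + 1) * b)
        grad = A[rows].T @ (A[rows] @ w - y[rows]) / b   # gradient of f_i(w)
        bt = w - gamma * grad                        # proxy
        Gamma = np.argsort(np.abs(bt))[-k:]          # identify (exact approx_k)
        w = np.zeros(n)
        w[Gamma] = bt[Gamma]                         # estimate
    return w
```

Replacing the hard-thresholding step with any operator satisfying (\[eqt::approximation definition\]) yields the general algorithm with tolerance $\eta > 1$.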
Denote $w^{\star}$ as a feasible solution of (\[opt::general min\]). Our main result provides the convergence rate of the StoIHT algorithm via characterizing the $\ell_2$-norm error of $t$-th iterate $w^t$ with respect to $w^{\star}$. We first define some quantities necessary for a precise statement of the theorem. First, we denote the *contraction coefficient* $$\label{eqt::kappa of StoIHT}
\kappa \triangleq 2 \sqrt{\left(1- \gamma (2 - \gamma \alpha_{3k}) \rho^-_{3k} \right)} + \sqrt{(\eta^2-1)\left(1 + \gamma^2 \alpha_{3k} \overline{\rho}^+_{3k} - 2\gamma \rho^-_{3k} \right)},$$ where the quantities $\alpha_{3k}$, $\overline{\rho}^+_{3k}$, $\rho^-_{3k}$, and $\eta$ are defined in (\[eqt::parameters\]), (\[eqt::D-RSC\]), and (\[eqt::approximation definition\]). As will become clear later, the contraction coefficient $\kappa$ controls the algorithm’s rate of convergence and is required to be less than unity. Intuitively, $\kappa$ depends on the characteristics of the objective function (via the ${\mathcal{D}}$-RSC and ${\mathcal{D}}$-RSS constants $\rho^+_{3k}$ and $\rho^-_{3k}$), the user-defined step size, the probability distribution, and the approximation error. The price paid for allowing a larger approximation error $\eta$ is a slower convergence rate, since $\kappa$ grows with $\eta$; however, $\eta$ cannot be taken too large, since $\kappa$ must remain below one.
We also define the *tolerance parameter* $$\label{eqt::sigma of StoIHT}
\sigma_{w^{\star}} \triangleq \frac{\gamma}{\min_i Mp(i)} \left(2 {\mathbb{E}}_i \max_{|\Omega|\leq 3k} {\left\|{\mathcal{P}}_{\Omega} \nabla f_i(w^{\star})\right\|}_2 + \sqrt{\eta^2 - 1} {\mathbb{E}}_i{\left\|\nabla f_i(w^{\star})\right\|}_2\right),$$ where $i$ is an index selected from $[M]$ with probability $p(i)$. Of course when $w^\star$ minimizes all components $f_i$, we have $\sigma_{w^{\star}}=0$; otherwise $\sigma_{w^{\star}}$ measures (a modified version of) the usual noise variance in stochastic optimization.
In terms of these two ingredients, we now state our first main result. The proof is deferred to Section \[sub::proof of theorem 1\].
\[thm::StoIHT\] Let $w^{\star}$ be a feasible solution of (\[opt::general min\]) and $w^0$ be the initial solution. At the $(t+1)$-th iteration of Algorithm \[alg:StoIHT\], the expectation of the recovery error is bounded by $${\mathbb{E}}{\left\|w^{t+1} - w^{\star}\right\|}_2 \leq \kappa^{t+1} {\left\|w^0 - w^{\star}\right\|}_2 + \frac{\sigma_{w^{\star}} }{(1-\kappa)}$$ where $\sigma_{w^\star}$ is defined by (\[eqt::sigma of StoIHT\]), $\kappa$ is defined by (\[eqt::kappa of StoIHT\]) and is assumed to be strictly less than unity, and the expectation is taken over all choices of random variables $i_0,...,i_t$.
The theorem demonstrates a linear convergence for the StoIHT even though the full gradient computation is not available. This is a significant computational advantage in large-scale settings where computing the full gradient often requires performing matrix multiplications with matrix dimensions in the millions. In addition, a stochastic approach may also gain advantages from parallel implementation. We emphasize that the result of Theorem \[thm::StoIHT\] holds for any feasible solution $w^{\star}$ and the error of the $(t+1)$-th iterate is mainly governed by the second term involving the gradient of $\{ f_i(w^{\star}) \}_{i=1,...,M}$. For certain optimization problems, we expect that the energy of these gradients associated with the global optimum is small. For statistical estimation problems, the gradient of the true model parameter often involves only the statistical noise, which is small. Thus, after a sufficient number of iterations, the error between $w^{t+1}$ and the true statistical parameter is only controlled by the model noise.
The result is significantly simpler when the optimal projection is available at each iteration. That is, the algorithm is always able to find the set $\Gamma^t$ such that $w^{t+1}$ is the best $k$-sparse approximation of $b^t$. In this case, $\eta = 1$ and the contraction coefficient $\kappa$ in (\[eqt::kappa of StoIHT\]) is simplified to $$\kappa = 2 \sqrt{\left(1- \gamma (2 - \gamma \alpha_{3k}) \rho^-_{3k} \right)},$$ with $\alpha_{3k} = \max_i \frac{\rho^+_{3k}(i)}{Mp(i)}$ and $\sigma_{w^{\star}} = \frac{2\gamma}{\min_i Mp(i)} {\mathbb{E}}_i \max_{|\Omega| \leq 3k} {\left\|{\mathcal{P}}_{\Omega} \nabla f_i(w^{\star})\right\|}_2$. In order for $\kappa < 1$, we need $\rho^-_{3k} \geq \frac{3}{4} \alpha_{3k} = \frac{3}{4} \max_i \frac{\rho^+_{3k}(i)}{Mp(i)}$ and $$\gamma < \frac{1 + \sqrt{1 - \frac{3\alpha_{3k}}{4\rho^-_{3k}}}}{\alpha_{3k}}.$$
The following corollary provides an interesting particular choice of the parameters for which Theorem \[thm::StoIHT\] is easier to interpret.
Suppose that $\rho^-_{3k} \geq \frac{3}{4}\rho^+_{3k}$. Select $\gamma = \frac{1}{\alpha_{3k}}$, $\eta = 1$ and the probability distribution $p(i)=\frac{1}{M}$ for all $i=1,...,M$. Then using the quantities defined by (\[eqt::parameters\]), $${\mathbb{E}}{\left\|w^{t+1}-w^{\star}\right\|}_2 \leq \kappa^{t+1} {\left\|w^0-w^{\star}\right\|}_2 + \frac{2\gamma}{(1-\kappa)\min_i Mp(i)} {\mathbb{E}}_i \max_{|\Omega| \leq 3k}{\left\|{\mathcal{P}}_{\Omega} \nabla f_i(w^{\star})\right\|}_2,$$ where $\kappa = 2\sqrt{1-\frac{\rho^-_{3k}}{\rho^+_{3k}} }$.
When the exact projection is not available, we would like to know how large $\eta$ can be while StoIHT still converges linearly. It is clear from (\[eqt::kappa of StoIHT\]) that for a given step size $\gamma$, a larger $\eta$ leads to a larger $\kappa$, i.e., a slower convergence rate. Since the algorithm requires $\kappa < 1$, $\eta^2$ must at least satisfy $$\eta^2 \leq 1 + \frac{1}{1 + \gamma^2 \alpha_{3k} \overline{\rho}^+_{3k} - 2\gamma \rho^-_{3k} }.$$ For $\gamma = \frac{1}{\rho^+_{3k}}$ and $p(i) = \frac{1}{M}$, $i=1,...,M$, the bound simplifies to $\eta^2 \leq 1 + \frac{1}{2( 1- \rho^-_{3k})}$. This bound implies that the approximation error in (\[eqt::approximation 1st requirement\]) should be at most a factor ($1+\epsilon$) away from the exact projection error, where $\epsilon \in (0,1)$.
In Algorithm \[alg:StoIHT\], the projection tolerance $\eta$ is fixed during the iterations. However, there is a flexibility in changing it every iteration. The advantage of this flexibility is that this parameter can be set small during the first few iterations where the convergence is slow and gradually increased for the later iterations. Denoting the projection tolerance at the $j$-th iteration by $\eta^j$, we define the *contraction coefficient* at the $j$-th iteration: $$\label{eqt::kappa_t of StoIHT}
\kappa_j \triangleq 2 \sqrt{\left(1- \gamma (2 - \gamma \alpha_{3k}) \rho^-_{3k} \right)} + \sqrt{((\eta^j)^2-1)\left(1 + \gamma^2 \alpha_{3k} \overline{\rho}^+_{3k} - 2\gamma \rho^-_{3k} \right)},$$ and the *tolerance parameter* $\sigma_{w^{\star}} \triangleq \max_{j \in [t]} \sigma^j_{w^{\star}}$ where $$\label{eqt::sigma_t of StoIHT}
\sigma^j_{w^{\star}} \triangleq \frac{\gamma}{\min_i Mp(i)} \left(2 {\mathbb{E}}_i \max_{|\Omega|\leq 3k} {\left\|{\mathcal{P}}_{\Omega} \nabla f_i(w^{\star})\right\|}_2 + \sqrt{(\eta^j)^2 - 1} \max_i {\mathbb{E}}_i{\left\|\nabla f_i(w^{\star})\right\|}_2\right).$$
The following corollary shows the convergence of the StoIHT algorithm in the case where the projection tolerance is allowed to vary at each iteration:
At the $(t+1)$-th iteration of Algorithm \[alg:StoIHT\], the recovery error is bounded by $${\mathbb{E}}{\left\|w^{t+1}-w^{\star}\right\|}_2 \leq {\left\|w^0-w^{\star}\right\|}_2 \prod_{j=0}^{t+1} \kappa_j + \sigma_{w^{\star}} \sum_{i=0}^t \prod_{j=t-i}^t \kappa_j,$$ where $\kappa_j$ is defined by (\[eqt::kappa\_t of StoIHT\]), and $\sigma_{w^{\star}} = \max_{j \in [t]} \sigma^j_{w^{\star}}$ is defined via (\[eqt::sigma\_t of StoIHT\]).
Stochastic Gradient Matching Pursuit (StoGradMP) {#sec::StoGradMP}
================================================
CoSaMP [@NT_CoSaMP_2010_J] is a popular algorithm for recovering a sparse signal from its linear measurements. In [@NCT_gradMP_2013_J], the authors generalize the idea of CoSaMP and provide the GradMP algorithm, which solves a broader class of sparsity-constrained problems. In this paper, we develop a stochastic version of GradMP, namely StoGradMP, in which at each iteration only the evaluation of the gradient of a single function $f_i$ is required. The StoGradMP algorithm is described in Algorithm \[alg:StoGradMP\] and consists of the following steps at each iteration:
- Randomly select an index $i$ with probability $p(i)$.
- Compute the gradient of $f_i(w)$ with associated index $i$.
- Choose the subspace of dimension at most $2k$ to which the gradient vector is closest, then merge it with the estimated subspace from the previous iteration.
- Solve a sub-optimization problem with the search restricted on this subspace.
- Find the subspace of dimension $k$ which is closest to the solution just found. This is the estimated subspace which is hopefully close to the true subspace.
At a high level, StoGradMP can be interpreted as follows: at each iteration, the algorithm searches for a subspace based on the previous estimate and then seeks a new solution by solving a low-dimensional sub-optimization problem restricted to that subspace. Due to the ${\mathcal{D}}$-RSC assumption, the sub-optimization is convex and can thus be solved efficiently by many off-the-shelf algorithms. StoGradMP stops when a halting criterion is satisfied.
**input:** $k$, $\eta_1$, $\eta_2$, $p(i)$, and stopping criterion **initialize:** $w^0$, $\Lambda = 0$, and $t=0$
---------------- ----------------------------------------------------------------------------------------
**randomize:** select an index $i_t$ from $[M]$ with probability $p(i_t)$
**proxy:** $r^t = \nabla f_{i_t} (w^t)$
**identify:** $\Gamma = \text{approx}_{2k} (r^t, \eta_1)$
**merge:** $\widehat{\Gamma} = \Gamma \cup \Lambda $
**estimate:**    $b^t = \operatorname*{argmin}_{w} F(w) \quad \text{subject to} \quad w \in \text{span} (D_{\widehat{\Gamma}})$
**prune:** $\Lambda = \text{approx}_k (b^t, \eta_2) $
**update:** $w^{t+1} = {\mathcal{P}}_{\Lambda} (b^t)$
$t = t+1$
---------------- ----------------------------------------------------------------------------------------
**output:** $\hat{w} = w^t$
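Analogously, a minimal NumPy sketch of StoGradMP for the sparse linear regression setting, again assuming uniform sampling and exact projections ($\eta_1 = \eta_2 = 1$), in which case the identification and pruning steps pick the largest-magnitude entries and the estimation step is an exact least-squares solve restricted to the merged support; names and defaults are ours.

```python
import numpy as np

def sto_gradmp(A, y, k, b, n_iter=100, seed=0):
    """StoGradMP sketch: the estimation step minimizes the full objective
    F(w) restricted to the merged support via least squares."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    M = m // b                                       # number of blocks
    w = np.zeros(n)
    Lam = np.array([], dtype=int)                    # current support estimate
    for _ in range(n_iter):
        i = rng.integers(M)                          # randomize
        rows = slice(i * b, (i + 1) * b)
        r = A[rows].T @ (A[rows] @ w - y[rows]) / b  # proxy: gradient of f_i(w)
        Gamma = np.argsort(np.abs(r))[-2 * k:]       # identify (2k largest)
        merged = np.union1d(Gamma, Lam)              # merge
        bt = np.zeros(n)
        bt[merged] = np.linalg.lstsq(A[:, merged], y, rcond=None)[0]  # estimate
        Lam = np.argsort(np.abs(bt))[-k:]            # prune to k largest
        w = np.zeros(n)
        w[Lam] = bt[Lam]                             # update
    return w
```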
Denote $w^{\star}$ as a feasible solution of the optimization (\[opt::general min\]). We will present our main result for the StoGradMP algorithm. As before, our result controls the convergence rate of the recovery error at each iteration. We define the *contraction coefficient* $$\label{eqt::kappa of StoGradMP}
\kappa \triangleq (1+\eta_2) \sqrt{\frac{\alpha_{4k}}{\rho^-_{4k}}} \left( \max_i \sqrt{Mp(i)} \sqrt{\frac{\frac{2\eta_1^2-1}{\eta^2_1} \rho^+_{4k} - \rho^-_{4k}}{\rho^-_{4k}}} + \frac{\sqrt{\eta^2_1-1}}{\eta_1} \right),$$ where the quantities $\alpha_{4k}$, $\rho^+_{4k}$, $\rho^-_{4k}$, $\eta_1$, and $\eta_2$ are defined in (\[eqt::parameters\]), (\[eqt::D-RSC\]), and (\[eqt::approximation definition\]). As shown in the following theorem, $\kappa$ characterizes the convergence rate of the algorithm; it depends on the restricted convexity and smoothness constants, the sampling distribution, and the projection tolerances $\eta_1$ and $\eta_2$.
In addition, we define analogously as before the *tolerance parameter* $$\label{eqt::sigma of StoGradMP}
\begin{split}
\sigma_{w^{\star}} &\triangleq C (1+\eta_2) \frac{1}{\min_{i \in [M]} M p(i)} \max_{|\Omega| \leq 4k, i \in [M]} {\left\|{\mathcal{P}}_{\Omega} \nabla f_i (w^{\star})\right\|}_2,
\end{split}$$ where $C$ is defined as $C \triangleq \frac{1}{\rho^-_{4k}} \left(2 \max_{i \in [M]}Mp(i) \sqrt{\frac{\alpha_{4k}}{\rho^-_{4k}}} + 3\right) $.
We are now ready to state our result for the StoGradMP algorithm. The error bound has the same structure as that of StoIHT but with a different convergence rate.
\[thm::StoGradMP\] Let $w^{\star}$ be a feasible solution of (\[opt::general min\]) and $w^0$ be the initial solution. At the $(t+1)$-th iteration of Algorithm \[alg:StoGradMP\], the recovery error is bounded by $${\mathbb{E}}{\left\|w^{t+1}-w^{\star}\right\|}_2 \leq \kappa^{t+1} {\left\|w^0-w^{\star}\right\|}_2 + \frac{\sigma_{w^{\star}}}{1-\kappa}$$ where $\sigma_{w^{\star}}$ is defined by (\[eqt::sigma of StoGradMP\]), $\kappa$ is defined by (\[eqt::kappa of StoGradMP\]) and is assumed to be strictly less than unity, and the expectation is taken over all choices of random variables $i_0,...,i_t$.
When $p(i) = \frac{1}{M}$, $i=1,...,M$, and $\eta_1 =\eta_2= 1$ (exact projections are obtained), the contraction coefficient $\kappa$ has a very simple representation: $\kappa = 2\sqrt{\frac{\rho^+_{4k}}{\rho^-_{4k}} \left( \frac{\rho^+_{4k}}{\rho^-_{4k}} - 1 \right)}$. This expression of $\kappa$ is the same as that of the GradMP. In this situation, the requirement $\kappa < 1$ is ensured by the condition $\rho^+_{4k} < \frac{2+\sqrt{6}}{4} \rho^-_{4k}$. The following corollary provides the explicit form of the recovery error.
Using the parameters described by (\[eqt::parameters\]), suppose that $\rho^-_{4k} > \frac{4}{2+\sqrt{6}} \rho^+_{4k} $. Select $\eta_1=\eta_2 = 1$, and the probability distribution $p(i) = \frac{1}{M}$, $i=1,...,M$. Then, $${\mathbb{E}}{\left\|w^{t+1} - w^{\star}\right\|}_2 \leq \left( 2 \sqrt{\frac{\rho^+_{4k}(\rho^+_{4k} -\rho^-_{4k})}{(\rho^-_{4k})^2}} \right)^{t+1} {\left\|w^0 - w^{\star}\right\|}_2 + \sigma_{w^{\star}},$$ where $\sigma_{w^{\star}} = \frac{2}{\rho^-_{4k}} \left(2\sqrt{\frac{\rho^+_{4k}}{\rho^-_{4k}}} + 3 \right) \max_{|\Omega| \leq 4k, i \in [M]} {\left\|{\mathcal{P}}_{\Omega} \nabla f_i (w^{\star})\right\|}_2$.
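As a quick numerical sanity check on this condition (our own arithmetic, not part of the analysis): at the boundary ratio $\rho^+_{4k}/\rho^-_{4k} = \frac{2+\sqrt{6}}{4}$ one has $\frac{\rho^+_{4k}}{\rho^-_{4k}}\left(\frac{\rho^+_{4k}}{\rho^-_{4k}} - 1\right) = \frac{1}{8}$, so the contraction coefficient equals $1/\sqrt{2}$, and any admissible ratio indeed gives $\kappa < 1$.

```python
import numpy as np

r = (2 + np.sqrt(6)) / 4           # boundary value of rho+_{4k} / rho-_{4k}
kappa = 2 * np.sqrt(r * (r - 1))   # kappa for eta_1 = eta_2 = 1, p(i) = 1/M
print(kappa)                       # 0.7071... = 1/sqrt(2) < 1
```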
Similar to StoIHT, the theorem demonstrates the linear convergence of StoGradMP to the feasible solution $w^{\star}$. The expected recovery error naturally consists of two components: one relates to the convergence rate and the other concerns the tolerance factor. As long as the contraction coefficient is small (less than unity), the first component is negligible, whereas the second component can be very large depending on the feasible solution we measure. We expect the gradients of the $f_i$’s associated with the global optimum to be small, as is the case in many statistical estimation problems such as sparse linear estimation and low-rank matrix recovery, so that StoGradMP converges linearly to the optimum. We note that the linear rate here is precisely consistent with the linear rate of the original CoSaMP algorithm applied to compressed sensing problems [@NT_CoSaMP_2010_J]. Furthermore, StoGradMP gains significant computational savings over CoSaMP and GradMP since the full gradient evaluation is not required at each iteration.
In Algorithm \[alg:StoGradMP\], the parameters $\eta_1$ and $\eta_2$ are fixed during the iterations. However, they can be changed at each iteration. Denoting the projection tolerances at the $j$-th iteration by $\eta_1^j$ and $\eta_2^j$, we define the *contraction coefficient* at the $j$-th iteration as $$\label{eqt::kappa_t of StoGradMP}
\kappa_j \triangleq (1+\eta^j_2) \sqrt{\frac{\alpha_{4k}}{\rho^-_{4k}}} \left( \max_i \sqrt{Mp(i)} \sqrt{\frac{\frac{2(\eta_1^j)^2-1}{(\eta^j_1)^2} \rho^+_{4k} - \rho^-_{4k}}{\rho^-_{4k}}} + \frac{\sqrt{(\eta^j_1)^2-1}}{\eta^j_1} \right).$$ Also define the *tolerance parameter* $\sigma_{w^{\star}} \triangleq \max_{j\in[t]} \sigma^j_{w^{\star}}$ where $$\label{eqt::sigma_t of StoGradMP}
\sigma^j_{w^{\star}} \triangleq C (1+\eta^j_2) \frac{1}{\min_{i \in [M]} M p(i)} \max_{|\Omega| \leq 4k, i \in [M]} {\left\|{\mathcal{P}}_{\Omega} \nabla f_i (w^{\star})\right\|}_2$$ and $C$ is defined as $C \triangleq 2 \max_{i \in [M]}Mp(i) \sqrt{\frac{\alpha_{4k}}{\rho^-_{4k}}} + 3 $. The following corollary shows the convergence of the algorithm.
At the $(t+1)$-th iteration of Algorithm \[alg:StoGradMP\], the recovery error is bounded by $${\mathbb{E}}{\left\|w^{t+1}-w^{\star}\right\|}_2 \leq {\left\|w^0-w^{\star}\right\|}_2 \prod_{j=0}^{t+1} \kappa_j + \sigma_{w^{\star}} \sum_{i=0}^t \prod_{j=t-i}^t \kappa_j,$$ where $\kappa_j$ is defined by (\[eqt::kappa\_t of StoGradMP\]), and $\sigma_{w^{\star}} = \max_{j\in[t]} \sigma^j_{w^{\star}}$ is defined via (\[eqt::sigma\_t of StoGradMP\]).
StoIHT and StoGradMP with inexact gradients {#sec::inexact StoIHT and StoGradMP}
===========================================
In this section, we investigate the StoIHT and StoGradMP algorithms in which the gradient might not be estimated exactly. This issue occurs in many practical problems, such as distributed network optimization, in which gradients are corrupted by noise during communication over the network. In particular, in both algorithms, the gradient selected at each iteration is contaminated by a noise vector $e^t$, where $t$ indicates the iteration number. We assume $\{e^t\}_{t=1,2,...}$ are deterministic noise vectors with bounded energy.
StoIHT with inexact gradients
-----------------------------
In the StoIHT algorithm, the update $b^t$ at the proxy step has to take into account the noise appearing in the gradient. In particular, at the $t$-th iteration, $$b^t = w^t - \frac{\gamma}{M p(i_t)} \left( \nabla f_{i_t} (w^t) + e^t \right).$$
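In the NumPy sketch of Section \[sec::StoIHT\], this amounts to a one-line change in the proxy step; a minimal illustration (with `e_t` a hypothetical noise vector supplied by the setting, and uniform sampling so $Mp(i_t)=1$):

```python
import numpy as np

def noisy_proxy(A, y, w, rows, b, gamma, e_t):
    """Proxy step of StoIHT with an inexact gradient: the block gradient
    is perturbed by a bounded, deterministic noise vector e_t."""
    grad = A[rows].T @ (A[rows] @ w - y[rows]) / b
    return w - gamma * (grad + e_t)
```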
Denote the quantity $$\label{eqt::sigma_e of StoIHT}
\sigma_{e} \triangleq \frac{\gamma}{\min_{i} Mp(i)} \max_{j \in [t]} \left( 2 \max_{|\Omega| \leq 3k} {\left\|{\mathcal{P}}_{\Omega} e^j\right\|}_2 + \sqrt{\eta^2 - 1} {\left\|e^j\right\|}_2 \right).$$ We state our result in the following theorem. The proof is deferred to Section \[subsection::proof of StoIHT with inexact gradients\].
\[thm::StoIHT with inexact gradients\] Let $w^{\star}$ be a feasible solution of (\[opt::general min\]). At the $(t+1)$-th iteration of Algorithm \[alg:StoIHT\] with inexact gradients, the expectation of the recovery error is bounded by $${\mathbb{E}}{\left\|w^{t+1} - w^{\star}\right\|}_2 \leq \kappa^{t+1} {\left\|w^0 - w^{\star}\right\|}_2 + \frac{1 }{(1-\kappa)} (\sigma_{w^{\star}} + \sigma_e),$$ where $\kappa$ is defined in (\[eqt::kappa of StoIHT\]) and is assumed to be strictly less than unity and expectation is taken over all choices of random variables $i_1,...,i_t$. The quantities $\sigma_{w^{\star}}$ and $\sigma_e$ are defined in (\[eqt::sigma of StoIHT\]) and (\[eqt::sigma\_e of StoIHT\]), respectively.
Theorem \[thm::StoIHT with inexact gradients\] provides the linear convergence of StoIHT even in the setting of inexact gradient computation. The error bound shares a similar structure with that of Theorem \[thm::StoIHT\], with only an additional term related to the gradient noise. An interesting property is that the noise does not accumulate over iterations; rather, the error depends only on the largest noise level.
StoGradMP with inexact gradients
--------------------------------
In the StoGradMP algorithm, accounting for noise in the gradient appears in the proxy step; the expression of $r^t$, with an additional noise term, becomes $$r^t = \nabla f_{i_t} (w^t) + e^t.$$
Denote the quantity $$\label{eqt::sigma_e of StoGradMP}
\sigma_e \triangleq \frac{\max_i p(i)}{\rho^-_{4k} \min_i p(i)} \max_{j \in [t]} {\left\|e^j\right\|}_2.$$
We have the following theorem.
\[thm::StoGradMP with inexact gradients\] Let $w^{\star}$ be a feasible solution of (\[opt::general min\]). At the $(t+1)$-th iteration of Algorithm \[alg:StoGradMP\] with inexact gradients, the expectation of the recovery error is bounded by $${\mathbb{E}}{\left\|w^{t+1} - w^{\star}\right\|}_2 \leq \kappa^{t+1} {\left\|w^0 - w^{\star}\right\|}_2 + \frac{1 }{(1-\kappa)} (\sigma_{w^{\star}} + \sigma_e),$$ where $\kappa$ is defined in (\[eqt::kappa of StoGradMP\]) and is assumed to be strictly less than unity and expectation is taken over all choices of random variables $i_1,...,i_t$. The quantities $\sigma_{w^{\star}}$ and $\sigma_e$ are defined in (\[eqt::sigma of StoGradMP\]) and (\[eqt::sigma\_e of StoGradMP\]), respectively.
Similar to StoIHT, StoGradMP is stable under contamination of the gradient by noise: the algorithm still attains a linear convergence rate. The gradient noise only affects the tolerance term, not the contraction factor. Furthermore, the recovery error depends only on the largest gradient noise level, implying that the noise does not accumulate over iterations.
StoGradMP with inexact gradients and approximated estimation
------------------------------------------------------------
In this section, we extend the theory of the StoGradMP algorithm further to account for sub-optimality at the estimation step. Specifically, we assume that at each iteration, the algorithm only obtains an approximate solution of the sub-optimization. Denote $$\label{opt::get b^t}
\quad b^t_{\operatorname{opt}} = \operatorname*{argmin}_w F(w) \quad \text{subject to} \quad w \in \text{span} (D_{\widehat{\Gamma}}),$$ as the optimal solution of this convex optimization, where $\hat{\Gamma} = \Gamma\cup \Lambda$ may itself arise from an approximate identification step. Write $b^t$ for the approximate solution available at the estimation step. Then $b^t$ is linked to $b^t_{\operatorname{opt}}$ via the relationship ${\left\|b^t - b^t_{\operatorname{opt}}\right\|}_2 \leq \epsilon^t$. This consideration is realistic in two respects: first, the optimization (\[opt::get b\^t\]) can be too slow to converge to the optimal solution, hence we might want to stop the solver after a sufficient number of steps or whenever the solution is close to the optimum; second, even if (\[opt::get b\^t\]) has a closed-form solution, as in the least-squares case, it is still beneficial to solve it approximately in order to reduce the computational cost of the pseudo-inverse computation (see [@DMMS_leastSquare_2011_J] for an example of randomized least-squares approximation). Denoting the quantity
$$\label{eqt::sigma_epsilon}
\sigma_{\epsilon} = \max_{j \in [t]} \epsilon^j,$$
we have the following theorem.
\[thm::StoGradMP with inexact gradient and approximated estimation\] Let $w^{\star}$ be a feasible solution of (\[opt::general min\]). At the $(t+1)$-th iteration of Algorithm \[alg:StoGradMP\] with inexact gradients and approximated estimations, the expectation of the recovery error is bounded by $${\mathbb{E}}{\left\|w^{t+1} - w^{\star}\right\|}_2 \leq \kappa^{t+1} {\left\|w^0 - w^{\star}\right\|}_2 + \frac{1 }{(1-\kappa)} (\sigma_{w^{\star}} + \sigma_e + \sigma_{\epsilon}),$$ where $\kappa$ is defined in (\[eqt::kappa of StoGradMP\]) and is assumed to be strictly less than unity and expectation is taken over all choices of random variables $i_1,...,i_t$. The quantities $\sigma_{w^{\star}}$, $\sigma_e$, and $\sigma_{\epsilon}$ are defined in (\[eqt::sigma of StoGradMP\]), (\[eqt::sigma\_e of StoGradMP\]), and (\[eqt::sigma\_epsilon\]), respectively.
Theorem \[thm::StoGradMP with inexact gradient and approximated estimation\] shows the stability of StoGradMP under both the contamination of gradient noise at the proxy step and the approximate optimization at the estimation step. Furthermore, StoGradMP still achieves a linear convergence rate even in the presence of these two sources of noise. Similar to the artifacts of gradient noise, the approximated estimation affects the tolerance rate and not the contraction factor, and the recovery is only impacted by the largest approximated estimation bound (rather than an accumulation over all of the iterations).
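To illustrate such an approximate estimation step, the sketch below replaces the exact restricted least-squares solve with a few gradient-descent steps, stopped once successive iterates differ by at most a user tolerance; this stopping rule is a practical surrogate for the bound ${\left\|b^t - b^t_{\operatorname{opt}}\right\|}_2 \leq \epsilon^t$, not a certified guarantee of it.

```python
import numpy as np

def approx_estimate(A_sub, y, eps, max_inner=100):
    """Inexactly solve min_z (1/2m)||y - A_sub z||_2^2 by gradient descent
    with step 1/L; stops when the iterates move by at most eps."""
    m = A_sub.shape[0]
    L = np.linalg.norm(A_sub, 2) ** 2 / m      # Lipschitz constant of the gradient
    z = np.zeros(A_sub.shape[1])
    for _ in range(max_inner):
        g = A_sub.T @ (A_sub @ z - y) / m
        z_new = z - g / L
        if np.linalg.norm(z_new - z) <= eps:
            return z_new
        z = z_new
    return z
```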
Some estimates {#sec::Estimates}
==============
In this section we investigate some specific problems which require solving an optimization with a sparse constraint and transfer results of Theorems \[thm::StoIHT\] and \[thm::StoGradMP\].
Sparse linear regression {#subsec::Sparse linear regression}
------------------------
The first problem of interest is the well-studied sparse recovery problem, in which the goal is to recover a $k_0$-sparse vector $w_0$ from noisy observations of the following form: $$y = A w_0 + \xi.$$ Here, the $m \times n$ matrix $A$ is called the design matrix and $\xi$ is the $m$-dimensional noise vector. A natural way to recover $w_0$ from the observation vector $y$ is via solving $$\label{opt::linear regression}
\min_{w \in {\mathbb{R}}^n} \frac{1}{2m}{\left\|y - Aw\right\|}^2_2 \quad\text{subject to} \quad {\left\|w\right\|}_0 \leq k,$$ where $k$ is a user-defined parameter which is assumed greater than $k_0$. Clearly, this optimization is a special form of (\[opt::general min\]) with ${\mathcal{D}}$ being the collection of standard basis vectors in ${\mathbb{R}}^n$ and $F(w) = \frac{1}{2m}{\left\|y - Aw\right\|}^2_2$. Decompose the vector $y$ into non-overlapping subvectors $y_{b_i}$ of size $b$ and denote $A_{b_i}$ as the corresponding $b \times n$ submatrix of $A$. We can then rewrite $F(w)$ as $$F(w) = \frac{1}{M} \sum_{i=1}^M \frac{1}{2b} {\left\|y_{b_i} - A_{b_i} w\right\|}_2^2 \triangleq \frac{1}{M} \sum_{i=1}^M f_i(w),$$ where $M = m/b$. In order to apply Theorems \[thm::StoIHT\] and \[thm::StoGradMP\] to this problem, we need to compute the contraction coefficient and tolerance parameter, which involve the ${\mathcal{D}}$-RSC and ${\mathcal{D}}$-RSS conditions. It is easy to see that these two properties of $F(w)$ and $\{ f_i(w) \}_{i=1}^M$ are equivalent to the RIP studied in [@CRT_Stability_2006a_J]. In particular, we require that the matrix $A$ satisfies $$\frac{1}{m}{\left\|A w\right\|}_2^2 \geq (1-\delta_k) {\left\|w\right\|}_2^2$$ for all $k$-sparse vectors $w$. In addition, the matrices $A_{b_i}$, $i=1,...,M$, are also required to obey $$\frac{1}{b}{\left\|A_{b_i} w\right\|}_2^2 \leq (1+\delta_k) {\left\|w\right\|}_2^2$$ for all $k$-sparse vectors $w$. Here, $(1+\delta_k)$ and $(1-\delta_k)$ with $\delta_k \in (0,1]$ play the roles of $\rho^+_k(i)$ and $\rho^-_k$ in Definitions \[def:rss\] and \[def:rsc\], respectively. For the Gaussian matrix $A$ (entries are i.i.d. ${\mathcal{N}}(0, 1)$), it is well-known that these two assumptions hold as long as $m \geq \frac{Ck \log n}{\delta_k}$ and $b \geq \frac{c k \log n}{\delta_k}$. By setting the block size $b = c k \log n$, the number of blocks $M$ is thus proportional to $\frac{m}{k \log n}$.
Now using StoIHT to solve (\[opt::linear regression\]) and applying Theorem \[thm::StoIHT\], we set the step size $\gamma = 1$, the approximation error $\eta=1$, and $p(i)=1/M$, $i = 1,...,M$, for simplicity. Thus, the quantities in (\[eqt::parameters\]) are all the same and equal to $1+\delta_k$. It is easy to verify that the contraction coefficient defined in (\[eqt::kappa of StoIHT\]) is $\kappa = 2\sqrt{2\delta_{3k} - \delta_{3k}^2}$. One can obtain $\kappa \leq 3/4$ when $\delta_{3k} \leq 0.07$, for example. In addition, since $w_0$ is the feasible solution of (\[opt::linear regression\]), the tolerance parameter $\sigma_{w_0}$ defined in (\[eqt::sigma of StoIHT\]) can be rewritten as $$\sigma_{w_0} = 2 {\mathbb{E}}_i \max_{|\Omega| \leq 3k} \frac{1}{b} {\left\|{\mathcal{P}}_{\Omega} A_{b_i}^* \xi_{b_i}\right\|}_2 \leq \frac{2}{b} \sqrt{3k} \max_{i \in [M]} \max_{j \in [n]} |{\left<A_{b_i,j},\xi_{b_i}\right>}|,$$ where $A_{b_i,j}$ is the $j$-th column of the matrix $A_{b_i}$. For stochastic noise $\xi \sim {\mathcal{N}}(0,\sigma^2 I_m)$, it is easy to verify that $\sigma_{w_0} \leq c' \sqrt{\frac{\sigma^2 k \log n}{b}}$ with probability at least $1-n^{-1}$.
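The claim that $\delta_{3k} \leq 0.07$ suffices can be checked directly (our arithmetic, following the formula above):

```python
import numpy as np

delta = 0.07                                 # assumed value of delta_{3k}
kappa = 2 * np.sqrt(2 * delta - delta**2)    # kappa for gamma = eta = 1
print(kappa)                                 # 0.7351... <= 3/4
```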
Using StoGradMP to solve (\[opt::linear regression\]) with the same setting as above, we write the contraction coefficient in (\[eqt::kappa of StoGradMP\]) as $\kappa = 2\sqrt{\frac{2\delta_{4k} (1+\delta_{4k})}{(1-\delta_{4k})^2}}$, which is less than $3/4$ if $\delta_{4k} \leq 0.05$. The tolerance parameter $\sigma_{w_0}$ in (\[eqt::sigma of StoGradMP\]) can be simplified similarly to the StoIHT case. We now provide the following corollary based on the above discussion.
\[for StoIHT and StoGradMP\] \[cor::apply to sparse linear regression\] Assume $A \in {\mathbb{R}}^{m\times n}$ satisfies the ${\mathcal{D}}$-RSC and ${\mathcal{D}}$-RSS assumptions and $\xi \sim {\mathcal{N}}(0,\sigma^2 I_m)$. Then with probability at least $1-n^{-1}$, the error at the $(t+1)$-th iterate of the StoIHT and StoGradMP algorithms is bounded by $${\mathbb{E}}{\left\|w^{t+1} - w_0\right\|}_2 \leq (3/4)^{t+1} {\left\|w_0\right\|}_2 + c \sqrt{\frac{\sigma^2 k_0 \log n}{b}}.$$
We describe the convergence results of the two algorithms in one corollary since they share the same form, differing only in the constant $c$. One can see that after a sufficient number of iterations, the first term involving ${\left\|w_0\right\|}_2$ is negligible and the recovery error is dominated by the second term. When the noise is absent, both algorithms guarantee recovery of the exact $w_0$. The recovery error also depends on the block size $b$: when $b$ is small, more error is expected, and the error decreases as $b$ increases. This of course matches our intuition. We emphasize that the deterministic IHT and GradMP algorithms deliver the same recovery error with $b$ replaced by $m$.
Low-rank matrix recovery {#subsec::Low-rank matrix recovery}
------------------------
We consider the high-dimensional matrix recovery problem in which the observation model has the form $$y_j = {\left<A_j, W_0\right>} + \xi_j, \quad j = 1,...,m,$$ where $W_0$ is the $n_1 \times n_2$ unknown rank-$k_0$ matrix, each measurement matrix $A_j$ is of size $n_1 \times n_2$, and the noise terms $\xi_j$ are assumed ${\mathcal{N}}(0,\sigma^2)$. Noting that $\frac{1}{2m} \sum_{j=1}^m (y_j - {\left<A_j,W\right>})^2 = \frac{1}{2m}{\left\|y - {\mathcal{A}}(W)\right\|}_2^2$, the standard approach to recover $W_0$ is to solve the minimization $$\label{opt::matrix recovery}
\min_{W \in {\mathbb{R}}^{n_1\times n_2}} \frac{1}{2m}{\left\|y - {\mathcal{A}}(W)\right\|}_2^2\quad\text{subject to} \quad\operatorname*{rank}(W) \leq k,$$ with $k$ assumed greater than $k_0$. Here, ${\mathcal{A}}$ is the linear operator defined by $[{\mathcal{A}}(W)]_j = {\left<A_j, W\right>}$. In this problem, the set ${\mathcal{D}}$ consists of infinitely many unit-normed rank-one matrices and the objective function can be written as a summation of sub-functions: $$F(W) = \frac{1}{M} \sum_{i=1}^M f_i(W) = \frac{1}{M} \sum_{i=1}^M \left( \frac{1}{2b} \sum_{j=(i-1)b+1}^{ib} (y_j - {\left<A_j,W\right>})^2 \right) \triangleq \frac{1}{M} \sum_{i=1}^M \frac{1}{2b} {\left\|y_{b_i} - {\mathcal{A}}_i (W)\right\|}_2^2,$$ where $m = Mb$ (assuming $m/b$ is an integer). Each $f_i(W)$ accounts for a collection (or block) of observations $y_{b_i}$ of size $b$. In this case, the ${\mathcal{D}}$-RSC and ${\mathcal{D}}$-RSS properties are equivalent to the matrix-RIP [@CP_MatrixRec_2009_J], which holds for a wide class of random operators ${\mathcal{A}}$. In particular, we require $$\frac{1}{m} {\left\|{\mathcal{A}}(W)\right\|}_2^2 \geq (1-\delta_k) {\left\|W\right\|}_F^2$$ for all rank-$k$ matrices $W$. In addition, the linear operators ${\mathcal{A}}_{i}$ are required to obey $$\frac{1}{b} {\left\|{\mathcal{A}}_i(W)\right\|}_2^2 \leq (1+\delta_k) {\left\|W\right\|}_F^2$$ for all rank-$k$ matrices $W$. Here, $(1+\delta_k)$ and $(1-\delta_k)$ with $\delta_k \in (0,1]$ play the roles of $\rho^+_k(i)$ and $\rho^-_k$ in Definitions \[def:rss\] and \[def:rsc\], respectively. For the random Gaussian linear operator ${\mathcal{A}}$ (the matrices $A_j$ have i.i.d. standard Gaussian entries), it is well-known that these two assumptions hold as long as $m \geq \frac{Ck (n_1+n_2)}{\delta_k}$ and $b \geq \frac{c k (n_1+n_2)}{\delta_k}$. By setting the block size $b = c k (n_1+n_2)$, the number of blocks $M$ is thus proportional to $\frac{m}{k (n_1 + n_2)}$.
In this section, we consider applying the results of Theorems \[thm::StoIHT with inexact gradients\] and \[thm::StoGradMP with inexact gradient and approximated estimation\]. To do so, we need to compute the contraction coefficients and tolerance parameters. We begin with Theorem \[thm::StoIHT with inexact gradients\], which holds for the StoIHT algorithm. As in the previous section, $\kappa$ in (\[eqt::kappa of StoIHT\]) takes a similar form. However, in the matrix recovery problem, SVD computations are required at each iteration, which is often computationally expensive. There has been a vast amount of research on approximation methods that perform nearly as well as the exact SVD but with much faster computation. Among them is the randomized SVD [@halko2011finding], which we employ in the experimental section. For simplicity, we set the step size $\gamma = 1$ and $p(i) = 1/M$ for all $i$. Thus, the quantities in (\[eqt::parameters\]) are the same and equal to $1+\delta_k$. Rewriting $\kappa$ in (\[eqt::kappa of StoIHT\]), we have $$\kappa = 2 \sqrt{2\delta_{3k} - \delta^2_{3k}} + \sqrt{(\eta^2-1)(\delta_{3k}^2 + 4\delta_{3k} )},$$ where we recall $\eta$ is the projection tolerance. Setting $\kappa \leq 3/4$ by requiring the first term to be less than $1/2$ and the second term less than $1/4$, we obtain $\delta_{3k} \leq 0.03$ and the approximation error $\eta$ is allowed up to $1.19$.
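A minimal sketch of such an approximate projection, in the spirit of the randomized SVD of [@halko2011finding] (this basic range-finder is our own illustration; the experiments may use a more refined variant):

```python
import numpy as np

def approx_rank_k(B, k, oversample=5, seed=0):
    """Approximate projection of B onto the set of rank-k matrices via a
    randomized range finder followed by a small exact SVD."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((B.shape[1], k + oversample))
    Q, _ = np.linalg.qr(B @ Omega)              # orthonormal basis for an
                                                # approximate range of B
    U, s, Vt = np.linalg.svd(Q.T @ B, full_matrices=False)
    return (Q @ U[:, :k]) * s[:k] @ Vt[:k]      # truncated reconstruction
```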
The next step is to evaluate the tolerance parameter $\sigma_{W_0}$ in (\[eqt::sigma of StoIHT\]). The parameter $\sigma_{W_0}$ can be read as $$\begin{split}
\nonumber
\sigma_{W_0} &= 2 {\mathbb{E}}_i \max_{|\Omega| \leq 3k} \frac{1}{b} {\left\|{\mathcal{P}}_{\Omega} {\mathcal{A}}_i^* (\xi_{b_i})\right\|}_F + \sqrt{\eta^2-1} {\mathbb{E}}_i \frac{1}{b} {\left\|{\mathcal{A}}_i^* (\xi_{b_i})\right\|}_F \\
&\leq \frac{2}{b} \sqrt{3k} \max_i {\left\|{\mathcal{A}}_i^* (\xi_{b_i})\right\|} + \frac{1}{b}\sqrt{\eta^2-1} \sqrt{n} \max_i {\left\|{\mathcal{A}}_i^* (\xi_{b_i})\right\|}.
\end{split}$$ For stochastic noise $\xi \sim {\mathcal{N}}(0,\sigma^2 I)$, it is shown in [@CP_MatrixRec_2009_J], Lemma 1.1 that ${\left\|{\mathcal{A}}_i^* (\xi_{b_i})\right\|} \leq c \sqrt{\sigma^2 n b}$ with probability at least $1-n^{-1}$ where $n = \max\{n_1,n_2\}$. Therefore, $\sigma_{W_0} \leq c \left( \sqrt{\frac{\sigma^2 k n}{b}} + \sqrt{\frac{(\eta^2-1)\sigma^2 n^2}{b}}\right)$. In addition, the parameter $\sigma_e$ in (\[eqt::sigma\_e of StoIHT\]) is estimated as $$\sigma_e \leq \max_j \left(2\max_{|\Omega| \leq 3k} {\left\|{\mathcal{P}}_{\Omega} E^j\right\|}_F + \sqrt{\eta^2-1} {\left\|E^j\right\|}_F\right) \leq \max_j \left(2\sqrt{3k} {\left\|E^j\right\|} + \sqrt{\eta^2-1} {\left\|E^j\right\|}_F\right),$$ where we recall that $E^j$ is the noise matrix that might contaminate the gradient at the $j$-th iteration. Applying Theorem \[thm::StoIHT with inexact gradients\] leads to the following corollary.
\[for StoIHT\] Assume the linear operator ${\mathcal{A}}$ satisfies the ${\mathcal{D}}$-RSC and ${\mathcal{D}}$-RSS assumptions and $\xi \sim {\mathcal{N}}(0,\sigma^2 I)$. Set $p(i)=1/M$ for $i = 1,...,M$ and $\gamma = 1$. Then with probability at least $1-n^{-1}$, the error at the $(t+1)$-th iterate of the StoIHT algorithm is bounded by $$\begin{split}
\nonumber
{\mathbb{E}}{\left\|W^{t+1} - W_0\right\|}_F \leq (3/4)^{t+1} {\left\|W_0\right\|}_F &+ c \left( \sqrt{\frac{\sigma^2 k n}{b}} + \sqrt{\frac{(\eta^2-1)\sigma^2 n^2}{b}}\right) \\
&+ 4\max_{j \in [t]} \left( 2\sqrt{3k} {\left\|E^j\right\|} + \sqrt{\eta^2-1} {\left\|E^j\right\|}_F \right).
\end{split}$$
The discussion after Corollary \[cor::apply to sparse linear regression\] for the vector recovery can also be applied here. For a sufficient number of iterations, the recovery error is naturally controlled by three factors: the measurement noise $\sigma$, the approximation projection parameter $\eta$, and the largest gradient noise $E^j$. In the absence of these three parameters, the recovery is exact. When $\eta=1$ and $E^j = 0$, $j=0,...,t$, the error has the same structure as the convex nuclear norm minimization method [@CP_MatrixRec_2009_J] which has been shown to be optimal.
Moving to the StoGradMP algorithm, setting $p(i) = 1/M$ for $i=1,...,M$ again for simplicity, we can write the contraction coefficient in (\[eqt::kappa of StoGradMP\]) as $$\kappa = (1+\eta_2) \sqrt{\frac{1+\delta_{4k}}{1-\delta_{4k}}} \left( \sqrt{\frac{\frac{2\eta_1^2-1}{\eta^2_1} (1+\delta_{4k}) - (1-\delta_{4k})}{1-\delta_{4k}}} + \frac{\sqrt{\eta^2_1-1}}{\eta_1} \right).$$ If we allow, for example, the projection errors $\eta_1 = 1.01$ and $\eta_2 = 1.01$ and require $\kappa \leq 0.9$, simple algebra gives us $\delta_{4k} \leq 0.03$. In addition, for stochastic noise $\xi \sim {\mathcal{N}}(0,\sigma^2 I)$, the tolerance parameter $\sigma_{W_0}$ in (\[eqt::sigma of StoGradMP\]) can be read as $$\sigma_{W_0} = c (1+\eta_2) \max_{i \in [M], |\Omega| \leq 4k}{\left\|{\mathcal{P}}_{\Omega} {\mathcal{A}}_i^* (\xi_{b_i})\right\|}_F \leq c_1 (1+\eta_2) \sqrt{4k} \max_{i \in [M]} {\left\|{\mathcal{A}}_i^* (\xi_{b_i})\right\|} \leq c_2 (1+\eta_2) \sqrt{\frac{\sigma^2 k n}{b}}$$ with probability at least $1-n^{-1}$. Again, the last inequality is due to [@CP_MatrixRec_2009_J]. Now applying Theorem \[thm::StoGradMP with inexact gradient and approximated estimation\], where we recall the parameter $\sigma_e$ in (\[eqt::sigma\_e of StoGradMP\]) is $\sigma_e = \max_j {\left\|E^j\right\|}_F$ and $\epsilon^j$ is the optimization error at the estimation step, we have the following corollary.
\[for StoGradMP\] Assume the linear operator ${\mathcal{A}}$ satisfies the ${\mathcal{D}}$-RSC and ${\mathcal{D}}$-RSS assumptions and $\xi \sim {\mathcal{N}}(0,\sigma^2 I)$. Set $p(i)=1/M$ for $i=1,...,M$. Then with probability at least $1-n^{-1}$, the error at the $(t+1)$-th iterate of the StoGradMP algorithm is bounded by $$\begin{split}
\nonumber
{\mathbb{E}}{\left\|W^{t+1} - W_0\right\|}_F \leq (0.9)^{t+1} {\left\|W_0\right\|}_F &+ c (1+\eta_2) \sqrt{\frac{\sigma^2 k n}{b}} + 4\max_{j \in [t]} {\left\|E^j\right\|}_F + 4 \max_{j \in [t]}\epsilon^j.
\end{split}$$
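The constant $0.9$ used above can be verified numerically for $\eta_1 = \eta_2 = 1.01$ and $\delta_{4k} = 0.03$ (our arithmetic, plugging into the formula for $\kappa$ above):

```python
import numpy as np

eta1, eta2, d = 1.01, 1.01, 0.03
inner = np.sqrt((((2 * eta1**2 - 1) / eta1**2) * (1 + d) - (1 - d)) / (1 - d))
kappa = (1 + eta2) * np.sqrt((1 + d) / (1 - d)) * (inner + np.sqrt(eta1**2 - 1) / eta1)
print(kappa)                       # ~0.89 <= 0.9
```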
Numerical experiments {#sec::experiments}
=====================
In this section we present some experimental results comparing our proposed stochastic methods to their deterministic counterparts. Our goal is to explore several interesting aspects of improvements and trade-offs; we have not attempted to optimize algorithm parameters. Unless otherwise specified, all experiments are run with at least $50$ trials, “exact recovery” is obtained when the signal recovery error $\|w-\hat{w}\|_2$ drops below $10^{-6}$, and the plots illustrate the $10\%$ trimmed mean. For the approximation error versus epoch (or iteration) plots, the trimmed mean is calculated at each epoch (or iteration) value by excluding the highest and lowest $5\%$ of the error values (rounding when necessary). For the approximation error versus CPU time plots, the trimmed mean computation is the same, except the CPU time values corresponding to the excluded error values are also excluded from the mean CPU time. We begin with experiments in the compressed sensing setting, and follow with application to the low-rank matrix recovery problem.
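For concreteness, a sketch of the trimming rule used to aggregate the trials (the helper name is ours):

```python
import numpy as np

def trimmed_mean(values, trim=0.05):
    """10% trimmed mean: sort, drop the highest and lowest 5% of the values
    (rounding the cutoff when necessary), and average the rest."""
    v = np.sort(np.asarray(values))
    cut = int(round(trim * v.size))
    return v[cut:v.size - cut].mean()
```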
Sparse vector recovery
----------------------
The first setting we explored is standard compressed sensing, which was studied in Subsection \[subsec::Sparse linear regression\]. Unless otherwise specified, the vector has dimension $256$, and its non-zero entries are i.i.d. standard Gaussian. The signal is measured with an $m \times 256$ i.i.d. standard Gaussian measurement matrix. First, we compare signal recovery as a function of the number of measurements used, for various sparsity levels $k_0$. Each algorithm terminates upon convergence or upon a maximum of $500$ epochs[^2]. We used a block size of $b = \min(k_0, m)$ for sparsity level $k_0$ and number of measurements $m$, except when $k_0=4$ and $m>5$ we used $b=8$ in order to obtain consistent convergence. Specifically, this means that $b$ measurements were used to form the signal proxy at each iteration of the stochastic algorithms. For the IHT and StoIHT algorithms, we use a step size of $\gamma =1$. The results for IHT and StoIHT are shown in Figure \[fig1\] and for GradMP and StoGradMP in Figure \[fig2\]. Here we see that with these parameters, StoIHT requires far fewer measurements than IHT to recover the signal, whereas GradMP and StoGradMP are comparable.
![Sparse Vector Recovery: Percent recovery as a function of the number of measurements for IHT (left) and StoIHT (right) for various sparsity levels $k_0$.\[fig1\]](IHT_PercentRecovered_d256_n100_maxEpochs500.eps){height="2.25in" width="2.9in"} ![](StoIHT_PercentRecovered_d256_n100_maxEpochs500.eps){height="2.25in" width="3.22in"}
![Sparse Vector Recovery: Percent recovery as a function of the number of measurements for GradMP (left) and StoGradMP (right) for various sparsity levels $k_0$.\[fig2\]](GradMP_PercentRecovered_d256_n100_maxEpochs500.eps){height="2.25in" width="2.9in"} ![](SGradMP_PercentRecovered_d256_n100_maxEpochs500.eps){height="2.25in" width="3.22in"}
Next we explore how the choice of block size affects performance. We employ the same setup as described above, only now we fix the number of measurements $m$ ($m=180$ for the IHT methods and $m=80$ for the GradMP methods), allow a maximum of 100 epochs, and use various block sizes in the stochastic algorithms. The sparsity of the signal is $k_0=8$. The results are depicted for both methods in Figure \[fig3\]. Here we see that in both cases the deterministic methods offer intermediate performance, outperforming some block sizes and underperforming others. It is interesting that StoIHT seems to prefer larger block sizes whereas StoGradMP seems to prefer smaller ones. This is likely because StoGradMP, even when using only a few gradients, may still estimate the support accurately, and thus the signal accurately.
![Sparse Vector Recovery: Recovery error as a function of epochs and various block sizes $b$ for HT methods (left) and GradMP methods (right).\[fig3\]](HT_ErrorVersusEpochs_d256_n50_s8_m180_maxEpochs100.eps){height="2.25in" width="3.05in"} ![](MP_ErrorVersusEpochs_d256_n50_s8_m80_maxEpochs100.eps){height="2.25in" width="3.05in"}
Next we repeat the same experiments but examine the recovery error as a function of the number of measurements for various block sizes (note that if the block size exceeds the number of measurements, we simply use the entire matrix as one block). Figure \[fig4\] shows these results. Because the methods exhibit graceful decrease in recovery error, here we plot the number of measurements (as a function of block size) required in order for the estimation error $\|w-\hat{w}\|_2$ to drop and remain below $10^{-6}$. Although block size is not a parameter for the deterministic methods IHT and GradMP, a red horizontal line at the number of measurements required is included for comparison. We see that the fewest measurements are required when the block sizes are about $10$ (recall the signal dimension is $256$). We also note that StoIHT requires fewer measurements than IHT for large blocks, whereas StoGradMP requires the same as GradMP for large blocks, which is not surprising. However, we see that both methods offer improvements over their deterministic counterparts if the block sizes are chosen correctly.
![Sparse Vector Recovery: Number of measurements required for signal recovery as a function of block size (blue marker) for StoIHT (left) and StoGradMP (right). Number of measurements required for deterministic method shown as red solid line.\[fig4\]](HT_ErrorVersusMeasurements_d256_n50_s8_maxEpochs100_view2.eps){height="2.25in" width="3.05in"} ![](MP_ErrorVersusMeasurements_d256_n50_s8_maxEpochs100_view2.eps){height="2.25in" width="3.05in"}
Robustness to measurement noise
-------------------------------
We next repeat the above sparse vector recovery experiments in the presence of noise in the measurements. All experiment parameters remain as in the previous setup, but a vector $e$ of Gaussian noise with $\|e\|_2 = 0.5$ is added to the measurement vector. We again compare the recovery error against the number of epochs and measurements needed. The results are shown in Figures \[fig5\] and \[fig6\] for the IHT and GradMP algorithms, respectively. The right hand plots show the number of measurements required for the error to drop below the noise level $0.5$ as a function of block size. Overall, the methods are robust to noise and demonstrate the same improvements and heuristics as in the noiseless experiments.
![Sparse Vector Recovery: A comparison of IHT and StoIHT in the presence of noise. Recovery error versus epoch (left) and measurements required versus block size (right).\[fig5\]](HT_ErrorVersusEpochs_d256_n50_s8_m180_maxEpochs100_noiseNorm05.eps){height="2.25in" width="3.22in"} ![](HT_ErrorVersusMeasurements_d256_n50_s8_maxEpochs100_noiseNorm05_view2.eps){height="2.25in" width="2.9in"}
![Sparse Vector Recovery: A comparison of GradMP and StoGradMP in the presence of noise. Recovery error versus epoch (left) and measurements required versus block size (right).\[fig6\]](MP_ErrorVersusEpochs_d256_n50_s8_m80_maxEpochs100_noiseNorm05.eps){height="2.25in" width="3.22in"} ![](MP_ErrorVersusMeasurements_d256_n50_s8_maxEpochs100_noiseNorm05_view2.eps){height="2.25in" width="2.9in"}
The choice of step size in StoIHT
---------------------------------
Our last experiment in the sparse vector recovery setting explores the role of the step size $\gamma$ in StoIHT. Keeping the dimension of the signal at $256$, the sparsity at $k_0=8$, the number of measurements at $m=80$, no noise, and fixing the block size $b=8$, we test the algorithm using various values of the step size $\gamma$. The results are shown in Figure \[fig7\]. We see that the value of $\gamma$ clearly plays a role, but the range of successful values is quite large. Not surprisingly, too small a step size leads to extremely slow convergence, and too large a step size leads to divergence (at least initially).
![Sparse Vector Recovery: A comparison of StoIHT for various values of the step size $\gamma$ (shown in the colorbar).\[fig7\]](StoIHT_ErrorVersusIteration_d256_n100_s8_m80_gammaMany_maxEpochs200.eps){height="3in"}
Low-Rank Matrix Recovery {#low-rank-matrix-recovery}
------------------------
We now turn to the setting where we wish to recover a low-rank matrix $W_0$ from $m$ linear measurements, as studied in Subsection \[subsec::Low-rank matrix recovery\]. Here $W_0$ is a $10\times 10$ matrix of rank $k_0$, and we take $m$ linear Gaussian measurements of the form $y_i = \langle A_i, W_0 \rangle$, where each $A_i$ is a $10\times 10$ matrix with i.i.d. standard Gaussian entries. As before, we first compare the percentage of exact recovery (where again we deem the signal exactly recovered when the error $\|W_0-\hat{W}\|_F$ is below $10^{-6}$) against the number of measurements, for various rank levels. For the matrix case, we use a step size of $\gamma=0.5$ for both the IHT and StoIHT methods, which seems to work well in this setting. The results for IHT and StoIHT are shown in Figure \[fig8\] and for GradMP and StoGradMP in Figure \[fig9\]. For this choice of parameters, we see that both StoIHT and StoGradMP tend to require fewer measurements to recover the signal.
![Low-Rank Matrix Recovery: Percent recovery as a function of the number of measurements for IHT (left) and StoIHT (right) for various rank levels $k_0$.\[fig8\]](IHT_MatRec_PercentRecovered_d10_10_n50_maxEpochs300_gamma05.eps){height="2.25in" width="2.9in"} ![](StoIHT_MatRec_PercentRecovered_d10_10_n50_maxEpochs300_gamma05.eps){height="2.25in" width="3.22in"}
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Low-Rank Matrix Recovery: Percent recovery as a function of the number of measurements for GradMP (left) and StoGradMP (right) for various rank levels $k_0$.\[fig9\]](GradMP_MatRec_PercentRecovered_d10_10_n50_maxEpochs300.eps "fig:"){height="2.25in" width="2.9in"} ![Low-Rank Matrix Recovery: Percent recovery as a function of the number of measurements for GradMP (left) and StoGradMP (right) for various rank levels $k_0$.\[fig9\]](SGradMP_MatRec_PercentRecovered_d10_10_n50_maxEpochs300.eps "fig:"){height="2.25in" width="3.22in"}
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
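The measurement model described above is straightforward to reproduce: stacking each $\text{vec}(A_i)$ as a row of a single matrix turns the matrix recovery problem into a linear system on $\text{vec}(W_0)$, so the block sampling of the stochastic methods carries over unchanged. A hedged sketch (the names `Phi` and `project_rank` are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k0, m = 10, 2, 90                        # dimensions as in Figure [fig10]

# Rank-k0 test matrix W0 and m Gaussian measurements y_i = <A_i, W0>
W0 = rng.standard_normal((d, k0)) @ rng.standard_normal((k0, d))
Phi = rng.standard_normal((m, d * d))       # row i holds vec(A_i)
y = Phi @ W0.ravel()

def project_rank(B, k):
    """Projection step for the matrix case: keep the top-k SVD of B
    (replacing the k-largest-entries rule used for sparse vectors)."""
    U, S, Vt = np.linalg.svd(B, full_matrices=False)
    return (U[:, :k] * S[:k]) @ Vt[:k]
```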
Next we examine the signal recovery error as a function of epoch, for various block sizes and against the deterministic methods. We fix the rank to be $k_0=2$ in these experiments. Because both the block size and the number of measurements affect the convergence, we see different behavior in the low measurement regime and the high measurement regime. This is apparent in Figure \[fig10\], where $m=90$ measurements are used in the plot on the left and $m=140$ measurements are used in the plot on the right, which shows the convergence of the IHT methods per epoch for various block sizes. We again see that for proper choices of block size, the StoIHT method outperforms IHT. It is also interesting to note that IHT seems to reach a higher noise floor than StoIHT. Of course we again point out that we have not optimized any of the algorithm parameters for either method. Results for the GradMP methods are shown in Figure \[fig11\], again where $m=90$ measurements are used in the plot on the left and $m=140$ measurements are used in the plot on the right. Similar to the IHT results, proper choices of block size allow StoGradMP to require far fewer epochs than GradMP to achieve convergence.
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Low-Rank Matrix Recovery: Recovery error as a function of the number of epochs for StoIHT methods using $m=90$ (left) and $m=140$ (right) for various block sizes $b$.\[fig10\]](HT_ErrorVersusEpochs_d10_10_n50_s2_m90_maxEpochs200_gamma05.eps "fig:"){height="2.25in" width="2.9in"} ![Low-Rank Matrix Recovery: Recovery error as a function of the number of epochs for StoIHT methods using $m=90$ (left) and $m=140$ (right) for various block sizes $b$.\[fig10\]](HT_ErrorVersusEpochs_d10_10_n50_s2_m140_maxEpochs200_gamma05.eps "fig:"){height="2.25in" width="3.22in"}
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Low-Rank Matrix Recovery: Recovery error as a function of the number of epochs for the StoGradMP algorithm using $m=90$ (left) and $m=140$ (right) for various block sizes $b$.\[fig11\]](MP_ErrorVersusEpochs_d10_10_n50_s2_m90_maxEpochs200.eps "fig:"){height="2.25in" width="2.9in"} ![Low-Rank Matrix Recovery: Recovery error as a function of the number of epochs for the StoGradMP algorithm using $m=90$ (left) and $m=140$ (right) for various block sizes $b$.\[fig11\]](MP_ErrorVersusEpochs_d10_10_n50_s2_m140_maxEpochs200.eps "fig:"){height="2.25in" width="3.22in"}
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Figure \[fig12\] compares the block size and the number of measurements required for exact signal recovery for the IHT methods and the GradMP methods, again for a fixed rank of $k_0=2$. We again see that StoIHT and StoGradMP prefer small block sizes.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Low-Rank Matrix Recovery: Number of measurements required for signal recovery as a function of block size (blue marker) for StoIHT (left) and StoGradMP (right). Number of measurements required for deterministic method shown as red solid line. \[fig12\]](HT_MatRec_ErrorVersusMeasurements_d10_10_n50_s2_maxEpochs200_gamma05_view2.eps "fig:"){height="2.25in" width="3.05in"} ![Low-Rank Matrix Recovery: Number of measurements required for signal recovery as a function of block size (blue marker) for StoIHT (left) and StoGradMP (right). Number of measurements required for deterministic method shown as red solid line. \[fig12\]](MP_MatRec_ErrorVersusMeasurements_d10_10_n50_s2_maxEpochs200_view2.eps "fig:"){height="2.25in" width="3.05in"}
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Recovery with approximations
----------------------------
Finally, we consider the important case where the identification, estimation, and pruning steps can only be performed approximately. In particular, we consider the case of low-rank matrix recovery in which these steps utilize only an approximate Singular Value Decomposition (SVD) of the matrix. Such approximations may be unavoidable in certain applications, or desirable in others for computational speedup. For our first experiments of this kind, we use $N=1024$ and generate an $N\times N$ matrix of rank $k_0=40$. We take $m$ permuted rows of the $N\times N$ discrete Fourier transform as the measurement operator, use $2$ blocks in the stochastic algorithms, and run $40$ trials. In the StoIHT experiments we take $m=0.3N^2$, and in the StoGradMP experiments we take $m=0.35N^2$; these values for $m$ empirically seemed to work well with the two algorithms. For each trial of the approximate SVD, we also run $5$ sub-trials to account for the randomness used in the approximate SVD algorithm. Here we use the randomized method described in [@halko2011finding] to compute the approximate SVD of a matrix. Briefly, to obtain a rank-$s$ approximation of a matrix $X$ and compute its approximate SVD, one applies the matrix to a randomly generated $N\times (s+d)$ matrix $\Omega$ to obtain the product $Y=X\Omega$ and constructs an orthonormal basis $Q$ for the column space of $Y$. Here, $d$ is an *over-sampling factor* that can be tuned to balance the tradeoff between accuracy and computation time. Using this basis, one computes the SVD of the product $B=Q^*X = U\Sigma V^*$, and approximates the SVD of $X$ by $X\approx (QU)\Sigma V^*$. Because $(s+d)$ is typically much less than $N$, significant speedup can be gained. In addition, [@halko2011finding] proves that the approximation error is bounded by $${\left\|X - X_s\right\|}_F \leq \left(1 + \sqrt{\frac{s}{s+d}} \right) {\left\|X - X^{\text{best}}_s\right\|}_F,$$ where $X^{\text{best}}_s$ is the best rank-$s$ approximation of $X$ and $X_s$ is the approximate rank-$s$ matrix produced from the above procedure. Here, the multiplicative error is associated with the quantity $\eta$ in the approximation operator $\text{approx}_s(w,\eta)$ defined in (\[eqt::approximation definition\]).
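The procedure just described is short enough to state in full; the following is a sketch under our own naming conventions, not the exact implementation used in the experiments.

```python
import numpy as np

def randomized_svd(X, s, d=10, rng=None):
    """Approximate rank-s SVD of X following [halko2011finding].

    d is the over-sampling factor: a larger d tightens the error bound
    at the cost of extra computation.
    """
    rng = np.random.default_rng(rng)
    Omega = rng.standard_normal((X.shape[1], s + d))  # random test matrix
    Y = X @ Omega                                     # sample the column space
    Q, _ = np.linalg.qr(Y)                            # orthonormal basis for range(Y)
    U, Sigma, Vt = np.linalg.svd(Q.T @ X, full_matrices=False)
    # X ~ (Q U) Sigma V^*; truncate everything to rank s
    return (Q @ U)[:, :s], Sigma[:s], Vt[:s]
```

The dominant costs are the two products with $X$ plus a QR and an SVD on matrices with only $s+d$ columns or rows, roughly $O(N^2(s+d))$ operations for a dense $N \times N$ matrix versus $O(N^3)$ for a full SVD, which is what drives the runtime gains reported below.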
Figure \[fig13\] shows the approximation error as a function of epoch and runtime for the StoIHT algorithm, for various over-sampling factors $d$ as well as the full SVD computation for comparison. We again use a $10\%$ trimmed mean over all the trials, and a step size of $\gamma=0.5$. We see that in terms of epochs, for reasonably sized over-sampling factors, the convergence using the SVD approximation is very similar to that of using the full SVD. In terms of runtime, we see a significant speedup for moderate choices of over-sampling factor, as expected. Recall that $2$ blocks were used for this experiment, but we have observed a very similar relationship between the curves when increasing the number of blocks to $10$.
The analogous results for StoGradMP are very similar, and are shown in Figure \[fig14\]. We again see that for certain over-sampling factors, the convergence of the approximation error as a function of epoch is similar when using the approximate SVD and the full SVD. We also see a very significant speedup when using the approximate SVD; in this case, all the over-sampling factors used in this experiment offer an improved runtime over the full SVD computation.
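As a rough way to observe the runtime gap outside our experimental harness, one can time the two decompositions directly on a rank-$k_0$ test matrix; `randomized_svd` refers to the sketch above, and the measured ratio will of course depend on the platform and the underlying LAPACK build.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
N, k0 = 1024, 40
X = rng.standard_normal((N, k0)) @ rng.standard_normal((k0, N))  # rank-k0 matrix

t0 = time.perf_counter()
np.linalg.svd(X, full_matrices=False)            # full SVD
t_full = time.perf_counter() - t0

t0 = time.perf_counter()
randomized_svd(X, s=k0, d=10, rng=rng)           # approximate SVD
t_rand = time.perf_counter() - t0
print(f"full SVD: {t_full:.3f}s, randomized: {t_rand:.3f}s")
```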
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Low-Rank Matrix Recovery with Approximations: Trimmed mean recovery error as a function of epochs (left) and runtime (right) for various over-sampling factors $d$ using the StoIHT algorithm. Performance using full SVD computation shown as dashed line. \[fig13\]](StoIHT_MatRec_SVD_ErrorVersusEpoch_trimmed10_N1024_r40_m03_nb2_n40_nrand5_gamma05_maxEpochs300.eps "fig:"){height="2.25in" width="2.9in"} ![Low-Rank Matrix Recovery with Approximations: Trimmed mean recovery error as a function of epochs (left) and runtime (right) for various over-sampling factors $d$ using the StoIHT algorithm. Performance using full SVD computation shown as dashed line. \[fig13\]](StoIHT_MatRec_SVD_ErrorVersusCPUTime_trimmed10_N1024_r40_m03_nb2_n40_nrand5_gamma05_maxEpochs300.eps "fig:"){height="2.25in" width="3.22in"}
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Low-Rank Matrix Recovery with Approximations: Trimmed mean recovery error as a function of epochs (left) and runtime (right) for various over-sampling factors $d$ using the StoGradMP algorithm. Performance using full SVD computation shown as dashed line. \[fig14\]](StoGradMP_MatRec_SVD_ErrorVersusEpoch_trimmed_N1024_r40_m035_nb2_n40_nrand5_maxEpochs200.eps "fig:"){height="2.25in" width="2.9in"} ![Low-Rank Matrix Recovery with Approximations: Trimmed mean recovery error as a function of epochs (left) and runtime (right) for various over-sampling factors $d$ using the StoGradMP algorithm. Performance using full SVD computation shown as dashed line. \[fig14\]](StoGradMP_MatRec_SVD_ErrorVersusCPUTime_trimmed_N1024_r40_m035_nb2_n40_nrand5_maxEpochs200.eps "fig:"){height="2.25in" width="3.22in"}
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Conclusion {#sec::conclusion}
==========
In this paper we study two stochastic algorithms for solving a possibly non-convex optimization problem with the constraint that the solution has a simple representation with respect to a predefined atom set. This type of optimization problem has found tremendous applications in signal processing, machine learning, and statistics, such as sparse signal recovery and low-rank matrix estimation. Our proposed algorithms, called StoIHT and StoGradMP, have their roots in the celebrated IHT and CoSaMP algorithms, from which we have made several significant extensions. The first extension transfers the algorithmic ideas of IHT and CoSaMP to the stochastic setting, and the second allows approximate projections at each iteration of the algorithms. More importantly, we theoretically prove that the stochastic versions with inexact projections enjoy the same linear convergence rate as their deterministic counterparts. We also show that the algorithms behave predictably even when the gradients are contaminated by noise. Experimentally, the stochastic approaches have shown particular advantages over their deterministic counterparts in many problems of interest, such as linear regression and matrix recovery.
Proofs {#sec::proofs}
======
Consequences of the ${\mathcal{D}}$-RSC and ${\mathcal{D}}$-RSS
---------------------------------------------------------------
The first corollary provides a useful upper bound for the gradient, which we call co-coercivity.
\[cor::consequences of RSS\] Assume the function $f(w)$ satisfies the ${\mathcal{D}}$-RSS property. Then $$\label{inq::33}
{\left<w'-w,\nabla f(w') - \nabla f(w)\right>} \leq \rho^+_s {\left\|w'-w\right\|}_2^2$$ for all vectors $w$ and $w'$ of size $n$ such that $|\operatorname{supp}_{{\mathcal{D}}}(w) \cup \operatorname{supp}_{{\mathcal{D}}}(w')| \leq s $. In addition, let $\Omega = \operatorname{supp}_{{\mathcal{D}}} (w) \cup \operatorname{supp}_{{\mathcal{D}}}(w')$; then we have $$\label{inq::coercivity}
[\textbf{Co-coercivity}] \quad\quad {\left\|{\mathcal{P}}_{\Omega}({\mathcal{\nabla}} f(w') - \nabla f(w))\right\|}^2_2 \leq \rho^+_s {\left<w'-w,\nabla f(w') - \nabla f(w)\right>}.$$
From the definition of ${\mathcal{D}}$-RSS, we can show that $$\label{inq::D-RSS variant}
f(w') - f(w) - {\left<\nabla f(w), w'-w\right>} \leq \frac{\rho^+_s}{2} {\left\|w'-w\right\|}_2^2.$$ Similarly, interchanging the role of $w$ and $w'$, we have $$f(w) - f(w') - {\left<\nabla f(w'), w-w'\right>} \leq \frac{\rho^+_s}{2} {\left\|w'-w\right\|}_2^2.$$ Taking the summation of these two inequalities leads to the first claim.
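For completeness, we note that (\[inq::D-RSS variant\]) follows from the ${\mathcal{D}}$-RSS definition by a standard integral argument (assuming $f$ is continuously differentiable): every point $w + t(w'-w)$ with $t \in [0,1]$ is supported on $\operatorname{supp}_{{\mathcal{D}}}(w) \cup \operatorname{supp}_{{\mathcal{D}}}(w')$, so $$f(w') - f(w) - {\left<\nabla f(w), w'-w\right>} = \int_0^1 {\left<\nabla f(w + t(w'-w)) - \nabla f(w), w'-w\right>} \, dt \leq \int_0^1 t \rho^+_s {\left\|w'-w\right\|}_2^2 \, dt = \frac{\rho^+_s}{2} {\left\|w'-w\right\|}_2^2,$$ where the inequality applies Cauchy-Schwarz followed by the Lipschitz bound on the restricted gradient.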
To prove the second claim, we define the function $G(x) \triangleq f(x) - {\left<\nabla f(w), x\right>}$. It is easy to see that for any $x$ and $y$ with $\operatorname{supp}_{{\mathcal{D}}}(x) \cup \operatorname{supp}_{{\mathcal{D}}}(y) \subseteq \Omega$, we have $${\left\|\nabla G(x) - \nabla G(y)\right\|}_2 = {\left\|\nabla f(x) - \nabla f(y)\right\|}_2 \leq \rho^+_s {\left\|x-y\right\|}_2.$$ This implies that $G(x)$ satisfies the ${\mathcal{D}}$-RSS property with constant $\rho^+_s$. In particular, we get an inequality similar to (\[inq::D-RSS variant\]): $$\label{inq::D-RSS for G(x)}
G(x) - G(y) - {\left<\nabla G(y), x-y\right>} \leq \frac{\rho^+_s}{2} {\left\|x-y\right\|}_2^2.$$
We also observe that $$G(x) - G(w) = f(x) - f(w) - {\left<\nabla f(w), x-w\right>} \geq 0$$ for all $x$ such that $\operatorname{supp}_{{\mathcal{D}}}(x) \subseteq \Omega$. Let $x \triangleq w' - \frac{1}{\rho^+_s} {\mathcal{P}}_{\Omega} \nabla G(w')$; then it is clear that $\operatorname{supp}_{{\mathcal{D}}}(x) \subseteq \Omega$. Thus, by the ${\mathcal{D}}$-RSS property of $G(x)$, we have $$\begin{split}
\nonumber
G(w) &\leq G \left(w' - \frac{1}{\rho^+_s} {\mathcal{P}}_{\Omega} \nabla G(w') \right) \\
&\leq G(w') + {\left<\nabla G(w'), - \frac{1}{\rho^+_s} {\mathcal{P}}_{\Omega} \nabla G(w')\right>} + \frac{1}{2\rho^+_s} {\left\|{\mathcal{P}}_{\Omega} \nabla G(w')\right\|}_2^2 \\
&= G(w') - \frac{1}{2\rho^+_s} {\left\|{\mathcal{P}}_{\Omega} \nabla G(w')\right\|}_2^2.
\end{split}$$ Substituting the definition of $G(x)$ into this inequality, we get $$\frac{1}{2\rho^+_s} {\left\|{\mathcal{P}}_{\Omega} (\nabla f(w') - \nabla f(w))\right\|}_2^2 \leq f(w') - f(w) - {\left<\nabla f(w), w'-w\right>}.$$ The claim follows by adding the two inequalities with $w$ and $w'$ interchanged.
The following corollary provides the lower bound for the gradient.
\[cor::consequences of RSC\] Assume the function $F(w)$ satisfies the ${\mathcal{D}}$-RSC property. Then $$\label{inq::37}
\rho^-_{s} {\left\|w'-w\right\|}_2^2 \leq {\left<w'-w,\nabla F(w') - \nabla F(w)\right>}$$ for all $w$ and $w'$ such that $|\operatorname{supp}_{{\mathcal{D}}}(w) \cup \operatorname{supp}_{{\mathcal{D}}}(w')| \leq s$.
From the ${\mathcal{D}}$-RSC assumption, we can write $$F(w') - F(w) - {\left<\nabla F(w), w'-w\right>} \geq \frac{\rho^-_s}{2} {\left\|w'-w\right\|}_2^2.$$ Swapping $w$ and $w'$, we also have $$F(w) - F(w') - {\left<\nabla F(w'),w-w'\right>} \geq \frac{\rho^-_s}{2} {\left\|w'-w\right\|}_2^2.$$ The result follows by adding the two inequalities.
The next corollary provides key estimates for our convergence analysis. Recall that we assume $\{f_i(w) \}_{i=1}^M$ satisfies the ${\mathcal{D}}$-RSS and $F(w) = \frac{1}{M}\sum_{i=1}^M f_i(w)$ satisfies the ${\mathcal{D}}$-RSC.
\[cor::StoIHT 1st corollary\] Let $i$ be an index selected with probability $p(i)$ from the set $[M]$. For any fixed sparse vectors $w$ and $w'$, let $\Omega$ be a set such that $\operatorname{supp}_{{\mathcal{D}}}(w) \cup \operatorname{supp}_{{\mathcal{D}}}(w') \subseteq \Omega$ and denote $s = |\Omega|$. We have $$\label{inq::1st key observation}
{\mathbb{E}}_i {\left\|w' - w - \frac{\gamma}{Mp(i)} {\mathcal{P}}_{\Omega} \left(\nabla f_i(w') - \nabla f_i(w) \right)\right\|}_2 \leq \sqrt{1- (2 - \gamma \alpha_s) \gamma \rho^-_s } {\left\|w'-w\right\|}_2$$ where we define $\alpha_s \triangleq \max_i \frac{\rho^+_s(i)}{Mp(i)}$. In addition, we have $$\label{inq::2nd key observation}
{\mathbb{E}}_i {\left\|w' - w - \frac{\gamma}{Mp(i)} \left(\nabla f_i(w') - \nabla f_i(w) \right)\right\|}_2 \leq \sqrt{1 + \gamma^2 \alpha_s \overline{\rho}^+_s - 2\gamma \rho^-_s } {\left\|w'-w\right\|}_2,$$ where $\overline{\rho}^+_s \triangleq \frac{1}{M} \sum_i \rho^+_s(i)$.
The difference between the estimates (\[inq::1st key observation\]) and (\[inq::2nd key observation\]) is that the gradient difference in (\[inq::2nd key observation\]) is not projected onto $\Omega$, so the additional component ${\left\|{\mathcal{P}}_{\Omega^c} \left(\nabla f_i(w') - \nabla f_i(w) \right)\right\|}_2$ with $\Omega^c = [n]\backslash \Omega$ must also be controlled.
We will use the co-coercivity property that appeared in inequality (\[inq::coercivity\]) in Corollary \[cor::consequences of RSS\]. We have $$\begin{split}
\nonumber
&{\mathbb{E}}_i{\left\|w' - w - \frac{\gamma}{Mp(i)} {\mathcal{P}}_{\Omega} \left(\nabla f_i(w') - \nabla f_i(w) \right)\right\|}^2_2 \\
&= {\left\|w' - w\right\|}_2^2 + {\mathbb{E}}_i \frac{\gamma^2}{(Mp(i))^2} {\left\|{\mathcal{P}}_{\Omega} \left( \nabla f_i(w') - \nabla f_i(w) \right)\right\|}_2^2 \\
&\quad- 2 \gamma {\mathbb{E}}_i{\left<w' - w, {\mathcal{P}}_{\Omega} \frac{1}{Mp(i)} \left(\nabla f_i(w') - \nabla f_i(w) \right)\right>} \\
&\leq {\left\|w' - w\right\|}_2^2 + \gamma^2 {\mathbb{E}}_i \frac{\rho^+_{s}(i)}{(Mp(i))^2} {\left<w'-w, \nabla f_i(w') - \nabla f_i(w) \right>} \\
&\quad- 2 \gamma {\mathbb{E}}_i{\left<w' - w, \frac{1}{Mp(i)} \left(\nabla f_i(w') - \nabla f_i(w) \right) \right>} \\
&\leq {\left\|w' - w\right\|}_2^2 + \gamma^2 \max_i \frac{\rho^+_{s}(i)}{Mp(i)} {\mathbb{E}}_i {\left<w'-w, \frac{1}{Mp(i)} \left(\nabla f_i(w') - \nabla f_i(w) \right)\right>} \\
&\quad- 2 \gamma {\mathbb{E}}_i{\left<w' - w, \frac{1}{Mp(i)} \left(\nabla f_i(w') - \nabla f_i(w) \right) \right>} \\
&= {\left\|w' - w\right\|}_2^2 - \left(2\gamma - \gamma^2 \max_i \frac{\rho^+_{s}(i)}{Mp(i)} \right) {\mathbb{E}}_i{\left<w' - w, \frac{1}{Mp(i)} \left(\nabla f_i(w') - \nabla f_i(w) \right) \right>} \\
&= {\left\|w' - w\right\|}_2^2 - (2\gamma - \gamma^2 \alpha_s) {\left<w' - w, \nabla F(w') - \nabla F(w) \right>} \\
&\leq {\left\|w' - w\right\|}_2^2 - (2\gamma - \gamma^2 \alpha_s) \rho^-_s {\left\|w' - w\right\|}_2^2,
\end{split}$$ where the first inequality follows from (\[inq::coercivity\]) and the last inequality follows from (\[inq::37\]). Applying the known result $({\mathbb{E}}Z)^2 \leq {\mathbb{E}}Z^2$ completes the proof of (\[inq::1st key observation\]).
The proof of (\[inq::2nd key observation\]) is similar to that of (\[inq::1st key observation\]), except now we are not able to apply the co-coercivity inequality. Expanding the left-hand side and applying the definition of ${\mathcal{D}}$-RSS together with the inequality (\[inq::37\]), we derive $$\begin{split}
\nonumber
&{\mathbb{E}}_i{\left\|w' - w - \frac{\gamma}{Mp(i)} \left(\nabla f_i(w') - \nabla f_i(w) \right)\right\|}^2_2 \\
&= {\left\|w' - w\right\|}_2^2 + {\mathbb{E}}_i \frac{\gamma^2}{(Mp(i))^2} {\left\|\nabla f_i(w') - \nabla f_i(w) \right\|}_2^2 \\
&\quad- 2 \gamma {\mathbb{E}}_i{\left<w' - w, \frac{1}{Mp(i)} \left(\nabla f_i(w') - \nabla f_i(w) \right)\right>} \\
&= {\left\|w' - w\right\|}_2^2 + {\mathbb{E}}_i \frac{\gamma^2}{(Mp(i))^2} {\left\|\nabla f_i(w') - \nabla f_i(w) \right\|}_2^2\\
&\quad- 2 \gamma {\left<w' - w, \nabla F(w') - \nabla F(w) \right>} \\
&\leq {\left\|w' - w\right\|}_2^2 + {\mathbb{E}}_i \frac{\gamma^2}{(Mp(i))^2} (\rho^+_s(i))^2 {\left\|w'-w\right\|}_2^2 - 2 \gamma \rho^-_{s} {\left\|w'-w\right\|}_2^2.
\end{split}$$ We further have $${\mathbb{E}}_i \frac{\gamma^2}{(Mp(i))^2} (\rho^+_s(i))^2 \leq \gamma^2 \max_i \frac{\rho^+_s(i)}{Mp(i)} {\mathbb{E}}_i \frac{\rho^+_s(i)}{Mp(i)} = \gamma^2 \alpha_s \sum_i \frac{\rho^+_s(i)}{Mp(i)} p(i) = \gamma^2 \alpha_s \overline{\rho}^+_s,$$ where we recall that $\alpha_s = \max_i \frac{\rho^+_s(i)}{Mp(i)}$ and $\overline{\rho}^+_s = \frac{1}{M} \sum_i \rho^+_s(i)$. Substitute this result into the above inequality and then use the inequality $({\mathbb{E}}Z)^2 \leq {\mathbb{E}}Z^2$ to complete the proof.
Proof of Theorem \[thm::StoIHT\] {#sub::proof of theorem 1}
--------------------------------
\[Proof of Theorem \[thm::StoIHT\]\] We notice that ${\left\|w^{t+1} - b^t\right\|}_2 \leq \eta {\left\|b^t_k - b^t\right\|}_2 \leq \eta {\left\|w^{\star} - b^t\right\|}_2$ where $b^t_k$ is the best $k$-sparse approximation of $b^t$ with respect to ${\mathcal{D}}$. Thus, $${\left\|w^{t+1} - w^{\star} + w^{\star} - b^t\right\|}_2^2 \leq \eta^2 {\left\|w^{\star} - b^t\right\|}_2^2.$$
Expanding the left-hand side of this inequality leads to $$\begin{split}
\nonumber
{\left\|w^{t+1} - w^{\star}\right\|}_2^2 &\leq 2 {\left<w^{t+1} - w^{\star}, b^t - w^{\star}\right>} + (\eta^2-1) {\left\|b^t-w^{\star}\right\|}_2^2 \\
&= 2 {\left<w^{t+1} - w^{\star}, w^t - w^{\star} - \frac{\gamma}{Mp(i_t)} \nabla f_{i_t}(w^t)\right>} \\
&\quad+ (\eta^2-1) {\left\|w^t - w^{\star} - \frac{\gamma}{Mp(i_t)} \nabla f_{i_t}(w^t)\right\|}_2^2 \\
&= 2 {\left<w^{t+1} - w^{\star}, w^t - w^{\star} - \frac{\gamma}{Mp(i_t)} \left( \nabla f_{i_t}(w^t) - \nabla f_{i_t}(w^{\star}) \right)\right>} \\
&\quad- 2{\left<w^{t+1} - w^{\star}, \frac{\gamma}{Mp(i_t)} \nabla f_{i_t} (w^{\star})\right>} + (\eta^2-1) {\left\|w^t - w^{\star} - \frac{\gamma}{Mp(i_t)} \nabla f_{i_t}(w^t)\right\|}_2^2.
\end{split}$$
Denoting $\Omega = \operatorname{supp}_{{\mathcal{D}}}(w^{t+1}) \cup \operatorname{supp}_{{\mathcal{D}}}(w^t) \cup \operatorname{supp}_{{\mathcal{D}}}(w^{\star})$ and noticing that $|\Omega| \leq 3k$, we get $$\begin{split}
\nonumber
{\left\|w^{t+1} - w^{\star}\right\|}_2^2 &\leq 2 {\left<w^{t+1} - w^{\star}, w^t - w^{\star} - \frac{\gamma}{Mp(i_t)}{\mathcal{P}}_{\Omega} \left( \nabla f_{i_t}(w^t) - \nabla f_{i_t}(w^{\star}) \right)\right>} \\
&\quad- 2{\left<w^{t+1} - w^{\star}, \frac{\gamma}{Mp(i_t)} {\mathcal{P}}_{\Omega} \nabla f_{i_t} (w^{\star})\right>} + (\eta^2-1) {\left\|w^t - w^{\star} - \frac{\gamma}{Mp(i_t)} \nabla f_{i_t}(w^t)\right\|}_2^2 \\
&\leq 2 {\left\|w^{t+1} - w^{\star}\right\|}_2 \\
&\quad\times \underbrace{\left( {\left\|w^t - w^{\star} - \frac{\gamma}{Mp(i_t)} {\mathcal{P}}_{\Omega} \left( \nabla f_{i_t}(w^t) - \nabla f_{i_t}(w^{\star}) \right)\right\|}_2 + {\left\|\frac{\gamma}{Mp(i_t)} {\mathcal{P}}_{\Omega} \nabla f_{i_t}(w^{\star})\right\|}_2 \right)}_u \\
&\quad+ \underbrace{(\eta^2-1) {\left\|w^t - w^{\star} - \frac{\gamma}{Mp(i_t)} \nabla f_{i_t}(w^t)\right\|}_2^2 }_v.
\end{split}$$
Solving this quadratic inequality $x^2 - 2ux - v \leq 0$ with $x = {\left\|w^{t+1} - w^{\star}\right\|}_2$, we get $x \leq u + \sqrt{u^2+v} \leq 2u+ \sqrt{v}$, where the last step uses $\sqrt{u^2+v} \leq u + \sqrt{v}$ for $u, v \geq 0$. Substituting the expressions for $u$ and $v$ above, we arrive at $$\begin{split}
\nonumber
{\left\|w^{t+1} - w^{\star}\right\|}_2 &\leq 2\left( {\left\|w^t - w^{\star} - \frac{\gamma}{Mp(i_t)} {\mathcal{P}}_{\Omega} \left( \nabla f_{i_t}(w^t) - \nabla f_{i_t}(w^{\star}) \right)\right\|}_2 + {\left\|\frac{\gamma}{Mp(i_t)} {\mathcal{P}}_{\Omega} \nabla f_{i_t}(w^{\star})\right\|}_2 \right) \\
&\quad+ \sqrt{\eta^2-1} {\left\|w^t - w^{\star} - \frac{\gamma}{Mp(i_t)} \nabla f_{i_t}(w^t)\right\|}_2 \\
&\leq 2\left( {\left\|w^t - w^{\star} - \frac{\gamma}{Mp(i_t)} {\mathcal{P}}_{\Omega} \left( \nabla f_{i_t}(w^t) - \nabla f_{i_t}(w^{\star}) \right)\right\|}_2 + {\left\|\frac{\gamma}{Mp(i_t)} {\mathcal{P}}_{\Omega} \nabla f_{i_t}(w^{\star})\right\|}_2 \right) \\
&\quad+ \sqrt{\eta^2-1} \left( {\left\|w^t - w^{\star} - \frac{\gamma}{Mp(i_t)} \left( \nabla f_{i_t}(w^t) - \nabla f_{i_t}(w^{\star}) \right)\right\|}_2 + {\left\|\frac{\gamma}{Mp(i_t)} \nabla f_{i_t}(w^{\star})\right\|}_2 \right).
\end{split}$$
Denote $I_t$ as the set containing all indices $i_1, i_2,..., i_t$ randomly selected at or before step $t$ of the algorithm: $I_t = \{i_1,...,i_t\}$. It is clear that $I_t$ determines the solutions $w^1,...,w^{t+1}$. We also denote the conditional expectation ${\mathbb{E}}_{i_t | I_{t-1}} {\left\|w^{t+1} - w^{\star}\right\|}_2 \triangleq {\mathbb{E}}_{i_t} ({\left\|w^{t+1} - w^{\star}\right\|}_2| I_{t-1})$. Now taking the conditional expectation on both sides of the above inequality we obtain $$\begin{split}
\nonumber
&{\mathbb{E}}_{i_t | I_{t-1}} {\left\|w^{t+1} - w^{\star}\right\|}_2 \\
&\leq 2\left( {\mathbb{E}}_{i_t | I_{t-1}} {\left\|w^t - w^{\star} - \frac{\gamma}{Mp(i_t)} {\mathcal{P}}_{\Omega} \left( \nabla f_{i_t}(w^t) - \nabla f_{i_t}(w^{\star}) \right)\right\|}_2 + {\mathbb{E}}_{i_t | I_{t-1}} {\left\|\frac{\gamma}{Mp(i_t)} {\mathcal{P}}_{\Omega} \nabla f_{i_t} (w^{\star})\right\|}_2 \right) \\
&\quad+ \sqrt{\eta^2-1} \left( {\mathbb{E}}_{i_t | I_{t-1}} {\left\|w^t - w^{\star} - \frac{\gamma}{Mp(i_t)} \left( \nabla f_{i_t} (w^t) - \nabla f_{i_t} (w^{\star}) \right)\right\|}_2 + {\mathbb{E}}_{i_t | I_{t-1}} {\left\|\frac{\gamma}{Mp(i_t)} \nabla f_{i_t} (w^{\star})\right\|}_2 \right) .
\end{split}$$
Conditioned on $I_{t-1}$, $w^t$ can be seen as a fixed vector. Applying the inequality (\[inq::1st key observation\]) of Corollary \[cor::StoIHT 1st corollary\] to the first term and (\[inq::2nd key observation\]) to the third term, we get $$\begin{split}
\nonumber
&{\mathbb{E}}_{i_t | I_{t-1}} {\left\|w^{t+1} - w^{\star}\right\|}_2 \\
&\leq 2 \sqrt{\left(1- (2\gamma - \gamma^2 \alpha_{3k}) \rho^-_{3k} \right)} {\left\|w^t-w^{\star}\right\|}_2 + 2 \frac{\gamma}{\min_{i_t} Mp(i_t)} {\mathbb{E}}_{i_t}{\left\|{\mathcal{P}}_{\Omega} \nabla f_{i_t} (w^{\star})\right\|}_2 \\
&\quad+ \sqrt{\eta^2-1} \sqrt{1 + \gamma^2 \alpha_{3k} \overline{\rho}^+_{3k} - 2\gamma \rho^-_{3k} } {\left\|w^t-w^{\star}\right\|}_2 + \sqrt{\eta^2-1} \frac{\gamma}{\min_{i_t} Mp(i_t)} {\mathbb{E}}_{i_t} {\left\|\nabla f_{i_t} (w^{\star})\right\|}_2 \\
&= \left( 2 \sqrt{\left(1- (2\gamma - \gamma^2 \alpha_{3k}) \rho^-_{3k} \right)} + \sqrt{(\eta^2-1)\left(1 + \gamma^2 \alpha_{3k} \overline{\rho}^+_{3k} - 2\gamma \rho^-_{3k} \right)} \right) {\left\|w^t - w^{\star}\right\|}_2 \\
&\quad+ \frac{\gamma}{\min_{i_t} Mp(i_t)} \left( 2 {\mathbb{E}}_{i_t} {\left\|{\mathcal{P}}_{\Omega} \nabla f_{i_t} (w^{\star})\right\|}_2 + \sqrt{\eta^2 - 1} {\mathbb{E}}_{i_t} {\left\|\nabla f_{i_t} (w^{\star})\right\|}_2 \right) \\
&\leq \kappa {\left\|w^t -w^{\star}\right\|}_2 + \sigma,
\end{split}$$ where $\kappa$ and $\sigma = \sigma_{w^{\star}}$ are defined in Theorem \[thm::StoIHT\]. Taking the expectation on both sides with respect to $I_{t-1}$ yields $${\mathbb{E}}_{I_t} {\left\|w^{t+1}-w^{\star}\right\|}_2 \leq \kappa {\mathbb{E}}_{I_{t-1}} {\left\|w^t-w^{\star}\right\|}_2 + \sigma.$$
Applying this result recursively over $t$ iterations yields the desired result: $$\begin{split}
\nonumber
{\mathbb{E}}_{I_t} {\left\|w^{t+1}-w^{\star}\right\|}_2 &\leq \kappa^{t+1} {\left\|w^0 - w^{\star}\right\|}_2 + \sum_{j=0}^t \kappa^j \sigma \\
&\leq \kappa^{t+1} {\left\|w^0 - w^{\star}\right\|}_2 + \frac{1}{1-\kappa} \sigma.
\end{split}$$
Proof of Theorem \[thm::StoGradMP\] {#subsec::Proof of StoGradMP theorem}
-----------------------------------
The proof of Theorem \[thm::StoGradMP\] is a consequence of the following three lemmas. Denote $I_t$ as the set containing all indices $i_1, i_2,..., i_t$ randomly selected at or before step $t$ of the algorithm: $I_t = \{i_1,...,i_t\}$ and denote the conditional expectation ${\mathbb{E}}_{i_t | I_{t-1}} {\left\|w^{t+1} - w^{\star}\right\|}_2 \triangleq {\mathbb{E}}_{i_t} ({\left\|w^{t+1} - w^{\star}\right\|}_2| I_{t-1})$.
\[lem::bound l2 w\^(t+1)-w\*\] The recovery error at the $(t+1)$-th iteration is upper bounded by $${\left\|w^{t+1} - w^{\star}\right\|}_2 \leq (1+\eta_2) {\left\|b^t - w^{\star}\right\|}_2.$$
\[lem::bound l2 b\^t - w\*\] Denote $\widehat{\Gamma}$ as the set obtained from the $t$-th iteration and $i$ as the index selected randomly from $[M]$ with probability $p(i)$. We have, $$\begin{split}
\nonumber
{\mathbb{E}}_{I_t} {\left\|b^t - w^{\star}\right\|}_2 &\leq \sqrt{\frac{\alpha_{4k}}{\rho^-_{4k}}} {\mathbb{E}}_{I_t} {\left\|{\mathcal{P}}_{\widehat{\Gamma}^c} (b^t - w^{\star})\right\|}_2 + \sigma_1
\end{split}$$ where $\alpha_k = \max_i \frac{\rho^+_k(i)}{Mp(i)}$ and $$\sigma_1 \triangleq \frac{3}{\rho^-_{4k}} \frac{1}{\min_i Mp(i)} \max_{|\Omega| \leq 3k, i \in [M]}{\left\|{\mathcal{P}}_{\Omega} \nabla f_i(w^{\star})\right\|}_2.$$
\[lem::bound L2 P\_Gamma\^c b\^t-w\*\] Denote $\widehat{\Gamma}$ as the set obtained from the $t$-th iteration. Then, $$\begin{split}
{\mathbb{E}}_{i_t} {\left\|{\mathcal{P}}_{\widehat{\Gamma}^c} (b^t - w^{\star})\right\|}_2 &\leq \left( \max_i \sqrt{Mp(i)} \sqrt{\frac{\frac{2\eta_1^2-1}{\eta^2_1}\rho^+_{4k} - \rho^-_{4k}}{\rho^-_{4k}}} + \frac{\sqrt{\eta^2_1-1}}{\eta_1} \right) {\left\|w^t-w^{\star}\right\|}_2 + \sigma_2,
\end{split}$$ where $$\sigma_2 \triangleq \frac{2 \max_{i \in [M]} p(i)}{\rho^-_{4k} \min_{i \in [M]} p(i)} \max_{|\Omega| \leq 4k, i \in [M]}{\left\|{\mathcal{P}}_{\Omega} \nabla f_{i} (w^{\star})\right\|}_2.$$
We are now able to prove Theorem \[thm::StoGradMP\]. Using the above lemmas, we have the following series of inequalities: $$\begin{split}
&{\mathbb{E}}_{I_t} {\left\|w^{t+1} - w^{\star}\right\|}_2 \\
&\leq (1+\eta_2) {\mathbb{E}}_{I_t} {\left\|b^t-w^{\star}\right\|}_2 \quad\quad \text{(Lemma \ref{lem::bound l2 w^(t+1)-w*})}\\
&\leq (1+\eta_2) \sqrt{\frac{\alpha_{4k}}{\rho^-_{4k}}} {\mathbb{E}}_{I_t} {\left\|{\mathcal{P}}_{\widehat{\Gamma}^c} (b^t - w^{\star})\right\|}_2 + (1+\eta_2) \sigma_1 \quad\quad \text{(Lemma \ref{lem::bound l2 b^t - w*}) }\\
&\leq (1+\eta_2) \sqrt{\frac{\alpha_{4k}}{\rho^-_{4k}}} \left( \max_i \sqrt{Mp(i)} \sqrt{\frac{ \frac{2\eta^2_1-1}{\eta^2_1}\rho^+_{4k} - \rho^-_{4k}}{\rho^-_{4k}}} + \frac{\sqrt{\eta^2_1-1}}{\eta_1} \right) {\mathbb{E}}_{I_{t-1}} {\left\|w^t-w^{\star}\right\|}_2 \\
&+ (1+\eta_2)\left( \sqrt{\frac{\alpha_{4k}}{\rho^-_{4k}}} \sigma_2 + \sigma_1 \right),
\end{split}$$ where the last inequality follows from Lemma \[lem::bound L2 P\_Gamma\^c b\^t-w\*\]. Substituting the definition of $\kappa$ in (\[eqt::kappa of StoGradMP\]) and noticing that $\sigma_{w^{\star}}$ defined in (\[eqt::sigma of StoGradMP\]) dominates the second term of the last expression (since $\max_{|\Omega| \leq 4k, i \in [M]}{\left\|{\mathcal{P}}_{\Omega} \nabla f_{i} (w^{\star})\right\|}_2 \geq \max_{|\Omega| \leq 3k, i \in [M]}{\left\|{\mathcal{P}}_{\Omega} \nabla f_{i} (w^{\star})\right\|}_2$), we arrive at $${\mathbb{E}}_{I_t} {\left\|w^{t+1} - w^{\star}\right\|}_2 \leq \kappa {\mathbb{E}}_{I_{t-1}}{\left\|w^t-w^{\star}\right\|}_2 + \sigma_{w^\star}.$$ Applying this inequality recursively $t$ times completes the proof.
In the remainder of this section, we prove the three lemmas stated above.
\[Proof of Lemma \[lem::bound l2 w\^(t+1)-w\*\]\] Recall that $b^t$ is the vector obtained from the $t$-th iteration. From the algorithm, we have $${\left\|w^{t+1} - b^t\right\|}_2 \leq \eta_2 {\left\|b^t_k - b^t\right\|}_2 \leq \eta_2 {\left\|w^{\star} - b^t\right\|}_2$$ where $b^t_k$ is the best $k$-sparse approximation of $b^t$ with respect to the set ${\mathcal{D}}$. We thus have $$\begin{split}
{\left\|w^{t+1} - w^{\star}\right\|}_2 &\leq {\left\|(w^{t+1} - b^t) + (b^t -w^{\star})\right\|}_2 \\
&\leq {\left\|w^{t+1} - b^t\right\|}_2 + {\left\|b^t - w^{\star}\right\|}_2 \leq (1+\eta_2) {\left\|b^t - w^{\star}\right\|}_2.
\end{split}$$
\[Proof of Lemma \[lem::bound l2 b\^t - w\*\]\] Denote the set ${\mathcal{C}}_{\widehat{\Gamma}} \triangleq \{ w: w = \sum_{j \in \widehat{\Gamma}} \alpha_j d_j \}$. It is clear that ${\mathcal{C}}_{\widehat{\Gamma}}$ is a convex set, so the estimation step can be written as $$b^t = \operatorname*{argmin}_w F(w) \quad\text{such that} \quad w \in {\mathcal{C}}_{\widehat{\Gamma}}.$$
Optimization theory (see Proposition 4.7.1 of [@DNO_optimization_2003_B]) states that $$\nonumber
{\left<\nabla F(b^t), b^t - z\right>} \leq 0 \quad \text{for all } z \in {\mathcal{C}}_{\widehat{\Gamma}}.$$
Put differently, we have $$\nonumber
{\left<\nabla F(b^t), {\mathcal{P}}_{\widehat{\Gamma}}(b^t - z)\right>} \leq 0 \quad \text{for all } z.$$ Denote by $i$ an index selected randomly from $[M]$ with probability $p(i)$ and independent from all the random indices $i_t$ and recall that $\nabla F(b^t) = {\mathbb{E}}_i \frac{1}{Mp(i)} \nabla f_i (b^t)$. The above inequality can be read as $$\label{inq::optimization condition}
0 \geq {\left<{\mathbb{E}}_i \frac{1}{Mp(i)} \nabla f_i (b^t), {\mathcal{P}}_{\widehat{\Gamma}}(b^t - z)\right>} = {\mathbb{E}}_i {\left<\frac{1}{Mp(i)} {\mathcal{P}}_{\widehat{\Gamma}} \nabla f_i (b^t), {\mathcal{P}}_{\widehat{\Gamma}} (b^t - z)\right>} \quad \text{for all } z.$$
We first derive the upper bound of ${\left\|{\mathcal{P}}_{\widehat{\Gamma}}(b^t - w^{\star})\right\|}_2$. For any $\gamma > 0$, we have $$\begin{split}
\nonumber
{\left\|{\mathcal{P}}_{\widehat{\Gamma}}(b^t - w^{\star})\right\|}_2^2 &= {\left<{\mathcal{P}}_{\widehat{\Gamma}}(b^t - w^{\star}), b^t - w^{\star} \right>}\\
&= {\left<{\mathcal{P}}_{\widehat{\Gamma}} (b^t - w^{\star}), b^t - w^{\star} - {\mathbb{E}}_i \frac{\gamma}{Mp(i)}{\mathcal{P}}_{\widehat{\Gamma}}(\nabla f_i(b^t) - \nabla f_i(w^{\star}))\right>} \\
&\quad+ {\left<{\mathcal{P}}_{\widehat{\Gamma}} (b^t - w^{\star}), {\mathbb{E}}_i \frac{\gamma}{Mp(i)} {\mathcal{P}}_{\widehat{\Gamma}} \nabla f_i(b^t)\right>} - {\left<{\mathcal{P}}_{\widehat{\Gamma}}(b^t - w^{\star}), {\mathbb{E}}_i \frac{\gamma}{Mp(i)}{\mathcal{P}}_{\widehat{\Gamma}} \nabla f_i(w^{\star})\right>} \\
&= {\mathbb{E}}_i {\left<{\mathcal{P}}_{\widehat{\Gamma}} (b^t - w^{\star}), b^t - w^{\star} - \frac{\gamma}{Mp(i)}{\mathcal{P}}_{\widehat{\Gamma}}(\nabla f_i(b^t) - \nabla f_i(w^{\star}))\right>} \\
&\quad+ {\mathbb{E}}_i {\left<{\mathcal{P}}_{\widehat{\Gamma}} (b^t - w^{\star}), \frac{\gamma}{Mp(i)} {\mathcal{P}}_{\widehat{\Gamma}} \nabla f_i(b^t)\right>} - {\mathbb{E}}_i {\left<{\mathcal{P}}_{\widehat{\Gamma}}(b^t - w^{\star}), \frac{\gamma}{Mp(i)}{\mathcal{P}}_{\widehat{\Gamma}} \nabla f_i(w^{\star})\right>} \\
&\leq {\left\|{\mathcal{P}}_{\widehat{\Gamma}} (b^t - w^{\star})\right\|}_2 {\mathbb{E}}_i {\left\|b^t - w^{\star} - \frac{\gamma}{Mp(i)} {\mathcal{P}}_{\widehat{\Gamma}}(\nabla f_i(b^t) - \nabla f_i(w^{\star}))\right\|}_2 \\
&\quad+ {\left\|{\mathcal{P}}_{\widehat{\Gamma}}(b^t - w^{\star})\right\|}_2 {\mathbb{E}}_i \frac{\gamma}{Mp(i)} {\left\|{\mathcal{P}}_{\widehat{\Gamma}} \nabla f_i(w^{\star})\right\|}_2
\end{split}$$ where the inequality follows from (\[inq::optimization condition\]) and the Cauchy-Schwarz inequality. Canceling the common term on both sides, we derive $$\nonumber
{\left\|{\mathcal{P}}_{\widehat{\Gamma}}(b^t - w^{\star})\right\|}_2 \leq {\mathbb{E}}_i {\left\|b^t - w^{\star} - \frac{\gamma}{Mp(i)} {\mathcal{P}}_{\widehat{\Gamma}}(\nabla f_i(b^t) - \nabla f_i(w^{\star}))\right\|}_2 + {\mathbb{E}}_i \frac{\gamma}{Mp(i)} {\left\|{\mathcal{P}}_{\widehat{\Gamma}} \nabla f_i(w^{\star})\right\|}_2.$$
We bound the first term of the right-hand side. For a fixed realization of the random vector $b^t$, we apply Corollary \[cor::StoIHT 1st corollary\] to obtain $$\begin{split}
{\mathbb{E}}_{i} {\left\|b^t - w^{\star} - \frac{\gamma}{Mp(i)} {\mathcal{P}}_{\widehat{\Gamma}}(\nabla f_i(b^t) - \nabla f_i(w^{\star}))\right\|}_2 \leq \sqrt{\left(1- (2\gamma - \gamma^2 \alpha_{4k}) \rho^-_{4k} \right)} {\left\|b^t-w^{\star}\right\|}_2 .
\end{split}$$
Applying this result to the above inequality and taking the expectation with respect to $i_t$ yields $$\begin{split}
{\left\|{\mathcal{P}}_{\widehat{\Gamma}}(b^t - w^{\star})\right\|}_2 \leq \sqrt{\left(1- (2\gamma - \gamma^2 \alpha_{4k}) \rho^-_{4k} \right)} {\left\|b^t-w^{\star}\right\|}_2 + \frac{\gamma}{\min_i Mp(i)} {\mathbb{E}}_i {\left\|{\mathcal{P}}_{\widehat{\Gamma}} \nabla f_i(w^{\star})\right\|}_2.
\end{split}$$
We now apply this inequality to get $$\begin{split}
{\left\|b^t - w^{\star}\right\|}_2^2 &= {\left\|{\mathcal{P}}_{\widehat{\Gamma}}(b^t - w^{\star})\right\|}_2^2 + {\left\|{\mathcal{P}}_{\widehat{\Gamma}^c} (b^t - w^{\star}) \right\|}_2^2 \\
&\leq \left( \sqrt{\left(1- (2\gamma - \gamma^2 \alpha_{4k}) \rho^-_{4k} \right)} {\left\|b^t-w^{\star}\right\|}_2 + \frac{\gamma}{\min_i Mp(i)} {\mathbb{E}}_i {\left\|{\mathcal{P}}_{\widehat{\Gamma}} \nabla f_i(w^{\star})\right\|}_2 \right)^2 \\
&\quad+ {\left\|{\mathcal{P}}_{\widehat{\Gamma}^c} (b^t - w^{\star})\right\|}_2^2.
\end{split}$$
Solving the quadratic inequality $a x^2 - 2bx - c \leq 0$ with $x = {\left\|b^t - w^{\star}\right\|}_2$, $a = (2\gamma - \gamma^2 \alpha_{4k})\rho^-_{4k}$, $b = \sqrt{\left(1- (2\gamma - \gamma^2 \alpha_{4k}) \rho^-_{4k} \right)} \frac{\gamma}{\min_i Mp(i)} {\mathbb{E}}_i {\left\|{\mathcal{P}}_{\widehat{\Gamma}} \nabla f_i(w^{\star})\right\|}_2$, and $c = (\frac{\gamma}{\min_i Mp(i)} {\mathbb{E}}_i {\left\|{\mathcal{P}}_{\widehat{\Gamma}} \nabla f_i(w^{\star})\right\|}_2)^2 + {\left\|{\mathcal{P}}_{\widehat{\Gamma}^c} (b^t - w^{\star})\right\|}_2^2$, we get $${\left\|b^t - w^{\star}\right\|}_2 \leq \frac{b + \sqrt{b^2+ac}}{a} \leq \sqrt{\frac{c}{a}} + \frac{2b}{a},$$ where the last step uses $\sqrt{b^2+ac} \leq b + \sqrt{ac}$. Replacing these quantities $a$, $b$, and $c$ yields $$\begin{split}
\nonumber
{\left\|b^t - w^{\star}\right\|}_2 &\leq \frac{1}{\sqrt{ (2\gamma - \gamma^2 \alpha_{4k}) \rho^-_{4k}}} {\left\|{\mathcal{P}}_{\widehat{\Gamma}^c} (b^t - w^{\star})\right\|}_2 \\
&\quad+ \left( \frac{1}{\sqrt{ (2\gamma - \gamma^2 \alpha_{4k}) \rho^-_{4k}}} + \frac{2\sqrt{\left(1- (2\gamma - \gamma^2 \alpha_{4k}) \rho^-_{4k} \right)}}{ (2\gamma - \gamma^2 \alpha_{4k}) \rho^-_{4k}}\right) \frac{\gamma}{\min_i Mp(i)} {\mathbb{E}}_i {\left\|{\mathcal{P}}_{\widehat{\Gamma}} \nabla f_i(w^{\star})\right\|}_2 \\
&\leq \frac{1}{\sqrt{ (2\gamma - \gamma^2 \alpha_{4k}) \rho^-_{4k}}} {\left\|{\mathcal{P}}_{\widehat{\Gamma}^c} (b^t - w^{\star})\right\|}_2 + \frac{3}{ (2\gamma - \gamma^2 \alpha_{4k}) \rho^-_{4k}} \frac{\gamma}{\min_i Mp(i)} {\mathbb{E}}_i {\left\|{\mathcal{P}}_{\widehat{\Gamma}} \nabla f_i(w^{\star})\right\|}_2.
\end{split}$$ Choosing $\gamma$ to maximize $(2\gamma - \gamma^2 \alpha_{4k})$, we get $\gamma = \frac{1}{\alpha_{4k}}$. Plugging this value into the above inequality and taking the expectation with respect to $I_t$ (notice that the random variable $b^t$ is determined by the random indices $i_0,...,i_t$), we obtain $$\begin{split}
\nonumber
{\mathbb{E}}_{I_t} {\left\|b^t - w^{\star}\right\|}_2 &\leq \sqrt{\frac{\alpha_{4k}}{\rho^-_{4k}}} {\mathbb{E}}_{I_t} {\left\|{\mathcal{P}}_{\widehat{\Gamma}^c} (b^t - w^{\star})\right\|}_2 + \frac{3}{\rho^-_{4k}} \frac{1}{\min_i Mp(i)} {\mathbb{E}}_{i, I_t} {\left\|{\mathcal{P}}_{\widehat{\Gamma}} \nabla f_i(w^{\star})\right\|}_2.
\end{split}$$ This completes the proof.
\[Proof of Lemma \[lem::bound L2 P\_Gamma\^c b\^t-w\*\]\] Since $b^t$ and $w^t$ are in $\text{span}({{\mathcal{D}}_{\widehat{\Gamma}}})$, we have ${\mathcal{P}}_{\widehat{\Gamma}^c} b^t = 0$ and ${\mathcal{P}}_{\widehat{\Gamma}^c} w^t = 0$. Therefore, $$\label{inq::1st intemediate result}
{\left\|{\mathcal{P}}_{\widehat{\Gamma}^c} (b^t - w^{\star})\right\|}_2 = {\left\|{\mathcal{P}}_{\widehat{\Gamma}^c} (w^t - w^{\star})\right\|}_2 \leq {\left\|{\mathcal{P}}_{{\Gamma}^c} (w^t - w^{\star})\right\|}_2 = {\left\|\Delta - {\mathcal{P}}_{\Gamma} \Delta\right\|}_2,$$ where we denote $\Delta \triangleq w^{\star} - w^t$. The goal is to estimate ${\left\|\Delta - {\mathcal{P}}_{\Gamma} \Delta\right\|}_2$. Let $R \triangleq \operatorname{supp}_{{\mathcal{D}}} (\Delta)$; applying the ${\mathcal{D}}$-RSC, we have $$\begin{split}
\label{inq::proof Lemma 3-1st inequality}
F(w^{\star}) - F(w^t) - \frac{\rho^-_{4k}}{2} {\left\|w^{\star} - w^t\right\|}_2^2 &\geq {\left<\nabla F(w^t),w^{\star} - w^t\right>} \\
&= {\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} \nabla f_{i_t}(w^t),\Delta\right>} \\
&= {\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} {\mathcal{P}}_R \nabla f_{i_t}(w^t),\Delta\right>} \\
&\geq - {\mathbb{E}}_{i_t} {\left\|\frac{1}{Mp(i_t)} {\mathcal{P}}_R \nabla f_{i_t}(w^t)\right\|}_2 {\left\|\Delta\right\|}_2.
\end{split}$$ The right-hand side can be lower bounded by applying inequality (\[eqt::approximation consequence 2\]), which yields ${\left\|{\mathcal{P}}_R \nabla f_{i_t}(w^t)\right\|}_2 \leq {\left\|{\mathcal{P}}_{\Gamma} \nabla f_{i_t}(w^t)\right\|}_2 + \frac{\sqrt{\eta^2_1-1}}{\eta_1} {\left\|{\mathcal{P}}_{\Gamma^c} \nabla f_{i_t}(w^t)\right\|}_2$. We now apply this observation to the above inequality. Denoting $z \triangleq -\frac{{\mathcal{P}}_{\Gamma} \nabla f_{i_t}(w^t)}{{\left\|{\mathcal{P}}_{\Gamma} \nabla f_{i_t}(w^t)\right\|}_2} {\left\|\Delta\right\|}_2$ and $x \triangleq \frac{{\mathcal{P}}_{\Gamma^c} \nabla f_{i_t}(w^t)}{{\left\|{\mathcal{P}}_{\Gamma^c} \nabla f_{i_t}(w^t)\right\|}_2} {\left\|\Delta\right\|}_2$, we have $$\begin{split}
\label{inq::proof Lemma 3-2nd inequality}
&- {\mathbb{E}}_{i_t} {\left\|\frac{1}{Mp(i_t)} {\mathcal{P}}_R \nabla f_{i_t}(w^t)\right\|}_2 {\left\|\Delta\right\|}_2 \\
&\geq -{\mathbb{E}}_{i_t} {\left\|\frac{1}{Mp(i_t)} {\mathcal{P}}_{\Gamma} \nabla f_{i_t}(w^t)\right\|}_2 {\left\|\Delta\right\|}_2 - \frac{\sqrt{\eta^2_1-1}}{\eta_1} {\mathbb{E}}_{i_t} {\left\|\frac{1}{Mp(i_t)} {\mathcal{P}}_{\Gamma^c} \nabla f_{i_t}(w^t)\right\|}_2 {\left\|\Delta\right\|}_2 \\
&= {\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} {\mathcal{P}}_{\Gamma} \nabla f_{i_t}(w^t), z\right>} - \frac{\sqrt{\eta^2_1-1}}{\eta_1} {\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} {\mathcal{P}}_{\Gamma^c} \nabla f_{i_t}(w^t), x\right>} \\
&= {\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} \nabla f_{i_t}(w^t), z\right>} - {\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} \nabla f_{i_t}(w^t), \frac{\sqrt{\eta^2_1-1}}{\eta_1} x\right>} \\
&= {\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} \nabla f_{i_t}(w^t), z - \frac{\sqrt{\eta^2_1-1}}{\eta_1} x\right>},
\end{split}$$ where the second equality follows from $\operatorname{supp}_{{\mathcal{D}}}(z) = \Gamma$ and ${\left<{\mathcal{P}}_{\Gamma}r ,z\right>} = {\left<r,{\mathcal{P}}_{\Gamma}z\right>} = {\left<r,z\right>}$. Denote $y \triangleq z - \frac{\sqrt{\eta^2_1-1}}{\eta_1} x$ and combine (\[inq::proof Lemma 3-1st inequality\]) and (\[inq::proof Lemma 3-2nd inequality\]) to arrive at $$\label{inq::proof Lemma 3-3rd inequality}
F(w^{\star}) - F(w^t) - \frac{\rho^-_{4k}}{2} {\left\|\Delta\right\|}_2^2 \geq {\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} \nabla f_{i_t}(w^t), y\right>}.$$
We now use the ${\mathcal{D}}$-RSS property to lower bound the right-hand side of the above inequality. Recall that from the definition of ${\mathcal{D}}$-RSS, we can show that $$\begin{split}
\nonumber
{\left<\nabla f_{i_t}(w^t), y\right>} &\geq f_{i_t}(w^t+ y) - f_{i_t}(w^t) - \frac{\rho^+_{4k}(i_t)}{2} {\left\|y\right\|}_2^2.
\end{split}$$ Multiplying both sides by $\frac{1}{Mp(i_t)}$, taking the expectation with respect to the index $i_t$, and recalling that ${\mathbb{E}}_{i_t} \frac{1}{Mp(i_t)} f_{i_t} (w^t) = F(w^t)$, we have $${\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} \nabla f_{i_t}(w^t), y\right>} \geq {\mathbb{E}}_{i_t} \frac{1}{Mp(i_t)} f_{i_t }(w^t + y) - F(w^t) - \frac{1}{2} {\mathbb{E}}_{i_t} \frac{\rho^+_{4k}(i_t)}{Mp(i_t)} {\left\|y\right\|}_2^2.$$
Combining with inequality (\[inq::proof Lemma 3-3rd inequality\]) and removing the common terms yields $$\begin{split}
\nonumber
\frac{1}{2} {\mathbb{E}}_{i_t} \frac{\rho^+_{4k}(i_t)}{Mp(i_t)} {\left\|y\right\|}_2^2 - \frac{\rho^-_{4k}}{2} {\left\|\Delta\right\|}_2^2 &\geq {\mathbb{E}}_{i_t} \frac{1}{Mp(i_t)} f_{i_t }(w^t+ y) - F(w^{\star}) \\
&= {\mathbb{E}}_{i_t} \frac{1}{Mp(i_t)} \left( f_{i_t }(w^t+ y) - f_{i_t }(w^{\star}) \right)
\end{split}$$ where the equality follows from $F(w^{\star}) = {\mathbb{E}}_{i_t} \frac{1}{Mp(i_t)} f_{i_t }(w^{\star})$. Applying the ${\mathcal{D}}$-RSC one more time to the right-hand side and then taking the expectation, we get $$\label{inq::proof Lemma 3-quadratic inequality}
\begin{split}
&\frac{1}{2} {\mathbb{E}}_{i_t} \frac{\rho^+_{4k}(i_t)}{Mp(i_t)} {\left\|y\right\|}_2^2 - \frac{\rho^-_{4k}}{2} {\left\|\Delta\right\|}_2^2 \\
&\geq \frac{\rho^-_{4k}}{2} {\mathbb{E}}_{i_t} \frac{1}{Mp(i_t)} {\left\|w^t+ y-w^{\star}\right\|}_2^2 + {\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} \nabla f_{i_t}(w^{\star}), w^t+ y-w^{\star}\right>} \\
&= \frac{\rho^-_{4k}}{2} {\mathbb{E}}_{i_t} \frac{1}{Mp(i_t)} {\left\|\Delta - y\right\|}_2^2 + {\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} {\mathcal{P}}_{\Gamma \cup R} \nabla f_{i_t}(w^{\star}), y-\Delta\right>} \\
&\geq \frac{\rho^-_{4k}}{2} {\mathbb{E}}_{i_t} \frac{1}{Mp(i_t)} {\left\|\Delta - y\right\|}_2^2 - {\mathbb{E}}_{i_t} \frac{1}{Mp(i_t)} {\left\|{\mathcal{P}}_{\Gamma \cup R} \nabla f_{i_t} (w^{\star})\right\|}_2 {\left\|\Delta - y\right\|}_2 \\
&\geq \frac{\rho^-_{4k}}{2\max_{i_t} Mp(i_t)} {\mathbb{E}}_{i_t} {\left\|\Delta - y\right\|}_2^2 - \frac{\max_{i_t} {\left\|{\mathcal{P}}_{\Gamma \cup R} \nabla f_{i_t} (w^{\star})\right\|}_2}{\min_{i_t} Mp(i_t)} {\mathbb{E}}_{i_t}{\left\|\Delta - y\right\|}_2 \\
&\geq \frac{\rho^-_{4k}}{2\max_{i_t} Mp(i_t)} \left({\mathbb{E}}_{i_t} {\left\|\Delta - y\right\|}_2 \right)^2 - \frac{\max_{i_t} {\left\|{\mathcal{P}}_{\Gamma \cup R} \nabla f_{i_t} (w^{\star})\right\|}_2}{\min_{i_t} Mp(i_t)} {\mathbb{E}}_{i_t}{\left\|\Delta - y\right\|}_2 .
\end{split}$$
Solving the quadratic inequality $au^2 - 2bu -c\leq 0$ with $u = {\mathbb{E}}_{i_t} {\left\|\Delta - y\right\|}_2$, $a = \frac{\rho^-_{4k}}{\max_{i_t} Mp(i_t)}$, $b = \frac{\max_{i_t} {\left\|{\mathcal{P}}_{\Gamma \cup R} \nabla f_{i_t} (w^{\star})\right\|}_2}{\min_{i_t} Mp(i_t)}$, and $c = {\mathbb{E}}_{i_t} \frac{\rho^+_{4k}(i_t) }{Mp(i_t)} {\left\|y\right\|}_2^2 - \rho^-_{4k} {\left\|\Delta\right\|}_2^2$, we obtain $$\label{inq::proof Lemma 3-4th inequality}
{\mathbb{E}}_{i_t} {\left\|\Delta - y\right\|}_2 \leq \sqrt{\frac{c}{a}} + \frac{2b}{a}.$$
Now plugging in the definition of $y$, we can obtain a lower bound on the left-hand side. We have $$\begin{split}
\nonumber
{\left\|\Delta - y \right\|}_2 &= {\left\|\Delta - z + \frac{\sqrt{\eta^2_1-1}}{\eta_1} x\right\|}_2 \\
&\geq {\left\|\Delta - z\right\|}_2 - {\left\|\frac{\sqrt{\eta^2_1-1}}{\eta_1} x\right\|}_2 \\
&= {\left\|\Delta + \frac{{\left\|\Delta\right\|}_2}{{\left\|{\mathcal{P}}_{\Gamma} \nabla f_{i_t}(w^t)\right\|}_2} {\mathcal{P}}_{\Gamma} \nabla f_{i_t}(w^t) \right\|}_2 - {\left\|\frac{\sqrt{\eta^2_1-1}}{\eta_1} \frac{{\mathcal{P}}_{\Gamma^c} \nabla f_{i_t}(w^t)}{{\left\|{\mathcal{P}}_{\Gamma^c} \nabla f_{i_t}(w^t)\right\|}_2} {\left\|\Delta\right\|}_2\right\|}_2 \\
&\geq {\left\|\Delta - {\mathcal{P}}_{\Gamma} \Delta\right\|}_2 - \frac{\sqrt{\eta^2_1-1}}{\eta_1} {\left\|\Delta\right\|}_2,
\end{split}$$ where the first inequality follows from the triangle inequality; the last inequality follows from the observation that for any vector $v$, ${\left\|\Delta - {\mathcal{P}}_{\Gamma} v \right\|}_2 \geq {\left\|\Delta - {\mathcal{P}}_{\Gamma} \Delta\right\|}_2$, since ${\mathcal{P}}_{\Gamma} \Delta$ is the closest point to $\Delta$ in the range of ${\mathcal{P}}_{\Gamma}$. Here, $v = -\frac{{\left\|\Delta\right\|}_2}{ {\left\|{\mathcal{P}}_{\Gamma} \nabla f_{i_t}(w^t)\right\|}_2} \nabla f_{i_t}(w^t)$. Therefore, $$\nonumber
{\mathbb{E}}_{i_t} {\left\|\Delta - y \right\|}_2 \geq {\mathbb{E}}_{i_t} {\left\|\Delta - {\mathcal{P}}_{\Gamma} \Delta\right\|}_2 - \frac{\sqrt{\eta^2_1-1}}{\eta_1} {\left\|\Delta\right\|}_2.$$ Plugging this inequality into (\[inq::proof Lemma 3-4th inequality\]), we get $$\label{inq::proof Lemma 3-5th inequality}
{\mathbb{E}}_{i_t} {\left\|\Delta - {\mathcal{P}}_{\Gamma} \Delta\right\|}_2 \leq \sqrt{\frac{c}{a}} + \frac{2b}{a} + \frac{\sqrt{\eta^2_1-1}}{\eta_1} {\left\|\Delta\right\|}_2.$$
The last step is to substitute the values of $a$, $b$, and $c$ defined above into this inequality. From the definition of $y$ together with the observation that $x$ is orthogonal to $z$, we have $$\begin{split}
\nonumber
{\left\|y\right\|}^2_2 &= {\left\|z\right\|}^2_2 + \frac{\eta_1^2-1}{\eta^2_1} {\left\|x\right\|}^2_2 \\
&= {\left\|\frac{{\mathcal{P}}_{\Gamma} \nabla f_{i_t}(w^t)}{{\left\|{\mathcal{P}}_{\Gamma} \nabla f_{i_t}(w^t)\right\|}_2} {\left\|\Delta\right\|}_2\right\|}^2_2 + \frac{\eta_1^2-1}{\eta^2_1} {\left\|\frac{{\mathcal{P}}_{\Gamma^c} \nabla f_{i_t}(w^t)}{{\left\|{\mathcal{P}}_{\Gamma^c} \nabla f_{i_t}(w^t)\right\|}_2} {\left\|\Delta\right\|}_2\right\|}^2_2 \\
&= {\left\|\Delta\right\|}^2_2 + \frac{\eta_1^2-1}{\eta^2_1} {\left\|\Delta\right\|}^2_2 = \frac{2\eta_1^2-1}{\eta^2_1} {\left\|\Delta\right\|}^2_2.
\end{split}$$ Thus, the quantity $c$ defined above is bounded by $$\begin{split}
\nonumber
c &= {\mathbb{E}}_{i_t} \frac{\rho^+_{4k}(i_t)}{Mp(i_t)} \frac{2\eta_1^2-1}{\eta^2_1} {\left\|\Delta\right\|}_2^2 - \rho^-_{4k} {\left\|\Delta\right\|}_2^2 \\
&\leq \max_{i_t} \rho^+_{4k}(i_t) {\mathbb{E}}_{i_t} \frac{1}{Mp(i_t)} \frac{2\eta_1^2-1}{\eta^2_1} {\left\|\Delta\right\|}_2^2 - \rho^-_{4k} {\left\|\Delta\right\|}_2^2 \\
&= \left(\frac{2\eta_1^2-1}{\eta^2_1} \rho^+_{4k} - \rho^-_{4k} \right) {\left\|\Delta\right\|}_2^2.
\end{split}$$
Now combining this inequality with (\[inq::proof Lemma 3-5th inequality\]) and plugging in the values of $a$ and $b$, we obtain $$\begin{split}
\nonumber
{\mathbb{E}}_{i_t} {\left\|\Delta - {\mathcal{P}}_{\Gamma} \Delta\right\|}_2 &\leq \sqrt{\frac{c}{a}} + \frac{2b}{a} + \frac{\sqrt{\eta^2_1-1}}{\eta_1} {\left\|\Delta\right\|}_2 \\
&\leq \max_{i_t} \sqrt{Mp(i_t)} \sqrt{\frac{\frac{2\eta_1^2-1}{\eta^2_1}\rho^+_{4k} - \rho^-_{4k}}{\rho^-_{4k}}} {\left\|\Delta\right\|}_2 \\
&\quad+ \frac{2\max_{i_t} p(i_t)}{\rho^-_{4k} \min_{i_t} p(i_t)} \max_{|\Omega| \leq 4k, i_t \in [M]}{\left\|{\mathcal{P}}_{\Omega} \nabla f_{i_t} (w^{\star})\right\|}_2 + \frac{\sqrt{\eta^2_1-1}}{\eta_1} {\left\|\Delta\right\|}_2 \\
&= \left( \max_{i_t} \sqrt{Mp(i_t)} \sqrt{\frac{\frac{2\eta_1^2-1}{\eta^2_1} \rho^+_{4k} - \rho^-_{4k}}{\rho^-_{4k}}} + \frac{\sqrt{\eta^2_1-1}}{\eta_1} \right) {\left\|\Delta\right\|}_2 \\
&\quad+ \frac{2\max_{i_t} p(i_t)}{\rho^-_{4k} \min_{i_t} p(i_t)} \max_{|\Omega| \leq 4k, i_t \in [M]}{\left\|{\mathcal{P}}_{\Omega} \nabla f_{i_t} (w^{\star})\right\|}_2
\end{split}$$ The proof follows by combining this inequality with (\[inq::1st intemediate result\]).
Proof of Theorem \[thm::StoIHT with inexact gradients\] {#subsection::proof of StoIHT with inexact gradients}
-------------------------------------------------------
At the $t$-th iteration, denote $g_{i_t} (w) \triangleq \nabla f_{i_t}(w) + e^t$. The proof of Theorem \[thm::StoIHT with inexact gradients\] is essentially the same as that of Theorem \[thm::StoIHT\] with $\nabla f_{i_t} (w^t)$ and $\nabla f_{i_t} (w^{\star})$ replaced by $g_{i_t} (w^t)$ and $g_{i_t} (w^{\star})$, respectively. Following the same proof as in Theorem \[thm::StoIHT\], we arrive at $$\begin{split}
\nonumber
&{\mathbb{E}}_{i_t | I_{t-1}} {\left\|w^{t+1} - w^{\star}\right\|}_2 \\
&\leq 2\left( {\mathbb{E}}_{i_t | I_{t-1}} {\left\|w^t - w^{\star} - \frac{\gamma}{Mp(i_t)} {\mathcal{P}}_{\Omega} \left( g_{i_t}(w^t) - g_{i_t}(w^{\star}) \right)\right\|}_2 + {\mathbb{E}}_{i_t | I_{t-1}} {\left\|\frac{\gamma}{Mp(i_t)} {\mathcal{P}}_{\Omega} g_{i_t} (w^{\star})\right\|}_2 \right) \\
&\quad+ \sqrt{\eta^2-1} \left( {\mathbb{E}}_{i_t | I_{t-1}} {\left\|w^t - w^{\star} - \frac{\gamma}{Mp(i_t)} \left( g_{i_t} (w^t) - g_{i_t} (w^{\star}) \right)\right\|}_2 + {\mathbb{E}}_{i_t | I_{t-1}} {\left\|\frac{\gamma}{Mp(i_t)} g_{i_t} (w^{\star})\right\|}_2 \right) .
\end{split}$$ Noticing that $g_{i_t}(w^t) - g_{i_t}(w^{\star}) = \nabla f_{i_t}(w^t) - \nabla f_{i_t}(w^{\star})$, we can apply inequality (\[inq::1st key observation\]) of Corollary \[cor::StoIHT 1st corollary\] to the first term and (\[inq::2nd key observation\]) to the third term of the summation to obtain $$\begin{split}
\nonumber
&{\mathbb{E}}_{i_t | I_{t-1}} {\left\|w^{t+1} - w^{\star}\right\|}_2 \\
&\leq 2 \sqrt{\left(1- (2\gamma - \gamma^2 \alpha_{3k}) \rho^-_{3k} \right)} {\left\|w^t-w^{\star}\right\|}_2 + 2 \frac{\gamma}{\min_{i_t} Mp(i_t)} {\mathbb{E}}_{i_t}{\left\|{\mathcal{P}}_{\Omega} g_{i_t} (w^{\star})\right\|}_2 \\
&\quad+ \sqrt{\eta^2-1} \sqrt{1 + \gamma^2 \alpha_{3k} \overline{\rho}^+_{3k} - 2\gamma \rho^-_{3k} } {\left\|w^t-w^{\star}\right\|}_2 + \sqrt{\eta^2-1} \frac{\gamma}{\min_{i_t} Mp(i_t)} {\mathbb{E}}_{i_t} {\left\|g_{i_t} (w^{\star})\right\|}_2 \\
&= \left( 2 \sqrt{\left(1- (2\gamma - \gamma^2 \alpha_{3k}) \rho^-_{3k} \right)} + \sqrt{(\eta^2-1)\left(1 + \gamma^2 \alpha_{3k} \overline{\rho}^+_{3k} - 2\gamma \rho^-_{3k} \right)} \right) {\left\|w^t - w^{\star}\right\|}_2 \\
&\quad+ \frac{\gamma}{\min_{i_t} Mp(i_t)} \left( 2 {\mathbb{E}}_{i_t} {\left\|{\mathcal{P}}_{\Omega} g_{i_t} (w^{\star})\right\|}_2 + \sqrt{\eta^2 - 1} {\mathbb{E}}_{i_t} {\left\|g_{i_t} (w^{\star})\right\|}_2 \right) \\
&\leq \kappa {\left\|w^t -w^{\star}\right\|}_2 + (\sigma_{w^{\star}} + \sigma_{e^t} ),
\end{split}$$ where $\kappa$ and $\sigma_{w^{\star}}$ are defined in (\[eqt::kappa of StoIHT\]) and (\[eqt::sigma of StoIHT\]) and $$\sigma_{e^t} \triangleq \frac{\gamma}{\min_{i} Mp(i)} \left( 2 \max_{|\Omega| \leq 3k} {\left\|{\mathcal{P}}_{\Omega} e^t\right\|}_2 + \sqrt{\eta^2 - 1} {\left\|e^t\right\|}_2 \right).$$ Taking the expectation on both sides with respect to $I_{t-1}$ yields $${\mathbb{E}}_{I_t} {\left\|w^{t+1}-w^{\star}\right\|}_2 \leq \kappa {\mathbb{E}}_{I_{t-1}} {\left\|w^t - w^{\star}\right\|}_2 + (\sigma_{w^{\star}} + \sigma_{e}),$$ where $\sigma_e = \max_{j \in [t]} \sigma_{e^j}$. Applying this result recursively completes the proof.
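For the reader's convenience, the recursion can be unrolled explicitly; a short sketch, valid whenever the contraction factor satisfies $\kappa < 1$: $${\mathbb{E}}_{I_t} {\left\|w^{t+1} - w^{\star}\right\|}_2 \leq \kappa^{t+1} {\left\|w^0 - w^{\star}\right\|}_2 + (\sigma_{w^{\star}} + \sigma_e) \sum_{j=0}^{t} \kappa^j \leq \kappa^{t+1} {\left\|w^0 - w^{\star}\right\|}_2 + \frac{\sigma_{w^{\star}} + \sigma_e}{1-\kappa},$$ which is the claimed linear convergence up to a noise floor determined by $\sigma_{w^{\star}}$ and $\sigma_e$.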
Proof of Theorem \[thm::StoGradMP with inexact gradients\]
----------------------------------------------------------
The analysis of Theorem \[thm::StoGradMP with inexact gradients\] follows closely that of Theorem \[thm::StoGradMP\]. In particular, we will apply Lemmas \[lem::bound l2 w\^(t+1)-w\*\], \[lem::bound l2 b\^t - w\*\], and the following lemma to obtain the proof.
\[lem::bound L2 P\_Gamma\^c b\^t-w\*-inexact gradient\] Denote $\widehat{\Gamma}$ as the set obtained from the $t$-th iteration. Then, $$\begin{split}
{\mathbb{E}}_{i_t} {\left\|{\mathcal{P}}_{\widehat{\Gamma}^c} (b^t - w^{\star})\right\|}_2 &\leq \left( \max_i \sqrt{Mp(i)} \sqrt{\frac{\frac{2\eta_1^2-1}{\eta^2_1}\rho^+_{4k} - \rho^-_{4k}}{\rho^-_{4k}}} + \frac{\sqrt{\eta^2_1-1}}{\eta_1} \right) {\left\|w^t-w^{\star}\right\|}_2 + \sigma_2,
\end{split}$$ where $$\sigma_2 \triangleq \frac{2 \max_{i_t} p(i_t)}{\rho^-_{4k} \min_{i_t} p(i_t)} \left( \max_{|\Omega| \leq 4k, i_t \in [M]}{\left\|{\mathcal{P}}_{\Omega} \nabla f_{i_t} (w^{\star})\right\|}_2 + \max_t {\left\|e^t\right\|}_2 \right).$$
Given this lemma, the proof is exactly the same as that of Theorem \[thm::StoGradMP\]. Now, we proceed to prove Lemma \[lem::bound L2 P\_Gamma\^c b\^t-w\*-inexact gradient\]. Denote $\Delta = w^{\star} - w^t$ and $g_{i_t}(w) \triangleq \nabla f_{i_t} (w) + e^t$. Similar to the analysis of Lemma \[lem::bound L2 P\_Gamma\^c b\^t-w\*\], we start by applying the ${\mathcal{D}}$-RSC, $$\begin{split}
\label{inq::proof Lemma 3-1st inequality-inexact gradient}
F(w^{\star}) &- F(w^t) - \frac{\rho^-_{4k}}{2} {\left\|w^{\star} - w^t\right\|}_2^2 \\
&\geq {\left<\nabla F(w^t),w^{\star} - w^t\right>} \\
&= {\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} \nabla f_{i_t}(w^t),\Delta\right>} \\
&= {\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} \left({\mathcal{P}}_R g_{i_t}(w^t) - {\mathcal{P}}_R e^t\right),\Delta\right>} \\
&\geq - {\mathbb{E}}_{i_t} {\left\|\frac{1}{Mp(i_t)} {\mathcal{P}}_R g_{i_t} (w^t)\right\|}_2 {\left\|\Delta\right\|}_2 - {\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} {\mathcal{P}}_R e^t ,\Delta\right>} .
\end{split}$$ Again, applying inequality (\[eqt::approximation consequence 2\]) allows us to write ${\left\|{\mathcal{P}}_R g_{i_t}(w^t)\right\|}_2 \leq {\left\|{\mathcal{P}}_{\Gamma} g_{i_t}(w^t)\right\|}_2 + \frac{\sqrt{\eta^2_1-1}}{\eta_1} {\left\|{\mathcal{P}}_{\Gamma^c} g_{i_t}(w^t)\right\|}_2$. We now apply this observation to the above inequality. Denoting $z \triangleq -\frac{{\mathcal{P}}_{\Gamma} g_{i_t}(w^t)}{{\left\|{\mathcal{P}}_{\Gamma} g_{i_t}(w^t)\right\|}_2} {\left\|\Delta\right\|}_2$ and $x \triangleq \frac{{\mathcal{P}}_{\Gamma^c} g_{i_t}(w^t)}{{\left\|{\mathcal{P}}_{\Gamma^c} g_{i_t}(w^t)\right\|}_2} {\left\|\Delta\right\|}_2$, and following the same procedure as in formula (\[inq::proof Lemma 3-2nd inequality\]) with $\nabla f_{i_t}(w^t)$ replaced by $g_{i_t}(w^t)$, we arrive at $$\begin{split}
\label{inq::proof Lemma 3-2nd inequality-inexact gradient}
&- {\mathbb{E}}_{i_t} {\left\|\frac{1}{Mp(i_t)} {\mathcal{P}}_R g_{i_t}(w^t)\right\|}_2 {\left\|\Delta\right\|}_2 \\
&\geq -{\mathbb{E}}_{i_t} {\left\|\frac{1}{Mp(i_t)} {\mathcal{P}}_{\Gamma} g_{i_t}(w^t)\right\|}_2 {\left\|\Delta\right\|}_2 - \frac{\sqrt{\eta^2_1-1}}{\eta_1} {\mathbb{E}}_{i_t} {\left\|\frac{1}{Mp(i_t)} {\mathcal{P}}_{\Gamma^c} g_{i_t}(w^t)\right\|}_2 {\left\|\Delta\right\|}_2 \\
&= {\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} g_{i_t}(w^t), z - \frac{\sqrt{\eta^2_1-1}}{\eta_1} x\right>} \\
&= {\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} \nabla f_{i_t}(w^t), z - \frac{\sqrt{\eta^2_1-1}}{\eta_1} x\right>} + {\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} e^t, z - \frac{\sqrt{\eta^2_1-1}}{\eta_1} x\right>}.
\end{split}$$ Denoting $y \triangleq z - \frac{\sqrt{\eta^2_1-1}}{\eta_1} x$ and combining (\[inq::proof Lemma 3-1st inequality-inexact gradient\]) and (\[inq::proof Lemma 3-2nd inequality-inexact gradient\]), we get $$\label{inq::proof Lemma 3-3rd inequality-inexact gradient}
\begin{split}
F(w^{\star}) - F(w^t) - \frac{\rho^-_{4k}}{2} {\left\|\Delta\right\|}_2^2 &\geq {\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} \nabla f_{i_t}(w^t), y\right>} - {\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} e^t, \Delta - y\right>} \\
&\geq {\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} \nabla f_{i_t}(w^t), y\right>} - \frac{1}{\min_{i_t} Mp(i_t)} {\left\|e^t\right\|}_2 {\mathbb{E}}_{i_t} {\left\|\Delta - y\right\|}_2,
\end{split}$$ where the last inequality follows from the Cauchy-Schwarz inequality. We now use the ${\mathcal{D}}$-RSS property to lower bound the right-hand side of the above inequality. Recall that from the definition of ${\mathcal{D}}$-RSS, we can show that $$\begin{split}
\nonumber
{\left<\nabla f_{i_t}(w^t), y\right>} &\geq f_{i_t}(w^t+ y) - f_{i_t}(w^t) - \frac{\rho^+_{4k}(i_t)}{2} {\left\|y\right\|}_2^2.
\end{split}$$ Multiply both sides with $\frac{1}{Mp(i_t)}$ and take the expectation on both sides with respect to the index $i_t$ and recall that ${\mathbb{E}}_{i_t} \frac{1}{Mp(i_t)} f_{i_t} (w^t) = F(w^t)$, we have $${\mathbb{E}}_{i_t} {\left<\frac{1}{Mp(i_t)} \nabla f_{i_t}(w^t), y\right>} \geq {\mathbb{E}}_{i_t} \frac{1}{Mp(i_t)} f_{i_t }(w^t + y) - F(w^t) - \frac{1}{2} {\mathbb{E}}_{i_t} \frac{\rho^+_{4k}(i_t)}{Mp(i_t)} {\left\|y\right\|}_2^2.$$
Combining with inequality (\[inq::proof Lemma 3-3rd inequality-inexact gradient\]) and removing the common terms yields $$\begin{split}
\nonumber
\frac{1}{2}{\mathbb{E}}_{i_t} &\frac{\rho^+_{4k}(i_t)}{Mp(i_t)} {\left\|y\right\|}_2^2 - \frac{\rho^-_{4k}}{2} {\left\|\Delta\right\|}_2^2 \\
&\geq {\mathbb{E}}_{i_t} \frac{1}{Mp(i_t)} f_{i_t }(w^t+ y) - F(w^{\star}) - \frac{1}{\min_{i_t} Mp(i_t)} {\left\|e^t\right\|}_2 {\mathbb{E}}_{i_t} {\left\|\Delta - y\right\|}_2\\
&= {\mathbb{E}}_{i_t} \frac{1}{Mp(i_t)} \left( f_{i_t }(w^t+ y) - f_{i_t }(w^{\star}) \right) - \frac{1}{\min_{i_t} Mp(i_t)} {\left\|e^t\right\|}_2 {\mathbb{E}}_{i_t} {\left\|\Delta - y\right\|}_2.
\end{split}$$ Applying the ${\mathcal{D}}$-RSC one more time to the right-hand side and following a procedure similar to (\[inq::proof Lemma 3-quadratic inequality\]), now with an additional term involving the gradient noise, we get $$\begin{split}
\frac{1}{2}{\mathbb{E}}_{i_t} \frac{\rho^+_{4k}(i_t)}{Mp(i_t)} {\left\|y\right\|}_2^2 & - \frac{\rho^-_{4k}}{2} {\left\|\Delta\right\|}_2^2 \\
&\geq \frac{\rho^-_{4k}}{2\max_{i_t} Mp(i_t)} \left({\mathbb{E}}_{i_t} {\left\|\Delta - y\right\|}_2 \right)^2 - \frac{\max_{i_t} {\left\|{\mathcal{P}}_{\Gamma \cup R} \nabla f_{i_t} (w^{\star})\right\|}_2}{\min_{i_t} Mp(i_t)} {\mathbb{E}}_{i_t}{\left\|\Delta - y\right\|}_2 \\
&\quad- \frac{1}{\min_{i_t} Mp(i_t)} {\left\|e^t\right\|}_2 {\mathbb{E}}_{i_t} {\left\|\Delta - y\right\|}_2.
\end{split}$$
Solving the quadratic inequality $au^2 - 2bu -c\leq 0$ where $u = {\mathbb{E}}_{i_t} {\left\|\Delta - y\right\|}_2$, $a = \frac{\rho^-_{4k}}{\max_{i_t} Mp(i_t)}$, $b = \frac{\max_{i_t} {\left\|{\mathcal{P}}_{\Gamma \cup R} \nabla f_{i_t} (w^{\star})\right\|}_2}{\min_{i_t} Mp(i_t)}+\frac{1}{\min_{i_t} Mp(i_t)} {\left\|e^t\right\|}_2$, and $c = {\mathbb{E}}_{i_t} \frac{\rho^+_{4k}(i_t)}{Mp(i_t)} {\left\|y\right\|}_2^2 - \rho^-_{4k} {\left\|\Delta\right\|}_2^2$, we obtain $$\label{inq::proof Lemma 3-4th inequality-inexact gradient}
{\mathbb{E}}_{i_t} {\left\|\Delta - y\right\|}_2 \leq \sqrt{\frac{c}{a}} + \frac{2b}{a}.$$
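This root bound is the standard estimate for such a quadratic; a brief sketch, assuming $a > 0$ and $b \geq 0$ (both hold by the definitions above) and $c \geq 0$ (implicit in the statement, since otherwise $\sqrt{c/a}$ is not defined): since $u \geq 0$ and $au^2 - 2bu - c \leq 0$, $$u \leq \frac{2b + \sqrt{4b^2 + 4ac}}{2a} = \frac{b + \sqrt{b^2 + ac}}{a} \leq \frac{2b + \sqrt{ac}}{a} = \frac{2b}{a} + \sqrt{\frac{c}{a}},$$ where the last inequality uses $\sqrt{x + y} \leq \sqrt{x} + \sqrt{y}$ for $x, y \geq 0$.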
Following the same steps after inequality (\[inq::proof Lemma 3-4th inequality\]), we arrive at $$\begin{split}
\nonumber
{\mathbb{E}}_{i_t} {\left\|\Delta - {\mathcal{P}}_{\Gamma} \Delta\right\|}_2 &\leq \sqrt{\frac{c}{a}} + \frac{2b}{a} + \frac{\sqrt{\eta^2_1-1}}{\eta_1} {\left\|\Delta\right\|}_2 \\
&\leq \max_i \sqrt{Mp(i)} \sqrt{\frac{\frac{2\eta_1^2-1}{\eta^2_1}\rho^+_{4k} - \rho^-_{4k}}{\rho^-_{4k}}} {\left\|\Delta\right\|}_2 \\
&\quad+ \frac{2\max_i p(i)}{\rho^-_{4k} \min_i p(i)} \left( \max_{|\Omega| \leq 4k, i \in [M]}{\left\|{\mathcal{P}}_{\Omega} \nabla f_i (w^{\star})\right\|}_2 + {\left\|e^t\right\|}_2 \right) + \frac{\sqrt{\eta^2_1-1}}{\eta_1} {\left\|\Delta\right\|}_2 \\
&= \left( \max_i \sqrt{Mp(i)} \sqrt{\frac{\frac{2\eta_1^2-1}{\eta^2_1} \rho^+_{4k} - \rho^-_{4k}}{\rho^-_{4k}}} + \frac{\sqrt{\eta^2_1-1}}{\eta_1} \right) {\left\|\Delta\right\|}_2 \\
&\quad+ \frac{2 \max_i p(i)}{\rho^-_{4k} \min_i p(i)} \left(\max_{|\Omega| \leq 4k, i \in [M]}{\left\|{\mathcal{P}}_{\Omega} \nabla f_i (w^{\star})\right\|}_2 + {\left\|e^t\right\|}_2 \right).
\end{split}$$ The proof follows by combining this inequality with (\[inq::1st intemediate result\]).
Proof of Theorem \[thm::StoGradMP with inexact gradient and approximated estimation\]
-------------------------------------------------------------------------------------
The proof follows exactly the same steps as that of Theorem \[thm::StoGradMP\] in Section \[subsec::Proof of StoGradMP theorem\]. The only difference is Lemma \[lem::bound l2 b\^t - w\*\], which is now replaced by the following lemma.
\[lem::bound l2 b\^t - w\* - approximated estimation\] Denote $\widehat{\Gamma}$ as the set obtained from the $t$-th iteration and $i$ as the index selected randomly from $[M]$ with probability $p(i)$. We have $$\begin{split}
\nonumber
{\mathbb{E}}_{I_t} {\left\|b^t - w^{\star}\right\|}_2 &\leq \sqrt{\frac{\alpha_{4k}}{\rho^-_{4k}}} {\mathbb{E}}_{I_t} {\left\|{\mathcal{P}}_{\widehat{\Gamma}^c} (b^t - w^{\star})\right\|}_2 + \sigma_1 + \epsilon^t,
\end{split}$$ where $\alpha_k = \max_i \frac{\rho^+_k(i)}{Mp(i)}$ and $
\sigma_1 \triangleq \frac{3}{\rho^-_{4k}} \frac{1}{\min_i Mp(i)} \max_{|\Omega| \leq 3k, i \in [M]}{\left\|{\mathcal{P}}_{\Omega} \nabla f_i(w^{\star})\right\|}_2$.
From the triangle inequality, $${\mathbb{E}}_{i_t} {\left\|b^t - w^{\star}\right\|}_2 \leq {\mathbb{E}}_{i_t} {\left\|b^t_{\operatorname{opt}} - w^{\star}\right\|}_2 + {\mathbb{E}}_{i_t} {\left\|b^t - b^t_{\operatorname{opt}}\right\|}_2 \leq {\mathbb{E}}_{i_t} {\left\|b^t_{\operatorname{opt}} - w^{\star}\right\|}_2 + \epsilon^t.$$ Applying Lemma \[lem::bound l2 b\^t - w\*\] to obtain an upper bound for ${\mathbb{E}}_{i_t} {\left\|b^t_{\operatorname{opt}} - w^{\star}\right\|}_2$ completes the proof of this lemma.
[^1]: Linear convergence is sometimes called exponential convergence.
[^2]: We refer to an epoch as the number of iterations needed to use $m$ rows. Thus for deterministic non-blocked methods, an epoch is one iteration, whereas for a method using blocks of size $b$, an epoch is $m/b$ iterations.
By Toufic Gaspard
This work assesses Lebanon's development experience during 1948-2002, a case study of laissez-faire performance over more than 50 years. The text analyzes the severe economic crisis of the mid-1980s and the reconstruction policy that produced weak growth and high government indebtedness.
Read Online or Download A Political Economy of Lebanon, 1948-2002: The Limits of Laissez-Faire (Social, Economic and Political Studies of the Middle East and Asia) PDF
Similar interior decorating books
The Dead Sea Scrolls Reader, Vol. 1: Texts Concerned With Religious Law
This edition presents for the first time all of the non-biblical Qumran texts classified according to their genres, together with English translations. Of these texts, some twenty were not previously published. The Hebrew-Aramaic texts in this edition are largely based on the FARMS database of Brigham Young University, which, in its turn, reflects the text editions of the ancient scrolls with great precision, including modern diacritical signs.
La Decretale Ad Gallos Episcopos: Son Texte Et Son Auteur; Texte Critique, Traduction Francaise Et Commentaire (Supplements to Vigiliae Christianae)
In 1904, Ed.-Ch. Babut issued a new edition of the authentic Decretale ad Gallos episcopos with the help of a second manuscript of the canonical "collection of St. Maur." He attributed it to Pope Damasus (366-384), and not to Siricius (384-398). However, he ignored the existence of the ancestor of the two earlier manuscripts and of another collection, represented today by fragmentary manuscripts.
Galen and Chrysippus on the Soul: Argument and Refutation in the de Placitis Books II-III
This volume deals with books II and III of "On the Doctrines of Hippocrates and Plato" by the medical scientist and philosopher Galen of Pergamum (129-c. 210 CE). In these books Galen offers a detailed critique of Stoic psychology, quoting numerous passages from the otherwise lost treatise "On the Soul" by the great Stoic philosopher Chrysippus.
The main argument of this book, against a prevailing orthodoxy, is that the study of logic was an important - and a popular - part of Stoic philosophy in the early imperial period. The argument is based primarily on detailed analyses of certain texts in the Discourses of Epictetus. It includes some account of logical 'analysis', of 'hypothetical' reasoning, and of 'changing' arguments.
- Today's Country Houses
- Investigating Arabic: Current Parameters in Analysis and Learning (Studies in Semitic Languages and Linguistics)
- The De Excidio of Gildas: Its Authenticity and Date (Columbia Studies in the Classical Tradition)
- The Idea of History in Rabbinic Judaism (Brill Reference Library of Judaism)
- Byzantine Authors: Literary Activities and Preoccupations: Texts and Translations Dedicated to the Memory of Nicolas Oikonomides
- Silanes and Other Coupling Agents, Vol. 3
Additional info for A Political Economy of Lebanon, 1948-2002: The Limits of Laissez-Faire (Social, Economic and Political Studies of the Middle East and Asia)
Sample text
Thus, the material efficiency of capitalism coexists with a tendency to generate waste and instability. Capitalism is also seen to generate a poor working class and an expanding “reserve army of the unemployed”. More specifically, real wages are predicted to fluctuate around a socially accepted minimum, in addition to a growing polarization in incomes between the rich and the poor. The prediction of the growing poverty of the working class did not materialize in industrial societies, where real wages and standards of living continue to grow.
See Krueger, 1991. The NC view is therefore that markets are the best form of economic organization for efficiency and growth, whatever the stage of development of the economy. If there is market failure and some markets are absent, markets may still be established now or in the future. Apart from a few public goods, e.g. defense and law and order, only in limited and specific circumstances would government intervention contribute a net positive outcome. Above all, the rule is that free markets should dominate and spread in the domain of economic exchange.
LDCs mostly suffer from shortages of skills, capital and technology rather than from the excess capacity and associated demand management problems of Keynesian economics. Moreover, LDCs have to cope with problems unknown, or long since resolved, in industrial countries, namely rural-urban migration and underdeveloped government and legal institutions. Keynesian economics may therefore appear to be less well equipped to deal with LDC-specific problems that, on the other hand, NC economics could pretend to address through the establishment of markets and the free operation of the price mechanism. | http://xn--12c8bfbe0ab1gza7bo4ae0yqa.net/index.php/epub/a-political-economy-of-lebanon-1948-2002-the-limits-of-laissez-faire-social |
Happy Independence Day India!
“Long years ago we made a tryst with destiny, and now that time comes when we shall redeem our pledge, not wholly or in full measure, but very substantially. At the stroke of today’s midnight hour, when the world sleeps, India will awake to life and freedom.”
Fast forward 71 years and the date is August 15, 2018. Prime Minister Narendra Modi is addressing the Indian nation. As he speaks, I watch my parents sit in a solemn silence. I couldn't help but notice their heads slightly bowed to the hum of Vande Mataram while their eyes glimmered with orange, white, and green.
7,299 miles of water, land, and atmosphere lie between my parents and this foreign, intangible country. Yet, the patriotism, the respect, and the loyalty still remain. India is their motherland, where they took their first breath, where they took their first steps, where they played in the streets until night, where they and their siblings used to fight.
As I watched the jubilant Indian independence day celebrations on the television, I began to wonder what exactly happened in 1947 that spearheaded the bustling republic my parents once proudly called home. What were the circumstances? Who was involved? Why were they involved? I wanted to know not because it was world history but because it is my parents’ history and consequently, my own. I wanted to know where it all started, for them and for me. So, I began to google.
According to a Times of India article, Happy Independence Day 2018: The history and significance of our Independence Day, and a ThoughtCo article titled, What Was the Partition of India?, the history of British rule in India dates back to the East India Company's arrival in the country in the 1600s. The company's merchants began to exert military and administrative control, suppressing many of the Indian kingdoms and constructing footholds across the land.
Resentment spread amongst countrymen due to the company’s oppression, igniting many revolts against the British merchants, most notably the Sepoy Mutiny of 1857. Independence from the British was now more than just an idea — it was a movement. In the upcoming years, the struggle for independence, led by freedom fighters like Mohandas Karamchand Gandhi and Subhash Chandra Bose, would take a horrible toll on both the British and Indian people.
Now with another exhausting war — World War II — behind them, the British were unsure of their financial ability to keep India within their grasp. So, the British government announced in early 1947 that they would transfer control to the Indian people by June 1948.
Bitter argument ensued over the new country’s government. The Muslim League, under the leadership of Muhammad Jinnah, demanded a separate Muslim State, knowing that if they remained unified with the Indian National Congress, the Hindus would have majority representation in the newly independent country. On the other hand, the Indian National Congress, under the leadership of Jawaharlal Nehru, wanted a unified country due to the Hindu majority in the Indian population.
However, in March 1947, with a new viceroy — Lord Louis Mountbatten — came new plans. Mountbatten wanted to expedite the end of the British Raj, and on June 3rd of that same year he announced that independence would be granted that August. With little time to contemplate, Nehru and Jinnah had little choice but to agree to the creation of two separate states: India and Pakistan.
I became very fascinated with this history. For two countries as bitter as India and Pakistan, it is very ironic how interconnected both nations truly are. Not too long ago, they shared the same desire for independence under the British Raj. They shared the tragedy of missing loved ones and bloody riots during the Partition of 1947. At one point, they spoke the same language, ate the same food, called the same place home. To me, the annual ceremony of Indian and Pakistani soldiers offering sweets to one another on their respective independence days confirms this inherent connection between the two countries.
I learned that Independence Day in India is observed to pay respect to those who believed in India when times were tough — this includes the freedom fighters we read about in textbooks and the unsung heroes who lost their lives to the independence effort without any hesitation. On August 15, the country is adorned in majestic oranges, whites, and greens to honor the courage, truth, and auspiciousness of the land. However, I have also come to realize that Independence Day in India is truly a historic day because it marks the date when not one, but two new nations were born. | http://sharmakarma.com/happy-independence-day-india/ |
Under the overall direction of the Area Manager and the supervision of the Area Associate, the Project Logistics Officer is responsible for the implementation and financial issues of EDL "Rural Development" projects in the area of integrated intervention, as stipulated in the programme support document, and as assigned to him/her by his/her Supervisor.
Functions / Key Results Expected
1. Ensure logistical arrangement readiness for all project activities.
2. Manage the project's logistical needs and daily tasks.
3. Maintain inventory of records of project deliverables, documents, files, equipment and materials.
4. Organize offers and micro-purchases, follow up on finalizing the procurement process on time, and arrange for delivery of goods and services according to the activities agenda.
5. Keep a track record of project activities, including attendance, complaints, problems encountered, results and achievements; monitor all project activities, expenditures and progress towards achieving the project output.
6. Prepare the monthly salary list in collaboration with the project coordinator.
7. Collect and keep a full track record of invoices and receipts and submit them to the UNDP field office.
8. Provide feedback to the Project coordinator on project activities.
9. Process payments, file and archive all relevant documents (vouchers, invoices).
10. Develop budget status reports, monthly and quarterly financial progress reports.
11. Collaborate fully with the project coordinator and UNDP field office.
12. Other duties as they arise.
Education: Faculty specialization in Engineering, Economics or Law
Experience: – 3 years' experience in financial and administrative work
– Experience with the United Nations Development Program is an asset.
– Experience in participatory community action and working with civil society institutions and local public organizations is an advantage.
– Computer, administrative and financial skills are a prerequisite
Language: Arabic
– Responds positively to feedback. | https://rawabet.org/listing/project-logistic/ |
High psychiatric comorbidity in spasmodic torticollis: a controlled study.
A study of the phenomenology of panic attacks in patients from India.
Exploring out-patient behaviors in claim database: a case study using association rules.
Anxiety and defense styles in eating disorders.
The thrall of the negative and how to analyze it.
Psychoanalytic peregrination. V: The Zollikon Lectures.
Can legislation provide a better match between demand and supply in psychotherapy?
Craving, longing, denial, and the dangers of change: clinical manifestations of greed.
The interaction between acne vulgaris and the psyche.
The meaning of acute confusional state from the perspective of elderly patients.
Psychotherapists' representations of their patients.
Serotonin reuptake inhibitors for dizziness with psychiatric symptoms.
Bias in computerized neuropsychological assessment of depressive disorders caused by computer attitude.
Dissociative phenomenology of dissociative identity disorder.
Effect of a total smoking ban in a maximum security psychiatric hospital.
Enactment as understanding and as misunderstanding.
Overcoming therapeutic pessimism in hypochondriasis.
MMPI profile characteristics of women with varying levels of normal dissociation.
Nonresponder anorectic patients after 6 months of multimodal treatment: predictors of outcome.
Clinical predictors of drug response in obsessive-compulsive disorder.
Minnesota Multiphasic Personality Inventory profile of patients with chronic sinusitis.
Treatment of agitation and aggression in four demented patients using ECT.
Electroconvulsive therapy (ECT) has been shown to be effective in treating the behavioral symptoms associated with psychiatric disorders in demented patients. Four case studies are presented that show its efficacy in treating behavioral symptoms in demented patients. We suggest that ECT is beneficial in these potentially life-threatening behavioral disturbances.
Feather picking and self-mutilation in psittacine birds.
Factors associated with readmission to a psychiatric facility.
Cognitive correlates of obsessive and compulsive symptoms in Huntington's disease.
His mother-tongue: from stuttering to separation, a case history.
Psychopathology in paintings: a meta-analysis of studies using paintings by psychiatric patients.
Insight and resistance in patients with obsessive-compulsive disorder.
Erotized transference in the male patient-female therapist dyad.
Deliberate self-harm patients with alcohol disorders: characteristics, treatment, and outcome.
Delusions of parasitosis. A dermatologist's guide to diagnosis and treatment.
Network therapy for addiction: bringing family and peer support into office practice.
Effect of music on anxiety of women awaiting breast biopsy.
Psychiatric disorders in patients with blepharospasm - a reactive pattern? | http://www.biomedsearch.com/cluster/19/Clinical-Trials/sub-97-p4.html |
Law firm cyber breaches are a growing problem, and most experts agree it's not a question of "if" a cyberattack will occur, but "when." Attorneys are under increasing pressure from government regulators and clients to ensure that sensitive client data stored in law firm servers is secure. PLI's experts will explore the legal, technological, and ethical issues lawyers should know about, and discuss best practices for guarding against cyber-attacks and handling the aftermath when a cyber breach occurs.
What You Will Learn
- Why lawyers and law firms are particularly attractive targets for cyber-attacks
- Best cybersecurity practices for attorneys, law firms and law departments
- Special cybersecurity risks and requirements for law firms with banking/financial services clients
- A look at the SEC’s cybersecurity guidance and what it means for law firms that represent public companies
Program Level: Overview
Intended Audience: In-house counsel, outside attorneys, and related professionals responsible for maintaining client privacy and securing client data.
Prerequisites: An interest in maintaining client privacy and securing client data. | https://www.pli.edu/programs/cybersecurity-for-legal-services-providers?t=ondemand |
URI’s network of grassroots groups works in over a hundred countries around the world, gathering groups of people from different cultures, faiths, and traditions to work side-by-side for a common cause.
Our member groups are categorized into eight regions: Africa, Asia, Europe, Latin America and the Caribbean, the Middle East and North Africa, Multiregion, North America, and Southeast Asia and the Pacific. See where URI members are currently working to improve their communities and the world we share. | https://uri.org/where-we-work |
In recent years, artificial intelligence (AI) systems, including machine learning, have been expected to reduce the burden on doctors and healthcare workers. However, there are many challenges to clinical implementation. In order to facilitate discussions on medical AI software systems among healthcare professionals, technology developers, policy makers, and the public/patients, this paper proposes a type classification for medical AI systems (MA Type). In addition to technical requirements, we have developed a classification system that includes the perspectives of user interface, institutional design, and the role of and impact on health professionals and patients/users. In developing and implementing medical AI systems, we hope that MA Types will be used to build shared awareness among healthcare professionals and technology developers. Three recommendations are made regarding the use of MA Types and their future development.
Publication year
2020-04-14
Rights
CC BY 4.0
Author version flag
publisher
Publisher
Institute for Future Initiatives, The University of Tokyo
Related URIs
https://ifi.u-tokyo.ac.jp/en/news/4638/
https://doi.org/10.1145/3375627.3375846
Please use "Permalink" at the top right of the title when linking to items! | https://repository.dl.itc.u-tokyo.ac.jp/?action=pages_view_main&active_action=repository_view_main_item_detail&item_id=54048&item_no=1&page_id=28&block_id=31 |
The project management executive shall develop and maintain detailed project schedules, including administrative and operational activities.
He/she will also work closely with the execution and markets teams to communicate client requirements to the consulting team and coordinate meetings, including domestic and international travel arrangements.
The project management officer will also be responsible for ensuring adherence to deadlines and raising flags where necessary by informing both clients and team members.
Eligibility Criteria
BE/B.Tech/M.Tech.
B.Com/M.Com/BBA/MBA.
About Company
ERD Group, established in 1997, manufactures various products under the brand name “ERD”. The various Power Electronic products are Mobile Phone Batteries, Travel Chargers, Car Chargers, Power Banks, USB Cables, Universal Battery Charger, LED Lights, CCTV Power Supplies etc.
The Company has a state-of-the-art facility to manufacture products under strict quality control standards. Every manufacturing process is well institutionalized and equipped with highly precise testing instruments to control production and product quality. Every component is fully screened and burn-in tested to ensure 100% quality. | https://www.hiretale.com/jobs/view/10128/ERD-Technologies-Project-Manager |
MicroRNAs (miRNAs) regulate gene expression by repressing target genes at the posttranscriptional level. Since miRNAs have unique expression profiles in different tissues, they provide pivotal regulation of many biological processes. The present study defined miRNA expression during murine myogenic progenitor cell (MPC) proliferation and differentiation to identify miRNAs involved in muscle regeneration. Muscle-related gene expression analyses revealed that the time course and expression of myosin heavy chain (MHC) and transcription factors (Myf5, MyoD, myogenin, and Pax7) were similar during in vitro MPC proliferation/differentiation and in vivo muscle regeneration. Comprehensive profiling revealed that 139 or 16 miRNAs were significantly changed more than twofold [false discovery rate (FDR) < 0.05] during MPC differentiation or proliferation, respectively; cluster analyses revealed five distinct patterns of miRNA expression during the time course of MPC differentiation. Not unexpectedly, the largest miRNA changes occurred in muscle-specific miRNAs (miR-1, -133a, and -499), which were upregulated > 10-fold during MPC differentiation (FDR < 0.01). However, several previously unreported miRNAs were differentially expressed, including miR-10b, -335-3p, and -682. Interestingly, the temporal patterns of miR-1, -499, and -682 expression during in vitro MPC proliferation/differentiation were remarkably similar to those observed during in vivo muscle regeneration. Moreover, in vitro inhibition of miR-682, the only miRNA upregulated in proliferating compared with quiescent MPC, led to decreased MPC proliferation, further validating our in vitro assay system for the identification of miRNAs involved in muscle regeneration. Thus the differentially expressed miRNAs identified in the present study could represent new regulatory elements in MPC proliferation and differentiation. | https://scholars.uthscsa.edu/en/publications/temporal-microrna-expression-during-in-vitro-myogenic-progenitor- |
In the present study, the possible involvement of nitric oxide systems in the ventral tegmental area (VTA) in nicotine's effect on morphine-induced amnesia and morphine state-dependent memory in adult male Wistar rats was investigated. A step-through type inhibitory avoidance task was used to test memory retrieval. Post-training administration of morphine (5 and 7.5 mg/kg) induced amnesia. The response induced by post-training morphine was significantly reversed by pre-test administration of the drug. Pretest injection of nicotine (0.4 and 0.8 mg/kg s.c.) alone and nicotine (0.1, 0.4 and 0.8 mg/kg s.c.) plus an ineffective dose of morphine also significantly reversed the amnesia induced by morphine. Morphine amnesia was also prevented by pretest administration of L-arginine (1 and 3 µg/rat, intra-VTA), a nitric oxide (NO) precursor. Interestingly, an ineffective dose of nicotine (0.1 mg/kg s.c.) in combination with a low dose of L-arginine (0.3 µg/rat, intra-VTA) synergistically improved memory performance impaired by morphine given after training. In contrast, pre-test administration of NG-nitro-L-arginine methyl ester hydrochloride (L-NAME), a nitric oxide synthase (NOS) inhibitor (2 µg/rat, intra-VTA), prevented the nicotine reversal of the morphine effect on memory. The results suggest a possible role for nitric oxide in the ventral tegmental area in the improving effect of nicotine on morphine-induced amnesia. | http://www.ipm.ac.ir/ViewPaperInfo.jsp?PTID=12179&school=Cognitive%20Sciences |
After working in football in different countries, I would define the sport as an art: an abstract art in which everyone involved, every spectator, looks at the same canvas yet sees a different painting.
Many envisage what they consider to be a ‘good game of football’: for some it’s simply when their team wins; for others it is when it wins and plays well. But what can we consider ‘playing well’? If we take the scientific approach, we could say that playing well is the result of the actions and movements carried out by the team in possession, with the objective of scoring a goal. But, of course, the great thing about football is the existence of these uncontrollable aspects that can determine the final result.
Coming to a club like Málaga FC has allowed me to put into practice some of the ideas that I have picked up from working in Major League Soccer (Toronto and Ginga Soccer), the Premier League (Arsenal and Brighton) and the Spanish Liga. In my opinion, there are many aspects to be discussed: offensive play, style of play, different variations... But I feel that the most important thing for a trainer or coach is cultural knowledge: every place has its own culture, neither better nor worse than others, just different, and what you may learn in one place may be completely impractical in another. However, the foundations are the same no matter where you are: passion for the game is universal and it makes it possible for us to share experiences as a form of enrichment.
I have encountered all kinds of coaches: some who are unimaginative, others disorganised, some who put emphasis on physical fitness, others who concentrate on motivational aspects of the game... but in the end, everything is based on something that many of us have forgotten due to the professionalisation of schools and clubs: football is a game. It was created as a form of recreation, and no matter how many plans we coaches draw up, the players are the ones who entertain the supporters whilst dealing with the different situations that may arise throughout the match. Many players in our youth team train long hours - not because the club asks that of them, but because they keep on training with their friends, the kids on their street, at school... and they arrive charged-up from the perfect training session: the game itself. | https://www.gerardnus.com/single-post/2013/02/06/One-sport-different-points-of-view-by-Juan-Ferrando-Youth-team-coach-at-Malaga-FC |
Thursday, January 12, 2012
Microsoft and LG Electronics have signed a patent agreement that covers LG’s Android mobile phones and tablets under Microsoft’s patent portfolio. It’s a wide-reaching deal that builds on an existing agreement and also covers any other LG consumer devices, including those running the Chrome OS.
Horacio Gutierrez, corporate vice president and deputy general counsel of the Intellectual Property Group at Microsoft, said “We are pleased to have built upon our longstanding relationship with LG to reach a mutually beneficial agreement. Together with our 10 previous agreements with Android and Chrome OS device manufacturers, including HTC, Samsung and Acer, this agreement with LG means that more than 70% of all Android smartphones sold in the U.S. are now receiving coverage under Microsoft’s patent portfolio. We are proud of the continued success of our program in resolving the IP issues surrounding Android and Chrome OS.”
Categories: Handsets and manufacturers, Operating systems, NewsNumber of views: 1940
Tags: android legal smartphone microsoft lg Chrome tablet
| https://thefonecast.com/News/lg-is-the-latest-android-manufacturer-to-sign-a-microsoft-licensing-deal |
To view this information in Spanish, click here.
Creating a common vision and implementation plan for Bend's core areas by combining tools, incentives, and programs such as Urban Renewal.
GENERAL OVERVIEW
This project will work to create a common vision and implementation plan for the Core Area of the City. This includes four of the opportunity areas included in the 2016 Urban Growth Boundary expansion (Bend Central District, East Downtown, Inner Highway 20 / Greenwood, and KorPine). Through this process, the City will work with property owners, area residents, and other stakeholders to:
- Develop an urban design framework for the area.
- Identify needed circulation improvements to enhance connectivity within and between areas as well as to the city at large.
- Identify programs and projects for the area, including streetscape improvements, public spaces, gateways, affordable housing, or art and beautification programs.
- Determine location, phasing, and costs for necessary infrastructure (sewer, water, storm water and transportation) to support potential development and redevelopment of the area.
- Develop funding strategies, incentives, and other implementation tools, such as urban renewal, to achieve the vision for the area and encourage public-private partnerships.
- Identify barriers to development and any needed code amendments or zoning changes, if necessary, to achieve the vision for this area.
- Determine the boundary of a potential urban renewal district that would encourage investment within the area through tax increment financing.
- If recommended by the Bend Urban Renewal Agency (BURA), adopt an Urban Renewal Plan and new Urban Renewal District.
Where is the Core Area of Bend?
While the City has invested in a lot of planning for the Bend Central District, which consists generally of the area between Revere and Franklin Avenues and 1st and 4th Streets, the City has not yet developed detailed plans for the other sub districts within the project study area.
These include the KorPine and East Downtown opportunity areas as well as areas that run along Division Street and north of Wilson Avenue. A great part of this planning process will be to understand the uniqueness of each of these sub-areas while also identifying projects and programs to connect these areas to one another.
Urban Renewal Advisory Board (URAB)
URAB was appointed as a citizen advisory committee to provide recommendations to the Bend Urban Renewal Agency on the development of an Urban Renewal Feasibility Study for the central area of Bend to accomplish the goals and policies outlined in the Bend Comprehensive Plan.
This project is being coordinated with the Bend Transportation System Plan (TSP).
PROJECT UPDATES
- 06/21/2019 4:33 PM
Online Open House available until July 13!
A variety of updates related to the Core Area Project including a link to the online open house, an update on some of the comments we've heard from the community thus far, and details about the next URAB meeting in August.
- 06/13/2019 11:33 AM
Core Area Events start tonight!
- 05/31/2019 1:00 PM
Core Area Upcoming Events- Save the dates!
- 04/25/2019 9:48 AM
Get Involved in the Core Area Project!
The City will be hosting 6 pop-up events throughout the month of May to inform the community about the project and advertise our June 15 community workshop!
- 04/08/2019 5:30 PM
URAB #2
The Urban Renewal Advisory Board (URAB) met for the second time on April 2, 2019. At this meeting, the committee discussed a set of guiding principles as they move through this planning process. They also received presentations on urban renewal, urban design, and development feasibility in the area.
- 02/16/2019 9:28 AM
Urban Renewal Advisory Board (URAB) elects Chair and Vice Chair
Find out what happened at the first URAB meeting on February 12, 2019.
- Core Area Open House Project Type Examples
- Development Feasibility Analysis - April 2, 2019
- Draft Urban Design Framework- May 14, 2019
- June Open House & Guest Speaker Flier
- URAB Guiding Principles
- Spanish Outreach Flier - ¡Cómo involucrarse! | https://www.bendoregon.gov/government/departments/growth-management/coreareaimplementation |
Directional light scattering by spherical silicon nanoparticles in the visible spectral range is experimentally demonstrated for the first time. These unique optical properties arise because of simultaneous excitation and mutual interference of magnetic and electric dipole resonances inside a single nanosphere. Such behaviour is similar to Kerker's-type scattering by hypothetic magneto-dielectric particles predicted theoretically three decades ago. Here we show that directivity of the far-field radiation pattern of single silicon spheres can be strongly dependent on the light wavelength and the nanoparticle size. For nanoparticles with sizes ranging from 100 to 200 nm, forward-to-backward scattering ratio above six can be experimentally obtained, making them similar to 'Huygens' sources. Unique optical properties of silicon nanoparticles make them promising for design of novel low-loss visible- and telecom-range metamaterials and nanoantenna devices.
| https://www.ncbi.nlm.nih.gov/pubmed/23443555?dopt=Abstract |
On Saturday, Nov. 19, I had the honor of performing as part of the Vassar College Orchestra in our second and final concert of the fall semester. The orchestra's previous concert had taken place on Saturday, Oct. 9, leading to a speedy turnaround in order to prepare our repertoire following October Break. Despite the difficulty of this task, all members of the ensemble pulled through to put on a great show, which included some of my favorite pieces I've played here at Vassar. Although I have written mainly on the experience of music from the perspective of a listener in previous articles, I thought it would be insightful to reflect upon the art form from a performer's point of view, highlighting the work of the group as a participant myself.
I have been playing trombone since the fourth grade, and I entered Vassar having already played with various wind ensembles and bands. Wanting to try something new, I decided to audition for the orchestra here, with which I have now been playing for three semesters. This recent concert's lineup included four pieces, with two having trombone parts for me to play. I was able to spend the first portion of the concert as a half-audience member, listening in from the hallway outside of the Skinner Recital Hall. The strings were the first to take the stage followed by the concertmaster, leading the group in tuning after applause; Director of the Orchestra Eduardo Navega then took the stage to additional applause. "Deux Propos" by 20th-century French composer Henry Fevrier was the chosen work for the strings, a slower piece that highlights the violins in the melody. The performance came to a subdued ending, leading into the introduction of additional members of the orchestra for Mozart's "Clarinet Concerto." Two flutes, two bassoons and two horns joined the strings to accompany Adjunct Artist in Music Ian Tyson, the featured clarinet soloist. The piece is structured in standard concerto form, consisting of three movements: the first fast, the second slow and the third fast again. Tyson played with a virtuosic ability that grabbed my attention, despite my personal unfamiliarity with his instrument; excellent dynamic contrast and performative interpretation created a nuanced and engaging performance requiring technical precision. Lasting around 30 minutes, the solo functioned as the centerpiece to the concert, requiring a great deal of preparation on the part of the orchestra.
After intermission, the full orchestra entered onstage for the next two pieces. I enjoyed my parts for each of these two works, as they struck a nice balance between intrigue and a lower overall level of performance stress. Although I had managed to get to the practice room in preparation for the concert, I knew I still had to be 100 percent focused in order to play all of my parts well. We entered onstage again before playing Charles Ives' "Postlude in F," a lesser-known work with romantic tendencies that depart from Ives' typical association with the American avant-garde. Situated between the two longer pieces on the program, "Postlude" served as a transition for the orchestra, allowing the group to establish their togetherness. The middle section contains a brief melody line played by all three trombones, later followed by a powerful, resonant brass chorale that drives the piece towards a minimal conclusion. A passage of this sort is one that is felt physically by the performer of the piece, leaving you completely devoted to bringing this written-down piece to life. The nuance of this work lies in its interpretation, rather than what is strictly made clear to us in notation; even if the lines often appeared technically easy, I had to put great effort into my timing and breathing with the rest of the orchestra so as to maintain uniform sound.
The final piece of our concert was Sibelius's "En Saga." It showcases the composer's late Romantic period style while also hinting at the advent of modernism within particular passages. Although the piece looked simple upon first impression, I quickly realized the amount of lung power and concentration my performance would require in order to achieve a result I could be proud of. "En Saga" ended up being one of my favorite classical works by the time the orchestra had begun to play it in its entirety; as the piece began, I sat in excited anticipation, grinning as the bassoons made their melodic entrance a minute into the piece. The brass section became more and more involved as the piece increased in intensity, focusing on unity in order to stay in tune as a section and time our rhythms cohesively. My part often moved into the upper register of my range, physically tiring me out. The areas towards the end contained key melodic figures, requiring me to push past any fatigue in order to project my sound into the space of the auditorium and reach the back of the crowd. At this point, I had gone through rushes of adrenaline, attempting to keep my composure for important moments in order to continually maximize my breath capacity. After finishing my last section, I noticed my body shaking from a combination of nervous energy, tiredness and excitement from being able to be part of the performance. Although a piece can sound satisfying to me as an audience member, having the chance to play for a group is far more intense, allowing me to hear the performance in the totality of its nuances and details when I am placed at the center of an ensemble. It enables a direct connection to the music itself that makes you feel oriented towards its authorial intent, bringing it alive to others as your instrument rings in harmony with the rest of your fellow performers. As the concert came to an end and we stood for the crowd, I was once again reminded of the joys involved with performance and why I wish to continue being a musician. | https://miscellanynews.org/2022/12/01/arts/playing-in-the-orchestra-a-students-perspective/ |
As Playground Games writer and narrative designer Anna Megill puts it, ‘it’s tough to get a job in the video game industry’. Skill alone often isn’t enough to break into the notoriously tricky space – sometimes, it’s also a matter of luck, good timing, creativity, and being polite. On Twitter, Megill recently kickstarted a thread about her experiences, encouraging developers to share how they got their start.
For Megill, serendipity played a big part. ‘I met the QA lead at the only game studio in town right when he was about to post an open role,’ she explained. Other creators have shared similar stories: of being young, available and skilled in a steadily growing industry.
Tomm Hulett, game developer at WayForward, described breaking in during 1991; as a young kid, he was asked to ‘QA’ the games of a family acquaintance due to his experience playing Battletoads. James Swallow, who has written for a number of video games including Marvel’s Guardians of the Galaxy, said luck, opportunity and relevant skill were all key to being hired for his first writing project.
Connections were another key theme of the Twitter thread.
Some responders described meeting just the ‘right person’ at the ‘right time’ – a customer at a hobby store who happened to be a game designer, a wild person at a rave, a family friend, a fellow Japanese language learner, and even a stranger on the internet.
It wasn’t just these connections that facilitated future jobs, though. As many developers describe, it’s still a matter of personal responsibility to ensure you’re well-prepared, well-educated, and well-experienced enough to ‘show up’ when opportunities present themselves.
Flexibility is key to breaking into the video game industry
If we’ve all learned one thing over the last few years, it’s that life can be unpredictable – and plans can fall through at any time. To that end, constant preparation and flexibility is essential for ‘breaking in’.
Even if you're not a particularly 'prepared' person, a tool like an up-to-date portfolio can help showcase your skills and what you bring to the table in a practical and easily accessible way. In fact, it can become the exact reason why somebody is hired over another person.
For businesses, hiring is often a matter of ‘value’ on the surface level. Interviewers want to know what skills you possess, and how you can aid a business. That extends to learning how you deal with change and opportunity, how you communicate with others, and whether your work stands on its own.
Communication skills and friendliness are key, but your work will often be the most eye-catching and attractive part of your applications.
‘I started in games journo knowing beforehand that I wanted to transition into games writing and made twines to pad my portfolio until I started getting contracts,’ explained Emma Kidwell, writer at Hangar 13. ‘My twine stuff is the reason I was able to get interviews/jobs at the beginning. Write what games you want to write and put them on your portfolio website.’
Many developers in the thread agreed that being able to demonstrate individual creativity was key to landing the right role. Developer Epic Stirpē, a current writer on Fortnite, successfully applied to work at the previous version of Telltale Games with a unique application that included a point-and-click adventure game he’d built from the ground up.
While this level of detail may not be necessary, it’s the creative flair and spark that got Stirpē noticed. Companies need to know you’re passionate about what you do, and that your skills match their job description.
The path forward may be winding
Even if you don’t have the skills for your ‘dream’ game industry job right away, there are still pathways for breaking in, as is made very clear in the thread. While some developers were able to combine luck and creativity to nab their first jobs, others found their way by taking smaller steps.
Internships featured heavily in the thread, with many describing summer work experience placements in social media, game design or engineering as being essential for taking the next steps in their career. These internships can be incredibly valuable, so long as they are either paid (preferred) or provide genuinely valuable experiences worth the energy and time put in.
Many companies will advertise these placements at specific times throughout the year – but some companies are also happy to hear from passionate candidates at any point. If you’re keen on learning from a particular company, a polite email enquiring about a placement could take you a long way. There’s still a high chance of rejection, but it’s good experience either way.
Read: How to apply for a job in games (and what not to do)
The unfortunate fallacy of gaining a job, in many global industries, is that you need experience to get experience. Internships solve some of this dilemma, but it is important to make sure you’re learning on the job, and not being exploited.
Don’t work for free if you can help it, but if you believe you’re gaining something of equal value in return (and you have the financial support to spare a few hours), internships can be incredibly worthwhile. That said, they’re not the only pathway in.
The Twitter thread that kickstarted the conversation is well worth a look, and contains a multitude of stories from across the video game industry. What’s clear from scrolling is that everybody who currently works in games had different pathways to get there. Some are strange, others are more straightforward. There are raves and toga parties, but there’s also dedication, polished portfolios, and plenty of rejection along the way.
The common thread is working hard, being prepared for any opportunities that come your way, and never being afraid to experiment with your career. Breaking into the game video industry is tough, but there are always paths in – some you may have never even considered. | https://www.gameshub.com/news/opinions-analysis/break-into-the-video-game-industry-tips-guide-11183/ |
Tiohhian is a registered public joint stock corporation in Zürich, Switzerland, and its consulting division offers services to governments, non-governmental agencies, corporations, and companies that create and open up new markets.
Our clients choose us because we offer the knowledge, insight and guidance they need to move forward with confidence. Our tailor-made, client-oriented approach ensures that decision-making processes are fully informed through customized solutions. We deliver practical and rigorous analysis that yields pragmatic solutions with immediate, high-impact results – quickly.
We treat our clients' needs as our top priority and ensure timely delivery of the final product to the highest standards. Our offices and consultants in various countries provide regional presence and expertise with a holistic understanding of our local and international clientele. | http://www.tiohhian.com/tiohhian-consulting/about-consulting
High-Quality Single-Shot Capture of Facial Geometry
This project develops a passive stereo system for capturing the 3D geometry of a face in a single-shot with standard light sources.
July 26, 2010
ACM SIGGRAPH 2010
Authors
Thabo Beeler (Disney Research)
Bernd Bickel (Disney Research)
Paul Beardsley (Disney Research)
Robert W. Sumner (Disney Research)
Markus Gross (Disney Research/ETH Zurich)
The system is low-cost and easy to deploy. Results are sub-millimeter accurate and commensurate with those from state-of-the-art systems based on active lighting (e.g., laser scanners), and the models meet the quality requirements of a demanding domain like the movie industry. Recovered models are shown for captures from both high-end cameras in a studio setting and from a consumer binocular-stereo camera, demonstrating scalability across a spectrum of camera deployments, and showing the potential for 3D face modeling to move beyond the professional arena and into the emerging consumer market in stereoscopic photography. Our primary technical contribution is a modification of standard stereo refinement methods to capture pore-scale geometry, using a qualitative approach that produces visually realistic results. The second technical contribution is a calibration method suited to face capture systems. The systemic contribution includes multiple demonstrations of system robustness and quality. These include capture in a studio setup, capture off a consumer binocular-stereo camera, scanning of faces of varying gender and ethnicity and age, capture of highly-transient facial expression, and scanning a physical mask to provide ground-truth validation. | https://studios.disneyresearch.com/2010/07/26/high-quality-single-shot-capture-of-facial-geometry/ |
This workshop focuses on a key question – what matters to investors? Using the main themes arising from this question, the presentation develops a set of guiding principles for establishing the good practices that any Investment Promotion Agency (IPA) needs to adopt. These range from the planning stages through to self-evaluation, and cover the interconnections between the IPA and other key government agencies and departments. The workshop is primarily practice-based, but significant theoretical concepts underpin it. The workshop will provide the fundamentals and a better understanding of the concept and practice of technology transfer, with several examples from developing countries. Examples of, and reasons for, developing countries' (DCs') slow progress with technology transfer will be discussed, along with examples from the UK Technology Strategy Board; the Bridging the Divide project at the University of California, Berkeley, USA; biotechnology in India; and Africa.
Objectives
- Recent global developments in FDI
- Barriers to FDI and domestic investment
- Evaluating the host economy environment for FDI
- IPA’s and Good Investment Promotion Practice
- Demonstrate understanding of the determinants of FDI & criteria motivating investors in a specific location.
- Identify the policies most conducive to FDI and evaluate the investment environment of a given country.
- Maximise impact of Investment Promotion Agency
- Next steps – developing a modern IPA for your country
- Technology transfer
- Technology diffusion
- Technical change and productivity gap
- TT key success factors
- Transfer models
- Transfer linkages
Program structure
- Investment Promotion: Creating a Good Practice Environment
- Technology Transfer: Concepts and Practice
Benefit for participants
- It is a great opportunity for participants to gain knowledge on FDI and technology transfer.
- Clear understanding of FDI and technology transfer concepts and trends.
- Understanding of the role and limits of IPA’s.
- Active participation in theoretical and policy debate.
- Providing training and educational opportunities for participants.
- Offering networking opportunities for participants and businesses.
- Meeting with professional international keynote speakers.
- Meeting with professionals who are considering changing their careers.
- Motivating participants to engage further with investment promotion and technology transfer.
Benefits for employers
- Enhance relationships with different organizations.
- Meeting with participants.
- Raising institutional image, profile, awareness, and market exposure for future intakes.
- Post event publicity through social media marketing, bulletins and word of mouth.
- The opportunity for employers to showcase their role in community service.
Course material
Included in the course fee, the following learning materials will be provided:
- All overhead slides/transparencies.
- Case studies (print and video) used on the course.
- Certificate of attendance from WASD. | https://wasd.org.uk/academy/training/workshops/investment-technology-transfer/ |
Machining & Finishing
We offer significant primary or secondary machining and finishing services through our sister company located next door.
This 45,000 sq.ft. fully equipped machining, fabrication and assembly facility saves our customers time and money on secondary operations.
It specializes in machining aluminum, copper, bronze and many other alloys.
Quality
We have earned our reputation for quality the old-fashioned way – part by part over a journey of more than 80 years.
All our professionals are well trained to continuously monitor the casting process to ensure that quality standards are met through every phase of production, to match or exceed our customers’ specifications.
Quality at Illini Foundry is a blend of experience, hard work, craftsmanship and technology. It's a never-ending process that covers every part produced here, from raw to finished castings. | https://www.illinifoundry.com/secondary-operations/
CROSS-REFERENCE TO RELATED APPLICATION
This application claims benefit of provisional application Ser. No. 60/135,030, filed May 20, 1999, entitled Low Temperature Oxidation Using Supported Molten Salt Catalysts, which is incorporated herein by reference in its entirety.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
The research and development leading to the subject matter disclosed herein was made under contract with the United States Department of Energy, Contract No. DEFG26-98FT40122.
BACKGROUND OF THE INVENTION
SUMMARY OF THE INVENTION
This invention relates to processes for conducting chemical reactions in the presence of a molten inorganic salt.
A number of chemical reactions can be conducted in the presence of a molten salt. Among these are, for example, coal gasification reactions, propene oxidation, pyrolysis of kraft lignin, preparation of mixed phosphates, electrodeposition of aluminum from aluminum chloride, deoxidation of niobium and titanium, destruction of cyanides, titanium production, synthesis of 2-arylpropionic esters, reductions of nitrogen gas, oxidations of methane to methanol, electrodeposition of titanium onto alumina and silicon nitride, spent oxide fuel reduction processes, palladium-catalyzed Trost-Tsuji C-C coupling reactions, and various electrochemical reactions, among many others. The molten salt can perform various functions, depending on the particular reaction being performed. Thus, the molten salt can act as a reagent, a catalyst or even a solvent, depending on the particular reaction system. In some cases, the salt may perform more than one of these functions.
Notable classes of chemical reactions that can be conducted in the presence of a molten salt are oxidation reactions of organic compounds, i.e., combustion reactions. Combustion reactions are perhaps more important than any other class of industrial chemical reactions, as they provide tremendous quantities of heat to drive, or to produce steam which is then used to drive, the turbines that generate most of the world's electrical power. The abatement and control of environmental pollution caused by combustion products are a major world problem and the focus of global efforts to reduce “greenhouse gas” emissions that ultimately contribute to global warming.
Of particular significance is the reduction of NOx and SOx that are associated with the combustion of natural gas at high temperatures near 1200° C. Although modern systems use lean premixed combustion gases to reduce temperatures, the combustion temperatures are still high enough to promote NOx formation. In addition, the high temperatures also require that dilution air be added to meet turbine temperature requirements. Because the production of NOx depends exponentially on temperature, it is desirable to develop a method for combustion of natural gas and other vaporizable fuels, such as light diesel fuel, at lower temperature, preferably less than 1000° C. It is also desirable to simultaneously capture in-situ any residual sulfur species present in the fuel to eliminate SOx formation as well.
It has been proposed to accomplish this by conducting the oxidation reactions in the presence of certain molten salts, which permit lower temperature combustion to take place. See, for example, U.S. Pat. Nos. 3,647,358, 3,642,583 and 4,246,255. Molten salts have been used to selectively catalyze various oxidation reactions. Most non-charged materials are soluble in molten salts. It is believed that the solute acquires an electrostatic orientation in the melt, reducing the energy required to initiate and sustain chemical reactions. Under these conditions, the solute, when exposed to oxygen, will oxidize at temperatures lower than those normally required for oxidation while maintaining high oxidation efficiencies. Molten salt catalysts have been used to selectively oxidize various fuels at reduced temperatures due to the fuel's solubility in the molten salt and the lower required activation energies.
Although molten salt technology provides substantial potential advantages, practical problems have largely prevented its implementation into commercial processes. The main problem is that it has been difficult to get sufficient mass transfer between a reagent stream and a bed of molten salt to operate the process efficiently. Gaseous reactants that are bubbled through a molten salt bed tend to form large bubbles that have relatively low interfacial surface areas (i.e. between the gas and molten salt phases). As mass transfer rates, and thus reaction rates, will depend on interfacial surface area, this low surface area tends to result in low conversions. Conversions in principle can be improved by making the interfacial surface area larger (such as by making smaller bubbles) or by making the residence time greater, but neither of these approaches has been found to be practical, especially for large scale operations. These approaches result in excessive energy requirements or excessive capital expenses for equipment necessary to contain a large bed of molten salt.
Thus, it would be desirable to provide an improved method for conducting chemical reactions in the presence of a molten salt.
In one aspect, this invention is a method for using a molten salt to carry out a chemical reaction, comprising passing a chemical reagent or mixture thereof through a fluidized bed of support particles that support at least one molten salt, under conditions such that said reagents or mixture of chemical reagents react in the presence of the molten salt.
In a second aspect, this invention is a method for the low-temperature oxidation of an organic compound, comprising passing a mixture of the organic compound and an oxygen source through a fluidized bed of support particles that support at least one molten salt that catalyzes the oxidation of the organic compound, under conditions such that the organic compound is oxidized in the presence of the molten salt.
In a third aspect, this invention is a process for using a molten salt to carry out a chemical reaction of a gas, comprising:
providing a molten salt supported on fine particles in a fluidized bed; and
reacting the gas in the presence of the molten salt; wherein the molten salt catalyzes a reaction of or reacts with the gas.
In a fourth aspect, this invention is a method for low-temperature oxidation of gaseous species, comprising the steps of:
providing a supported molten salt catalyst in a fluidized bed; and
contacting the catalyst with at least one gaseous species to be oxidized.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic drawing, in section, of a fluidized bed process according to the invention.
FIG. 2 is an enlarged view of a porous support particle for use in the invention.
FIG. 3 is an enlarged view, in section, of a portion of a porous support particle for use in the invention, showing a molten salt deposited in the pores.
DETAILED DESCRIPTION OF THE INVENTION
In this invention, chemical reactions are conducted in the presence of a fluidized bed of particles that support a molten salt. One or more reactants are passed through the fluidized bed, where the desired reaction takes place. Waste and product streams are removed either above or below bed level, depending on their density. The operation of a fluidized bed, in simplified form, is schematically represented in FIG. 1. In FIG. 1, vertical column 1 includes perforated support plate 2. Column 1 will typically have a height (above the perforated support plate) to diameter ratio of about 0.5:1 to about 10:1, more typically about 0.7:1 to about 2:1. Perforated support plate 2 supports the weight of particle bed 3 (when not fluidized) and permits the fluidizing gas and/or gaseous reagent to pass upward through particle bed 3 to create the fluidizing conditions. As shown in FIG. 1, particle bed 3 is in a fluidized state. Particle bed 3 includes a plurality of particles 31 (shown several orders of magnitude larger than scale) made of a support that carries a molten salt, as described more below. Column 1 includes means 4 and 5 for introducing a fluidizing gas and/or a gaseous reactant beneath perforated support plate 2. These means can be, for example, a conduit for carrying the gas into the column, and one or more spargers, one or more gas jets, a perforated plate, a bubbler, or similar device that distributes the gas across the bottom of perforated support plate 2. Of course, various kinds of apparatus can be included to regulate flow, provide safety features, and the like.
In some cases, such as in combustion reactions, one or more gaseous reactants may serve as the fluidizing gas, so that no gases other than the reactant(s) are needed.
The fluidizing gas and/or gaseous reactant pass upwardly through particle bed 3, causing the particles to become fluidized. Fluidization causes a small (i.e., typically no more than 50%, preferably no more than about 20%) expansion of the height of the particle bed. Any gaseous reactant comes into intimate contact with the molten salt on the particles 31, where the desired reaction takes place.
Column 1 further includes at least one outlet 7 for removing effluent gasses, i.e., the fluidizing gas and/or any gaseous reaction products.
In the embodiment shown in FIG. 1, column 1 also contains solid/liquid-introducing means 6 for introducing solid or liquid materials. Multiple such means may be included if desirable. Conversely, solid/liquid-introducing means 6 may not be required, depending on the nature of the process. The solid/liquid-introducing means can be a chute, a system for metered delivery of the solid or liquid materials, or a simple opening through which materials can be added. The solid/liquid introducing means is preferably adapted with a door or other closure so that effluent gases do not escape through it as solid or liquid materials are added to the column.
Examples of solid or liquid materials that can be added include, for example, additional support particles, additional quantities of the salt, solid or liquid reagents, and the like. In addition, the embodiment shown in FIG. 1 also includes solid/liquid removal means 8, through which solid and/or liquid waste or product streams can be removed. Again, depending on the particular process, no solid/liquid removal means may be needed, or multiple such means may be provided.
Particles 31 include a support material and a molten salt. As discussed more below, the support is preferably porous. The support material is selected with the particular process conditions in mind and generally will meet the following criteria: (1) particle size and density such that the particles can be fluidized and maintain fluidization in the presence of the molten salt, (2) sufficient mechanical integrity to withstand operation at fluidizing conditions without significant mechanical degradation, (3) chemically stable under process conditions, including being nonreactive with the molten salt and reagents, (4) thermally and, if the process is conducted under oxidation conditions, oxidatively stable under the process conditions, (5) insoluble in the various other components present in the process and (6) chemical nature such that the molten salt will readily adhere to the surface of the support particles.
As will be apparent from the foregoing, the selection of a support for a particular chemical process will depend on a number of process conditions, including the temperature, the particular molten salt or salts, the particular reactants, the selection of fluidizing gas and other conditions. However, in general, inorganic materials that have melting and decomposition temperatures above about 1000K, preferably above 1500K, more preferably above 2000 K can be used. Of course, the melting and decomposition temperatures of the support must exceed the melting temperature of the molten salt. Supports that can be used, depending of course on the particular process being run, include SiO2, SiC, Al2O3, ZrO2, NiO, Fe2O3, WC, TiO2, CaO, Ca3(PO4)2, AlN, various types of zeolite materials, and the like.
Silicon dioxide is a particularly suitable support for reactions involving sodium tungstenate (Na2WO4), sodium sulfate (Na2SO4), cesium carbonate (Cs2CO3) and boric oxide (B2O3) over a wide temperature range. Alumina (Al2O3) is a particularly suitable support for reactions involving sodium tungstenate, sodium sulfate and cesium carbonate, again over a wide temperature range. Zirconia (ZrO2) is a particularly suitable support, over a wide temperature range, for reactions involving sodium carbonate (Na2CO3), potassium carbonate (K2CO3), sodium tungstenate, sodium sulfate and boric oxide. Nickel oxide (NiO) is a particularly suitable support, over a wide temperature range, for reactions involving sodium carbonate (Na2CO3), lithium carbonate (Li2CO3), potassium carbonate (K2CO3), sodium tungstenate, sodium sulfate, cesium carbonate and boric oxide. Iron oxide (Fe2O3) is particularly suitable for reactions involving sodium carbonate, potassium carbonate, sodium tungstenate, sodium sulfate, cesium carbonate and boric oxide. Tungsten carbide (WC) is useful in conjunction with boric oxide over a wide temperature range, and with sodium carbonate, potassium carbonate, lithium carbonate and sodium tungstenate at temperatures below about 1200-1400K. Titanium dioxide (TiO2) is useful in conjunction with sodium tungstenate, sodium sulfate, cesium carbonate and boric oxide over a wide temperature range. Calcium oxide (CaO) is a particularly suitable support, over a wide temperature range, for reactions involving sodium carbonate, lithium carbonate, potassium carbonate, sodium tungstenate, sodium sulfate and cesium carbonate. Calcium phosphate (Ca3(PO4)2) is useful at temperatures of about 1400K or below in reactions involving sodium carbonate and over a wide temperature range for reactions involving lithium carbonate, potassium carbonate, sodium tungstenate, sodium sulfate, cesium carbonate and boric oxide.
Iron oxide, zirconium dioxide, calcium oxide and alumina are preferred support materials for a wide range of reactions.
The salt is one that is molten at the desired process conditions, and which acts as a reagent, a catalyst, or a solvent in the process. The salt may perform two or more of these functions. The melting temperature is not critical so long as the salt (or mixture of salts) is liquid at the desired process temperatures. Salts or mixtures of salts having melting temperatures below about 1000K, preferably below about 700K, more preferably below about 400K, can be used. The melting temperature of the salt may be below room temperature, and preferably is at least about 325K. A number of salts are known to be useful in molten salt reactions, including, for example, sodium carbonate, potassium carbonate, lithium carbonate, cesium carbonate, sodium tungstenate, lithium nitrate, sodium sulfate, boric oxide, calcium oxide, zinc chloride/potassium chloride mixtures, zinc chloride/sodium chloride mixtures, aluminum chloride/N-(n-butyl)pyridinium chloride mixtures, lithium chloride, calcium chloride, alkali metal nitrates, alkali metal polysulfides, aluminum chloride-1-methyl-3-ethylimidazolium chloride, and the like. Mixtures of two or more salts can be used, and may be desirable for specific purposes in certain reaction systems. For example, certain salt mixtures exhibit eutectic melting points that permit operation within some desired temperature range. Sodium carbonate/lithium carbonate mixtures are an example of this. In other cases, mixtures of salts may be used because each salt component performs a different function within the process.
Molten salt loadings are generally preferred to be as high as possible, consistent with maintaining the salt on the support, being able to fluidize the bed and obtaining an acceptable reaction rate. For non-porous supports, molten salt loadings tend to be less than 2% by weight, and more typically from about 0.1 to about 0.5% by weight, based on the weight of the support particles. Generally, maximum molten salt levels tend to increase with increasing support particle size and with increasing support particle density. The ability to maintain fluidization is related to the momentum of the particles, as well as to capillary forces that are present when molten salt is present on the particle surface. The capillary forces tend to cause particles to adhere on contact. On the other hand, increased particle momentum provides more energy for the particle to overcome the capillary forces. Increasing particle size reduces the surface area/unit weight, thus reducing the potential number of contact points between particles, in turn reducing the amount of particle agglomeration. Denser particles tend to gain greater momentum in the fluidized bed, due to their higher mass. Thus, fluidization conditions can be maintained at higher molten salt loadings when the support particle size is increased or its density increased.
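The scaling argument here can be made concrete: for smooth spheres, the geometric surface area per unit mass is 6/(ρd), so doubling the diameter halves the specific surface area and, with it, the number of potential particle-particle contact points per unit bed weight. A minimal illustrative sketch (the density value is an assumption, not from the specification):

```python
# Geometric surface area per unit mass of a smooth sphere: S = 6 / (rho * d).
# Doubling the diameter halves the specific surface area, and with it the
# number of potential particle-particle contact points per unit bed weight.

def specific_surface_m2_per_kg(diameter_m: float, density_kg_m3: float) -> float:
    """Surface area per unit mass of smooth spheres (m^2/kg)."""
    return 6.0 / (density_kg_m3 * diameter_m)

for d_um in (350, 850, 1100):  # diameters used in Example 2 below
    s = specific_surface_m2_per_kg(d_um * 1e-6, 2650.0)  # ~silica density (assumed)
    print(f"{d_um} um particles: {s:.2f} m^2/kg of geometric surface")
```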
Considerably greater molten salt loadings are achievable when the support is porous. When porous supports are used, molten salt loadings of up to 20% by weight or more molten salt based on the weight of the support particles can be achieved with stable fluidization of the bed. Preferably, the molten salt loading onto a porous support is from about 3 to about 15%, more preferably from about 5 to about 10% by weight.
As using porous particles provides a simple method of increasing the molten salt loading, their use is preferred. Porosity of small particles is generally expressed in terms of the surface area/unit weight of the particles. Preferred supports are sufficiently porous that their surface area per unit weight is increased to at least 1.5 times, preferably to at least 2 times, that of nonporous particles of the same average particle size and made from the same material. Thus, particles having a pore fraction of about 10% or more, preferably about 25% or more are preferred. At some point, however, the benefits of porosity are exceeded by the fragility of the particles. It is preferred that the particles do not have a pore fraction of greater than about 70%, more preferably not more than 50%.
Although this invention is not limited by any theory, it is believed that the presence of pores permits the molten salt to become absorbed into the support particle. Thus less of the salt is present on the surface of the support particles. As a result, the particles become heavier with little or no increase in diameter, thus increasing their density. In addition, the relative absence of molten salt on the surface of the particles reduces capillary forces that ordinarily tend to adhere liquid-coated particles together. Both the increased density of the particles and the relative lack of adhesive capillary forces on the surface of the particles tend to disfavor particle agglomeration. As a result, bed fluidization is more easily maintained, because larger, difficult-to-fluidize particles form at a low rate.
FIG. 2 illustrates a porous support impregnated with a molten salt. Particle 200 includes support 201 having pores 202 schematically represented by lines 203. As shown, a film or coating of molten salt may be present on the external surfaces of support 201, but preferably substantially all of the molten salt is contained within pores 202.
The diameter of the support particle can be as small as about 60 microns, and as large as about 5000 microns. As will be apparent from the foregoing discussion, the optimum particle size for any particular application will be related to several other parameters, such as the loading of molten salt, the bulk density of the support material, the porosity of the support, fluidizing gas velocities, and the like. Non-porous supports are preferably at least about 300 microns in diameter, more preferably at least about 500 microns in diameter. Porous supports are preferably at least about 250 microns in diameter, and are most preferably from about 350-1000 microns in diameter. Again, it is noted that optimum particle sizes may vary according to the particular process.
FIG. 3 shows an enlarged view of pores 202, showing quantities of impregnated molten salt 204 resident upon the interior surfaces of pores 202. The embodiment shown is a preferred one in which the impregnated molten salt only partially fills pores 202. This allows for rapid transfer of the reactants into and out of pores 202, so that mass transfer in and out of the pores is facilitated. In this way, faster reaction rates can be achieved. When a porous support is used, it is preferred to load sufficient molten salt so that from about 5%, more preferably to about 10%, even more preferably about 20%, to about 75%, more preferably to about 50%, more preferably to about 35% of the pore volume of the support is filled with the molten salt.
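Translating a target pore-fill fraction into a salt charge only requires the support's pore volume per gram and the density of the molten salt. A rough sketch — the pore volume and melt density here are assumed values, not taken from the patent:

```python
# Salt charge needed to fill a target fraction of the support's pore volume.
# Pore volume per gram and the melt density are assumed, illustrative values.

def salt_mass_g(support_mass_g: float, pore_vol_cm3_per_g: float,
                fill_fraction: float, salt_density_g_cm3: float) -> float:
    pore_volume = support_mass_g * pore_vol_cm3_per_g        # total pore volume, cm^3
    return pore_volume * fill_fraction * salt_density_g_cm3  # salt mass, g

m = salt_mass_g(
    support_mass_g=350.0,      # bed charge as in Example 1 below
    pore_vol_cm3_per_g=0.4,    # assumed pore volume of a porous alumina
    fill_fraction=0.20,        # within the preferred ~5-75% window
    salt_density_g_cm3=1.8,    # molten LiNO3, approximate assumed value
)
print(f"salt required: {m:.1f} g ({100.0 * m / 350.0:.1f} wt% of the support)")
```

For these assumed inputs the charge works out to roughly 14 wt% of the support, consistent with the preferred 3-15% loading range stated above.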
The support can be charged with the salt in several ways. One convenient method is to form a solution of the salt in water or other convenient solvent and wash the support particles with the solution until the desired loading is achieved. This solvent technique can be done at ambient temperatures or at any elevated temperature up to or even above the melting temperature of the salt. This can be done in situ in the fluidization column if desired, or else the support can be charged with the salt in some different vessel. When the salt is added to the support in this manner, it is usually preferred to remove the water or solvent from the support particles, such as by filtration, centrifugation, heating, vacuum stripping, or a combination of these techniques. In some cases, though, the water or solvent does not interfere with the chemical reactions being conducted in the bed, in which case a removal step is not necessary.
An alternative way of making the supported salt is to mix the molten salt directly with the support particles, at some temperature above the melting temperature of the salt. Again, this can be done within the bed or elsewhere. An advantage of this technique is that no water or solvents are needed. However, care must be taken to avoid particle agglomeration due to the formation of salt bridges between particles if the molten salt is cooled below its melting temperature. This technique often can be used to add salt to the support during operation of the bed without disrupting the process.
Chemical processes are conducted according to the invention by fluidizing the bed, heating the fluidized particles to the desired reaction temperature and then introducing the reactants under conditions that the desired reaction temperature and bed fluidization are maintained. If necessary, the reactants and/or fluidizing gas may be preheated before introducing them into the column. Once the reaction has begun, fluidization gas is continuously fed into the column through (referring to FIG. 1) gas introduction means 4 and/or 5. If the reactants do not serve as the fluidizing gas, they are continuously or intermittently added to the bed, either through gas introduction means 4 and/or 5 (if a gas), or through solid/liquid introduction means 6 (if introduced as a liquid or solid). Vaporizable liquids or solids can also be added at or below bed level. Reaction products can be taken out above the bed (if a gas), such as through outlet 7. Higher density reaction products can be removed through solid/liquid removal means 8.
In some cases, it may be desirable to add more of the supported molten salt during the operation of the process. Examples of this include cases where the molten salt is a reagent in the chemical process, or in which it operates as a catalyst but becomes exhausted over time. In such cases, the additional supported molten salt is conveniently added through a solid/liquid introduction means 6, either periodically or continuously throughout the process. To avoid overfilling the bed, spent supported molten salt can be removed through a solid/liquid removal means 8.
Total gas velocities (including fluidizing gas plus any gaseous reactant stream, if different than the fluidizing gas) are not critical provided they are sufficient to fluidize the bed and provide a desirable gas residence time within the fluidized bed. Typical gas velocities can range broadly from about 0.01 to about 30 m/s or more, depending on column dimensions, particle size and mass, loading of salt, porosity of the support, and the like. It is usually desirable to select a gas velocity that is somewhat above the minimum required to fluidize the bed.
Bed thickness is selected in conjunction with gas velocities to achieve both bed fluidization and a desirable gas residence time within the fluidized bed. Gas residence times are advantageously at least about 0.1 second, preferably at least about 0.5 second, and more preferably at least about 1 second, up to 10 seconds, preferably up to about 7 seconds, more preferably up to about 3 seconds.
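The residence-time bounds above pair directly with bed height through the superficial gas velocity: τ ≈ expanded bed height / superficial velocity. An illustrative calculation, with bed depth and velocities assumed within the stated ranges:

```python
# Plug-flow residence time estimate through the expanded bed:
#   tau = expanded bed height / superficial gas velocity
# Bed depth and velocities are assumptions chosen within the stated ranges.

def residence_time_s(bed_height_m: float, superficial_velocity_m_s: float) -> float:
    return bed_height_m / superficial_velocity_m_s

settled_height_m = 0.10   # assumed settled bed depth
expansion_factor = 1.20   # fluidization expands the bed, typically <= ~20%

for u in (0.05, 0.10, 0.50):  # m/s, inside the broad 0.01-30 m/s window
    tau = residence_time_s(settled_height_m * expansion_factor, u)
    print(f"u = {u:.2f} m/s -> tau = {tau:.2f} s")
```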
The process is typically conducted isothermally, with the fluidized particles being maintained at a more or less constant temperature once steady-state conditions are reached. If necessary to maintain isothermal conditions, the column may be equipped with cooling to remove excess heat. In processes such as combustion, the exothermic nature of the reaction provides the necessary process heat to maintain isothermal conditions; excess heat can be recovered and the energy values used, for example, for heating or to produce process steam for a variety of uses. Process steam may be used to generate power such as through operation of a turbine.
Thus, the process of the invention provides a convenient and efficient way to conduct molten salt reactions. The process provides greatly improved mass transfer of reagents to and from the surface of the molten salt, so that reactions occur more rapidly and with higher yield. When supported on the support particles, the molten salt assumes a very high surface area configuration that promotes operational efficiency.
The method is applicable to any process in which the reagents and reaction products are capable of being passed through the fluidized bed. Thus, it is generally applicable to reactions such as, for example, condensation reactions, hydrocarbon cracking, isomerization, halogenation, oxychlorination, oxidation, dehalogenation, and the like. Specific processes include hydrocarbon cracking and desulfurization using molten carbonate salts, chlorine removal from hydrogen sulfide and oxidation of sulfur dioxide to sulfur trioxide. The invention is also adaptable to coal and biomass gasification reactions of the type described by Matsunami et al., Sol. Energy 68(3) 257-261 (2000) and Peelen et al. in High Temp. Mater. P-US 2: (4) 471-482 (1998); propene oxidation of the type described by Nijhuis et al. in Appl. Catal. A-Gen. 196(2) (2000); pyrolysis of kraft lignin; the preparation of mixed phosphates of the type described by Afanasiev in Chem. Mater. 11: (8) 1999-2007 (1999); the electrodeposition of aluminum from aluminum chloride-N-(n-butyl)pyridinium chloride of the type described by Ali et al. in Indian J. Chem. Techn. 6(6) 317-324 (1999); deoxidation of niobium and titanium as described by Suzuki et al. in J. Alloy Compd. 288 (1-2) 173-182 (1999); cyanide destruction reactions of the type described by Alam et al. in Environ. Sci. Technol. 32: (24) 3986-3992 (1998); titanium production processes of the type described by Deura et al. in Metall. Mater. Trans. B 29 (6) 1167-1174 (1998); 2-arylpropionic esters synthesis reactions of the type described by Zim et al. in Tetrahedron Lett. 39: (39) 7071-7074 (1998); nitrogen gas reduction reactions of the type described by Goto et al. in Electrochim. Acta 43: (21-22) 3379-3384 (1998); oxidations of methane to methanol such as are described by Lee et al. in Z. Naturforsch. B 53: (2) 249-255 (1998); electrodeposition of titanium onto alumina, and silicon nitride, as described by Wei et al. in Mater. Lett. 31: (3-6) 317-320 and 359-362 (1997); spent oxide fuel reduction processes; and palladium-catalyzed Trost-Tsuji C-C coupling reactions as described by de Bellofon et al. in J. Mol. Catal. A-Chem. 145 (1-2) 121-126 (1999). A process of particular interest is the combustion of fuel gasses such as natural gas, a vaporizable liquid hydrocarbon, syngas and a C1-C6 alkane. Using the process of this invention, rapid and complete combustion of these gases can be conducted at temperatures well below 1200K, preferably below 1100K and more preferably from about 600-1000K. At temperatures below 1100K, and especially below 1000K, NOx formation is minimized.
In combusting fuel gasses according to the invention, a mixture of fuel gas and an oxygen source (typically air) is used as both the fluidizing gas and the reactant stream. An excess of oxygen, preferably a 5-25% excess based on the moles of fuel gas, is typically used. The gas stream is passed upwardly through the fluidized bed, contacting the molten salt on the particles. The hydrocarbon values are oxidized to carbon dioxide and water, with very small amounts of carbon monoxide being produced.
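For a fuel such as methane (CH4 + 2 O2 → CO2 + 2 H2O), the stated 5-25% oxygen excess translates into an air requirement as sketched below, reading the excess as excess over the stoichiometric O2 demand (air taken as ~20.9 mol-% O2; the one-mole fuel basis is illustrative):

```python
# Air demand for methane combustion, CH4 + 2 O2 -> CO2 + 2 H2O, with the
# 5-25% excess read as excess over the stoichiometric O2 requirement.
# Air is taken as ~20.9 mol-% O2; the 1-mol fuel basis is illustrative.

O2_FRACTION_AIR = 0.209

def air_mol_per_mol_fuel(o2_per_fuel: float, excess: float) -> float:
    return o2_per_fuel * (1.0 + excess) / O2_FRACTION_AIR

for excess in (0.05, 0.15, 0.25):
    air = air_mol_per_mol_fuel(o2_per_fuel=2.0, excess=excess)
    print(f"{excess:.0%} excess O2 -> {air:.1f} mol air per mol CH4")
```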
The molten salt can be any that catalyzes the oxidation at the desired temperature, including, for example, Na2CO3, K2CO3, Li2CO3, Cs2CO3, LiNO3, Na2WO4, Na2SO4, B2O3 and the like. However, commercially available fuel gasses typically contain varying amounts of sulfur-containing and/or halogenated impurities such as H2S and COS, various organic polysulfides, halogenated alkanes and similar compounds. These may range from trace amounts to, in rare cases, 90 mole-% of the fuel. The sulfur-containing compounds ordinarily would be oxidized to SOx compounds that would escape with the effluent gas. Thus, preferred molten salts (or molten salt mixtures) are those that react with the sulfur and/or halogen-containing compounds to capture sulfur and/or halogens and remove them from the gaseous effluent stream.
Molten salts that perform this latter function well include alkali metal carbonates, in particular lithium carbonate, sodium carbonate, potassium carbonate and cesium carbonate, or mixtures of two or more of these. Sulfur capture mainly proceeds through the reaction

M2CO3 + H2S + 2 O2 → M2SO4 + CO2 + H2O

wherein M represents the alkali metal ion. Halogen capture is believed to be due to the formation of HCl under the conditions of the combustion, which reacts with the alkali metal carbonate to form the corresponding halide salt (e.g., NaCl). Lithium carbonate, sodium carbonate and potassium carbonate, or mixtures of any two or more of these, are especially preferred. Mixtures of lithium and sodium carbonate and of sodium and potassium carbonate are most preferred. If desired, the aforementioned carbonate salts can be used in combination with another type of molten salt such as those described above. Particularly suitable salts for use in combination with the carbonate salts include sodium sulfate, sodium nitrate and the like.
Because the molten salts that are reactive with sulfur and/or halogens are consumed, it is necessary to replenish those salts periodically. As discussed before, this can be accomplished by introducing additional amounts of supported molten salt continuously or intermittently during the operation of the process, or by shutting down the process periodically to substitute fresh supported molten salt for the spent catalyst.
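The replenishment rate can be estimated directly from the sulfur-capture stoichiometry, since one mole of carbonate is consumed per mole of H2S. A rough sketch — the fuel flow and H2S content are assumed, illustrative values:

```python
# Carbonate make-up estimate from the sulfur-capture stoichiometry:
#   M2CO3 + H2S + 2 O2 -> M2SO4 + CO2 + H2O   (1 mol carbonate per mol H2S)
# Fuel flow and H2S content are assumed, illustrative values.

MW_NA2CO3_G_MOL = 105.99  # molar mass of Na2CO3

def carbonate_makeup_g_per_h(fuel_mol_per_h: float, h2s_mol_frac: float) -> float:
    h2s_mol_per_h = fuel_mol_per_h * h2s_mol_frac
    return h2s_mol_per_h * MW_NA2CO3_G_MOL  # 1:1 stoichiometry

rate = carbonate_makeup_g_per_h(fuel_mol_per_h=100.0, h2s_mol_frac=0.001)
print(f"Na2CO3 make-up needed: {rate:.1f} g/h for 0.1 mol-% H2S in the fuel")
```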
Thus, this invention provides an economical method by which large quantities of combustible gasses can be burned at temperatures at which NOx formation is minimized, while at the same time reducing SOx and halogen emissions. Thus, the process is readily adaptable for use in electrical power and/or steam generation, wherein the excess heat of reaction is captured and converted to steam and/or electrical power.
Another use of the invention is in hazardous waste processing. Organic waste products of various types can be oxidized in accordance with the invention by passing a stream of the waste products through a fluidized bed of supported molten salt. Among such waste products are chlorinated hydrocarbons, including PCBs (polychlorinated biphenyls), trichlorobenzene, DDT (dichlorodiphenyl-trichloroethane) and other chlorinated aromatic compounds; nitrated and polynitrated compounds such as ammonium picrate, trinitrotoluene, mixed radioactive wastes, organic sulfides and polysulfides, dioxin, asphaltenes, organic cyanide compounds and the like.
The following examples illustrate a suitable set of fluidization conditions for certain embodiments of the invention. The examples are not to be construed as limiting the invention in any way.
EXAMPLE 1

A fluidized bed reactor having an internal diameter of 7.5 cm and equipped with a perforated plate distributor containing 18 1-mm holes is charged with 350 grams of Alcoa CPN 28×48 mesh alumina particles. The porous particles have an average diameter of 450 microns and a surface area of 318 m2/g. The particles are heated to 250° C. for several hours in the reactor to drive off any absorbed water. The particles are then heated to 400° C. and air is pumped in through the distributor plate at varying rates to establish a minimum fluidization velocity for the neat particles. Once the minimum fluidization velocity is established, the fluidizing gas flow rate is adjusted to 15 cm/s above the minimum fluidization velocity. The pressure drop across the bed is about 725 Pa. While maintaining reactor temperature at 400° C. and constant fluidizing gas velocity, 225 g of a lithium nitrate solution (9 g LiNO3/100 ml water) is added to the reactor in 5-ml increments over several days. Bed fluidization is evaluated by measuring the pressure drop across the bed; a constant pressure drop reflects good bed fluidization as the salt solution is added. Bed fluidization is easily maintained while the entire quantity of LiNO3 (20.5 g) is added.
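For readers wanting to estimate the minimum fluidization velocity referenced in this example, the Wen-Yu correlation is one standard textbook route (it is not part of the patent); the particle and gas property values below are assumptions:

```python
# Wen-Yu estimate of minimum fluidization velocity (standard textbook
# correlation, not part of the patent):
#   Ar    = d^3 * rho_g * (rho_p - rho_g) * g / mu^2
#   Re_mf = sqrt(33.7^2 + 0.0408 * Ar) - 33.7
#   U_mf  = Re_mf * mu / (rho_g * d)
import math

def u_mf_wen_yu(d_p: float, rho_p: float, rho_g: float, mu: float) -> float:
    ar = d_p**3 * rho_g * (rho_p - rho_g) * 9.81 / mu**2
    re_mf = math.sqrt(33.7**2 + 0.0408 * ar) - 33.7
    return re_mf * mu / (rho_g * d_p)

# Air at ~400 C: rho ~0.52 kg/m^3, mu ~3.3e-5 Pa*s. The 1500 kg/m^3
# envelope density assumed for the porous alumina is illustrative.
print(f"U_mf ~ {u_mf_wen_yu(450e-6, 1500.0, 0.52, 3.3e-5):.3f} m/s")
```

For these assumed values the estimate comes out near 0.05 m/s; it is merely a starting point, since the example above fixes the minimum fluidization velocity empirically from the pressure-drop measurement.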
EXAMPLE 2

The ability to maintain bed fluidization with added molten salt is evaluated in a manner similar to that described in Example 1. In this example, equal weights of nonporous silicon dioxide particles of various diameters are used instead of the porous alumina particles used in Example 1.
When nonporous particles having a diameter of 350 microns are used, bed fluidization is lost before even 0.5 g of LiNO3 is added. However, fluidization conditions are sustained when a loading of 0.5 g LiNO3 is supported on 350 g of nonporous, 850 micron diameter particles. When the particle size is increased to 1100 microns, nearly 2 grams of LiNO3 can be added before bed fluidization is lost. | 
What is the DNA effect? The DNA effect is the ability of large technology companies to build a competitive advantage by leveraging user-generated data in their networks. DNA in this context stands for 'data-network-activities' and refers to how the business model of large technology companies (like Google, Apple, Facebook, Alibaba and Tencent, aka Big Tech) depends on direct interactions between users, which generate lots of data, and on the ability of these companies to use this data to scale up operations and enter new areas like financial services.
Category: Digitalization
Future of Work
The popular view that emerging technologies like Artificial Intelligence (AI) and robotics will dramatically improve our personal and professional lives usually gets contrasted against the threat of the millions of jobs at risk from automation. Against this backdrop, a report last year based on a three-year study by MIT offers a balanced perspective on the relationship between these emerging technologies, the future of work, and the labor market.
FRB bitcoin
The reality of a central bank digital currency (CBDC) is coming closer. However, the development of a CBDC is said to pose the biggest threat to banks in developed economies like the US.
2021 NASDAQ Tech Trends report
The annual Nasdaq report highlights the 4 most important trends that will have an impact in the coming year
Technology Adoption – Faster & Shorter
The pace of technology adoption is getting faster and the time period for a new technology to become mainstream is getting shorter – it took 40 years for electrical power to be present in 90% of US households but only 24 years for computers and 10 years for social media usage.
How much would you pay to use Wikipedia? – Measuring value of free digital services
An approach to measuring the value of the free digital services provided by the Big Tech companies (i.e., Google, Facebook, Wikipedia) by estimating how much users would have to be compensated to give them up for a period of time. | https://just-random-thoughts.blog/category/digitalization/
A herniated disc pushes into the hollow tube of the spinal canal and presses directly against the spinal cord, which can damage the cord. A herniated disc can also block blood flow in the single blood vessel supplying the front of the spinal cord in the thoracic region, which can severely damage the nerve tissue of the spinal cord. The discs in the thoracic spine are very thin compared to other parts of the spine, which limits the motion of the upper back. The thoracic area is nevertheless susceptible to disease and injury that, in severe cases, require spinal surgery. The goal of the surgery is to remove the part of the herniated disc (or the full disc) pressing on the spinal cord or nerve root; this procedure is called thoracic discectomy.
Who Needs Thoracic Discectomy?
Symptoms of thoracic disc herniation are often present for up to two years before the patient seeks treatment. These symptoms depend upon the location, position, and size of the degenerative disc disease, nerve irritation, or damage to the spinal cord. Symptoms may include:
- Mid-back pain, radicular pain.
- Numbness and weakness.
- Bladder dysfunction.
- Lower extremity weakness.
How is Thoracic Discectomy Performed?
The following procedure is used to treat thoracic disc herniation. Thoracic discectomy aims to open the disc using laser ablation and a percutaneous needle. It is specifically indicated where the disc herniation has occurred inside the nucleus pulposus, and is contraindicated where free disc fragmentation is evident.
Thoracic discectomy can be performed with two types of procedures: the anterior approach (from the front) or the posterolateral approach (from the back).
Anterior Procedure
This procedure usually involves an open thoracotomy, in which the herniated disc is approached through the chest cavity. This procedure is Video-Assisted Thoracic Surgery (VATS), a minimally invasive spine surgery performed carefully using several techniques and a thoracoscope, which is a surgical tool with a small camera. The thoracoscope is inserted into the thorax through a small incision and provides real-time images of the surgical area on a screen. These images help the surgeon locate the herniated disc and remove it using several instruments inserted through small incisions. This procedure is minimally invasive and results in faster recovery than other procedures.
Posterolateral Procedure
The thoracic herniated disc is accessed through a small cut on the back of the spine. An opening through the bones that shield the herniated disc is made by removing a small part of the rib where it connects to the spine, along with the transverse process (a small bone attached to the spine). The thoracic discectomy is then performed with various small instruments. This surgery can be completed with the help of an X-ray and an endoscope, which allow the surgery to be performed carefully through tiny openings. Removal of the thoracic herniated disc relieves pressure on the spinal cord or nerves and reduces pain.
Our board-certified neuro-spine surgeons will discuss your options with you and choose the one which is most effective for you.
Need help finding a doctor who gives you the best neuro spine care in New Jersey? Call us now and schedule your appointment today.
FAQs
How Long Does It Take to Recover From Thoracic Spine Surgery?
- Recovery from thoracic spine surgery typically takes four to six weeks.
- Your surgeon will tell you when you can resume your normal activities or go back to work.
- If you feel pain and discomfort, it can be easily managed by applying ice or taking pain medications, which will be prescribed by your surgeon.
Why Is Thoracic Pain a Red Flag?
- If you are experiencing back pain localized in the thoracic region, it may hint at a serious underlying issue.
- If you are experiencing thoracic back pain after a violent trauma such as a car accident or a fall, you should consult a neuro spine surgeon.
- People with osteoporosis are more prone to serious issues in the thoracic region, even with minor injuries to the back.
What Is the Success Rate of Thoracic Discectomy?
- Patients who have received thoracic discectomy report good outcomes at a two-year time after surgery.
- The success rate of thoracic discectomy is 80 percent.
Is Thoracic Spine Surgery Dangerous?
- Like any surgical procedure, thoracic discectomy also poses some level of risk and complications.
- Because your surgeon will be operating on your spine, it is always a good idea to take time and review the risks associated with it.
- Usually, complications are easily managed by surgeons. | https://completemedicalwellness.com/departments/neuro-spine/thoracic-procedures/thoracic-discectomy/ |
Karen Anderssen
1727-
1727
Birth • 0 Sources
1727
England, United Kingdom
Death • 0 Sources
Family Members
SPOUSES AND CHILDREN
1724-1774
Marriage: 1745
Sitchest, Somersetshire, England
Karen Anderssen
1727-
Children ()
PARENTS AND SIBLINGS
Unknown
Marriage:
Unknown
Children (1)
Karen Anderssen
1727-
| https://ancestors.familysearch.org/en/LZ4G-5BB/karen-anderssen-1727
Publication Date:
January 2017
Meeting the regulation and risk challenge in 2017 means dealing with the effects of unprecedented disruption, from the fallout of the 2008 financial crisis to geopolitical uncertainty to the effects of technological change. Organizations are responding to these anxiety producers with a variety of strategies, say CEOs in our annual survey. They are updating contingency plans and crisis procedures that go beyond their own corporate walls to manage external risk factors and secure supply chains. They are exercising fiscal prudence, tackling cyber risk internally and externally, and in general looking to improve their agility in these uncertain times.
Explore our full portfolio of thought leadership on CEO Challenges 2017 here.
| https://www.conference-board.org/publications/publicationdetail.cfm?publicationid=7404