Shivering. A state of exhaustion or feeling exceedingly fatigued all the time. Confusion. Fumbling hands.
What are the signs and symptoms of hypothermia?
In most cases, mild hypothermia will advance to moderate hypothermia, and then finally to severe hypothermia. Hypertension, shivering, fast breathing and heart rate, constricted blood vessels, apathy and weariness, poor judgment, and lack of coordination are all symptoms of hypothermia. So, to return to the opening question: what does it feel like to have hypothermia?
What happens when you become moderately hypothermic?
When you reach the stage of moderate hypothermia, the condition requires immediate attention; otherwise, it will continue to deteriorate and you will eventually develop severe hypothermia. As soon as your internal body temperature falls below 83 degrees, it is quite possible that you will fall asleep and become unresponsive to the majority of external stimuli.
Do people with hypothermia exhibit bizarre behaviors?
People can lose consciousness and perhaps pass away if their hypothermia progresses to a severe state. This happens because their respiratory and cardiac rates can drop down to dangerously low levels. However, before they lose consciousness, persons who are suffering from hypothermia have been observed to display certain strange behaviors that may in fact be related to their condition.
What happens to a person with hypothermia before death?
People can lose consciousness and perhaps pass away if their hypothermia progresses to a severe state. This happens because their respiratory and cardiac rates can drop to dangerously low levels. But before they lose consciousness, people suffering from hypothermia have been known to display certain unusual actions that may in reality be a last-ditch effort to survive.
How do you know if you get hypothermia?
Shivering, which may halt as hypothermia develops (although shivering is really a good indicator that a person’s heat regulation mechanisms are still working), is one of the symptoms of hypothermia in adults. Other symptoms include: a shallow and labored breathing pattern, and confusion as well as memory loss.
Do you feel hot when you have hypothermia?
In actuality, in severe cases of hypothermia you could have a sensation of great warmth, because your body is making a desperate effort to prevent the tissue in your limbs from freezing and is thereby expanding blood vessels. Shaking is one of the most common signs that someone is suffering from hypothermia (though this may stop as symptoms increase in severity), along with breathing that is shallow or sluggish.
Can you have hypothermia and not know it?
The vast majority of people do not become aware that they have it until it is too late. Hypothermia can result in a heart attack, damage to the liver, failure of the kidneys, or even death if it is not treated. The condition known as hypothermia is distinct from frostbite.
What are the 5 stages of hypothermia?
- HT I: Mild hypothermia, temperatures between 35 and 32 degrees. Consciousness that is normal or very close to normal; shivering
- HT II refers to moderate hypothermia, which occurs between 32 and 28 degrees. The shivering ceases, and your consciousness starts to get worse
- HT III: Extreme Hypothermia, Temperatures Between 24-28 degrees
- HT IV: Death Appearing to Have Occurred, 15-24 degrees
- HT V: Fatal hypothermia with permanent consequences
How long does hypothermia last?
If a person is submerged in water that has a temperature of 32.5 degrees Fahrenheit or 0.3 degrees Celsius, they will begin to experience symptoms of mild hypothermia in less than 2 minutes, and they will become unconscious in less than 15 minutes. The average amount of time that they will survive is between 15 and 45 minutes.
How quickly can you get hypothermia?
An overview of the hypothermia timeline: the onset of hypothermia can occur as quickly as a few minutes after falling into cold water, but in most cases the symptoms appear gradually. If the water temperature is below 40 degrees Fahrenheit, significant harm can occur in as little as a few minutes.
What are the stages of hypothermia?
The following are the signs and symptoms that characterize each of the three phases of hypothermia:
- The first stage is characterized by trembling and decreased circulation.
- The second stage is characterized by a slow and weak pulse, sluggish breathing, loss of coordination, irritability, disorientation, and drowsy behavior.
- The advanced stage is characterized by a sluggish, weak, or nonexistent pulse and respiration.
Can you have low temp with Covid?
A temperature is one of the main symptoms of COVID-19; nevertheless, it is possible to be infected with the coronavirus and have a cough or other symptoms without having a fever, or with only a very low-grade fever, particularly in the initial few days of the infection. Keep in mind that it is also possible to have the coronavirus with very few symptoms or perhaps no symptoms at all. This is something that should not be discounted.
How cold does it need to be to get hypothermia?
When your body temperature drops below 95 degrees Fahrenheit, a condition known as hypothermia can develop (35 C).
Does hypothermia go away on its own?
The majority of healthy persons who suffer from mild to moderate hypothermia recover completely, without issues that persist over time. However, younger children and people who are frail or elderly may be more susceptible to hypothermia, because their bodies are unable to regulate temperature as well.
At what temperature does shivering stop?
Shivering often ceases between 86 and 90 degrees Fahrenheit (30 and 32 degrees Celsius).
Does hypothermia make you sleepy?
Your body can lose heat faster than it can produce it. This has the potential to bring on hypothermia, also known as an abnormally low body temperature. It can render you drowsy, disoriented, and clumsy. Because it is progressive and affects your thinking, you might not realize that you require assistance right away.
How do you warm up hypothermia?
To warm up a person, you should use many layers of dry blankets or clothing. Put something over the person’s head so that just their face is visible. Protect the person’s body from the chilly ground by providing insulation. Place the individual so they are lying on their back on a blanket or some other warm surface if you are outside. | https://considercommonsense.com/interesting/what-does-it-feel-like-to-have-hypothermia.html |
We did it!
Amelia Martin raised £785 from 28 supporters
Closed 30/05/2020
Weʼve raised £785 to go towards funds for my World Challenge trip to Botswana, where I will be taking part in a community project.
- Northamptonshire
- Funded on Saturday, 30th May 2020
Story
I will be travelling to Botswana in 2020 to help with a community project. Botswana is a developing country, and we will be trying to help with supplies and building, as well as education. I am so lucky to live where I do and to have all the opportunities I have, and I am therefore thrilled to have a chance to help other young people who haven’t been so lucky.
Updates
Amelia Martin started crowdfunding
Supporters
28
Niamh
Oct 1, 2019
Good luck x
£5.00
Char
Sep 30, 2019
£10.00
Brian Martin
Sep 29, 2019
Hope all goes well Amelia.
£40.00
Anonymous
Sep 26, 2019
Cally Hampson
Sep 25, 2019
Good luck❤️❤️
£10.00
Tracey Kitchen
Sep 23, 2019
Good luck with the fundraising Amelia x
£10.00
Janthea Brigden
Sep 23, 2019
Well done Amelia, hope you have a fantastic time!
£25.00
What is crowdfunding?
Crowdfunding is a new type of fundraising where you can raise funds for your own personal cause, even if you're not a registered nonprofit.
The page owner is responsible for the distribution of funds raised.
Do you know anyone in need or maybe want to help a local community cause? | https://www.justgiving.com/crowdfunding/amelia-martin-botswana |
Place the laundry in the tub, and submerge all the items in the soapy water.
Let the laundry soak for 10 minutes.
Swirl the laundry in the water, and agitate items against each other aggressively.
Rub articles of clothing together to remove any stains.
Is hand washing clothes effective?
Pros: Hand washing clothes is first and foremost energy efficient. All you need is a tub, water, and your hands (or feet in some instances) to wash your clothes. No need for you to use electricity since you will be using manual labor.
How do you manually wash clothes?
How to Hand Wash Clothes –
How can I wash without a washing machine?
Eco-Friendly Ways to Wash Your Clothes Without a Machine : Green
How do you bleach clothes in a bathtub?
Add 1/4 cup of bleach to 1 gallon of water and soak the clothes for only 5 to 10 minutes; any more, and you’ll start to break down the fabric. If you have stains on pastel, colorfast clothes, try soaking them in all-fabric bleach, which is gentler than chlorine bleach.
Does hand washing clothes kill germs?
To kill the germs in your laundry, wash your clothes on the hot cycle, then put everything in the dryer for 45 minutes. Wash whites with bleach, and use peroxide or color-safe bleach for colors. Do your laundry in water that’s at least 104 F to kill any viruses or bacteria.
Is it better to hand wash or machine wash?
The difference between handwashing and machine washing lies in the intensity with which the fabrics are agitated and rinsed. Your washing machine will not be as gentle as you will be when you wash by hand, and certain fabrics and embellishments may shrink or become damaged in warm water.
How can I get my clothes really clean?
Add a half-cup of baking soda to the wash cycle along with your normal detergent to get rid of odors and residues left in clothing. Alternatively, add a cup of vinegar to your wash to remove any residue left by fabric softeners.
How do you make clothes easier to wash by hand?
Part 2 of 3: Hand Washing the Clothes
- Wash light and dark clothes separately. Start with the lightest colored items first.
- Fill two tubs with water. Use wide deep tubs that can fit at least one item of clothing.
- Add the detergent to one tub.
- Wash the clothes in the water.
- Rinse the clothes in the other tub.
Can I hand wash clothes with shampoo?
When you’re out of laundry detergent or traveling, you can still hand wash your clothes using shampoo. Wash your garments by hand in the sink or spot-treat stains with shampoo and water.
How do you wash a small load of laundry?
Washing Machine Settings
Check the manual, but as a rule of thumb, a small load will fill up the drum about a quarter of the way, medium will fill half, and large will fill three-quarters. Don’t pack your washer completely full; make sure the clothes have room to tumble freely. When in doubt, use cold water.
How do you wash clothes when washing machine is broken?
You’ll need an empty laundry basket to toss in the washed, rinsed, and wrung-out clothes.
- Step 1: Don’t Panic! Your washing machine just broke.
- Step 2: Get Organized.
- Step 3: Wash the Clothes.
- Step 4: Bleach the Clothes.
- Step 5: Hang Laundry Up to Dry.
- Step 6: Use the Laundromat If You Need It.
Do I really need to hand wash clothes?
Not all your delicates need to be hand washed. Most things labeled as “delicate,” “dry clean,” or “hand wash” (as opposed to “dry clean ONLY” or “hand wash ONLY”) can be safely put in a washing machine — as long as you’re careful. Using the “delicate” setting on the machine is another obvious aid to gentle washing. | https://powerwashlb.com/recommendations/how-to-wash-your-clothes-in-the-bathtub.html |
In Ireland, approximately, five million tonnes of cement are produced annually. Every tonne of this cement emits almost one tonne of CO2.
One-third of these emissions come from the energy required for the cement manufacturing process. Two-thirds, however, come from the burning of limestone to produce clinker – an ingredient of cement.
It is for this reason that it is incredibly important that the Irish Government correctly identified clinker as “extremely carbon-intensive” in the Climate Action Plan, released this week.
The Climate Action Plan
The Climate Action Plan is the most progressive undertaking presented by the Government in addressing Ireland’s role and the impact of the long term climate crisis. The action to reduce carbon emissions by 51% by 2030 is bold, admirable, and should be achievable for every aspect of the industry.
For the cement and construction industries, the Plan contains several proposals to help decarbonise both industries, and in particular, strengthen the pathway towards low carbon, innovative technologies.
The key, however, is to ensure a low carbon transition in the cement and construction industries at an acceptable cost.
The challenge
A sectoral breakdown of industry emissions, in the Climate Action Plan, shows that manufacturing combustion and process emissions from the mineral industry – and primarily cement manufacturing – account for the most significant share of emissions in this sector.
The Irish Green Building Council reports that the built environment sector accounts for around 22 million tonnes of CO2 in a standard year, equal to around 35% of all emissions produced in Ireland. 6.9 million tonnes (11%) of these emissions come from the embodied carbon of the materials themselves.
The National Development Plan, published in October, sets out ambitious infrastructure and residential projects that, while necessary to accommodate our growing population, will result in the embodied carbon increasing by 10% every year.
The programme of work to decrease embodied carbon in construction materials, included in the Climate Action Plan, is welcomed. The performance-based approach is key to accelerating innovative low-carbon technologies being adopted and endorsed by the construction industry.
Only by deploying the full spectrum of technical solutions, however, will we meaningfully accelerate decarbonisation of the construction sector. Fortunately, these solutions already exist.
Technical solutions
The cement and construction industries are developing and deploying a range of emission reduction technologies. At Ecocem, we add a further dimension to these efforts.
For more than 20 years, we have been developing, manufacturing, and supplying low carbon cement and construction solutions, providing the lowest carbon cement ever used in Europe at scale. From Dublin’s Aviva Stadium to the Convention Centre, we have achieved a cumulative reduction in CO2 emissions of almost 14 million tonnes.
We are ready and able to play an important role in assisting Ireland to reduce our emissions – and our technology has the potential to reduce the carbon footprint of the cement industry by 50%, in line with the Government’s ambitions. | https://www.ecocem.ie/ecocem-irelands-response-to-the-climate-action-plan-2021/ |
Each day Andrea does 25 squats to warm up for gymnastics practice and 15 squats to cool down after practice. How many squats does she do in all when she practices Monday through Friday?
Benny gets $5 a week for allowance. After saving his money for 20 weeks, how much more does Benny need to buy a bike that costs $108?
Genevieve makes 43 bracelets. She gives 13 bracelets away as gifts and sells the rest for $4 each. How much money does Genevieve make in all?
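For readers checking their work, here are the worked two-step solutions to the three problems above (straightforward arithmetic from the numbers given):

```latex
% Worked solutions to the three word problems above
\begin{align*}
\text{Andrea:}\quad & (25 + 15)\ \text{squats per day} \times 5\ \text{days} = 40 \times 5 = 200\ \text{squats}\\
\text{Benny:}\quad & \$108 - (\$5 \times 20\ \text{weeks}) = \$108 - \$100 = \$8\ \text{more needed}\\
\text{Genevieve:}\quad & (43 - 13)\ \text{bracelets} \times \$4 = 30 \times \$4 = \$120
\end{align*}
```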
Rotate to landscape screen format on a mobile phone or small tablet to use the Mathway widget, a free math problem solver that answers your questions with step-by-step explanations.
We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page. | https://www.onlinemathlearning.com/solve-two-step-word-problems-multiply.html |
The bouncing ball is an indispensable part of the vibrating screen. Its function is to pulsate and strike the upper and lower sieves, so that material adhering to the sieve surface is separated from it, avoiding any reduction in sieving output and accuracy caused by blinding of the mesh.
Bouncing balls are generally available in sizes of φ10, φ15, φ20, φ25, φ28, φ30, φ40, φ50, φ60, etc. The most commonly used are φ25 and φ28, while φ40, φ50 and φ60 are mainly used on mining vibrating screens.
There are several types of bouncing balls, such as rubber balls, silicone balls, and PTFE bouncing balls. Different bouncing balls are used for different sieving purposes.
Natural rubber bouncing ball: suitable for general material screening process.
Silicone bouncing ball: It has the advantages of good elasticity, good wear resistance and no falling off. It is mainly suitable for sieving of food, medicine and other materials.
PTFE bouncing ball: It has the characteristics of oil resistance, immersion in oil and no deformation, and is mainly suitable for screening oily liquid materials.
According to different material characteristics, choose different bouncing balls for sieving machines to meet different requirements. Bouncing balls are small accessories, but they play a large role during sieving process. | https://www.vibrosievingmachine.com/the-role-of-bouncing-ball-in-vibrating-sieve-machine/ |
Thursday, 8 February 2007
Eternal Return
Eternal return (also known as "eternal recurrence") is a concept which posits that the universe has been recurring, and will continue to recur, in the exact same self-similar form an incomprehensible and unfathomable number of times. The concept has roots in ancient Egypt, and was subsequently taken up by the Pythagoreans and Stoics. With the decline of antiquity and the spread of Christianity, the concept fell into disuse, though Friedrich Nietzsche briefly resurrected it. In addition, the philosophical concept of eternal recurrence was addressed by Arthur Schopenhauer. It is a purely physical concept, involving no "reincarnation," but the return of beings in the same bodies. Time is viewed as being not linear but cyclical. The basic premise is that the universe is limited in extent and contains a finite amount of matter, while time is viewed as being infinite. The universe has no starting or ending state, while the matter comprising it is constantly changing its state. The number of possible changes is finite, and so sooner or later the same state will recur. At least one mathematical proof has been developed to disprove this rationale for eternal return. Physicists such as Stephen Hawking and J. Richard Gott have proposed models by which the (or a) universe could undergo time travel, provided the balance between mass and energy created the appropriate cosmological geometry. More philosophical concepts from physics, such as Hawking's arrow of time, discuss cosmology as proceeding up to a certain point, whereafter it undergoes some form of time reversal (which, due to T-symmetry, is thought to bring about a chaotic state due to thermodynamic entropy).
| |
Notice is hereby given of the intent of the NCI Agency, as the Host Nation, to issue an Invitation for Bid (IFB) for the Provision of Information Technology Modernisation (ITM) for the NATO Enterprise Work Package 1.
The ITM project is intended to:
a. Enable NATO Enterprise Users to continue to conduct their operational business, via the NCI Agency provision of the required level of end user services, which are underpinned by infrastructure services.
b. Provide modern effective and cost-efficient Infrastructure as a Service (IaaS).
A summary of the requirements of the project is set forth in Annex A, attached to the Notification of Intent
IFB-CO-13703-ITM
The estimated cost, subject to authorisation, for the services and deliverables included within the basic scope of the intended contract (Wave I to IV) is 105.63 M EUR Investment, and 63.03 M EUR Operations and Support over a life of 5 years. The investment cost of Wave I is 42.39 M EUR.
The not-to-exceed cost for bids submitted in response to the IFB shall be 132.03 M EUR, subject to authorisation (125% of the estimated investment cost), or the equivalent expressed in any other allowed currency calculated in accordance with the currency conversion prescriptions that will be expressed in the IFB.
Recently the L.A. Lakers have taken in a new player named Andre Ingram. Ingram is a 32-year-old basketball player who has played in G-League for a decade. The G-League is the official minor league basketball organization. Recently he was given the opportunity to join the NBA by accepting the offer made by the Lakers on April 11, 2018.
Most people are not too sure why he made it into the NBA through the Lakers, but their explanations share a common thread.
Isaac Hopkins, a senior at Sedro-Woolley High School, talked about why he thinks Ingram made it.
“I think he made it to the NBA because he’d been working most of his life to achieve a level of skill in basketball that no one else has achieved,” said Hopkins.
Ingram spent 10 years of practice in the G-League to earn his dream. In the end his hard work paid off.
The L.A. Lakers have been on a decline for the past few years, but recently they have been making an effort to change that. The Lakers decided to take Ingram onto their team because of his skills and perseverance.
The website Quora said, “the average NBA player scored 9.7 points per game in 2015-16 season.”
Some players do score above the average, but the rest are there to work as a team so that players who are open can make the shots and score. The L.A. Lakers' points per game (ppg) average was 7.63 for the team as of the 2017-18 season.
Ingram has "averaged 10.2 points in his career, but right now he has a 12 point average for the Lakers as of the 2017-2018 season," according to ESPN.
§ 22. Mr. Edmund Harvey
asked the Secretary of State for the Home Department whether he can give the latest figures of the numbers of children awaiting accommodation in approved schools; whether numbers of boys are still detained in prison owing to the shortage of remand homes; and what progress has been made in the provision of additional remand homes and additional approved schools?
§ The Secretary of State for the Home Department (Mr. Herbert Morrison)
At the end of last year about 1,300 boys and girls under 17 were awaiting vacancies in approved schools, but it must be remembered that it usually takes time to find for each case the appropriate school and there must, therefore, always be a substantial number waiting while arrangements are made for their reception into schools. The number waiting in prison was less than 20. As my hon. Friend is aware, no child under 14 can be detained in prison, and young persons between 14 and 17 cannot be so detained unless the court gives a certificate of unruliness or depravity. During the year nearly 600 additional places were provided in approved schools and about 450 in remand homes. Further plans are on foot and, although in war conditions the difficulties of finding premises and getting necessary work done are very great, it is hoped that a number of additional approved schools and remand homes may be opened during the next few months.
§ Mr. Harvey
Would it be possible to lease premises or urge authorities to do so instead of merely acquiring by purchase?
§ Mr. Morrison
I do not think that would be excluded from consideration.
§ Mr. Kenneth Lindsay
Have they power to requisition?
§ Mr. Morrison
I am not sure.
§ Mr. Thorne
Can anything be done to help local authorities to obtain more premises?
§ Mr. Morrison
We give them all the help we can in that direction. | https://api.parliament.uk/historic-hansard/commons/1942/feb/12/approved-schools-and-remand-homes |
Similar to a Column definition for a DB or a spread sheet (or, with reservations, a CSV file), a Column describes properties such as name (key) and type of elements related to that Column (e.g. the according elements of the Row lines).
A Column implementation can provide its own text exchange format for the given objects. This method enables the Column to convert a value of the given type to a String and via fromStorageString(String) back to the value (bijective). This method supports data sinks (such as relational databases) which allow only a single value in a row's entry: In case T is an array type, then the storage String representation of the elements in that array are represented by a single returned String. In case a data sink (such as Amazon's SimpleDb) is to be addressed which provides dedicated support for multiple values in one row's entry, then the method toStorageStrings(Object) may be used instead.
A Column implementation can provide its own text exchange format for the given objects. This method enables the Column to convert a value of the given type to a String array and via fromStorageStrings(String) back to the value (bijective). This method supports data sinks (such as Amazon's SimpleDb) which provide dedicated support for multiple values in a row's entry: In case T is an array type, then the storage String representations of the elements in that array may be placed in dedicated entries of the returned String array. In case T is not an array type then the returned String array may contain just one value. In case data sinks (such as relational databases) are to be addressed which allow only a single value in a row's entry, then the method toStorageString(Object) may be used instead.
A Column implementation can provide its own text exchange format for the given objects. This method enables the Column to convert a String value to a value of the given type and via toStorageString(Object) back to the String (bijective). This method supports data sinks (such as relational databases) which allow only a single value in a row's entry: In case T is an array type, then the storage String representation of the elements in that array are represented by the single passed String. In case a data sink (such as Amazon's SimpleDb) is to be addressed which provides dedicated support for multiple values in one row's entry, then the method fromStorageStrings(String) may be used instead.
A Column implementation can provide its own text exchange format for the given objects. This method enables the Column to convert a String array value to a value of the given type and via toStorageStrings(Object) back to the String array (bijective). This method supports data sinks (such as Amazon's SimpleDb) which provide dedicated support for multiple values in a row's entry: In case T is an array type, then the storage String representations of the elements in that array may be placed in dedicated entries of the provided String array. In case T is not an array type then the passed String array may contain just one value. In case data sinks (such as relational databases) are to be addressed which allow only a single value in a row's entry, then the method #fromStorageString(Object) may be used instead.
aStringArray - The value to be converted to a type instance.
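To make the storage-string contract described above concrete, below is a minimal, self-contained sketch of the round trip for an array-typed column. It does not use the actual refcodes-tabular interfaces — the class and method shapes below are hypothetical and the real API's signatures may differ — it only illustrates the bijective conversion the methods describe: toStorageString joins an int[] into a single String for single-value sinks (such as relational databases), while toStorageStrings keeps one String per element for sinks with native multi-value support (such as SimpleDB).

```java
import java.util.Arrays;
import java.util.stream.Collectors;

// Hypothetical illustration of the Column storage-string contract described above;
// not the actual org.refcodes.tabular API.
public class IntArrayColumnSketch {

    // Single-String form, e.g. for a relational database cell: "1,2,3"
    public String toStorageString(int[] value) {
        return Arrays.stream(value)
                     .mapToObj(i -> Integer.toString(i))
                     .collect(Collectors.joining(","));
    }

    public int[] fromStorageString(String storage) {
        if (storage.isEmpty()) return new int[0];
        return Arrays.stream(storage.split(","))
                     .mapToInt(Integer::parseInt)
                     .toArray();
    }

    // Multi-String form, e.g. for a sink with dedicated multi-value support: ["1", "2", "3"]
    public String[] toStorageStrings(int[] value) {
        return Arrays.stream(value)
                     .mapToObj(i -> Integer.toString(i))
                     .toArray(String[]::new);
    }

    public int[] fromStorageStrings(String[] storage) {
        return Arrays.stream(storage)
                     .mapToInt(Integer::parseInt)
                     .toArray();
    }

    public static void main(String[] args) {
        IntArrayColumnSketch col = new IntArrayColumnSketch();
        int[] original = {1, 2, 3};
        // Round trip through the single-String representation
        int[] viaSingle = col.fromStorageString(col.toStorageString(original));
        // Round trip through the multi-String representation
        int[] viaMulti = col.fromStorageStrings(col.toStorageStrings(original));
        System.out.println(Arrays.equals(original, viaSingle));  // true
        System.out.println(Arrays.equals(original, viaMulti));   // true
    }
}
```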
A Column implementation can provide its own printable format of the given objects; for example a human readable text representation of the value (or in very specialized cases even enriched with ANSI escape codes). This method enables the Column to convert a value of the given type to a human readable text. The human readable text, in comparison to the method #toString() (or #toStorageFormat(Object)) is not intended to be converted back to the actual value (not bijective). This method may be used a Header instance's method Header.toPrintable(Record). | https://static.javadoc.io/org.refcodes/refcodes-tabular/1.0.1/org/refcodes/tabular/Column.html |
Q:
SUMPRODUCT dynamic columns with multiple criteria
Attached is the screenshot of the data.
What I am trying to do is to sum all the rows that match the criteria, while summing the relevant columns dynamically.
I believe the picture describes what I'd like to accomplish.
For clarity, given the structure of the data, what I'd like to do is to sum the relevant columns (Actuals for the desired dates) subject to the other criteria.
In this example, I'd like to sum only for the Actuals from Feb 2016 to Apr 2016 for "USA", "John" and "Milk"
A:
=SUMPRODUCT(($B$14=$A$3:$A$8)*($B$15=$B$3:$B$8)*($B$16=$C$3:$C$8)*($B$17=$E$2:$P$2)*($B$18<=$E$1:$P$1)*($C$18>=$E$1:$P$1)*($E$3:$P$8))
that should do
| |
It is common practice to estimate a mean diameter for spherical or sub-spherical particles or vesicles in a rock by multiplying the average diameter of the approximately circular cross-sections visible in thin section by a factor of 1.273. This number-weighted average may be dominated by the hard-to-measure fine tail of the size distribution, and is unlikely to be representative of the average particle diameter of greatest interest for a wide range of geological problems or processes. Average particle size can be quantified in a variety of ways, based on the mass or surface area of the particles, and here we provide exact relations of these different average measures to straightforward measurements possible in thin section, including an analysis of how many particles to measure to achieve a desired level of uncertainty. The use of average particle diameter is illustrated firstly with a consideration of the accumulation of olivine phenocrysts on the floor of the 135 m thick picrodolerite/crinanite unit of the Shiant Isles Main Sill. We show that the 45 m thick crystal pile on the sill floor could have formed by crystal settling within about a year. The second geological example is provided by an analysis of the sizes of exsolved Fe-rich droplets during unmixing of a basaltic melt in a suite of experimental charges. We show that the size distribution cannot be explained by sudden nucleation, followed by either Ostwald ripening or Brownian coalescence. We deduce that a continuous process of droplet nucleation during cooling is likely to have occurred.
Introduction
Useful answers to many geological problems can be obtained from relatively simple calculations that provide time- or length-scales, correct within an order of magnitude, which can then be used to place constraints on the processes likely to have been involved in the problem in question. Good examples of this kind of approach are based on determinations of grain size, with the quantification of particle size in rocks (either grains or bubbles) providing an opportunity to make progress on many problems of petrological interest, such as magma solidification time-scales (Cashman and Marsh, 1988; Cashman, 1993; Higgins, 1996), crystallization during magma ascent (Cashman, 1992; Hammer et al., 1999), rates of production and coalescence of volatile-filled bubbles from magma (Herd and Pinkerton, 1997), buoyancy-driven particle migration or other fluid dynamical processes (Robertson and Barnes, 2015), rates of Ostwald ripening (Cabane et al., 2001, 2005) and pattern formation in metamorphic rocks (Holness, 1997).
Much recent work using grain size to quantify geological processes is based on a sophisticated treatment involving the characterization and interpretation of the particle size distribution (as introduced by Marsh, 1988). The accuracy of such an approach is enhanced by determination of the true 3D distribution of grain sizes by disaggregation, dissolution of the matrix (e.g. Holness, 1997) or tomographic analysis (e.g. Carlson and Denison, 1992; Denison and Carlson, 1997). However, given the limitations of the materials we work with, most studies using grain size are based on observations of thin sections, in which case stereological corrections are required to convert the range of grain intersection size to a 3D grain-size distribution (e.g. Cashman and Marsh, 1988; Johnson, 1994; Higgins, 2000).
For spherical particles, converting the distribution of circular cross-sections observed in thin section into an estimate for the true 3D distribution of particle diameters is mathematically well defined (Wicksell, 1925). For non-spherical particles, such as parallelepipeds, available numerical methods are based on the assumption of invariant particle shape regardless of size (e.g. Higgins, 1994; Sahagian and Proussevitch, 1998; Morgan and Jerram, 2006), which is not likely to be true for natural samples (e.g. Mock and Jerram, 2005; Duchêne et al., 2008). However, for many applications a mean particle diameter is often sufficient to provide order of magnitude estimates that can be used to constrain timescales of geologically interesting problems. The question then arises as to how one might obtain a mean particle diameter from thin-section observations.
For the particular case of a monodisperse population of spheres (i.e. one with a uniform particle size), the average diameter of circular cross-sections obtained by random cross-sections through the population is π/4 times the sphere diameter. The simplicity of this relationship has led to its common application to estimate an average 3D particle diameter for polydisperse (i.e. a population with a range of 3D particle sizes) as well as monodisperse particle distributions (Hughes, 1978; Cashman and Marsh, 1988; Kong et al., 1995; Herd and Pinkerton, 1997), although model-based maximum likelihood approaches are also used (Kong et al., 1995).
In this contribution we concentrate on systems containing spherical particles, such as bubbles, droplets in an emulsion, or equant mineral grains such as olivine or spinel, and argue that such a simple approach to determining the average particle diameter has three problems, which can be remedied easily. Firstly, the average value obtained using this method may be affected strongly by the smallest particles in the population, which is precisely the part of the size distribution that is most likely to be overlooked or not properly resolved. Secondly, this approach provides no estimate of the uncertainties in the result. Thirdly, and perhaps most importantly, it doesn't address a question of great significance for polydisperse particle populations which is absent for the monodisperse case, namely: which of the various ways of calculating the average diameter is most appropriate for the problem we are interested in?
Firstly we discuss the various merits of different ways of calculating the average for sphere diameters, present some simple exact results linking them to circular cross-sections, and provide simulated data to show how many grains need to be measured to achieve any required degree of accuracy. We then explore how sensitive these statistics are to ignoring the smallest cross-sections in a sample, and whether the proposed method can be applied to non-spherical, but equant grains (specifically, we look at cubes). Lastly we illustrate the usefulness of various measures of the average particle size to constrain timescales of settling of olivine grains on the floor of a basaltic sill, and the mechanisms of coarsening of an unmixed immiscible basaltic melt.
Calculating the average
The choice of average diameter depends on which captures the relevant properties of the system under investigation. D4,3 represents the size class around which most of the mass of the particles lies, and for that reason may be taken as a good measure of particle diameter from a compositional point of view. In contrast, if the problem under consideration involves processes controlled by interfacial area (for example the aggregation of crystals to make sintered clusters, or the adsorption of water onto the surface of soil particles) then D3,2 would be the best measure of average particle dimension. This is because the amount of surface area S per unit volume of sample is simply S = 6ϕ/D3,2, where ϕ is the volume fraction of spherical grains. If we are interested in the permeability of rocks then we note that the Kozeny-Carman relation (Carman, 1937) gives an approximate expression for the hydrodynamic permeability in terms of S, so that D3,2 is once more a key quantity. Hydrodynamic permeability may also be relevant to sedimentation of concentrated suspensions, as the rate of sedimentation is likely to be determined by D'Arcy flow through the bed as a whole, rather than by particles settling individually. In Ostwald ripening, crystal growth is driven by interfacial energy, but in the scaling regime of LSW theory (Lifshitz and Slyozov, 1961; Wagner, 1961) all the mean diameters have the same cube root dependence on time, so there is no obvious preferred choice in this case. Comparison of several mean diameters can, however, be used to shed light on whether Ostwald ripening is the dominant growth mechanism. For settling of dilute suspensions, we show below which mean is the more relevant parameter to calculate. As a less geologically relevant aside, statistical studies suggest that when observers look at cross-sections of different sphere distributions, they tend to rank them by size according to one particular moment-based mean (Alderliesten, 2008); a correlation which has yet to be given a rigorous (physiological) explanation.
Sections through sphere distributions
Although these relations are independent of the size distribution of spheres, they are only exact in the limit of an infinite number of individual measurements. For a finite number of measurements of individual particles, there will be some scatter in results if different particle populations are measured (e.g. different parts of the same thin section, or different thin sections of the same sample), and there may also be some systematic error in the mean taken over many realizations of the experiment. As an extreme example of systematic bias, if only one circular section is measured, all the different averages of the measured circle diameters would be identical, leading to an absurd prediction which can never be true for any sphere size distribution.
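Equations 11 and 12 are not reproduced in this extract, so the sketch below uses the standard stereological moment relations for random plane sections of spheres, which give D3,2 = (3π/8)·⟨d²⟩/⟨d⟩ and D4,3 = (32/9π)·⟨d³⟩/⟨d²⟩ in terms of the measured circle diameters d. These numerical factors are a reconstruction (assumed, not taken from the paper) and should be checked against the paper's own equations before use; the code simply accumulates moments of the measured circle diameters and returns the two mean sphere diameters and their ratio.

```java
// Sketch: estimate the volume-weighted (D4,3) and surface-weighted (D3,2) mean sphere
// diameters from circle diameters measured in a thin section. The numerical factors are
// the standard stereological relations for random sections of spheres, assumed here to
// correspond to the paper's equations 11 and 12.
public class MeanDiametersFromSections {

    /** @param d circle (intersection) diameters measured in the section */
    public static double[] estimate(double[] d) {
        double m1 = 0, m2 = 0, m3 = 0;              // moments of the circle diameters
        for (double di : d) {
            m1 += di;
            m2 += di * di;
            m3 += di * di * di;
        }
        m1 /= d.length; m2 /= d.length; m3 /= d.length;
        double d32 = (3.0 * Math.PI / 8.0) * m2 / m1;        // surface-weighted mean
        double d43 = (32.0 / (9.0 * Math.PI)) * m3 / m2;     // volume-weighted mean
        return new double[] { d43, d32, d43 / d32 };         // ratio measures the spread
    }

    public static void main(String[] args) {
        // Illustrative input only: a handful of circle diameters in millimetres.
        double[] circles = { 0.21, 0.35, 0.42, 0.18, 0.55, 0.30, 0.47, 0.26 };
        double[] result = estimate(circles);
        System.out.printf("D4,3 = %.3f mm, D3,2 = %.3f mm, ratio = %.3f%n",
                          result[0], result[1], result[2]);
    }
}
```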
Confidence intervals: how many circular sections to measure?
In this section we address the question of how many particles should be measured to achieve a given level of confidence in the estimates for average grain diameter. The results of our analysis depend on the distribution of sizes and, for simplicity, we assume this to be lognormal. The different lognormal distributions we consider are shown in Fig. 1.
Suppose we measure the diameters of particles in a thin section, and then calculate estimates for D4,3 and D3,2 using equations 11 and 12. If this procedure were performed several times, each time measuring a different population of particles in the thin section, the results would have some scatter, due to statistical fluctuations, which (together with any systematic bias) will give an estimate of the uncertainty in the result.
Figure 2 shows results where we have used a computer to generate monodisperse spheres in random locations in space and generated circular sections from a plane drawn through this distribution. No account is taken of sphere overlaps, so the simulations strictly represent the dilute limit. The plots show the 68% confidence intervals for the predicted quantities compared to their true values (which are known in this case). The confidence intervals mean that 68% of the results lie in the interval; 68% being chosen because for a normal distribution this would represent plus or minus one standard deviation.
If we define the fractional error of the method to be the difference between the least accurate point in the confidence interval and the true value, divided by the true value, then we can plot this fractional error also for different lognormal sphere size distributions. This is done in Fig. 3, where we see that in general the fractional error is inversely proportional to the square root of the number of sections measured, as would be expected from the central limit theorem, but overall it is harder to accurately measure parameters for the wider sphere size distributions.
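A minimal Monte Carlo sketch of the procedure behind Figs 2 and 3 is given below (an illustration written for this text, not the authors' code): lognormal sphere diameters are drawn, each cut sphere is sampled with probability proportional to its diameter, the plane offset is taken as uniform across the sphere, the moment-based estimator is applied to the resulting circle diameters, and the scatter over many repeats gives an empirical 68% interval. The lognormal parameters and sample sizes are purely illustrative assumptions.

```java
import java.util.Arrays;
import java.util.Random;

// Sketch of the simulation used to assess how many circles must be measured:
// draw lognormal sphere diameters, generate random plane sections, estimate D4,3,
// and look at the spread of the estimate over many repeated "thin sections".
public class SectionSamplingSketch {
    static final Random RNG = new Random(42);

    // A sphere is cut by the plane with probability proportional to its diameter D.
    // For a lognormal population LogNormal(mu, sigma), the diameter-weighted population
    // is again lognormal with location mu + sigma^2, so cut spheres can be drawn directly.
    static double sampleCircleDiameter(double mu, double sigma) {
        double D = Math.exp((mu + sigma * sigma) + sigma * RNG.nextGaussian());
        double y = (RNG.nextDouble() - 0.5) * D;      // plane offset, uniform over the sphere
        return Math.sqrt(D * D - 4.0 * y * y);        // resulting circle diameter
    }

    // Moment-based estimate of D4,3 from n measured circles (same factors as earlier sketch).
    static double estimateD43(int n, double mu, double sigma) {
        double m2 = 0, m3 = 0;
        for (int i = 0; i < n; i++) {
            double d = sampleCircleDiameter(mu, sigma);
            m2 += d * d;
            m3 += d * d * d;
        }
        return (32.0 / (9.0 * Math.PI)) * m3 / m2;
    }

    public static void main(String[] args) {
        double mu = 0.0, sigma = 0.5;                 // illustrative lognormal parameters
        int nCircles = 200;                           // circles measured per "thin section"
        int nRepeats = 2000;                          // independent repeats of the experiment
        double[] estimates = new double[nRepeats];
        for (int r = 0; r < nRepeats; r++) {
            estimates[r] = estimateD43(nCircles, mu, sigma);
        }
        Arrays.sort(estimates);
        // Empirical 68% interval: 16th to 84th percentile of the repeated estimates.
        double lo = estimates[(int) (0.16 * nRepeats)];
        double hi = estimates[(int) (0.84 * nRepeats)];
        System.out.printf("68%% interval for D4,3 with n=%d circles: [%.3f, %.3f]%n",
                          nCircles, lo, hi);
    }
}
```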
Sensitivity to ignoring the smallest circular cross sections
The smallest cross-sections in a sample may fall below the limit of resolution in an image, so it is important to know how sensitive are the statistics we propose to the omission of the small tail of the circle distribution. Figure 4 shows similar data to Fig. 3, but where some of the small circles have been omitted from the statistics, specifically, all those circles with diameters less than some fraction α of . We see that estimates of itself are not affected materially even when α is as large as 0.2, while estimates for are a little more sensitive, and require for there to be no measurable effect. This lack of sensitivity, of both the volume-weighted and area-weighted averages, to an under-representation of the smallest particles is unsurprising. Therefore for the statistics we propose here (, and their ratio), a sensible rule of thumb would be to check post hoc that all circles larger than one fifth of the calculated value of have been included in the averages, and preferably all those larger than one tenth of .
Does the procedure work for cubic crystals?
For non-spherical particles, we define the equivalent diameter of a particle as the diameter of a sphere that has the same volume. Similarly for the cross-sections, we define the equivalent circle diameter as the diameter of a circle with the same area as the cross-section of the particle (which for a cubic particle will be a polygon with 3 to 6 sides; see Higgins (1994) and Morgan and Jerram (2006) for examples).
Figure 5 shows the errors incurred when the procedure proposed in this paper (derived for spheres) is applied to a random distribution of randomly oriented cubes. While the errors do not die out as the number of measured sections grows, and never fall below ∼3%, this procedure can indeed be used to obtain reasonable estimates of the equivalent mean diameters for the cube population.
We note however that although values for the equivalent D4,3, D3,2 and their ratio are well predicted, it would not be appropriate to apply the sphere result S = 6ϕ/D3,2 to estimate the specific surface area in the system. The correct expression for cubes results in a surface area about 24% higher (the surface area of a cube being 24% higher than a sphere of the same volume). Instead, the standard stereological method (Russ, 1986), in which the specific surface area is 4/π times the perimeter per unit area in the cross-section, would be the appropriate method for this statistic.
Application to gravitational settling of a polydisperse grain population
An almost universal process occurring during the solidification of basaltic magma is the relative movement of crystals and residual liquid under the influence of gravity. It is this process which is the fundamental driver for fractionation. Here we discuss a simple treatment of settling under gravity of an initially dilute suspension of crystals entrained in a basaltic magma. If we assume that emplacement of the crystal-bearing magma is essentially instantaneous, and that the crystal-bearing magma has a Newtonian rheology, the initial buildup rate of the thickness of the layer of crystals on the magma chamber floor for a dilute suspension of polydisperse spheres in a non-convecting Newtonian liquid can be calculated as follows:
As settling proceeds, the suspension becomes less dilute near the floor and the particles become closer together. During the final stages of sedimentation therefore, Stokes' Law no longer holds and sedimentation rates become controlled more by the permeability of the particle accumulation.
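As a rough illustration of this kind of order-of-magnitude settling calculation (not the paper's polydisperse expression, which is not reproduced in this extract), the sketch below applies the single-particle Stokes velocity v = Δρ g D² / (18 μ) and asks how long a grain takes to fall through a given height of magma. All parameter values are illustrative assumptions, not values taken from the paper.

```java
// Order-of-magnitude sketch of Stokes settling for a single spherical grain.
// All numerical values below are illustrative assumptions, not the paper's inputs.
public class StokesSettlingSketch {

    // Stokes settling velocity (m/s) for a sphere of diameter D in a Newtonian liquid.
    static double stokesVelocity(double deltaRho, double diameter, double viscosity) {
        final double g = 9.81;                              // m/s^2
        return deltaRho * g * diameter * diameter / (18.0 * viscosity);
    }

    public static void main(String[] args) {
        double deltaRho = 600.0;      // kg/m^3, assumed olivine-melt density contrast
        double viscosity = 50.0;      // Pa s, assumed magma viscosity
        double diameter = 2.0e-3;     // m, assumed 2 mm grain diameter
        double fallHeight = 90.0;     // m, assumed distance fallen within the sill

        double v = stokesVelocity(deltaRho, diameter, viscosity);
        double tSeconds = fallHeight / v;
        System.out.printf("v = %.2e m/s, time to fall %.0f m = %.1f days%n",
                          v, fallHeight, tSeconds / 86400.0);
    }
}
```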
Olivine settling in the Shiant Isles Main Sill
To illustrate our treatment of settling timescales using average particle size we focus on the Shiant Isles Main Sill, which is the largest of the four separate Tertiary alkaline basalt sills exposed on the Shiant Isles (Outer Hebrides, Scotland), and intruded into Jurassic sediments (Gibb and Henderson, 1984). The Shiant Isles Main Sill is 165 m thick (Gibb and Henderson, 1984) and is a composite body (Drever and Johnston, 1959; Gibb and Henderson, 1989, 1996). The bulk of the sill (135 m of stratigraphy) formed from a single pulse of olivine-phyric magma that contained 10 vol.% olivine phenocrysts together with 1–2 wt.% Cr-spinel and a small amount of plagioclase (Gibb and Henderson, 2006). The olivine phenocrysts settled to the (contemporary) floor of the intrusion to form a picrodolerite ∼45 m thick, leaving an essentially aphyric magma that crystallized to form the remainder (a crinanite, dominated by plagioclase and augite, with interstitial olivine, Fe-Ti oxides and analcime). Here we concentrate on the olivine accumulation that forms the picrodolerite. The olivine grains are generally equant and rounded, commonly forming clusters and loose chains in which the individual grains are joined by small areas of grain boundary.
The smallest and largest equivalent diameters in the sample are and .
The corrected olivine mode in SC459 is lower than the 56–54 vol.% expected for a random loose packing of cohesionless monodisperse spheres (Onoda and Liniger, 1990; Ciamarra and Coniglio, 2008; Zamponi, 2008; Farrell et al., 2010), and lower still than random loose packings achieved for polydisperse particles (Epstein and Young, 1962; Jerram et al., 2003), suggesting that olivine was not the only phase settling from the incoming magma. However, the efficiency of random loose packing is reduced for strongly cohesive particles, for which a stable distribution can be achieved at lower volume fractions (Dong et al., 2006; Yang et al., 2007). The presence of highly non-spherical, loose clustered chains of olivine will reduce this still further (Campbell et al., 1978; Jerram et al., 2003), with packings as low as 37 vol.% observed for settled accumulations of loose chains and clusters of olivine and magnetite (Campbell et al., 1978). If we assume that the crystal pile was not densified by compaction, shear or shaking, it is therefore plausible that the accumulated olivine grains in SC459 preserve a randomly loose packed, mechanically stable framework of crystals and loose crystal clusters and chains with an overall solid fraction of ∼45 vol.%.
Note that although the volume fraction of the settled olivine is ∼45 vol.% in the lower part of the picrodolerite, the olivine mode decreases upwards towards the crinanite (Gibb and Henderson, 1996). It is probable that this reduction is matched by an increase in the accumulation of another phase, such as plagioclase. The implications of this will be explored in a future contribution.
Mechanisms of droplet growth in phase-separating magmas
Silicate liquid immiscibility in basaltic systems, first recognized by Roedder and Weiblen (1971) (with significant further observations by Philpotts (1979, 1982)), is recognized increasingly as an important factor controlling fractionation and the compositions of erupted magmas (Veksler et al., 2007; Charlier et al., 2011). The potential for immiscibility to affect the liquid line of descent on the scale of a magma chamber depends on the ease with which the two conjugate liquids can separate under the influence of gravity (e.g. Holness et al., 2011; VanTongeren and Mathez, 2012) and this, in turn, is affected strongly by the coarseness of the emulsion (e.g. Chung and Mungall, 2009). At present almost nothing is known about the kinetics of emulsion coarsening (e.g. Martin and Kushiro, 1991; Veksler et al., 2010). However the size distribution of the droplets potentially carries information about the mechanisms of their formation and subsequent growth.
One possible scenario for emulsion formation is that there is an initial interval when many nuclei form, followed by evolution of the structure without further nucleation. This could happen, for example, with spinodal decomposition, or if the system is between the binodal and spinodal lines and there are many potential sites for heterogeneous nucleation. Once droplets have formed in such a system, they can grow by various mechanisms. In a non-convecting liquid, growth can happen either by Ostwald ripening, or through coalescence as droplets diffuse and collide under Brownian forces. Both of these mechanisms lead to a state where all the mean diameters grow as the cube root of time (Crist and Nesarikar, 1995), but each has a characteristic size distribution, which can be probed by moment-based averages. From the derivations in Appendix B, we see that Ostwald ripening and coalescence under Brownian forces each lead asymptotically to their own characteristic value of the ratio D4,3/D3,2.
We analysed the sizes of droplets in experimental charges described previously by Charlier and Grove (2012). The charges, containing material representative of compositions of tholeiitic basalts from the Sept Iles intrusion (charges SI-5, SI-8 and SI-13, Table 2) and Mull (charges M-5, M-6 and M-9, Table 2), were cooled at 1°C per hour from a starting temperature of 1100°C to a range of temperatures at which they were held for periods of up to 96 hours before quenching. At the end of each experiment the charges contained solid phases (detailed in Table 2 of Charlier and Grove (2012)) together with Si-rich glass containing exsolved quenched droplets of Fe-rich liquid (Fig. 7). Using back-scatter images we measured the diameters of isolated droplets in large regions of glass, avoiding those in direct contact with the mineral phases (thus avoiding droplets that may have nucleated heterogeneously on the mineral surfaces).
| Sample | n | Final T (°C) | Equilibration time (hours) | D4,3/µm | D3,2/µm | D4,3/D3,2 |
|---|---|---|---|---|---|---|
| SI-5 | 282 | 1006 | 96 | 3.94 ± 1.54 | 2.22 ± 0.87 | 1.77 ± 0.69 |
| SI-8 | 484 | 963 | 48 | 2.26 ± 0.30 | 1.59 ± 0.21 | 1.42 ± 0.19 |
| SI-13 | 299 | 1020 | 64 | 1.36 ± 0.04 | 1.28 ± 0.04 | 1.06 ± 0.03 |
| M-5 | 3205 | 1005 | 92 | 2.42 ± 0.10 | 1.80 ± 0.07 | 1.35 ± 0.05 |
| M-6 | 621 | 963 | 48 | 1.85 ± 0.11 | 1.51 ± 0.09 | 1.23 ± 0.07 |
| M-9 | 2017 | 1020 | 64 | 2.22 ± 0.09 | 1.75 ± 0.07 | 1.27 ± 0.05 |
The number of droplets measured is n. The equilibration time gives the time for which the charge was held at the final temperature after having been cooled from a starting temperature of 1100°C at 1°C/hr. Charges SI-5, SI-8 and SI-13 have a bulk composition identical to that of a dyke cutting the Sept Iles intrusion and charges M-5, M-6 and M-9 have a bulk composition typical of an intermediate basalt from the Mull Tertiary volcano (for further details see Charlier and Grove (2012)).
We find values of D4,3/D3,2 between 1.06 and 1.8, implying a size distribution in all cases except one (and including the estimated uncertainty) significantly broader than either of these two mechanisms would predict. We interpret this discrepancy as evidence for continuous nucleation while the existing droplets are ripening, consistent with the design of the experiments in which the temperature was reduced at a steady rate into the binodal.
Conclusions
Moment-based methods for particle size characterization provide a simple way to describe a population of (sub-)spherical particles (crystals, sedimentary clasts, emulsion droplets or bubbles), and have the advantage that exact results allow the different averages of the three-dimensional population to be deduced (with estimated error bars) from two-dimensional sections. Which of the mean diameters to use depends on the phenomena of interest, but we suggest that D4,3 and D3,2 give a good first characterization of the population (including the spread of diameters), while other moment-based means can be useful for accumulations of sedimenting grains or rising bubbles. Their ratio is a measure of the width of the size distribution, and this can carry information about growth mechanisms of inclusions.
Acknowledgements
We thank Fergus Gibb and Michael Henderson for their help to access material from the Shiant Isles Main Sill. We acknowledge the loan of Shiant samples from the British Geological Survey and thank Michael Togher for his helpfulness and efficiency at facilitating the loans. We are grateful to Bernard Charlier for providing images of his experimental charges, and for the loan of the charges themselves to enable us to create further images. Comments from two anonymous reviewers greatly improved the manuscript. V.C.H. is supported by a Natural Environment Research Council studentship. M.B.H. acknowledges support from the Natural Environment Research Council [grant number NE/J021520/1].
References
Appendix A: Solution for the moments
In what follows, we shall use upper case symbols to refer to three-dimensional quantities, and lower case to refer to two-dimensional quantities. Suppose we have a random distribution of spheres in space, where there are N spheres per unit volume. Further, suppose that the fraction of the number of spheres that have diameters in the range D to D + dD is P(D) dD, so that the probability density function is normalized: ∫ P(D) dD = 1. We then imagine passing a plane section through this distribution, which produces an infinite collection of circular cross-sections. Let there be n such cross-sections per unit area of the plane, and let the fraction of the number of circles which have diameters between d and d + dd be p(d) dd, so that p(d) is also normalized: ∫ p(d) dd = 1.
A sphere will only intersect the plane if it lies in a volume close to the plane, in particular if the perpendicular distance y of the sphere centre from the plane is such that |y| < D/2. Thus the number of circles per unit area will be n = N⟨D⟩, where ⟨D⟩ = ∫ D P(D) dD is the number-weighted mean sphere diameter.
The present invention which is an assist device developed for applying an angiographic process on the arm relates to an angiography assist device suitable for the anatomy of the arms, having a pneumatic air sac on the wrist and elbow part, providing a lengthwise shortening movement, moving up and down.
The present invention can be used in angiographic processes applied with an iodine-based liquid that does not transmit X-rays, called an opaque substance. During an angiographic process, a needle must enter the vascular system of the patient. The inguinal artery can be selected for this process; however, this is uncomfortable for the patient. When the process is applied via the inguinal artery, local bleeds called hematomas occur more frequently. In order to overcome such problems, brachial arteries are increasingly selected. The difficulty with this selection is the need to work on an artery of relatively small size. Therefore, doctors applying the process are reluctant to perform angiography on the brachial artery.
In order to apply angiography on the brachial artery, the patient should lie on his/her back on the angiography device table. In the meanwhile his/her arm should be extending to the side in a straight position without being twisted on the elbow part.
In the state of the art, simple arm supports are used to support the arm of the patient during angiography. These supports are hard rectangular objects of the same length as the arm of the patient. While the patient is lying on his/her back, the arm rests on this object in a position extending to the side. Until now, these supports have not been designed ergonomically for the patient's arm. The outward physiological angle of the forearm has not been taken into consideration in their design. Likewise, they cannot provide the outward extension (dorsiflexion) movement of the hand necessary for applying the process. Considering that arm length and structure may vary depending on sex, age, build, etc., there is a need to adjust the length of the support placed under the arm during the process. However, the length of the arm support used in the state of the art cannot be shortened along its own axis.
The arm support apparatuses used in the state of the art lie parallel with the arm and cannot be moved up and down or along the head-to-toe direction of the table. Moreover, if the support ends up too high or too low relative to the patient's arm, then in order to move the support device back under the patient's arm, the patient has to stand up, lie down again, and assume the necessary position once more.
For the angiographic processes applied on the arm, the radial, brachial, or axillary arteries are selected. For an angiography process on the radial artery, the wrist of the patient should be stretched, with the hand extended backwards while the palm is open and facing upwards. In this position, the artery comes closer to the skin and therefore assumes a position more appropriate for the puncture into the artery. Therefore, within the state of the art, wrist rolls are used for this purpose. The roll is located between the support device and the wrist, and thus the wrist can assume a stretched position (FIG. 1).
In the case that the brachial artery present in the elbow is used, the elbow should be partially upheld. Therefore roll-like objects to keep the elbow in an upwards stable position are used.
The most important reason lying behind the unsuccessful angiographic operations that have been planned on the arm arteries is that the arm cannot be made to assume the desired position. In such a case, it becomes very difficult to puncture the artery with a needle. The need of being sterilized for the doctor during the operation prevents him/her from touching the patient directly, however in the systems according to the state of the art, the doctors may be obliged to intervene manually in order to get the arm of the patient in an appropriate position. This causes a microbial pollution because of doctor's direct touch on the patient.
Another important problem experienced within the systems according to the present invention is that the upholding apparatuses like a roll or etc. that are located under the wrist or elbow are stable. The change in the position of the patient, the displacement of the system with an arm movement, being in stable sizes and being discomfortingly hard are the problems experienced within the state of the art. When different positions or uprisings are needed while the operation is being carried out, as the patient is covered with a sterilized cloth, the operation is continued by the operation technician under the table.
DESCRIPTION OF THE PARTS
1. Hand-finger fixing apparatus
1.1. Middle block projection
1.2. Edge projection
2. Power and pneumatic pumping system
3. Solenoid valve
4. Bedding platform
5. Air hose
6. Flexible connection band
7. Pressure sensor
8. Wrist air bag
9. Elbow air bag
10. Hand placing platform
11. Housing support part
12. Connection means for housing support part
13. Arm support part
14. Level adjustment shaft part
15. Level adjustment fixing pin
16. Level adjustment bedding housing part
17. Arm support part movement pin
18. Arm extending shaft part
19. Fixing pin for arm extending part
20. Arm extending housing part
21. Hand fixing bands
22. Length adjustment apparatus
23. Level control apparatus
The operation device according to the present invention, which can be used for angiographic operations applied on the arms, is designed to suit the arm anatomy of the patients. The air sacs, which are inflated to allow easy needle access to the patient's arteries, can be controlled with a remote control so that the arm of the patient is brought into the appropriate position. The elbow is allowed to extend outwards physiologically, and the wrist is angled backwards so that the palm faces upwards.
For tall or short patients, so that differences in arm length do not affect the anatomical placement, the arm of the support device can be shortened or extended by the operator. Moreover, the angiography arm support device can be moved up and down parallel to the table axis, and it can also be rotated to the left or right in accordance with the patient's position.
Instead of the under-wrist or under-elbow supporting rolls or prostheses that may be needed to bring the radial or brachial arteries into an appropriate position under the skin, air bags that improve the comfort of the patient are used. Thanks to these, the wrist or elbow of the patient can be raised to the desired extent.
Within the present invention, as the angle and position of the arm of the patient can be changed by the user doctor when desired by means of the operation device, all of the problems experienced in the stable apparatuses used in the state of the art are eliminated. Therefore, the operation flaws emerging during the angiographic operations are prevented and the success ratio of the angiographic operations is increased.
Likewise, thanks to the control provided by the present invention, the hygiene problems (infection risk) that arise when the doctor touches the patient during the angiographic operation are prevented. Within the present invention, the doctor no longer needs to touch the patient to control the system or to adjust the arm position. The user can control the system, and therefore the position of the patient's arm, by means of a remote control.
Thanks to the operation device according to the present invention, the wrist and the elbow can be raised and positioned by means of air sacs. This raising and positioning process can be stopped whenever desired, and the desired adjustments can be made with millimetric sensitivity. Therefore the success ratio of the operation is increased, puncture of the artery is made easier, the artery can be cannulated with a needle more easily, the operation succeeds with fewer attempts, and the patient feels less pain.
As the system described within the present invention has anatomically more appropriate structure and configuration, the arm of the patient can stay on the system more easily. As the structure can change its lengthwise axis and as its size parallel to the angiography table can change, it is possible for the system to be adjusted to different arm lengths without requiring the patient to move. Therefore the comfort of the patient is improved.
An operation device according to the present invention which can be used for angiographic operations applied on the arm comprises a hand-finger fixing apparatus (1) which has been designed to be larger than the edge projections (1.2) of a preferably middle block projection (1.1) at the edge of the arm support part (13) or to form a bifurcated structure; at least one wrist air bag (8) located between preferably the arm support part (13) and hand-finger fixing apparatus (1); preferably a length adjustment apparatus (22) on the arm support part (13), which can change the length of the operation device in accordance with the arm length of the patient; an elbow air bag (9) located preferably on the rear part of preferably the length adjustment apparatus (22) on the arm support part (13); a housing support part (11) designed in a rectangular form connected to the rear part of preferably the arm support part (13) by means of a movement pin (17) of the arm support part; at least one level adjustment apparatus (23) located on the housing support part (11); a flexible connection band (6) connected to preferably the rear part of the housing support part (11) with a connection means (12) for housing support part; a power and a pneumatic pumping system (2) connected to the elbow air bag (9) and wrist air bag (8) by means of air pipes (5); and a remote control unit that can be used for operating the system.
The length adjustment apparatus (22) described above comprises preferably at least two arm extending housing parts (20) fixing the structure to the arm support part (13); arm extending part fixing pins (18) preferably located one for each arm extending housing part (20), which is designed in the form of a stick or projection fitted on the arm extending housing part (20); at least one arm extending shaft part (18) which can be fixed to the arm extending part fixing pins (18) or moved on the arm extending housing part (20), which extends in a parallel manner to the arm support part (13) plane and comprises indents appropriate for the dented or stick structure on the arm extending part fixing pin (19).
In the case that the user desires to extend the length of the arm support part (13), he/she moves the arm extending shaft part (18) to a direction parallel with the extending direction of the arm support part (13), and the arm extending shaft part (18) leaves the arm extending part fixing pin (19) which is holding it and therefore can extend the length of the arm support part (13) by moving. Likewise, when the user desires to shorten the length of the arm support part (13), he/she can manage to do so by changing the position of the arm extending shaft part (18).
The level adjusting apparatus (23) described above comprises preferably at least two level adjustment housing parts (16) fixing the structure to the housing support part (11); level adjusting fixing pin (15) preferably located one for each level adjusting housing part (16) designed in the form of a stick or a dent fixed on the level adjusting housing part (16); at least one level adjustment shaft part (14) comprising indents appropriate for the stick and dent structure on the level adjustment fixing pin (15) extending vertically to the plane of the housing support part (11), which can be fixed to the arm extending part fixing pins (18) or moved on the arm extending housing part (20).
Like in the length adjustment apparatus (22), the user can change the position of the level adjustment shaft part (14) on the horizontal plane, comprised in the level adjustment apparatus (23), and therefore he/she can change the position of the arm support part (13) on the horizontal plane as well.
When desired, a motor can be connected to the length adjustment apparatus (22) and to the level adjustment apparatus (23), so that the length adjustment apparatus (22) and the level adjustment apparatus (23) can be driven by an electric motor by means of a remote control.
An arm support part movement pin (17) is located between the housing support part (11) and the arm support part (13). Therefore the arm support part (13) can move independently from the housing support part (11) on the horizontal plane in a way that it takes the arm support part movement pin (17) as the center. As a result, when it is desired to change the inclination of the patient's arm on the horizontal plane, the arm support part (13) can be moved and adjusted.
The arm support part movement pin (17) is located on the bedding platform (4) extending as a flat body at the end part of the arm support part (13).
The power and pneumatic pumping system (2) described above comprises at least one solenoid valve (3) for each air pipe (5) regulating the air flow into the system, and at least one pressure sensor (7) for each air pipe (5) which can control the air inside the system and therefore control inflation of the wrist air bag (8) and the elbow air bag (9).
When the user desires to change the height of the patient's arm, he/she can change the pressure of the air inside the wrist air bag (8) and elbow air bag (9) either together or separately by using the remote control. Therefore the user can millimetrically adjust the position of the patient's arm for each coordinate axis.
A flexible connection band (6) connected to the housing support part (11) by means of a housing support part connection means (12) is connected on the rear part of the housing support part (11). Therefore the operation device can be mounted to the housing. This connection cuff can easily be removed. None of the structures that are used are radio-opaque. Therefore interference with the images obtained from the angiography device is prevented.
The hand-finger fixing apparatus (1) mentioned above is designed in a way that it will be bigger than preferably the edge projections (1.2) of the middle block projection (1.1) and it will form a bifurcated structure. The middle block projection (1.1) is inclined to hold the middle three fingers of the patient and to be proper for the palm structure of the patient, and therefore a hand-placing platform (10) is created within the structure. In order for the fingers of the patient to be fixed, the middle block projection (1.1) and the edge projections (1.2) comprise hand-fixing bands (21) on which the fingers can be fixed.
The air bags used within the present invention are UV stabilized, suitable for medical use, do not allow gas/air leakage, are resistant to high air pressures and high air volumes, are semi-compliant, are made of 300 μm medical polyurethane, and are rectangular. The structure is produced using a thermal welding method.
DESCRIPTION OF THE FIGURES
FIG. 1. The view of the support apparatus used within the state of the art
FIG. 2. The top view of the operation device
FIG. 3. The side view of the operation device
FIG. 4. The detailed view of the length adjustment apparatus
FIG. 5. The detailed view of the level controlling device
FIG. 6. The view of the operation device when it is connected to the arm
Some dude in Pittsburgh spent 42 hours alphabetizing every single word in 'Star Wars,' because why not?
As you'll see in the above 43-minute video, 'Star Wars: Episode IV -- A New Hope' with all of its words arranged in alphabetical order is actually a really annoying thing. From multiple instances of the word "harvest" to just one utterance of "suicide," everything flies by so fast you'd think you were on a mission to save a galaxy with a bunch of nonsense-sputtering freaks.
As Wired notes in a story about the man behind the project, the video's creator knows that what he did was pretty much a useless waste of time. But, you know, he had the time. So why not?
Murphy details how he put together 'ARST ARSW' (which is 'Star Wars' with the letters of each word arranged alphabetically). According to Wired, he basically used some familiar hacking programs to rearrange every word uttered onscreen. But first he needed to watch the movie and jot down every single word. In the end, he had an 11,000-word transcript in his hands.
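Wired doesn't spell out the exact tooling beyond that, and this certainly isn't the creator's pipeline, but the core sorting step is small enough to sketch; the transcript filename here is made up for the example.

```python
# Minimal sketch: read a transcript, pull out the words, and sort them
# alphabetically (case-insensitive). The filename is hypothetical.
import re

with open("new_hope_transcript.txt", encoding="utf-8") as f:
    text = f.read()

words = re.findall(r"[A-Za-z']+", text)   # crude word tokenizer
words.sort(key=str.lower)                 # "harvest" ... "suicide" ...

print(len(words), "words in total")
print(words[:10], "...", words[-10:])
```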
Then he inputted it all into software. A day or so later, he had the above video. It's all very technical. And way nerdy. And a total waste of time. But in a good way. Sorta. | https://kqvt.com/star-wars-words/ |
Saturday 25 October 1941. US date format: 10/25/1941, UK date format: 25/10/1941
It was Saturday, under the sign of Scorpio (see planets position on October 25, 1941). The US president was Franklin D. Roosevelt (Democratic).
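If you want to double-check the weekday yourself, Python's standard library will confirm it (purely illustrative):

```python
# Confirm which day of the week 25 October 1941 fell on.
from datetime import date

print(date(1941, 10, 25).strftime("%A"))  # prints: Saturday
```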
Famous people born on this day include Helen Reddy and Anne Tyler.
How Green Was My Valley, directed by John Ford, was one of the most viewed movies released in 1941.
But much more happened that day: find out below.
You can also have a look at the whole 1941 or at October 25 across the years.
Historical Events
Which were the important events of 25 October 1941 ?
Holidays:
- Day of the Romanian Army
- Virgin Islands - Thanksgiving Day : end of the hurricane season
- Republic of China - Taiwan Retrocession Day (1945)
- Also see October 25 (Eastern Orthodox liturgics)
- R.C. Saints - Feast day of Saints Crispin and Crispian; Six Welsh martyrs and companions; forty martyrs of England and Wales
- Kazakhstan - Republic Day
Famous Birthdays:
- Helen Reddy: Australian musician, a singer and actress who skyrocketed to superstardom in 1972 with her Grammy winning song, "I Am Woman," which became the anthem for the women's movement.
- Anne Tyler: American writer, highly acclaimed for novels that use witty colloquial dialogue to reveal the tensions and tragedies underlying everyday life in Baltimore and the small communities of the American South.
- Bobby Keetch: Soccer player / entrepreneur.
- Anna Tyler: Minneapolis Writer.
- Lynda Benglis: American sculptor and painter.
Facts:
- 16,000 Jews massacred in Odessa Ukraine
- Germany attacks Moscow
- Winston Churchill routes Forces South to SE Asia
What news were making the headlines those days in October 1941? | https://takemeback.to/25-October-1941 |
Exploring Our Solar System with Wolfram|Alpha
Wolfram|Alpha contains a wealth of astronomy data on many areas of our universe, such as objects within our solar system and in the deep sky, constellations, and computational astronomy, making it a handy resource for astronomers, students, and hobbyists. Some of the most intriguing space activity takes place right here at home, inside of our own solar system. Wolfram|Alpha makes computations and explores properties and locations for objects and events in our solar system, such as the sun, planets, planetary moons, minor planets, comets, eclipses, meteor showers, sunrise and sunset, and solstices and equinoxes. You can query any one of these objects or phenomena, and learn information such as their position in the sky relative to your location, size, or distance; their next occurrence; and much more.
Wolfram|Alpha automatically assumes your geographic location based on your IP address, which is handy when querying for the time and location of an upcoming sky event. For instance, a quick “lunar eclipse” query in Wolfram|Alpha tells us that, for our location in Champaign, Illinois, the next one will occur on August 5, 2009 at 7:38pm U.S. Central Daylight Time and will be penumbral, which means the moon will enter the Earth’s penumbra (the outer part of its shadow), resulting in an apparent darkening of the moon. A penumbral eclipse is often hard to see because the penumbra isn’t very dark.
Wolfram|Alpha can also provide interesting facts about distances, temperatures, and dimensions of objects in our solar system that are specific to the time of day and your location. What is unique about querying Wolfram|Alpha for an object’s distance is that the distance is returned in real time, based on where the Earth is in its orbit. A textbook can only provide an average distance. For instance, at the time this post was written, Wolfram|Alpha reported that the sun was approximately 1.015 astronomical units (94.35 million miles) from Earth—enter “Sun” to see its current distance.
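If you want to reproduce this kind of real-time distance query outside Wolfram|Alpha, here is a rough sketch using the open-source astropy package; it is not how Wolfram|Alpha does it, and astropy's built-in ephemeris is only approximate.

```python
# Rough equivalent of the "Sun" / "Mars" distance queries: geocentric
# distances at the current moment, using astropy's built-in ephemeris.
from astropy.time import Time
from astropy.coordinates import get_sun, get_body
import astropy.units as u

now = Time.now()

sun = get_sun(now)              # geocentric (GCRS) position of the Sun
mars = get_body("mars", now)    # geocentric position of Mars

print("Earth-Sun distance: ", sun.distance.to(u.au))
print("Earth-Mars distance:", mars.distance.to(u.au))
```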
Wolfram|Alpha also reports plenty of less time-sensitive data about the sun, such as its apparent and absolute magnitude, spectral class, surface temperature, and mass.
In the coming weeks we will explore more interesting and useful astronomy data for stargazing, exploring deep-sky objects, and computational astronomy. Has anything interesting been happening in your night’s sky? You can connect with enthusiasts from around the world having this conversation at the Wolfram|Alpha Community site.
A feature hard to find somewhere else is to calculate the distance between planets at any date:
http://www18.wolframalpha.com/input/?i=distance+from+venus+to+mars+23+oct+2008
I want to give some other implementation suggestions:
Distance between two moons (it doesn’t work currently):
http://www18.wolframalpha.com/input/?i=distance+from+io+to+europa
Distance between planets (other than Earth) and space probes:
http://www18.wolframalpha.com/input/?i=distance+from+Mars+to+Pioneer+11
Plot of distances:
http://www18.wolframalpha.com/input/?i=plot+distance+from+Mars+to+Earth+from+1+jan+2009+to+31+dec+2009
Regards!
igo,
We have passed your great suggestions along to our astronomy team! Thank you for the feedback!
Someone wrote:
“A feature hard to find somewhere else is to calculate the distance between planets at any date”
Unfortunately, the ephemerides used in Wolfram|Alpha (and Mathematica??) are rather crude. For example, the position of the planet Jupiter can be wrong by a substantial fraction of a degree (e.g. on January 1, 1800, the position is wrong by over 28 minutes of arc). This would frequently correspond to errors of physical distances between planets greater than a million miles. It’s hard to see why the astronomical data in Alpha is so inaccurate. I’ve been doing this sort of thing for thirty years, and my best guess is that the algorithms are the fairly primitive ones published fifteen to twenty years ago when computing resources were memory-starved (probably one of the works of Jean Meeus?). The positions of the planets for several hundred years around the current date should never be wrong by more than a tenth of a second of arc.
-FER
I would like it to be able to make calculations with distributions, for example to solve the differential equation f'(x)=D(x), where D is the Dirac delta.
Thanks for the good tips and advice. (As I remember, "distance from Earth to Mars" was one of my first questions to Alpha 😉)
I'm very sorry, but where is my _last_ post about the "distance from Earth to Mars" query (this query works strangely)? (Aug 5, about a minute after my first comment). I saw it had passed moderation, but it disappeared a few minutes later…
Thanks for sharing this useful information about the solar system with Wolfram|Alpha. I am waiting to read your coming week's post.
Very nice.
I could tell you a similar story.
Will you look at the meteor shower tonight?
I read it will be a great show.
So, about my lost (2009-08-05) comment.
’twas about “distance from Earth to Mars” query.
In May it worked well. But now it seems, it doesn’t work!
Input: distance from Earth to Mars
Input interp: distance | from world to Mars,Pennsylvania,United States
Result: 5697 miles
Then go “unit conversions” and “direct travel times”.
And then goes the map. Hmm, I didn’t know (but I suspected it 😉 that “Earth” is GeoPosition[0,0].
But the main queston is: why the default interpretation for “Mars” is not-well-known American city, but not planet Mars?
The second question is: why, with such interpretation, I still see sidelink to “Mars” article in Wikipedia (yes, planet Mars)?
And of course, despite all of this, I still can learn about the distance from Earth to Mars. Query “distance from Earth to planet Mars” works very well 😀
Sorry for boring, but I hope I can help you in our great deal: making all the world knowledge computable 🙂
And one more.
I’ve just explored a fun bug in Alpha engine.
Making a typo in “distance from Earth to Mars” query and typing “distance rom Earth to Mars”, I’ve got some really _strange_ result. Namely:
Input: distance rom Earth to Mars
Note: Assuming multiplication | Use a list instead
Input interp: distance | from Automatic to Rome,Lazio,Italy Mars | distance from Earth
Result: (1.707 AU (astronomical units)) distance | from Automatic to Rome,Lazio,Italy
Oh, _what_ does it mean?
Aritaborian,
Our team is looking into the linguistics issues that are occurring in these examples. We appreciate your feedback! | https://blog.wolframalpha.com/2009/08/03/exploring-our-solar-system-with-wolframalpha/ |
Brutal Weather Sets Roadside Assistance Record
Requests to AAA Mid-Atlantic for roadside assistance climbed to record numbers in January as motorists were left crippled by snow and batteries couldn't handle the frigid temperatures.
The auto club, which serves four million members in a handful of states, saw its emergency roadside assistance volume top 222,000, shattering the previous record month of December 2010 by nearly 16,000.
Up from an average volume of 5,900 calls per day last January, last month's average daily number hit nearly 7,200.
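As a rough consistency check (assuming a 31-day January): 222,000 calls ÷ 31 days ≈ 7,160 calls per day, which lines up with the "nearly 7,200" daily average quoted above.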
“With February picking up where January left off, drivers should make sure their vehicles are prepared with proper levels of antifreeze, a strong battery, and plenty of windshield washer fluid,” said Tracy Noble, spokesperson for AAA Mid-Atlantic. “Also keep an emergency kit in the trunk should you run into any problems during your commute. Careful preparation is the key to weathering the wrath of Old Man Winter.”
The top three reasons for roadside assistance: | http://nj1015.com/brutal-weather-sets-roadside-assistance-record/ |
Most readers are aware that rationing was an integral part of life in the Second World War and afterwards, but it is less well known that it also occurred in the previous war, though not nearly so stringently and to some extent, voluntarily.
The following is a copy of a cutting found in the recipe book of one of my great-grandmothers and from an advertisement on the back, it can be dated to early 1917. My only recollection of the rationing of this period being mentioned is that my maternal grandmother, her daughter, did not like having to cook with margarine instead of butter! Presumably the quantities given below were for a family with two parents, two children and two servants.
Susan Miller member 477
The State Ration – Meals for a week
At the National Training School of Cookery careful and exhaustive trials have been made as to the actual possibilities of the voluntary rations for households of varying numbers and means. The results given here have been specially compiled for a family of the professional and middle classes, and have been worked out with a view of being of general assistance.
Bread and Flour
- One loaf of 2lb weight needs 1½lb of flour. Lord Devonport allows 3lb flour per head per week.
- In the menus below three loaves of 2lb each a week are allowed for two people, giving 2¼lb of flour made into 1½ loaves, and ¾lb flour for puddings, meatless dishes, soups, sauces &c., for each person (the arithmetic is checked in the note after this list)
- Bread cannot be spared for luncheons or dinners, unless none is eaten at tea time
- Baking at home may be done on Wednesdays and Saturdays
- New bread must not be eaten
- All bought bread must be weighed
- Flour may be saved by the use of fine oatmeal for sauces, and in suet crusts, soups, &c., and of medium oatmeal in bread (4oz to 1½lb flour is a good proportion)
- Cornflour or arrowroot may be used for sauces to save flour
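A worked check of the flour arithmetic above: three 2lb loaves take 3 x 1½lb = 4½lb of flour for two people, i.e. 2¼lb per person baked into 1½ loaves each; since the allowance is 3lb of flour per head per week, that leaves 3 - 2¼ = ¾lb per person for puddings, meatless dishes, soups and sauces.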
Meat
- Suet is extra to the meat allowance, and also bones bought apart from meat for stock
- If bones are bought with meat as in a leg of mutton they count in the meat allowance
- Most meats should be boned before cooking, in order to use the bones to their full advantage, e.g. ribs of beef, shoulder of mutton
- During cooking meat loses weight, about 4 oz to the 1¼lb, but much more if badly cooked
- Good carving and careful serving help to make food go as far as possible
- Meat must be eaten at lunch or dinner only, and there should be at least one meatless day a week.
Suggested weekly menu
Sunday
Breakfast – Porridge, scrambled eggs, bread and butter, marmalade. Meat 8lb
Dinner – Roast leg of mutton (8lb), two vegetables, jam tart (¾lb flour) Flour 1¾lb
Tea – Cake (1lb flour) to last the week, bread and butter, jam
Supper – Artichoke soup, rice and cheese, stewed figs
Monday
Breakfast – Porridge and sausage, bread and butter, marmalade or honey Meat 1lb
Lunch – Curried vegetables (pulse and fresh), two slices of mutton hashes; for the children, orange jellies. Flour none
Tea – Bread and butter, jam
Dinner – Maize and tomato soup, cold mutton, potatoes in jackets, fruit, (apples and bananas). Children to have the fruit and soup.
Tuesday
Breakfast – Porridge, Bacon (6oz). Bread and butter and honey (2oz flour) Meat 6oz
Lunch – Purée d’Artois.* Spiced apple tart (12 oz flour). Fish and tart for the children. Flour 14oz
Tea – Bread and butter and cake
Dinner – Shepherd’s pie. Vegetables. Chocolate mould.
Wednesday (meatless day)
Breakfast – Porridge. Fish cakes. Bread and butter
Lunch – Braised onions. Milk pudding. Oat cake (1½oz flour). Cheese. Bananas. For the children – Poached eggs, pudding and fruit. Meat none. Flour 10oz
Tea – Bread and butter and jam. Scones (8oz).
Dinner – Chestnut soup. Egyptian pie (½oz flour). Celery or any seasonable vegetable. Riz Imperatrice. **
Thursday
Breakfast – Porridge, boiled eggs, bread, butter marmalade. Meat 2lb
Lunch – Gravy soup (stock made from fresh bones and the bone of the leg of mutton). Semolina croquets and tomato sauce. Sussex pudding. Flour ½lb
Tea – Bread and butter and cake
Supper – Liver and bacon (1½lb liver, ½lb bacon), two vegetables. Orange pudding (8oz flour).
Friday (Fish Day)
Breakfast – Porridge, herrings (not for the children). Bread and butter and honey. Meat none
Lunch – Cheese and potato pie. Baked apples and junket. For the children herrings and junket.
Tea – Cornflour cakes (2oz flour). Bread and butter.
Supper – Lentil soup. Salmon kedgeree (smoked salmon or any fish that is cheap). Artichokes au gratin.
Saturday
Breakfast – Porridge and brawn (1lb), bread and butter and marmalade. Meat 3½lb
Lunch – Celery and beetroot soup. Apricot sea-pie. For the children chops and pudding.
Tea – Bread and butter, cake, jam.
Dinner – Stewed steak and vegetables (1½lb meat). Fruit salad. Cheese soufflés (1oz flour).
The allowance of sugar is ample and therefore has not been calculated.
N.B. – Luncheons and dinners may be reversed and termed dinners and suppers, in which case, catering for the children would be simplified. When not mentioned the children may have one or at most two of the dishes chosen for dinner. The children should have their heaviest meal in the middle of the day.
*Purée d’Artois – soup made with Artois potatoes
**Riz Imperatrice – Rice cooked in sweetened milk, to which is added gelatine & whipped cream or egg custard, then poured into a border mould. When turned out, the hollow centre is filled with fruit salad or crystallised fruit and whipped cream.
Editor – my own mother was born during the First World War, the daughter of a coal miner. Coal miners were exempt from compulsory military service so all of my immediate coal mining ancestors survived both the First and the Second World Wars. I know that they ate nothing like the amount, quality nor quantity of what is listed above. Indeed being one of nine children, two parents and no servants, my mother would have thought this ‘rationing’ would have been a veritable feast. ‘Ribs of Beef’ ‘Shoulders of Mutton’ ‘whipped cream’ ‘fruit salad’… if only. | https://www.gwsfhs.org.uk/2015/03/01/rationing-in-world-war-i/ |
Distance: 1.65 miles, but varying distances are available as there are many different paths and routes.
Time of Year: February 2015
Cost: Parking and entry are free
Parking: The main car park for Park Hall is at the visitor centre at a different location, but we used the Bolton Gate car park (no tarmac and some potholes) which is just off the A520 Leek road. The nearest postcode is ST3 6QD.
Paths and Accessibility: The paths can be muddy, but if you keep to the main paths in better weather, you could get a pushchair around. There are lots of different paths so you can take numerous different routes and there are coloured routes which you can follow if you wish. There are no steps, but paths can be steep in parts.
General Information: Park Hall has lots of interesting landscape from canyons to woodland areas, meadows and open fields. There are many different paths for kids to explore and adventure, which are less accessible. On this particular walk, we were locating some geocaches hidden in the country park, so our route is erratic in parts! It's advisable not to follow the route that we took (uploaded above). Park Hall is popular with dog walkers and dogs are often off lead, so it's an ideal location if you have a dog with you but not ideal if this causes you concern.
Amenities: There is no adjoining cafe at Park Hall and the visitor centre is no longer there, but there are toilets next to the main car park.
Walked and submitted by Jenny, Jorja (7) and Dexter (5)
For more information about walks in this location, see the leaflet and details from Closer to Home walks here. For more information on geocaching, go to the website here. | http://photowalknetwork.org/photowalks/2016/6/10/park-hall-country-park-photowalk-stoke-on-trent-staffordshire |
Enid Blyton's much-loved classic series, packed full of adventure and mystery.
Philip, Dinah, Lucy-Ann and Jack are not pleased when the wimpish Gustavus has to come with them on holiday. Even Kiki the parrot dislikes him! But when Gustavus is kidnapped along with Philip, Dinah and Lucy-Ann, Jack bravely follows them to a faraway country and unravels a plot to kill the king ...
Perfect for fans of the Famous Five looking for their next adventure.
(P) 2018 Hodder Children's Books
Enid Blyton's books have sold over 500 million copies and have been translated into other languages more often than any other children's author.
She wrote over 700 books and about 2,000 short stories, including favourites such as The Famous Five,The Secret Seven, The Magic Faraway Tree, Malory Towers and Noddy.
Born in London in 1897, Enid lived much of her life in Buckinghamshire and adored dogs, gardening and the countryside. She died in 1968 but remains one of the world's best-loved storytellers. | https://www.hachette.com.au/enid-blyton/the-circus-of-adventure |
Learning, in acting as a basis for our personal choices, decisions and actions, determines the difference between what we become and do not become. It helps to shape the inarticulate hum of the universal consciousness, present in all that exists and in each of us, into a unique voice, raising, for a few beats, a unique song amidst the ever performing orchestra of life.
Without learning, personality and individuality would not be possible. It is the prism in which one cold ray of light is broken into a sunset of diversity; one dark void of monochrome made into an opalescent stardust-beach of individuals washed by the sighing waves of time. Without it, we would not be able to distinguish us from others and the world would be a palace of mirrors reflecting only its own emptiness.
Because pairs and harmony can only be made from once broken fragments, learning, as the source of individuality and diversity, is the source of love and friendship and of all that gives life its beautiful vulnerability. As the source of such deep-reaching mutual attachment, learning is also the source of society and of our desire for humanity’s continued coherent existence beyond our own life.
Having such a strong power to determine the course of our lives, learning is the source of the only freedom that is unconditional and absolute. It offers shelter from the fact that we did not ask for our own creation and that we are in essence determined from without and not from within. Learning offers freedom from the unbending impossibilities of fate itself.
Learning means much more than the memorisation of items belonging to a certain subject matter. It means to engage with the very will of the universe to evolve; to create the unexpected; to create at all. Learning means to interact with the force that is at the root of all things. Without learning, time itself would not exist, since nothing would melt its glacial monotony for its trickle to be heard.
All this has, since antiquity, contributed to establish teachers as men and women who deserve respect because they devote their time and their lives to help others become meaningful individuals. Today, in a world where commodities are obtained in return for their equal value in money, it is too often forgotten that although teachers are paid they can never really be paid back for what they give.
This short article is to remind all students and all who employ teachers that the payment they make to a teacher is only a token of their respect, not a remuneration in the modern sense of the word. When asking about the true value of instruction one asks about the true value of one’s own unique personality. And who can ever hope to answer such a question?
When asking about the value of instruction one eventually asks about the value of time, and, although everybody knows that time is money, no one has yet come up with an accurate valuation of what lies between the on and off of the separators of our phone’s or computer’s timekeeper – the wand of the conductor of the ever performing orchestra of life.
Silvio Zinsstag,
teacher for ancient languages. | https://www.zabaan.com/blog/teachers-for-everything-else-there-is-mastercard/ |
in my free time and its awesome. There are so many things to learn in Uminto.com. My area of Specialization is IT - Software / Software Services. I used to play games and hourly quizzes daily on Uminto.com.
Basic Information:
Date of Birth: Tuesday, April 23, 1991 (23 Years)
Gender: Male
Contact Information:
Mobile Number: 78######36
Email Address: md••••••••@•••••••.com
Location: Chidambaram, India
Pincode: 608001
Uminto Activities of mohamed haaris
Credited Rewards: 14433
Debited Rewards: 400
Current Rewards: 14033
Quiz Played: 0
Quiz Score: 0
Movie Review Posted: 10
Jupiter Money Level: 2
Jupiter Money Score: 795
Uminto Flip Level: 2
Uminto Flip Score: 15
The sample is recrystallized using 20mL of ethanol. It is boiled with 20mL of ethanol, filtered by gravity and then cooled in ice and filtered by suction. How much compound A should be obtained as the final product?
Homework Equations
Percent recovery = pure mass / crude mass x 100
The Attempt at a Solution
0.53 g/mL. 20 mL of ethanol. totals to 10.4 g.
2.5 g is insoluble due to Impurity C
Honestly, I do not know where to start as it feels like I am missing one piece of essential information. | https://www.physicsforums.com/threads/recrystallization-finding-the-amount-of-the-final-product-of-cmpd-a.714708/ |
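Purely as an illustration of the recovery calculation the recrystallization question above is asking for: the quoted text omits the starting amount of compound A and its cold-solvent solubility, so the numbers below are hypothetical placeholders that show the method, not the answer.

```python
# Hypothetical numbers only: they illustrate the method, not the answer.
crude_mass_A    = 10.0   # g of compound A dissolved in the hot ethanol
solubility_cold = 0.05   # g of A that stays dissolved per mL of ice-cold ethanol
volume_ethanol  = 20.0   # mL, from the problem statement

lost_to_solvent = solubility_cold * volume_ethanol   # A still dissolved when cold
recovered_A     = crude_mass_A - lost_to_solvent     # A collected on the filter

percent_recovery = recovered_A / crude_mass_A * 100
print(f"recovered {recovered_A:.1f} g ({percent_recovery:.0f}% recovery)")
```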
I cringed when I saw J.R. and Bobby’s instant-messaging exchange in “Blame Game.” This episode was filmed after Larry Hagman’s death last fall, so I’m guessing the producers created the sequence using leftover footage of the actor. (J.R.’s presence during Ann’s sentencing appears to be recycled too.) I’m all for rescuing Hagman from the cutting room floor, but having J.R. send IMs to pester Bobby into watching an online video of a basketball-playing dog? That felt silly. It also reminded me of how the old show used one-sided telephone conversations to keep Jock around after Jim Davis died, which is one “Dallas” tradition I’d just as soon not continue.
By the end of “Blame Game,” though, I had a change of heart. I’m not sure why the show had Patrick Duffy shout Bobby’s responses to J.R.’s instant messages (even if J.R. was supposed to be down the hall, couldn’t Bobby have typed his answers?), but the revelation that the viral video was really a Trojan horse to erase Bobby’s notorious cloud drive was pretty nifty. J.R. pulled a fast one on Bobby, and “Dallas” pulled a fast one on its audience. I always fall for this show’s fake-outs, which either means I’m really gullible or the people who make the show are really clever. I’ll let you decide.
Overall, “Blame Game” is another solid hour of “Dallas.” The script comes from Gail Gilchriest, who also wrote last season’s “The Enemy of My Enemy,” the episode that brought Sue Ellen off the sidelines and got her involved in the Southfork oil saga. In “Blame Game,” Gilchriest once again demonstrates a knack for writing for “Dallas’s” first lady, giving Linda Gray some of her best material yet. I love the scene where Sue Ellen and Bobby lament the rivalry between their sons, as well as the jailhouse pep talk Sue Ellen gives Ann. The friendship between these women has become one of my favorite relationships on the show. It feels believable, especially now that we know that Ann, like Sue Ellen, was once a less-than-perfect wife and mother. (As far as Ann’s release on probation: Yes, it’s a little convenient, but when has a Ewing ever gone to jail and stayed?)
I wish Sue Ellen hadn’t been so easily manipulated by John Ross into seizing Elena’s share of Ewing Energies, but I don’t really mind because it’s so much fun to see the return of the shrewd, bitchy Sue Ellen from the late ’80s. With J.R. exiting the stage, Sue Ellen is now poised to succeed him as John Ross’s mentor and the thorn in Bobby’s side. What a tantalizing prospect. Hopefully this will cement Gray’s place in the narrative for a long time to come. Likewise, I’m thrilled to see Pamela finally snag her piece of the company. Think about how entertaining the Ewing Energies’ board meetings will be once Sue Ellen and Pamela join the fray.
“Blame Game’s” other V.I.P.: Jesse Metcalfe, who has quietly become one of the new “Dallas’s” best performers. The actor has found the right balance between strength and sensitivity, much like Duffy did during the original series. I also like how Christopher has succeeded Bobby as “Dallas’s” resident action hero. In “Blame Game,” Christopher makes a valiant attempt to turn the tables on the thug holding him at gunpoint at Ewing Energies. Later, he shields Elena when Vicente points his gun at her. Jesse Bochco does a nice job directing both sequences, and he gets a big assist from “Dallas” composer Rob Cairns, whose score during the showdown with Vicente feels even more cinematic than usual.
It’s also nice to see Kuno Becker’s Drew Ramos take down Vicente, although the body count on this show is beginning to trouble me. During the past 10 hours of “Dallas,” Marta, Tommy, Frank and Vicente have died; Harris was gunned down but survived. On a lighter note, since Becker arrived a few episodes ago, I find myself looking forward each week to his scenes with Jordana Brewster. Drew brings out Elena’s feistiness in a way only a sibling could. Do I dare suggest these two are “Dallas’s” best brother/sister act since Victoria Principal and Ken Kercheval?
The rest of the “Blame Game” hostage crisis yields mixed feelings. In addition to the Sue Ellen/Bobby scene, I like the moment when Vicente realizes the Ewing cousins have traded romantic partners since his last encounter with them. (“You Ewing boys share after all! I love it!”) Likewise, it’s impossible to not cheer when John Ross and Christopher come together to overpower Vicente’s henchmen. As much fun as it is to see the Ewings squabble, it’s always more satisfying when they band together.
My gripes: The hostage sequences are too compressed. “Blame Game” invites comparisons to the classic “Winds of Vengeance,” an early “Dallas” episode where the Ewings are held hostage. (Fans of “Dallas” producer Cynthia Cidre’s previous series, “Cane,” will recall that show did a family-held-hostage episode too.) But the reason “Winds of Vengeance” succeeds is because the slower pace of 1970s television allowed the tension to build steadily. “Blame Game” squeezes its crisis into a roughly 15-minute period, and some of that time is taken up by Ann’s sentencing.
This is also one of those times I wish the new “Dallas’s” Southfork interiors more closely resembled those seen on the old show. The living room where the Ewings are held captive in “Blame Game” looks nothing like the one where the “Winds of Vengeance” hostage crisis unfolds. The only time you feel the history of this house is when you see it from the outside.
Of course, it’s not like I haven’t become attached to the new Southfork set too. The “Blame Game” scene where Bobby bursts into J.R.’s bedroom and finds it empty is surprisingly poignant. The brief glimpse of J.R.’s empty table is what moves me. This is where our hero glanced at Miss Ellie’s picture before signing over the Southfork deed to Bobby last season. It’s where he told John Ross to never take advantage of the family when they’re in the trouble, and where he learned to use his tablet. How sad to think we’ll never see him sit there again.
Grade: B
_______________________________________________________________________________________________________________________________________________
‘BLAME GAME’
Season 2, Episode 6
Telecast: February 25, 2013
Writer: Gail Gilchriest
Director: Jesse Bochco
Audience: 2.6 million viewers on February 25
Synopsis: During mediation, Christopher agrees to give 10 percent of Ewing Energies to Pamela, who refuses to share it with John Ross. J.R. erases Bobby’s cloud drive and leaves Southfork unannounced. When Vicente stages an ambush on Southfork and tries to kidnap Elena, Drew shoots and kills him. Sue Ellen uses the morals clause in Elena’s contract to seize her shares in the company.
Cast: Kuno Becker (Drew Ramos), Emma Bell (Emma Brown), Carlos Bernard (Vicente Cano), Pablo Bracho (consul general), Jordana Brewster (Elena Ramos), Jesse Campos (Jose), Vanessa Cedotal (District Attorney), Damon Dayoub (Vicente’s henchman), Patrick Duffy (Bobby Ewing), Julie Gonzalo (Pamela Barnes), Linda Gray (Sue Ellen Ewing), Larry Hagman (J.R. Ewing), Josh Henderson (John Ross Ewing), Jason Kravitz (Pamela’s lawyer), Judith Light (Judith Ryland), Jesse Metcalfe (Christopher Ewing), Glenn Morshower (Lou Bergen), Mitch Pileggi (Harris Ryland), Freddie Poole (Ramon), Krishna Smitha (Shireen Patel), Brenda Strong (Ann Ewing), Rebekah Turner (Jury Forman), Wilbur Fitzgerald (Judge Wallace Tate)
“Blame Game” is available at DallasTNT.com, Amazon.com and iTunes. Watch the episode and share your comments below. | https://dallasdecoder.com/2013/02/28/critique-dallas-episode-16-blame-game/ |
These ornament storage baskets are suitable for storing books, arts and crafts, office supplies, towels, winter clothes and so on, or serve as DVD cases on shelves, clothes in closets or pantries, toy baskets for kids, etc.
Specifications
Basket measures 15.7(L) x 11.8(W) x 8.3(H) inches
Quantity: Set of 3
A suitable size for most uses, such as in the closet, on a table or in the office, for storing toys, books, CDs, clothes and underwear.
Jute / Cotton Blend Exterior
The material is made of thick fabric that keeps its shape even when empty, which helps this storage basket last a good long time.
Linen Lining
Lined with a thin beige muslin fabric; it is easy to clean, just wipe with a damp sponge or cloth.
Rope Handle
Cotton rope handles for easy slide in and pull out of shelves or closet. | https://thewarmhome.com/thewarmhome-storage-basket-with-sturdy-rod-collapsible-storage-bins-set-works-as-baby-basket-toy-storage-nursery-baskets-pink-3-pack-p0015-p0015.html |
Related Policies:
Related Forms, Procedures and References: Tuition Refund Schedules
For Questions Contact: Office of Student Accounts| 230 Derham | 651.690.6503 | [email protected]
Students are financially responsible for every course in which they register. The amount of tuition refunded for a dropped/withdrawn course is established by the deadlines found in the Summary of Financial Procedures (SFP). The SFP can be found on the Student Accounts website. Students are expected to read this publication and adhere to published deadlines.
College for Women Banded Tuition Refund Information
College for Women students who enroll or re-enroll at St. Kate's Fall 2021 or later are assessed tuition under the banded rate model: they are charged the banded tuition rate if enrolled in 12-18 credit hours. Therefore, tuition adjustments may occur only if the student drops below the banded amount or adds credit hours above the banded amount. To better understand the refund process for banded tuition, please see the examples under: How does Banded Tuition Affect My Refund Calculations When I Add or Drop a Class?
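For illustration only (hypothetical credit loads, not taken from the Summary of Financial Procedures): a student who drops from 16 to 13 credits stays inside the 12-18 credit band, so banded tuition does not change; a student who drops from 13 to 10 credits falls below the band, so tuition is recalculated, with any adjustment governed by the published refund deadlines.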
Federal Refund Policy
The "Return to Title IV Funds" policy applies to any student who receives federal Title IV funding and withdraws. The Return to Title IV funds formula determines the amount of Title IV funds a student has earned at the time the student ceases attendance, and the amount of Title IV funds a student must return. The amount of Title IV funds a student earns is a proportional calculation based on the amount of time the student attends school through the 60 percent of the term.
If a student ceases to attend school after 60 percent of the term, the student earns 100 percent of the Title IV funds. If an unofficial withdrawal is determined (all failing and/or non-credit grades), the 50 percent date of the term is used as the last date of attendance to calculate refunds if the last date of attendance is unknown.
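As a simplified, hypothetical illustration of the proportional calculation: a student who withdraws 30 percent of the way through the term has earned 30 percent of the Title IV aid disbursed (for example, $1,200 of a $4,000 disbursement, with $2,800 to be returned), while a student who withdraws after the 60 percent point has earned 100 percent and returns nothing.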
Refunds for state aid programs and non-state aid programs are calculated on a proportional basis using the state mandated or institutional refund policy. To calculate the minimum refund due to the Minnesota State Grant Program (Minnesota Child Care and CSC Child Care Grants), the SELF Loan Program, and other aid programs (with the exception of the State Work Study Program), the MOHE Refund Calculation Worksheet is used. Please contact the Financial Aid Office for the complete policy. | https://catalog.stkate.edu/policies/stu-non-acad/stu-finances/refund/schd/ |
Doctor insights on:
Statin 10 Mg Tablet
1. Cetirizine 10 mg for?
Antihistamine: Cetirizine is an antihistamine. It is useful commonly for allergies and for itchy rashes (ex. brand Zyrtec (cetirizine) in the US). ...
Statins (Definition)
Several drugs exist which lower cholesterol by inhibiting an enzyme in liver cells (HMG-CoA reductase) that is involved in cholesterol synthesis. These all have generic names that end in -statin, e.g. atorvastatin (Lipitor) and simvastatin (Zocor). Since nearly all of the meds in this class have been shown to have similar cardiovascular benefits, we often discuss them ...
2. Does Neurontin interact with any of the following drugs?
Coumadin
Digoxin 0.25 mg
Tamsulosin Cap 0.4 mg
Atorvastatin Tab 10 mg
Triam/HCTZ Tab 37.5-25
Flecainide Tab 100 mg
Acetaminophen 500 mg
Neurontin (gabapentin): Neurontin (gabapentin is the generic form) has low potential for drug interactions, including the ones you listed. Monitor for side effects of Neurontin, which commonly include sedation and dizziness. ...
3. Valium 10 mg suppository side effects?
Sedation: Sedation and a drunk-like feeling. Possible sleepiness and lack of coordination, as well as the discomfort of the insertion itself, which should be mild. ...
4. 10 mg Lexapro (escitalopram) with 50 mg tramadol okay to take?
It depends: Some people may be very sensitive to low doses of meds, especially if you have liver disease or your liver enzymes are slow metabolizers of drugs. You could develop serotonin syndrome w/ low doses of Lexapro (escitalopram) & tramadol. Let your doctor know if you experience nausea, flushing, diarrhea, headaches, profuse sweating, muscle twitching, rigidity, confusion, hallucinations or seizures. ...
5. Take Buspar (buspirone) 7.5 mg 2x daily, 10 mg ramipril am, 12.5 mg metoprolol, 10 mg Lexapro at bedtime. Pulse rate slow, 52-59 bpm. Not athletic. Concerning?
HR okay: Nothing at all wrong with a resting HR in the 50s. The metoprolol you take is contributing. ...
6. Can I take 5,000 mg vitamin D, 600 mg omega-3's, 500 mg EPA & DHA, raspberry ketones, with 400 mg bupropion & 10 mg citalopram each day?
Drug interactions: No direct negative interactions between the medications listed, although I recommend reviewing them with your prescribing doc. ...
7. Taking 10 mg Norvasc (amlodipine) and diazepam causing water retention?
Edema and drugs: Diazepam by itself rarely causes edema. Norvasc (amlodipine) can do this, and this may be difficult to treat. You should see your physician to make sure you do not have cardiac, renal or any venous disease of your legs causing this fluid accumulation. Good luck. ...
8. What vitamin supplements should one avoid when taking 75 mg Plavix, two 81 mg aspirin and 40 mg Crestor (rosuvastatin) daily?
High dose E, BUT...: Vitamins are by definition essential for health, so I'm wary of any drug that makes it necessary to avoid vitamins. The only vit. that interacts with these is vit. E, which in high doses thins blood so may increase risk of bleeding with Plavix (clopidogrel) & aspirin. However, this potential risk pales in comparison with the known hazards of combining Plavix (clopidogrel) & aspirin. Please see http://bit.ly/VHiLuH & my comment: ...
9. Can you take Lexapro (escitalopram) 10 mg daily and ArginMax low dose, like 3 capsules, together? ArginMax contains ginkgo biloba, L-arginine, ginseng, etc.
If it feels good: It's OK. You might want to first sit down and figure out how much this will cost you a month. You might want to consult with someone who specializes in optimal health / complementary medicine to come up with an approach that achieves your therapeutic needs (including proper exercise, sleep, balanced nutrition, emotional / educational and spiritual growth). ...
10. Taking Lexapro 10 mg, lamotrigine 200 mg, Seroquel 100 mg; pregnant 5 weeks, taking 1 prenatal and folic acid 800 mcg. Is that enough folic acid?
More than enough: The CDC recommends that pregnant women take 400 micrograms per day of folic acid to give a 70% reduction in neural tube defects. They occur most commonly in the Irish ethnic population. You state that you are taking a prenatal vitamin. It probably contains the right amount of folic acid in it. Check the label, and if it does then you can stop taking one of your pills! ...
11. Friend on Kombiglyze (5 mg saxagliptin, 1000 mg metformin) & Blisto (4 mg glimepiride, 1000 mg metformin) once daily. Fasting comes 110 & PP 200. Help.
Diabetes non cot: Your friend should address this issue. How old is she, 43? For how long has she been managed with these medications, and is an endocrinologist doing the management? For me it is difficult to manage her. She requires a diabetologist, to go over the medications and have a full diabetic panel. She will probably require starting insulin, and very close follow up. ...
12. Is it safe to take lisinopril 40 mg twice daily, metoprolol tartrate 25 mg once daily, and bumetanide 1 mg once daily? I'm also hep C.
Medicine safety: Yes, it is safe. Make sure the hepatitis C is under control. ...
13. Total cholesterol 124, LDL 60, HDL 41, triglycerides 116, VLDL cholesterol 23. Had CABG 7 years ago. Take 10 mg Crestor (rosuvastatin). 52 years old, do I need a statin?
Crestor (rosuvastatin) is a statin: Crestor (rosuvastatin) is a statin used to treat cholesterol. ...
14. 20 mg Ambien, 3 mg Lunesta (eszopiclone), 10 mg Flexeril, 300 mg Wellbutrin, 10 mg Lexapro and a dose of podiapn, all in a few minutes. Will I be OK?
NO, call 911: If you are trying to harm yourself or to harm others you should call your local emergency number 911 or the National Poison Control Center at 1-800-222-1222. Take care and stay out of trouble. ...
15. Is carvedilol 25 mg 1 tab twice daily equal to Bystolic (nebivolol) 10 mg once daily?
Not known: An exact conversion, to my knowledge, is not there. They both work on separate sets of receptors. Ask your PCP as to what set of end points (indication for the drug) and they should be able to optimize your dose over the upcoming weeks. ...
16. Can I take melatonin? I currently take Neurontin 300 mg t.i.d., Klonopin 0.5 mg b.i.d. prn, Soma 350 mg h.s., Tylenol (acetaminophen) qd.
Yes, but: Higher doses of melatonin can actually interfere with normal sleep. Take 0.3 to 1 mg, 90 minutes before intended sleep, and see how you do. There is a possible additive effect with your other meds that also can have sedative effects. ...
17. Is lorazepam 0.5 mg or Xanax (alprazolam) .25 mg stronger?
Which is stronger: About the same.
18. Is 80 mg omeprazole equivalent to 60 mg Dexilant (dexlansoprazole)?
PPI dosing: The maximum doses of each PPI are equipotent in reducing acid. Going higher than these doses imparts no greater benefit and should not be exceeded. Prilosec/omeprazole: 20 BID. Prevacid/lansoprazole: 30 BID. Nexium/esomeprazole: 40 BID. Protonix/pantoprazole: 40 BID. Aciphex/rabeprazole: 20 BID. Zegerid: 40 daily. And Dexilant (dexlansoprazole): 60 daily. So to answer: Dexilant (dexlansoprazole) 60 = omeprazole 40, ...
20. Max daily dose gabapentin
Creating an environment of continuous learning that is inclusive and creates a brave space where people are open to ideas, expression, embrace differences, and are accepting of all people.
Acknowledging power and the effects of the “isms” by creating accessible opportunities in which diverse individuals can participate.
Being mindful that our identities influence how we perceive and how others perceive us.
Connecting and inspiring members with professional development and learning opportunities centered on EDI.
Diversity: We cultivate diversity by sharing our own sets of experiences, culture, beliefs, interests, and education that shape our perspectives and actions.
Equity: We advance equity through fairness by acknowledging that fairness is not always equal. All people and/or groups are given access to the precise number and types of resources for them to achieve equal results.
Inclusion: We practice inclusion by creating accessible opportunities in which diverse individuals are able to participate in the decision-making process.
Based on these commitments and the mission of the Equity Inclusion Action Committee, the Committee requests that the EC approve the name change to the Equity Diversity Inclusion Action Committee.
This page provides EDI resources and training.
If you have topics you would like added to the page or in an offered OASFAA training, or need to speak with someone about an EDI situation you are encountering, please email the EDI committee at: TBD.
Ability: A concept that symbolizes or categorizes people based on a person’s ways of navigating and negotiating society – physically, emotionally, psychologically, and/or mentally.
Diversity: Everyone is diverse. We all bring our own sets of experiences, culture, socioeconomic status, upbringing, interests, and education that shape our perspectives and actions.
Equity: All people and/or groups are given access to the precise number and types of resources for them to realize (attain, reach, achieve) equal results. Acknowledging one’s biases also helps ensure that a fair and impartial result occurs.
Inclusion: The extent to which diverse individuals are able to participate in the decision-making process within an organization and/or group.
https://www.oasfaaonline.org/equity-and-inclusion-action
If your taste buds desire some sweet surprise, titillate them with the Pasticciotti. Though there is no dearth of Italian pastries, this one has some unique characteristics. With a filling of heavy cream, eggs, and almond extract, and an egg wash on the outer layer, this dessert is easy to make and tastes heavenly. So, without wasting time, let us start with the procedure.
Ingredients
For Making The Dough
- Flour – 2 cups
- Egg – 1
- Sugar – 1/2 cup
- Vanilla extract – 1 tsp
- Baking powder – 1 tsp
- Salt – just a pinch
- Milk – ¼ cup
- Butter – ¼ cup
- Lard – ¼ cup
For Making The Filling
- Cornstarch – 3 tbsp
- Butter – 1 tbsp
- Egg yolks – 2
- Milk – 1 cup
- Cream – ½ cup
- Sugar – ½ cup
- Beaten egg – 1
- Almond extract – 1 tsp
Directions
First Prepare The Filling
- Take the cornstarch and sugar in a saucepan. Now pour the heavy cream as well as the milk. Stir all the ingredients with a spoon so that you get a smooth mixture.
- Now put the saucepan on medium flame and keep stirring till the mixture becomes thick. Now add the almond extract and butter along with the egg yolks and mix very well.
- Remove the pan from the heat and pour the thick mixture into a bowl. Cover the bowl so that no skin forms on top during cooling.
Now Making The Pastry
- In a large bowl, mix the flour, salt, sugar, and baking powder. Add the lard and butter and mix until the mixture turns crumbly. Now pour in the milk, vanilla extract, and egg and knead very well so that you get a smooth dough.
- When the dough is smooth, cut it into 2 halves and keep them in the fridge for about 1 hour. Do not forget to cover them with plastic.
- Spread some flour on the work board and start to roll out the first piece of dough. Make sure that the rolled-out dough is about ¼ inch thick. Now take a 3-inch pastry cutter and cut 12 rounds from the rolled-out dough.
- Now take a pastry mold and place 1 round in it. Put some of the filling in it and place another 3-inch round on top. Press the mold so that the upper and lower dough layers seal together and the filling stays intact in the middle.
- Make all the pastries like this. When done, keep the molded pastries in the fridge overnight. The next morning, coat them with the beaten egg.
- Take a baking dish and put the egg-washed pastries on it. Place it in the oven and bake for around 15 minutes at 425 degrees Fahrenheit.
Do you like chocolate? Then you can prepare a chocolate custard to go inside the pastries. If you are having a party next weekend, you can make the pastries in advance and keep them in the fridge. Just take them out 15 minutes before baking, then bake and serve fresh, delicious pastries to your guests. | https://justitalianfood.com/making-pasticciotti-a-wonderful-italian-pastry/
The aim of this work is to present a theoretical analysis of a 2D ultrasound transducer composed of crossed arrays of metal strips placed on both sides of a thin piezoelectric layer. Such a structure is capable of electronic beam-steering of the generated wavebeam both in elevation and azimuth. In this paper a semi-analytical model of the considered transducer is developed. It is based on a generalization of the well-known BIS-expansion method. Specifically, applying the electrostatic approximation, the electric field components on the surface of the layer are expanded into fast-converging series of double periodic spatial harmonics, with the corresponding amplitudes represented by properly chosen Legendre polynomials. The problem is reduced to the numerical solution of a certain system of linear equations for the unknown expansion coefficients.
The relationship between eigenstructure (eigenvalues and eigenvectors) and latent structure (latent roots and latent vectors) is established. In control theory eigenstructure is associated with the state space description of a dynamic multi-variable system and a latent structure is associated with its matrix fraction description. Beginning with block controller and block observer state space forms and moving on to any general state space form, we develop the identities that relate eigenvectors and latent vectors in either direction. Numerical examples illustrate this result. A brief discussion of the potential of these identities in linear control system design follows. Additionally, we present a consequent result: a quick and easy method to solve the polynomial eigenvalue problem for regular matrix polynomials.
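The identities above are analytical; purely as a computational aside (not the method derived in the paper), the sketch below solves a small quadratic matrix polynomial eigenvalue problem P(λ) = A0 + λA1 + λ²A2 by the standard companion linearization and checks the residual of each eigenpair. The 2×2 matrices are invented for illustration.

```python
# Minimal sketch: quadratic polynomial eigenvalue problem via companion linearization.
import numpy as np
from scipy.linalg import eig

A0 = np.array([[2.0, 1.0], [0.0, 3.0]])   # example coefficient matrices (made up)
A1 = np.array([[1.0, 0.0], [1.0, 1.0]])
A2 = np.eye(2)

n = A0.shape[0]
I, Z = np.eye(n), np.zeros((n, n))

# First companion form: solve A z = lam * B z with z = [x; lam*x]
A = np.block([[Z, I], [-A0, -A1]])
B = np.block([[I, Z], [Z, A2]])

lam, V = eig(A, B)      # generalized eigenvalues/eigenvectors
X = V[:n, :]            # top block carries the eigenvectors of P

for k in range(len(lam)):
    Pk = A0 + lam[k] * A1 + lam[k] ** 2 * A2
    r = np.linalg.norm(Pk @ X[:, k]) / np.linalg.norm(X[:, k])
    print(f"lambda = {lam[k]:.4f}, residual = {r:.2e}")
```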
The object of the present paper is to investigate several general families of bilinear and bilateral generating functions with different arguments for the Gauss hypergeometric polynomials.
According to fuzzy arithmetic, dual fuzzy polynomials cannot be replaced by fuzzy polynomials. Hence, the concept of a ranking method is used to find the real roots of dual fuzzy polynomial equations. In this study we propose an interval type-2 dual fuzzy polynomial equation (IT2 DFPE). The concept of the ranking method is then used to find the real roots of the IT2 DFPE (if they exist). We transform the IT2 DFPE into a system of crisp equations. This transformation is performed with a ranking method for fuzzy numbers based on three parameters, namely value, ambiguity, and fuzziness. At the end, we illustrate our approach with two numerical examples.
In this work, we apply the Modified Laplace decomposition algorithm to find a numerical solution of Blasius’ boundary layer equation for the flat plate in a uniform stream. The series solution is found by first applying the Laplace transform to the differential equation and then decomposing the nonlinear term by the use of Adomian polynomials. The resulting series, which is exactly the same as that obtained by Weyl (1942a), is expressed as a rational function by the use of diagonal Padé approximants.
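For readers who want a quick numerical cross-check, the sketch below solves the same Blasius problem by a plain shooting method rather than the Modified Laplace decomposition described above; the value used to stand in for infinity (eta_max = 10) is an assumption.

```python
# Blasius problem: f''' + 0.5*f*f'' = 0, f(0) = f'(0) = 0, f'(inf) = 1,
# solved by shooting on the unknown wall curvature f''(0).
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def blasius_rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def fp_at_infinity(s, eta_max=10.0):
    sol = solve_ivp(blasius_rhs, [0.0, eta_max], [0.0, 0.0, s],
                    rtol=1e-9, atol=1e-9)
    return sol.y[1, -1] - 1.0   # want f'(eta_max) = 1

s_star = brentq(fp_at_infinity, 0.1, 1.0)   # bracket the sign change
print(f"f''(0) ≈ {s_star:.6f}")
```

The classical value of f''(0) is approximately 0.332057, so the shooting result gives a rough sanity check on any series- or Padé-based solution.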
Reduction of Single Input Single Output (SISO) discrete systems into lower order model, using a conventional and an evolutionary technique is presented in this paper. In the conventional technique, the mixed advantages of Modified Cauer Form (MCF) and differentiation are used. In this method the original discrete system is, first, converted into equivalent continuous system by applying bilinear transformation. The denominator of the equivalent continuous system and its reciprocal are differentiated successively, the reduced denominator of the desired order is obtained by combining the differentiated polynomials. The numerator is obtained by matching the quotients of MCF. The reduced continuous system is converted back into discrete system using inverse bilinear transformation. In the evolutionary technique method, Particle Swarm Optimization (PSO) is employed to reduce the higher order model. PSO method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of original higher order model and the reduced order model pertaining to a unit step input. Both the methods are illustrated through numerical example.
The Helmholtz equation often arises in the study of physical problems involving partial differential equation. Many researchers have proposed numerous methods to find the analytic or approximate solutions for the proposed problems. In this work, the exact analytical solutions of the Helmholtz equation in spherical polar coordinates are presented using the Nikiforov-Uvarov (NU) method. It is found that the solution of the angular eigenfunction can be expressed by the associated-Legendre polynomial and radial eigenfunctions are obtained in terms of the Laguerre polynomials. The special case for k=0, which corresponds to the Laplace equation is also presented.
A numerical method for the Riccati equation is presented in this work. The method is based on replacing the unknown function with a truncated series of hybrid block-pulse functions and Chebyshev polynomials. The operational matrices of derivative and product of the hybrid functions are presented. These matrices, together with the tau method, are then utilized to transform the differential equation into a system of algebraic equations. Corresponding numerical examples are presented to demonstrate the accuracy of the proposed method.
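As a point of reference only (this is not the hybrid block-pulse/Chebyshev tau scheme of the abstract), a common benchmark Riccati equation is y' = 1 + y², y(0) = 0, whose exact solution is y = tan t; the sketch below shows the kind of accuracy comparison such papers typically report, using a standard ODE solver.

```python
# Benchmark Riccati equation y' = 1 + y^2, y(0) = 0, compared with tan(t).
import numpy as np
from scipy.integrate import solve_ivp

t_eval = np.linspace(0.0, 1.0, 101)
sol = solve_ivp(lambda t, y: 1.0 + y**2, [0.0, 1.0], [0.0],
                t_eval=t_eval, rtol=1e-10, atol=1e-12)

err = np.max(np.abs(sol.y[0] - np.tan(t_eval)))
print(f"max |y_numeric - tan(t)| on [0, 1]: {err:.2e}")
```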
In this paper a unified approach via block-pulse functions (BPFs) or shifted Legendre polynomials (SLPs) is presented to solve the linear-quadratic-Gaussian (LQG) control problem. Also a recursive algorithm is proposed to solve the above problem via BPFs. By using the elegant operational properties of orthogonal functions (BPFs or SLPs) these computationally attractive algorithms are developed. To demonstrate the validity of the proposed approaches a numerical example is included.
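For orientation, the sketch below computes the LQR state-feedback half of an LQG design directly from the continuous algebraic Riccati equation with SciPy; this is a generic reference computation, not the BPF/SLP recursive algorithm proposed in the paper, and the A, B, Q, R matrices are made up for illustration.

```python
# Standard LQR gain from the continuous algebraic Riccati equation.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 1.0])
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)   # A'P + PA - PBR^{-1}B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)        # optimal feedback: u = -K x

print("K =", K)
# Closed-loop matrix A - B K should have eigenvalues in the left half-plane.
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```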
Fuzzy fingerprint vault is a recently developed cryptographic construct based on the polynomial reconstruction problem to secure critical data with fingerprint data. However, previous research is not applicable to fingerprints having only a few minutiae, since it uses a fixed degree of the polynomial without considering the number of fingerprint minutiae. To solve this problem, we use an adaptive degree of the polynomial that considers the number of minutiae extracted from each user. Also, we apply multiple polynomials to avoid the possible degradation of security caused by a simple solution (i.e., using a low-degree polynomial). Based on the experimental results, our method can make the possible attack 2^192 times more difficult than using a low-degree polynomial, as well as verify users having only a few minutiae.
A new generalization of the new class of matrix polynomial sets has been obtained. An explicit representation and an expansion of the matrix exponential in a series of these matrix polynomials are given.
Several works regarding facial recognition have dealt with methods which identify isolated characteristics of the face or with templates which encompass several regions of it. In this paper a new technique is introduced which approaches the problem holistically, dispensing with the need to identify geometrical characteristics or regions of the face. The characterization of a face is achieved by randomly sampling selected attributes of the pixels of its image. From this information we construct a data set corresponding to the values of low frequencies, gradient, entropy, and several other characteristics of the pixels of the image, generating a set of “p” variables. The multivariate data set is approximated with different polynomials minimizing the data fitness error in the minimax sense (the L∞ norm). With the use of a Genetic Algorithm (GA) it is possible to circumvent the problem of dimensionality inherent to higher-degree polynomial approximations. The GA yields the degree and the values of a set of coefficients of the polynomials approximating the image of a face. The system is trained by finding a family of characteristic polynomials of several variables (pixel characteristics) for each face (say Fi) in the database through a resampling process. A face (say F) is recognized by finding its characteristic polynomials and using an AdaBoost classifier to compare F’s polynomials with each of the Fi’s polynomials. The winner is the polynomial family closest to F’s, corresponding to the target face in the database.
Problems on algebraic polynomials appear in many fields of mathematics and computer science. In particular, the task of determining the roots of polynomials has been frequently investigated. Nonetheless, the task of locating the zeros of complex polynomials is still challenging. In this paper we deal with the location of zeros of univariate complex polynomials. We prove some novel upper bounds for the moduli of the zeros of complex polynomials. That means we provide disks in the complex plane where all zeros of a complex polynomial are situated. Such bounds are extremely useful for obtaining a priori assertions regarding the location of zeros of polynomials. Based on the proven bounds and a test set of polynomials, we present an experimental study to examine which bound is optimal. | https://waet.org/search?q=Polynomials.
Terrible Advice Tuesdays (T.A.Tues): The Secret To Drilling Tempered Glass?
Terrible Advice Tuesdays: Tempered glass can be drilled. Just apply slow pressure on the drill bit.
The rest of the story: Give that technique a try and let me know how it works for you!
Tempered glass can’t be drilled. If you drill it, the glass will shatter. This fact is why a lot of mass manufactured aquariums have a sticker on the bottom of the tank that clearly states the glass is tempered.
How do you tell if a piece of glass is tempered? The easiest way is to put on a pair of polarized sunglasses and look at the piece of glass. Black lines will appear on the glass, especially when you rotate your head and look at the glass sideways.
Note: I did find this article on eHow telling you how to drill tempered glass. I’ve never heard of this technique before and I didn’t find it on any other searches so I’m extremely skeptical that it will work.
Comments for this article (24)
Drill it…. I dare you..
I think that is a mis-type in the name of the article.
All the steps and directions look like everything I’ve seen and read about drilling Non-tempered glass.
I would take this with a grain of salt and not attempt it without at least trying it on a scrap piece of glass first.
I still think it’s a long shot.
Good luck with that!!!!!!!
I have seen the exact method mentioned, and it worked on only one piece of tempered glass; the other sheets shattered. The guy was a pro glass cutter/worker, and he said the reason it shattered was due to the age and use of the glass. He was right, as the old pieces came from a cabinet that was years old and the piece that worked was less than 4 months old, with a stamp saying when it was made… so there are many factors to take into account when drilling tempered glass. Good luck.
Drilling three holes in my newly acquired 90 gallon days after getting it was probably the most nerve-racking experience I’ve had in this hobby. Very gratifying though to know that I did it and didn’t end up shattering a perfectly good tank. 🙂
How to turn an aquarium into a trickle tower 101:
Take a standard aquarium with a tempered glass bottom and drill a hole as if you were trying to cut an opening for a bulk head overflow kit. Second step- after applying the drill and water cooling set up, use a broom to clean up the shattered aquarium base. Remove excess splinters still clinging to the frame and insert egg crate as a support shelf for your plastic media. install this new four sided biofilter glass cage ( former aquarium) to the top of your sump and you now have a trickle tower on your sump. Firmly plant tongue in cheek and pretend that this was done on purpose and not an attempt to drill an aquarium with a tempered glass base. 🙂 JasPR
I got a new 55 gal and tried drilling the long side glass. I drilled it just as the above eHow article suggests. It was tempered and unmarked. Half way through the drill it shattered into a million pieces. I know to never drill the bottom glass, but was surprised the side glass was tempered. I have drilled many aquariums without a problem. Another way to know if the glass is tempered: drill it… if it shatters into a zillion little pieces, Yep… it’s tempered! Tempering glass not only makes it stronger, but it also makes it safer, thus “safety glass”. It is used in cars side and rear glass, table tops, etc. It is made to break, when under stress, into a million little pieces rather than large knife like shapes. So… tempered glass will shatter when under stress (ie. drilled). It is made to do so.
It depends on the tank and the manufacturer. I had a 125 that was able to be drilled through the bottom.
Now my 55 has a sticker on the bottom that says this side is tempered Do not drill.
So that leads me to believe only the bottom is tempered and not the sides.
I haven’t tried it or do I think I’m going to on the 55. I don’t see this being my final tank,it’s too narrow.
Mark, that article you linked for drilling tempered glass is more than likely misnamed. That’s the same procedure I have used for drilling non tempered glass, and if you search youtube you can find people “attempting” to drill tempered glass with that same method. The results are always hysterical.
Umm… It’s hilarious that people actually try in the first place. Good luck if you choose to ruin a tank.
You find the funniest advice.. If you’re going to invest so much into your hobby why not just have it drilled before you bring it home? The old adage is”Penny Wise, Pound Foolish”..
I had to replace a 90 and asked the LFS if it could be drilled. He said only the bottom was tempered and couldn’t be drilled. There was a sticker on the bottom but it said ” do not drill tank. Tempered glass”. Called manufacturer and it indeed was all tempered. Never even considered drilling it.
I have a Fluval Spec V… There is nothing on the tank that says it is tempered or not. However the glas is super super thin, 1/8″ thick! Well I drilled one hole 1.5″ and in an attempt to get my set up to run one bulkhead with return and drain didn’t work so I drilled a second hole 30mm for a 1/2″ bulk. That was the most nerve racking experience of my life!!! I thought drilling the first hole was bad but the second only inches away on the bottom of an unknown piece of glass! Thank god it worked and the pico build continues!
If you do buy a tank with a tempered bottom like an aqueon. The sides were not tempered on my tank. So I drilled the side walls about 2 inches from the bottom of the tank and installed a bulkhead as normal. Works great! Been up and running for the last couple years so far.
This is off topic!!! But I heard a myth that you can’t mix salt in a trash can cause it releases phosphates in the water. Is this true? I’m using a Brute food grade one & need to know if I’m doing any harm.
so I drilled it. (not having read the advice) It’s going slowly but that’s ok. I take bio break – go back and there are many little pieces of glass on my bench.
Sadly this leaves me with and AWESOME flying saucer with no lid – and a show in 5 days. bummer.
Hi there, Tempered glass can be drilled. I have seen it done on aquariums but it’s a special technique. If you take a piece of normal glass the same thickness as the tempered glass and cut it into say 3 inch by 3 inch square and silicone it where you need to drill the hole and let it cure. Drilling from the inside outward. You need to start drilling the non tempered glass first and when you hit the tempered glass the normal glass holds the energy in the tempered glass so it won’t shatter.
Cheers!!!
Glass physics 101. Tempered glass has three layers – surface compression, center tension, other surface compression.
If you pierce the compression/tension layer from either direction you release stored energy that results in lots of little bits of glass.
There is NO way to cut or drill tempered glass without it shattering. Not waterjet, not laser, not plasma cutter, not diamond drill or saw, not underwater, nothing.
Speed and care has nothing to do with it. It is simply physically impossible.
If you THINK that you have cut or drilled tempered glass you are wrong.
I have personally drilled tempered glass with a 1″ hole saw diamond Dremel bit it was a dry bit on my 18volt drill it went through fine .problem was hole was too small and as I was reaming the hole just a bit bigger to cut what I needed ..pooofffff…glass bits everywhere so it can be drilled ..will it still be strong after I have no idea but it can be done I’ve done it
Well worth a read. Got great insights and information from your blog. Thanks.
I have a tank that’s drilled but is tagged tempered. | http://www.mrsaltwatertank.com/terrible-advice-tuesdays-t-a-tues-the-secret-to-drilling-tempered-glass/ |
Row & Seat Numbers
- Rows in Section 217 are labeled 1-5, BS6
- An entrance to this section is located at Row BS6
- have 22 seats labeled 1-22
- has 24 seats labeled 1-24
- have 16 seats labeled 1-24
- When looking towards the court, lower number seats are on the left
Ratings, Reviews & Recommendations
200 Level Baseline (Seating Zone) -
There aren't many seats behind the basket on the 200 Level, which is a good thing for a few reasons.
First, these seats are really far away from the action. With nearly 40 rows of seating ...
There are several factors you need to consider in matching the time-lapse settings to your ideal setup, and the right choice primarily depends on your goals and requirements. If you want to achieve professional time-lapse videos, you’ll have to start with the basics.
In shooting a time-lapse, you will come across three primary settings that need to be adjusted before starting a shoot: the frame rate (how many frames per second, or fps, to apply), the interval rate, and the memory capacity. These should be set appropriately if you want to get good results based on the guiding principles of time-lapse photography.
Why Choose The Right Settings For Time Lapse?
The primary goal of choosing the best time lapse mode on your camera is to capture more progressive details of a particularly long scene in a short period. You want to be as accurate as possible since you are essentially recording in real-time. Therefore, the amount of time you spend recording and playing back in your digital video recorder should be long enough to capture all of the events you want to keep track of.
The time lapse settings will help achieve your desired recording goals with specific adjustments. Take note that you need to make sure that the suggested setup is suitable for your camera model to get the most accurate video recording.
There are different time lapse modes available, and each has its purposes. You may be required to turn some of the extra settings down if they are not suitable for your needs.
What Are The Best Settings For A Professional Time Lapse Video?
Even if you are new to this, you might have already seen a time lapse video several times on the internet. These videos offer an interesting window into the world of photography. You can use these videos as training aids or get some input about positioning and exposing your camera. They can also show you the importance of getting the right settings to use so that you can create a professional-looking time lapse video.
When a series of still photos is taken over a specific period, you can put them together to give the impression of time speeding up. A video is a compilation of captured images or frames in quick progression. If you view a video at a very slow speed, you can see the succession of every frame that makes up the whole recording. In a time lapse, you want to quicken the sequence of the frames in order to capture the entire movement or scene at a fast pace and in a short time. For example, you can show off a beautiful sunrise, a sunset, or blooming flowers, or whatever else you want to capture in a time lapse, without making the viewers watch for hours.
You may find that you can easily use the time-lapse settings on your camera to create a very cool effect while shooting a new project. When you are shooting a new project, especially if you don’t know the photo settings yet, you can use the guidelines here, so you can make adjustments to the settings to have the best professional result.
So, how can you achieve a good time lapse quality?
Let’s talk more about the functions of each setting to help you achieve the perfect time lapse shoot.
1. Frame Rate
The first thing you need to consider on your camera’s time-lapse settings is how many frames per second you want to record. The frames refer to the photos captured per second. This setting should go hand in hand with the interval rate you want to set for the shoot.
How Many Frames Per Second is Good For Time Lapse?
Time lapse videos are generally rendered at 24 or 30 fps. You can start shooting your time lapse at the initial frame rate, and you can adjust it later on if it seems necessary. This will make the actual footage look much more professional. The resulting footage will also have much better color and image resolution.
However, for you to calculate the accurate frame rate for the time lapse, you have to determine how fast the movements will be in a scene, which involves the next settings option you need to adjust.
2. Interval Rate
Time-lapse interval determines the speed and length of your output video. To time it properly, you need to consider the following factors:
- Type of event or scene
- How long the event/shoot would take
- Rate of action that will take place
- Length of time-lapse compilation
How to Choose a Time-lapse Interval
Here is a basic time-interval guideline for beginners who want to capture a time-lapse shoot with the best settings (a small worked example follows the list):
- 1-second time interval: fast-moving scenes (i.e., car drive)
- 1 to 3-second time interval: moderately fast-moving scenes (i.e., cityscapes, sunsets)
- 15 to 60-second time interval: moderately slow-moving scenes (i.e., moving shadows, stars)
- 90-second to 15-minute time interval: average movement speed at more extended periods (i.e., building construction)
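To see how the interval and the playback frame rate interact, here is a minimal planning sketch; the numbers (a 45-minute sunset shot at a 2-second interval, played back at 30 fps) are only an example.

```python
# Simple time-lapse planner: frames captured and resulting clip length.
def timelapse_plan(event_minutes, interval_seconds, playback_fps=30):
    frames = int(event_minutes * 60 / interval_seconds)   # photos captured
    playback_seconds = frames / playback_fps              # clip length
    return frames, playback_seconds

frames, clip = timelapse_plan(event_minutes=45, interval_seconds=2, playback_fps=30)
print(f"{frames} frames -> about {clip:.0f} seconds of footage at 30 fps")
# 1350 frames -> about 45 seconds of footage at 30 fps
```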
3. Memory Card Capacity
If you want to estimate how much memory a time lapse will take up, break it down to the number of frames captured and multiply that by the size of each frame.
The size of the file will vary depending on the camera’s image size for the time lapse and the quality settings you want to use for the shoot. Naturally, videos with higher resolutions are going to be stored in bigger file sizes.
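Building on the planning sketch above, a rough storage estimate just multiplies the frame count by the per-frame file size; the 25 MB figure below is an assumed size for a full-resolution RAW file, so substitute the size your own camera actually produces.

```python
# Rough storage estimate for a time-lapse shoot (file size is an assumption).
def card_space_gb(frames, mb_per_frame=25):
    return frames * mb_per_frame / 1024.0

print(f"{card_space_gb(1350):.1f} GB for 1350 RAW frames")   # about 33 GB
```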
Lastly, make sure you use a memory card with the capacity to store the entire shoot. You won’t need the expensive type of media cards with very high write speeds; prioritize higher capacity for smoother recording. Otherwise, you’ll need to keep an eye on the scenes or exposures and swap media cards just as one runs out, which especially applies to longer time lapses.
Final Takeaway
In addition to the settings you need to consider in terms of the time-lapse interval and how many frames per second is good enough for the shoot, your video quality has a lot to do with your camera and the quality settings available in it. Many professional photographers prefer to shoot using high-definition cameras because the image is more precise and more detailed. Some would instead shoot in a lower resolution because they think it is more aesthetically pleasing. Whatever your preferences are, don’t hesitate to explore and experiment until you find the ideal settings for time-lapse recordings that best suit your needs.
If you’re looking for quality time-lapse cameras, you can visit CamDo’s website today and check out their offerings. | http://photodrole.net/author/jamesp/ |
Here is an interesting link from a university. The subject is “Water in Art”.
http://witcombe.sbc.edu/water/art.html
In the Impressionist Period, artists spent hours painting water. As the light changed during the day, so did the reflections and colors in the water. Van Gogh spent hours, all day, with his canvas and brushes, painting and repainting the changing colors of the water at the seaside. During this period in art history, the Impressionists looked at painting water in a scientific way, as they worked at learning how to blend colors to illuminate the picture, as a reflective effect.
At that time, this was not the standard teaching in art schools. Art schools preferred to teach the same methods of the past to art students, advocating great detail and realism as the only art style to learn. They looked at the Impressionist painters as the “hippies” of that period and didn’t take them seriously. Art schools looked at these “renegades” as going down an unsuccessful path.
What they didn’t realize, or consider important, was that some of these “rogue” artists were experimenting with light and color in a scientific as well as artful style. They used water as their subject, observing the effects of light on color and wanting to duplicate that on canvas. In this process they continued to learn about color and how one color affects another.
Even the subject of “water” can hold different meanings among artists. Water can be viewed scientifically or purely aesthetically. Civilizations have always looked at water as “life-giving”. Water has always been a very useful subject matter in this way because it is always important for artists to be able to express their ideas. This allows people to see there is always a new way of looking at things. Water is a popular subject because it can be very interesting artistically and scientifically, and it is so important to life in general. We can view it as romantic, dramatic, serene, violent, inspiring, a form of transportation, living quarters, and life-giving. | https://gardengloflowerart.com/2008/05/08/water-in-art-has-different-meanings-in-civilization-and-among-artists/
# Direxion
Direxion is a provider of financial products known for its leveraged ETFs. Founded in Alexandria, Virginia, the company also has offices in New York City, Boston, and Hong Kong.
## History
Direxion was founded in 1997 under the name Potomac Funds as a provider of mutual funds. The original name referred to the Potomac River near the company's first office in Alexandria, Virginia. In November 1997, Potomac Funds became the second company to introduce an inverse mutual fund, following a similar move by Rydex Investments in 1994. The company began using the Direxion name in 2006. The use of the letter "X" in the new name was intended to draw attention to the leveraged index funds in the company's offerings. That year the company also opened an office in the Prudential Tower in Boston, Massachusetts.
Direxion launched its first leveraged ETFs in 2008. In November 2008 the company was the first to offer ETFs with 3X leverage, a move that was copied some months later by its competitors ProShares and Rydex Investments. The move made it one of the fastest-growing ETF companies, with its sixteen 3X ETFs reaching a total of $3.4 billion in assets by April 2009. The move towards higher-leverage offerings by the three companies provoked scrutiny from the U.S. Securities and Exchange Commission and the Massachusetts Secretary of the Commonwealth, and a number of broker-dealers stopped selling leveraged ETFs. The criticisms centered around perceived tracking error: the ETFs were designed to achieve the stated multiple of the return on the underlying on a daily basis only (with the cost of the daily rebalancing passed on to investors in the form of higher expense ratios), but commentators suggested that some investors, even institutional investors, had mistakenly tried to use the inverse products as longer-term hedges against their underlyings.
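As a rough illustration of that criticism, the sketch below compares a daily-rebalanced 3x fund with a naive 3x of the period return on a randomly generated index path; the volatility figure and the one-year horizon are assumptions, not Direxion data.

```python
# Toy example: why a 3x daily-rebalanced ETF is not 3x the period return.
import numpy as np

rng = np.random.default_rng(seed=1)
daily_index_returns = rng.normal(loc=0.0, scale=0.02, size=252)  # ~1 trading year

index_growth = np.prod(1 + daily_index_returns) - 1
etf_growth = np.prod(1 + 3 * daily_index_returns) - 1            # leverage resets daily

print(f"index over the period:     {index_growth:+.1%}")
print(f"3x daily-rebalanced ETF:   {etf_growth:+.1%}")
print(f"naive 3x of period return: {3 * index_growth:+.1%}")
```

Because the leverage resets every day, the compounded result can drift well away from three times the index's period return, especially in volatile markets, which is one reason these products are pitched as daily trading tools rather than long-term hedges.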
In December 2010, Direxion added 24 ETFs to its range of offerings, including some non-leveraged funds, and continued to expand its offering of non-leveraged funds in 2011. The company's gold miner bull and bear ETFs are among the most-traded gold-related leveraged ETFs. In February 2020, the company announced the launch of its first leveraged environmental, social and corporate governance fund, offering 150% long exposure and 50% short exposure respectively to the best- and worst-scoring companies in the MSCI USA ESG index, with a quarterly rebalance. In March 2020, amidst the 2020 stock market crash, Direxion announced that it would reduce the leverage of ten of its ETFs from 3x to 2x and close eight others. This was part of a broader trend among providers of leveraged ETFs during the first quarter of 2020; nevertheless, Direxion saw inflows of nearly $4 billion during March 2020 alone. | https://en.wikipedia.org/wiki/Direxion |
Published by Haylee Shields. Modified over 4 years ago.
1
A WORK IN PROGRESS….. Western’s RTI Program
2
RtI “ The process of implementing high quality scientifically based instructional practices with students based on identified needs, monitoring the response of the student, and changing the instruction based on the student’s response and their progress data”
3
Importance to Western
Western Elementary has attempted to provide some ‘form’ of an intervention program for the last six years due to the economically and culturally diverse makeup of the student population.
Not eligible for Title One
Not eligible for Reading Recovery
Increasing immigrant population
4
RtI Should Encompass…
All grade levels across all core areas: Reading, Writing, Math, Behavior, Language
Western has taken steps to address Reading and Math
5
Six Critical Components of an RTI Model
Universal screening
A measurable definition of the problem area
Baseline data prior to an intervention
Establishment of a WRITTEN plan detailing accountability
Progress monitoring
Comparison of pre-intervention data to post-intervention data for efficacy
6
What Are Interventions?
Targeted assistance based on progress monitoring
Administered by the classroom teacher, a para-educator, or an external interventionist
Provides additional instruction: individual, small group, and/or technology assisted
7
Intervention Decision Teams
Team Leader: Principal or Dean
Case Manager/Data Manager: Classroom teacher
Responsible for interviewing referring individuals, gathering and assembling information, presenting the case and monitoring the intervention
PE, Music, Art, Writing teachers or para-educators
School Psychologist: Organizes presentation of data, assists in plotting student progress
8
Essential Features: Universal Screening
Performed 3 or 4 times annually
Provides data on all children in that grade
Provides early identification of children who are not meeting academic expectations
What we found: students that would be targeted at Western were not necessarily going to be targeted in other schools (the bottom 20% varies school to school); there is still a problem with transfer students
11
Our First Step… Finding a universal screening assessment
Used assessments that we had available and therefore specific to grade level:
Primary: Phonological Awareness Skills Test (PAST), Literacy First Phonics Assessment, Rigby PM Ultra Benchmark Assessment
Third Grade: Rigby PM Benchmark Assessment
Fourth Grade: PAS (Predictive Assessment) Language/Math
Fifth Grade: PAS, Placement Tests Language/Math
12
Determine Which Students Need Intervention…
Students performing 6 months to a year below grade level
The bottom 20% of the grade level
14
Number of Students Currently in RTI
               Reading #2   Reading #3   Math #2   Math #3
Primary
Third Grade        15           11          -         -
Fourth Grade       10            6         18         -
Fifth Grade         5            7         16         6
Total number of students receiving service: 94
Percentage of total school population: 13.6%
15
How We Set Up Interventions
Considerations:
Time: when the intervention would occur; length of the intervention
Who would provide the intervention: teachers? a specifically designated person?
Person responsible for record keeping (weekly assessments)
16
Our Intervention Team: Para-educators, Parents, and Retired Teachers
Retired teachers are monetarily compensated
17
Tiered Level Instruction
Tier 1: General classroom instruction with 120 minutes daily devoted to Language Arts
Tier 2: Additional research-based instruction provided 3 to 5 days a week
Tier 3: Research-based instruction (different from Tier 2) provided 3 to 5 days a week
Tier 4: Special Education
20
Primary Reading Intervention (flowchart: student returns to classroom, or special education referral)
21
RtI 4 th Grade Math
22
RtI 5 th Grade Math
25
Tier I Intervention
Focus: For all students
Program: Rigby Literacy (Harcourt Rigby Ed. 2000) for Primary/4th; Scott Foresman Reading (2004) for 3rd and 5th
Grouping: Multiple age grouping formats to meet student needs
Time: 120 minutes a day (Primary and 3rd); 90 minutes a day (4th and 5th)
Assessment: Benchmark Assessment 4 times a year (Primary); PAS Assessment 3 times a year (3rd, 4th, 5th)
Interventionist: General education teacher
Setting: General education classroom
27
What We Do
All primary students are required to attend the CORE section of daily group instruction (120 minutes a day for Language Arts), inclusive of Tier 1.
Number of students currently receiving intervention assistance: Tier 2: ; Tier 3:
The program in use for these students is Early Success and the Continuum of Literacy Learning (Fountas/Pinnell).
29
Primary: Restructured and expanded an existing program
Tier 2 consists of:
An instructional aide working 2 to 2.5 hours a day, 5 days a week
Identified children scheduled for 20-minute blocks of intervention
Children working in small groups (5-6 children) with the aide
Tier 3 now consists of:
Two retired primary teachers working nine hours a week (3 hours a day, 3 days a week)
Identified children scheduled for 30-minute blocks of intervention
Children working in small groups (1-3 children) with the teacher, using running records, anecdotal record keeping, research-based leveled reading, and an intense home reading program
31
Features of Tier II
Purpose: To support individual students who have not met benchmarks
Targeted students: Those with significantly lower levels of performance than their peers, who are learning at a much slower rate and are falling behind their classmates
33
Tier II Intervention Instruction
Focus: For students identified with marked difficulties who have not responded to Tier I efforts
Program (Reading): Primary: Literacy Continuum; 3rd: Ladders to Success; 4th: Ladders to Success; 5th: Study Island
Program (Math): Primary: N/A; 3rd: N/A; 4th: Study Island; 5th: Study Island
Grouping: Homogeneous small group instruction (1:5)
Time: 5 days a week, 30-minute sessions; 3rd grade reading: 3 days a week, 30-minute sessions
Assessment: Progress monitoring weekly on the target skill
Interventionist: Primary: aides; 3rd/4th/5th: parents or paid interventionist
Setting: Outside the classroom
34
Tier III Intervention Instruction
Focus: For students with marked difficulties who have not responded to Tier I or Tier II
Program (Reading): Primary: Early Success; 3rd: Soar to Success; 4th: Soar to Success; 5th: Study Island (web)
Program (Math): Primary: N/A; 3rd: N/A; 4th: Study Island (web); 5th: Voyager Math (web)
Grouping: Homogeneous small group (1:1, 1:2, or 1:3)
Time (Reading): Primary: 3 days a week, 30 min; 3rd: 3 days a week, 30 min; 4th: 3 days a week, 30 min; 5th: 5 days a week, 30 min
Time (Math): Primary: N/A; 3rd: N/A; 4th: 5 days a week, 30 min; 5th: 5 days a week, 30 min
Assessment: Progress monitoring weekly
Interventionist: Paid interventionist
Setting: Outside the classroom
37
RTI Schedule for Tier II and Tier III Primary Interventions
Green Pod I (Sapp, Moors and Stout)
Uninterrupted time: 7:40-8:40 (60 minutes); 9:40-10:40 (60 minutes)
Intervention times: Tier II (para-educators): Sharon, 8:40-9:40; Tier III (teachers): Vicki S./Julia, 8:40-9:40 (3 groups each)
Green Pod II (Hemmerlein, Livingston, Daley)
Uninterrupted time: 7:40-9:40 (120 minutes)
Intervention times: Tier II (para-educators): Sharon, 9:40-10:40; Tier III (teachers): Vicki S./Julia, 9:40-10:40
Yellow Pod (Parker, Corman, Mullins, Sutton)
Uninterrupted time: P1's: 8:40-10:30 (110 minutes); P2's: 8:40-9:40 (60 minutes)
Intervention times: Tier II (para-educators): Vicki J., 8:00-8:40 (P1's), 9:40-10:30 (P2's); Tier III (teachers): Vicki S./Julia, 7:40-8:40 (P1's & P2's)
38
Parent Communication
Dear Parents,
As we begin the second quarter of the school year, my hope is that every parent has had the opportunity to meet with their child's teacher and discuss their student's academic progress. As a school, Western strives to create a climate of high expectations for student learning. Our mission, as well as our belief, is that all children are able to achieve the essential learning of their grade level. A good indicator of how fully we are committed to this idea is the manner in which we address those students who 'have not learned'. Although Western's educational philosophy has always included an academic intervention procedure for those students in need, this year we have worked to improve and refine that process.
Many of you may have had your child tell you they are participating in RTI, which is an acronym for 'Response to Intervention.' Let me first state that this is not a 'special education' program. Basically, it is our teachers' response when we have a student who is experiencing difficulty in learning. It is a systematic program meant to ensure that these students receive additional time and support for learning. Research has made it clear that it is impossible for all students to learn at high levels if some do not receive additional time and support for learning. Everyone does not learn at the same rate, and the amount of support needed varies between students.
Western has set certain criteria in determining when to implement these interventions. We follow a systematic process of intervention to ensure that students receive additional time and support according to a school-wide plan:
* Intervention occurs in a timely manner, at the first indication the student is experiencing difficulty.
* Intervention is a 'direct' process rather than a voluntary one. Parents are contacted to inform them that their child will be receiving additional assistance and to specify which area will be addressed.
* All students have equal access to the assistance; it is not based on the teacher.
* The additional time and support is offered during the school day and is designed in a way that does not deprive students of new direct instruction in their classroom. We know that if we offered this service before or after school, many students could not or would not utilize it.
* The system is fluid. It is not designed as a permanent support for individual students. Students will receive the appropriate level of intervention, but only until they have acquired the intended knowledge or skill. They are then weaned from the system until they experience difficulty in the future. We strive for an easy flow of students into and out of the levels of support.
I hope this has answered some questions. As a principal, I have been very fortunate to have so many parents who are truly concerned about their children's academic future and who are willing to work with teachers to ensure that their children succeed at a high level. Please let me know if you have a specific question or concern about your child's education.
Enjoy a wonderful Thanksgiving holiday,
Deborah Omick-Haddad
41
If We Do It Right… Research has shown that if we have chosen the correct intervention (correctly identified the problem) and we are delivering the intervention consistently, 75% of the initial 20% of identified students will respond positively. Let's look at how we are doing with primary reading students…
http://slideplayer.com/slide/2381019/
But I realized the other day that this can’t be right. The Declaration says we have an inalienable right to “life, liberty, and the pursuit of happiness.” The reference to “liberty” already covers the freedom to pursue your own goals, whether that’s your own happiness or something else. So what did Jefferson mean by the right to the pursuit of happiness other than freedom from governmental restraint?
A little on-line research reveals that a number of philosophers of the time extolled the rational pursuit of “true and solid happiness.” Locke argued that, in “pursuing true happiness as our greatest good,” we are “obliged to suspend the satisfaction of our desires in particular cases.” In other words, there is a more deliberative aspect to choice than merely fulfilling our immediate desires. Locke also contended that we need to be able to avoid committing ourselves to any particular goal until we know “whether it has a tendency to, or be inconsistent with, our real happiness.” Thus, for Locke, the pursuit of happiness involves self-knowledge about what would really make us happy, the ability to discern what actions will promote that happiness, and the self-control to rein in contrary impulses. The pursuit of happiness is different from hedonism or what economists call preference satisfaction; it requires a certain kind of wisdom and character.
For at least some eighteenth century philosophers, real happiness involved a person’s pursuit of society’s happiness, not just his or her own. Either way, the sense seems different than that of homo economicus, pursuing arbitrary individual preferences by accumulating wealth. Rather, the sense seems closer to Aristotle’s view that “the happy man lives well and does well; for we have practically defined happiness as a sort of good life and good action.”
There is a growing body of happiness research showing that wealth has little relationship with happiness once a minimum threshold has been met. Poor people are less happy than the middle-class, but after that, additional wealth does not contribute much to happiness. This research also reveals that fame, like wealth, has only a fleeting impact on happiness; social ties and family are much more important. Unemployment creates great unhappiness that lasts even after the individual finds work again. Thus, some scholars argue persuasively that well-being analysis (WBA) would look much different than cost-benefit analysis.
Strikingly, unlike consumption (which is essentially an individual activity), most of the sources of happiness inherently require not just the cooperation of other specific individuals but supportive social conditions — for instance, an economy that provides employment opportunities or communities that provide the opportunity to form lasting friendships. In other words, “the pursuit of happiness” requires not just being allowed to “do your own thing”; it requires society to provide the conditions that make happiness possible.
If society has a duty to provide such conditions, that means that all of us collectively have that duty. (After all, collectively we are society.) So the inalienable right to the pursuit of happiness necessarily means the unavoidable duty (where required) to support the conditions under which others are able to pursue happiness. The flip side of the right to pursue happiness is a responsibility for maintaining a certain kind of community — thus, a degree of civil duty.
What does all this have to do with environmental law? It means that libertarian visions based purely on individual autonomy are missing the meaning of “life, liberty, and the pursuit of happiness.” Perhaps most obviously, it also means that cost-benefit analysis is the wrong way to think about social policy, because money is a poor measure of well-being. Individually and as a society, we need to be more concerned about the quality of our lives and our communities, and less about the quantity of our cash. | https://legal-planet.org/2010/07/04/some-thoughts-about-the-pursuit-of-happiness/ |
If you thought that people always celebrated birthdays as grandly as they do now, then you are wrong. People in medieval Europe started the tradition of birthday parties because they believed these would ward off evil spirits. Family members, friends, and other guests who came to the parties would bring presents, as this was also believed to ward off the spirits.
But what good are festivities without food, right? In case you were wondering, here are some of the traditional foods that people from all over the world prepare for birthday parties:
Australia
In Australia, people who celebrate their birthdays will have a birthday cake with lit candles. They will make a wish before blowing out the candles.
England
In England, the cake will have a symbolic object that foretells the future. If it has a coin, for instance, the person is said to become wealthy someday.
Ghana
Children who celebrate their birthdays will wake up to a fried patty made from mashed sweet potato and eggs as their birthday breakfast. Their guests will have a dish made from plantain during the party.
Korea
A child who is celebrating his first birthday will be dressed up and seated before some objects that include rice, fruit, money, and calligraphy brushes. The item that the child picks up will predict his future. Once the ceremony is over, the guests will have rice cakes.
Mexico
The piñata, a papier-mâché figure in the shape of an animal, is filled with candies, lollies, and other treats. The child is blindfolded and hits it until it breaks so all the kids can share and enjoy the treats.
Western Russia
The child celebrating his birthday will be given a fruit pie.
Have you been to a birthday party of someone from a different country? What birthday foods do they always have? | http://mybirthday.email/2017/08/04/6-birthday-foods-from-all-over-the-world/ |
The Flat Rocks fossil site at Inverloch is located approximately 150 km south-east of Melbourne, on the south coast of Victoria. The area has special significance in Australia’s fossil history, as Australia’s first dinosaur bone, the Cape Paterson Claw, was found at a nearby site in 1903 by William Ferguson. The currently active site was discovered in 1991 when a group of researchers from Monash University and Museum Victoria were prospecting that part of the coastline for suitable locations for potential fossil dig sites. The Dinosaur Dreaming project runs annual fossil digs there. According to the Museum Victoria website, they are no longer taking volunteers.
On a whim, I took my daughter, who is dinosaur mad, out to the region.
The Many-legged Myriapoda: Centipedes and Millipedes
by on Jun.03, 2012, under Fauna, Information, Invertebrates
The Many-legged Myriapods
Whilst centipedes and millipedes have jaw-like mandibles on their heads for feeding, like insects and crustaceans, they are classified in their own arthropod subphylum: the Myriapoda. In addition to centipedes (Class Chilopoda) and millipedes (Class Diplopoda), the Myriapoda also includes two other little-known microscopic classes: symphylans (Class Symphyla) and pauropodans (Class Pauropoda).
Myriapods are immediately recognizable by their long, segmented bodies, with each segment possessing one or two pairs of jointed legs. The name Myriapoda is derived from the Greek murias, meaning ten thousand, and pod-, meaning foot. As the name of this group suggests, these animals have a myriad of legs, but whilst nowadays a ‘myriad’ denotes something countless or extremely great in number, a myriad classically referred to a unit of ten thousand, and no myriapod even comes close to possessing this many legs. Actually, some myriapod species have as few as 10 legs in total. The record for the greatest number of legs is held by a species of millipede: Illacme plenipes, which has 750 legs. This extremely rare species of millipede is restricted to a tiny area in California. Illacme plenipes was thought to be extinct as it had not been seen for over 80 years since its initial discovery, and was only rediscovered in 2008. Illacme plenipes not only has the greatest number of legs of all myriapods, but in fact holds the world record for the greatest number of legs of any animal! During locomotion, the legs move in waves that travel down the length of the body. It’s amazing that they can travel, often considerably rapidly, without getting all those legs tangled up! Their coordination is quite remarkable.
Qantassaurus
by simon on May.24, 2012, under Dinsoaurs, Fauna, Information, Magazine, Victoria
Originally found in 1996 near Inverloch at Dinosaur Cove. The find was part of a dig by Monash University and the Museum of Victoria. An original jaw bone has provided some amazing clues to the dinosaur now called Qantassaurus intrepidus. The bones are believed to be around 115 million years old. See the original article, Volume 1, No. 2.
The jaw is slightly stubby, and the animal was likely around 2 meters in size. It ran on two legs and fed on plants.
Replicas of this dinosaur are available at Museum Victoria and the Australian Museum in Sydney. It was named Qantassaurus after the Australian airline Qantas, which had shipped fossils all over the country for a prior exhibit.
Planet Dinosaur
by sean on Apr.21, 2012, under Birds, Mammals, Media, Reptiles
I recently watched a remarkable series on dinosaurs from the BBC called “Planet Dinosaur”.
This show is a must-see for wildlife lovers because it provides such a great context for the wildlife we currently see on the planet today. In particular, questions around the evolution of birds from dinosaurs, egg laying versus birth of live young, protective adaptations, symbiotic relationships amongst species and much more.
While many dinosaurs look nothing like animals in existence today, many are remarkably similar to species still found. These include many reptiles such as crocodiles and lizards as well as aquatic animals like sharks that have changed little over millions of years.
The last ten years have seen an extraordinary number of unique fossil finds, mainly at new sites in China and Mongolia. The BBC takes this as a focal point for their presentation, relaying the magnitude of the finds through amazing recreations of the likely scenarios the dinosaurs would have been found in. These might include intra-species fighting (as evidenced by a fossil tooth of a dinosaur lodged in a member of its own species), feeding, hunting and breeding.
These recent finds have apparently turned the tables on the previous limits of dinosaur knowledge, uncovering as many new species of dinosaurs as were known beforehand. In addition, the interactions and attributes of many dinosaurs have been inferred through some remarkable fossil discoveries.
The subsequent recreations of the dinosaur world from the BBC are made all the more remarkable through the use of modern CGI graphics, which bring to life a wide mixture of dinosaurs in very realistic reproductions. Dinosaurs run like you would expect them to, and pterodactyls fly like you would imagine!
With an extraordinary ability to infer things like skin colouration (explained through pigmentation finds in fossils), the reconstructions are extremely lifelike and, one gets the impression from the authority of the show, probably very close to how these animals actually appeared.
This show is highly recommended, due both to the amazing presentation and to the realisation that the scientific substance of these recent finds needs to be made public. Like many viewers of the show, I found my knowledge of the subject matter horribly inadequate in light of the gap in my knowledge since my introduction to the dinosaur world in high school days. | http://blog.wildlifesecrets.com.au/tag/fossils/
Weather Data from the Bridge:
Observational Data:
Latitude: 55˚ 10.643′ N
Longitude: 132˚ 54.305′ W
Air Temp: 17˚C (63˚F)
Water Temp: 11˚C (52˚F)
Ocean Depth: 33 m (109 ft.)
Relative Humidity: 52%
Wind Speed: 10 kts (12 mph)
Barometer: 1,014 hPa (1,014 mbar)
Science and Technology Log:
With much of the survey team either on leave or not yet here for the next leg of the hydrographic survey, it can be easy to be lulled into the sense that not much is going on onboard the Fairweather while she is in port, but nothing could be further from the truth. Actually, having the ship docked is an important time for departments to prepare for the next mission or carry out repairs and maintenance that would be more difficult to perform or would cause delays during an active survey mission. On that note, while the Fairweather was docked it was a perfect time to see the largely unseen and unappreciated: engineering. Engineering is loud and potentially hazardous even when the engines are not running, much less when we are underway. One of the key purposes of engineering is to monitor systems on the ship to make sure many of the comforts and conveniences that we take for granted seemingly just happen. Sensors constantly monitor temperature, pressure, and other pertinent information, alerting the crew when a component drifts outside of its normal range or is not functioning properly. Catching an issue before it progresses into something that needs to be repaired is a constant goal. Monitoring in engineering includes a wide array of systems that are vital to ship operations, not just propulsion. Sanitation, heating, refrigeration, ventilation, fuel, and electric power are also monitored and regulated from engineering. Just imagine spending the day without any one of these systems, while the loss of all of them would send us reeling back to earlier seafaring days when humanity was entirely at the mercy of nature's whim.
Two diesel generators can produce enough power to power a small town. Water systems pressurize and regulate water temperatures for use throughout the ship, while filtration systems clean used water before it is released according to environmental regulations. Meanwhile, enough salt water can be converted to freshwater to meet the needs of the ship and crew. The method of freshwater production ingeniously uses scientific principles from the gas laws to our advantage: boiling off freshwater from salt water under reduced air pressure increases freshwater production while minimizing energy consumption. Steam is generated to heat the water system and provide heat for radiators throughout the ship, and of course the two large diesel engines that are used to provide propulsion for the ship are also located in engineering.
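To get a rough feel for the gas-law trick mentioned above, here is a small back-of-the-envelope sketch (my own illustration in Python with textbook Antoine-equation constants for water, not the Fairweather's actual specifications) of how far the boiling point drops when the pressure over the salt water is reduced:

```python
import math

# Antoine equation for water (approximate, valid roughly 1-100 deg C):
#   log10(P_mmHg) = A - B / (C + T_celsius)
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_c(pressure_atm):
    """Estimate the temperature (deg C) at which water boils at a given pressure (atm)."""
    p_mmhg = pressure_atm * 760.0
    return B / (A - math.log10(p_mmhg)) - C

for p in (1.0, 0.5, 0.1):
    print(f"at {p:.1f} atm, water boils near {boiling_point_c(p):.0f} deg C")

# At full atmospheric pressure the answer is about 100 deg C, but at a tenth
# of an atmosphere it is only about 46 deg C, which is why evaporating
# seawater under reduced pressure takes far less heating energy.
```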
How does one get to work in engineering onboard a ship like the Fairweather? There are several different positions in the ship's engineering department. An oiler is largely responsible for maintenance, repair, and fabrication and must pass a qualifying test for this designation focusing on boilers, diesel technology, electrical systems, and some refrigeration. Once the qualifying test is passed, the Coast Guard issues a Merchant Mariner credential. Only then can one apply for that position. Junior engineers must pass a test demonstrating that they have working knowledge of the systems involved with engineering, especially in areas of auxiliary systems and repair. Junior engineers generally need less supervision for various operations than oilers and have a greater scope of responsibility that may also include small boat systems and repair. The scale of responsibility does not stop there, but continues through Third, Second, and First Engineers, each involving a qualifying test and additional requirements for education and experience. Finally, the Chief Engineer heads the department. This too requires a qualifying test and certain experience requirements. There are two different ways in which one can progress through these levels of responsibility: attaining the formal education or documenting the job-related experience. Usually both play a role in where someone is ultimately positioned, determining their role onboard the ship as part of an engineering team.
Personal Log:
Dear Mr. Cody,
The crew is very friendly. They take care of everything that we need on our trip to Alaska. They also take care of the ship. They must have to do a lot of work to keep such a large ship going and take care of this many people on vacation at sea. (Dillion is one of my science students who went on an Alaska cruise with his family in May and will be corresponding with me about his experiences as I blog about my experiences on the Fairweather.)
Dear Dillion,
The Fairweather also has a crew that takes care of the ship and its very own fleet of boats. While in port, I worked with our deck department to get a very small sense of what they do on a day-to-day basis to keep the ship running. The pitfall of having a lot of equipment and having the capability of doing many multifaceted missions is that all of this equipment needs to be maintained, cleaned, repaired, and operated. This includes maintaining both the ship's exterior and interior, deployment and retrieval of boats, buoys, arrays, and various other sampling and sensory systems. When not assisting with carrying out a component of a mission such as launching a boat, the deck crew is often performing some sort of maintenance, standing watch, mooring and anchoring the ship, unloading and loading supplies, and stowing materials. Years of experience and whether they have a Merchant Mariner's certification determine the level of responsibility. On a survey ship, the deck department specializes in boat launches and maintenance; so, the levels of responsibility reflect that central area of concern. Beginning experience starts with general vessel assistant and ordinary seaman, progressing through able seaman with Merchant Mariner's certification and seaman surveyor or deck utility man, to boatswain group leader, and finally to chief boatswain. The chief boatswain is in charge of training and supervision regarding all of the areas pertinent to the deck department. This is a stark contrast to the deck department on the Pisces, which specialized in techniques associated with fish surveys.
When I was with the Fairweather’s deck crew, they were working on taking an old coating of grease off cables and applying a new coating back on. The cables are used to raise and lower the 28’ long hydrographic survey launches. This will be a system that will be in use throughout the next leg; so, now is a great time to clean and replace that grease! After using rags and degreasing agents to strip the old grease off, a new coating was added to the cables. The crew is always conscientious about using chemicals that are friendly to the environment and proper containment strategies to prevent runoff from the deck directly into the ocean. Deck crew need to be very flexible with the weather. Since the weather was not cooperating for painting, we moved indoors and did “heads and halls,” sweeping and mopping hallways and stairs and cleaning bathrooms. The Fairweather resembles an ant colony in its construction; so, heads and halls can be a lot of work even for a whole team of people, but as I am reminded by one of our deck crew, “Teamwork will make the dream work.” It is, indeed, teamwork that makes Fairweather’s missions, not only possible, but successful.
Did You Know?
The boiler system produces steam that provides a heat source for the water system and the heating system.
Can You Guess What This Is?
A. an ocean desalinization unit B. an oil filter C. a fuel tank D. a sewage treatment unit
The answer will be provided in the next post! | https://noaateacheratsea.blog/2016/06/08/spencer-cody-no-survey-no-problem-june-8-2016/ |
After obtaining a gift like the butterfly fairy baby, Ouyang Bing was on cloud nine.
After which, she sat quietly beside her brother, accompanying her brother to watch the New Year’s Eve shows that have been playing for over 200 years. Xue’er sat on her shoulder, curiously staring at the television.
At 9:00 PM, Ouyang Shuo received a video call. He picked up his phone to take a look, and it was actually from his little auntie. As the siblings' only relative, this little aunt had a special place in their hearts.
Little aunt was known as Lin Jing. Compared to her sister who was Ouyang Shuo's mother, she was younger by 15 years. Hence, she and Ouyang Shuo could be said to be from the same generation as she was only older than him by 6 years.
Under normal circumstances, Ouyang Shuo and the little aunt should've been very close, but it was the total opposite. It wasn’t that Ouyang Shuo didn’t want to get close to her, but she had hated men since she was born and her nephew was no exception.
It was because of this eccentricity that, after Lin Jing graduated from university, she found another girl and married her without getting her family's blessing. Ouyang Shuo's grandparents did not have a son and only had these two daughters. The two elderly didn't have good health, and after the double blow of one daughter dying in a car accident and the other undergoing a same-sex marriage, both of them passed away one after the other within two short years.
As his paternal grandparents gave birth to his father late, not long after Ouyang Shuo was born, they kicked the bucket. Hence, his relationship with his maternal grandparents was very deep. This was also the reason why he drifted away from this little aunt and kept little contact. As for Bing’er who had seen the little aunt a few times, her impression of her was even more of a blur.
Hence, to receive a call from little aunt on New Year’s Eve shocked Ouyang Shuo. After picking up the call, the image of Lin Jing appeared on the screen. In truth, she looked exactly the same as his mother, inheriting the elegance and beauty of his grandmother.
Beside his aunt sat another beauty. She looked like a city strongwoman, exuding an aura of strength and confidence. Her name was Xie Siyun and she was his little aunt's partner, and was his "Aunt in Law".
After experiencing a rebirth, Ouyang Shuo cherished his loved ones even more. In his past life, because of his stubbornness, before he'd boarded the intergalactic spacecraft, he did not think about contacting his little aunt and exchange his in-game ID. 5 years in the game, he hadn’t met his aunt even once, becoming his biggest regret.
What happened in the past had already happened; one must learn to look forward. Ouyang Shuo wasn't as cold as he had been in the past; he called Bing'er to his side and laughed, "Little Aunt, Happy New Year!"
Lin Jing was surprised by the actions of her nephew. She was already prepared for him to hang up the phone call; never did she expect for such a situation to occur. In the last life, Ouyang Shuo did hang up on her, never contacting each other ever since.
"Little Shuo, Happy New Year! The one beside you is Bing’er right? This little brat has grown up so much, she looks so much like her mother." Little aunt smiled as she said.
Only then did Bing’er remember that the woman in the video was her little aunt, immediately saying crisply, "En, Little aunt, I am Bing’er. Happy Chinese New Year!"
Hearing her master speak, the little Xue’er who was standing on her shoulder copied her and said," Happy New Year, Happy New Year!"
Xue'er's accent caused Bing'er to laugh. Bing'er held Xue'er up on her hand as if she were presenting a prize and said, "Little aunt, little aunt, this is the pet that brother gave me. Bing'er gave her a name, Xue'er. Isn't she cute?"
Although this type of smart pet wasn't very common, Lin Jing still knew of it. However, she assumed that it was a normal smart pet and didn't pay much attention to it, laughing as she said, "Cute, just like Bing'er."
Xie Siyun who was sitting beside Lin Jing recognized that Xue’er wasn’t ordinary, a flash of amazement and surprise appearing in her eyes.
After the greetings, Lin Jing recalled the reason why she’d made the call, turning to Ouyang Shuo and saying, "Little Shuo, have you heard of a game known as Earth Online?"
Ouyang Shuo was shocked. Why did little aunt bring this up? Don't tell me she knew something? It wasn't impossible; after all, after she graduated she hadn't returned to the State of Jiao and had remained in Shang Hai, where she did her university education. After marrying, she lost all contact with her family, and what she'd been doing, no one knew.
On the surface, Ouyang Shuo still kept his smile, saying with no change in emotion, "Earth Online? How special is this game that it’s worth little aunt to think about during New Year’s?"
"The exact reason little aunt cannot tell you. What I can say is that I recommend you to play it. If you need money, little aunt can transfer you some." Lin Jing was very careful and added.
Ouyang Shuo nodded his head. He could practically confirm that little aunt knew some of the hidden details of the game from certain channels. He was hesitating whether or not to tell everything to her.
He thought about it long and hard, after all the attention he was receiving and the animosity from Di Chen wasn’t what he wanted to get her into. If it got exposed, the outcome would be irreversible.
Ouyang Shuo purposely said with a relaxed tone, "No need, actually I do play this game."
"Ah? You have already started playing. That’s great, what’s your ID? Little aunt has some forces of my own in the game." Lin Jing said with surprise.
Ouyang Shuo shook his head, "There's no need, I'm used to playing alone. Why not tell me your ID instead, little aunt? If there's a time when I can't survive, I will depend on little aunt. Haha."
If Ouyang Shuo wasn't her nephew, and if she hadn't felt guilty towards him, she would have already gotten annoyed. This time, she could only bear with it, "Ok then, you are still so stubborn. Little aunt won't force you. But remember, little aunt's ID is Snow Rose; if something comes up, you must come find me."
Ouyang Shuo was shocked as the words left her mouth, "Snow-War Rose mercenary group?"
"Hmph! It’s good that you know. Let’s stop the conversation here. Bye bye!" Lin Jing's patience was already used up and she turned to Bing’er," Bing’er say bye to aunt. If I'm free I will go see you."
Bing’er obediently nodded her head and said sweetly,"Bye, little aunt."
After hanging up, Ouyang Shuo couldn’t calm himself down. He didn’t expect that his little aunt would be the vice-captain of the second-ranked mercenary group. Then Xie Siyun should be the captain, War Rose.
Fate really played a joke on him. If in his last life he didn’t hang up on little aunt, then he might have entered the game earlier. In the game, having the backing of War Rose, would he still need to wander around with Bing’er? Everything would have been very much different.
However, there were no ifs in life. This time, by accident he received such help, what kind of effect will it have on him in the game? Everything was unknown.
At the same time, between Lin Jing and Xie Siyun occurred an interesting conversation.
Seeing that Lin Jing had already hung up, the silent Xie Siyun said as if she were contemplating something, "Jing Jing, your nephew is not ordinary."
"That brat is so annoying. I originally wanted to help him, but he still dared to act cool. Saying he liked to play alone." Lin Jing was enraged.
Xie Siyun softly shook her head and smiled," You ah, you were bluffed by him and you didn’t know. Your niece's smart pet, do you know how much it costs?"
"How expensive can it be? Although it looked more exquisite, at most only 10-20 thousand!"
"You have no taste. That is the limited edition smart pet from ILAX. Its retail price is 1 million credits." Xie Siyun glanced at Lin Jing, saying calmly.
"Ah? So expensive, when did this kid get so much money? If I remembered correctly he just graduated this year!"
"That’s why I said he’s not simple. Talking back to Earth Online, how would a university graduate pay attention to such a game and be willing to spend money on a game capsule. That in itself is not ordinary. I guess he is probably a beta player like us." Xie Siyun deduced.
"Yaoyao, what you say makes sense. Hen, that brat even wants to lie to his aunt. I’m going to teach him a lesson." Lin Jing wanted to call his phone again.
Xie Siyun immediately stopped her, "Look at your nephew. He definitely isn't a clumsy person. Even when he found out you were the vice-captain of the Snow-War Rose mercenary group, he could still stop himself from telling you his ID. It's obvious that he isn't a simple solo player. If you call back now, he definitely won't tell you the truth. The way I see it, let's just observe the situation quietly."
For Lin Jing to be the vice-captain of the second biggest mercenary group in China, she obviously wasn’t a simple person. After calming herself down, she agreed that what Xie Siyun said made sense and that it wasn’t the right time to ask for all the answers.
"Okay. Let’s follow what you said. Anyways, the reason for calling him has already been achieved. This can be counted as doing my duty as an auntie. What happens after this, let’s talk about it next time!" Thinking about her parents who’d passed away, Lin Jing felt down-hearted. | http://tales-of-demons-and-gods.com/The-World-Online/1109610.html |
where epsilon0 is the permittivity of free space (the dielectric constant of the vacuum), G the gravitational constant, and r the distance between the point charges or point masses.
Both of equation (1) and (2) are mathematically equivalent, that is, they are proportional to the inverse squared distance.
Since the dimension of both of them is [Newton], the dimension of the constant parts are also identical.
[q1q2/(4 pi epsilon0)] = [G m1 m2] (3)
Since in the electromagnetism the forces are governed by the Maxwell's equations or Lorentz force, one might be seduced to think about a gravitational version of the Maxwell's equations. Are such equations known or do they exist?
The most intriguing thing is the similar role of the products q1q2 and m1m2. Do they play the same mathematical role in each mechanism? What are electric charge and mass, after all? The question should be asked many times until we can give a satisfactory answer. What will your speculation about this be, Sirs?
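As a rough numerical illustration (my own sketch with standard SI constants, not part of the original question), here is how the two inverse-square laws compare for two protons; the functional forms are identical, while the strengths differ by about 36 orders of magnitude:

```python
import math

# Compare the Coulomb and gravitational forces between two protons (SI units).
EPSILON0 = 8.854e-12   # vacuum permittivity, F/m
G        = 6.674e-11   # gravitational constant, N m^2 / kg^2
Q_PROTON = 1.602e-19   # proton charge, C
M_PROTON = 1.673e-27   # proton mass, kg

def coulomb_force(q1, q2, r):
    return q1 * q2 / (4 * math.pi * EPSILON0 * r**2)

def gravity_force(m1, m2, r):
    return G * m1 * m2 / r**2

r = 1e-10  # one angstrom, in metres
fc = coulomb_force(Q_PROTON, Q_PROTON, r)
fg = gravity_force(M_PROTON, M_PROTON, r)
print(f"Coulomb force:       {fc:.3e} N")
print(f"gravitational force: {fg:.3e} N")
print(f"ratio Fc/Fg:         {fc / fg:.3e}  (independent of r, since both go as 1/r^2)")
```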
A Pithecantropus Japonicus who is writing graffiti on the wall of a cave
March 16, 2013
Dear Sirs,
The problem is this.
According to Einstein, mass is equivalent to energy. That is
mc^2=E (4)
so that,
m=E/c^2 (5)
That is, mass is proportional to energy.
Therefore we cannot but imagine that a mathematical product of energies can generate gravitational force.
Since m1 is energy and m2 is also energy, then the product of m1 and m2 means energy squared, and this energy squared quantity becomes gravitational force.
In much the same way, electric charges q1 and q2 can generate the Coulombic force. By reverse inference, the electric charges q1 and q2 must be proportional to energy.
The conclusion is this. The electric charge might be equivalent to energy.
q=f(Energy) (6)
How do you think, Sirs? | https://communities.acs.org/t5/Science-Questions-and/Why-are-they-similar-The-Coulombic-force-and-Gravitational-force/m-p/8284 |
A new algorithm developed by IBM could double the speed of secure online communications. IBM says the combination encryption/authentication technique is particularly suited to securing Internet protocols, storage area network protocols, fiber-optic networks and e-business transactions. But analysts say the new technique needs further study.
The as-yet-unnamed security algorithm simultaneously encrypts and authenticates messages. That innovation significantly improves the speed of the security process over that of previous approaches, which perform encryption and authentication separately. The new algorithm works using symmetric cryptography, in which the same secret key -- or mathematical code -- is used to encrypt and decrypt a message. Another popular security technique, called public-key cryptography, uses two different keys, one to encrypt and one to decrypt. It is slower but is considered more secure. | https://linuxsecurity.com/news/cryptography/ibm-reveals-new-qsigncryptionq-algorithm |
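The article does not name IBM's algorithm, but the same encrypt-and-authenticate-in-one-pass idea lives on in standard authenticated-encryption modes. Purely as an illustration of that idea (this sketch uses AES-GCM via the third-party Python `cryptography` package; it is not IBM's algorithm), a single symmetric key can encrypt and authenticate a message in one call:

```python
# Combined encryption + authentication with AES-GCM (illustrative only).
# Requires the third-party package:  pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # shared secret (symmetric) key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # must be unique for every message

plaintext = b"wire transfer: $100 to account 42"
header = b"message header (authenticated but not encrypted)"

ciphertext = aesgcm.encrypt(nonce, plaintext, header)   # encrypts AND tags in one pass
recovered = aesgcm.decrypt(nonce, ciphertext, header)   # verifies the tag, then decrypts
assert recovered == plaintext
# Any tampering with the ciphertext or the header makes decrypt() raise InvalidTag.
```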
Breakfast + paleo pizza = 2 of our most favourite things haha!! Def have to try this out! e & c
Making this pizza crust is a breeze. In fact, it’s even easier than making regular, non-Paleo pizza dough. The coconut flour adds a hint of sweetness, which goes nicely with any tomato sauce that you might want to top it with. When you mix the ingredients together it forms more of a batter than a dough, so there won’t be any dough tossing with this recipe. The crust comes together when you bake it for about 20 minutes, at which point you flip it over so that it cooks evenly, and then place the toppings on it.
The coconut flour crust also goes well with breakfast pizza, which is what I made this time. I’m definitely going through a breakfast for dinner phase. The runny yolks and bacon make a delicious combination that I will definitely be using again.
For more regular pizza options, you can top it with tomato sauce, pesto, veggies, sausage, pepperoni, arugula, spinach, and the list goes on. Bring out your creative side. Or, if there is more than one person eating the pizza, have each person assemble their own desired toppings on different parts of the pizza. With this recipe for Paleo pizza crust, everybody can enjoy pizza night.
Ingredients
- For the crust
- 3 eggs
- 1 cup full-fat canned coconut milk
- 1/2 cup of coconut flour
- 2 tsp of garlic powder
- 1 tsp onion powder
- 1 tsp Italian seasoning
- 1/2 tsp baking soda
- For the breakfast pizza
- 3 strips bacon
- 1/4 cup scallions, chopped
- 1-2 tomatoes, sliced thin
- 2 cups spinach
- 4 eggs
- 1 tbsp fresh parsley, chopped
Directions
- Preheat the oven to 375 degrees F. To form the pizza dough, lightly beat the eggs and coconut milk in a bowl. Add in the coconut flour, baking soda, and seasonings and mix into a smooth batter.
- Spread the batter onto a baking sheet lined with parchment paper, using a spatula to smooth into either a circle or rectangle. Bake for 18-20 minutes or until the top is golden brown. Remove from oven. Carefully flip over.
- While the crust is baking, cook the bacon in a skillet over medium heat. Reserving the bacon fat in the pan, set the bacon aside to cool and crumble into pieces. Barely wilt the spinach in the leftover bacon fat.
- Add toppings to the baked crust. Start with bacon, tomato, spinach, and scallions. Carefully crack eggs onto the crust. Sprinkle with parsley. Bake for 12-15 minutes more, just until the egg whites have set. Slice and serve warm.
Servings
7 Comments
- Hi, one question. After it bakes on the first side and then it says to take it out and flip it over, do you put it back in to bake some more, or is that when I would put the toppings on, right after flipping it? Thank you.
- Can almond flour be substituted for the coconut flour, or can you do half and half?
- What other options are there besides coconut milk and flour? And if almond, what amount of flour, since it is less absorbent, I believe? Would oat flour work? Thanks.
- Does the fat content of the milk matter to the recipe? I just want to know, if alternatives would work out, whether they would need to be adjusted for the fat content. | https://paleogrubs.com/paleo-pizza-crust?view=print
Tourrettes-sur-Loup: situated in a quiet residential area and dominant position, beautiful stone villa with a panoramic sea view, offering: entrance hall, vast living/dining room with a fireplace, fully equipped open-plan kitchen, 3 bedrooms including a master room on the ground floor, and 2 bathrooms. Landscaped park of 8,000 m². Guest cottage. Double garage.
Contact a manager at Tranio.com, and we will find you a suitable Villa or property from among our
Villa – Tourrettes-sur-Loup, Côte d'Azur (French Riviera), France
1,280,000 €
Total area: 160 m² Land area: 8,000 m² 4 bedrooms 3 bathrooms
Features
- Built in 1990
- 2-floor building
- Total rooms: 4
- Sea view
More properties
If this property wasn’t for you, take a look at other houses, villas, cottages for sale in France.
You can see more houses, villas, cottages for sale in Tourrettes-sur-Loup elsewhere on our website. | https://tranio.com/france/adt/1564572/ |
Give that granola an upgrade with coconut and pecans! This recipe for Coconut, Pecan and Flax Seed Granola with Dried Fruit from Central Market can be whipped up several days ahead of Mother's Day. We love the idea of serving it as a parfait in a mini Mason jar (above) for a pretty presentation touch!
Servings
Ingredients
- 5 cups old-fashioned oats
- ½ cup wheat germ
- ½ cup milled flax seed
- ½ lb unsweetened dried shredded coconut
- 1 cup Sesame seeds
- 1½ cups slivered almonds
- 1½ cups chopped pecans
- ¾ cups canola oil
- ½ cup honey
- ½ cup molasses
- 1½ teaspoon salt
- 3 teaspoons Cinnamon
- 2 cups dried fruits (combo of raisins, blueberries, cranberries)
Instructions
- Preheat oven to 300 degrees.
- In a large bowl, combine oats, wheat germ, flax seed, coconut, sesame seeds and nuts, mixing well.
- In a large saucepan, combine the brown sugar, water, oil, honey, molasses, salt, and cinnamon. Heat until thoroughly mixed; do not boil.
- Pour syrup over dry ingredients, and stir until evenly coated. Spread evenly into large roasting pan.
- Bake 20 to 30 minutes, stirring occasionally. If a crunchier texture is desired, bake for an additional 10 minutes.
Recipe Notes
This recipe is provided courtesy of Central Market. | https://www.goodtaste.tv/recipe/coconut-pecan-and-flax-seed-granola-with-dried-fruit/ |
It was the best of marathons, it was the worst of marathons.
There are few moments in life when you’re given the opportunity to take part in a historic event. Being one of 50 people to run the first marathon held completely inside Fenway Park is one of them.
Race Preparation
The race consists of running 116 laps around the warning track. Each lap is roughly a fifth of a mile. Running in circles in complete flatness. For hours.
For my weekend training, I’d run over to a middle school about 4 miles from my house. I’d run 8 to 13 miles on the track, and then run home. Training was in the middle of summer, and I appreciated the water fountain. In a few weeks, I overcame the monotony by letting my mind wander, because I didn’t have to worry about avoiding cars, and felt prepared for race day.
The other preparation for this race was in the form of fundraising. To qualify for the race, each runner had to commit to raising at least $5,000 for the Red Sox Foundation – a charity that makes a difference in the lives of children, veterans, families and communities throughout New England.
For weeks, I sent emails, wrote blogs, tweeted, and posted on Facebook. I was amazed at the generosity of so many people. Through their kindness, I raised $6,766.20. (the $0.20 is because one friend donated $26.20 – reflecting the 26.2 miles of the marathon).
Race Day
In addition to being 116 laps, the race had the unique start time of 5:00pm, with a 3:30pm reporting time. The runners were assigned lockers in the Visiting Team's locker room. Then the 3 World Series trophies were brought into the locker room. We took turns posing and taking photos of each other. I'd never seen so many runners so happy before a race.
My bib number was 27, a special number in my life. My father, my mother and I were all born on the 27th. The best catcher who ever played the game (and my wife's all-time favorite) – Carlton "Pudge" Fisk – wore number 27 for the Boston Red Sox.
The race director was Dave McGillivray – the acclaimed race director for the Boston Marathon, who 39 years ago ran across the country from Medford, Oregon to Medford, Massachusetts to raise money for the Jimmy Fund. He ended his trek with a couple of laps around the Fenway Park warning track, and a dream was born. Tonight, we’d make that dream come true.
Before the start, we were allowed some time to walk around the warning track, drop off gear at the aid station and take photos. My wife was in the stands, and she texted me that my brother David was also there. Later, my nephew Jared and his girlfriend would join them. Some quick photos, and we were ready to begin.
For me, the best race weather is between 40 and 50 degrees. It's why I love fall marathons. At race start, it was 75 degrees and very humid. There was some light rain in the first few hours, but I struggled with the temperature.
My struggles were compounded by not knowing my pace. My Garmin was displaying a lot of strange paces – one second I was running 7:30 miles, and the next moment 11:00 miles. The monitor displaying our lap count short-circuited, and was out of commission for several miles. I didn’t have a time goal, so I just ran by feel.
However, I wasn’t feeling well. My body kept heating up despite staying hydrated. As I approached the halfway mark, I was already 15 minutes behind my normal 4-hour pace. I stopped and told my wife, “This is gonna be ugly.” Her response – “But you like ugly.” I ran away laughing.
Around Mile 16, the skies opened with a torrential downpour. It was just what I needed to cool down. However, it also meant the track was getting muddy. A fair trade-off.
That was also when I started using walk breaks. Run 5 laps and then walk. Which turned into run 3 laps, and then walk. Finally, run 2 laps and then walk. Relentless forward motion.
I used those walks to support – and get support from – my fellow runners. Several of us had met before for dinners, so there was already a connection. Other connections were made on the track. Complete strangers bond quickly when sharing the same challenge. We cheered on the people who were passing us, and cheered on those we were passing. It wasn’t about individual finishing times, it was about all of us finishing. And we all did.
Spectators were packed along the First base line, so we were cheered every lap. I was wearing my US Army running shirt, and many people would yell, “Go Army!”. Except for one group.
At the pre-race dinner, I met a Navy officer who was running the marathon. As I passed her family, they would “heckle me” with “Go Navy” or “Anchors Aweigh!”. I’d respond with “Huah!” or “Army Strong!” When the Navy officer started her final lap – as the first female finisher – I yelled up to them, “Go Navy!”
But my personal cheering section – my wife – meant the most. For over 5 hours, including in the pouring rain, Dolores was there for every lap. Cheering me on, waving, taking photos and making a video. There was no way that I could have done this without her.
The Results
Counting down the last 10 laps felt great. Running the last lap felt even better. Crossing the finish line at 5:07:18 felt the best. You can watch my 5 hours, complete with rainstorms, in 2 and half minutes on this YouTube video.
My slowest ever marathon time, by more than an hour. My legs were spasming, and my chafing had chafing. I was in more pain than any previous race.
But I didn’t care.
I was on the field at Fenway Park.
Thanks for reading, and thanks for your support.
Mark brings a positive, inspiring message based on actual successes in the military, corporate world, marathon running and as a consultant.
Thanks for your leadership and sharing your years of experience with us. | http://www.markfallon.com/fenway-park_marathon |
One of the largest constellations of the southern sky is Centaurus. The constellation is the home of the nearest star to our solar system, Proxima Centauri and represents Chiron the Centaur, a mythical half-man, half-horse creature of the Greek mythology that was accidentally killed by Hercules.
Main Characteristics of the Centaurus Constellation
- Abbreviation: Cen
- Symbolism: the Centaur
- Right ascension: 13 h
- Declination: −50°
- Area: 1060 sq. deg. (The 9th largest constellation by area.)
- Main stars: 11
- Brightest star: α1 Cen (−0.01m)
- Nearest star: Proxima Centauri (α Cen C)
- Distance of the nearest star: 4.24 ly, 1.30 pc
- No Messier objects
Bordering constellations
- Antlia
- Carina
- Circinus
- Crux
- Hydra
- Libra
- Lupus
- Musca
- Vela
The central constellation of the image is Centaurus.
Brightest Stars of Centaurus
Rigil Kentaurus or Alpha Centauri (foot of the Centaur) is part of a triple star system that consists of Alpha Centauri A and B, a binary pair, and Proxima Centauri. Proxima Centauri is a red dwarf at a distance of about 4.24 light-years and is thought to be the closest star to the Sun.
Agena or beta Centauri (knee of the Centaur) is a magnitude 1 blue-white giant at a distance of 525 light-years.
Hadar (ground of the Centaur) is a triple star, the 11th brightest star of the sky. Both Hadar and Alpha Centauri are called the Southern Pointer Stars since they point to the Southern Cross.
Menkent or theta Centauri (shoulder of the Centaur) is a 2.06 magnitude orange giant and the third brightest star of the constellation. Menkent's distance from Earth is 61 light-years.
Lucy or BPM 37093 is a white dwarf made of carbon atoms at a distance of 50 light-years. It is the “largest known diamond” equal to 10 billion trillion trillion carats!
Deep Sky Objects and Meteor Showers of Centaurus
DEEP SKY OBJECTS:
- Globular cluster Omega Centauri (NGC 5139)
- Radio source Centaurus A (lenticular galaxy NGC 5128)
- Galaxy ESO 325-G004
METEOR SHOWERS:
- Alpha Centaurids: Visible in early February (maximum about three meteors an hour).
- Omicron Centaurids: Visible from late January through February. The peak is in mid-February
- Theta Centaurids: Visible from late January to middle March. This is a weak meteor shower.
Mythology and History of the Constellation
A few thousand years ago, Centaurus was an equatorial constellation. However, the precession of the Earth's axis has moved it to the southern sky. Since this is a periodic phenomenon, the constellation will become visible again from both hemispheres after thousands of years. The first to mention the constellation were Eudoxus and Aratus, during the 4th and the 3rd century BCE respectively. Ptolemy was the first to catalog 37 of the constellation's stars in the 2nd century AD.
As far as mythology is concerned, centaurs were mythical creatures half-men, half-horse that lived in the region of Magnesia and Mount Pelion in Thessaly, Greece. Chiron was the wise king of the centaurs.
During a fight between Hercules and the centaurs, a poisoned arrow accidentally struck Chiron on the knee. Chiron was immortal, so the wound could only cause him eternal pain. So that Chiron would not suffer any more, Zeus and Prometheus agreed to take away his immortality and place him among the stars. The outline of Centaurus in the night sky actually resembles a centaur.
How to Find Centaurus in the Southern Sky
Some of the brightest stars in the southern sky belong to the asterisms of Centaurus and the Southern Cross. Centaurus is a clearly discernible constellation that wraps around the Southern Cross. As soon as you locate the two brightest stars Alpha and Beta Centauri, you will be able to see the rest of the constellation. It resembles a centaur facing toward Lupus the Wolf, holding a sword or a spear. The Southern Cross lies just under the belly of the centaur. With the use of a small telescope or binoculars, it is easy to distinguish the three stars of the Alpha Centauri triple system.
From these constellations, it is possible to locate other fainter constellations in the southern sky.
Sources:
- Centaurus Constellation by topastronomer.com
- Centaurus the Centaur by skyscript.co.uk: https://www.skyscript.co.uk/centaur.html
- earthsky.org
Image Credits: | https://www.brighthub.com/science/space/articles/110359/ |
Two new papers highlight promising methods for making shapeshifting structures.
Jennifer Ouellette
Luxo, Jr., Pixar’s trademark animated Luxo balanced-arm lamp, is based on a classic design known as the anglepoise lamp, invented by British designer George Carwardine in 1932. Almost ninety years later, the anglepoise lamp has helped inspire a novel approach to building multifunctional shapeshifting materials for robotics, biotechnology, and architectural applications, according to a new paper published in the Proceedings of the National Academy of Sciences.
Meanwhile, physicists at Case Western Reserve University and Tufts University have stumbled on another promising approach to creating novel shapeshifting materials. The researchers remotely manipulated the ordinarily flat surface of a liquid crystal without any kind of external stimulus (such as pressure or heat), changing its physical appearance merely with the nearby presence of a bumpy surface. It’s early days, but the researchers suggest their approach could someday enable materials that can shapeshift with the ease of The X-Men‘s Mystique. They described their work in a new paper published in the journal Physical Review Letters.
Developing novel shapeshifting materials is a very active area of research because there are so many promising applications, such as building artificial muscles—manmade materials, actuators, or similar devices that mimic the contraction, expansion, and rotation (torque) characteristics of the movement of natural muscle. For instance, in 2019, a team of Japanese researchers spiked a crystalline organic material with a polymer to make it more flexible, demonstrating their proof of concept by using their material to make an aluminum foil paper doll do sit-ups. Most artificial muscles are designed to respond to electric fields (such as electroactive polymers), changes in temperature (such as shape-memory alloys and fishing line), and changes in air pressure via pneumatics.
Later that same year, MIT scientists created a class of so-called “4D materials” that employ the same manufacturing technique as 3D printing but which are designed to deform over time in response to changes in the environment, like humidity and temperature. They’re also sometimes known as active origami or shape-morphing systems.
The MIT structures can transform into much more complicated structures than had previously been achieved, including a human face. These kinds of shapeshifting materials might one day be used to make tents that can unfold and inflate on their own, just by changing the temperature (or other ambient conditions). Other potential uses include deformable telescope lenses, stents, scaffolding for artificial tissue, and soft robotics.
T is for Totimorphic
What’s unique about the latest research from the Harvard team is that their assemblies of interlocking blocks, or cells, can take on and maintain any number of configurations; most shapeshifting materials are limited to just a handful. That’s why they are called “totimorphic” structural materials.
“Today’s shapeshifting materials and structures can only transition between a few stable configurations, but we have shown how to create structural materials that have an arbitrary range of shape-morphing capabilities,” said co-author L Mahadevan of Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS). “These structures allow for independent control of the geometry and mechanics, laying the foundation for engineering functional shapes using a new type of morphable unit cell.”
The trick to any shapeshifting material is to find the sweet spot where both rigidity and elasticity (or conformability) are optimized. If a material has too much conformability, it can’t maintain the different shapes it adopts because the configuration won’t be stable. If a material is too rigid, it won’t be able to take on new configurations at all. That’s where the anglepoise lamp comes in. The lamp head “is infinitely morphable by virtue of its having a set of opposing springs in tension that change their lengths while the total energy remains constant,” the authors wrote.
In other words, Luxo Jr.’s head will remain stable in any position because its springs will stretch and compress however they need to in order to counteract the force of gravity. The technical term is a “neutrally stable structure”: a structure in which the rigid and elastic elements are ideally balanced, enabling them to transition between an infinite number of positions or orientations while still remaining stable in all of them. Mahadevan and his colleagues essentially built an assembly of unit cells as building blocks, connected by individual switchable hinges, to get the same balance between rigidity and conformability.
“By having a neutrally stable unit cell, we can separate the geometry of the material from its mechanical response at both the individual and collective level,” said co-author Gaurav Chaudhary, a postdoctoral fellow at SEAS. “The geometry of the unit cell can be varied by changing both its overall size as well as the length of the single movable strut, while its elastic response can be changed by varying either the stiffness of the springs within the structure or the length of the struts and links.”
As a proof of concept, the team demonstrated that a single sheet of their totimorphic cells could curve up, twist into a helix, bear weight, and even morph into face-like shapes. “We show that we can assemble these elements into structures that can take on any shape with heterogeneous mechanical responses,” said co-author S. Ganga Prasath, another SEAS postdoctoral fellow. “Since these materials are grounded in geometry, they could be scaled down to be used as sensors in robotics or biotechnology or could be scaled up to be used at the architectural scale. | https://www.ava360.com/pixar-lamp-and-mystique-inspire-novel-approaches-to-shapeshifting-materials/ |
GAINESVILLE, Fla. --- For young hellbenders, choosing the right home is more than a major life decision. Their survival can depend on it.
These aquatic salamanders, natives of streams in the Ozarks and Appalachia, spend most of their life in the shadowy crevice between the underside of a rock and a river bed, picking off crayfish and, occasionally, each other.
In the first study of young hellbenders' habitats, University of Florida ecology doctoral candidate Kirsten Hecht found that larvae tend to live under small rocks, progressively moving to larger rocks as they grow. Selecting a "just right" rock - too tiny for one's bigger neighbors - could help young hellbenders avoid getting ambushed and eaten, Hecht said.
The findings could inform and improve conservation efforts, as the salamanders are in rapid decline across their range, primarily due to habitat loss and degradation.
"The ultimate goal is to restore hellbender populations so that they're self-sustaining, but that's basically impossible until we have the right habitat in place for them to survive and reproduce," Hecht said. "We know very little about the habitats of young hellbenders. Having this information can help us start thinking about these factors as we restore streams."
Hellbenders begin life as larvae less than an inch long and grow into adults that measure up to 2.5 feet. For humans, this is roughly equivalent to an average-size baby growing to be more than 31 feet tall. The dramatic size difference between young and adult hellbenders can result in cannibalism, Hecht said.
"They'll eat almost anything they can fit in their mouths," she said.
Selecting habitats of varying sizes helps hellbenders avoid competing with one another and potentially reduces cannibalism, said Hecht, who also works in the Division of Herpetology at the Florida Museum of Natural History. Her previous work showed that hellbenders also divide food resources, with larvae feeding on aquatic insects and adults eating crayfish and small fish.
While adults are often found under large boulders, little has been known about where young hellbenders shelter beyond a few anecdotal observations of larvae burrowing into gravel beds or hiding inside crevices in limestone.
Hecht and her collaborators gathered data on the homes of more than 200 hellbenders in the Little River of Tennessee, a sandstone environment where large amounts of sand make it difficult for larvae to bury into gravel.
Larvae, hellbenders about 5 inches long or smaller, lived under boulders averaging about 1.5 feet in length. Subadults, 5 to 11 inches long, tended to shelter under boulders a little over 2 feet long. Adults selected boulders with an average length of about 2.5 feet.
"It's like 'Goldilocks and the Three Bears,'" Hecht said. "They're sort of self-separating their average shelter size."
One décor preference Hecht noticed among hellbenders of all sizes? Coarse gravel flooring.
"People had previously looked at gravel and cobble but hadn't divided them into subcategories," she said. "What's neat about this is that it's not just gravel. It's this specific type of gravel. That's really important because it could relate to how much space and prey are available under the rock."
But Hecht cautioned against applying the study's findings to all streams, which can vary in geology and ecology.
"You can't necessarily take the results from this stream and assume they hold true for all streams," she said. "But you can recognize that rock and gravel size are having some type of impact. There are things we can do across the range, but the way they're implemented has to be locally determined."
Hecht said people can help protect hellbenders by leaving river rocks undisturbed; releasing hellbenders caught on fishing hooks or line; reporting hellbender sightings to a local Department of Natural Resources; and minimizing the use of pesticides and herbicides, which can affect water quality in streams.
"When you have a stream with healthy hellbenders, that means you also have good drinking water, a good trout stream - other things people tend to care about are doing well if hellbenders are doing well," she said.
###
Michael Freake of Lee University, Max Nickerson of the Florida Museum and Phil Colclough of Zoo Knoxville also co-authored the study.
Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system. | |
To improve power generation efficiency, a parameter-matching analysis method for the downhole generator with an asymmetric turbine is established on the basis of experimental design theory, response surface methodology and orthogonal experimental design. Based on calculation results from Computational Fluid Dynamics (CFD), the blade parameters of the stator and rotor which most affect the objective function are screened out by single-factor experimental design. Then, to obtain the optimal design, these parameters are analyzed and determined by the Box-Behnken design and response surface methodology. Once the approximation model of the objective function is constructed, the interplay between these parameters is discussed in this paper. Furthermore, an experimental study is conducted on the optimal design point of the new asymmetric turbine. The results show that the CFD simulations are in good accordance with the calculations based on the response surface method. The relative error of the experimental values is small compared with the predicted values, and the trends of the performance curves are almost the same. What's more, the efficiency of the new asymmetric turbine increases by 10% after the matching optimization. This shows that the design method based on Box-Behnken and orthogonal design experiments can be used in the matching analysis of an asymmetric turbine's parameters. The research in this paper provides reliable guidance for turbine blade design and the optimization of technological parameters. | https://www.ijeart.com/optimization-design-for-downhole-turbine-generator-based-on-response-surface-method
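The abstract gives no formulas, but the core of a response-surface step like the one it describes can be sketched generically: fit a second-order polynomial to responses measured at designed factor settings, then search the fitted surface for its optimum. The Python sketch below uses invented numbers purely for illustration and is not the authors' model:

```python
# Generic sketch: fit a quadratic response surface z = f(x, y) by least squares.
# The factor levels and responses below are made up for illustration only.
import numpy as np

x = np.array([-1, -1, 1, 1, -1, 1, 0, 0, 0, 0, 0], dtype=float)
y = np.array([-1, 1, -1, 1, 0, 0, -1, 1, 0, 0, 0], dtype=float)
z = np.array([0.62, 0.66, 0.70, 0.69, 0.65, 0.72, 0.64, 0.68, 0.71, 0.70, 0.71])

# Full second-order model: z ~ b0 + b1*x + b2*y + b3*x^2 + b4*y^2 + b5*x*y
X = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
coef, *_ = np.linalg.lstsq(X, z, rcond=None)
print("fitted coefficients:", np.round(coef, 4))

# The fitted polynomial can then be searched (analytically or numerically) for
# its maximum, which plays the role of the "optimal design point" above.
```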
This course focuses on statistical tools and methods necessary for the characterization and modeling of business processes (for the production of both physical goods and services). It will cover topics such as: Graphical and quantitative analysis of data, probability, random variables, probability distributions, some discrete and continuous distribution functions, behavioral patterns of processes, tools for parameter estimation and methods of statistical comparison.
Course Learning Outcomes:
After completing the course, the student must be able to:
I. Identify variables of interest associated with processes of counting and measuring in order to conduct their statistical analysis, and organize, analyze, characterize and construct different types of graphics.
II. Extract information from grouped qualitative and quantitative data sets.
III. Calculate and interpret each one of the measures of central tendency, position and variability from an ungrouped data and be able to use it for decision making.
IV. Calculate the probability of an event using different techniques such as counting techniques (permutations and combinations), probability axioms, conditional probability and independent events and/or Bayes theorem.
V. Determine the probability distribution of a discrete random variable in order to use it for decision making.
VI. Use the density or distribution function of a continuous random variable for decision making processes.
VII. Apply the properties of expected value and variance for decision making processes.
VIII. Model a discrete or continuous random variable associated experiment using various probability distributions.
IX. Use joint probability distributions for decision making processes.
X. Use the appropriate sampling distribution to calculate associated probabilities and make inference about the parameters of one or two populations.
XI. Given a statement about the parameters of one or two populations, use estimation to determine whether it is true or false.
XII. Given a statement about the parameters of one or two populations, use hypothesis testing to determine whether it is true or false.
XIII. Given a set of either discrete or continuous data, fit it to a specific probability distribution, using the Chi-squared test.
XIV. Given an independent and dependent variable, determine if it is possible to fit it to a linear regression model, after verifying the normality, constant variance and linearity assumptions.
XV. Find a regression model to generate point estimations, confidence and prediction intervals.
Use statistical software to develop statistical techniques such as descriptive analysis, estimation, hypothesis testing, regression models, and factorial analysis, among others.
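As a taste of what the software-based outcome above looks like in practice, here is a small illustrative sketch using Python's scipy (the course does not prescribe this particular package, and the data are invented): a confidence interval, a two-sample t-test, and a simple linear regression.

```python
# Illustration of three course topics with scipy (example data is made up).
import numpy as np
from scipy import stats

sample_a = np.array([5.1, 4.9, 5.4, 5.0, 5.2, 4.8, 5.3])
sample_b = np.array([4.6, 4.7, 4.9, 4.5, 4.8, 4.6, 4.7])

# 95% confidence interval for the mean of sample_a (t-based)
mean, sem = sample_a.mean(), stats.sem(sample_a)
ci = stats.t.interval(0.95, df=len(sample_a) - 1, loc=mean, scale=sem)
print("95% CI for the mean of A:", ci)

# Hypothesis test: do the two samples share the same mean?
t_stat, p_value = stats.ttest_ind(sample_a, sample_b)
print("two-sample t-test: t =", round(float(t_stat), 3), " p =", round(float(p_value), 4))

# Simple linear regression y = b0 + b1*x
x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8, 12.1])
fit = stats.linregress(x, y)
print("slope =", round(fit.slope, 3), " intercept =", round(fit.intercept, 3),
      " R^2 =", round(fit.rvalue ** 2, 4))
```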
Descriptive statistics (4 theory hours, 1 practice hour; week 1): Basic concepts of statistics. The role of statistics in Engineering and Science. Descriptive and inferential statistics.
Probability (10 theory hours, 5 practice hours; weeks 2-4): Basic concepts. Definition of probability axioms. Counting techniques (permutations and combinations). Conditional probability and independent events. Bayes theorem.
Probability distributions (10 theory hours, 5 practice hours; weeks 5-7): Discrete and continuous random variables and their probability distributions. Expected value and variance. Discrete and continuous probability distributions.
Sampling distribution (6 theory hours, 4 practice hours; weeks 8-9): Basic concepts. Distributions related to the normal distribution. Sampling distribution of the mean and proportion.
Estimation (6 theory hours, 4 practice hours; weeks 10-11): Point estimation. Interval estimation. Confidence and prediction intervals.
Hypothesis testing (12 theory hours, 5 practice hours; weeks 12-14): General concepts. Hypothesis tests. Chi-squared test. P-values.
Simple linear regression (6 theory hours, 2 practice hours; weeks 14-15): Parameter estimation. Variance analysis. Validation of assumptions. Prediction of new observations. Confidence and prediction intervals.
Course Disclaimer
Courses and course hours of instruction are subject to change.
Eligibility for courses may be subject to a placement exam and/or pre-requisites.
Please note that some courses with locals have recommended prerequisite courses. It is the student's responsibility to consult any recommended prerequisites prior to enrolling in their course. | https://www.studiesabroad.com/destinations/latin-america/colombia/barranquilla/science-technology-engineering--mathematics-stem/ibqu1220/data-analysis-for-engineering-440678 |
Definition: Depreciable cost, also called the basis for depreciation, is the amount of cost that can be depreciated on an asset over time. The depreciable cost is calculated by subtracting the salvage value of an asset from its cost.
What Does Depreciable Cost Mean?
Notice I said cost and not purchase price. The depreciable cost is not solely based on the purchase price of an asset. Other costs like repairs, upgrades, and taxes also attribute to the cost of an asset. The cost of an asset is the total price to acquire an asset and make it ready for use.
Example
Take a manufacturer for example. It purchases a large piece of machinery for $100,000 to put in its production plant. The machine is so big that it can't fit through the doors. It has to be taken apart to get into the building and reassembled in place. It costs the company $10,000 to have the machine torn down and put back together again. This cost is added to the original purchase price of the machine, bringing the total cost to $110,000.
Based on past history, management thinks this machine will probably last about 10 years and will have a salvage value of about $15,000. This means the depreciable cost would be $95,000 ($110,000 – $15,000). In other words, the company can depreciate $95,000 of the machine’s cost over time. It cannot be fully depreciated.
Managerial accountants also use the depreciable cost to compute the amount of depreciation taken each year. Straight-line depreciation is calculated by dividing the depreciable cost by the useful life of the asset. In our plant asset example, the straight-line depreciation per year would be $9,500 ($95,000 / 10 years). This means the company recognizes $9,500 of cost per year for ten years. | https://www.myaccountingcourse.com/accounting-dictionary/depreciable-cost
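The arithmetic above is simple enough to wrap in a couple of lines of code. Here is a small, hypothetical Python helper (the function names are my own) that mirrors the machine example:

```python
def depreciable_cost(total_cost, salvage_value):
    """Portion of an asset's cost that can be depreciated over its life."""
    return total_cost - salvage_value

def straight_line_depreciation(total_cost, salvage_value, useful_life_years):
    """Annual depreciation expense under the straight-line method."""
    return depreciable_cost(total_cost, salvage_value) / useful_life_years

# The machine example above: $100,000 purchase price + $10,000 installation,
# an expected $15,000 salvage value, and a 10-year useful life.
cost = 100_000 + 10_000
print(depreciable_cost(cost, 15_000))                # 95000
print(straight_line_depreciation(cost, 15_000, 10))  # 9500.0
```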
Q:
How to find a non-trivial combination of the rows and columns
Matrix $A =\begin{bmatrix}1&2&1&1&2\\0&1&1&1&2\\1&0&1&0&2\\1&2&3&0&1\end{bmatrix}$
If possible, how can I find a non-trivial linear combination of the rows equal to 0, as well as a combination of the columns equal to 0? I was thinking for the rows, maybe getting the RREF(A) and then multiplying that matrix by R1, R2, R3, and R4. Am I doing this correctly? And how would I go about finding the columns?
A:
The RREF of $A$ is
$$
\begin{bmatrix} 1 & 0 & 0 & 0 & \frac54 \\ 0 & 1 & 0 & 0 & \frac{-5}4 \\ 0 & 0 & 1 & 0 & \frac34 \\ 0 & 0 & 0 & 1 & \frac52 \end{bmatrix}
$$
This shows that the rows are linearly independent, and the fifth column $C_5$ is a linear combination of $C_1$, $\ldots$, $C_4$. Explicitly,
$$
C_5 = \frac 54 C_1 -\frac54 C_2 + \frac34 C_3 + \frac52 C_4.
$$
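If you want a quick numerical check, sympy reproduces both the RREF above and the resulting zero combination of the columns (this is only a sanity check, not part of the derivation):

```python
# Sanity check of the RREF and the column relation with sympy.
from sympy import Matrix, Rational

A = Matrix([[1, 2, 1, 1, 2],
            [0, 1, 1, 1, 2],
            [1, 0, 1, 0, 2],
            [1, 2, 3, 0, 1]])

R, pivot_cols = A.rref()
print(R)           # matches the reduced row echelon form shown above
print(pivot_cols)  # (0, 1, 2, 3): four pivots, so the four rows are independent

# Non-trivial combination of the columns equal to zero (columns indexed from 0):
#   (5/4)*C1 - (5/4)*C2 + (3/4)*C3 + (5/2)*C4 - C5 = 0
combo = (Rational(5, 4) * A[:, 0] - Rational(5, 4) * A[:, 1]
         + Rational(3, 4) * A[:, 2] + Rational(5, 2) * A[:, 3] - A[:, 4])
print(combo)       # the 4x1 zero vector
```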
| |
Organization: The conflict occurred in my workplace at a previous company, which was a small consulting firm.
The Conflict: After my former employer brought in a new employee, they decided that the manager and employee would have monthly check-in meetings to discuss progress and offer feedback. Unfortunately there was disagreement between the two sides over what to expect and how it should be done after their first meeting.
Pick a side: I focused on my ex-employer’s perspective for this assignment.
Part II – How Was it Negotiated? (approximately 1 page)
PN – Principled Negotiation was not used in this case, as it is often seen as too complex or time-consuming for smaller organizational conflicts such as this one. However, if PN had been employed, it would have greatly benefited both sides by encouraging cooperation rather than an adversarial approach, while allowing them to develop mutually beneficial solutions that respected each other's needs. It could also have resulted in better communication between them, since PN emphasizes listening actively and trying to understand where one's counterpart is coming from before presenting one's own position on the matters at hand, which promotes trust further down the line and tends to prove a successful tactic in the end.
Principles – While PN wasn't employed directly in this case, aspects of its four main principles were still utilized during the negotiation process. There were attempts to separate the people from the problem and to focus attention on the facts themselves instead of excessively personalizing the situation, which further helped emphasize the importance of the win-win solution both sides wanted to achieve in the long run. It is always recommended to accompany such dialogues by keeping an open mind, listening to the other's remarks, and thinking through the issues carefully in order to reach a consensus, but neither party was able to commit fully to doing so. The result was a stalemate without anyone winning or losing; the exchanges kept going back and forth until the subject was ultimately dropped entirely, all due to an unwillingness to set the respective egos aside. The important thing is to try to resurrect the relationship and give the benefit of the doubt: no matter what happens, the outcome does not have to signify a death sentence or harm for either of the involved parties, and the relationship can continue as pleasantly as possible while leaving room for both the individuals and the organizations to grow on a larger scale.
BATNA – There was no pre-determined BATNA present in this particular case; however, one should come prepared even if the situation never forces one's hand. In a practical sense, preparation prevents details from slipping through the cracks, and looking ahead to expected situations and adjusting accordingly allows one to keep control and properly cover the necessary bases. Knowing the alternatives at hand wouldn't and shouldn't change the stance already taken, but it preserves dignity and safety before investing precious resources at risk; the best defense is a strong offense. Given the circumstances, the pitfalls described can be avoided moving forward by performing an exhaustive analysis of potential high-reward, low-cost plans. The same goes for being armed with a deep understanding of the counterpart's capabilities: one should not rely heavily on luck or cut corners. Understanding the preferences stated and implied during the conversation will make a difference in the types of rewards gained and the reduction of losses suffered thereafter. In short, one needs to prepare alternative courses of action so that, when they are needed, they are a dependable, solid choice. | https://essayshelponline.com/2023/01/14/was-the-principled-negotiation-pn-method-used-in-your-case-if-so-how-if-not-could-it-have-been-implemented/
Framed in: A blooming flower at any point of time is a sight to savor. The freshness associated with it is unexplainable. With those tiny droplets of water sticking on to it, the whole world looks new. I don't know whether it is the blooming flower or those droplets of water on its petals that add freshness to the flower, but it is true that those tiny droplets of water can make almost everything look fresh and new. Newness is an ineluctable part of our life. Anything new is interesting and makes us curious. A new book, a new dress, a new pen and even a new word that we have learnt is exciting. But as nobody can stop the ticking of the clock, everything grows old and makes us lose interest. Once we fail to find newness in the things we do, life becomes a drab bag of routines and we start to think that life is boring to the core. We should be able to safeguard freshness in life. Just as those tiny droplets of water add freshness to almost everything, we should be able to discover that tiny newness in everything we do and everything we see. If there is something new to do every day, life never goes dull. And this world is so vast that you will never run out of newness. You just have to put in a little extra effort to discover that newness in everything you do. Everything will have something new in it that we would otherwise have missed: the music of the rain, the missed sentences in a poem, the new tastes in mom's same old menu and so on. Once we are able to discover them, life will turn out to be a deep meditation where you will rediscover yourself and be reborn every second. What is fresher than a birth? It's always a new start! Always be a newborn! Let's start this life anew every second by discovering those little droplets of newness and live life to its fullest extent!
Framed out: This photograph was clicked during one of those dull days in Munnar, during my estate days. It was a period when I considered myself creatively dead, as there was nothing to do that excited me and my thoughts were full of logistics and mathematics. But there was this fire to be different and to light up the creative and romantic parts of the brain. Whenever I could, I used to wander around on my 1962 model Royal Enfield with my Yashica to do what I liked the most: clicking pictures. But time was scarce there, as I was caught in the web of all those nuisances associated with a planter's life, and I confess I failed to find newness in everything I did. But still I safeguarded some of the freshness as far as I could. This picture is a result of that. I was returning to the field that morning after breakfast and, as always, was deep in despair over the usual routine affairs. When I was about to start my Bullet, this flower caught my attention. Together with the diffused rays of the early morning sun and the dew drops settled on it, it aroused me. I grabbed my Yashica FX3 loaded with Fuji 200 and clicked this picture. This frame still evokes the smell of the dry days I spent in that estate! | https://sajeeshrajendran.in/2011/10/26/tiny-droplets-of-newness/
TECHNICAL FIELD
The present disclosure relates to virtualization, and more particularly, to resiliency of computing environments.
BACKGROUND
Many organizations are now using application and/or desktop virtualization to provide a more flexible option to address the varying needs of their users. In desktop virtualization, a user's operating system, applications, and/or user settings may be separated from the user's physical smartphone, laptop, or desktop computer. Using client-server technology, a “virtualized desktop” may be stored in and administered by a remote server, rather than in the local storage of a client computing device.
There are several different types of desktop virtualization systems. As an example, Virtual Desktop Infrastructure (VDI) refers to the process of running a user desktop and/or application inside a virtual machine that resides on a server. Virtualization systems may also be implemented in a cloud computing environment in which a pool of computing desktop virtualization servers, storage disks, networking hardware, and other physical resources may be used to provision virtual desktops, and/or provide access to shared applications.
SUMMARY
A client device includes a plurality of resource caches, and a processor coupled to the plurality of resource caches. The processor is configured to receive resources from a plurality of different resource feeds, and cache user interfaces (UI) of the resources from the plurality of different resource feeds, with at least one resource feed having a resource cache separate from the resource cache of the other resource feeds. Statuses of the resource feeds are determined, with at least one status indicating availability of the at least one resource feed having the separate resource cache. The processor retrieves, for display, UI elements from the separate resource cache in response to the at least one resource feed associated with the separate resource cache not being available.
Each resource feed may have a respective resource cache separate from the resource caches of the other resource feeds. The processor may be further configured to determine the status of each resource feed, and in response to the resource feeds that are not available, retrieve for display the UI elements from the respective resource caches of the resource feeds that are not available.
When the at least one resource feed having the separate resource cache is not available and the at least one resource associated therewith is leasable, the processor may be further configured to display the UI elements.
When the at least one resource feed having the separate resource cache is not available and the at least one resource associated therewith is not leasable, the processor may be further configured to display the UI elements as at least one of unavailable, unusable, grayed-out and not launchable.
The processor may be further configured to determine static assets and dynamic assets in the resources of the plurality of different resource feeds. The static assets include application code. The dynamic assets include at least one of application icons, desktop icons, file icons, resource names, and notifications. The resource names may be published application or desktop names, for example.
The plurality of resource caches may comprise a static assets resource cache and a plurality of dynamic assets resource caches, with at least one of the dynamic assets resource caches corresponding to a particular resource feed. The processor may be further configured to cache the static assets in the static assets resource cache, and cache the dynamic assets in the plurality of dynamic assets resource caches.
The processor may be further configured to determine a status of the client device having a network connection to cloud services, and display an offline banner notifying a user that the client device is off-line and the resources may not be available in response to there being no network connection.
The processor may be further configured to determine a health status of a cloud service the client device is connected to, and display an offline banner notifying a user that the client device is off-line and the resources may not be available in response to the health status of the cloud service indicating that the cloud service is unavailable.
A cloud-based authentication of the user to launch a particular resource may not be available in response to there being no network connection. The processor may be further configured to prompt the user for local authentication for the particular resource to be launched, with the local authentication including at least one of a local personal identification number (PIN) and biometrics.
At least one of the resources may require user authentication before launching the resource, with the authentication being provided by using an authentication protocol or by using a connection lease. The processor may be configured to start with using the authentication protocol first before falling back to the connection lease in response to the resource feed providing the resource to be launched being available, and start with using the connection lease first before falling back to the authentication protocol in response to the resource feed providing the resource to be launched not being available.
Another aspect is directed to a method comprising receiving resources from a plurality of different resource feeds, and caching user interfaces (UI) of the resources from the plurality of different resource feeds, with at least one resource feed having a resource cache separate from the resource cache of the other resource feeds. The method further includes determining status of the resource feeds, with at least one status indicating availability of the at least one resource feed having the separate resource cache. UI elements from the separate resource cache are retrieved for display in response to the at least one resource feed associated with the separate resource cache not being available.
Yet another aspect is directed to a computing system comprising a server configured to receive resources from a plurality of different resource feeds, and a client device as defined above.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic block diagram of a network environment of computing devices in which various aspects of the disclosure may be implemented.
FIG. 2 is a schematic block diagram of a computing device useful for practicing an embodiment of the client machines or the remote machines illustrated in FIG. 1.
FIG. 3 is a schematic block diagram of a cloud computing environment in which various aspects of the disclosure may be implemented.
FIG. 4 is a schematic block diagram of desktop, mobile and web based devices operating a workspace app in which various aspects of the disclosure may be implemented.
FIG. 5 is a schematic block diagram of a workspace network environment of computing devices in which various aspects of the disclosure may be implemented.
FIG. 6 is a schematic block diagram of a computing system providing a connection lease architecture for accessing virtual computing sessions in which various aspects of the disclosure may be implemented.
FIG. 7 is a schematic block diagram of a computing system providing workspace resiliency with multi-feed status resource caching in which various aspects of the disclosure may be implemented.
FIG. 8 is a screenshot providing user interface (UI) elements for a resource feed down scenario along with resource feeds that are available to the client device illustrated in FIG. 7.
FIG. 9 is a schematic block diagram of a web browser with a progressive web app (PWA) service worker used to cache user interface (UI) of a web application in which various aspects of the disclosure may be implemented.
FIG. 10 is a screenshot providing a message that no applications are available based on a resource feed not being available to the web browser illustrated in FIG. 9.
FIG. 11 is a screenshot providing a grayed-out application icon and a message that the application is currently unavailable since the application is not leasable to the client device illustrated in FIG. 7.
FIG. 12 is a more detailed schematic block diagram of the client device illustrated in FIG. 7.
FIG. 13 is a sequence diagram of resource enumeration of the resource feeds for the computing system illustrated in FIG. 7.
FIG. 14 is a sequence diagram of updating the multi-feed resource cache for the computing system illustrated in FIG. 7.
FIG. 15 is a sequence diagram of resource rendering from the multi-feed resource cache for the computing system illustrated in FIG. 7.
FIG. 16 is a sequence diagram of resource launch optimization for the computing system illustrated in FIG. 7.
FIG. 17 is a flowchart illustrating a method for operating the client device illustrated in FIG. 7.
DETAILED DESCRIPTION
The present description is made with reference to the accompanying drawings, in which exemplary embodiments are shown. However, many different embodiments may be used, and thus the description should not be construed as limited to the particular embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. Like numbers refer to like elements throughout, and prime notation is used to indicate similar elements in different embodiments.
In desktop virtualization, a user of a client device accesses a workspace. The workspace may include, for example, virtual apps and desktops from the cloud, virtual apps and desktops from on-premises, endpoint management services, content collaboration services, and SaaS apps.
Connection leases can be used to authorize the user to access the resources, and to provide the ability to launch a resource as long as the user can click on a visualization of the resource. The visualization is the user interface (UI) of the resource, such as an icon.
Workspace resiliency with connection leases requires not only connection leases containing pre-defined resource entitlements but also resilient UI that allows the user to see and interface with the UI assets in an offline mode or in a cloud outage mode. In an offline mode, there is no Internet connection which means there is no network connection to cloud services. In a cloud outage mode, there is an Internet connection providing a network connection to cloud services, but particular services within the cloud services may be experiencing complete or partial outage.
Currently, UI caching is achieved via progressive web app (PWA) service workers. Service workers are scripts that run in the background in the user's browser. Service workers enable applications to control network requests, cache those requests to improve performance, and provide offline access to cached content.
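For illustration only (this is not part of the claimed subject matter), a minimal service worker of this kind might pre-cache static UI assets at install time and answer matching requests cache-first; the cache name and asset list below are hypothetical.

```typescript
// Minimal service-worker sketch (illustrative only): pre-cache static UI assets
// on install and serve them cache-first, falling back to the network.
const STATIC_CACHE = 'workspace-static-v1';        // hypothetical cache name
const STATIC_ASSETS = ['/index.html', '/app.js'];  // hypothetical asset list

self.addEventListener('install', (event: any) => {
  // Populate the static cache before this service worker takes control.
  event.waitUntil(caches.open(STATIC_CACHE).then((cache) => cache.addAll(STATIC_ASSETS)));
});

self.addEventListener('fetch', (event: any) => {
  // Cache-first: answer from the cache when possible, otherwise go to the network.
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```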
A drawback of progressive web app caching is that the progressive web app is not aware of the granularity of different workspace resource feeds. In other words, progressive web app caching is not aware of the existence of multiple resource feeds with individual independent health status. As a result, if a single resource feed is down or intermittently unhealthy (e.g. returns errors or empty resource enumeration results under load), but otherwise network connectivity to the workspace is present, then the client device still consumes resources from the network but the resources from the unavailable resource feed disappear as opposed to being presented from previously populated cache.
The techniques and teachings of the present disclosure provide the ability for the client device to recognize when a specific resource feed is down and retrieve the respective UI assets from the cache. This enables workspace resiliency on a per feed basis without compromising security and user experience.
Referring initially to FIG. 1, a non-limiting network environment 10 in which various aspects of the disclosure may be implemented includes one or more client machines 12A-12N, one or more remote machines 16A-16N, one or more networks 14, 14′, and one or more appliances 18 installed within the computing environment 10. The client machines 12A-12N communicate with the remote machines 16A-16N via the networks 14, 14′.
In some embodiments, the client machines 12A-12N communicate with the remote machines 16A-16N via an intermediary appliance 18. The illustrated appliance 18 is positioned between the networks 14, 14′ and may also be referred to as a network interface or gateway. In some embodiments, the appliance 108 may operate as an application delivery controller (ADC) to provide clients with access to business applications and other data deployed in a data center, the cloud, or delivered as Software as a Service (SaaS) across a range of client devices, and/or provide other functionality such as load balancing, etc. In some embodiments, multiple appliances 18 may be used, and the appliance(s) 18 may be deployed as part of the network 14 and/or 14′.
The client machines 12A-12N may be generally referred to as client machines 12, local machines 12, clients 12, client nodes 12, client computers 12, client devices 12, computing devices 12, endpoints 12, or endpoint nodes 12. The remote machines 16A-16N may be generally referred to as servers 16 or a server farm 16. In some embodiments, a client device 12 may have the capacity to function as both a client node seeking access to resources provided by a server 16 and as a server 16 providing access to hosted resources for other client devices 12A-12N. The networks 14, 14′ may be generally referred to as a network 14. The networks 14 may be configured in any combination of wired and wireless networks.
A server 16 may be any server type such as, for example: a file server; an application server; a web server; a proxy server; an appliance; a network appliance; a gateway; an application gateway; a gateway server; a virtualization server; a deployment server; a Secure Sockets Layer Virtual Private Network (SSL VPN) server; a firewall; a web server; a server executing an active directory; a cloud server; or a server executing an application acceleration program that provides firewall functionality, application functionality, or load balancing functionality.
A server 16 may execute, operate or otherwise provide an application that may be any one of the following: software; a program; executable instructions; a virtual machine; a hypervisor; a web browser; a web-based client; a client-server application; a thin-client computing client; an ActiveX control; a Java applet; software related to voice over internet protocol (VoIP) communications like a soft IP telephone; an application for streaming video and/or audio; an application for facilitating real-time-data communications; a HTTP client; a FTP client; an Oscar client; a Telnet client; or any other set of executable instructions.
In some embodiments, a server 16 may execute a remote presentation services program or other program that uses a thin-client or a remote-display protocol to capture display output generated by an application executing on a server 16 and transmit the application display output to a client device 12.
In yet other embodiments, a server 16 may execute a virtual machine providing, to a user of a client device 12, access to a computing environment. The client device 12 may be a virtual machine. The virtual machine may be managed by, for example, a hypervisor, a virtual machine manager (VMM), or any other hardware virtualization technique within the server 16.
In some embodiments, the network 14 may be: a local-area network (LAN); a metropolitan area network (MAN); a wide area network (WAN); a primary public network 14; and a primary private network 14. Additional embodiments may include a network 14 of mobile telephone networks that use various protocols to communicate among mobile devices. For short range communications within a wireless local-area network (WLAN), the protocols may include 802.11, Bluetooth, and Near Field Communication (NFC).
FIG. 2 depicts a block diagram of a computing device 20 useful for practicing an embodiment of client devices 12, appliances 18 and/or servers 16. The computing device 20 includes one or more processors 22, volatile memory 24 (e.g., random access memory (RAM)), non-volatile memory 30, user interface (UI) 38, one or more communications interfaces 26, and a communications bus 48.
The non-volatile memory 30 may include: one or more hard disk drives (HDDs) or other magnetic or optical storage media; one or more solid state drives (SSDs), such as a flash drive or other solid-state storage media; one or more hybrid magnetic and solid-state drives; and/or one or more virtual storage volumes, such as a cloud storage, or a combination of such physical storage volumes and virtual storage volumes or arrays thereof.
The user interface 38 may include a graphical user interface (GUI) 40 (e.g., a touchscreen, a display, etc.) and one or more input/output (I/O) devices 42 (e.g., a mouse, a keyboard, a microphone, one or more speakers, one or more cameras, one or more biometric scanners, one or more environmental sensors, and one or more accelerometers, etc.).
The non-volatile memory 30 stores an operating system 32, one or more applications 34, and data 36 such that, for example, computer instructions of the operating system 32 and/or the applications 34 are executed by processor(s) 22 out of the volatile memory 24. In some embodiments, the volatile memory 24 may include one or more types of RAM and/or a cache memory that may offer a faster response time than a main memory. Data may be entered using an input device of the GUI 40 or received from the I/O device(s) 42. Various elements of the computer 20 may communicate via the communications bus 48.
The illustrated computing device 20 is shown merely as an example client device or server, and may be implemented by any computing or processing environment with any type of machine or set of machines that may have suitable hardware and/or software capable of operating as described herein.
The processor(s) 22 may be implemented by one or more programmable processors to execute one or more executable instructions, such as a computer program, to perform the functions of the system. As used herein, the term “processor” describes circuitry that performs a function, an operation, or a sequence of operations. The function, operation, or sequence of operations may be hard coded into the circuitry or soft coded by way of instructions held in a memory device and executed by the circuitry. A processor may perform the function, operation, or sequence of operations using digital values and/or using analog signals.
In some embodiments, the processor can be embodied in one or more application specific integrated circuits (ASICs), microprocessors, digital signal processors (DSPs), graphics processing units (GPUs), microcontrollers, field programmable gate arrays (FPGAs), programmable logic arrays (PLAs), multi-core processors, or general-purpose computers with associated memory.
The processor 22 may be analog, digital or mixed-signal. In some embodiments, the processor 22 may be one or more physical processors, or one or more virtual (e.g., remotely located or cloud) processors. A processor including multiple processor cores and/or multiple processors may provide functionality for parallel, simultaneous execution of instructions or for parallel, simultaneous execution of one instruction on more than one piece of data.
The communications interfaces 26 may include one or more interfaces to enable the computing device 20 to access a computer network such as a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or the Internet through a variety of wired and/or wireless connections, including cellular connections.
In described embodiments, the computing device 20 may execute an application on behalf of a user of a client device. For example, the computing device 20 may execute one or more virtual machines managed by a hypervisor. Each virtual machine may provide an execution session within which applications execute on behalf of a user or a client device, such as a hosted desktop session. The computing device 20 may also execute a terminal services session to provide a hosted desktop environment. The computing device 20 may provide access to a remote computing environment including one or more applications, one or more desktop applications, and one or more desktop sessions in which one or more applications may execute.
An example virtualization server 16 may be implemented using Citrix Hypervisor provided by Citrix Systems, Inc., of Fort Lauderdale, Fla. (“Citrix Systems”). Virtual app and desktop sessions may further be provided by Citrix Virtual Apps and Desktops (CVAD), also from Citrix Systems. Citrix Virtual Apps and Desktops is an application virtualization solution that enhances productivity with universal access to virtual sessions including virtual app, desktop, and data sessions from any device, plus the option to implement a scalable VDI solution. Virtual sessions may further include Software as a Service (SaaS) and Desktop as a Service (DaaS) sessions, for example.
FIG. 3
50
50
Referring to , a cloud computing environment is depicted, which may also be referred to as a cloud environment, cloud computing or cloud network. The cloud computing environment can provide the delivery of shared computing services and/or resources to multiple users or tenants. For example, the shared resources and services can include, but are not limited to, networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, databases, software, hardware, analytics, and intelligence.
50
52
52
54
54
52
52
50
50
50
52
52
In the cloud computing environment , one or more clients A-C (such as those described above) are in communication with a cloud network . The cloud network may include backend platforms, e.g., servers, storage, server farms or data centers. The users or clients A-C can correspond to a single organization/tenant or multiple organizations/tenants. More particularly, in one example implementation the cloud computing environment may provide a private cloud serving a single organization (e.g., enterprise cloud). In another example, the cloud computing environment may provide a community or public cloud serving multiple organizations/tenants. In still further embodiments, the cloud computing environment may provide a hybrid cloud that is a combination of a public cloud and a private cloud. Public clouds may include public servers that are maintained by third parties to the clients A-C or the enterprise/tenant. The servers may be located off-site in remote geographical locations or otherwise.
50
52
52
50
52
52
50
52
50
The cloud computing environment can provide resource pooling to serve multiple users via clients A-C through a multi-tenant environment or multi-tenant model with different physical and virtual resources dynamically assigned and reassigned responsive to different demands within the respective environment. The multi-tenant environment can include a system or architecture that can provide a single instance of software, an application or a software application to serve multiple users. In some embodiments, the cloud computing environment can provide on-demand self-service to unilaterally provision computing capabilities (e.g., server time, network storage) across a network for multiple clients A-C. The cloud computing environment can provide an elasticity to dynamically scale out or scale in responsive to different demands from one or more clients . In some embodiments, the computing environment can include or provide monitoring services to monitor, control and/or generate reports corresponding to the provided shared services and resources.
50
56
58
60
62
In some embodiments, the cloud computing environment may provide cloud-based delivery of different types of cloud computing services, such as Software as a service (SaaS) , Platform as a Service (PaaS) , Infrastructure as a Service (IaaS) , and Desktop as a Service (DaaS) , for example. IaaS may refer to a user renting the use of infrastructure resources that are needed during a specified time period. IaaS providers may offer storage, networking, servers or virtualization resources from large pools, allowing the users to quickly scale up by accessing more resources as needed. Examples of IaaS include AMAZON WEB SERVICES provided by Amazon.com, Inc., of Seattle, Wash., RACKSPACE CLOUD provided by Rackspace US, Inc., of San Antonio, Tex., Google Compute Engine provided by Google Inc. of Mountain View, Calif., or RIGHTSCALE provided by RightScale, Inc., of Santa Barbara, Calif.
PaaS providers may offer functionality provided by IaaS, including, e.g., storage, networking, servers or virtualization, as well as additional resources such as, e.g., the operating system, middleware, or runtime resources. Examples of PaaS include WINDOWS AZURE provided by Microsoft Corporation of Redmond, Wash., Google App Engine provided by Google Inc., and HEROKU provided by Heroku, Inc. of San Francisco, Calif.
SaaS providers may offer the resources that PaaS provides, including storage, networking, servers, virtualization, operating system, middleware, or runtime resources. In some embodiments, SaaS providers may offer additional resources including, e.g., data and application resources. Examples of SaaS include GOOGLE APPS provided by Google Inc., SALESFORCE provided by Salesforce.com Inc. of San Francisco, Calif., or OFFICE 365 provided by Microsoft Corporation. Examples of SaaS may also include data storage providers, e.g. DROPBOX provided by Dropbox, Inc. of San Francisco, Calif., Microsoft ONEDRIVE provided by Microsoft Corporation, Google Drive provided by Google Inc., or Apple ICLOUD provided by Apple Inc. of Cupertino, Calif.
Similar to SaaS, DaaS (which is also known as hosted desktop services) is a form of virtual desktop infrastructure (VDI) in which virtual desktop sessions are typically delivered as a cloud service along with the apps used on the virtual desktop. Citrix Cloud is one example of a DaaS delivery platform. DaaS delivery platforms may be hosted on a public cloud computing infrastructure such as AZURE CLOUD from Microsoft Corporation of Redmond, Washington (herein “Azure”), or AMAZON WEB SERVICES provided by Amazon.com, Inc., of Seattle, Wash. (herein “AWS”), for example. In the case of Citrix Cloud, Citrix Workspace app may be used as a single-entry point for bringing apps, files and desktops together (whether on-premises or in the cloud) to deliver a unified experience.
FIG. 4
70
70
70
70
The unified experience provided by the Citrix Workspace app will now be discussed in greater detail with reference to . The Citrix Workspace app will be generally referred to herein as the workspace app . The workspace app is how a user gets access to their workspace resources, one category of which is applications. These applications can be SaaS apps, web apps or virtual apps. The workspace app also gives users access to their desktops, which may be a local desktop or a virtual desktop. Further, the workspace app gives users access to their files and data, which may be stored in numerous repositories. The files and data may be hosted on Citrix ShareFile, hosted on an on-premises network file server, or hosted in some other cloud storage provider, such as Microsoft OneDrive or Google Drive Box, for example.
70
70
70
72
70
74
70
70
76
To provide a unified experience, all of the resources a user requires may be located and accessible from the workspace app . The workspace app is provided in different versions. One version of the workspace app is an installed application for desktops , which may be based on Windows, Mac or Linux platforms. A second version of the workspace app is an installed application for mobile devices , which may be based on iOS or Android platforms. A third version of the workspace app uses a hypertext markup language (HTML) browser to provide a user access to their workspace environment. The web version of the workspace app is used when a user does not want to install the workspace app or does not have the rights to install the workspace app, such as when operating a public kiosk .
70
72
74
76
72
74
76
Each of these different versions of the workspace app may advantageously provide the same user experience. This advantageously allows a user to move from client device to client device to client device in different platforms and still receive the same user experience for their workspace. The client devices , and are referred to as endpoints.
70
70
80
90
80
90
80
90
As noted above, the workspace app supports Windows, Mac, Linux, iOS, and Android platforms as well as platforms with an HTML browser (HTML5). The workspace app incorporates multiple engines - allowing users access to numerous types of app and data resources. Each engine - optimizes the user experience for a particular resource. Each engine - also provides an organization or enterprise with insights into user activities and potential security threats.
80
70
70
An embedded browser engine keeps SaaS and web apps contained within the workspace app instead of launching them on a locally installed and unmanaged browser. With the embedded browser, the workspace app is able to intercept user-selected hyperlinks in SaaS and web apps and request a risk analysis before approving, denying, or isolating access.
82
82
82
82
82
A high definition experience (HDX) engine establishes connections to virtual browsers, virtual apps and desktop sessions running on either Windows or Linux operating systems. With the HDX engine , Windows and Linux resources run remotely, while the display remains local, on the endpoint. To provide the best possible user experience, the HDX engine utilizes different virtual channels to adapt to changing network conditions and application requirements. To overcome high-latency or high-packet loss networks, the HDX engine automatically implements optimized transport protocols and greater compression algorithms. Each algorithm is optimized for a certain type of display, such as video, images, or text. The HDX engine identifies these types of resources in an application and applies the most appropriate algorithm to that section of the screen.
84
84
70
For many users, a workspace centers on data. A content collaboration engine allows users to integrate all data into the workspace, whether that data lives on-premises or in the cloud. The content collaboration engine allows administrators and users to create a set of connectors to corporate and user-specific data storage locations. This can include OneDrive, Dropbox, and on-premises network file shares, for example. Users can maintain files in multiple repositories and allow the workspace app to consolidate them into a single, personalized library.
86
86
70
70
86
A networking engine identifies whether or not an endpoint or an app on the endpoint requires network connectivity to a secured backend resource. The networking engine can automatically establish a full VPN tunnel for the entire endpoint device, or it can create an app-specific μ-VPN connection. A μ-VPN defines what backend resources an application and an endpoint device can access, thus protecting the backend infrastructure. In many instances, certain user activities benefit from unique network-based optimizations. If the user requests a file copy, the workspace app can automatically utilize multiple network connections simultaneously to complete the activity faster. If the user initiates a VoIP call, the workspace app improves its quality by duplicating the call across multiple network connections. The networking engine uses only the packets that arrive first.
88
88
An analytics engine reports on the user's device, location and behavior, where cloud-based services identify any potential anomalies that might be the result of a stolen device, a hacked identity or a user who is preparing to leave the company. The information gathered by the analytics engine protects company assets by automatically implementing counter-measures.
90
70
70
A management engine keeps the workspace app current. This not only provides users with the latest capabilities, but also includes extra security enhancements. The workspace app includes an auto-update service that routinely checks and automatically deploys updates based on customizable policies.
Referring now to FIG. 5, a workspace network environment 100 providing a unified experience to a user based on the workspace app 70 will be discussed. The desktop, mobile and web versions of the workspace app 70 all communicate with the workspace experience service 102 running within the Citrix Cloud 104. The workspace experience service 102 then pulls in all the different resource feeds via a resource feed micro-service 108. That is, all the different resources from other services running in the Citrix Cloud 104 are pulled in by the resource feed micro-service 108. The different services may include a virtual apps and desktop service 110, a secure browser service 112, an endpoint management service 114, a content collaboration service 116, and an access control service 118. Any service that an organization or enterprise subscribes to is automatically pulled into the workspace experience service 102 and delivered to the user's workspace app 70.
In addition to cloud feeds 120, the resource feed micro-service 108 can pull in on-premises feeds 122. A cloud connector 124 is used to provide virtual apps and desktop deployments that are running in an on-premises data center. Desktop virtualization may be provided by Citrix virtual apps and desktops 126, Microsoft RDS 128 or VMware Horizon 130, for example. In addition to cloud feeds 120 and on-premises feeds 122, device feeds 132 from Internet of Things (IoT) devices 134, for example, may be pulled in by the resource feed micro-service 108. Site aggregation is used to tie the different resources into the user's overall workspace experience.
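As a rough, hedged sketch of this aggregation pattern (not the actual resource feed micro-service 108), enumeration results from several feeds can be gathered in parallel while tolerating feeds that fail to respond; the function and field names are illustrative only.

```typescript
// Hedged sketch: aggregate enumeration results from several resource feeds,
// dropping only the items of feeds that fail or time out.
async function aggregateFeeds<T>(
  feeds: { id: string; enumerate: () => Promise<T[]> }[],
): Promise<T[]> {
  // Query every feed in parallel; a rejected promise only removes that feed's items.
  const results = await Promise.allSettled(feeds.map((feed) => feed.enumerate()));
  return results.flatMap((result) => (result.status === 'fulfilled' ? result.value : []));
}
```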
The cloud feeds 120, on-premises feeds 122 and device feeds 132 each provide the user's workspace experience with a different and unique type of application. The workspace experience can support local apps, SaaS apps, virtual apps and desktops, browser apps, as well as storage apps. As the feeds continue to increase and expand, the workspace experience is able to include additional resources in the user's overall workspace. This means a user will be able to get to every single application that they need access to.
Still referring to the workspace network environment 100, a series of events will be described on how a unified experience is provided to a user. The unified experience starts with the user using the workspace app 70 to connect to the workspace experience service 102 running within the Citrix Cloud 104, and presenting their identity (event 1). The identity includes a user name and password, for example.
The workspace experience service 102 forwards the user's identity to an identity micro-service 140 within the Citrix Cloud 104 (event 2). The identity micro-service 140 authenticates the user to the correct identity provider 142 (event 3) based on the organization's workspace configuration. Authentication may be based on an on-premises active directory 144 that requires the deployment of a cloud connector 146. Authentication may also be based on Azure Active Directory 148 or even a third party identity provider 150, such as Citrix ADC or Okta, for example.
Once authorized, the workspace experience service 102 requests a list of authorized resources (event 4) from the resource feed micro-service 108. For each configured resource feed 106, the resource feed micro-service 108 requests an identity token (event 5) from the single-sign-on micro-service 152.
The resource feed specific identity token is passed to each resource's point of authentication (event 6). On-premises resources 122 are contacted through the Citrix Cloud Connector 124. Each resource feed 106 replies with a list of resources authorized for the respective identity (event 7).
The resource feed micro-service 108 aggregates all items from the different resource feeds 106 and forwards (event 8) to the workspace experience service 102. The user selects a resource from the workspace experience service 102 (event 9).
The workspace experience service 102 forwards the request to the resource feed micro-service 108 (event 10). The resource feed micro-service 108 requests an identity token from the single sign-on micro-service 152 (event 11). The user's identity token is sent to the workspace experience service 102 (event 12) where a launch ticket is generated and sent to the user.
The user initiates a secure session to a gateway service 160 and presents the launch ticket (event 13). The gateway service 160 initiates a secure session to the appropriate resource feed 106 and presents the identity token to seamlessly authenticate the user (event 14). Once the session initializes, the user is able to utilize the resource (event 15). Having an entire workspace delivered through a single access point or application advantageously improves productivity and streamlines common workflows for the user.
FIG. 6
250
250
Turning now to , a computing system providing a connection lease architecture for accessing virtual computing sessions will be discussed. The computing system may be implemented using the above described computing devices, and in some implementations within the workspace infrastructure.
Access to virtual computing sessions may be provided using Citrix Virtual Apps and Desktops (CVAD) from Citrix Systems, Inc. Citrix Virtual Apps is an application virtualization solution that helps optimize productivity with universal access to virtual apps and server-based desktops from different client devices. CVAD carries all the same functionality as Citrix Virtual Apps, plus the option to implement a scalable Virtual Desktop Infrastructure (VDI). Citrix Virtual Apps/CVAD are available as a cloud service or a traditional software configuration.
252
254
253
263
259
254
Such computer virtualization infrastructures may traditionally utilize Independent Computing Architecture (ICA) protocol and ICA files for authenticating the client device to access the virtual computing session and computing resources to which the user is entitled. ICA is a protocol designed for transmitting Windows graphical display data as well as user input over a network. ICA files contain short-lived Secure Ticket Authority (STA) and logon tickets. The STA ticket may be used to authorize a connection to a virtual delivery appliance (e.g., Citrix Virtual Delivery Agent (VDA)) via a Gateway (e.g., Citrix Gateway) or via Gateway Service (e.g., Citrix Gateway Service). The logon ticket may single-sign-on (SSOn) the user into the virtual computing session . In the case of CVAD, this is done through a “high-definition” experience (HDX) session, which may be available to users of centralized applications and desktops, on different client devices and over different networks. Citrix HDX is built on top of the ICA remoting protocol.
With any network infrastructure, remote or otherwise, security from external attacks is always a significant concern. Moreover, connection leases are long-lived (e.g., a few hours to weeks based on policies), and the attack opportunity window is therefore increased. The security requirements are also increased compared to traditional ICA files. Therefore, connection leases are encrypted and signed.
258
260
258
260
252
Connection leases may also be revoked to cope with events such as stolen devices, compromised user accounts, closed user accounts, etc. Connection lease revocation is applied when a client/endpoint device or host is online with respect to a Connection Lease Issuing Service (CLIS) or broker . However, the CLIS or broker does not typically have to be online for a client device to use a previously issued connection lease, since connection leases are meant to be used in an offline mode.
258
260
252
252
252
The connection lease issuing service (CLIS) or broker may store and update published resource entitlements for the client device . The published resource entitlements may relate to the virtual computing resources (e.g., SaaS apps, DaaS sessions, virtual apps/desktops, etc.) that the client device is permitted or authorized to access. The client device may be a desktop or a laptop computer, a tablet computer, a smartphone, etc.
250
253
252
253
252
252
253
The computing system includes a virtual delivery appliance that communicates with the client device via a network (e.g., the Internet or Web). The virtual delivery appliance is configured to receive a connection request from the client device that includes connection leases issued based upon the respective published resource entitlements for the client device . In an example implementation, the virtual delivery appliance may be implemented using Citrix Virtual Delivery Agents (VDAs), for example, although other suitable virtual delivery appliances may be used in different implementations.
253
260
252
254
260
253
The virtual delivery appliance is also configured to request validation of the connection leases from the broker , and provide the client device with access to a virtual computing session corresponding to the published resource entitlements responsive to validation of connection leases from the broker . In this regard, responsive to validation requests from the virtual delivery appliance , the connection leases are compared to the updated published resource entitlements that are being maintained so as to validate the virtual computing session requests.
254
250
255
256
252
Independent flow sequences for accessing a virtual computing session within the computing system will be discussed. In the illustrated example, the lease generation functions are performed within a cloud computing service (e.g., Citrix Cloud) which illustratively includes a cloud interface configured to interface with the client device for enrollment and lease generation.
256
252
255
257
258
259
260
261
In an example implementation, the cloud interface may be implemented with Citrix Workspace, and the client device may be running Citrix Workspace App, although other suitable platforms may be used in different embodiments. The cloud computing service further illustratively includes a root of trust (RoT) , Connection Lease Issuing Service (CLIS) , Gateway Service , broker , and database , which will be described further below.
252
262
262
252
252
The client device has a public-private encryption key pair associated therewith, which in the illustrated example is created by a hardware-backed key store . The hardware-backed key store prevents the client device operating system (OS) from accessing the private key. The client device operating system performs cryptographic operations with the private key, but without the ability to access/export the key. Examples of hardware-backed key stores include Trusted Platform Module (TPM) on a personal computer (PC), iOS Secure Enclave, and Android Hardware Key Store, for example, although other suitable encryption key generation platforms may also be used.
262
252
262
252
262
262
As background, in some embodiments, a hardware-backed key store , such as a TPM, is a microchip installed on the motherboard of client device and designed to provide basic security-related functions, e.g., primarily involving encryption keys. A hardware-backed key store communicates with the remainder of the system by using a hardware bus. A client device that incorporates a hardware-backed key store can create cryptographic keys and encrypt them so that they can only be decrypted by the hardware-backed key store .
262
262
262
262
This process, referred to as wrapping or binding a key, can help protect the key from disclosure. A hardware-backed key store could have a master wrapping key, called the storage root key, which is stored within the hardware-backed key store itself. The private portion of a storage root key or endorsement key that is created in a hardware-backed key store is never exposed to any other component, software, process, or user. Because a hardware-backed key store uses its own internal firmware and logic circuits to process instructions, it does not rely on the operating system, and it is not exposed to vulnerabilities that might exist in the operating system or application software.
252
256
257
257
263
253
260
FIG. 6
FIG. 6
FIG. 6
The client device provides its public key to the cloud interface (step (1) in ), which then has the public key signed by the RoT (step (2) in ) and returns the signed public key to the client device (step (3) in ). Having the public key signed by the RoT is significant because the gateway , the virtual delivery appliance , and the broker also trust the RoT and can therefore use its signature to authenticate the client device public key.
252
258
256
252
253
260
252
253
252
253
FIG. 6
The client device may then communicate with the CLIS via the cloud interface to obtain the connection lease (step (4) in ). The client device public key may be provided to a host or virtual delivery appliance (e.g., Citrix VDA) either indirectly via the broker or directly by the client device. If the client device public key is indirectly provided to the virtual delivery appliance , then the security associated with the client-to-broker communications and virtual deliver appliance-to-broker communications may be leveraged for secure client public key transmission. However, this may involve a relatively large number of client public keys (from multiple different client devices ) being communicated indirectly to the virtual delivery appliance .
252
253
263
252
253
257
253
257
253
252
260
FIG. 6
On the other hand, the client device public key could be directly provided by the client device to the virtual delivery appliance , which in the present case is done via the gateway (step (5) in ). Both the client device and the virtual delivery appliance trust the RoT . Since the virtual delivery appliance trusts the RoT and has access to the RoT public key, the virtual delivery appliance is able to verify the validity of the client device based on the RoT signature on the public key and, if valid, may then trust the client device public key. In yet another embodiment, the client device public key may also optionally be signed by the broker beforehand.
252
253
260
253
260
253
252
252
253
263
263
Both the client device and the virtual delivery appliance trust the broker . Since the virtual delivery appliance trusts the broker and has access to the broker public key, the virtual delivery appliance is able to verify the validity of the client device based on the broker signature on the public key and, if valid, may then trust the client device public key. In the illustrated example, the signed public key of the client device is provided directly to the virtual delivery appliance along with the connection lease via a gateway . In an example implementation, the gateway may be implemented using Citrix Gateway, for example, although other suitable platforms may also be used in different embodiments.
253
263
260
259
264
264
The virtual delivery appliance and gateway may communicate with the broker and gateway service (which may be implemented using Citrix Secure Web Gateway, for example) via a cloud connector . In an example embodiment, the cloud connector may be implemented with Citrix Cloud Connector, although other suitable platforms may also be used in different embodiments. Citrix Cloud Connector is a component that serves as a channel for communications between Citrix Cloud and customer resource locations, enabling cloud management without requiring complex networking or infrastructure configuration. However, other suitable cloud connection infrastructure may also be used in different embodiments.
252
258
252
The client device signed public key or a hash of the client device signed public key (thumbprint) is included in the connection lease generated by the CLIS and is one of the fields of the connection lease that are included when computing the signature of the connection lease. The signature of the connection lease helps ensure that the connection lease contents are valid and have not been tampered with. As a result, a connection lease is created for the specific client device , not just a specific authenticated user.
253
252
253
252
257
260
252
253
252
253
258
260
261
260
253
FIG. 6
Furthermore, the virtual delivery appliance may use a challenge-response to validate that the client device is the true owner of the corresponding private key. First, the virtual delivery appliance validates that the client device public key is valid, and more particularly signed by the RoT and/or broker (step (6) in ). In the illustrated example, the client device public key was sent directly by the client device to the virtual delivery appliance , as noted above. In some embodiments, connection lease revocation may be applied when a client device or virtual delivery appliance is offline with respect to the CLIS or broker . Being online is not a requirement for use of a connection lease since connection leases may be used in an offline mode. Connection lease and revocation list details may be stored in the database for comparison by the broker with the information provided by the virtual delivery appliance .
Second, upon early session establishment, e.g., after transport and presentation-level protocol establishment, between the client device 252 and virtual delivery appliance 253, the virtual delivery appliance 253 challenges the client device 252 to sign a nonce (an arbitrary number used once in a cryptographic communication) with its private key (step (7)). The virtual delivery appliance 253 verifies the signature of the nonce with the client device 252 public key. This allows the virtual delivery appliance 253 to know that the client device 252 is in fact the owner of the corresponding private key. It should be noted that this step could be performed prior to validating the public key of the client device 252 with the RoT 257 and/or broker 260 in some embodiments, if desired.
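As an illustrative sketch only (the disclosure does not prescribe a particular algorithm or API), such a nonce challenge-response could be expressed with the Web Crypto API, assuming an ECDSA client key pair; the function names are hypothetical.

```typescript
// Illustrative only: nonce challenge-response using the Web Crypto API,
// assuming an ECDSA key pair; the actual protocol is not specified here.
async function verifyClientOwnsPrivateKey(
  clientPublicKey: CryptoKey,
  signNonce: (nonce: Uint8Array) => Promise<ArrayBuffer>, // performed by the client device
): Promise<boolean> {
  const nonce = crypto.getRandomValues(new Uint8Array(32)); // single-use challenge
  const signature = await signNonce(nonce);                 // client signs with its private key
  // The appliance verifies the signature against the client's RoT-signed public key.
  return crypto.subtle.verify({ name: 'ECDSA', hash: 'SHA-256' }, clientPublicKey, signature, nonce);
}
```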
253
252
253
260
257
Furthermore, the virtual delivery appliance validates that the connection lease includes the public key (or hash of public key) matching the client device public key. More particularly, the virtual delivery appliance first validates the connection lease signature and date, making sure that the broker signature on the lease is valid (using the RoT signed broker public key, since the virtual delivery appliance trusts the RoT) and that the lease has not expired.
253
252
252
253
Moreover, the virtual delivery appliance may verify that the connection lease includes the client device public key, or a hash of the client device public key, in which case the virtual delivery appliance computes the hash of the client device public key. If the connection lease includes the matching client device public key, then the virtual delivery appliance confirms that the connection lease was sent from the client device for which it was created.
252
253
262
As a result, if a connection lease is stolen from the client device and used from a malicious client device, the session establishment between the malicious client and the virtual delivery appliance will not succeed because the malicious client device will not have access to the client private key, this key being non-exportable and stored in the hardware-backed key store .
Referring now to FIG. 7, the illustrated computing system 300 provides the ability for a client device 310 to recognize when specific resource feeds 340(1)-340(5) are down and retrieve the UI assets from the respective resource caches 312(1)-312(5). This enables workspace resiliency on a per feed basis without compromising security and user experience.
A server 330, e.g. a workspace platform 330, is configured to aggregate or receive resources from different resource feeds 340(1)-340(5). A resource feed 340 represents a uniform abstraction for any type of cloud service, on-premises service or appliance that provides published resources to be delivered to a user. The different resource feeds 340(1)-340(5) may be provided by different cloud or on-premises services and may be configured based on customer and user entitlements. In some embodiments, the server 330 may present the received resources in different UI panes or views for each resource feed, where the UI panes are part of a single Workspace UI 314. In other embodiments, the server 330 may aggregate received resources from different resource feeds 340(1)-340(5) into a single pane. For example, the server 330 may aggregate virtual apps and desktops and SaaS apps into a single UI pane for apps and desktops. The illustrated resource feeds 340(1)-340(5) are not to be limiting. Other types of resource feeds may be used in addition to or in place of the illustrated resource feeds 340(1)-340(5).
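A minimal sketch of keeping one UI cache per resource feed is shown below; the entry shape and feed identifiers are assumptions made for illustration, not the actual cache format used by the workspace platform.

```typescript
// Illustrative sketch of one UI cache per resource feed; entry shape and
// feed identifiers are assumptions, not the actual implementation.
interface ResourceUiEntry {
  name: string;       // published application or desktop name
  iconUrl: string;    // icon rendered in the workspace UI
  leasable: boolean;  // whether the resource can be launched offline via a connection lease
}

// One cache per feed, keyed by a feed identifier such as "cvad-cloud" or "saas".
const feedCaches = new Map<string, ResourceUiEntry[]>();

function cacheFeed(feedId: string, entries: ResourceUiEntry[]): void {
  feedCaches.set(feedId, entries); // refresh only this feed's snapshot
}

function cachedEntries(feedId: string): ResourceUiEntry[] {
  return feedCaches.get(feedId) ?? []; // empty if this feed was never cached
}
```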
Resource feed 340(1) provides virtual apps and desktops from the cloud. The virtual apps and desktops may be Citrix Virtual Apps and Desktops (CVAD) from Citrix Systems, Inc. The cloud may be Citrix Cloud, for example. In the case of Citrix cloud, the Citrix Workspace app (CWA) 70 may be used as a single-entry point for bringing apps, files and desktops together to deliver a unified experience, although other suitable platforms may be used in different embodiments. As discussed above and as illustrated in FIG. 7, the Citrix Workspace App (CWA) 70 may be generally referred to as the workspace app 70.
Resource feed 2 340(2) provides virtual apps and desktops from an on-premises deployment. The virtual apps and desktops may also be Citrix Virtual Apps and Desktops (CVAD). Resource feed 3 340(3) provides endpoint management, such as Citrix Endpoint Management (CEM). Endpoint management enables a single unified workspace from any device, whether laptops, smart phones, tablets or any other device. Resource feed 4 340(4) provides content collaboration services, such as Citrix Files, formerly known as ShareFile. The content collaboration services may be generally referred to as shared files. Resource feed 5 340(5) provides SaaS apps.
The client device 310 is configured to cache or otherwise store in memory user interfaces (UI) 314 of the resources from the different resource feeds 340(1)-340(5), with individual resource feeds having a respective resource cache 312(1)-312(5) separate from the resource caches of the other resource feeds. A health status of resource feeds 340(1)-340(5) is determined by the client device 310. The health status indicates availability of a particular resource feed. The health status of a resource feed 340(1)-340(5) may also be generally referred to as a status. Health status checks can be initiated by the workspace UI 314 and/or can be collaboratively performed by the server (workspace platform) 330, as will be further described later in FIG. 12.
In response to the resource feeds that are not available, the client device 310 retrieves UI elements from the resource caches of the resource feeds that are not available for display. For the resource feeds that are available, the client device 310 bypasses the resource caches for these feeds and displays live UI elements.
An illustrated user interface 314 that includes a resource feed down and available resource feeds is provided by the screenshot 400 in FIG. 8. The icons and resource names correspond to the apps and desktops that may be available to the user. In other embodiments only icons or only resource names may be presented. Since there is nothing secure or private about the icons and resource names, they may be displayed without user authentication. However, user authentication may be required for the user to access a particular resource.
As will be explained in detail below, the computing system 300 provides the ability for a user to see the user interface (UI) 314 from a resource feed in an offline mode or a cloud outage mode without authentication. This is based on the client device 310 including a processor configured to distinguish between static assets and dynamic assets, and with the dynamic assets being cached on a per feed basis and including status. Even though the user is able to see the cached assets without authentication, authentication to the virtual delivery appliance is still required to access the resource represented by the icon and/or resource name.
Static assets are part of the application user interface 314 and include application code, such as HTML and JavaScript (JS) of the application. Authentication is not required, since the application code itself does not contain user-specific information nor any other sensitive information, but rather the application business logic and UI. In some embodiments the application code may contain customer-specific information that is not sensitive in nature, e.g. background color, company logo or other customizations. In addition, the application code may contain code, e.g. Authentication Manager code, which enables the execution of the authentication process. Furthermore, the application is rarely changed unless the application has been upgraded. Dynamic assets require authentication to obtain from the Workspace, and include icons, resource names and additional resource metadata corresponding to files, applications and desktops. Dynamic assets also include actions or notifications. An example notification may be for the user to approve an expense report. The additional resource metadata enumerated as part of the dynamic assets may not be for display purposes. For example, the resource metadata may contain information indicating to the workspace app 70 and the workspace UI 314 whether a particular published resource is leasable (i.e. can be launched in offline mode), the order of precedence in which connection descriptor files (e.g. ICA files) versus connection leases should be used during a launch process, etc.
As noted above, there is no Internet connection in the offline mode. Consequently, there is no network connection from the workspace app 70 to cloud services. However, connectivity to VDAs may still exist on LANs. In the cloud outage mode, there is an Internet connection providing a network connection to cloud services, but particular services within the cloud services may be experiencing complete or partial outage. Connectivity to gateway service, on-premises Gateways, Connectors and/or VDAs may still exist.
In addition to multi-feed resource caching with an understanding of cloud and individual feed health status for enabling resiliency on a per feed basis, other features of the computing system 300 will also be discussed in detail below. One such feature is group level resource caching. Group level resource caching is applicable to kiosk or shared device use cases. In response to a user logging off the kiosk or shared device, the user's individual resource cache is emptied but the group-level resource cache remains. Also, the connection leases are issued per group and not to individual users so the shared connection leases remain.
Another feature is secure access to long-lived cache and connection leases using local authentication when the cloud is down. The local authentication may be provided using a local personal identification number (PIN) or biometric in lieu of cloud authentication.
Yet another feature is connect time optimization based on resource cache status. To authenticate the client device 310 to access a virtual computing session 254 and computing resources to which the user is entitled, connection descriptor files, e.g. ICA files, and connection leases are used. Depending on the status of a resource feed, the client device 310 may be instructed to launch an ICA file first followed by launching a connection lease, or the policy may be reversed so that a connection lease is launched first followed by launching an ICA file.
An ICA file is first launched followed by launching a connection lease in a cloud-online condition where the resource is available (i.e., success). In a cloud-online condition where the resource is not available (i.e., failed), or in a cloud-offline condition, the policy may be reversed so that a connection lease is launched first followed by launching an ICA file.
Progressive Web App (PWA) technology by Google, which includes service workers 386, is currently used for UI caching, as illustrated in FIG. 9. A service worker 386 is a script that runs in the background in a web browser 380 to support one of the main features of progressive web applications, which is the offline work mode. The service worker 386 is a JavaScript file running separately from the web app 388. The service worker 386 acts as a network proxy for the web app 388 and caches UI of the web app in a cache 384. A standard web app 388 makes direct requests to a web server 390. This means that if the network connection is down, a standard web app 388 will fail to load or, if the network outage occurs after the web app 388 has already loaded, then it will fail to fulfill further requests to the web server 390. However, when a web app 388 is enhanced with PWA technology and, in particular, includes a service worker 386 installation, then the service worker 386 acts as a network proxy for the web app 388. A service worker 386 responds to user interactions with the web app 388, including network requests made from pages it serves. If the network is down, the service worker 386 is able to serve requests from the cache 384 that it maintains.
A limitation of current PWA caching is that it does not distinguish between static and dynamic assets. Static assets include application code, whereas dynamic assets include icons, resource names and additional resource metadata corresponding to assets requiring authentication to obtain from the workspace. PWA uses a shared cache 384, and if a single resource feed is down or intermittently unhealthy, but otherwise network connectivity to the workspace is present, then the entire shared cache 384 is invalidated. The client device 310 still consumes resources from the network but the resources from the unavailable resource feed disappear as opposed to being presented from previously populated cache.
As an example, a client device using current PWA caching is online to the cloud (i.e., Internet connectivity is present), and the Workspace is reachable. However, the Workspace cannot reach a brokering service that issues connection leases to client devices and provides virtual apps and desktops. As a result, the workspace cannot enumerate virtual apps and desktops (e.g., CVAD). Consequently, the workspace shows a tool tip or message 412 to the user as illustrated by screenshot 410 provided in FIG. 10. The message 412 indicates the user needs to contact their administrator to provide access to apps, which is inaccurate, since actually the user already has apps/desktops assigned to them.
Furthermore, if the client device using current PWA caching were to treat an error from any single feed as an indication of the entire workspace being down, e.g., during partial Workspace resource enumeration, then the cache 384 will become stale even for resource feeds that are still available. Thus, failure in one resource feed causes stale cache-based UI for all other feeds, which is a user experience and security issue. For example, failure of an on-prem site causes virtual apps and desktop assets from the cloud to also be consumed from cache 384 and become stale. If changes in user entitlements occur, they would not be reflected in the UI. Likewise, if virtual apps and desktop assets (e.g., CVAD) from the cloud fail, this will cause staleness of non-CVAD cloud assets, as well as aggregated on-prem sites.
Although additional security restrictions would be applied to leasable resources, such as connection lease validation and additional authentication requirement during VDA session establishment, some SaaS/Web apps that do not require gateway authentication will be vulnerable if they are still presented from cache.
Referring back to FIG. 7, different scenarios on what happens when a user opens the workspace app 70 or refreshes an application in offline and cloud outage conditions will now be discussed. A basic assumption is that the user has previously logged in, and the static and dynamic assets have already been cached. This includes a user that has not previously logged out, or has explicitly logged out and is being allowed to come back in. This happens if the user refreshes the application, or reboots the client device 310, or if the user closes and restarts the application.
In one scenario, virtual app and desktop assets from the cloud provide resources that are leasable. The user is allowed to see these assets since these resources have connection leases. Other resource types may also be leasable, e.g., SaaS apps, endpoint management, and non-cloud on-premises virtual apps and desktops. There is no distinction on these resources being made available for viewing and interaction in either an offline mode or a cloud outage mode. The user is not being blocked by any error on seeing the cached assets, including an authentication error. There is no cache encryption if relying on cloud-supplied keys because a fundamental requirement is for the Workspace UI 314 to function in offline and cloud outage conditions; otherwise, cache encryption is possible if relying on local authentication (i.e., local PIN or biometrics).
The lifetime of the UI cache (dynamic assets) is independent of the app code (static assets). This is necessary so that when the app code (HTML, JS) is updated, the UI cache 312(1)-312(5) is not flushed. The UI cache 312(1)-312(5) needs to survive past the user logging out. Cache coherency, i.e. synchronization of local and remote state, is maintained via Workspace refresh (UI talking to backend) whenever network connectivity to the workspace platform 330 is available.
There is no kiosk or shared device support since the user interface 314 is shown corresponding to the last user because the virtual app and desktop cache is preserved. Cache coherency is maintained. For example, if a user updates favorites or non-favorites while offline, then goes back online, then the updates are propagated to the Workspace. Also, connection lease synch from the cloud to the workspace app 70 is independent, and may be implemented in TypeScript or JS as part of the Workspace UI but by using a hidden browser instance or window.
With the virtual app and desktop assets from the cloud being leasable, in some embodiments, for all other intelligent Workspace assets, the assumption is made that they are not leasable. These assets are sensitive and require authentication. These assets include notifications, actions, and files, for example, and are not presented to the user. Instead, the user is presented with a meaningful error saying that difficulties are being experienced and the user has to go back online and successfully authenticate to see the assets. However, errors should not discourage usage of leasable resources, such as virtual app and desktop (e.g., CVAD) assets from the cloud. In other embodiments, notifications, actions, files and other Workspace assets may also be leasable. Due to the sensitive nature of these assets they may be presented after a local authentication, e.g. using a local PIN or biometric, as previously described, as opposed to a cloud-based authentication.
Assets that are not sensitive do not require authentication to present an icon or resource name, such as CVAD assets from an on-prem site and SaaS/Web apps with or without Gateway requirement. In some embodiments where these assets are not leasable, they are presented as grayed-out (i.e. unavailable, unusable), disallowing launch. As illustrated by the screenshot 420 in FIG. 11, a tool tip 422 could say “This app is currently unavailable.” The user must go back online and successfully authenticate for the icons to be un-grayed and allow launch. There may be a number of reasons why the app is not leasable, such as an older infrastructure is being used or the app simply does not support leasing.
There are different options for the user to refresh the Workspace UI 314. For example, a ribbon/banner 424 provided by the screenshot 420 illustrated in FIG. 11 may be displayed at the top of the screen saying “Unable to connect to some of your resources. Some virtual apps and desktops may still be available. Reconnect.” The user can either manually click on reconnect in the UI to reconnect, or the user can go to the menu to refresh. Also, a refresh may be done in response to automatically detecting a network recovery. Also, push notifications from the Workspace, or another cloud service, or a periodic heartbeat from the Workspace app 70, may be used to account for an online but cloud outage condition.
In addition, the Workspace UI supports both CVAD-only and mixed entitlements, such as Workspace Intelligence (WSi) features, Citrix Files, and CVAD.
Another scenario is when a user logs out. This scenario involves an unauthenticated returning user after a previous explicit log out/sign out. In this case, a user explicitly logs out, then the connection leases are cleared but the UI cache is being kept for performance reasons. What this means is that if the user tries to come back, the user could be blocked by authentication. But if the user does authenticate, the UI cache allows for better performance. In other words, the workspace app 70 does not have to go all the way to the cloud to get all these assets. It can present them right away.
Another scenario is switching users. Switching users is not supported in offline or cloud outage mode. To support a user switching in offline/cloud outage mode, the workspace app 70 could present a local PIN or biometric to determine the user's identity. This allows the UI to refresh from the resource cache, and consequently, connection leases are used. Support for switching users during offline or cloud outage conditions may be controlled by policy.
Yet another scenario is a user switching stores. If a user has already set up different stores and has not logged out from them then switching stores will be supported. A store is an abstraction of a set of apps and desktops that have been published. For example, the user may point to Workspace in the cloud as one store, and then the user may point to an on-premises store. The user can switch between the different stores. Each store will have multiple feeds. Some of the feeds could be from the cloud or some of them could be on-premises. In another embodiment switching stores may be supported even after logging out based on a local PIN or biometric to determine the user's identity. In yet another embodiment the user may have a single store account (not multiple store accounts), and a single store could be aggregating different feeds.
Having described above the basic offline or complete Workspace outage scenarios, cases where the cloud may be online but there could be a partial feed failure from one or more resource providers will now be discussed. This corresponds to multi-feed aware resource caching. For example, the cloud broker may be down for the virtual apps and desktops (e.g., CVAD), and the workspace app 70 is online and the workspace store is reachable. However, the workspace cannot reach the cloud broker XML service. As a result, the workspace cannot enumerate the virtual apps and desktops. The workspace will generate a ribbon/banner to notify the user that some of the resources may be available while some of the resources may not be available. The ribbon/banner is displayed as opposed to displaying no resources are available (current undesired behavior), and the cached virtual apps and desktops are shown.
Referring now to FIG. 12, obtaining per-feed status for the resources, performing workspace-cloud-overall and per-feed health checks, and maintaining per-feed resource caching by the client device 310 in the computing system 300 will be discussed. The same computing system 300 provided in FIG. 7 focused more on aggregating or receiving the multiple feeds 340(1)-340(5) by the workspace platform 330 for the client device 310.
As noted above, the client device 310 includes a processor configured to distinguish between static assets and dynamic assets, and with the dynamic assets being cached on a per feed basis. The static assets are stored in a shared static assets cache 316, and the dynamic assets are stored in respective dynamic assets caches 312(1)-312(5).
Static assets are part of the application user interface 314 and include application code, such as HTML, JavaScript (JS), cascading style sheets (CSS) and images. Progressive Web App (PWA) service workers 318 are used for caching static assets. PWA service workers 318 are configured to use a network-first-fallback-to-cache caching strategy, because it is desirable to load the latest version of the app as opposed to loading a previous stale app version from cache, for example.
Dynamic assets are also part of the application user interface 314 and include icons, resource names and additional resource metadata corresponding to files, applications and desktops. Dynamic assets also include actions or notifications. An example notification may be for the user to approve an expense report.
An in-app caching interceptor 320 is used to perform custom in-app caching for the dynamic assets. Dynamic assets correspond to the resources (e.g., apps, desktops) the user is entitled to. A cache-first-fallback-to-network or cache-network-race caching strategy is used for performance reasons.
In an example embodiment, libraries (e.g., the Workbox and Axios libraries) are used to implement the PWA service workers 318 and the in-app caching interceptor 320. The libraries can be a collection of libraries and tools used for generating a service worker, and for pre-caching, routing, and runtime-caching. For example, the Axios library is a JavaScript library used to make HTTP/HTTPS requests. The Axios library can communicate with a Workbox library by virtue of the requests it makes. The Axios library is effectively an interceptor for HTTP/HTTPS requests, and is used to implement the in-app caching. The browser local storage (DOM storage) is used for the static assets cache 316 and the dynamic assets caches 312(1)-312(5).
The workspace UI 314 may be configured to implement a health check worker 322 that monitors the health of both the Workspace cloud as a whole and that of the individual feeds 340(1)-340(5). Inputs from the health check worker 322 are used to drive the multi-feed aware cache.
Health checks can be initiated by the workspace UI 314 and/or can be collaboratively performed by the workspace platform 330, which aggregates resources from all the feeds 340(1)-340(5). Further, the workspace platform 330 can send push notifications to the workspace UI 314 upon detecting status changes.
JavaScript (JS) bridge interfaces can be used for communication between the workspace UI 314 and the health check worker 322 on one side, and a workspace app JS bridge host 324 (e.g. native workspace app 70 code) on the other side. For example, the workspace UI 314 and the health check worker 322 can call a JS bridge method to check the overall cloud online status, or to request going online by instructing the workspace app 70 to reach the workspace and attempt authentication. The JS bridge host 324 applies to the workspace app 70 being a native implementation, or to the workspace app 70 being a hybrid implementation (i.e., native and web based).
The workspace UI 314 running in the workspace app 70 is caching the resource feeds 340(1)-340(5) in individual resource caches 312(1)-312(5) per feed. This is not done globally, but instead distinguishes between static and dynamic cache assets. Within the dynamic assets, the assets are cached per feed. The workspace app 70 checks cloud and feed health in order to determine, as it presents the UI, which UI assets should be presented live from the workspace and which assets are leasable and should be presented from the cache without authentication. The client device 310 advantageously performs the separation of static and dynamic per feed caching along with determining a health status of the feeds to decide based on the health status whether a resource should be presented from the cache or should be presented live.
Referring now to FIG. 13, a sequence diagram 500 providing multi-feed resource enumeration of the resource feeds 340(1)-340(5) for multi-feed aware caching will be discussed. The workspace UI 314 initiates a resource enumeration request to the workspace platform 330 at line 502.
The workspace platform 330 then determines a status of individual respective feeds and, where the status of a resource feed is successful, a list of resources provided by the resource feed are also enumerated and returned to the workspace UI 314. Resource feed 1 340(1) provides virtual apps and desktops (e.g., CVAD) from the cloud. Status line 504 indicates a failed status for resource feed 1 340(1). Resource feed 2 340(2) provides virtual apps and desktops (e.g., CVAD) on-premises. Status line 506 indicates a successful status for resource feed 2 340(2), along with enumerated resource list 2. Resource feed 3 340(3) provides endpoint management, such as Citrix Endpoint Management (CEM). Status line 508 indicates a successful status for resource feed 3 340(3), along with enumerated resource list 3. Resource feed 4 340(4) provides content collaboration services, such as Citrix ShareFile. Status line 510 indicates a failed status for resource feed 4 340(4). Resource feed 5 340(5) provides SaaS apps. Status line 512 indicates a successful status for resource feed 5 340(5), along with enumerated resource list 5.
The workspace platform 330 then provides to the workspace UI 314 at line 514 the aggregated resources that are successful, and updates the health status of each resource feed. The per feed health status corresponds to the availability of a particular resource feed (i.e., successful or failed). In this example, resource feeds 2 340(2), 3 340(3) and 5 340(5) are successful and resource feeds 1 340(1) and 4 340(4) have failed. The respective resource caches 312(1)-312(5) are updated for the resources that are available, and updated with the health status of each resource feed.
Referring now to FIG. 14, a sequence diagram 520 of updating the multi-feed resource caches 312(1)-312(5) will be discussed. Updates are provided to the resource caches 312(1)-312(5) by the workspace UI 314. The updates are part of a loop process. Line 522 corresponds to the resource feeds that have a success status. Resource caches 2 312(2), 3 312(3) and 5 312(5) are updated with a successful status along with a timestamp. Line 524 corresponds to the resource feeds that have a failed status. Resource caches 1 312(1) and 4 312(4) are updated with a failed status. In this case, these resource caches are not invalidated.
Referring now to FIG. 15, a sequence diagram 530 of resource rendering from the multi-feed resource caches 312(1)-312(5) will be discussed. The workspace app 70 renders the UI based off the individual health and availability of the resource feeds 340(1)-340(5). The sequence diagram 530 provides for a cloud-online condition 532 and a cloud-offline condition 542. A loop process is implemented for both conditions with respect to rendering of the resources.
The cloud-online condition 532 refers to the condition where Internet connection to the cloud is available and the workspace platform 330 is reachable from the workspace UI 314. In this condition, some or all of the resource feeds 340(1)-340(5) may be available, and some or all of the resource feeds 340(1)-340(5) may not be available. The cloud-offline condition 542 refers to the condition where Internet connection to the cloud is not available and the workspace platform 330 is unreachable from the workspace UI 314, or the workspace platform 330 is generally unresponsive. Naturally, in this condition, each of the resource feeds 340(1)-340(5) is unavailable.
For the cloud-online condition 532, a resource feed is either successful or has failed, and if the resource feed has failed, then the resource may be leasable or not leasable. Leasable means that the cloud service is capable of issuing connection leases for this resource type. In section 534, if a resource feed status is successful, then the workspace UI 314 displays the resource. In section 536, if a resource feed status has failed and the resource is leasable, then the workspace UI 314 displays the resource because it is leasable. This means the resource could essentially be presented from the resource cache. In section 538, if a resource feed status has failed and the resource is not leasable, then the workspace UI 314 disables launch and displays a grayed-out (i.e. unavailable, unusable, not launchable) resource, as illustrated in FIG. 11. For example, the resource could be a SaaS or web app.
In the cloud-offline condition 542 an offline banner is displayed. The offline banner lets the user know that they are off-line (i.e., each resource feed has failed) and the user can manually refresh, or if the refresh happens automatically, then the user will be prompted for authentication. In section 544, if the resource is leasable, then the workspace UI 314 reads from the corresponding resource cache and displays the resource. In section 546, if the resource is not leasable, then the workspace UI 314 disables launch of the resource and displays the resource as grayed-out (i.e. unavailable, unusable, not launchable), as illustrated in FIG. 11.
To handle the case of a Kiosk/shared device, both group-level resource cache and group-level connection leases may be used. Normally, the resource cache and the connection leases are user-device bound, i.e., they are user-specific.
By using group-level resource cache and connection leases, the logout behavior may be modified. The Kiosk/shared device continues to persist static assets. Upon logout, user-specific UI resources and connection leases are removed, as opposed to keeping UI cache for performance reasons in the non-Kiosk case. Also upon logout, group-level UI resources and connection leases are kept on the Kiosk/shared device. This allows users to crowd source the retrieval/caching of dynamic assets that are shared between users in a resource group and the corresponding connection leases. For example, doctors and nurses in a hospital would all share the same apps.
However, not all Kiosk/shared device scenarios are as clean or straightforward as the hospital example above. In some cases, users may have user-specific resource entitlements. To allow for that in a Kiosk/shared device scenario, the user's resource cache and connection leases may not be removed upon log out. That requires a very strong device-level protection of the cache and connection leases, especially the connection leases. A local device authentication could be imposed to allow access to user-specific cache and connection leases in an offline/cloud-outage condition. The local authentication may be provided using a local pin or biometric.
Even with this optimization it is assumed that the combined user population would have visited all (or a substantial number) of Kiosk/shared devices within the connection lease expiration period, which may not be the case. It is also assumed that a specific user would have visited the same Kiosk/shared devices they are currently on within the connection lease expiration period, in order to successfully use the user-specific cache and connection leases. This may also be false. Furthermore, the Kiosk/shared devices may be completely stateless, i.e. unable to persistently store resource caches and connection leases, or otherwise lose state after a reboot.
Therefore, the approach described is useful in some Kiosk/shared device scenarios but only best effort in others. For example, it is a best effort to allow resiliency in early morning outage for users that just walk up to a terminal: user-specific entitlements (non-shared resource group) may not be available. Once caching is performed and connection leases are synched, a mid-day outage will be successfully handled assuming the user is on the same device.
Referring now to FIG. 16, a sequence diagram 550 providing resource launch optimization for the client device 310 will be discussed. This is an example of connection time optimization based on both overall Workspace cloud online/offline condition and resource cache status: live (success) or from cache (failed). The usefulness of this example optimization assumes a workspace virtual app and desktop (e.g., CVAD) policy preference of “ICA file first, fallback to connection leases”. An ICA file may be generally referred to as a launch method and an authentication protocol. Such policy preference may exist due to lack of single sign-on (SSOn) support in VDA sessions or other reasons.
Initially, when the workspace app 70 starts, an attempt is made to authenticate the user to the workspace at line 552. If the authentication attempt fails for whatever reason, e.g., the cloud is down, the identity provider is down or slow to respond, the user mistypes their credentials, etc., then, as previously discussed, the workspace UI 314 is still presented but in a cloud-offline mode with banner and reconnect button, as provided in section 554. Otherwise, if authentication to the workspace is successful, then the cloud-online condition is assumed and the workspace UI 314 is loaded without the banner and reconnect button as provided in section 556.
Following a resource launch request, in a cloud-online condition, if a resource is live, then ICA-first is preferred, consistent with the policy preference, as provided in section 558. In the cloud-online condition, if a resource is cached, the policy preference is inverted and connection lease first is preferred to avoid a long ICA file retrieval timeout during cloud outage followed by an eventual fallback to connection lease launch, which would hurt user experience, as provided in section 560. This may happen if the user initially successfully authenticates and enumerates workspace resources but later, as detected by the caching, network connectivity is lost, or connectivity is still present but a specific workspace feed is down, in this example for cloud virtual apps and desktops (e.g., CVAD).
In a cloud-offline condition, the policy preference is also inverted and connection lease first is preferred to avoid a long ICA file retrieval timeout, as provided in section 562.
As an alternative embodiment to the multi-feed resource caching, a selective UI cache merge (union operation) may be performed based on a single global partial enumeration error status. With this approach the cache is not updated, i.e., the resource is not removed, for a resource that is leasable.
An essential difference between the selective UI cache merge and the multi-feed resource caching is that multi-feed resource caching provides a per feed status, whereas in this selective UI cache merge approach the workspace platform 330 just communicates that one of the feeds is not healthy. Thus, the workspace platform 330 does not communicate granular per-feed status but still communicates that the resource enumeration was not complete, i.e. there is a partial enumeration error status.
In both the multi-feed resource caching approach previously described in detail and the selective UI cache merge approach, two cases are considered: the partial enumeration and the failed enumeration. In both cases, if a resource is leasable, e.g. as indicated by a flag in the resource metadata, then it is not removed from the cache. If a resource is not leasable, then it is removed from the cache.
Thus, although there is no per-feed granularity of status, there is still a correlation because the workspace platform 330 conveys that the resource enumeration is not complete. Every leasable resource is cached and presented regardless of the individual feed status.
However, the selective UI cache merge approach has a few limitations. The virtual apps and desktops (e.g., CVAD) resources presented may be stale if another service is down. For example, an administrator may have removed App Z from the CVAD entitlements, CVAD broker is up, but on-prem StoreFront (SF) or Citrix Files are down. Then App Z is still presented to the user from cached assets. The user is able to see the stale resource. Launch could also succeed through a Local Host Cache (LHC), which is a high-availability backup broker for a cloud-based CVAD broker.
Another limitation is that there is a longer connect time for virtual apps and desktops (e.g., CVAD) resources. For example, if the CVAD broker is down, the workspace app 70 does not know which service specifically is down. So the workspace app 70 has to try ICA file first before falling back to connection lease as a launch and authentication method. As a result, the workspace app 70 will time-out (e.g., after 20 seconds) before falling back to connection leases. In contrast, if per-feed status is supported by the workspace platform 330 and workspace UI 314, and therefore the workspace UI 314 is aware that the CVAD broker is down, the workspace app 70 will directly use connection leases (as is the optimized behavior in an offline/cloud outage condition).
Referring now to FIG. 17, a flowchart 600 illustrating a method for operating the client device 310 will be discussed. From the start (Block 602), the method includes receiving resources from different resource feeds 340(1)-340(5) at Block 604. The method further includes caching 312(1)-312(5) user interface (UI) of the resources from the different resource feeds 340(1)-340(5) at Block 606, with at least one resource feed having a resource cache separate from the resource cache of the other resource feeds. Statuses of the resource feeds 340(1)-340(5) are determined at Block 608. The method further includes retrieving for display UI elements from the separate resource cache at Block 610 in response to the at least one resource feed associated with the separate resource cache not being available. The method ends at Block 612.
As will be appreciated by one of skill in the art upon reading the above disclosure, various aspects described herein may be embodied as a device, a method or a computer program product (e.g., a non-transitory computer-readable medium having computer executable instruction for performing the noted operations or steps). Accordingly, those aspects may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
Furthermore, such aspects may take the form of a computer program product stored by one or more computer-readable storage media having computer-readable program code, or instructions, embodied in or on the storage media. Any suitable computer readable storage media may be utilized, including hard disks, CD-ROMs, optical storage devices, magnetic storage devices, and/or any combination thereof.
Many modifications and other embodiments will come to the mind of one skilled in the art having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is understood that the foregoing is not to be limited to the example embodiments, and that modifications and other embodiments are intended to be included within the scope of the appended claims.
BACKGROUND OF THE INVENTION
This invention relates to an access method and apparatus for selectively extracting or updating subarrays of a larger array stored in a modified word organized random access memory system, and more particularly, relates to the modifications to a conventional word organized memory used for image processing.
As understood, a digital image is considered to be a two- dimensional array of image points, each of which comprises an integer or a set of integers. Image manipulation ideally subsumes the capability of storing an image array in a memory and operating upon selected clusters of points simultaneously, such as sequences of points in a single row of the array and points within a small rectangular area. This imposes the constraint that the memory must allow all points in any selected cluster to be accessed in one memory cycle. If any desired combination of points in the array could be accessed simultaneously from a bit addressable memory, then storage and retrieval of clusters of image points would pose no problem. However, because digital images form large arrays, only word organized memories are economically available. A conventional word organized memory includes a plurality of randomly accessible "words" of storage locations, each word of which can store a cluster of image points. However, it is necessary to modify the accessing mechanism of this conventional memory in order to permit access to clusters of image points when the points are not all in the same word of storage.
An image can be represented by an M × N array I(*,*) of image points, where each point I(i,j) for 0≦i < M and 0≦j < N is an integer or a set of integers which represents the color and intensity of a portion of the image. For simplicity, attention can be restricted to black/white images, for which I(i,j) is a single bit of information. Typically, I(i,j)=1 represents a black area of the image, and I(i,j)=0 represents a white area.
Images are most commonly generated by scanning pictorial data such as 8 1/2 inch × 14 inch documents. Thereafter, they can be stored, viewed from a display, transmitted, or printed. Since most scanners and printers process an image from top to bottom and from left to right, images are normally transmitted in the standard "row major" sequence: I(0, 0), I(0,1), . . . , I(0,N-1), I(1,0), . . . , I(M-1, N-1). Therefore, a memory system for image processing operations should at least permit simultaneous access to a number of adjacent image points on a single row of I(*,*). This would permit the image or a partial image to be transferred rapidly into and out of the memory system, with many image points in each row being transferred simultaneously.
It is also desirable to access rectangular blocks of points within the image to accommodate another class of image processing operations, such as block insertion, block extraction, and contour following. For example, it may be desirable to add alphanumeric characters to the image from a stored dictionary, which dictionary includes a predefined bit array for each character. Similarly, it may be desirable to delete or edit characters or other rectangular blocks from an image. Lastly, algorithms for locating the contours of objects in the image involve moving a cursor from one image point to another along a border or boundary of an object. The contour following algorithms require rapid access to an image point and a plurality of its near neighbors, which together constitute a block of image points.
Typically, a word organized random access memory comprises a plurality of memory modules, each module being a storage device with a plurality of randomly accessible storage cells. Although each cell is able to store an image point which comprises a single bit of information, only one cell in a module can be accessed (read from or stored into) at a time. The accessing mechanism of a conventional word organized random access memory provides a single cell address to all of its constituent memory modules, so that the ith cell in one module can be accessed only in conjunction with the ith cell of all other modules. (These cells together comprise the ith word of the memory). A conventional word organized random access memory thus provides access to a cluster of image points only if they are all stored in the same word of the memory. However, a suitable modification of the accessing mechanism for a word organized memory can permit access to any desired cluster of image points, provided each module stores at most one point in the cluster.
As stated previously, a memory system is desired which permits access to horizontal sequences and rectangular blocks of image points. Therefore, it is necessary to determine a method for distributing image points among memory modules which places the elements of horizontal sequences in distinct memory modules and which also places the elements of rectangular blocks in distinct memory modules. Relatedly, it is necessary to devise addressing circuitry which permits simultaneous access to all elements of the horizontal sequences or rectangular blocks. Lastly, it is necessary to design circuitry which arranges the elements of the sequences or blocks accessed into a convenient order, such as row major order.
SUMMARY OF THE INVENTION
It is accordingly an object of this invention to modify a conventional word organized random access memory for image processing operations so that it is capable of storing an image or partial image therein, and so that it permits access to sequences of image points along any row of the image array and to the image points within any small rectangular area of this array. Restated, it is an object to modify a conventional word organized random access memory which stores an rp × sq or smaller image array such that any 1 × pq or p × q subarray of the image can be accessed (read or written) in a single memory cycle, p, q, r, and s being design parameters.
The foregoing objects are believed satisfied by an apparatus for storing black/white images, which apparatus includes a novel accessing arrangement. The apparatus comprises memory means for storing the image points in the cells of pq different memory modules, each module being an entity capable of storing rs image points in distinguishable cells, only one cell of which is randomly accessible at a single instant of time. The apparatus further comprises means for extracting from the memory means either horizontal linear sequences of length pq or rectangular matrices of dimension p × q, the starting point in the array for either sequence or matrix being arbitrary. Relatedly, the apparatus also comprises means for arranging the elements of the sequences or blocks accessed into row major order.
Restated, the disclosed apparatus includes pq memory modules labeled 0, 1, . . . , pq-1, which modules can together store an rp × sq image array consisting of image points I(i,j), where i lies on the range 0≦i < rp and j lies on the range 0≦j < sq. Secondly, the disclosed apparatus includes routing means which causes image point I(i,j) to be routed to or from memory module M(i,j)=(iq+j)//pq, where (iq+j)//pq constitutes the remainder resulting from the integer division of the quantity (iq+j) by the quantity pq. Thirdly, the disclosed apparatus includes address calculation means which, in conjunction with the routing means, causes image point I(i,j) to be stored into or retrieved from location A(i,j)=(i/p)s+(j/q) of memory module M(i,j), where (i/p) and (j/q) represent integer quotients. Lastly, the disclosed apparatus includes control means which achieves simultaneous storage or retrieval of the pq image points in any 1 × pq or p × q subarray of the image array.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows the architecture of a word organized memory modified according to the invention.
FIGS. 2A and B illustrate the module assignment and the address assignment for the case that p=q=4, r=4, and s=8.
FIG. 3 shows the selective logical details of the address and control circuitry set forth in FIG. 1.
FIGS. 4-6 illustrate detailed logical designs of the global, row, and module logics of the counterpart functional elements seen in FIG. 3.
FIGS. 7-9 show detailed logic for the routing circuitry seen in FIG. 1.
DESCRIPTION OF THE PREFERRED EMBODIMENT
Referring now to FIG. 1, there is shown the architecture for the modified word organized random access memory system. The apparatus includes pq memory modules 21, 23, and 25. Each module is able to store rs image points, which comprise rs bits of information. Address and control circuitry 7 permits these modules to store an rp × sq (or smaller) image array I(*,*), and to access any 1 × pq or p × q array of I(*,*). A data register 39 is provided to hold any of these pq element subarrays prior to storage or following retrieval of the image information from the memory modules. Also included are permuters 47 and 49. Permuters generally are specialized circuits for rearranging data. In the context of this invention, the permuters 47 and 49, respectively, rotate subarrays to the right and to the left. Functionally, the permuters route elements of the subarrays to or from the appropriate memory modules for storage and retrieval. Control of the permuters is resident in the address and control circuitry 7 and connectable thereto over path 15.
When a particular subarray is to be stored in the memory system, the one bit t register is set to one of the values t=0 or t=1 in order to indicate whether the subarray shape is 1 × pq or p × q. The i and j registers 3 and 5 are set to indicate coordinates of the upper lefthand element I(i,j) of the subarray. The subarray itself is placed in data register 39 in row major order, such that I(i,j) is in the leftmost position of the register. Based upon the values of t, i and j, the control portion of address and control circuitry 7 provides a control signal on line 15 which causes permuter 47 to route each element of the subarray over counterpart paths 27, 31, and 35 to that module within which it is to be stored. The address portion of address and control circuitry 7 calculates its location within that module; the address information is supplied via lines 9, 11 and 13 to memory modules 21, 23 and 25. Finally, a write signal from an external read/write control source 17 causes the pq elements of the subarray to be stored simultaneously in the different memory modules.
When a particular subarray is to be retrieved from the memory system, the t, i, and j registers are set as described above so as to indicate the shape of the subarray and to identify its upper lefthand element. The address portion of the address and control circuitry 7 uses the values of t, i and j in order to calculate for each memory module the location of the unique element of the subarray which it contains. After the calculations are made, a read signal from 17 causes the pq elements of the subarray to be retrieved from the modules and routed by permuter 49 to data register 39 over paths 51, 53, and 55. The control portion of address and control circuitry 7 provides a control signal on line 15 which causes permuter 49 to arrange the elements of the subarray in row major order, such that I(i,j) is routed to the leftmost position of register 39.
Whenever a 1 × pq or p × q subarray of I(*,*) is retrieved from or stored into the memory system, the address portion of the address and control circuitry 7 must calculate, for 0≦k < pq, the location l(i,j,k,t) of the unique element e(i,j,k,t) of the subarray either contained by or to be placed in the kth memory module. The control circuitry portion of address and control circuitry 7 must, in combination with permuters 47 and 49, arrange for element e(i,j,k,t) to be routed to or from the appropriate position in register 39. Table 1 summarizes the address calculations and routing patterns required for access to a subarray whose upper left-hand element is image point I(i,j). The routing pattern specification indicates which of the pq positions d(0), d(1), . . . , d(pq-1) of data register 39 is to receive or supply element e(i,j,k,t).
TABLE 1
______________________________________________________________________
Subarray Shape   t   Address Calculation                          Required Routing
______________________________________________________________________
1 × pq           0   M(i,j)=(iq+j)//pq;                           e(i,j,k,t)→d[g(i,j,k)]
                     g(i,j,k)=[k-M(i,j)]//pq;
                     l(i,j,k,t)=(i/p)s+[j+g(i,j,k)]/q.
p × q            1   M(i,j)=(iq+j)//pq;                           e(i,j,k,t)→d[g(i,j,k)]
                     g(i,j,k)=[k-M(i,j)]//pq;
                     l(i,j,k,t)=[(i+g(i,j,k)/q)/p]s+(j+g(i,j,k)//q)/q.
______________________________________________________________________
Exemplary circuitry implementing the above address calculations and routing patterns is amply set forth in FIGS. 3-9, which are described below. Of course it should be understood that alternative circuitry, for example, circuitry based upon table lookup could be designed to perform the same functions.
The address calculations and routing patterns noted above are based upon a predetermined distribution of image points among the pq memory modules. Before describing the preferred embodiment, appreciation of the true nature and scope of the invention will be enhanced by first considering the justification for the chosen distribution strategy, and the manner in which the distribution leads to the address calculations and routing patterns summarized in Table 1.
DISTRIBUTION STRATEGY
As stated previously, it is an object of the invention to construct a memory system capable of storing an rp × sq image array I(*,*) consisting of image points I(i,j), where i lies in the range 0≦i < rp and j lies in the range 0≦j < sq. Furthermore, the memory system is required to store the image in a manner permitting access to all 1 × pq and p × q subarrays of I(*,*).
If the memory system outlined in FIG. 1 is to store the image array I(*,*), then for each image point I(i,j) it is necessary to determine which of the pq memory modules 21, 23, or 25 should store I(i,j). It was observed that when memory modules were assigned the memory module numbers 0, 1, . . . , pq-1 as indicated in FIG. 1, the distribution of image points among the memory modules could be described succinctly by specifying an integer-valued module assignment function M(i,j) with the following characteristic:
for any integers i and j on the ranges 0≦i < rp and
0≦j < sq, the value of M(i,j) lies in the range 0≦ M(i, j) < pq. Each image point I(i,j) is then stored in the M(i,j)th memory module.
If the memory system outlined in FIG. 1 is to store the image array I(*,*) in a manner permitting simultaneous access to the pq image points in any 1 × pq subarray of I(*,*), then these image points must be stored in different memory modules. This is because only one storage cell of each memory module is randomly accessible at a single instant of time. Similarly, if the memory system in FIG. 1 is to store the image array I(*,*) in a manner permitting simultaneous access to the pq image points in any p × q subarray of I(*,*), then these image points must be stored in different memory modules.
It was unexpectedly observed that if the module assignment function M(i,j) assumed the form M(i,j)=(iq+j)//pq, where (iq+j)//pq denotes the remainder resulting from the integer division of the quantity (iq+j) by the quantity pq, then the pq image points of every 1 × pq and p × q subarray would be stored in different memory modules. This would permit simultaneous accessing of the pq image points in the desired subarrays.
The module assignment function M(i,j)=(iq+j)//pq is illustrated in FIG. 2A for the case that p=q=r=4 and s=8. The hexadecimal number in the jth position of the ith row of the 16 × 32 array in FIG. 2A denotes the memory module M(i,j) for storing image point I(i,j). For example, the circled entry in the 5th position of the 6th row is D, which is the hexadecimal notation for 13. This indicates that the image point I(6,5) is stored in the 13th memory module. This may be calculated as M(i,j)=M(6,5)=(iq+j)//pq=(6×4+5)//(4×4)=29//16=13.
It should be readily observed from FIG. 2A that the pq=16 image points in any 1 × pq = 1 × 16 subarray are stored in different memory modules. For example, the 16 element horizontal sequence indicated in FIG. 2A shows that the image points I(6,13), I(6,14), . . . , I(6,28) are stored, respectively, in memory modules 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0, 1, 2, 3, 4. Also, it will be observed from FIG. 2A that the pq=16 image points in any p × q = 4 × 4 subarray are stored in different memory modules. For example, the 4 × 4 block indicated in FIG. 2A identifies the memory module assignments for the image points in the 4 × 4 subarray whose upper lefthand element is the image point I(9,6).
The above module assignment function M(i,j) assigns rs image points to each of the pq memory modules without specifying the cell locations in which they are to be stored. It was unexpectedly observed that the image points could be conveniently stored in location A(i,j) of memory module M(i,j) if such a function varied according to the form A(i,j)=(i/p)s+(j/q), where i/p and j/q are integer quotients.
The address assignment function A(i,j) is illustrated in FIG. 2B for the case that p=q=r=4 and s=8. The decimal integer within each p × q = 4 × 4 block indicates the address of the corresponding pq=16 image points. For example, the fifth position on the sixth row falls in the 4 × 4 block labeled with decimal 9. This indicates that image point I(6,5) is stored in the 9th cell of memory module M(6,5). This may be calculated as A(i,j)=A(6,5)=(6/4)8+(5/4)=(1)8+(1)=9.
ADDRESS CALCULATION
When any of the 1 × pq or p × q subarrays is to be accessed (read or written), the address calculation portion of address and control circuitry 7 shown in FIG. 1 must calculate, for 0≦k < pq, the address of the unique image point in the subarray stored by the kth memory module.
Stated algebraically if the upper lefthand element of the desired 1. times. pq or p × q subarray is image point I(i,j), and if the Boolean variable t is set to one of the values t=0 or t=1 to indicate, respectively, whether a 1 × pq or a p × q subarray is to be accessed, then the address to be calculated for module k can be denoted 1(i,j,k,t). The form of this address function was noted previously in Table 1, and it can be justified by the following argument.
Suppose that access to a 1 × pq subarray is desired, so that t=0. As discussed previously, the module assignment function M(i,j)=(iq+j)//pq guarantees that module k stores one of the desired image points I(i,j), I(i,j+1), . . . , I(i,j+pq-1). Equivalently, module k stores image point I(i,j+b), where b is an integer lying in the range 0 ≤ b < pq. The distribution of image points among memory modules guarantees that image point I(i,j+b) is stored in location A(i,j+b)=(i/p)s+(j+b)/q of memory module M(i,j+b)=(iq+j+b)//pq. It follows, therefore, that k=M(i,j+b) and that l(i,j,k,t)=A(i,j+b). The foregoing relations can be used to show that b=[M(i,j+b)-iq-j]//pq=[k-iq-j]//pq. Hence, defining the function g(i,j,k)=(k-iq-j)//pq, we conclude that b=g(i,j,k), and that when t=0, l(i,j,k,t)=A(i,j+b)=(i/p)s+(j+b)/q=(i/p)s+[j+g(i,j,k)]/q.
Similarly, let it be supposed that access to a p × q subarray is desired. Thus, t=1. As a consequence of the module assignment function M(i,j), module k stores one of the desired image points I(i,j), I(i,j+1), . . . , I(i,j+q-1), I(i+1,j), . . . , I(i+p-1,j+q-1). Equivalently, module k stores image point I(i+a,j+b), where the integers a and b lie in the respective ranges 0 ≤ a < p and 0 ≤ b < q. However, the distribution of image points among memory modules guarantees that image point I(i+a,j+b) is stored in location A(i+a,j+b)=[(i+a)/p]s+(j+b)/q of memory module M(i+a,j+b)=(iq+aq+j+b)//pq. Therefore, it follows that k=M(i+a,j+b) and that l(i,j,k,t)=A(i+a,j+b). The foregoing relations can be used to show that aq+b=[M(i+a,j+b)-iq-j]//pq=(k-iq-j)//pq. Hence, by defining the function g(i,j,k)=(k-iq-j)//pq, we conclude that a=g(i,j,k)/q and b=g(i,j,k)//q, and that when t=1, l(i,j,k,t)=A(i+a,j+b)=[(i+a)/p]s+(j+b)/q=[(i+g(i,j,k)/q)/p]s+[j+g(i,j,k)//q]/q.
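The address calculation can be written directly from these relations. A small sketch (hypothetical helper names; it expresses the arithmetic the circuitry implements, not the circuitry itself), for p=q=4 and s=8:

```python
p, q, s = 4, 4, 8

def g(i, j, k):
    """Offset of module k's element within the subarray whose corner is I(i,j)."""
    return (k - i * q - j) % (p * q)

def cell_address(i, j, k, t):
    """l(i,j,k,t): cell address in module k; t=0 for a 1 x pq, t=1 for a p x q subarray."""
    if t == 0:
        return (i // p) * s + (j + g(i, j, k)) // q
    a, b = g(i, j, k) // q, g(i, j, k) % q
    return ((i + a) // p) * s + (j + b) // q

# Module 13 holds I(6,5) itself when the 1 x 16 row starting at I(6,5) is accessed,
# so its cell address is the 9 computed earlier.
assert cell_address(6, 5, 13, 0) == 9
```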
ROUTING PATTERNS
As stated previously, whenever any 1 × pq or p × q subarray of the image array I(*,*) is stored into or retrieved from the memory shown in FIG. 1, each of the memory modules 21, 23, and 25 stores or retrieves a single element of the subarray. Relatedly, the elements of this subarray are routed by permuter 47 from data register 39 to the memory modules for store operations. Likewise, the subarray elements are routed by permuter 49 from the memory modules to the data register for retrieval operations. The operation of permuters 47 and 49 is controlled by a signal on line 15 provided by the control portion of the address and control circuitry 7.
Stated algebraically, it is apparent that if the upper lefthand element of the 1 × pq or p × q subarray to be accessed is the image point I(i,j), and if the Boolean variable t is set to one of the values t=0 or t=1 so as to indicate, respectively, whether a 1 × pq or p × q subarray is to be accessed, then the unique subarray element to be stored into or retrieved from module k can be denoted e(i,j,k,t). This element must be routed to or from one of the pq positions d(0), d(1), . . . , d(pq-1) of the data register, as indicated previously in Table 1. The routing pattern specified in Table 1 can be justified by the following arguments.
Suppose that a 1 × pq subarray is to be accessed, so that t=0. Since the subarray is held in row major order in the data register, element I(i,j+b) of the subarray should be routed to or from position d(b) of the data register. As described in the last section, image point I(i, j+b) is stored in memory module k, where k and b are related according to the formula b=(k-iq-j)//pq=g(i,j,k). Therefore, the unique element of the 1 × pq subarray to be retrieved from or stored into module k, namely I(i,j+b)=e(i,j,k,t), is routed to or from position d(b)=d(g(i,j,k)) of the data register.
Similarly, suppose that access to a p × q subarray is desired, so that t=1. Then since the subarray is held in row major order in the data register, element I(i+a,j+b) of the subarray should be routed to or from position d(aq+b) of the data register. As described in the last section, image point I(i+a,j+b) is stored in memory module k, where k is related to a and b according to the formula aq+b=(k-iq-j)//pq=g(i,j,k). Therefore, the unique element of the p × q subarray to be retrieved from or stored into module k, namely, I(i+a,j+b)=e(i,j,k,t), is routed to or from position d(aq+b)=d(g(i,j,k)) of the data register.
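Both routing rules say the same thing: the element belonging to module k occupies data-register position g(i,j,k)=(k-M(i,j))//pq, so the data register contents are simply the module sequence rotated by M(i,j) positions. A short sketch of the mapping (hypothetical helper name, p=q=4):

```python
p, q = 4, 4

def routing(i, j):
    """For each module k, the data-register position d(.) of its subarray element."""
    m = (i * q + j) % (p * q)                  # M(i,j)
    return {k: (k - m) % (p * q) for k in range(p * q)}

# The mapping is a cyclic rotation; module M(i,j) always exchanges data with d(0).
assert routing(6, 5)[13] == 0
```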
STRUCTURAL DESIGN
Referring now to FIG. 3, there is provided an overview of the address and control circuitry 7 shown in FIG. 1. As indicated in FIG. 3, the pq memory modules 21, 23, and 25 are arranged into p rows of q modules each. The address and control circuitry comprises: a single global logic component 51; p identical row logic components 53, 55, and 57, one for each row of memory modules; and pq identical module logic components 59, 61 and 63, one for each memory module.
The global logic component 51 operates in response to the subarray shape designation t and the subarray starting coordinates i and j for calculating the quantities M(i,j), R, and lv(0), lv(1), . . . , lv(q-1). The quantity M(i,j) is used to control permuters 47 and 49 over path 15, as shown in FIG. 1. The quantity R consists of values used by row logic components 53, 55, and 57. The quantities lv(0), lv(1), . . . , lv(q-1) are used by module logic components 59, 61 and 63.
Each of the row logic components 53, 55 and 57 operates in response to a fixed row designation number, and in response to the quantity R calculated by the global logic component 51, to calculate address information used for the calculation of cell addresses for memory modules in the associated row of modules. This address information is provided over lines 65, 67, and 69 to the module logic components connected to these memory modules.
Each of the module logic components 59, 61, and 63 operates in response to the address information calculated by one of the row logic components 53, 55, and 57; and in response to one of the signals lv(0), lv(1), . . . , lv(q-1) calculated by the global logic component 51, in order to formulate a cell address. In particular, the module logic component associated with the kth memory module calculates the cell address l(i,j,k,t). The cell addresses are supplied to the respective memory modules over lines 9, 11, and 13.
FIGS. 4-6 provide, respectively, detailed descriptions of: the global logic component 51; one of the row logic components 53, 55, and 57; and one of the module logic components 59, 61, and 63. The operation of each component is described both algebraically and with an exemplary circuit design. The algebraic descriptions summarize the inputs to, outputs from, and calculations performed by each of the components. These algebraic descriptions are appropriate for any combination of design parameters p, q, r, and s. The exemplary circuit designs are specific for the case that p=q=r=4 and s=8.
Referring now to FIG. 4, there is provided a detailed description of the global logic component 51. The inputs to this circuit are the subarray shape designation t and the subarray starting location coordinates i and j. The outputs from this circuit are the quantities M(i,j), R, and lv(0), lv(1), . . . , lv(q-1). As indicated, the output quantity R comprises a bundle of control signals consisting of values t, xo, io, yo, and zo. Each of these values is calculated by the global logic component according to the formulas provided in FIG. 4.
The first two values to be calculated by the global logic component are quantities xo=i/p and io=i//p. That is, xo and io are the quotient and the remainder that result from the integer division of i by p. Since the image coordinate i is a binary coded integer, and since p=4 for the exemplary circuit in FIG. 4, io is just the least significant two bits of i, and xo is the remaining bits of i.
The next two values to be calculated by the global logic component are the quantities yo=j/q and vo=j//q. Since the image coordinate j is a binary-coded integer, and since q=4 for the exemplary circuit in FIG. 4, vo and yo are, respectively, the least significant two bits of j and the remaining bits of j.
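Because p and q are powers of two in the exemplary design, these quotients and remainders are just bit slices of the binary coordinates. An illustrative sketch for the two-bit fields of the p=q=4 example:

```python
def split(coord):
    """Return (quotient, remainder) for division by 4: the upper bits and the two LSBs."""
    return coord >> 2, coord & 0b11

xo, io = split(6)   # i = 6  ->  xo = 1, io = 2
yo, vo = split(5)   # j = 5  ->  yo = 1, vo = 1
assert (xo, io, yo, vo) == (1, 2, 1, 1)
```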
Another value to be calculated by the global logic component is the quantity uo=(io+yo)//p. That is, uo is the remainder that results from the integer division by p of the sum of the two previously calculated quantities io and yo. For the exemplary circuitry in FIG. 4 the quantities io and yo are supplied over lines 405 and 403 to adder 401, which calculates their sum. Since p=4 for the exemplary circuit, the desired quantity uo=(io+yo)//p is just the least significant two bits of the sum outputted from adder 401 on lines 407.
Another value to be calculated by the global logic component is the quantity M(i,j)=(iq+j)//pq. This quantity can be calculated from the two previously calculated quantities uo and vo according to the relation M(i,j)=uo·q+vo. Since vo and uo are binary numbers, since vo < q, and since q=4 for the exemplary circuit in FIG. 4, the calculation M(i,j)=uo·q+vo can be achieved simply by concatenating (juxtapositioning) the values uo and vo, appearing respectively on lines 407 and 421.
Another value to be calculated by the global logic component is the quantity zo=~t·uo+t·(yo//p), where ~t denotes the logical complement of t. That is, if the shape designation value t has the Boolean value t=0, then its logical complement ~t has the value ~t=1, so that zo assumes the value zo=uo. Conversely, if t has the Boolean value t=1, then ~t=0 and zo=yo//p. For the exemplary circuit in FIG. 4, yo//p comprises the least significant two bits of the previously calculated quantity yo. The quantity yo//p is supplied over lines 417 to AND gates 415. The quantity t comprises a second input to AND gates 415. Similarly, the quantity ~t calculated by INVERTER 411 is supplied to AND gates 409, along with the quantity uo calculated by adder 401 and appearing on line 407. The outputs from AND gates 409 and 415 are in turn supplied to OR gates 419. The output from OR gates 419 constitutes the desired quantity zo.
The final values to be calculated by the global logic component are the quantities lv(0), lv(1), . . . , lv(q-1). For integer values of k in the range 0≦k < q, lv(k) is defined to have the value lv(k)=1 if k < vo and the value lv(k)=0 if k≧vo, where vo is the previously calculated quantity appearing on lines 421. Symbolically, this is written lv(k)=LT(k,vo). The quantity lv(0) is calculated by OR gate 423, and has the Boolean value lv(0)=1 if either bit of vo is 1. Similarly, lv(1)=1 if the most significant bit of vo is 1, and lv(2)=1 if both bits of vo are 1, as determined by AND gate 425. Finally, lv(3)=0, because the two-bit value vo cannot be larger than 3.
Referring now to FIG. 5, there is provided a detailed description of one of the row logic components 53, 55, or 57 as shown in FIG. 3. More particularly, the row logic component associated with the uth row of memory modules is described, where u lies in the range 0≦u < p. The inputs to this row logic component are the row designation number u and the bundle of signals R. R comprises the values t, xo, yo, io, and zo provided by the global logic component 51. The outputs from the row logic component consist of the values t, xo, yu, lu, and eu calculated according to the formulas provided in FIG. 5. These values comprise address information used in the calculation of cell addresses for memory modules on the uth row of modules.
The first value to be calculated by the row logic component is the quantity z=(u-zo)//p. For the exemplary circuit in FIG. 5, INVERTER gates 501 and Adder 503 serve to subtract zo from u, according to the well-known relation u-zo=u+~zo+1, where ~zo denotes the bitwise complement of zo. Since p=4, the least significant two output bits from Adder 503 comprise the desired quantity z. INVERTER 505 and AND gates 507 supply the quantity ~t·z to Adder 509, and hence Adder 509 and Half-adder 511 serve to calculate yu=yo+~t·z.
Another value to be calculated by the row logic component is the quantity eu1=EQ(z,0). That is, eu1 is a Boolean variable with the value eu1=1 if z=0 and with the value eu1=0 if z≠0. In FIG. 5, OR gate 513 and INVERTER 515 determine whether z=0 and provide the signal eu1=EQ(z,0) on line 517.
Additional values to be calculated by the row logic component are the Boolean variables lu=LT(z,io) and eu2=EQ(z,io). That is, lu=1 if z < io and eu2=1 if z=io. In FIG. 5, INVERTER gates 519 and Adder 521 serve to subtract io from z according to the relation z-io=z+~io+1. INVERTER 523 operates on the carry from Adder 521 to calculate lu=LT(z-io,0)=LT(z,io), while OR gate 525 and INVERTER 527 provide the signal eu2=EQ(z-io,0)=EQ(z,io) on line 529.
The final value calculated by the row logic component is the Boolean variable eu=~t·eu1+t·~eu1·(lu+eu2). In FIG. 5, this variable is calculated by OR gates 531 and 541, INVERTER gates 533 and 537, and AND gates 535 and 539.
Referring now to FIG. 6, there is shown a detailed description of one of the module logic components 59, 61, or 63 of FIG. 3. More particularly, the module logic component associated with the kth memory module is shown, where k lies in the range 0≦k < pq. The inputs to this circuit are the quantity lv(k//q) calculated by the global logic component 51, and the quantities t, xo, yu, lu, and eu calculated by the row logic component associated with the uth row of memory modules, where u=k/q. The single output from the module logic component is the cell address l(i,j,k,t) calculated according to the formulas provided in FIG. 6. Note that the combinational logic interior to the kth memory module responsive to the cell address l(i,j,k,t) may be fashioned according to any one of numerous methods, as for example, that shown in "Logical Design for Digital Computers" by Montgomery Phister, John Wiley and Sons, New York, 1958.
The first value to be calculated by the module logic component is the quantity x=xo+t·(~lv·lu+lv·eu). Here lv denotes the value lv(k//q) received from the global logic component 51. In FIG. 6, the desired Boolean value t·(~lv·lu+lv·eu) is obtained by operation of INVERTER 601, AND gates 603, 605, and 615, and OR gate 609. This Boolean value is then added to xo by Half-adder 619. This provides the value x on lines 625.
The next value to be determined by the module logic is the quantity y=yu+p·~t·eu·lv+t·lv. From this formula it is clear that, since either t=0 or t=1, y is achieved by adding either 0, 1, or p to yu, with the value added determined by the Boolean variables t, eu, and lv. In FIG. 6, the Boolean variable t·lv is calculated by AND gate 607 and is supplied over line 608 to Half-adder 621. The Boolean variable ~t·eu·lv is calculated by AND gates 605 and 613, operating in conjunction with INVERTER 611, and is then supplied over line 614 to OR gate 617. If t·lv=1, then necessarily t=1 and ~t=0, so that ~t·eu·lv=0. In this case Half-adders 621 and 623 add t·lv=1 to the quantity yu, with any carry generated by Half-adder 621 being routed to Half-adder 623 via OR gate 617 and line 618. Alternatively, if ~t·eu·lv=1, then necessarily t=0 and ~t=1, so that t·lv=0. In this case the value ~t·eu·lv=1 is routed via OR gate 617 and line 618 to Half-adder 623, and thus is added to the most significant bit of yu. Since Half-adder 621 adds the value t·lv=0 to the least significant two bits of yu, the net result is that Half-adders 621 and 623 add p=4 to yu, as desired. In all cases, the desired quantity y=yu+p·~t·eu·lv+t·lv is provided on lines 627.
The final value to be ascertained by the module logic associated with the kth memory module is the cell address l(i,j,k,t)=x·s+y. For the exemplary circuit in FIG. 6, s=8 and y < 8, so that l(i,j,k,t) can be formed simply by juxtapositioning the values x and y appearing, respectively, on lines 625 and 627. The cell address l(i,j,k,t) is supplied to memory module k over lines 629.
FIGS. 7-9 illustrate the routing circuitry 8 shown in FIG. 1. The primary functions of this circuitry are to route the image points of any 1 × pq or p × q array of points between memory modules 21, 23, and 25 and the data register 39. Restated, the circuitry must respond to an appropriate control signal M(i,j) on line 15 by routing image points between memory modules and the appropriate positions of the data register. The routing circuitry 8 comprises right rotate permuter 47 and left rotate permuter 49. FIGS. 7-9 describe the operation of the routing circuitry both algebraically and with an exemplary circuit design. The algebraic descriptions are appropriate for any combination of design parameters p, q, r and s, although the exemplary circuit design is specific for the case that p=q=r=4 and s=8.
Referring now to FIG. 7, there is set forth a logic design of right rotate permuter 47. One input to this circuit is the quantity M(i,j) calculated by the global logic component 51, which is provided on lines 15. The remaining inputs are the pq image points held in data register 39, which are supplied on lines 41, 43, and 45. These image points are outputted to the respective memory modules on lines 27, 31, and 35, according to the following rule. The kth memory module receives the image point held in the g(i,j,k)th position of the data register, where the function g(i,j,k) is defined by the relation g(i,j,k)=[k-M(i,j)]//pq. This routing is achieved by rotating the contents of data register 39 by M(i,j) positions.
The circuit in FIG. 7 uses four simple permuters 701, 703, 705, and 707 to achieve the desired rotation. Each of these simple permuters responds to a single bit of the quantity M(i,j) by rotating its inputs by a predetermined amount if the bit of M(i,j) is a 1 and by not rotating its inputs if that bit of M(i,j) is a 0. For example, FIG. 8 depicts the simple permuter 701 that responds to the least significant bit of M(i,j) supplied thereto on line 709. If the bit on line 709 is a logical 0, then AND gates 805 are blocked and INVERTER 801 provides a logical 1 on line 803 which enables AND gates 807. The inputs on lines 41, 43, and 45 are thus supplied without rotation to the outputs, via AND gates 807 and OR gates 809. Conversely, if the bit on line 709 is a logical 1, then this value enables AND gates 805 while INVERTER 801 provides a blocking signal on line 803 to AND gates 807. The inputs on lines 41, 43, and 45 are thus rotated to the right by one position and supplied to the outputs by AND gates 805 and OR gates 809.
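The cascade of simple permuters is in effect a barrel rotator: stage b rotates by 2^b positions when bit b of M(i,j) is 1, so the four stages together realize any rotation from 0 to pq-1. A software analogue of the right rotate permuter (illustrative only; the hardware uses the AND/OR gate stages described above):

```python
def right_rotate(values, m):
    """Rotate the data-register contents right by m positions, one bit of m per stage."""
    n = len(values)                             # assumed a power of two, e.g. pq = 16
    for bit in range(n.bit_length() - 1):       # log2(n) simple-permuter stages
        if (m >> bit) & 1:
            shift = 1 << bit
            values = values[-shift:] + values[:-shift]
    return values

data = list(range(16))
assert right_rotate(data, 13) == data[-13:] + data[:-13]
```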
Referring now to FIG. 9, there is shown an embodiment of the left rotate permuter 49. One input to this circuit is the quantity M(i,j) calculated by the global logic component 51, which is provided on lines 15. The remaining inputs are the pq image points being accessed from memory modules 21, 23, and 25, which are supplied on lines 29, 33, and 37. These image points are outputted to the data register 39 on lines 51, 53, and 55 according to the following rule. The image point supplied by the kth memory module is routed to the g(i,j,k)th position of the data register, where the function g(i,j,k) is defined by the relation g(i,j, k) =[k-M(i,j)]//pq. This routing is achieved by rotating by M(i,j) positions the sequence of image points retrieved from the memory modules 21, 23 and 25.
The exemplary circuit in FIG. 9 uses four simple permuters 901, 903, 905, and 907 to achieve the desired rotation. Each of these simple permuters responds to a single bit of the quantity M(i,j) by rotating its inputs by a predetermined amount if that bit of M(i,j) is a 1 or by not rotating its inputs if that bit of M(i,j) is a 0. These simple permuters are quite similar in design to the permuters 701, 703, 705, and 707 shown in FIGS. 7 and 8.
In summary, a memory access method and apparatus has been described which permits access to all 1 × pq and p × q subarrays within an image array of size rp × sq stored in a word organized random access memory if the data is distributed and accessed according to the predetermined relationships. The memory system implementing these distribution and access functions requires essentially only pq memory modules, two variable rotate permuters, and associated access circuitry in order to provide access to the subarrays. Also, the memory system can be extended by an n-fold replication to handle grey scale or color images whose image points each require two or more bits of storage.
It is to be understood that the particular embodiment of the invention described above and shown in the drawings is merely illustrative and not restrictive of the broad invention, and that various changes in design, structure and arrangement may be made without departing from the spirit of the broader aspects of the invention as defined in the appended claims. | 
Randomness is real life. What we see and sense every day happens completely at random. Randomization is the process of making something random, as in nature.
You are given a tree with N nodes; to be more precise, a tree is a graph in which each pair of nodes is connected by exactly one path. The length of every edge is a random integer drawn from the interval [0, L] with equal probability. iSea wants to know the probability that a tree whose edge lengths are randomly generated in the given interval has the property that the path length between every two nodes does not exceed S.
For each test case, output the case number first, then the probability rounded to six fractional digits. | http://vawait.com/2015/07/hdu-4219/ |
Support the American Dream Accounts Act
Did you know the number one reason students fail to complete their college education is lack of financial resources?
That is why we’re thrilled that just two weeks after the National Opportunity Summit, the bipartisan American Dream Accounts Act has been reintroduced by Sen. Chris Coons (D-DE) and Sen. Marco Rubio (R-FL).
This legislation, which Opportunity Nation has long supported and championed, focuses on innovation and cross-sector partnerships that help students make smart investments in their educational future and reach their career goals.
Studies show that low- and moderate-income students with a college savings account are three times more likely to attend college and four times more likely to graduate.
We believe that the energy and focus unleashed by the third National Opportunity Summit in D.C. last month can help mobilize our national network to help get this bill passed during the current Congressional Session. Supporting college savings plans combined with online support to track college readiness for low-income children is part of our call to action: We Got This. | https://opportunitynation.org/latest-news/blog/support-american-dream-accounts-act/ |
18 Jul 2017
“When you begin planning a kitchen renovation project, you may have no idea how much your ideal vision might cost. The answer will likely depend on several factors, including the size of your space, what you will do to it, and your budget. In the end, the price of a renovation should largely be driven by your own choices.
That said, there are some common reasons kitchen renovations go over the original budget. We asked three kitchen designers to tell us what they most commonly see. | http://www.lincorpborchert.com/1478-2/ |
- Medico Friends Circle - Chennai first meeting on the evening of Saturday, March 4th, 2017, 5.30 pm to 7.30 pm @ Spaces, 1, Elliot's Beach Road, Besant Nagar, Chennai. Agenda: (1) Welcome and introductions; introduction to mfc and objectives of the Chennai chapter. (2) Study on clinical trials in India – participants' experiences and issues related to accessibility and affordability of drugs – presentation by Sarojini N, Sama, New Delhi, followed by discussion. (3) Presentation of the report of the fact finding on health effects of the oil spill at Ennore, followed by discussion. (4) Suggestions for the way forward. Welcome all.
Previous years
November 2010
- 3rd National Bioethics Conference on the theme Governance in Healthcare, Ethics, Equity and Justice to held at All India Institute of Medical Sciences, New Delhi between 18 - 20 November, 2010.
February 2010
- Seminar on Pharmaceutical Policy in India: Challenges for the Campaign for Access organised by JSA and Community Development Medical Unit at Kolkata between 19 - 20 February 2010.
January 2010
- Health and Human Rights Course organised by CEHAT and Tata Institute of Social Sciences from 18 - 27 January 2010 at FIAMC Bio-Medical Ethics Centre, St. Pius College, Mumbai. For more details visit www.esocialsciences.com/opportunities/workshops/workshopDetails.aspx?workshopid=227
- 3rd South Asian Regional Symposium on Evidence Informed Healthcare organised by Christian Medical College, Vellore, Indian Council of Medical Research and the South Asian Cochrane Network & Centre at Vellore from 11- 14 January 2010. For more details visit www.cochrane-sacn.org/Symposium2010/Index.htm.
- Medico Friend Circle Annual Meet organised by Medico Friend Circle at Wardha from 8 - 9 January 2010.
October 2009
- National Symposium on Medical And Health Science Education in India organised by Peoples Council of Education and Homi Bhabha Centre for Science Education as part of the Second People's Education Congress in Mumbai from 5 - 8 October 2009. See www.hbcse.tifr.res.in/second-peoples-education-congress for more details.
September 2009
- National Conference on Emerging Health Care models: Engaging the Private Health Sector organised by CEHAT on 25 - 26 September 2009. See www.cehat.org/go/PPP/News for more details.
- Training course on Health and Equity organised by International People's Health University in association with Jan Swasthya Abhiyan, Community Health Cell and Prayas at Bangalore from 1-9 September, 2009. See www.phmovement.org/iphu/en/bangalore/announcement for more details.
March 2009
- JAAK Public Hearing on Health organised by Janaarogya Andolana - Karnataka at Sir M. Puttannachetty Town Hall in Bangalore on 31 March, 2009 from 10.30am onwards.
January 2009
- Medico Friend Circle Annual Meet on the theme 'Forced Displacement and Health' organised by Medico Friend Circle at Bongaigaon, Assam from 16 - 17 January, 2009.
July 2008
- Medico Friend Circle Mid Annual Meet organised by Medico Friend Circle at Sevagram, Wardha from 4 - 5 July, 2008.
- All India Drug Action Network Meeting organised by All India Drug Action Network at Sevagram, Wardha on 3 July, 2008
March 2008
- Equity and Health Rights- A training program for health activists organised by the International People's Health University at Jaipur from 15 to 21 March 2008. See http://phmovement.org/iphu/en/jaipur for more details.
- 52nd National Conference of the Indian Public Health Association at New Delhi from 7 - 9 March, 2008. See http://www.iphaconference.mamc.ac.in for more details. | http://www.communityhealth.in/~commun26/wiki/index.php?title=Current_events |
Each year New Day Christian Centre travels to Discovery Camp near Houston, Texas, with our children and youth. The 2015 Discovery Camp focus was “Being an Eye Witness to God’s Goodness”.
We had thousands of children and youth attending and over three hundred said yes to Jesus for the very first time. Additionally, many renewed their commitment to Christ.
Leisure activities include water sliding, swimming, go-carts, horseback riding and an opportunity to participate in crafts.
On the second day of camp last year, New Day’s children participated in the Bible trivia contest. Not only did they know more than the other groups, but they also elaborated with additional comments about the context of the questions. Our children have been taught very well at New Day and needless to say, they won the Bible drill with flying colors.
This camp also teaches our young people about themselves. One subject matter was about life’s treasures and trash. A very sweet seven-year-old, who had self-esteem problems told me she had learned one main thing about herself. She realized that the trash in her life was all the bad things she thought about herself. She learned that she was beautiful and a treasure to God. Summer camp changed her life forever.
During prayer time in one of the worship services, one child in our group was so deeply engrossed in her prayers, that a camp leader asked her what she was talking to God about. She replied that God was so good and the biggest thing in the universe, yet he still loves and treasures me. That response spoke volumes about what goes on at New Day.
It is such a privilege and honor to take our children and youth to camp each year. They are all so well behaved and stand out immensely from all others. I am truly proud to be from New Day Christian Centre, but more than that, I am proud to be the parent of a child raised at New Day.
It is such an honor to bring the children from New Day. They are so well behaved, they know so much about the Bible, they know how to pray, and they know how to worship. New Day children truly stood out to the leaders of the camp. I am so proud to be from New Day and to take our children, but more than that I am proud to be the parent of a child who was raised at New Day. | https://newdaychristiancentre.org/children-activities/2015-summer-discovery-camp/
Other Resource Types (1,475)
Lesson Planet: Curated OER
Will You All Please Rise?
A three-lesson unit teaches fifth and sixth graders about the importance of participation in a democratic society. The first lesson focuses on the purpose and importance of civic duty. The second lesson looks at the justice system and...
Lesson Planet: Curated OER
Don't Mess with Mercury
The three lessons in the Don't Mess with Mercury Unit Module are designed to teach middle schoolers about the dangers of mercury. The first option is teacher-led. Class members learn about mercury by reading case studies and...
Lesson Planet: Curated OER
Global Problem Solvers
Cisco’s animated series, Global Problem Solvers, teaches viewers about social responsibility and teamwork. The two-season series follows a diverse group of talented teenagers who combine their unique individual skills and cooperate to...
Lesson Planet: Curated OER
This I Can Do!
Personal interest, strengths, talents, and abilities can be used to make a difference. Young learners consider how they can share their talents with others through volunteering, what they can do to take care of the natural environment,...
Lesson Planet: Curated OER
Crash Course: Media Literacy
Viewers take a Crash Course in Media Literacy. They watch 12 videos that take them through media history, the positive and negative effects of media, and regulations and policies affecting media producers. The series aims to help viewers...
Lesson Planet: Curated OER
KidsHealth in the Classroom: Personal Health Series Grades 6-8
The Personal Health Series is designed to teach middle schoolers how to improve their own health. The 22 lessons address the CASEL competencies and are divided into five categories: Fitness and Fun; Hygiene; Nutrition; Puberty, Growing...
Lesson Planet: Curated OER
Introduction to the Universal Declaration of Human Rights
Introduce high schoolers to the Universal Declaration of Human Rights with a four-lesson collection. Class members watch videos, examine illustrations from the book We Are All Born Free, create a visual display promoting human rights,...
Lesson Planet: Curated OER
Dealing with Dilemmas: Upstanders, Bystanders and Whistle-Blowers
There are upstanders, bystanders, and whistle-blowers when it comes to dealing with dilemmas. The four lessons in this unit module ask young scholars to think about injustice and how to resolve difficult situations. Learners research...
Lesson Planet: Curated OER
Economic Lowdown Podcast Series
Accepting a cow as payment for a car is not udder-ly ridiculous. A collection of 21 podcasts provide high schoolers with the lowdown on economics. Topics covered include economics, banking, monetary policy, and the role the Federal...
Lesson Planet: Curated OER
Public Service Announcement: Civic Responsibility
Get your message across. Scholars use their prior knowledge and artistic skills to create public service announcements. The project is designed to explain the importance of civic harmony and the responsibility of all citizens to...
Lesson Planet: Curated OER
Public Service Announcements
Students work cooperatively to create a 30-second PSA that can air on a local access cable channel, be shown to the school, or posted on a school website.
Lesson Planet: Curated OER
Research Project Embedded with Media Literacy
Here is a phenomenal language arts lesson on media literacy for your middle and high schoolers. In it, learners produce a research product in the form of a public service announcement (PSA). First, they view examples of these PSA's to...
Lesson Planet: Curated OER
Cancer Public Service Announcement
Fourth graders create a Public Service Announcement and explore the causes and treatments of cancer.
Lesson Planet: Curated OER
"I'm Not Old Enough to Vote, but If I Was..." Creating Video Public Service Announcements
Learners create a short multi-media public service announcement aimed at increasing voter participation after determining how the issues can impact their communities. They apply critical viewing skills while determining the effectiveness...
Lesson Planet: Curated OER
Don't Let the Earth Down
Writing a persuasive argument starts with a clear thesis. Using this resource, your class will write a persuasive paper on a conservation issue. They will then transform their argument into a 30-second public service announcement. If...
Lesson Planet: Curated OER
Goods and Services: Some are Private, Some are Not
Why doesn't the government provide all goods and services if we pay taxes? Pupils investigate the difference between public and private services. They analyze their communities together with an interactive classroom model and...
Lesson Planet: Curated OER
Public Service Graphic Design
Twelfth graders create a billboard design (on the computer) to promote and increase awareness of a social problem. Students conduct research of public service issues that are of interest to them. They collect photos and images through...
Lesson Planet: Curated OER
Don't Drink and Drive: Assessing the Effectiveness of Anti-Drinking Campaigns
Have your class explore alcohol awareness public service announcements. Provided are a detailed plan and a complete set of materials for doing just this. Learners are exposed to a series of approaches and advertisements and decide which...
Lesson Planet: Curated OER
Goods and Services: Some are Private, Some are Not
Who fixes the swings at the park? The class creates a community bulletin board to explore the goods and services provided for their community in both the private and government sectors. They discuss taxes, consumers' wants and needs, and...
Lesson Planet: Curated OER
Should We Have Mandatory Military Service? | America From Scratch
Mandatory service in a democratic society? On July 1, 1973, the draft ended. Now the United States relies on an all-volunteer military. But what if all citizens were required to perform some sort of service, either military or public...
Lesson Planet: Curated OER
Service Learning in the Social Studies
Active Citizenship Today (ACT) is a "unique social studies service learning program" that requires students to learn about the public policy associated with community issues they identify in their local community. This web site provides...
Lesson Planet: Curated OER
“Pardon This Interruption-Columbus Has Landed!!!”
Students research, design, rehearse, record, and present a 60-second Public Service Announcement based on Columbus' arrival in the Americas. The students, working in groups, utilize the design process in creating their PSA. This activity...
Lesson Planet: Curated OER
The Titanic Impact of Science
Discuss personal ideas about science and how a filmmaker can employ the arts to promote science. After reading an article, young scientists will discover how James Cameron is trying to interest people in the oceans. In groups, they will...
Lesson Planet: Curated OER
Cause Célèbre
In this exercise, learners identify characters from an "Archie" comic and discuss the relevance of "Archie" to today's youth. They create public service advertisements featuring celebrities to address common concerns among teenagers in... | https://lessonplanet.com/search?keywords=public+service |
CryEngine 3 programmer joins Red 5 Studios
Ury Zhilinsky will take the role of senior graphics programmer at Firefall dev
One of Crytek's leading graphics programmers has joined Red 5 Studios, developer of the ambitious free-to-play shooter, Firefall.
While at Crytek, Ury Zhilinsky worked as R&D manager and senior R&D graphics programmer, and contributed to both Crysis 2 and the development of CryEngine 3.
At Red 5 Studios he will take the role of senior graphics programmer, joining a team that includes veterans of World of Warcraft, Tribes and M.A.G..
"We're still a relatively small team here at Red 5 Studios because we've been very selective when choosing 'tribe' members," said CEO Mark Kern.
"It's not just about talent, it's about what happens when you put together a group that shares the same commitment to quality, community, and fun. Ury's skill and experience make him an asset that any team would be happy to have, but it's his development philosophy that makes him a fit for the 'tribe'."
In January 2010, Red 5 Studios was experiencing financial difficulties that led to redundancies and voluntary departures. However, the Chinese online developer The9 acquired a majority stake in the company in March of the same year. | https://www.gamesindustry.biz/cryengine-3-programmer-joins-red-5-studios |
---
abstract: 'Sparsity-constrained optimization is an important and challenging problem that has wide applicability in data mining, machine learning, and statistics. In this paper, we focus on sparsity-constrained optimization in cases where the cost function is a general nonlinear function and, in particular, the sparsity constraint is defined by a **graph-structured sparsity** model. Existing methods explore this problem in the context of sparse estimation in linear models. To the best of our knowledge, this is the first work to present an efficient approximation algorithm, namely, <span style="font-variant:small-caps;">Graph</span>-structured <span style="font-variant:small-caps;">M</span>atching <span style="font-variant:small-caps;">P</span>ursuit (<span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span>), to optimize a general nonlinear function subject to graph-structured constraints. We prove that our algorithm enjoys the strong guarantees analogous to those designed for linear models in terms of convergence rate and approximation accuracy. As a case study, we specialize <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> to optimize a number of well-known graph scan statistic models for the connected subgraph detection task, and empirical evidence demonstrates that our general algorithm performs superior over state-of-the-art methods that are designed specifically for the task of connected subgraph detection.'
author:
- |
Feng Chen, Baojian Zhou\
Computer Science Department, University at Albany – SUNY\
1400 Washington Avenue, Albany, NY, USA\
{fchen5, bzhou6}@albany.edu
title: 'Technical Report: A Generalized Matching Pursuit Approach for Graph-Structured Sparsity'
---
Introduction
============
In recent years, there has been a growing demand for efficient computational methods for analyzing high-dimensional data in a variety of applications such as bioinformatics, medical imaging, social networks, and astronomy. In many settings, sparsity has been shown to be effective for modeling latent structure in high-dimensional data while remaining a mathematically tractable concept. Beyond the ordinary, extensively studied sparsity model, a variety of **structured sparsity models** have been proposed in the literature, such as the sparsity models defined through trees [@hegde2014fast], groups [@jacob2009group], clusters [@huang2011learning], paths [@asterisstay2015icml], and connected subgraphs [@hegde2015nearly]. These sparsity models are designed to capture the interdependence of the locations of the non-zero components via prior knowledge, and are considered in the general sparsity-constrained optimization problem: $$\begin{aligned}
\min_{{\bf x} \in \mathbb{R}^n} f({\bf x})\ \ s.t. \ \ \text{supp}({\bf x}) \in \mathbb{M}, \label{problem:general}\end{aligned}$$ where $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is a differentiable cost function and the sparsity model $\mathbb{M}$ is defined as a family of structured supports: $\mathbb{M} = \{S_1, S_2, \cdots, S_L\}$, where $S_i \subseteq [n]$ satisfies a certain structure property (e.g., trees, groups, clusters). The original $k$-sparse recovery problem corresponds to the particular case where the model $\mathbb{M} = \{S\subseteq [n]\ |\ |S| \le k\}$.
The methods that focus on general nonlinear cost functions fall into two major categories, including **structured sparsity-inducing norms based** and **model-projection based**, both of which often assume that the cost function $f({\bf x})$ satisfies a certain convexity/smoothness condition, such as *Restricted Strong Convexity/Smoothness* (RSC/RSS) or *Stable Mode-Restricted Hessian* (SMRH). In particular, the methods in the first category replace the structured sparsity model with regularizations by a sparsity-inducing norm that is typically non-smooth and non-Euclidean [@bach2012structured]. The methods in the second category decompose Problem (\[problem:general\]) into an unconstrained subproblem and a model projection oracle that finds the best approximation of an arbitrary ${\bf x}$ in the model $\mathbb{M}$: $$\text{P}({\bf x}) = \arg \min_{{\bf x}^\prime \in \mathbb{R}^n} \|{\bf x} - {\bf x}^\prime\|_2^2\ \ \ s.t. \ \ \ \text{supp}({\bf x}^\prime) \in \mathbb{M}.$$ A number of methods are proposed specifically for the k-sparsity model $\mathbb{M} = \{S\subseteq [n]\ |\ |S| \le k\}$, including the forward-backward algorithm [@zhang2009adaptive], the gradient descent algorithm [@tewari2011greedy], the gradient hard-thresholding algorithms [@yuan2014icml; @bahmani2013greedy; @jain2014iterative], and the Newton greedy pursuit algorithm [@yuan2014newton]. A limited number of methods are proposed for other types of structured sparsity models via projected gradient descent, such as the union of subspaces [@blumensath2013compressed] and the union of nested subsets [@bahmani2016learning].
In this paper, we focus on general nonlinear optimization subject to graph-structured sparsity constraints. Our approach applies to data with an underlying graph structure in which nodes corresponding to $\text{supp}({\bf x})$ form a small number of connected components. By a proper choice of the underlying graph, several other structured sparsity models such as the “standard” $k$-sparsity, block sparsity, cluster sparsity, and tree sparsity can be encoded as special cases of graph-structured sparsity [@hegde2015fast].
We have two key observations: 1) **Sparsity-inducing norms.** There is no known sparsity-inducing norm that is able to capture graph-structured sparsity. The most relevant norm is generalized fused lasso [@xin2014efficient] that enforces the smoothness between neighboring entries in ${\bf x}$, but does not have fine-grained control over the number of connected components. Hence, existing methods based on sparsity-inducing norms are not directly applicable to the problem to be optimized. 2) **Model projection oracle.** There is no exact model projection oracle for a graph-structured sparsity model, as this exact projection problem is NP-hard due to a reduction from the classical Steiner tree problem [@hegde2015nearly]. As most existing model-projection based methods assume an exact model projection oracle, they are not directly applicable here as well. To the best of our knowledge, there is only one recent approach that admits inexact projections for a graph-structured sparsity model by assuming “head” and “tail” approximations for the projections, but is only applicable to linear regression problems [@hegde2015nearly]. This paper will generalize this approach to optimize general nonlinear functions. The main contributions of our study are summarized as follows:
- **Design of an efficient approximation algorithm.** A new and efficient algorithm, namely, <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span>, is developed to approximately solve Problem (\[problem:general\]) with a differentiable cost function and a graph-structured sparsity model. We show that <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> reduces to a state-of-the-art algorithm for graph-structured compressive sensing and linear models, namely, <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Cosamp</span>, when $f({\bf x})$ is a least square loss function.
- **Theoretical analysis and connections to existing methods.** The convergence rate and accuracy of the proposed <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> are analyzed under a condition of $f({\bf x})$ that is weaker than popular conditions such as RSC/RSS and SMRH. We demonstrate that <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> enjoy strong guarantees analogous to <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Cosamp</span> on both convergence rate and accuracy.
- **Compressive experiments to validate the effectiveness and efficiency of the proposed techniques.** The proposed <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> is applied to optimize a variety of graph scan statistic models for the task of connected subgraph detection. Extensive experiments demonstrate that <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> performs superior over state-of-the-art methods that are customized for the task of connected subgraph detection on both running time and accuracy.
The rest of this paper is organized as follows. Section 2 introduces the graph-structured sparsity model. Section 3 formalizes the problem and presents an efficient algorithm <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span>. Sections 4 and 5 present theoretical analysis. Section 6 gives the applications of <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span>. Experiments are presented in Section 7, and Section 8 describes future work.
Graph-Structured Sparsity Model
===============================
Given an underlying graph $\mathbb{G} = (\mathbb{V}, \mathbb{E})$ defined on the coefficients of the unknown vector ${\bf x}$, where $\mathbb{V} = [n]$ and $\mathbb{E} \subseteq \mathbb{V}\times \mathbb{V}$, a graph-structured sparsity model has the form:
$$\begin{aligned}
\mathbb{M}(k, g) = \{S \subseteq \mathbb{V}\ |\ |S| \le k, \gamma(S) = g\},\end{aligned}$$
where $k$ refers to an upper bound of the sparsity (total number of nodes) of $S$ and $\gamma(S) = g$ refers to the maximum number of connected components formed by the forest induced by $S$:
$\mathbb{G}_{S} = (S, \mathbb{E}_S)$
, where
$\mathbb{E}_S = \{(i, j) \ |\ i, j \in S, (i, j) \in \mathbb{E}\}$
. The corresponding model projection oracle is defined as $$\begin{aligned}
\text{P}({\bf x}) = \arg\min_{{\bf x}^\prime \in \mathbb{R}^n} \|{\bf x} - {\bf x}^\prime\|_2^2\ \ s.t.\ \ \text{supp}({\bf x}^\prime) \in \mathbb{M}(k,g). \label{eqn:projection}\end{aligned}$$ Solving Problem (\[eqn:projection\]) exactly is NP-hard due to a reduction from the classical Steiner tree problem. Instead of solving (\[eqn:projection\]) exactly, two nearly-linear time approximation algorithms with the following complementary approximation guarantees are proposed in [@hegde2015nearly]:
- **Tail approximation** ($\text{T}({\bf x})$): Find
$S\in \mathbb{M}(k_T, g)$
such that
$$\begin{aligned}
\|{\bf x} - {\bf x}_S\|_2 \le c_T \cdot \min_{S^\prime \in \mathbb{M}(k, g)} \|{\bf x} - {\bf x}_{S^\prime}\|_2,\end{aligned}$$
where $c_T = \sqrt{7}$ and $k_T=5k$.
- **Head approximation** ($\text{H}({\bf x})$): Find
$S\in \mathbb{M}(k_H,g)$
such that
$$\begin{aligned}
\|{\bf x}_S\|_2 \ge c_H\cdot \max_{S^\prime \in \mathbb{M}(k, g)} \|{\bf x}_{S^\prime}\|_2,\end{aligned}$$
where $c_H = \sqrt{1/14}$ and $k_H = 2k$.
If $c_T = c_H = 1$, then $\text{T}({\bf x}) = \text{H}({\bf x}) = S$ provides the exact solution of the model projection oracle: $\text{P}({\bf x}) = {\bf x}_S$, which indicates that the approximations stem from the fact that $c_T > 1$ and $c_H < 1$. We note that these two approximations originally involve additional budgets ($B$) based on edge weights, which are ignored in this paper by setting unit edge weights and $B = k - g$.
**Generalization:** The above graph-structured sparsity model is defined based on the number of connected components in the forest induced by $S$. This model can be generalized to graph-structured sparsity models that are defined based on other graph topology constraints, such as density, k-core, radius, cut, and various others, as long as their corresponding head and tail approximations are available.
Problem Statement and Algorithm
===============================
Given the graph-structured sparsity model, $\mathbb{M}(k, g)$, as defined above, the sparsity-constrained optimization problem to be studied is formulated as:
$$\begin{aligned}
\min_{{\bf x} \in \mathbb{R}^n} f({\bf x})\ \ s.t. \ \ \text{supp}({\bf x}) \in \mathbb{M}(k, g),\end{aligned}$$
where $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is a differentiable cost function, and the upper bound of sparsity $k$ and the maximum number of connected components $g$ are predefined by users.
Hegde et al. propose <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Cosamp</span>, a variant of <span style="font-variant:small-caps;">Cosamp</span> [@hegde2015nearly], to optimize the least squares cost function $f({\bf x}) = \|{\bf y}-{\bf A}{\bf x}\|_2^2$ based on the head and tail approximations. The authors show that <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Cosamp</span> achieves an information-theoretically optimal sample complexity for a wide range of parameters. In this paper, we generalize <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Cosamp</span> and propose a new algorithm, named <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span>, for Problem (6), as shown in Algorithm \[alg:Graph-MP\]. The first step (Line 3) in each iteration, ${\bf g} = \nabla f({\bf x}^i)$, evaluates the gradient of the cost function at the current estimate. Then a subset of nodes is identified via head approximation, $\Gamma = \text{H}({\bf g})$, which returns a support set with head value at least a constant fraction of the optimal head value, in which pursuing the minimization will be most effective. This subset is then merged with the support of the current estimate to obtain the merged subset $\Omega$, over which the function $f$ is minimized to produce an intermediate estimate, ${\bf b} = \arg\min_{{\bf x} \in \mathbb{R}^n} f({\bf x})\ \ s.t. \ \ {\bf x}_{\Omega^c} = 0$. Then a subset of nodes is identified via tail approximation, $B = \text{T}({\bf b})$, which returns a support set with tail value at most a constant times larger than the optimal tail value. The iterations terminate when the halting condition holds. There are two popular options to define the halting condition: 1) the change of the cost function from the previous iteration is less than a threshold ($|f({\bf x}^{i+1}) - f({\bf x}^i)| \le \epsilon$); and 2) the change of the estimated minimum from the previous iteration is less than a threshold ($\|{\bf x}^{i+1} - {\bf x}^i\|_2 \le \epsilon$), where $\epsilon$ is a predefined threshold (e.g., $\epsilon = 0.001$).
**Algorithm 1** (<span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span>): initialize $i = 0$, ${\bf x}^i = {\bf 0}$; repeat until the halting condition holds: ${\bf g} = \nabla f({\bf x}^i)$; $\Gamma = \text{H}({\bf g})$; $\Omega = \Gamma \cup \text{supp}({\bf x}^i)$; ${\bf b} = \arg\min_{{\bf x} \in \mathbb{R}^n} f({\bf x})\ \ s.t. \ \ {\bf x}_{\Omega^c} = 0$; $B = \text{T}({\bf b})$; ${\bf x}^{i+1} = {\bf b}_B$; $i \leftarrow i + 1$; return ${\bf x}^{i+1}$.
\[alg:Graph-MP\]
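A compact sketch of the iteration may help fix ideas. Here `head_approx`, `tail_approx`, and `restricted_argmin` are hypothetical stand-ins for the oracles $\text{H}(\cdot)$, $\text{T}(\cdot)$ and the constrained minimization step; none of them is specified by the pseudocode itself, so the sketch is illustrative only:

```python
import numpy as np

def graph_mp(grad_f, restricted_argmin, head_approx, tail_approx, n, eps=1e-3, max_iter=50):
    """Generalized matching pursuit for graph-structured sparsity (sketch)."""
    x = np.zeros(n)
    for _ in range(max_iter):
        g = grad_f(x)                                   # gradient at the current estimate
        omega = set(head_approx(g)) | set(np.flatnonzero(x).tolist())
        b = restricted_argmin(sorted(omega))            # minimize f with supp(x) restricted to omega
        support = sorted(tail_approx(b))                # prune the support back to the model
        x_new = np.zeros(n)
        x_new[support] = b[support]
        if np.linalg.norm(x_new - x) <= eps:            # halting condition 2) above
            return x_new
        x = x_new
    return x
```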
Theoretical Analysis of <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> under SRL condition
================================================================================================================================================
In this section, we give the definition of Stable Restricted Linearization (SRL) [@bahmani2013greedy] and we show that our <span style="font-variant:small-caps;">Graph-Mp</span> algorithm enjoys a theoretical approximation guarantee under this SRL condition.
We denote the restricted Bregman divergence of $f$ as $B_f \Big(\cdot \| \cdot \Big)$. The restricted Bregman divergence of $f: \mathbb{R}^p \rightarrow \mathbb{R}$ between points ${\bf x}$ and ${\bf y}$ is defined as $${B}_{f} \Big( {\bf x} \| {\bf y} \Big) = f({\bf x}) - f({\bf y}) - \langle \nabla_f({\bf y}), {\bf x} - {\bf y} \rangle,$$ where $\nabla_f({\bf y})$ gives a restricted subgradient of $f$. We say vector $\nabla f({\bf x})$ is a restricted subgradient of $f: \mathbb{R}^p \rightarrow \mathbb{R}$ at point ${\bf x}$ if $$f({\bf x} + {\bf y}) - f({\bf x}) \geq \langle \nabla f({\bf x}), {\bf y} \rangle$$ holds for all $k$-sparse vectors ${\bf y}$. \[definition\_restricted\_bregman\_divergence\]
Let ${\bf x}$ be a $k$-sparse vector in $\mathbb{R}^p$. For function $f: \mathbb{R}^p \rightarrow \mathbb{R}$ we define the functions $$\alpha_k ({\bf x}) = \textbf{sup} \Bigg\{ \frac{1}{\| {\bf y} \|_2^2} {B}_{f} ({\bf x} + {\bf y} \| {\bf x}) \Big| {\bf y} \neq {\bf 0} \text{ and } |supp({\bf x}) \cup supp({\bf y}) | \leq k \Bigg\}$$ and $$\beta_k ({\bf x}) = \textbf{inf} \Bigg\{ \frac{1}{\| {\bf y} \|_2^2} {B}_{f} ({\bf x} + {\bf y} | {\bf x}) \Big| {\bf y} \neq {\bf 0} \text{ and } |supp({\bf x}) \cup supp({\bf y}) | \leq k \Bigg\}$$ Then $f(\cdot)$ is said to have a Stable Restricted Linearization with constant $\mu_k$, or $\mu_k$-**SRL** if $\frac{\alpha_k({\bf x})}{\beta_{k} ({\bf x})} \leq \mu_k$ \[definition\_restricted\_linearization\]
Denote $\Delta = {\bf x_1} - {\bf x_2}, \Delta' = \nabla f({\bf x_1}) - \nabla f({\bf x_2})$, and let $ r \geq | supp({\bf x_1}) \cup supp({\bf x_2})|$, $\bar{\alpha}_l({\bf x_1,x_2}) = \alpha_l ({\bf x_1}) + \alpha_l ({\bf x_2})$, $\bar{\beta}_l ({\bf x_1,x_2}) = \beta_l ({\bf x_1}) + \beta_l ({\bf x_2})$, $\bar{\gamma}_l ({\bf x_1,x_2}) = \bar{\alpha}_l({\bf x_1,x_2}) - \bar{\beta}_l({\bf x_1,x_2})$. For any $R' \subseteq R = supp({\bf x_1} - {\bf x_2})$, we have $$\begin{aligned}
\| {\Delta'}_{R'} \| &\leq \bar{\alpha}_r \| \Delta_{R'}\|_2 + \bar{\gamma}_r\| \Delta \|_2 \\
\| {\Delta'}_{R'} \| &\geq \bar{\beta}_r \| \Delta_{R'}\|_2 - \bar{\gamma}_r\| \Delta_{R\backslash R'} \|_2 \label{equation_11_0}\end{aligned}$$ \[lemma\_4.2\]
We can get the following properties $$\begin{aligned}
\Big| \bar{\alpha}_r \| \Delta_{R'} \|_2^2 - \langle \Delta',\Delta_{R'} \rangle \Big| &\leq \bar{\gamma}_r \| \Delta_{R'} \|_2 \| \Delta \|_2
\label{equation_11_1}
\\
\Big| \| {\Delta'}_{R'}\|_2^2 - \bar{\alpha}_r \langle \Delta',\Delta_{R'} \rangle \Big| &\leq \bar{\gamma}_r \| {\Delta'}_{R'} \|_2 \| \Delta \|_2\label{equation_12_1}\end{aligned}$$ from [@bahmani2013greedy], where $R'$ be a subset of $R = \text{supp}(\Delta)$. It follows from (\[equation\_11\_1\]) and (\[equation\_12\_1\]) that $$\begin{aligned}
\| {\Delta'}_{R'}\|_2^2 - \bar{\alpha}_r^2\| \Delta_{R'}\|_2^2 &= \| {\Delta'}_{R'}\|_2^2 - \bar{\alpha}_r \langle \Delta', \Delta_{R'} \rangle + \bar{\alpha}_r \Big[ - \bar{\alpha}_r \| \Delta_{R'}\|_2^2 + \langle \Delta', \Delta_{R'} \rangle \Big] \nonumber \\
&\leq \bar{\gamma}_r \| {\Delta'}_{R'}\|_2 \| \Delta \|_2 + \bar{\alpha}_r \bar{\gamma}_r \| \Delta_{R'} \|_2 \| \Delta \|_2 \nonumber .\end{aligned}$$ It can be reformulated as the following $$\begin{aligned}
\| {\Delta'}_{R'}\|_2^2 - \bar{\gamma}_r \| {\Delta'}_{R'}\|_2 \| \Delta \|_2 &\leq \bar{\alpha}_r^2\| \Delta_{R'}\|_2^2 + \bar{\alpha}_r \bar{\gamma}_r \| \Delta_{R'} \|_2 \| \Delta \|_2 \nonumber \\
\| {\Delta'}_{R'}\|_2^2 - \bar{\gamma}_r \| {\Delta'}_{R'}\|_2 \| \Delta \|_2 + \frac{1}{4}{\bar{\gamma}_r}^2 \| \Delta \|_2^2 &\leq \bar{\alpha}_r^2\| \Delta_{R'}\|_2^2 + \bar{\alpha}_r \bar{\gamma}_r \| \Delta_{R'} \|_2 \| \Delta \|_2 + \frac{1}{4}{\bar{\gamma}_r}^2 \| \Delta \|_2^2 \nonumber \\
(\| \Delta'_{R'} \|_2 - \frac{1}{2} \bar{\gamma}_r \| \Delta \|_2 )^2 &\leq ( \bar{\alpha}_r \| \Delta_{R'} \|_2 + \frac{1}{2} \bar{\gamma}_r \| \Delta\|_2 )^2\end{aligned}$$ Hence, we have $\| \Delta'_{R'}\|_2 \leq \bar{\alpha}_r \| \Delta_{R'} \|_2 + \bar{\gamma}_r \| \Delta \|_2$. Inequality (\[equation\_11\_0\]) follows directly from [@bahmani2013greedy].
Suppose that $f$ satisfies $\mu_{8k}$-SRL with $\mu_{8k} \leq 1+\sqrt{\frac{1}{56}}$. Furthermore, suppose that for $\beta_{8k}$ in Definition \[definition\_restricted\_linearization\] there exists some $\epsilon > 0$ such that $\beta_{8k}({\bf x}) \geq \epsilon$ holds for all $8k$-sparse vectors ${\bf x}$. Then, for any true ${\bf x} \in \mathbb{R}^n$ with $\text{supp}({\bf x}) \in \mathbb{M}(k,g)$, the estimate ${\bf x}^{i+1}$ at the $(i+1)$-th iteration of Algorithm \[alg:Graph-MP\] obeys $$\| {\bf r}^{i+1}\|_2 \leq \sigma \|{\bf r} ^i\|_2 + \nu \| \nabla_I f({\bf x}) \|_2,$$ where $\sigma = \sqrt{ \mu_{8k}^2 - \Big( 2+ c_H - 2 \mu_{8k}\Big)^2}$ and $\nu = \frac{ (2+c_H - 2 \mu_{8k})(1+c_H)+ \sigma}{2 \epsilon \sigma }$. \[theorem\_4.2\_SRL\]
Let ${\bf r}^{i+1} = {\bf x}^{i+1} - {\bf x}$. $\| {\bf r}^{i+1} \|_2$ is upper bounded as $$\begin{aligned}
\| {\bf r}^{i+1} \|_2 = \| {\bf x}^{i+1} - {\bf x} \|_2 &\leq \| {\bf x}^{i+1} - {\bf b} \|_2 + \| {\bf x} - {\bf b} \|_2 \nonumber \\
&\leq c_T \| {\bf x} - {\bf b} \|_2 + \| {\bf x} - {\bf b}\|_2 \nonumber \\
&= (1+c_T)\| {\bf x} - {\bf b} \|_2. \nonumber\end{aligned}$$ The first inequality above follows by the triangle inequality and the second inequality follows by tail approximation. Since $\Omega = \Gamma \cup \text{supp}({\bf x}^i)$ and ${\bf b} = \arg\min_{{\bf x} \in \mathbb{R}^n} f({\bf x})\ \ s.t. \ \ {\bf x}_{\Omega^c} = {\bf 0}$, we have $$\begin{aligned}
\| {\bf x - b} \|_2 &\leq \| ({\bf x -b})_{\Omega^c} \|_2 + \|({\bf x -b})_{\Omega} \|_2 \nonumber \\
&= \| {\bf x}_{\Omega^c} \|_2 + \|({\bf x -b})_{\Omega} \|_2 \nonumber \\
&= \| ({\bf x - x}^{i})_{\Omega^c} \|_2 + \|({\bf x -b})_{\Omega} \|_2 \nonumber \\
&= \| {{\bf r}^{i}_{\Omega^c} } \|_2 + \|({\bf x -b})_{\Omega} \|_2 \nonumber\end{aligned}$$ Since ${\bf b}$ satisfies ${\bf b} = \arg\min_{{\bf x} \in \mathbb{R}^n} f({\bf x})\ \ s.t. \ \ {\bf x}_{\Omega^c} = 0$, we must have ${\nabla f({\bf b})}|_{\Omega} = {\bf 0}$. Then it follows from Corollary 2 in [@bahmani2013greedy], $$\begin{aligned}
\| (\nabla f({\bf x}) - \nabla f({\bf b}))_{\Omega} \|_2 &\geq \bar{\beta}_{6k} \| ({\bf x -b} )_{\Omega} \|_2 - \bar{\gamma}_{6k} \| ({\bf x - b})_{\Omega^c} \|_2 \nonumber \\
\| \nabla_{\Omega} f({\bf x}) \|_2 &\geq \bar{\beta}_{6k} \| ({\bf x -b} )_{\Omega} \|_2 - \bar{\gamma}_{6k} \| ({\bf x - b})_{\Omega^c} \|_2 \nonumber \\
\| \nabla_{\Omega} f({\bf x}) \|_2 &\geq \bar{\beta}_{6k} \| ({\bf x -b} )_{\Omega} \|_2 - \bar{\gamma}_{6k} \| ({\bf x} - {\bf x}^i)_{\Omega^c} \|_2, \nonumber\end{aligned}$$ where $\bar{\alpha}_{6k}({\bf x_1,x_2}) = \alpha_{6k} ({\bf x_1}) + \alpha_{6k} ({\bf x_2})$, $\bar{\beta}_{6k}({\bf x_1,x_2}) = \beta_{6k} ({\bf x_1}) + \beta_{6k} ({\bf x_2})$ and $\bar{\gamma}_{6k}({\bf x_1,x_2}) = \bar{\alpha}_{6k}({\bf x_1,x_2}) - \bar{\beta}_{6k}({\bf x_1,x_2})$. As $|supp({\bf x -b}) | \leq 6k$, we have $6k$-sparsity by Definition (\[definition\_restricted\_linearization\]). Note that $\Omega \cap R$ is a subset of $R$ and $\| (\nabla f({\bf x}) - \nabla f({\bf b}))_{\Omega} \|_2 \geq \| (\nabla f({\bf x}) - \nabla f({\bf b}))_{\Omega \cap R} \|_2$. Similarly, we have $({\bf x - b})_{\Omega} = ({\bf x - b})_{\Omega \cap R} $ and $({\bf x - b})_{\Omega^c} = ({\bf x - b})_{R \backslash (\Omega \cap R)} $. The second inequality follows by ${\nabla_{\Omega} f({\bf b})} = {\bf 0}$, and the third inequality follows by ${\bf b}_{\Omega^c} = {\bf 0}$ and ${\bf x}^{i}_{\Omega^c} = {\bf 0}$. Therefore, $\| {\bf x -b}\|_2$ can be further upper bounded as $$\begin{aligned}
\|{\bf x - b}\|_2 &\leq \| {{\bf r}^{i}_{\Omega^c} } \|_2 + \|({\bf x -b})_{\Omega} \|_2 \nonumber \\
&\leq \| {{\bf r}^{i}_{\Omega^c} } \|_2 + \frac{\bar{\gamma}_{6k} \| ({\bf x} - {\bf x}^i)_{\Omega^c} \|_2 }{\bar{\beta}_{6k}} + \frac{\| \nabla f({\bf x})_{\Omega}\|_2}{\bar{\beta}_{6k}} \nonumber \\
&= \Big[ 1 +\frac{\bar{\gamma}_{6k}}{\bar{\beta}_{6k}} \Big]\| {{\bf r}^{i}_{\Omega^c} } \|_2 + \frac{\| \nabla f({\bf x})_{\Omega}\|_2}{\bar{\beta}_{6k}}\end{aligned}$$ Let $R = \text{supp}({\bf x}^i - {\bf x})$ and $\Gamma = \textbf{H}(\nabla f({\bf x}^i)) \in \mathbb{M}^{+} = \{H \cup T|H\in \mathbb{M}(k_H,g),T \in \mathbb{M}(k_T,g)\}$. We notice that $R\in \mathbb{M}^{+}$. The component $\| \nabla_{\Gamma} f({\bf x}^i)\|_2$ can be lower bounded as $$\begin{aligned}
\| \nabla_{\Gamma} f({\bf x}^i)\|_2 &\geq c_H \| \nabla_{R} f({\bf x}^i) \|_2 \nonumber \\
&\geq c_H\| \nabla_{R} f({\bf x}^i) -\nabla_{R} f({\bf x}) \|_2 - c_H \| \nabla_{R} f({\bf x}) \|_2 \nonumber \\
&\geq c_H \bar{\beta}_{6k} \| {\bf x}^i - {\bf x} \|_2 - c_H \| \nabla_{I} f({\bf x}) \|_2 \nonumber \\
&= c_H \bar{\beta}_{6k} \| {\bf r}^i \|_2 - c_H \| \nabla_{I} f({\bf x}) \|_2
%&\geq \| \nabla_{\Phi \cap R} f({\bf x}^i) -\nabla_{\Phi \cap R} f({\bf x}) \|_2 - \| \nabla_{I} f({\bf x}) \|_2 \nonumber \\
%&\geq \bar{\beta}_{3k} \| ({\bf x^i - x})_{\Phi \cap R}\|_2 - \bar{\gamma}_{3k} \| ({\bf x^i - x})_{R \backslash (\Phi \cap R)} \|_2 - \| \nabla_{I} f({\bf x}) \|_2 \nonumber \\
%&= \bar{\beta}_{3k} \| ({\bf x^i - x})_{\Phi}\|_2 - \bar{\gamma}_{3k} \| ({\bf x^i - x})_{\Phi^c} \|_2 - \| \nabla_{I} f({\bf x}) \|_2 \nonumber \\
%&= \bar{\beta}_{3k} \| {\bf r}^{i}_{\Phi}\|_2 - \bar{\gamma}_{3k} \| {\bf r}^{i}_{\Phi^c} \|_2 - \| \nabla_{I} f({\bf x}) \|_2 \nonumber \\
%&\geq \textcolor{red}{\bar{\beta}_{3k} \| {\bf r}^{i}\|_2 - \bar{\gamma}_{3k} \| {\bf r}^{i} \|_2 - \| \nabla_{I} f({\bf x}) \|_2}
\label{equ_15}\end{aligned}$$ The first inequality follows from the head approximation guarantee and $R \in \mathbb{M}^{+}$. The second follows from the triangle inequality, and the third follows from Lemma (\[lemma\_4.2\]). The component $\| \nabla_{\Gamma} f ({\bf x}^i) \|_2$ can also be upper bounded as $$\begin{aligned}
\| \nabla_{\Gamma} f ({\bf x}^i) \|_2 &\leq \| \nabla_{\Gamma} f ({\bf x}^i) - \nabla_{\Gamma} f({\bf x}) \|_2 + \| \nabla_{\Gamma} f({\bf x}) \|_2 \nonumber \\
&\leq \| \nabla_{\Gamma \backslash R^c} f ({\bf x}^i) - \nabla_{\Gamma \backslash R^c} f({\bf x}) + \nabla_{\Gamma \cap R^c} f ({\bf x}^i) - \nabla_{\Gamma \cap R^c} f({\bf x}) \|_2 + \| \nabla_{\Gamma} f({\bf x}) \|_2 \nonumber \\
&\leq \| \nabla_{\Gamma \backslash R^c} f ({\bf x}^i) - \nabla_{\Gamma \backslash R^c} f({\bf x}) \|_2 + \| \nabla_{\Gamma \cap R^c} f ({\bf x}^i) - \nabla_{\Gamma \cap R^c} f({\bf x}) \|_2 + \| \nabla_{\Gamma} f({\bf x}) \|_2 \nonumber \\
&\leq \| \nabla_{\Gamma \backslash R^c} f ({\bf x}^i) - \nabla_{\Gamma \backslash R^c} f({\bf x}) \|_2 + \bar{\gamma}_{8k} \| {\bf r}^i \|_2 + \| \nabla_{\Gamma} f({\bf x}) \|_2 \nonumber \\
&\leq \bar{\alpha}_{6k} \| {\bf r}_{\Gamma \backslash R^c}^{i}\|_2 + \bar{\gamma}_{6k} \| {\bf r}^i \|_2 + \bar{\gamma}_{8k} \| {\bf r}^i \|_2 + \| \nabla_{I} f({\bf x}) \|_2
\label{equ_16}\end{aligned}$$ The first and third inequalities follow from the triangle inequality. The second inequality follows from $\Gamma = (\Gamma \cap R^c) \cup (\Gamma \backslash R^c)$. The last inequality follows from $\|( \nabla f({\bf x}^i) - \nabla f({\bf x}))_{R'}\|_2 \leq \bar{\gamma}_{k+r} \| {\bf x}^i - {\bf x}\|_2$, where $k \leq |R'|, r = |supp({\bf x}^i - {\bf x}) |$ and $R' \subseteq R^c$. By Lemma (\[lemma\_4.2\]), we have $\| \nabla_{\Gamma \backslash R^c} f ({\bf x}^i) - \nabla_{\Gamma \backslash R^c} f({\bf x}) \|_2 \leq \bar{\alpha}_{6k} \| {\bf r}_{\Gamma \backslash R^c}^{i}\|_2 + \bar{\gamma}_{6k} \| {\bf r}^i \|_2$. Combining Equation (\[equ\_15\]) and Equation (\[equ\_16\]), we have $$\begin{aligned}
c_H\bar{\beta}_{6k} \| {\bf r}^{i}\|_2 - c_H \| \nabla_{I} f({\bf x}) \|_2 &\leq \bar{\alpha}_{6k} \| {\bf r}_{\Gamma \backslash R^c}^{i}\|_2 + \bar{\gamma}_{6k} \| {\bf r}^i \|_2 + \bar{\gamma}_{8k} \| {\bf r}^i \|_2 + \| \nabla_{I} f({\bf x}) \|_2 \nonumber \\
c_H \bar{\beta}_{6k} \| {\bf r}^{i}\|_2 - c_H \| \nabla_{I} f({\bf x}) \|_2 &\leq \bar{\alpha}_{6k} \| {\bf r}_{\Gamma}^i\|_2 + \bar{\gamma}_{6k} \| {\bf r}^i \|_2 + \bar{\gamma}_{8k} \| {\bf r}^i \|_2 + \| \nabla_{I} f({\bf x}) \|_2 \nonumber \\
(c_H \bar{\beta}_{6k} - \bar{\gamma}_{6k} - \bar{\gamma}_{8k}) \| {\bf r}^{i} \|_2 - (1+c_H) \| \nabla_{I} f({\bf x}) \|_2 &\leq \bar{\alpha}_{6k} \| {\bf r}_{\Gamma}^{i}\|_2 \nonumber \\
\mu_{8k} \| {\bf r}_{\Gamma}^{i}\|_2 &\geq (c_H+2 - 2\mu_{8k}) \| {\bf r}^{i} \|_2 - \frac{1+c_H}{2\epsilon} \| \nabla_I f({\bf x}) \|_2 \nonumber\end{aligned}$$ Finally, we get $\| {\bf r}_{\Gamma}^i \| \geq \Big(\frac{2+c_H}{\mu_{8k}} - 2 \Big) \|{\bf r}^i \| - \frac{1+c_H}{2 \epsilon \mu_{8k}} \| \nabla_I f({\bf x})\|$. Let us assume the SRL parameter $\mu_{8k} \leq \frac{2 + c_H}{2}$. Following the same computations as in Lemma 9 of [@hegde2015approximation], we have $$\| {\bf r}_{\Gamma^c}^i \|_2 \leq \eta \| {\bf r}^i\| + \frac{(2+c_H - 2 \mu_{8k} )(1+c_H)}{2 \epsilon \mu_{8k}^2 \eta } \| \nabla_{I} f({\bf x})\|_2,$$ where $\eta = \sqrt{ 1 - (\frac{2+c_H}{\mu_{8k}} - 2)^2}$. Combining these bounds, we have $$\begin{aligned}
\|{\bf x - b}\|_2 &\leq \Big( 1 +\frac{\bar{\gamma}_{6k}}{\bar{\beta}_{6k}} \Big)\| {{\bf r}^{i}_{\Omega^c} } \|_2 + \frac{\| \nabla f({\bf x})_{\Omega}\|_2}{\bar{\beta}_{6k}} \nonumber \\
&\leq \mu_{8k} \| {{\bf r}^{i}_{\Omega^c} } \|_2 + \frac{\| \nabla f({\bf x})_{\Omega}\|_2}{\bar{\beta}_{6k}} \nonumber \\
&\leq \mu_{8k} \| {{\bf r}^{i}_{\Gamma^c} } \|_2 + \frac{\| \nabla f({\bf x})_{\Omega}\|_2}{2\epsilon} \nonumber \\
&\leq \sigma \|{\bf r} ^i\| + \nu \| \nabla_I f({\bf x}) \|_2 ,\end{aligned}$$ where $\sigma = \sqrt{ \mu_{8k}^2 - \Big( 2+ c_H - 2 \mu_{8k}\Big)^2}$ and $\nu = \frac{ (2+c_H - 2 \mu_{8k})(1+c_H)+ \sigma}{2 \epsilon \sigma }$. Hence, we prove this theorem.
Let the true parameter be ${\bf x} \in \mathbb{R}^n$ such that $\text{supp}({\bf x}) \in \mathbb{M}(k, g)$, and let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be a cost function that satisfies the SRL condition. The <span style="font-variant:small-caps;">Graph-MP</span> algorithm returns an $\hat{\bf x}$ such that $\text{supp}(\hat{\bf x}) \in \mathbb{M}(5k, g)$ and $\|{\bf x} - \hat{\bf x}\|_2 \le c \|\nabla_I f({\bf x})\|_2$, where $c= (1+\frac{\nu}{1-\sigma})$ and $I = \arg \max_{S \in \mathbb{M}(8k, g)} \|\nabla_S f({\bf x})\|_2$. The parameters $\sigma$ and $\nu$ are fixed constants defined in Theorem \[theorem\_4.2\_SRL\]. Moreover, <span style="font-variant:small-caps;">Graph-MP</span> runs in time $$\begin{aligned}
O\left((T+|\mathbb{E}|\log^3 n) \log (\|{\bf x}\|_2/ \|\nabla_I f({\bf x})\|_2)\right) \label{timecomplexity-1},\end{aligned}$$ where $T$ is the time complexity of one execution of the subproblem in Step 6 in <span style="font-variant:small-caps;">Graph-MP</span>. In particular, if $T$ scales linearly with $n$, then <span style="font-variant:small-caps;">Graph-MP</span> scales nearly linearly with $n$.
The $i$-th iterate of <span style="font-variant:small-caps;">Graph-MP</span> satisfies
$$\begin{aligned}
\|{\bf x} - {\bf x}^i\|_2 \le \sigma^i \|{\bf x}\|_2 + \frac{\nu}{1-\sigma} \|\nabla_I f({\bf x})\|_2.\end{aligned}$$
After $t = \left \lceil \log \left(\frac{\|{\bf x}\|_2}{\|\nabla_I f({\bf x})\|_2}\right) / \log \frac{1}{\sigma} \right \rceil $ iterations, <span style="font-variant:small-caps;">Graph-MP</span> returns an estimate $\hat{\bf x}$ satisfying $\|{\bf x} - \hat{{\bf x}}\|_2 \le (1 + \frac{\nu}{1 - \sigma}) \|\nabla_I f({\bf x})\|_2$, since $\sigma < 1$ and $\sum_{k = 0}^{i} \nu \sigma^k = \frac{\nu(1-\sigma^i)}{1-\sigma} \leq \frac{\nu}{1-\sigma}$. The time complexities of both head approximation and tail approximation are $O(|\mathbb{E}| \log^3 n)$. The time complexity of one iteration in <span style="font-variant:small-caps;">Graph-MP</span> is $O(T+|\mathbb{E}|\log^3 n)$, and the total number of iterations is $\left \lceil \log \left(\frac{\|{\bf x}\|_2}{\|\nabla_I f({\bf x})\|_2}\right) / \log \frac{1}{\sigma} \right \rceil $, and hence the overall time complexity follows.
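As a quick illustration of how the quantities in the corollary interact, the snippet below (with purely hypothetical numbers; $\sigma$ and $\nu$ are not computed from a real instance) evaluates the iteration bound $t$ and the resulting error bound.

```python
import math

def graph_mp_iteration_bound(x_norm, grad_I_norm, sigma, nu):
    """t = ceil(log(||x|| / ||grad_I f(x)||) / log(1/sigma)) iterations suffice,
    after which ||x - x_hat|| <= (1 + nu / (1 - sigma)) * ||grad_I f(x)||
    (requires 0 < sigma < 1)."""
    t = math.ceil(math.log(x_norm / grad_I_norm) / math.log(1.0 / sigma))
    error_bound = (1.0 + nu / (1.0 - sigma)) * grad_I_norm
    return t, error_bound

# Hypothetical values purely for illustration.
print(graph_mp_iteration_bound(x_norm=10.0, grad_I_norm=1e-3, sigma=0.5, nu=2.0))
```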
Theoretical Analysis of <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> under RSC/RSS condition {#theoremtical_1}
====================================================================================================================================================
[@yuan2013gradient]. For any integer $k>0$, we say $f({\bf x})$ is restricted $m_k$-strongly convex and $M_k$-strongly smooth if there exist $m_k$, $M_k >0$ such that $$\frac{m_k}{2} \| {\bf x} -{\bf y} \|_2^2 \leq f({\bf x}) - f({\bf y}) - \langle \nabla f({\bf y}),{\bf x}-{\bf y} \rangle \leq \frac{M_k}{2} \| {\bf x} -{\bf y} \|_2^2, \quad \forall \| {\bf x} - {\bf y} \|_0 \leq k.$$ \[definition\_RSC\]
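For intuition, in the quadratic special case $f({\bf x}) = \frac{1}{2}\|{\bf A}{\bf x} - {\bf b}\|_2^2$ the quantity $f({\bf x}) - f({\bf y}) - \langle \nabla f({\bf y}), {\bf x}-{\bf y}\rangle$ equals $\frac{1}{2}\|{\bf A}({\bf x}-{\bf y})\|_2^2$, so on a fixed support $S$ the RSC/RSS constants are the extreme eigenvalues of the restricted Gram matrix. The sketch below checks this for one hypothetical support (not a general certification of the condition).

```python
import numpy as np

def rsc_rss_on_support(A, S):
    """For f(x) = 0.5 * ||A x - b||_2^2, differences supported on S satisfy the
    RSC/RSS inequalities with constants equal to the smallest and largest
    eigenvalues of the restricted Gram matrix A_S^T A_S."""
    gram = A[:, S].T @ A[:, S]
    eigvals = np.linalg.eigvalsh(gram)
    return eigvals[0], eigvals[-1]     # (m_k, M_k) for this particular support

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20)) / np.sqrt(50)
print(rsc_rss_on_support(A, S=[0, 3, 7, 11]))
```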
Let $S$ be any index set with cardinality $|S| \leq k$ and $S \in \mathbb{M}(k,g)$. If $f$ is **($m_k,M_k,\mathbb{M}$)-RSC/RSS**, then $f$ satisfies the following property $$\|{\bf x} - {\bf y} - \frac{m_k}{M_k^2} \Big( \nabla_S f({\bf x}) - \nabla_S f({\bf y}) \Big) \|_2 \le \sqrt{1- (\frac{m_k}{M_k})^2} \|{\bf x} - {\bf y}\|_2$$ \[lemma\_3\]
By adding the two copies of inequality (\[definition\_RSC\]) obtained by exchanging ${\bf x}$ and ${\bf y}$, we have $$m_k \| {\bf x} -{\bf y} \|_2^2 \leq \langle \nabla f({\bf x}) - \nabla f({\bf y}),{\bf x}-{\bf y}\rangle \leq M_k \| {\bf x} -{\bf y} \|_2^2, \quad \forall \| {\bf x} - {\bf y} \|_0 \leq k .
\label{equation_3}$$ By Theorem 2.1.5 in [@nesterov2013introductory], we have $\langle \nabla f({\bf x}) - \nabla f({\bf y}),{\bf x}-{\bf y}\rangle \geq \frac{1}{L} \| \nabla f({\bf x}) - \nabla f({\bf y})\|_2^2$, which means $$\| \nabla_S f({\bf x}) - \nabla_S f({\bf y}) \|_2^2 \leq \| \nabla f({\bf x}) - \nabla f({\bf y}) \|_2^2 \leq M_k L \| {\bf x} - {\bf y} \|_2^2.$$ Let $L = M_k$ and then $\| \nabla_S f({\bf x}) - \nabla_S f({\bf y}) \|_2 \leq M_k \| {\bf x} - {\bf y} \|_2$. The left side of inequality (\[equation\_3\]) is $$m_k \| {\bf x} -{\bf y} \|_2^2 \leq \langle \nabla f({\bf x}) - \nabla f({\bf y}),{\bf x}-{\bf y}\rangle = ({\bf x} - {\bf y})^T (\nabla_S f({\bf x}) - \nabla_S f({\bf y})).
\label{equation_11}$$ The last equality in (\[equation\_11\]) follows from ${\bf x-y} = ({\bf x -y})_{S}$. For any ${\bf a}$ and ${\bf b}$, we have $\| {\bf a} - {\bf b} \|_2^2 = \| {\bf a} \|_2^2 + \| {\bf b} \|_2^2 - 2 {\bf a}^T {\bf b}$. Setting ${\bf a} = {\bf x} - {\bf y}$ and ${\bf b} = \frac{m_k}{M_k^2} \Big( \nabla_S f({\bf x}) - \nabla_S f({\bf y}) \Big)$, we have
$$\begin{aligned}
\| {\bf x} - {\bf y} -\frac{m_k}{M_k^2} \Big( \nabla_S f({\bf x}) - \nabla_S f({\bf y}) \Big) \|_2^2 & =& \| {\bf x} - {\bf y} \|_2^2 + \frac{m_k^2}{M_k^4} \| \nabla_S f({\bf x}) - \nabla_S f({\bf y}) \|_2^2 \\
& -& \frac{2m_k}{M_k^2}({\bf x} - {\bf y})^T(\nabla_S f({\bf x}) - \nabla_S f({\bf y})) \nonumber \\
& \leq& (1+ \frac{m_k^2}{M_k^2}- \frac{2m_k^2}{M_k^2}) \| {\bf x} - {\bf y} \|_2^2 \nonumber \\
& =& (1 - \frac{m_k^2}{M_k^2}) \| {\bf x} - {\bf y} \|_2^2.
\label{inequal_}\end{aligned}$$
Taking the square root of both sides of (\[inequal\_\]) proves the result. Alternatively, following Lemma 1 in [@yuan2013gradient] with $\delta$ replaced by $\frac{m_k}{M_k^2}$ and $\rho_s$ by $\sqrt{1- (\frac{m_k}{M_k})^2}$ yields the same result.
Consider the graph-structured sparsity model $\mathbb{M}(k, g)$ for some $k, g\in \mathbb{N}$ and a cost function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ that satisfies condition $\left(m_k, M_k, \mathbb{M}(8k, g)\right)$-RSC/RSS. If
$\alpha_0 = c_H - \sqrt{1 - \frac{m_k^2}{M_k^2}} \cdot (1 + c_H), $
then for any true ${\bf x} \in \mathbb{R}^n$ with $\text{supp}({\bf x}) \in \mathbb{M}(k, g)$, the iterates of Algorithm 1 obey
$$\begin{aligned}
\|{\bf x}^{i+1}-{\bf x}\|_2 \le \frac{M_k(1+c_T)\sqrt{1 - \alpha_0^2}}{M_k -\sqrt{M_k^2 - m_k^2}} \cdot \|{\bf x}^i-{\bf x}\|_2 + \frac{m_k(1 + c_T)}{M_k^2 - M_k \sqrt{M_k^2 - m_k^2}} \Big(\frac{1 + c_H+\alpha_0}{\alpha_0} + \frac{\alpha_0 (1 + c_H)}{\sqrt{1 - \alpha_0^2}} \Big)\|\nabla_I f({\bf x})\|_2,\nonumber \label{decay-rate-0}\end{aligned}$$
where
$ I = \arg \max_{S \in \mathbb{M}(8k, g)} \|\nabla_{S} f({\bf x})\|_2 $
\[theorem:convergence-0\]
Let ${\bf r}^{i+1} = {\bf x}^{i+1} - {\bf x}$. $\|{\bf r}^{i+1}\|_2$ is upper bounded as
$$\begin{aligned}
\|{\bf r}^{i+1}\| = \|{\bf x}^{i+1} - {\bf x}\|_2
&\le & \|{\bf x}^{i+1} - {\bf b}\|_2 + \|{\bf x} - {\bf b}\|_2 \nonumber\\
&\le& c_T \|{\bf x} - {\bf b}\|_2 + \|{\bf x} - {\bf b}\|_2 \nonumber\\
&=& (1 + c_T) \|{\bf x} - {\bf b}\|_2, \nonumber\end{aligned}$$
which follows from the definition of tail approximation. The component $\|({\bf x} - {\bf b})_\Omega\|_2^2$ is upper bounded as
$$\begin{aligned}
\|({\bf x} - {\bf b})_\Omega\|_2^2 &= & \langle {\bf b} - {\bf x}, ({\bf b} - {\bf x})_\Omega \rangle \nonumber \\
&= & \langle {\bf b} - {\bf x} - \frac{m_k}{M_k^2} \nabla_\Omega f({\bf b}) + \frac{m_k}{M_k^2} \nabla_\Omega f({\bf x}), ({\bf b} - {\bf x})_\Omega\rangle - \langle \frac{m_k}{M_k^2} \nabla_\Omega f({\bf x}), ({\bf b} - {\bf x})_\Omega \rangle \nonumber \\
&\le& \sqrt{1-\frac{m_k^2}{M_k^2}} \|{\bf b} - {\bf x}\|_2\cdot \|({\bf b} - {\bf x})_\Omega\|_2 + \frac{m_k}{M_k^2} \|\nabla_\Omega f({\bf x})\|_2 \cdot \|({\bf b} - {\bf x})_\Omega\|_2 \nonumber,\end{aligned}$$
where the second equality follows from the fact that $\nabla_\Omega f({\bf b}) = 0$ since ${\bf b}$ is the solution to the problem in Step 6 of Algorithm 1, and the last inequality follows from Lemma \[lemma\_3\]. After simplification, we have $$\|({\bf x} - {\bf b})_\Omega\|_2 \le \sqrt{1-\frac{m_k^2}{M_k^2}} \|{\bf b} - {\bf x}\|_2 + \frac{m_k}{M_k^2} \|\nabla_\Omega f({\bf x})\|_2$$ It follows that
$$\begin{aligned}
\|{\bf x} - {\bf b}\|_2 \le \|({\bf x} - {\bf b})_\Omega\|_2 + \|({\bf x} - {\bf b})_{\Omega^c}\|_2
\le \sqrt{1-\frac{m_k^2}{M_k^2}} \|{\bf b} - {\bf x}\|_2 + \frac{m_k}{M_k^2} \|\nabla_\Omega f({\bf x})\|_2 + \|({\bf x} - {\bf b})_{\Omega^c}\|_2 \nonumber \end{aligned}$$
After rearrangement we obtain
$$\begin{aligned}
\|{\bf b} - {\bf x}\|_2 & \le& \frac{M_k}{M_k - \sqrt{M_k^2 - m_k^2}} \Big( \|({\bf b} - {\bf x})_{\Omega^c}\|_2 + \frac{m_k}{M_k^2} \|\nabla_\Omega f({\bf x})\|_2 \Big) \nonumber \\
& =& \frac{M_k}{M_k - \sqrt{M_k^2 - m_k^2}} \Big( \|{\bf x}_{\Omega^c}\|_2 + \frac{m_k}{M_k^2} \|\nabla_\Omega f({\bf x})\|_2 \Big) \nonumber \\
& =& \frac{M_k}{M_k - \sqrt{M_k^2 - m_k^2}} \Big( \|({\bf x} - {\bf x}^i)_{\Omega^c}\|_2 + \frac{m_k}{M_k^2} \|\nabla_\Omega f({\bf x})\|_2 \Big) \nonumber \\
& =& \frac{M_k}{M_k - \sqrt{M_k^2 - m_k^2}} \Big( \|{\bf r}_{\Omega^c}\|_2 + \frac{m_k}{M_k^2} \|\nabla_\Omega f({\bf x})\|_2 \Big) \nonumber \\
& \le& \frac{M_k}{M_k - \sqrt{M_k^2 - m_k^2}} \Big(\|{\bf r}^i_{\Gamma^c}\|_2 + \frac{m_k}{M_k^2} \|\nabla_\Omega f({\bf x})\|_2 \Big) \nonumber \end{aligned}$$
where the first equality follows from the fact that $\text{supp}({\bf b}) \subseteq \Omega$, and the second equality and the last inequality follow from the fact that $\Omega = \Gamma \cup \text{supp}({\bf x}^i)$. Combining the above inequalities, we obtain
$$\begin{aligned}
\|{\bf r}^{i+1}\|_2 \le \frac{M_k (1+c_T)}{M_k - \sqrt{M_k^2 - m_k^2}} \Big(\|{\bf r}^i_{\Gamma^c}\|_2 + \frac{m_k}{M_k^2} \|\nabla_I f({\bf x})\|_2 \Big) \nonumber\end{aligned}$$
From Lemma \[lemma:r-Complement-SRC\], we have
$$\begin{aligned}
\|{\bf r}^i_{\Gamma^c}\|_2 \le \sqrt{1 - \alpha_0^2} \|{\bf r}^i\|_2 +\left[\frac{\beta_0}{\alpha_0} + \frac{\alpha_0\beta_0}{\sqrt{1-\alpha_0^2}}\right] \|\nabla_I f({\bf x})\|_2\end{aligned}$$
Combining the above inequalities, we prove the theorem.
Let ${\bf r}^i = {\bf x}^i - {\bf x}$ and $\Gamma = H(\nabla f({\bf x}^i))$. Then
$$\begin{aligned}
\|{\bf r}^i_{\Gamma^c}\|_2 \le \sqrt{1 - \alpha_0^2} \|{\bf r}^i\|_2 +\left[\frac{\beta_0}{\alpha_0} + \frac{\alpha_0\beta_0}{\sqrt{1-\alpha_0^2}}\right] \|\nabla_I f({\bf x})\|_2\end{aligned}$$
where $\alpha_0 = c_H - \sqrt{1 - \frac{m_k^2}{M_k^2}} \cdot (1 + c_H)$, $\beta_0 = \frac{m_k(1+c_H)}{M_k^2}$, and $I = \arg \max_{S \in \mathbb{M}(8k, g)} \|\nabla_S f({\bf x})\|_2$. We assume that $c_H$ and $\sqrt{1- \frac{m_k^2}{M_k^2}}$ are such that $\alpha_0 > 0$. \[lemma:r-Complement-SRC\]
Denote $\Phi = \text{supp}({\bf x}) \in \mathbb{M}(k, g), \Gamma = H(\nabla f({\bf x}^i)) \in \mathbb{M}(2k, g)$, ${\bf r}^i = {\bf x}^i - {\bf x}$, and $\Omega = \text{supp}({\bf r}^i) \in \mathbb{M}(6k, g)$. The component $\|\nabla_\Gamma f({\bf x}^i)\|_2$ can be lower bounded as
$$\begin{aligned}
\|\nabla_\Gamma f({\bf x}^i)\|_2 &\ge& c_H (\| \nabla_\Phi f({\bf x}^i)- \nabla_\Phi f({\bf x}) \|_2 - \|\nabla_\Phi f({\bf x})\|_2) \nonumber\\
&\ge& c_H \frac{M_k^2 - M_k \sqrt{M_k^2 - m_k^2}}{m_k} \|{\bf r}^i\|_2 - c_H \|\nabla_I f({\bf x})\|_2, \nonumber\end{aligned}$$
where the last inequality follows from Lemma \[lemma:twoinequalities\]. The component $\|\nabla_\Gamma f({\bf x}^i)\|_2$ can also be upper bounded as
$$\begin{aligned}
\|\nabla_\Gamma f({\bf x}^i)\|_2 &\le& \frac{M_k^2}{m_k} \|\frac{m_k}{M_k^2} \nabla_\Gamma f({\bf x}^i)- \frac{m_k}{M_k^2}\nabla_\Gamma f({\bf x})\|_2 + \|\nabla_\Gamma f({\bf x})\|_2 \nonumber \\
&\le& \frac{M_k^2}{m_k} \| \frac{m_k}{M_k^2} \nabla_\Gamma f({\bf x}^i) - \frac{m_k}{M_k^2} \nabla_\Gamma f({\bf x}) - {\bf r}^i_\Gamma + {\bf r}^i_\Gamma\|_2 + \|\nabla_\Gamma f({\bf x})\|_2 \nonumber\\
&\le & \frac{M_k^2}{m_k} \| \frac{m_k}{M_k^2} \nabla_{\Gamma\cup \Omega} f({\bf x}^i) - \frac{m_k}{M_k^2} \nabla_{\Gamma\cup \Omega} f({\bf x}) - {\bf r}^i_{\Gamma\cup \Omega}\|_2 + \frac{M_k^2}{m_k} \|{\bf r}^i_\Gamma\|_2 + \|\nabla_\Gamma f({\bf x})\|_2 \nonumber\\
&\le& \frac{M_k \sqrt{M_k^2 - m_k^2}}{m_k} \cdot \|{\bf r}^i\|_2 + \frac{M_k^2}{m_k} \|{\bf r}^i_\Gamma\|_2 + \|\nabla_{I} f({\bf x})\|_2, \nonumber\end{aligned}$$
where the last inequality follows from condition $(m_k, M_k, \mathbb{M}(8k, g))$-RSC/RSS and the fact that ${\bf r}^i_{\Gamma\cup \Omega} = {\bf r}^i$. Combining the two bounds and grouping terms, we have $$\begin{aligned}
\|{\bf r}^i_\Gamma\|_2 &\ge& \alpha_0 \cdot \|{\bf r}^i\|_2 - \beta_0 \cdot \|\nabla_I f({\bf x})\|_2\end{aligned}$$ where $\alpha_0 = \Big[ c_H - \sqrt{1 - \frac{m_k^2}{M_k^2}} \cdot (1 + c_H) \Big]$ and $\beta_0 = \frac{m_k(1+c_H)}{M_k^2}$. We assume that the constant $\delta = \sqrt{1- \frac{m_k^2}{M_k^2}}$ is small enough such that $c_H > \frac{\delta}{1-\delta}$. We consider two cases.
**Case 1**: The value of $\|{\bf r}^i\|_2$ satisfies $\alpha_0 \|{\bf r}^i\|_2 \le \beta_0 \|\nabla_I f({\bf x})\|_2$. Then consider the vector ${\bf r}^i_{\Gamma^c}$. We have $$\begin{aligned}
\|{\bf r}^i_{\Gamma^c}\|_2 \le \|{\bf r}^i\|_2 \le \frac{\beta_0}{\alpha_0} \|\nabla_I f({\bf x})\|_2 \nonumber\end{aligned}$$
**Case 2**: The value of $\|{\bf r}^i\|_2$ satisfies $\alpha_0 \|{\bf r}^i\|_2 \ge \beta_0 \|\nabla_I f({\bf x})\|_2$. We get $$\|{\bf r}^i_\Gamma\|_2 \ge \|{\bf r}^i\|_2 \left(\alpha_0 - \frac{\beta_0 \|\nabla_I f({\bf x})\|_2 }{\|{\bf r}^i\|_2} \right)$$
Moreover, we also have $\|{\bf r}^i\|_2^2 = \|{\bf r}^i_\Gamma\|_2^2 + \|{\bf r}^i_{\Gamma^c}\|_2^2$. Therefore, we obtain $$\|{\bf r}^i_{\Gamma^c}\|_2 \le \|{\bf r}^i\|_2 \sqrt{1 - \left(\alpha_0 - \frac{\beta_0 \|\nabla_I f({\bf x})\|_2 }{\|{\bf r}^i\|_2} \right)^2}.$$
For a given $0 < \omega_0 < 1$ and a free parameter $0 < \omega < 1$, a straightforward calculation yields that $ \sqrt{1-\omega_0^2} \le \frac{1}{\sqrt{1 - \omega^2}} - \frac{\omega}{\sqrt{1-\omega^2}} \omega_0$. Therefore, substituting $\omega_0 = \alpha_0 - \frac{\beta_0 \|\nabla_I f({\bf x})\|_2}{\|{\bf r}^i\|_2}$ into the bound for $\|{\bf r}^i_{\Gamma^c}\|_2$, we get $$\begin{aligned}
\|{\bf r}^i_{\Gamma^c}\|_2 &\le& \|{\bf r}^i\|_2 \left(\frac{1}{\sqrt{1 - \omega^2}} - \frac{\omega}{\sqrt{1-\omega^2}} \left(\alpha_0 - \frac{\beta_0 \|\nabla_I f({\bf x})\|_2}{\|{\bf r}^i\|_2} \right)\right) \\
&=& \frac{1 - \omega\alpha_0 }{\sqrt{1 - \omega^2}} \|{\bf r}^i\|_2 + \frac{\omega\beta_0}{\sqrt{1-\omega^2}} \|\nabla_I f({\bf x})\|_2\end{aligned}$$
The coefficient preceding $\|{\bf r}^i\|_2$ determines the overall convergence rate, and the minimum value of the coefficient is attained by setting $\omega = \alpha_0$. Substituting, we obtain $$\|{\bf r}^i_{\Gamma^c}\|_2 \le \sqrt{1 - \alpha_0^2} \|{\bf r}^i\|_2 +\left[\frac{\beta_0}{\alpha_0} + \frac{\alpha_0\beta_0}{\sqrt{1-\alpha_0^2}}\right] \|\nabla_I f({\bf x})\|_2,$$ which proves the lemma.
Theoretical Analysis of <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> under WRSC condition
=================================================================================================================================================
In order to demonstrate the accuracy of estimates using Algorithm 1 we require a variant of the *Restricted Strong Convexity/Smoothness* (RSC/RSS) conditions proposed in [@yuan2014icml] to hold. The RSC condition basically characterizes cost functions that have quadratic bounds on the derivative of the objective function when restricted to model-sparse vectors. The condition we rely on, the Weak Restricted Strong Convexity (WRSC), can be formally defined as follows:
A function $f({\bf x})$ has condition ($\xi$, $\delta$, $\mathbb{M}$)-WRSC if $\forall {\bf x}, {\bf y} \in \mathbb{R}^n$ and $\forall S \in \mathbb{M}$ with $\text{supp}({\bf x}) \cup \text{supp}({\bf y}) \subseteq S $, the following inequality holds for some $\xi > 0$ and $0 < \delta < 1$: $$\begin{aligned}
\|{\bf x} - {\bf y} - \xi \nabla_S f({\bf x}) + \xi \nabla_S f({\bf y})\|_2 \le \delta \|{\bf x} - {\bf y}\|_2. \end{aligned}$$
1\) In the special case where $f({\bf x}) = \|{\bf y} - A{\bf x}\|_2^2$ and $\xi = 1$, condition ($\xi$, $\delta$, $\mathbb{M}$)-WRSC reduces to the well known Restricted Isometry Property (RIP) condition in compressive sensing. 2) The RSC and RSS conditions imply condition WRSC, which indicates that condition WRSC is no stronger than the RSC and RSS conditions [@yuan2014icml].
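The following sketch illustrates the first remark for the quadratic loss with $\xi = 1$, using the gradient convention $\nabla f({\bf x}) = -{\bf A}^T({\bf y} - {\bf A}{\bf x})$ that also appears in the reduction to <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Cosamp</span> later in this section; it returns the RIP-type contraction factor for one particular pair of vectors (an illustration, not a verification of WRSC over the whole model).

```python
import numpy as np

def wrsc_ratio(A, x, z, S):
    """For f(x) = ||y - A x||_2^2 with gradient taken as -A^T (y - A x) and
    xi = 1, the WRSC left-hand side reduces to ||(I - A_S^T A_S)(x - z)_S||_2.
    Returns that norm divided by ||x - z||_2 (an empirical delta for this pair).
    Assumes supp(x) and supp(z) are contained in S."""
    d = x - z
    lhs = d[S] - A[:, S].T @ (A @ d)   # (I - A_S^T A_S) applied to (x - z)_S
    return np.linalg.norm(lhs) / np.linalg.norm(d)
```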
[@yuan2014icml] Assume that $f$ is a differentiable function. If $f$ satisfies condition $(\xi, \delta, \mathbb{M})$-WRSC, then $\forall {\bf x}, {\bf y} \in \mathbb{R}^n$ with $\text{supp}({\bf x})\cup \text{supp}({\bf y})\subset S \in \mathbb{M}$, the following two inequalities hold
$$\begin{aligned}
\frac{1 - \delta}{\xi} \|{\bf x} - {\bf y}\|_2 \le \|\nabla_S f({\bf x}) - \nabla_S f({\bf y})\|_2 \le \frac{1 + \delta}{\xi} \|{\bf x} - {\bf y}\|_2, \nonumber \\
f({\bf x}) \le f({\bf y}) + \langle \nabla f({\bf y}), {\bf x} - {\bf y} \rangle + \frac{1+\delta}{2\xi} \|{\bf x} - {\bf y}\|_2^2. \nonumber
\end{aligned}$$
\[lemma:twoinequalities\]
Let ${\bf r}^i = {\bf x}^i - {\bf x}$ and $\Gamma = H(\nabla f({\bf x}^i))$. Then
$$\begin{aligned}
\|{\bf r}^i_{\Gamma^c}\|_2 \le \sqrt{1 - \eta^2} \|{\bf r}^i\|_2 + \left[\frac{\xi(1 + c_H)}{\eta} + \frac{\xi \eta (1 + c_H)}{\sqrt{1 - \eta^2}}\right] \|\nabla_I f({\bf x})\|_2,\nonumber\end{aligned}$$
where
$\eta = c_H(1 - \delta) - \delta$
and
$I = \arg \max_{S \in \mathbb{M}(8k, g)} \|\nabla_S f({\bf x})\|_2.$
We assume that $c_H$ and $\delta$ are such that $\eta > 0$. \[lemma:r-Complement\]
Denote $\Phi = \text{supp}({\bf x}) \in \mathbb{M}(k, g), \Gamma = H(\nabla f({\bf x}^i)) \in \mathbb{M}(2k, g)$, ${\bf r}^i = {\bf x}^i - {\bf x}$, and $\Omega = \text{supp}({\bf r}^i) \in \mathbb{M}(6k, g)$. The component $\|\nabla_\Gamma f({\bf x}^i)\|_2$ can be lower bounded as
$$\begin{aligned}
\|\nabla_\Gamma f({\bf x}^i)\|_2 &\ge& c_H (\| \nabla_\Phi f({\bf x}^i)- \nabla_\Phi f({\bf x}) \|_2 - \|\nabla_\Phi f({\bf x})\|_2 )\nonumber\\
&\ge& \frac{c_H (1 - \delta)}{\xi} \|{\bf r}^i\|_2 - c_H \|\nabla_I f({\bf x})\|_2, \nonumber\end{aligned}$$
where the last inequality follows from Lemma \[lemma:twoinequalities\]. The component $\|\nabla_\Gamma f({\bf x}^i)\|_2$ can also be upper bounded as
$$\begin{aligned}
\|\nabla_\Gamma f({\bf x}^i)\|_2 &\le&\frac{1}{\xi} \|\xi \nabla_\Gamma f({\bf x}^i)- \xi\nabla_\Gamma f({\bf x})\|_2 + \|\nabla_\Gamma f({\bf x})\|_2 \nonumber \\
&\le& \frac{1}{\xi} \|\xi \nabla_\Gamma f({\bf x}^i) - \xi \nabla_\Gamma f({\bf x}) - {\bf r}^i_\Gamma + {\bf r}^i_\Gamma\|_2 + \|\nabla_\Gamma f({\bf x})\|_2 \nonumber\\
&\le & \frac{1}{\xi} \| \xi \nabla_{\Gamma\cup \Omega} f({\bf x}^i) - \xi \nabla_{\Gamma\cup \Omega} f({\bf x}) - {\bf r}^i_{\Gamma\cup \Omega}\|_2 + \frac{1}{\xi}\|{\bf r}^i_\Gamma\|_2 + \|\nabla_\Gamma f({\bf x})\|_2 \nonumber\\
&\le&\frac{\delta}{\xi} \cdot \|{\bf r}^i\|_2 + \frac{1}{\xi}\|{\bf r}^i_\Gamma\|_2+ \|\nabla_{I} f({\bf x})\|_2, \nonumber\end{aligned}$$
where the last inequality follows from condition $(\xi, \delta, \mathbb{M}(8k, g))$-WRSC and the fact that ${\bf r}^i_{\Gamma\cup \Omega} = {\bf r}^i$. Let $\eta = \left(c_H \cdot (1 - \delta) - \delta\right)$. Combining the two bounds and grouping terms, we have $\|{\bf r}^i_\Gamma\|_2 \ge \eta \|{\bf r}^i\|_2 - \xi(1+c_H) \|\nabla_I f({\bf x})\|_2$. After a number of algebraic manipulations similar to those used on page 11 of [@hegde2014approximation], we prove the lemma.
Consider the graph-structured sparsity model $\mathbb{M}(k, g)$ for some $k, g\in \mathbb{N}$ and a cost function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ that satisfies condition $\left(\xi, \delta, \mathbb{M}(8k, g)\right)$-WRSC. If $\eta = c_H(1 - \delta) - \delta > 0$, then for any true ${\bf x} \in \mathbb{R}^n$ with $\text{supp}({\bf x}) \in \mathbb{M}(k, g)$, the iterates of Algorithm 1 obey $$\begin{aligned}
\|{\bf x}^{i+1}-{\bf x}\|_2 \le \alpha \|{\bf x}^i-{\bf x}\|_2 + \beta \|\nabla_I f({\bf x})\|,\label{decay-rate-1}\end{aligned}$$ where $\beta = \frac{\xi(1+c_T)}{1-\delta} \left[\frac{(1 + c_H)}{\eta} + \frac{\eta (1 + c_H)}{\sqrt{1 - \eta^2}} + 1\right]$, $\alpha = \frac{(1+c_T)}{1-\delta} \sqrt{1 - \eta^2}$, and $I = \arg \max_{S \in \mathbb{M}(8k, g)} \|\nabla_{S} f({\bf x})\|_2.$ \[theorem:convergence\]
Let ${\bf r}^{i+1} = {\bf x}^{i+1} - {\bf x}$. $\|{\bf r}^{i+1}\|_2$ is upper bounded as $$\begin{aligned}
\|{\bf r}^{i+1}\|_2 = \|{\bf x}^{i+1} - {\bf x}\|_2
&\le & \|{\bf x}^{i+1} - {\bf b}\|_2 + \|{\bf x} - {\bf b}\|_2 \nonumber\\
&\le& c_T \|{\bf x} - {\bf b}\|_2 + \|{\bf x} - {\bf b}\|_2 \nonumber\\
&=& (1 + c_T) \|{\bf x} - {\bf b}\|_2, \nonumber\end{aligned}$$ which follows from the definition of tail approximation. The component $\|({\bf x} - {\bf b})_\Omega\|_2^2$ is upper bounded as $$\begin{aligned}
\|({\bf x} - {\bf b})_\Omega\|_2^2 & =& \langle {\bf b} - {\bf x}, ({\bf b} - {\bf x})_\Omega \rangle \nonumber \\
& =& \langle {\bf b} - {\bf x} - \xi \nabla_\Omega f({\bf b}) + \xi \nabla_\Omega f({\bf x}), ({\bf b} - {\bf x})_\Omega\rangle - \langle\xi \nabla_\Omega f({\bf x}), ({\bf b} - {\bf x})_\Omega \rangle \nonumber \\
& \le& \delta \|{\bf b} - {\bf x}\|_2 \|({\bf b} - {\bf x})_\Omega\| + \xi \|\nabla_\Omega f({\bf x})\|_2 \|({\bf b} - {\bf x})_\Omega\|_2 \nonumber,\end{aligned}$$ where the second equality follows from the fact that $\nabla_\Omega f({\bf b}) = 0$ since ${\bf b}$ is the solution to the problem in Step 6 of Algorithm 1, and the last inequality follows from condition $(\xi, \delta, \mathbb{M}(8k, g))$-WRSC. After simplification, we have $$\|({\bf x} - {\bf b})_\Omega\|_2 \le \delta \|{\bf b} - {\bf x}\|_2 + \xi \|\nabla_\Omega f({\bf x})\|_2.$$ It follows that $$\begin{aligned}
\|({\bf x} - {\bf b})\|_2 \le \|({\bf x} - {\bf b})_\Omega\|_2 + \|({\bf x} - {\bf b})_{\Omega^c}\|_2 \le \delta \|{\bf b} - {\bf x}\|_2 + \xi \|\nabla_\Omega f({\bf x})\|_2 + \|({\bf x} - {\bf b})_{\Omega^c}\|_2. \nonumber \end{aligned}$$ After rearrangement we obtain $$\begin{aligned}
\|{\bf b} - {\bf x}\|_2 & \le& \frac{\|({\bf b} - {\bf x})_{\Omega^c}\|_2}{1 - \delta} + \frac{\xi \|\nabla_\Omega f({\bf x})\|_2}{1 - \delta} \nonumber \\
& =& \frac{\|{\bf x}_{\Omega^c}\|_2}{1 - \delta} + \frac{\xi \|\nabla_\Omega f({\bf x})\|_2}{1 - \delta}
= \frac{\|({\bf x} - {\bf x}^i)_{\Omega^c}\|_2}{1 - \delta} + \frac{\xi \|\nabla_\Omega f({\bf x})\|_2}{1 - \delta} \nonumber \\
& =& \frac{\|{\bf r}_{\Omega^c}^{i}\|_2}{1 - \delta} + \frac{\xi \|\nabla_\Omega f({\bf x})\|_2}{1 - \delta}
\le \frac{\|{\bf r}^i_{\Gamma^c}\|_2}{1 - \delta} + \frac{\xi \|\nabla_\Omega f({\bf x})\|_2}{1 - \delta}, \nonumber \end{aligned}$$ where the first equality follows from the fact that $\text{supp}({\bf b}) \subseteq \Omega$, and the second equality and the last inequality follow from the fact that $\Omega = \Gamma \cup \text{supp}({\bf x}^i)$. Combining the above inequalities, we obtain $$\begin{aligned}
\|{\bf r}^{i+1}\|_2 \le (1 + c_T) \frac{\|{\bf r}^i_{\Gamma^c}\|_2}{1-\delta} + (1 + c_T) \frac{\xi \|\nabla_I f({\bf x})\|_2}{1-\delta}. \nonumber\end{aligned}$$ From Lemma \[lemma:r-Complement\], we have $$\begin{aligned}
\|{\bf r}^i_{\Gamma^c}\|_2 \le \sqrt{1 - \eta^2} \|{\bf r}^i\|_2 + \left[\frac{\xi(1 + c_H)}{\eta} + \frac{\xi \eta (1 + c_H)}{\sqrt{1 - \eta^2}}\right] \|\nabla_I f({\bf x})\|_2 \nonumber\end{aligned}$$ Combining the above inequalities, we prove the theorem.
As indicated in Theorem \[theorem:convergence\], under proper conditions the estimation error of <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> is determined by the multiplier of $\|\nabla_{S} f({\bf x})\|_2$, and the convergence rate before reaching this error level is geometric. In particular, if the true ${\bf x}$ is sufficiently close to an unconstrained minimum of $f$, then the estimation error is negligible because $\|\nabla_{S} f({\bf x})\|_2$ has a small magnitude. In the ideal case where $\nabla f({\bf x}) = 0$, it is guaranteed that we can obtain the true ${\bf x}$ to arbitrary precision. If we further assume that $\alpha = \frac{(1+c_T) \sqrt{1 - \eta^2}}{1-\delta} < 1$, then exact recovery can be achieved in finite iterations.
The shrinkage rate $\alpha < 1$ controls the convergence of <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span>, and it implies that when $\delta$ is very small, the approximation factors $c_H$ and $c_T$ satisfy $$\begin{aligned}
c^2_H > 1 - 1/ (1+c_T)^2.\end{aligned}$$ We note that the head and tail approximation algorithms designed in [@hegde2015nearly] do not satisfy the above condition, with $c_T = \sqrt{7}$ and $c_H = \sqrt{1/14}$. However, as proved in [@hegde2015nearly], the approximation factor $c_H$ of any given head approximation algorithm can be **boosted** to any arbitrary constant $c_H^\prime < 1$, such that the above condition is satisfied. Empirically it is not necessary to “boost” the head-approximation algorithm as strongly as suggested by the analysis in [@hegde2014approximation].
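A two-line check of this condition with the constants quoted above (purely illustrative) makes the need for boosting explicit:

```python
import math

c_T, c_H = math.sqrt(7.0), math.sqrt(1.0 / 14.0)     # factors from [hegde2015nearly]
threshold = 1.0 - 1.0 / (1.0 + c_T) ** 2              # need c_H^2 > threshold
print(c_H ** 2, threshold, c_H ** 2 > threshold)      # ~0.071 vs ~0.925 -> boosting required
```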
Let ${\bf x} \in \mathbb{R}^n$ be a true optimum such that $\text{supp}({\bf x}) \in \mathbb{M}(k, g)$, and $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be a cost function that satisfies condition $\left(\xi, \delta, \mathbb{M}(8k, g)\right)$-WRSC. Assuming that $\alpha < 1$, <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> returns a $\hat{\bf x}$ such that, $\text{supp}(\hat{\bf x}) \in \mathbb{M}(5k, g)$ and $\|{\bf x} - \hat{\bf x}\|_2 \le c \|\nabla_I f({\bf x})\|_2$, where $c= (1+\frac{\beta}{1-\alpha})$ is a fixed constant. Moreover, <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> runs in time $$\begin{aligned}
O\left((T+|\mathbb{E}|\log^3 n) \log (\|{\bf x}\|_2/ \|\nabla_I f({\bf x})\|_2)\right) \label{timecomplexity-0},\end{aligned}$$ where $T$ is the time complexity of one execution of the subproblem in Line 6. In particular, if $T$ scales linearly with $n$, then <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> scales nearly linearly with $n$.
The $i$-th iterate of Algorithm 1 satisfies $$\begin{aligned}
\|{\bf x} - {\bf x}^i\|_2 \le \alpha^i \|{\bf x}\|_2 + \frac{\beta}{1-\alpha} \|\nabla_I f({\bf x})\|_2.\end{aligned}$$ After $t = \left\lceil \log \left(\frac{\|{\bf x}\|_2}{\|\nabla_I f({\bf x})\|_2}\right) / \log \frac{1}{\alpha} \right\rceil $ iterations, Algorithm 1 returns an estimate $\hat{x}$ satisfying $\|{\bf x} - \hat{{\bf x}}\|_2 \le (1 + \frac{\beta}{1 - \alpha}) \|\nabla_I f({\bf x})\|_2.$ The time complexities of both head and tail approximations are $O(|\mathbb{E}| \log^3 n)$. The time complexity of one iteration in Algorithm 1 is $(T+|\mathbb{E}|\log^3 n)$, and the total number of iterations is $\left \lceil \log \left(\frac{\|{\bf x}\|_2}{\|\nabla_I f({\bf x})\|_2}\right) / \log \frac{1}{\alpha} \right \rceil $, and the overall time complexity follows.
The previous algorithm <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Cosamp</span> [@hegde2015nearly] for compressive sensing is a special case of <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span>. Assume $f({\bf x}) = \|{\bf y} - {\bf A}{\bf x}\|_2^2$. 1) **Reduction.** The gradient in Step 3 of <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> has the form: $\nabla f({\bf x}^i) = -{\bf A}^T({\bf y}-{\bf A}{\bf x}^i)$, and an analytical form of ${\bf b}$ in Step 6 can be obtained as: ${\bf b}_\Omega = {\bf A}^+_\Omega {\bf y}$ and ${\bf b}_{\Omega^c} = 0$, where ${\bf A}^+_\Omega = ({\bf A}_\Omega^T{\bf A}_\Omega)^{-1}{\bf A}_\Omega^T$, which indicates that <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> reduces to <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Cosamp</span> in this scenario. 2) **Shrinkage rate.** The shrinkage rate $\alpha$ of <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> is analogous to that of <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Cosamp</span>, even though the shrinkage rate of <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Cosamp</span> is optimized based on the RIP constants. In particular, they are identical when $\delta$ is very small. 3) **Constant component.** Assume that $\xi = 1$. Condition $(\xi, \delta, \mathbb{M}(k, g))$-WRSC then reduces to the RIP condition in compressive sensing. Let ${\bf e} = {\bf y}-{\bf A}{\bf x}$. The component $\|\nabla f({\bf x})\|_2 = \|{\bf A}^T {\bf e}\|_2$ is upper bounded by $\sqrt{1 + \delta} \|{\bf e}\|_2$ [@hegde2014approximation]. The constant $ \beta \|\nabla_I f({\bf x})\|$ is then upper bounded by $\frac{\xi(1+c_T)\sqrt{1 + \delta}}{1-\delta} \left[\frac{(1 + c_H)}{\eta} + \frac{\eta (1 + c_H)}{\sqrt{1 - \eta^2}} + 1\right]\|{\bf e}\|_2$, which is analogous to the constant of <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Cosamp</span>, and they are identical when $\delta$ is very small.
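A minimal sketch of the two specialized steps in this reduction is given below (head and tail projections are omitted; the function name and interface are illustrative assumptions).

```python
import numpy as np

def graph_cosamp_core_steps(A, y, x_i, Omega):
    """Specialization of Graph-MP to f(x) = ||y - A x||_2^2: the gradient used
    for head approximation and the restricted least-squares solve of Step 6,
    i.e. b_Omega = A_Omega^+ y and b = 0 elsewhere."""
    grad = -A.T @ (y - A @ x_i)                                   # Step 3
    b = np.zeros(A.shape[1])
    sol, *_ = np.linalg.lstsq(A[:, Omega], y, rcond=None)         # Step 6
    b[Omega] = sol
    return grad, b
```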
Application in Graph Scan Statistic Models
==========================================
In this section, we specialize <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> to optimize a number of graph scan statistic models for the task of connected subgraph detection. Consider a graph $\mathbb{G} = (\mathbb{V}, \mathbb{E})$, where $\mathbb{V} = [n]$ and $\mathbb{E} \subseteq \mathbb{V} \times \mathbb{V}$, in which each node $v$ is associated with a vector of features ${\bf c}(v) \in \mathbb{R}^p$. Let $S\subseteq \mathbb{V}$ be a connected subset of nodes. A graph scan statistic, $F(S) = \log \frac{\text{Prob}(\text{Data} | H_1(S))}{\text{Prob}(\text{Data} | H_0)}$, corresponds to the generalized likelihood ratio test (GLRT) to verify the null hypothesis ($H_0$): ${\bf c}(v) \sim \mathcal{D}_1, \forall v \in \mathbb{V}$, where $\mathcal{D}_1$ refers to a predefined background distribution, against the alternative hypothesis ($H_1(S)$): ${\bf c}(v) \sim \mathcal{D}_2, \forall v \in S$ and ${\bf c}(v) \sim \mathcal{D}_1, \forall v \in \mathbb{V} \setminus S$, where $\mathcal{D}_2$ refers to a predefined signal distribution. The detection problem is formulated as $$\begin{aligned}
\min_{S \subseteq \mathbb{V}} -F(S)\ \ \ s.t.\ \ \ |S| \le k \text{ and } S \text{ is connected},\end{aligned}$$ where $k$ is a predefined bound on the size of $S$.
Taking the elevated mean scan (EMS) statistic for instance, it aims to decide between $H_0: {\bf c}(v) \sim \mathcal{N}(0, 1), \forall v \in \mathbb{V}$ and $H_1(S)$: ${\bf c}(v) \sim \mathcal{N}(\mu, 1), \forall v \in S$ and ${\bf c}(v) \sim \mathcal{N}(0, 1), \forall v \in \mathbb{V} \setminus S$, where for simplicity each node $v$ only has a univariate feature $c(v) \in \mathbb{R}$. This statistic is popularly used for detecting signals among node-level numerical features on graphs [@qian2014connected; @arias2011detection] and is formulated as $F(S) = (\sum_{v \in S} c(v))^2 / |S|$. Let the vector form of $S$ be ${\bf x}\in \{0, 1\}^n$, such that $\text{supp}({\bf x}) = S$. The connected subgraph detection problem can be reformulated as $$\begin{aligned}
\min_{{\bf x} \in \{0, 1\}^n} - \frac{({\bf c}^T {\bf x})^2}{({\bf 1}^T {\bf x})} \ \ \ s.t.\ \ \ \text{supp}({\bf x}) \in \mathbb{M}(k, g = 1), \label{emss}\end{aligned}$$ where ${\bf c} = [c(1), \cdots, c(n)]^T$. To apply <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span>, we relax the input domain of ${\bf x}$ such that ${\bf x} \in [0, 1]^n$, and the connected subset of nodes can be found as $S=\text{supp}({\bf x}^\star)$, the support set of the estimate ${\bf x}^\star$ that minimizes the strongly convex function [@bach2011learning]: $$\begin{aligned}
\min_{x \in \mathbb{R}^n} f({\bf x}) = - \frac{({\bf c}^T {\bf x})^2}{({\bf 1}^T {\bf x})} + \frac{1}{2} {\bf x}^T {\bf x} \ \ s.t.\ \text{supp}({\bf x}) \in \mathbb{M}(k, 1).\nonumber \end{aligned}$$ Assume that ${\bf c}$ is normalized, and hence $0 \le c_i < 1, \forall i$. Let $\hat{c} = \max \{c_i\}$. The Hessian matrix of the above objective function $\nabla^2 f({\bf x})\succ 0$ and satisfies the inequalities: $$\begin{aligned}
(1 - \hat{c}^2) \cdot \textbf{I}\preceq \textbf{I} - ({\bf c} - \frac{{\bf c}^T {\bf x}}{{\bf 1}^T {\bf x}} {\bf 1} ) ({\bf c} - \frac{{\bf c}^T {\bf x}}{{\bf 1}^T {\bf x}} {\bf 1})^T \preceq 1\cdot \textbf{I}. \end{aligned}$$ According to Lemma 1 (b) in [@yuan2014icml], the objective function $f({\bf x})$ satisfies condition $(\xi, \delta, \mathbb{M}(8k, g))$-WRSC with $\delta = \sqrt{1 - 2 \xi (1 - \hat{c}^2) + \xi^2}$ for any $\xi$ such that $\xi < 2 (1 - \hat{c}^2)$. Hence, the geometric convergence of <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> as shown in Theorem \[theorem:convergence\] is guaranteed. We note that not all the graph scan statistic functions satisfy the WRSC condition, but, as shown in our experiments, <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> works empirically well for all the scan statistic functions tested, and the maximum number of iterations to convergence for optimizing each of these scan statistic functions was less than 10.
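For completeness, a small sketch of the relaxed EMS cost handed to <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> and its gradient is given below (an illustrative implementation; it assumes ${\bf 1}^T{\bf x} > 0$ and is not taken from the released code).

```python
import numpy as np

def ems_objective_and_grad(c, x):
    """Relaxed EMS cost f(x) = -(c^T x)^2 / (1^T x) + 0.5 * x^T x and its
    gradient.  Requires sum(x) > 0."""
    a, s = c @ x, x.sum()
    value = -(a ** 2) / s + 0.5 * (x @ x)
    grad = -(2.0 * a / s) * c + (a ** 2 / s ** 2) * np.ones_like(x) + x
    return value, grad
```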
We note that our proposed method <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> is also applicable to general sparse learning problems (e.g., sparse logistic regression, sparse principal component analysis) subject to graph-structured constraints, and to a variety of subgraph detection problems, such as the detection of anomalous subgraphs, bursty subgraphs, heaviest subgraphs, frequent subgraphs or communication motifs, predictive subgraphs, and compression subgraphs.
Experiments
===========
This section evaluates the effectiveness and efficiency of the proposed <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> approach for connected subgraph detection. The implementation of <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> is available at https://github.com/baojianzhou/Graph-MP.
Experiment Design
-----------------
[** Datasets:** ]{} 1) **Water Pollution Dataset.** The Battle of the Water Sensor Networks (BWSN) [@ostfeld2008battle] provides a real-world network of 12,527 nodes and 14,831 edges, and 4 nodes with chemical contaminant plumes that are distributed in four different areas. The spreads of contaminant plumes were simulated using the water network simulator EPANET for 8 hours. For each hour, each node has a sensor that reports 1 if it is polluted; 0, otherwise. We randomly selected $K$ percent of the nodes and flipped their binary sensor values in order to test the robustness of the methods to noise, where $K \in \{2, 4, 6, 8, 10\}$. **The objective is to detect the set of polluted nodes**. 2) **High-Energy Physics Citation Network.** The CitHepPh (high energy physics phenomenology) citation graph is from the e-print arXiv and covers all the citations within a dataset of 34,546 papers with 421,578 edges during the period from January 1993 to April 2002. Each paper is considered as a node, each citation is considered as an edge (direction is not considered), and each node has one attribute denoting the number of citations in a specific year ($t = 1993, \cdots, t = 2002$), and another attribute denoting the average number of citations in that year. **The objective is to detect a connected subgraph of nodes (papers) whose citations are abnormally high in comparison with the citations of nodes outside the subgraph.** This subgraph is considered an indicator of a potential emerging research area. The data before 1999 is considered as training data, and the data from 1999 to 2002 is considered as testing data.
The first four data columns report the graph scan statistic scores and run time on the Water Pollution Dataset; the last four report the same quantities on the CitHepPh citation network.

| Method | Kulldorff | EMS | EBP | Run Time | Kulldorff | EMS | EBP | Run Time |
|---|---|---|---|---|---|---|---|---|
| Our Method | **1668.14** | 499.97 | **4032.48** | 40.98 | **13859.12** | **142656.84** | **9494.62** | 97.21 |
| `GenFusedLasso` | 541.49 | 388.04 | 3051.22 | 901.51 | 2861.6 | 60952.57 | 6472.84 | 947.07 |
| `EdgeLasso` | 212.54 | 308.11 | 1096.17 | 70.06 | 39.42 | 2.0675.89 | 261.71 | 775.61 |
| `GraphLaplacian` | 272.25 | 182.95 | 928.41 | 228.45 | 1361.91 | 29463.52 | 876.31 | 2637.65 |
| `LTSS` | 686.78 | 479.40 | 1733.11 | **1.33** | 11965.14 | 137657.99 | 9098.22 | **6.93** |
| `EventTree` | 1304.4 | 744.45 | 3677.58 | 99.27 | 10651.23 | 127362.57 | 8295.47 | 100.93 |
| `AdditiveGraphScan` | 1127.02 | **761.08** | 2794.66 | 1985.32 | 12567.29 | 140078.86 | 9282.28 | 2882.74 |
| `DepthFirstGraphScan` | 1059.10 | 725.65 | 2674.14 | 8883.56 | 7148.46 | 62774.57 | 4171.47 | 9905.45 |
| `NPHGS` | 686.78 | 479.40 | 1733.11 | 1339.46 | 12021.85 | 137963.5 | 9118.96 | 1244.80 |
[**Graph Scan Statistics:** ]{} Three graph scan statistics were considered, including Kulldorff’s scan statistic [@neill2012fast], expectation-based Poisson scan statistic (EBP) [@neill2012fast], and elevated mean scan statistic (EMS, Equation (\[emss\])) [@qian2014connected]. The first two require that each node has an observed count of events at that node, and an expected count. For the water network dataset, the report of the sensor (0 or 1) at each node is considered as the observed count, and the noise ratio is considered as the expected count. For the CiteHepPh dataset, the number of citations is considered as the observed count, and the average number of citations is considered as the expected count. For the EMS statistic, we consider the ratio of observed and expected counts as the feature.
[**Comparison Methods:**]{} Seven state-of-the-art baseline methods are considered, including `EdgeLasso` [@sharpnack2012sparsistency], `GraphLaplacian` [@sharpnack2012changepoint], LinearTimeSubsetScan (`LTSS`) [@neill2012fast], `EventTree` [@RozenshteinAGT14], `AdditiveGraphScan` [@conf/icdm/SpeakmanZN13], `DepthFirstGraphScan` [@Speakman-14], and `NPHGS` [@DBLP:conf/kdd/ChenN14]. We followed strategies recommended by authors in their original papers to tune the related model parameters. Specifically, for EventTree and Graph-Laplacian, we tested the set of $\lambda$ values: $\{ 0.02, 0.04, \cdots, 2.0\}$. `DepthFirstScan` is an exact search algorithm and has an exponential time cost in the worst case scenario. We hence set a constraint on the depth of its search to 10 in order to reduce its time complexity.
We also implemented the generalized fused lasso model (`GenFusedLasso`) for these three graph scan statistics using the framework of alternating direction method of multipliers (ADMM). `GenFusedLasso` is formalized as $$\begin{aligned}
\min_{{\bf x} \in \mathbb{R}^n} -f({\bf x}) + \lambda \sum\nolimits_{(i, j) \in \mathbb{E}} \|x_i - x_j\|,\end{aligned}$$ where $f({\bf x})$ is a predefined graph scan statistic and the trade-off parameter $\lambda$ controls the degree of smoothness of neighboring entries in ${\bf x}$. We applied the heuristic rounding step proposed in [@qian2014connected] to the numerical vector ${\bf x}$ to identify the connected subgraph. We tested the $\lambda$ values: $\{0.02, 0.04, \cdots, 2.0, 5.0, 10.0\}$ and returned the best result.
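For reference, the objective evaluated by this baseline can be written compactly as below (evaluation only; the ADMM solver used in the experiments is not reproduced, and the function name is an assumption).

```python
def gen_fused_lasso_objective(f, x, edges, lam):
    """-f(x) + lam * sum_{(i, j) in E} |x_i - x_j| for a given scan statistic f,
    a numerical vector x, an edge list, and a trade-off parameter lam."""
    penalty = sum(abs(x[i] - x[j]) for i, j in edges)
    return -f(x) + lam * penalty
```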
[**Our Proposed Method <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span>:** ]{} Our proposed <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> has a single parameter $k$, an upper bound of the subgraph size. We set $k = 1000$ by default, as the sizes of subgraphs of interest are often small; otherwise, the detection problem could be less challenging. We note that, to obtain the best performance of our proposed method <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span>, we should try a set of different $k$ values ($k=50, 100, 200, 300, \cdots, 1000$) and return the best.
[**Performance Metrics:** ]{} The overall scores of the three graph scan statistics of the connected subgraphs returned by the competitive methods were compared and analyzed. The objective is to identify methods that could find the connected subgraphs with the largest scores. The running times of different methods are compared.
Evolving Curves of Graph Scan Statistics
----------------------------------------
Figure \[fig:comparsion-of-iterations\] presents the comparison between our method and `GenFusedLasso` on the scores of the best connected subgraphs that are identified at different iterations based on Kulldorff’s scan statistic and the EMS statistic. Note that a heuristic rounding process as proposed in [@qian2014connected] was applied to the numerical vector ${\bf x}^i$ estimated by `GenFusedLasso` in order to identify the best connected subgraph at each iteration $i$. As the setting of the parameter $\lambda$ will influence the quality of the detected connected subgraph, the results based on different $\lambda$ values are also shown in Figure \[fig:comparsion-of-iterations\]. We observe that our proposed method <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> converged in less than 5 steps and the qualities (scan statistic scores) of the connected subgraphs identified by <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> at different iterations were consistently higher than those returned by `GenFusedLasso`.
Comparison on Optimization Quality
----------------------------------
The comparison between our method and the other eight baseline methods is shown in Table \[table:comparison\]. The scores of the three graph scan statistics of the connected subgraphs returned by these methods are reported in this table. The results indicate that our method outperformed all the baseline methods on the scores, except that `AdditiveGraphScan` achieved the highest EMS score (761.08) on the water network data set. Although `AdditiveGraphScan` performed reasonably well overall, it is a heuristic algorithm and does not have theoretical guarantees.
Comparison on Time Cost
-----------------------
Table \[table:comparison\] shows the time costs of all competitive methods on the two benchmark data sets. The results indicate that our method was the second-fastest algorithm among all the comparison methods. In particular, our method ran more than 10 times faster than the majority of the methods.
Conclusion and Future Work
==========================
This paper presents <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span>, an efficient algorithm to minimize a general nonlinear function subject to graph-structured sparsity constraints. For future work, we plan to explore graph-structured constraints other than connected subgraphs, and analyze theoretical properties of <span style="font-variant:small-caps;">Graph</span>-<span style="font-variant:small-caps;">Mp</span> for cost functions that do not satisfy the WRSC condition.
Acknowledgements
================
This work is supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior National Business Center (DoI/NBC) contract D12PC00337. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/NBC, or the US government.
XCRO currently supports the ability to split payments based on a percentage of the account balance. For example, if the account balance is 1500, we can configure a split of 10% and 90%, resulting in two transactions of 150 and 1350, respectively.
Under the hood, XCRO uses Java libraries to perform the calculation. We have observed defects related to floating-point precision when determining the amount to be considered, resulting in calculated amounts that are off by as much as 0.02.
Impact and resolution
This problem applies only to certain scenarios where the split percentage and the balance have decimal values.
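A minimal sketch of the issue and a common mitigation is shown below. The class and method names are hypothetical and are not taken from the XCRO codebase; the point is only that doing the split in binary floating point can drift by a cent or two, whereas BigDecimal with an explicit scale and rounding mode, plus taking the second leg as the remainder, keeps the two legs consistent with the original balance.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class SplitCalculator {

    // Floating-point version: the product is not exactly representable,
    // so rounding to cents can drift by 0.01-0.02 for some inputs.
    static double splitWithDouble(double balance, double percent) {
        return balance * (percent / 100.0);
    }

    // Decimal version: exact decimal arithmetic, rounded to 2 places explicitly.
    static BigDecimal splitWithBigDecimal(BigDecimal balance, BigDecimal percent) {
        return balance.multiply(percent)
                      .divide(BigDecimal.valueOf(100), 2, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        BigDecimal balance = new BigDecimal("1500.10");
        BigDecimal first = splitWithBigDecimal(balance, new BigDecimal("10.5"));
        // Take the second leg as the remainder so the two legs always sum to the balance.
        BigDecimal second = balance.subtract(first);
        System.out.println(splitWithDouble(1500.10, 10.5)); // raw double product, before rounding
        System.out.println(first + " + " + second + " = " + first.add(second));
    }
}
```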
Wednesday, February 17, 2010
Personalizing Classes in Under the Dying Sun
Sooner or later, some player wants his character to be “different” and a frequent response has been to create a new class to accommodate that or to allow multiple classes. The Classes in Under the Dying Sun are very broad in scope and are intended to accommodate many different types of character. However, if the group feels the desire to change things around, one simple option is to swap out certain analogous Class-based Abilities.
For example, removing the Slayer’s Steely Thews and replacing it with the Survivor’s Survival Instinct creates a more dexterous type of warrior. The Referee should consider this carefully however. Some Class-based Abilities are more useful than others. Swapping out Steely Thews for the Survivor’s Jury-Rig ability is probably an unfair trade given the greater applicability of the latter ability.
In general, the bonuses to Ability Throws are probably equivalent swaps. The “signature abilities” of the Slayer (Combat Reflexes, Weapon Mastery, and Mass Slaughter) and of the Survivor (Stealth, Jury-Rig, and Back Stab) may be reasonable swaps depending upon play-style and expectations. I could see playing up a more assassin-type guy by swapping out the Slayer's Mass Slaughter for the Survivor's Back-Stab and less-technical Survivor who gives up Jury-Rig in favour of the Slayer's Combat Reflexes.
The Sorcerer’s “signature ability” (Psychic Sorcery) is not a good candidate for this sort of thing. It is the defining ability of the class and allowing another class to take it renders the Sorcerer pretty pointless and the other class a bit too flexible.
(Here I resist the urge to make a snarky comment about more recent iterations of Ye Auld Game.)
CONDENSATION IS THE change of phase of a substance from vapor to liquid. It is the opposite of evaporation, the change of phase from liquid to vapor. The condensation of water is one of the most important physical processes of the Earth's climate system. Condensation forms cloud particles and precipitation. It is the main sink of atmospheric water. On Earth, water can be found in the solid, liquid, and gas phases. Evaporation (and sublimation) of water substance, the transport of water vapor, and condensation away from its sources, is the most important heat transport mechanism in the Earth's climate system. Condensation is an extremely important process in the earth's water cycle because it is responsible for the formation of clouds and precipitation.
Weak forces between molecules cause them to stick to each other and produce the various phases of a substance. Random thermal motions cause some molecules to overcome these intermolecular forces and escape from the liquid and form a gas phase around it. The number of molecules that leave the liquid phase increases with its temperature because of the increase in the thermal energy, and, therefore, kinetic energy, of the substance. The molecules in the gas phase jiggle and move randomly because of their thermal energy. Some of the gas molecules stick to the liquid when they strike it. The numbers of molecules that return to the liquid phase increases with their concentration.
Thus, the concentration of molecules in the gas phase increases until a balance between molecules leaving and returning to the liquid surface is reached. This is called thermodynamic equilibrium. The concentration of molecules in the gas phase, in equilibrium with the liquid phase at a given temperature, is defined as the saturation value.
The molecules that leave the liquid phase are the ones moving faster than the average molecule in the liquid. That is, they have larger kinetic energy than the average molecule. This is the reason a liquid cools while it evaporates. There is a sudden large attraction when a molecule of water vapor approaches the surface of liquid water. This speeds up the incoming molecule and increases its kinetic energy. Thus, a liquid releases heat while it condenses. This is known as the latent heat of condensation. The saturation vapor pressure depends on the curvature of the surface of the liquid phase.
This has important implications for climate processes. In a curved surface, such as that of a cloud droplet, each water molecule has fewer nearby neighbors than on a flat surface. Thus, the intermolecular attractive forces holding them together are smaller; a water molecule can escape a curved surface more easily than a flat surface. At equilibrium, the concentration of water molecules in the gas phase has to be larger over a curved surface than over a flat surface to compensate for the fact that a larger number of molecules leave the liquid at a given temperature. Therefore, the saturation water vapor pressure is larger over curved surfaces.
Condensation occurs when the concentration of molecules in the gas phase exceeds the equilibrium, or saturation value, at a given temperature. Relative humidity is defined as the ratio of the water vapor concentration to the saturation value, with respect to a flat surface of pure water. Thus, the relative humidity of the air in equilibrium with a cloud droplet can be much greater than 100 percent, depending on the curvature (or size) of the droplet. This frequently occurs in clouds and is called supersaturation. Therefore, condensation and cloud droplets form preferentially over impurities that reduce the curvature of the surface of the liquid phase. These impurities are called cloud condensation nuclei.
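As a compact reference for the two ideas above, relative humidity and the curvature (Kelvin) effect over a droplet of radius r can be written as follows; the symbols are generic textbook notation rather than quantities defined elsewhere in this article:

\[
\mathrm{RH} = 100\% \times \frac{e}{e_s(T)},
\qquad
\frac{e_s(r)}{e_s(\infty)} = \exp\!\left(\frac{2\sigma}{\rho_l R_v T\, r}\right),
\]

where e is the vapor pressure, e_s the saturation vapor pressure over a flat surface, σ the surface tension of water, ρ_l the density of liquid water, and R_v the gas constant for water vapor. As r becomes small, the equilibrium vapor pressure rises, which is why small droplets require supersaturation or condensation nuclei in order to form.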
When the temperature of the liquid phase is lowered, a smaller number of molecules leave it, and the saturation water vapor concentration is reduced. This is what occurs when moist air rises in convective updrafts and cools adiabatically. Thus, when the air rises sufficiently for saturation to occur, clouds form. This is the process by which convective clouds form. The cloud base is at the saturation condensation level of the rising air parcels. However, the fraction of the condensate that falls as rain, as opposed to evaporating and moistening the environment, is not easily determined. In fact, this is one of the most uncertain processes in cloud models. Cloud processes have been identified by various researchers, and more recently by the United Nations Intergovernmental Panel on Climate Change (IPCC), as one of the most uncertain processes in climate models. Condensation partially controls the content and vertical distribution of water vapor, the most important greenhouse gas in the Earth's atmosphere.
SEE ALSO: Cloud Feedback; Clouds, Cirrus; Clouds, Cumulus; Clouds, Stratus; Evaporation and Transpiration; Evaporation Feedbacks.
BIBLIOGRAPHY. J.V.D. Iribarne and W.L. Godson, Atmospheric Thermodynamics (Reidel, Dordrecht, 1981); N.O. Renno, K.A. Emanuel, and P.H. Stone, "A Radiative-Convective Model with an Explicit Hydrologic Cycle: 1. Formulation and Sensitivity to Model Parameters," Journal of Geophysical Research (v.99, 1994); N.O. Renno, P.H. Stone, and K.A. Emanuel, "A Radiative-Convective Model with an Explicit Hydrological Cycle: 2. Sensitivity to Large Changes in Solar Forcing," Journal of Geophysical Research (v.99, 1994).
Nilton O. Renno, University of Michigan
String Quartet in F Major (1st movement)
I enjoyed listening to this piece. All throughout the first movement, I constantly felt tension rising and relaxing (tension rising because of the repetitive melodies with a crescendo or a shift of the melody to a minor key, and tension relaxing when there is a descent or the key changes back to major); I think this was Beethoven's intention, to keep the audience's attention on his music rather than have them become inattentive during the middle of the performance.
Also, in this movement, I noticed that there were many opening and closing phrases or sections, like a book with many brief chapters. I know this because I noticed a very brief pause whenever a phrase ended, and right after there would be a section that sounds totally different from or similar to the previous phrase. I have played many of Beethoven's piano pieces and I have noticed that in all of his pieces, he chooses very simple notes (for this movement the notes he chose were F-G-F-E-F-C) of its major or minor key and utilizes those few specific notes to create music.
He constantly changes keys of that same melody as if he is trying his very best to hear them. That is what I concluded about why he repeated the same melody with the same rhythm, as if he was pondering which would sound better; for example at 2:40-2:46, at first the quartet plays the notes (F-G-F-E-F-C), but as if Beethoven were not satisfied, or wondered if he had not chosen the notes right, he writes (F-G-F-E-F-D) right after. This tells me that Beethoven was trying to experiment with his melody.
I also want to share that I liked the parts (for example at 0:34 – 0:44) where the music builds tension by making a crescendo; soon after, the tension relaxes with a quick decrescendo at the end of the phrase. Furthermore, I like the part at 4:19-4:36 where the viola plays the melody line and the violin also plays the melody line right after, and they keep playing repeatedly as if they were having an agitated conversation, until the violin plays the runs, the other instruments join soon after, and all the instruments crescendo together until the phrase ends.
String Quartet Op. 18 No. 6 in A-flat Major (2nd movement)
The second movement is always the slow-tempo piece. I felt very calm; the harmonies and the connecting lines (or the legato lines) are the highlights of this particular piece. The violin takes the main melody from the beginning of the piece, and at 0:23, the viola slowly and gradually takes over and repeats the main melody, ending at 0:43.
Furthermore, at 1:10, the violin plays the main melody again at a higher register, but slightly more musically than how it was played at the beginning of the piece; this time the cello plays dotted-note rhythms, and the other violin and the viola lay long bass lines with the violin to make it more interesting (ending at 1:30). Right after, there is a mood change and the tension grows when the cello and violin play the minor scale. However, as if Beethoven was not satisfied, they repeat the minor scale (in the viola), but it is played differently.
I thought this part at 2:57 was interesting; Beethoven tricks the audience by pretending to finish a phrase but plays a different note to make the phrase hang in the air, as if to make the audience hold their breath or feel agitated. At 2:57, the audience probably wanted to hear "E flat-C-C," but instead Beethoven wrote "E flat-D-E" to create suspense in the air. After this, the quartet continues to play the melodic line it was playing before the suspense.
At 3:53, the violin plays a chromatic scale to lead back to the main melody (at 4:02) that was played at the beginning of the piece. After this part, everything else sounds similar to the beginning of the recording. At 6:40, the performers make a decrescendo, then surprise the audience with a subito forte or fortissimo. The piece ends cutely with two pizzicato notes.
Piano Sonata in d, Op. 31 No. 2 (2nd movement)
This Piano Sonata by Beethoven has the nickname "The Tempest," which by definition means a violent commotion or disturbance, like a bad storm.
It does sound like that in the first movement. It does not, however, sound like a violent disturbance in the second movement. It is as if the "storm" in the first movement has settled and the sea has become quiet and is taking a break. The first chord that starts the second movement shows calmness in the air, like a sigh of relief. In the beginning, the first few measures are the main melody theme that Beethoven will be using throughout this piece. The melody starts right after the first chord in the beginning (which is at 0:07).
Again, Beethoven repeats the main theme right after, at 0:48, but with a turn, which is a sign that tells the performer to play the note above, come back to the written note, go down to the note below, and then return to the written note. Furthermore, in this repeated melody there was more tension; the sounds were more stressed and had more of a feeling of agitation between the notes. I think I felt agitation between the notes because the left-hand notes were dissonances. Going along, at 1:26, the left hand starts to play octave notes in triplets.
And as the music continues, tension builds gradually, with the left hand's octaves becoming heavier and louder, until the tension ends briefly with a right-hand staccato run at 2:40. The left-hand octaves come back as if the relaxed tension never existed, and tension builds again; the tension is released again at 3:46 with the main melody theme coming back. I want to mention that at the end of this part there is a resolution where the left hand plays E natural and then F, which gives off a resolved feeling.
Right after that resolution in the left hand, I heard a surprise, which was notes in thirds and fifths going down, and sometimes an arpeggio appears here and there while the main melody theme plays (4:32-5:07). Soon after this, the left-hand octaves come back again, which is a repetition of an earlier section of this piece. From 8:00, I feel like he was making a very short summary of what had happened in this second movement. The piece ends with a single B flat from the right hand and then the left hand right after.
You have $2000 in non-cash current assets, $1000 in cash, and $1000 in current liabilities. You also have total assets of $3000 and total liabilities of $4550. Sales is $1000, COGS is $100 and Net Income is $150.
What is the current ratio? What is the return on assets?
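One way to read the numbers given (treating the $2,000 of non-cash current assets plus the $1,000 of cash as total current assets) is:

\[
\text{Current ratio} = \frac{2000 + 1000}{1000} = 3.0,
\qquad
\text{ROA} = \frac{\text{Net income}}{\text{Total assets}} = \frac{150}{3000} = 5\%.
\]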
You have $1,000 of your own money to invest. You can borrow or lend at the risk-free rate of 2%. You want to invest in a mutual fund that has an expected return of 10% and a standard deviation of 30%. Suppose you borrow $1,000 and combine that with your own money to invest $2,000 in this mutual fund.
What is the expected return and standard deviation of the portfolio that you have just created?
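A standard way to set this up is as a leveraged portfolio with weight 2 on the fund and weight -1 on the risk-free asset, so that, using the usual two-asset formulas:

\[
E[R_p] = 2(10\%) - 1(2\%) = 18\%,
\qquad
\sigma_p = 2 \times 30\% = 60\%.
\]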
You have $1,000 invested in an account that pays 16% compounded annually.
A commission agent can locate for you an equally safe deposit that will pay 16%, compounded quarterly, for 2 years.
What is the maximum amount you should be willing to pay him now as a fee for locating the new account?
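One common way to interpret this problem: pay the fee F today, invest the remaining 1000 - F at 16% compounded quarterly (4% per quarter) for 2 years, and choose F so that the result just matches leaving the full $1,000 at 16% compounded annually:

\[
(1000 - F)(1.04)^{8} = 1000(1.16)^{2} = 1345.60
\;\;\Rightarrow\;\;
F = 1000 - \frac{1345.60}{(1.04)^{8}} \approx \$16.78 .
\]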
Happy Canada Day to my fellow canucks!
The year is half gone and after my little spending spree the other week, I realized I haven’t really stuck to my resolution to get my spending under control. Financially, I’m in better shape than I have been in the past, but I could be a heck of a lot better. Right after that spree, I had to do some emergency work on Dory. It was only because of Dave’s free labour and a small family loan that I was able to manage it. I really need an emergency fund and to do that, I really need to curb my spending.
So… I redid my budget, and I can be debt-free by October. But only if I stop spending willy nilly and commit to paying off those credit cards. So I set myself some rules – well really one rule: If you don’t need it – don’t buy it! This applies to:
- Clothes: Honestly this one won’t be hard to accomplish. My drawers and closets are over-flowing with clothes. I have more than 30 Old Navy Fit and Flare dresses – not to mention about a dozen other styles. What I really need are new/bigger dressers, but they can wait until the cards are paid.
- Shoes: Again, another one that’s not hard to stick to. It’s actually been months since I’ve even looked at new shoes. Now that I work from home, I don’t really have anywhere to wear all the fabulous shoes I have!
- Candles – with or without jewelry: This one hurts a little. You know I loved Charmed Aroma, and they came out with the prettiest little shell candle that would make a lovely planter after the candle is gone. But no. I don’t need any more candles (I really only burn them in winter and still have several unburnt ones left) and the last thing I need is more jewelry. Much like my clothes, I’m swimming in trinkets. I bought a huge jewelry armoire, and it was still not big enough to hold it all. Both of my dressers are covered in boxes and earring stands.
- Plants: Household or garden. Since most of the nearby garden centres are closing up for the season, this won’t be too hard. What will be hard is not grabbing a new house fern or two on grocery day. But the house does look like a jungle come fall when all the house plants come in from their summer in the back yard so I definitely don’t need more.
- Yarn: This one is not terrible, I have a pretty good stash and still have a bunch of undyed yarn in the basement I can customize in a pinch! I should be able to stick to it pretty easy.
- Fabric: This is going to be the hard one. Like any new hobby, the desire to “buy all the things” hits hard. But, I already have a sizeable stash, with plenty of fabric to keep me piecing for quite a while. I just have to stay strong.
Now… that said, there are some… caveats. I said “If I don’t need it…” If I do need it, I can get it. However, we all know how flexible that word “need” can be. So I’ve set some boundaries around that. To purchase, I must truly need it. For example:
- My last pair of underwear is full of holes. I can buy new underwear.
My only pair of running shoes lost a sole. I can buy new running shoes. However, if one of my pairs of high heels breaks… no dice… I have eleventy billion pairs of heels. Even if it's my favourite (and only) green pair. Something else in my collection will work.
- I absolutely need it to complete a crafting project. Like my Time to Sew squares… I currently need some cream sashing fabric that is close to the background fabric in the blocks. Nothing in my stash is close, so I can purchase that fabric, and only that fabric. This will also apply to backing fabric and batting, but only for tops that I currently have completed or in progress. No “for future use”.
When I told Mom my plan and my rules, she said she’d join me. While she doesn’t have a problem with clothes, shoes and plants like I do, we all know she loves to buy sewing gadgets, rulers, stencils and fabric like it’s going out of style.
She also suggested we mark each day we stick to our plan on the calendar with a BIG X; that way we have a visual of our progress and how well we are doing. We are both terrible online shoppers, so I think having this visual right by the computer will help us both immensely!
And while I said I’ll be debt-free by October (if all goes to plan), I think I’ll try to stick to the plan to the end of 2019! By then I might even have a nice little emergency fund squirreled away.
If you’d like to join us, feel free. I think it’s going to liberating to see that credit card balance say ‘0″. | https://wanderingcatstudio.com/2019/07/01/up-the-the-challenge/ |
A few months back I ordered a bunch of 4×4 prints during a sale at Persnickety Prints. I printed out most of my instagram pics from the spring and summer and after sorting out photos for Project Life and vacation mini-albums, I was left with a handful of awesome #thursday3 pictures.
I wanted to put them together with a quick project. I used the Studio Calico Wink Wink scrapbook kit and add-ons to put together this album.
How to create this album in 7 steps:
- Lay out eight photos.
- Trim eight sheets of patterned paper (4×4 inch squares)
- Adhere each photo back to back with a sheet of patterned paper.
- Adhere acrylic numbers to patterned paper.
- Add strips of washi tape to the inside edge of the pages
- Use a hole punch to punch two holes in each page (approx 1.5 inch apart)
- Bind all the pages with two binder rings.
This is a great album to create anytime you have a bunch of photos with a single theme or from a specific event. It works well with other size photos too. I’ve made 4×6, 3×4, and 3×3 albums using the same method.
Click play to watch me flip through the entire album.
This oil painting is 10 x 8 on canvas panel. For more information, click here.
Author: cronesinger
Arkansas Oaks
Old Home Near Heber Springs
Country House in Early Morning
This is an 8 x 10 oil painting on canvas panel. Notice the smoke coming out of the chimney. It is a chilly early spring morning. For more information, click here.
The Cake Mix Cake (add 2 eggs)
This painting shows several items I had never tried to paint before–a stainless steel pan, eggs, and a stainless steel spoon with wooden handle! It is a “kitchen painting” and for further information, click here.
Red Apples on a White Cloth
The light on the two apples modeling for this painting is coming from both the right and directly over head. Most of the time, I have my lamp focused on the left hand side but I wanted to try something different. For more information, click here.
Three Apples
The two apples in the front are honey crisp apples but the apple directly behind them was a solid red delicious (I think). This is a small 5 x 7 painting on canvas panel. For more information, click here.
Two Honey crisp Apples
Honey crisp apples are not solid red, but always have yellow on part of their skin which makes them fun to paint. These two were part of a group of three but one got eaten before this painting was begun! For more information, click here.
My new book, The Civil War Letters of Deakins, Perry, Wollery, Fox and Others.
I have just published a book of Civil War letters that my father, John William Perry, inherited from his grandfather, a Civil War soldier. The book contains photographs of my great grandfather in his Civil War uniform and of his wife and son. You can see and read a lot about this book on Amazon. Click here for more details.
Big Boy Tomatoes
Three juicy tomatoes! I was not able to grow tomatoes in our yard last summer but a friendly neighbor gave me these and I used them as reference models before painting them. The painting went up pretty quick as my husband wanted them for lunch! For more information, click here. | https://cronesinger.com/author/cronesinger/ |
The Migrants Mile Trail has two loops, each about 1/2 mile in length. The inner loop is paved, and offers close views of freshwater marsh, a pond, prairie, and a woodlot. Visitors can expect to see an abundance of wildlife - in addition to the many migratory waterfowl, the wildlife refuge is home to beaver, coyote, rabbits, deer, muskrat, gophers, raccoon, squirrel and the eastern woodrat. Pro tip: the best wildlife viewing can be found on Wildlife Drive, a 5.15 mile loop with a gravel surface, easily navigated by Freedom Chair riders. Anyone who is not equipped for gravel can head to the Birdhouse Boulevard Nature Trail, a paved .2 mile trail.
The Bottomland Nature Trail is a crushed-limestone hardened trail that has two loops, one that is 0.5 mile and a second option that extends 0.75 mile. Visitors will experience a developing bottomland prairie restoration area, while gaining an understanding of its rarity as a natural plant community and its importance in the human history of the Flint Hills region. Other features of the trail include an information trail-head kiosk, five interpretive waysides, benches, and a comfort station. Pro tip: Visitors who are not using an all-terrain wheelchair should plan to visit the trail during dry weather, as rain may affect the integrity of the trail's surface.
The SMP Paved Trail is a two mile loop that offers park visitors the opportunity to enjoy scenic views of the entire park, including Shawnee Mission Lake, which extends for 150 acres. Travelers will also enjoy beautiful views of ponds, marshes, forests, and meadows along its length. Pro tip: Those looking for additional cardio can access an additional 11 miles of paved trail on the Mill Creek Streamway Trail, which is connected to the SMP paved trail at the west end of the park.
A level, paved 1/4 mile trail offers visitors of all mobility levels the opportunity to take in the spectacular view from Pinnacle Overlook. With an elevation of 2,440 feet, this is undoubtedly the most visited - and awe-inspiring - area within the park. Accessible restrooms are available near the overlook, and accessible drive-in campsites are available at the park's Wilderness Road Campground.
A great trail for a good cardio workout, the Dawkins Line Rail Trail is a paved trail that follows the old Dawkins Line railroad bed for 18 miles. Open to the public since 2013, the trail has retained its historical vibe thanks to the 24 trestles along its length as well as the Gun Creek Tunnel, which is 662 feet long.
The Kiwanis Walking Trail - which is 1.6 miles long and offers visitors gorgeous views of the lake and the wildlife it supports - is rated "easy," but it has not been officially rated "accessible," which means it may be a better option for riders with all-terrain wheelchairs. However, Paintsville Lake State Park itself has done a wonderful job of making sure that all of the areas are accessible to visitors of every ability. This includes a scenic overlook of the lake at the end of a short paved trail, the campgrounds, restrooms, parking and the camp's amphitheater.
The Wetland Walkway is a 1.5-mile, fully accessible boardwalk that brings visitors across a freshwater marsh and includes an observation tower with viewing scopes, five trail rest shelters with benches, and an accessible restroom. In this refuge, wildlife abounds: In addition to migratory waterfowl, keep your eyes peeled for large wading birds, muskrat, raccoon, marsh rabbit, and alligators. Pro tip: fishing and crabbing enthusiasts can enjoy opportunities all year round at several wheelchair accessible fishing piers.
The Bobcat Trail is a 1.1-mile hard-surfaced trail that begins at the picnic area and brings visitors along the scenic high banks of the Bayou Bartholomew. Look closely for the opportunity to see the over 115 species of fish that inhabit the bayou, and fishermen would do well to remember that, in order to preserve the delicate ecosystem, only shore fishing is allowed. Pro tip: ADA-compliant cabins are available to visitors looking to stay overnight, and visitors with an America the Beautiful Pass are also entitled to a 50% discount on camping fees.
The Lakeview Nature Trail is a 1/2 mile paved trail that provides views of the scenic reservoir that the park - one of Louisiana's newest - surrounds. Be sure to keep your eyes peeled for bald eagles, as the area remains a popular nesting ground for our nation's symbol. Pro tip: If you choose to stay overnight in the accessible lodging, be sure to plan for a barbeque, as an accessible grill is also provided!
Did we miss an accessible trail that should be included? Let us know in the comment section below and sign up below to get our next installment featuring Maine, Maryland, and Massachusetts straight to your inbox!
FIELD OF THE INVENTION
The present invention relates to a storage rack and, more specifically, to an adjustable storage rack for accommodating cooking utensils.
BACKGROUND OF THE INVENTION
One class of cooking utensils includes a vessel for holding food that is to be cooked or otherwise prepared and an extended handle that is connected to the vessel and allows the cook to manipulate the vessel. In this class of cooking utensils are skillets, fry pans, woks, grill pans, chef's pans, sauciers, deep fryers and sauté pans, to name a few. Many of the larger cooking utensils in this class that are capable of holding a significant amount of food and/or have a large diameter vessel have a second handle that is disposed opposite to the extended handle. The extended handle and the second handle facilitate the movement of the utensil from one place to another by the cook. Hereinafter the term cooking utensil or utensil is used to refer to cooking utensils of the noted class.
In many households, storage space in the kitchen is limited. Consequently, to save space, cooking utensils are stacked one on top of the other in a cabinet, in a drawer, or on a shelf. Typically, the utensils are stacked in a nested fashion with the utensil having the largest diameter vessel located at the bottom of the stack, the smallest diameter vessel located at the top of the stack, and utensils with intermediate diameters located between the utensils at the top and bottom of the stack. Stacking the utensils in this way can make the retrieval of a particular utensil awkward and cumbersome. To alleviate this problem, various types of storage racks for such cooking utensils have evolved. For example, there is a storage rack that is typically attached to a ceiling and has a plurality of hooks from which the cooking utensils can be hung. Typically, the hook is passed through a hole in the end of the extended handle or a loop of wire associated with the extended handle. There are also vertical storage racks that hold the utensils in a vertical stack but separated from one another so that the cook does not have to "de-nest" the utensils from one another to retrieve the desired utensil. Additionally, there are storage racks that hold utensils in a horizontal "stack," similar to files in a file cabinet.
Cooking utensils of the noted class have a vessel with an exterior surface, an interior surface, and a substantially circular rim separating the exterior surface from the interior surface. The exterior surface includes a substantially flat and circular bottom surface and a side surface that extends between the bottom surface and the rim. The substantially circular rim may include a spout, as in a saucier.
One characteristic of a cooking utensil is its height. The height of a cooking utensil is the perpendicular distance between a plane that is defined by the bottom surface and a plane that is defined by the rim. There is a broad range in the height of cooking utensils. For instance, a grill pan can have a height of 25 mm or less, while a stir fry pan can have a height of 75 mm or more.
Another characteristic of a cooking utensil is its side profile. When the rim has substantially the same diameter as the bottom surface, the side surface extends substantially perpendicular to the bottom surface, as in many sauté pans. Alternatively, when the rim has a larger diameter than the bottom surface, the side surface is not perpendicular to the bottom surface. In this case, the side surface can be angled relative to the bottom surface, i.e., in cross-section follows a line that is at an angle to the bottom surface that is greater than 90°. Alternatively, the side surface can follow a curve when viewed in cross-section. Exemplary of utensils with an angled or curved side surface are skillets. Side profiles with straight and curved sections when viewed in cross-section are also possible.
Yet another characteristic of a cooking utensil is the difference in the radius of the bottom surface and the rim. In the case of the bottom surface and the rim having substantially the same radius, there is little if any difference. When, however, the side surface is not perpendicular to the bottom surface, there is a difference in the radius of the bottom surface and the rim. A typical skillet has a difference in radius of about 25 mm. In contrast, a typical wok or stir fry pan can have a difference in radius of 100 mm.
SUMMARY OF THE INVENTION
One embodiment of the invention is directed to a method of adjusting a storage rack that accommodates cooking utensils in a horizontal stack. The method involves providing an adjustable storage rack and providing a cooking utensil that the storage rack is to be adjusted to accommodate. The storage rack includes a base for placing on a substantially horizontal surface, such as shelf, counter, or the bottom of a cabinet or drawer. Operatively attached to the base are a plurality of bendable members, i.e., members that are capable of being bent from one position to another position by the user's own strength, the user's strength supplemented with a typical household tool (e.g., a pair of pliers), or the user's own strength in conjunction with a simple tool that can be provided with the rack or otherwise readily obtained. The method further involves assessing at least one characteristic of the cooking utensil. Based on the assessment, at least one of plurality of bendable members is bent such that a pair of the plurality of bendable members define a slot with a width that potentially accommodates a portion of the vessel in a stable manner, i.e., when the vessel is placed in the slot, the vessel does not have a tendency to tip such that the interior surface of the vessel begins to face the base. The placement of the vessel in the defined slot is then attempted such that one of the pair of bendable members that define the slot engages the rim of the vessel and the other of the pair of bendable members that define the slot engages the exterior of the vessel.
If the cooking utensil has a tendency to tip such that the interior surface of the utensil begins to face the base, the slot is too narrow. In this case, the method further includes bending at least one of the pair of bendable members that define the slot. This can involve bending one of the bendable members so that the member is at an angle that brings the member closer to the base but still in a position to support the vessel. Alternatively, this can involve bending one of the bendable members such that the member is no longer in a position to support the vessel. In this case, the original pair of bendable members no longer define a slot for accommodating the vessel. If necessary, at least one of the plurality of bendable members is then bent so that a different pair of the plurality of bendable members define a wider slot than the original slot. In either case, the placement of the utensil in the wider slot is then attempted. Typically, the utensil will be accommodated in the wider slot in a stable manner such that no further adjustment is required. If this is not the case, the process of defining a wider slot can be repeated.
If the cooking utensil does not have a tendency to tip but the slot could be narrower and still accommodate the utensil in a stable manner, the slot may be considered to be too wide. In this case, the method further includes bending a least one of the plurality of bendable members that define the slot. This can involve bending one of the bendable members so that the member is at an angle that brings the member closer to the base but still in a position to support the vessel. Alternatively, this can involve bending one of the bendable members such that a different pair of bendable members define a new slot with a width that is less than the original slot. In either case, the placement of the utensil in the narrower slot is then attempted. If the utensil is accommodated in the narrower slot in a stable fashion and a yet narrower slot does not appear feasible, no further adjustment is needed. On the other hand, if the utensil is not accommodated in the narrower slot, further adjustment will be required to define a slot with a width that is greater than the narrower slot.
Yet another embodiment of the invention is directed to a storage rack that is comprised of a base, a plurality of members operatively attached to the base with at least two of the members defining a slot for accommodating a cooking utensil with an extended handle, and a bendable handle support. The bendable handle support is capable of being bent to a first position for supporting the extended handle of a cooking utensil so that the handle is supported above the surface on which the base is positioned and to a second position at which the extended handle of a cooking utensil is not supported above the surface on which the base is positioned.
Another embodiment of the invention is directed to a storage rack that is comprised of a base, a plurality of members operatively attached to the base with at least two of the members defining a slot for accommodating a cooking utensil with an extended handle, and a bendable positioner for use in fixing the position of the base within a cabinet or drawer, or on a shelf. The bendable positioner is capable of being bent so as to extend away from the base and to engage a surface, such as a wall of a cabinet, so as to facilitate the fixing of the base at a desired position. In one embodiment, multiple bendable positioners are provided.
In one embodiment, one of the bendable members comprises a wire that extends from a first end that is operatively attached to the base to a second end that is free. The wire can be bent about the attachment point as needed to define the slot into which a particular vessel is to be positioned. In another embodiment, the wire is folded at a point that is in between the first and second ends. Consequently, the wire can be unfolded to provide a longer support structure.
DETAILED DESCRIPTION
An embodiment of a cooking utensil, hereinafter referred to as utensil 10, is illustrated in FIGS. 1A-1C. The utensil 10 includes a vessel 12 for holding food and an extended handle 14 that allows a cook to manipulate the vessel 12. The vessel 12 is comprised of an interior surface 16, an exterior surface 18, and a rim 20 extending between the interior surface 16 and exterior surface 18. The exterior surface 18 is comprised of a flat bottom surface 22 and a side surface 24 that extends between the flat bottom surface 22 and the rim 20. In utensil 10, the flat bottom surface 22 and the rim 20 have substantially the same radius. As a consequence, the side surface 24 has a side profile that is substantially perpendicular to the flat bottom surface 22 and has a substantially cylindrical shape. A portion of the interior surface 16, exterior surface 18, and rim 20 are shaped to form a spout 26. Nonetheless, in the plan view, the vessel 12 has a substantially circular outline. The vessel 12 also has a height 28 that is the perpendicular distance between a plane defined by the flat bottom surface 22 and a substantially parallel plane defined by the rim 20, exclusive of the spout 26.
Another embodiment of a cooking utensil, hereinafter referred to as utensil 32, is illustrated in FIGS. 2A-2C. The utensil 32 includes a vessel 34 and an extended handle 36. The vessel 34 includes an interior surface 38, an exterior surface 40, and a rim 42 that extends between the interior surface 38 and the exterior surface 40. The exterior surface 40 is comprised of a flat bottom surface 44 and a side surface 46 that extends between the flat bottom surface 44 and the rim 42. The flat bottom surface 44 has a smaller radius than the rim 42. As such, the utensil 32 illustrates one of the types of side profiles that occurs when there is a difference in the radii of the bottom surface 44 and the rim 42. The side profile is angled relative to the plane defined by the flat bottom surface 44, i.e., not perpendicular to the flat bottom surface. Alternatively, the side profile is characterized as the conic section that is presented between the base of a right circular cone and a plane that is perpendicular to the axis of the cone and intersects the cone between the apex and the base. The vessel 34 also has a height 48 that is the perpendicular distance between a plane defined by the flat bottom surface 44 and a substantially parallel plane defined by the rim 42.
Yet another embodiment of a cooking utensil, hereinafter referred to as utensil 50, is illustrated in FIGS. 3A-3C. The utensil 50 includes a vessel 52 and an extended handle 54. The vessel 52 includes an interior surface 56, an exterior surface 58, and a rim 60 that extends between the interior surface 56 and the exterior surface 58. The exterior surface 58 is comprised of a flat bottom surface 62 and a side surface 64 that extends between the flat bottom surface 62 and the rim 60. The flat bottom surface 62 has a smaller radius than the rim 60. Utensil 50 illustrates a third type of side profile, namely, a curved side profile. As shown in the cross-sectional view, the side surface follows a curve between the flat bottom surface 62 and the rim 60. The vessel 52 has a height 66 that is the perpendicular distance between a plane defined by the flat bottom surface 62 and a substantially parallel plane defined by the rim 60. The utensil 50 also has a second handle 68 disposed opposite to the extended handle 54.
FIG. 4 illustrates an embodiment of a storage rack that is capable of being adjusted to accommodate cooking utensils having different dimensional characteristics, hereinafter referred to as rack 80. The rack 80 is comprised of a base 82 and an array 84 of bendable members that is operatively attached to the base 82. The base 82 is comprised of a planar wire frame 86 and four wire legs 88A-88D that are for engaging a substantially horizontal support surface and elevating the frame 86 above the horizontal surface. The wire used in the frame 86 is the kind of wire that is commonly used in dish drainers and the like. Further, the wire can be coated or non-coated. The methods for manufacturing such frames are well known in the art. The frame 86 is comprised of longitudinal wire members 90A-90F, end wire members 92A, 92B, a first set of lateral wire members 94A-94D, and a second set of lateral wire members 96A-96D. One end of each of the longitudinal wire members 90A-90F is connected to end wire member 92A and the other end of each of the longitudinal wire members 90A-90F is connected to the end wire member 92B. One end of each of the lateral wire members 94A-94D is connected to the longitudinal wire member 90A, the other end of each of the lateral wire members 94A-94D is connected to the longitudinal wire member 90C, and a point in between the ends of each of the lateral wire members 94A-94D is connected to longitudinal wire member 90B. One end of each of the lateral wire members 96A-96D is connected to the longitudinal wire member 90F, the other end of each of the lateral wire members 96A-96D is connected to the longitudinal wire member 90D, and a point in between the ends of each of the lateral wire members 96A-96D is connected to longitudinal wire member 90E.
The array 84 of bendable members is comprised of a plurality of substantially identical bendable members 98. Each of the bendable members 98 has a U-type shape with one end attached to the longitudinal wire member 90C and the other end attached to the longitudinal wire member 90D. Each of the bendable members 98 is capable of being bent so as to rotate about an axis that extends between the two points at which the member 98 attaches to the longitudinal members 90C and 90D. In the illustrated embodiment, each of the members 98 is in a non-supportive position.
The base 82, in addition to supporting the array 84, also provides a structure for positioning a cooking utensil so that two of the support members 98 can engage the utensil, one member 98 engaging the rim of the vessel and the other engaging the exterior surface of the vessel. To elaborate and with reference to FIG. 5, the base 82 functions to position the longitudinal wire members 90C and 90D above a support surface 100 so that the wire members 90C, 90D can each engage the side surface of a vessel of a cooking utensil (e.g., the side surface 24 of vessel 12) and there is sufficient space between the wire members 90C, 90D and the support surface 100 so that the vessel does not engage the support surface 100. As such, the wire members 90C, 90D serve to cradle the vessel so as to prevent the utensil from being displaced forward or backward, as represented by arrow 102. Further, since the array 84 of bendable members is disposed between the wire members 90C, 90D, the wire members 90C, 90D also serve to position a utensil so that two of the support members 98 can support the utensil such that the rim and flat bottom surface of the utensil are substantially perpendicular to the support surface 100. The spacing between the wire members 90C, 90D and the distance that the wire members 90C, 90D are supported above the support surface 100 are typically chosen to accommodate utensils having vessel diameters between about 175 mm and about 350 mm. If the rack 80 is to be disposed in a cabinet or drawer with a height constraint and relatively large diameter utensils are going to be accommodated in the rack, the distance that the wire members 90C, 90D are supported above the support surface is reduced relative to applications in which there is no or a reduced overhead constraint (e.g., a shelf or counter top). While the wire members 90C, 90D are shown as lying in a plane that, when the rack is in use, will be substantially parallel to a support surface, it should be appreciated that numerous other orientations of the wire members are feasible that would position a utensil for support by bendable members. Further, it should also be appreciated that material other than wire can be used to realize the position function provided by longitudinal wires 90C, 90D. For example, a pair of rails made from wood or a polymer that are attached to or integral with an underlying wood or polymer base can be utilized as a base. As such, a base that is made from a material other than wire is feasible. A base made from a combination of materials (e.g., wood and wire) is also feasible.
With reference to FIGS. 6A-6C, an example is provided of the method of adjusting the rack 80 to accommodate a utensil when the bendable members 98A-98C in a portion of the array 84 that is to subsequently support the utensil are each in a non-supportive position. FIG. 6A illustrates each of the bendable members 98A-98C in the portion of the array that is to accommodate a utensil in the non-supportive position. For this example, consecutive attachment points of each of the bendable members 98A-98C to the longitudinal wire members 90C, 90D are separated from one another by about 25 mm, each of the bendable members 98A-98C will extend about 50 mm above the longitudinal wire members 90C, 90D when bent so as to be perpendicular to the plane defined by the wire members 90C, 90D, and the utensil is of the type shown in FIGS. 1A-1C and has a height of about 45 mm. To adjust the rack to accommodate the utensil, an assessment is made of the characteristics of the utensil relative to the bendable members 98A-98C. In this case, the rim 20 and the flat bottom surface 22 have substantially the same radius. Consequently, when the vessel 12 is supported by the rack such that the plane of the rim 20 is substantially perpendicular to whatever support surface the base 82 is disposed upon, one of the bendable members 98A-98C will engage the rim 20 and another of the bendable members 98A-98C will engage the flat bottom surface 22 of the vessel 12. As such, the controlling factor in determining which of the bendable members 98A-98C to place in a supportive position is the height of the vessel 12. Since the vessel 12 has a height that is greater than the distance between the attachment points of consecutive bendable members 98A-98C to the longitudinal members 90C, 90D and less than the distance between the attachment points of the bendable members 98A and 98C to the longitudinal members 90C, 90D, the first and third bendable members 98A, 98C are chosen to be bent or placed in a supportive position, as shown in FIG. 6B. Once bent, the first and third bendable members 98A, 98C define a slot 110 with a width that potentially accommodates the vessel 12. While the user can measure the distance between the bendable members and the height of a vessel to determine which bendable members in a group of bendable members need to be in a support position and which need to be in a non-support position, such measurements are typically not necessary. The user can place a bendable member 98 in a support position and then position the vessel such that the rim or flat bottom engages the support member and the vessel is positioned so that the plane of the rim is substantially perpendicular to the support surface. The user can then determine which of the remaining support members in the non-support position should be placed in the support position to maintain the vessel such that the rim is substantially perpendicular to the support surface. Alternatively, the user can place the vessel on the rack such that the plane defined by the rim is substantially perpendicular to the support surface and identify the two support members needed to define a slot for potentially accommodating the vessel 12.
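The selection logic in this example can be summarized in a short sketch. This is an illustrative reading of the method described above, not code from the patent; the 25 mm spacing and 45 mm vessel height are simply the example values used here.

```java
public class SlotSelector {

    /**
     * Returns how many member positions apart the two supporting members should be:
     * the smallest offset whose resulting slot width is at least the vessel height,
     * so the vessel cannot tip into the slot.
     */
    static int memberOffsetFor(double vesselHeightMm, double memberSpacingMm) {
        int offset = 1; // start with adjacent members
        while (offset * memberSpacingMm < vesselHeightMm) {
            offset++;
        }
        return offset;
    }

    public static void main(String[] args) {
        // Example from the text: 25 mm spacing and a 45 mm tall vessel give an
        // offset of 2, i.e., the first and third members (98A and 98C) define the slot.
        System.out.println(memberOffsetFor(45.0, 25.0)); // prints 2
    }
}
```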
Once the two members 98A, 98C have been placed in the supportive position, the user attempts to place the vessel 12 in the slot 110 defined by the two members 98A, 98C. As shown in FIG. 6C, the slot 110 accommodates the vessel 12 and supports the vessel such that the plane defined by the rim 20 of the vessel is substantially perpendicular to the support surface 112. Further, the vessel 12 is supported such that the vessel 12 is stable, i.e., does not have a tendency to tip such that the interior 16 begins to face towards the support surface 112. If the slot 110 did not accommodate the vessel 12 in a stable manner, i.e., was too narrow, a different pair of bendable members could be used to define a wider slot that is capable of accommodating the vessel 12. If, on the other hand, the slot 110 accommodated the vessel 12 in a stable position, but the vessel 12 could be accommodated in a stable manner such that the plane defined by the rim 20 was closer to being perpendicular to the support surface 112, the slot 110 might be considered to be too wide and potentially reduce the number of utensils that could be stored using the rack. In this case, a different pair of bendable members could be used to define a narrower slot that is capable of accommodating the vessel 12.
With reference to FIGS. 7A-7E, an example is provided of the method of adjusting the rack 80 to accommodate a utensil when the bendable members 98A-98C in a portion of the array 84 that is to subsequently support the utensil are each in a supportive position. FIG. 7A illustrates each of the bendable members 98A-98C in the portion of the array that is to accommodate a utensil in the supportive position. For this example, consecutive attachment points of each of the bendable members 98A-98C to the longitudinal wire members 90C, 90D are separated from one another by about 25 mm, each of the bendable members 98A-98C will extend about 50 mm above the longitudinal wire members 90C, 90D when bent so as to be perpendicular to the plane defined by the wire members 90C, 90D, and the utensil is of the type shown in FIGS. 2A-2C, has a height of about 30 mm, has a difference in radius of about 35 mm, and has an angled side profile. To adjust the rack to accommodate the utensil, an assessment is made of the utensil relative to the bendable members 98A-98C. In this case, the height of the vessel 34 is greater than the distance between the attachment points of consecutive bendable members 98A-98C and less than the distance between the attachment points of the bendable members 98A and 98C to the longitudinal members 90C, 90D. Consequently, as shown in FIG. 7B, the second member 98B can be bent to a non-supportive position to define a slot 114 between the first and third members 98A and 98C that potentially accommodates the vessel 34. As shown in FIG. 7C, when the vessel 34 is accommodated by the slot 114 in a stable position, the plane defined by the rim 42 of the vessel is substantially perpendicular to the support surface 112. However, the vessel 34 is supported at an angle such that the vessel 34 consumes a significant amount of horizontal space, which potentially reduces the number of utensils that can be accommodated by the rack. If the slot can be redefined such that the vessel 34 is supported at an angle that is closer to being perpendicular to the support surface 112, it may be possible to accommodate more utensils in the rack. As shown in FIG. 7D, the bendable member 98C can be adjusted so that the vessel 34 is supported such that the plane defined by the rim 42 is closer to being perpendicular to the support surface 112. Alternatively, as shown in FIG. 7E, the bendable member 98B can be used in conjunction with bendable member 98A to support the vessel 34 such that the plane defined by the rim 42 is at an angle that is closer to being perpendicular to the support surface 112. In this case, the bendable support 98B engages the exterior surface 40 at the location at which the exterior surface 40 transitions between the flat bottom surface 44 and the side surface 46. It should be appreciated that the user of the rack may initially define the slot for supporting the vessel 34 using bendable members 98A and 98B rather than bendable members 98A and 98C.
With reference to FIGS. 8A-8E, an example is provided of the method of adjusting the rack 80 to accommodate a utensil when bendable members 98A-98E in a portion of the array 84 that is to subsequently support the utensil are each in either a supportive position or a non-supportive position. FIG. 8A illustrates each of the bendable members 98A-98E in the portion of the array that is to accommodate a utensil in one of a supportive position or a non-supportive position. For this example, consecutive attachment points of each of the bendable members 98A-98E to the longitudinal wire members 90C, 90D are separated from one another by about 25 mm, each of the bendable members 98A-98E will extend about 50 mm above the longitudinal wire members 90C, 90D when bent so as to be perpendicular to the plane defined by the wire members 90C, 90D, and the utensil is of the type shown in FIGS. 3A-3C, has a height of about 80 mm, has a difference in radius of about 100 mm, and has a curved side profile. To adjust the rack to accommodate the utensil, an assessment is made of the characteristics of the utensil relative to the bendable members 98A-98E. In this case, the height of the vessel 52 (80 mm) is slightly less than the distance between the attachment points of the bendable members 98A and 98E. However, the difference in radius of the vessel 52 (100 mm) is greater than the maximum distance that the bendable members 98A-98D can extend above the longitudinal wire members 90C, 90D. Consequently, if bendable members 98A and 98E are chosen to define a slot that potentially accommodates the vessel 52, one of the bendable members 98A and 98E (if positioned to be perpendicular to the longitudinal wire members 90C, 90D) will engage the curved side surface 64 of the vessel 52 at a point that is close to the flat bottom surface 62. See FIG. 8B. While the vessel 52 is supported by the bendable members 98A and 98E in a stable manner, the cooking utensil occupies a considerable amount of horizontal space in the rack and is likely to limit the number of cooking utensils that can be accommodated by the rack. While the user may initially use bendable members 98A and 98E to define a slot for accommodating the vessel 52, the large difference in radius of the vessel 52 relative to the maximum height that the bendable members 98A-98D are capable of attaining relative to the longitudinal wire members 90C, 90D suggests that using two of the bendable members 98A-98E that are separated from one another by at least one bendable member or are adjacent to one another is likely to result in the defining of a slot that will accommodate the vessel 52 and is capable of supporting the vessel in a position such that the plane defined by the rim 60 is closer to being perpendicular relative to the support surface 112 than shown in FIG. 8B.
With reference to , if the bendable members A, B are chosen to define a slot for accommodating the vessel , the cooking utensil is not supported in a stable fashion. More specifically, cooking utensil tips, as shown by arrow , such that the interior surface of the vessel begins to face the supporting surface and the rim of the vessel either falls out of contact with the bendable member A or never comes into contact with the bendable member A. As such, the slot is too narrow. With reference to , one approach to defining a new slot that will both accommodate the vessel and support the cooking utensil in a stable position is to bend the bendable member B so as to rotate away from bendable member A. Alternatively and with reference to , two of the bendable members A-D that are separated from one another by at least one of the other bendable members A-D can be used to define a new slot that is potentially capable of accommodating the vessel in a stable manner. In this case, bendable members A, D are used to define a new slot. As shown in , the vessel is supported in the new slot such that the plane defined by the rim of the vessel is substantially perpendicular to the support surface . If having the plane even closer to being perpendicular is desired, the bendable member C can be bent so as to rotate towards the bendable member A.
With reference to , the rack further includes a bendable handle support that is capable of being bent into a position to support an extended handle of a cooking utensil above a supporting surface on which the rack is positioned and into a position at which an extended handle of a cooking utensil is not supported above a supporting surface. With reference to , the bendable handle support can be bent to a position in a range extending from a position A adjacent to the top side of the wire frame to a position B adjacent to the bottom side of the wire frame. It should be appreciated that smaller ranges are also feasible, provided the handle support is capable of being positioned to engage an extended handle of a cooking utensil so that the handle is spaced from the supporting surface and of being positioned so as not to engage an extended handle of a cooking utensil. In , the bendable handle support has been bent so as to be positioned to support the extended handle of the cooking utensil so that the handle is spaced from the support surface. While the bendable handle support is associated with one side of the rack, a second bendable handle support can be associated with the opposite side of the rack if needed or desired. Further, it should be appreciated that the bendable handle support can be replaced with multiple handle supports that would each allow individual extended handles or a group of extended handles to be positioned as needed or desired by the user.
With continuing reference to , the rack further includes a pair of bendable positioning members A, B for facilitating the fixing of the position of the base within a cabinet or drawer, or on a shelf or counter top, or on a similar support surface by engaging an upwardly extending surface, typically, a wall of some kind. illustrates an example of the use of the bendable positioning members A, B to fix the position of the base within a drawer. The bendable positioning members A, B have been bent so as to engage the sides A, B of the drawer so as to prevent the base from moving side-to-side in the drawer, i.e., in the directions of arrow , which might otherwise occur during opening and closing of the drawer or during insertion or removal of cooking utensils from the rack. In addition, the bendable positioning members A, B have been bent so as to engage the front side C of the drawer. In this case, the bendable handle support has also been employed as a bendable positioning member to engage the back side D of the drawer. The bendable positioning members A, B and the bendable handle support cooperate to prevent the base from moving back-and-forth in the drawer, i.e., in the directions of arrow , which also might otherwise occur during opening and closing of the drawer or during insertion or removal of cooking utensils from the rack. It should be appreciated that a rack can include one or more bendable positioning members and any such positioning members can be used to facilitate the fixing of a rack to some extent within a cabinet or drawer, or on a shelf or counter top, or on a similar surface provided that there is an adjacent surface that the bendable positioning member can be positioned to engage in light of the position at which the user wants to place the base of the rack. As an alternative to the use of a bendable positioning member, one or more flanges can be associated with the base, each having a hole or slot for accommodating a screw, nail, or other similar fastener that can be used to fix the position of the base within a cabinet or drawer, or on a shelf or counter top, or on a similar support surface.
illustrates an alternative bendable member that can be associated with a base, such as base . The bendable member is illustrated as being associated with base . However, it should be appreciated that other bases are feasible and the bendable member is capable of being adapted to such other bases. The bendable member is attached to the longitudinal member D at location . As such, the bendable member can be bent so as to rotate about any one or a combination of three orthogonal axes. Further, the bendable member can be bent about point to extend the reach of the bendable member , which is or may be desirable when attempting to adjust the rack to accommodate cooking utensils of large diameter and/or cooking utensils with a large difference of radius, such as stir fry pans and woks.
The foregoing description of the invention is intended to explain the best mode known of practicing the invention and to enable others skilled in the art to utilize the invention in various embodiments and with the various modifications required by their particular applications or uses of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1A-1C respectively are a first perspective view, second perspective view, and a cross-sectional view of a cooking utensil with a perpendicular side profile;
FIGS. 2A-2C respectively are a first perspective view, second perspective view, and a cross-sectional view of a cooking utensil having a difference between the radius of the rim and the radius of the bottom surface and a side profile that is angled;
FIGS. 3A-3C respectively are a first perspective view, second perspective view, and a cross-sectional view of a cooking utensil having a relatively large difference between the radius of the rim and the radius of the bottom surface, a relatively large height, and a side profile that is curved;
FIG. 4 illustrates an embodiment of adjustable storage rack for cooking utensils that includes a single array of bendable members that can be adjusted to support multiple cooking utensils having different height, side profile, and/or difference in radius characteristics in a horizontal stack;
FIG. 5 illustrates the use of a pair of the longitudinal wires of the wire frame shown in FIG. 4 to position a cooking utensil so that two bendable members can support the cooking utensil;
FIGS. 6A-6C respectively illustrate the array of bendable members in an initial state in which all the members are in a non-supportive position, in a subsequent state in which two of the members have been bent to a supportive position that defines a slot for potentially accommodating a vessel of a cooking utensil, and the members that are in the supportive position supporting a vessel in a stable fashion;
FIGS. 7A-7E respectively illustrate the array of bendable members in an initial state in which all the bendable members are in a supportive position, a subsequent state in which the bendable member between two bendable members that define a slot for potentially accommodating a cooking utensil has been bent to a non-supportive position, the members that are in the supportive position supporting a vessel in a stable fashion, and the adjustment of one of the members in the supportive position to support the vessel at a steeper angle, and the adjustment of one of the intermediate members to support the vessel at a yet steeper angle;
FIGS. 8A-8E respectively illustrate the array of bendable members in an initial state in which some of the members are in a supportive position and some of the members are in a non-supportive position, a subsequent state in which two bendable members define a slot for accommodating a cooking utensil in a manner that occupies significant horizontal space in the rack, a subsequent state in which two bendable members define a slot for accommodating the cooking utensil in an unstable manner, a subsequent state in which two bendable members define a slot for accommodating the cooking utensil in a stable manner that occupies less horizontal space in the rack, and a subsequent state in which two bendable members define a slot for accommodating the cooking utensil in a stable manner that occupies less horizontal space in the rack;
FIG. 9 illustrates a bendable handle support for supporting an extended handle of a cooking utensil located in a slot defined by the rack at a desired position relative to the wire frame of the rack;
FIG. 10 is a plan view of the rack located in a drawer and with two bendable positioning members bent so as to limit movement of the base of the rack within the drawer; and
FIG. 11 illustrates an alternative bendable member that is operatively attached to the base at one end. | 
*Invitation only. A limited number of seats is available for the audience of the workshop. If you are interested in participating, please send an email to [email protected]. You need to be a knowledgeable and active researcher in the field. Please submit a short justification.
The world is changing dramatically as AI integrates into our society and work. To keep pace, it’s imperative we reimagine education for adults and youth of all backgrounds and cultures using the transformative power of AI responsibly. Read a full description of the AI Education track.
The track will identify projects that could be developed using AI and Space to help address issues on a local and global scale, such as predicting, preparing for, and mitigating the effects of climate change.
Identify areas where there is a high potential for impact for AI in Space, for collective benefit and the potential partnerships and models that might enable progress.
Identify selected projects or efforts that will spin-out of the track and begin to be realised.
BT1: AI Education and Learning – Session 1 : State of Play. What is working and what do we know today?
What is AI? How is Machine Learning different? What is the current state of AI technology? What lies ahead?
What is the impact of AI-based technologies and games on children’s brains and mental health?
The AI Summit Health breakthrough track acts as the Focus Group on AI for Health’s (FG-AI4H) 5th Workshop. If you are interested in attending this breakthrough track please also register for the FG-AI4H here.
The session will frame the AI4H workshop by providing an overview of the activities of the FG and the topics and aspects that the FG and workshop would like to address including opportunities for standardization of AI solutions for health.
What data is helpful to collect from AI-based learning platforms that deepen student interest?
How do you encourage lifelong learning in the workplace?
How can artificial intelligence assist humans in diagnostics and medical decision making? This session will provide best practices for the applications of AI in Health, covering different medical domains, e.g. radiology and ophthalmology, and will create a common ground for further discussions and for the subsequent workshop sessions.
Building on SDG 10 (reduce inequality within and among countries) and SDG 5 (achieve gender equality and empower all women and girls) this segment focuses on equal protection, non-discrimination, and gender and racial diversity with respect to design and deployment of AI for public and private applications.
This session aims to brief the audience about “for good” projects in AI that have identified the problem, created a prototype, and are in the process of deploying towards product-market fit. What lessons have been learned, what examples can we take from them?
As SDG Goal 13 states, we must “take urgent action to combat climate change and its impacts”. This is an area where AI could be applied to the wealth of Earth observation (and other) data to help predict and mitigate the effects of climate change.
How can we ensure that developing countries, and those most at risk, are able to benefit from AI and Space? Can we develop data-sharing agreements and standards to help deliver on this potential? Who has what data – and how do we get it to the right people at the right time?
Goal of Labs: Evaluate and discuss practical proposals & projects that could amplify the use of AI technologies by a broader section of society to tackle problems that are meaningful to them. Format: Participants will be invited to join one of five labs listed below. In the lab they will engage in a facilitated discussion identifying practical projects that could help move the field forward. Individuals and organizations are also invited to present their work as a lightning talk if it is relevant to that lab (proposal submission link). At the end of the day, the group will decide which projects are ready to launch. These projects will be presented to all the Summit attendees the following day.
Lab themes: Community, Workplace, Media, Lifelong Learning, Psychology. Read a full description of the AI Education track and labs.
Building on SDG target 16.10 (ensure public access to information and protect fundamental freedoms, in accordance with national legislation and international agreements) and article 19 of the Universal Declaration of Human Rights (right to freedom of opinion and expression), this segment explores UNESCO roam framework * for a digital information sphere that protects human rights and openness, enables access, and facilitates multistakeholder engagement.
This session aims to raise awareness of tools that can help at each of the steps.
There is a huge potential for AI and Space tools to help in areas such disaster response, urban planning, and early-warning systems, but how do you get people trust something if they can’t explain it? From policy, to data, to AI deployment, trust plays a role in the success (or failure) of initiatives in a multitude of different ways. This session will identify areas where trust is essential to progress and look at the steps that can be taken to meet the need for trust at a policy-level, through to the ensuring that AI outputs are accurate, transparent, and reliable.
Goal of Labs: Building off of the earlier session, participants will identify resources and practical next steps needed for proposed projects to be launched at the end of the Summit. Groups will identify which projects to share at the breakthrough pitch session May 30.
This session will discuss existing AI initiatives in countries, related AI strategies, AI adoption maturity models, and priority health use cases that can benefit from AI for Health. The session will also highlight the need to align the health solutions with WHO global goals for health in terms of health outcomes, scalability and inclusiveness.
Building on the preceding discussions, this closing session showcases how applications and development of AI can be inclusive and respect human dignity. The speakers will present unique applications, best practices and inclusive models of governance in both the public and the private sectors.
This session briefs the audience about the importance of the data & AI commons to scaling up AI for Good: how they can help to reduce friction in problem-solving across all four steps; emerging platforms for Data & AI Commons; and how organizations can best leverage these platforms to open data silos and connect AI problems to fundamental research in AI, to AI solutions deployed at a scale.
This session will end with a call to action to mobilize the community to lend support to specific steps in the funnel or for specific verticals.
Development of resilient infrastructure, access to technology, and improvements in disaster risk reduction fall under a number of SDGs and could all benefit from AI and Space. How can we ensure that these benefits are shared with the largest number of people, while reducing the risk of negative impacts from badly-trained AI, algorithmic inequity, drift?
Can governance models assist in building trust and acceptance of these technologies? Should there be a global agreement on standards for AI and Space? What are the benefits and risks of introducing a governance model, and who should be responsible for maintaining it?
9:15 – 9:45 – Introduction to the Scientific Challenge. Industrial Sustainability: What are the real challenges and in which ways can AI help to solve these?
9:45 – 10:05 – Impulse statements. How do science, industry and AI intertwine? How can AI support scientists in finding and implementing breakthrough solutions? Industry and AI perspectives.
The agricultural sector has already started to exploit the potential of Artificial Intelligence. There are some promising examples of introducing AI to agriculture. This session will be the place to share some of the promising innovations and to discuss possible action-oriented proposals for projects and initiatives that can accelerate progress and bring the potential of AI to agriculture.
The session will discuss research and applications of innovative solutions in artificial intelligence and robotics to meet existing challenges in health, education, social services, and humanitarian aid. It will explore the positive opportunities offered by these applications for sustainable development.
Bring together experts working on integrating technology into community health systems in low- and middle-income countries to exchange experiences and challenges to fully realise the efficiency gains that technology can bring toward optimising health systems so they are responsive to population need.
Live Tiles (Bronze sponsor) – ‘Hopper’ (in homage to Grace Hopper) is an intelligent meeting scheduler, that uses neural network technology and machine learning to set up meetings across multiple time zones and teams; Hopper analyses agendas and your meeting history to determine the optimum time while taking the pain and wasted hours out of endless email exchanges.
Jojo Mayer (world renown drummer) will give a brief synopsis on the emergence of rhythm culture and its relationship to technology and communication. He shares his thoughts on interacting with digital culture and cross examines the relevance of a human performance in the digital age.
An overview of the projects selected on May 29 to move forward with will be presented along with discussions revolving around tangible next steps and committed supporters (time and resources).
AI powered augmented and autonomous driving, 5G enabled smart and connected vehicles, novel ride sharing platforms, vertical lift-off and people carriers, connected car security, and road safety. The session will look at how the future of smart mobility will change the way we work and live, and transform urban and suburban environments.
The session will explore AI Solutions for Humanitarian Challenges, including disaster response, food security, informal settlements, despeciation and poaching, oceanic plastics, disaster prediction and mitigation, wildfires, climate change and adaptation.
MIND AI (Gold sponsor) – Using its avatar, Delight, Mind AI will demonstrate its AI reasoning engine’s logic capabilities with a powerful question and answer session based on its reading of Chapter 1 from The Little Prince. We spotlight the power of inference when new knowledge is added, comprehended, and utilized in real time. Delight will answer questions via a live text-to-speech component, demonstrating how we aim to become the CPU of AIs, but will also rely on other AI components for the full experience.
Learn about current applications of AI in social sector and how to incorporate AI into your programs and projects, from understanding how to frame a problem for AI and get your data in order to how to resource your team and have the right processes in place.
Join us for a series of interactive workshops as we explore ways to invite global voices into AI. The workshops, led by AI and Storytelling thought leaders, will explore pioneering research on automated storytelling and include hands-on workshops on structured data collection as one of the keys to training algorithms to be more culturally aware. In addition, the workshops delve into ethical and historical considerations as we create the foundations for a deeply inclusive AI.
9:15-10:00 – Opening Panel: Can Global Storytelling Build Cultural AI Capacity?
The panel discusses ways to bring together data science, international development, and storytelling to create structured data around world cultures. Can storytellers capture their communities’ values and cultural heritage in metadata? Can AI preserve indigenous stories? Can endowing AI with a sense and understanding of culture bring machines closer to us as humans and make them easier to trust?
10:00-10:30 – Lightning Talks: Why AI and Culture?
The session will focus on showcasing how to develop a strong ecosystem for “AI for Good” solutions. Global experts from across the world will be invited to share their thoughts and personal experiences on defining the pillars required for an “AI for Good” City.
This workshop strives to introduce a broad audience with a high level view of AI and how to start an AI for Social Good project, including the data, algorithm and hardware considerations that should be made along the way. As part of this workshop, there will be a hands-on portion which all individuals can put their newfound knowledge into practice by designing and modifying their own AI algorithm.
Part II – Examine several case studies to deconstruct an AI application from how the problem was defined through to deployment, iteration, scale, extensibility. Case studies will cover different problems / contexts (Humanitarian, Development, Conservation) and different AI capabilities.
12:30-13:45 – Lunch (ITU Program) and Hands-on session: Identify an AI use case for each participating UN:NGO agency (ideally, teams of 2-3), frame the problem, identify data needs, domain expertise and a plan of action. Template will be provided.
We will highlight a few high profile examples of technologically advanced AI products that have failed to gain traction due to lack of cultural empathy.
14:00-14:30 – Lightning Talks: Why Storytelling?
This workshop aims to provide an introduction to MEXICA, a computer-based storyteller inspired by the way humans write.
Considering lessons from history, we explore how human rights have evolved over time and been extended to women, minorities, workers, children, disabled, immigrants, refugees. We ask the audience to share their ideas and concerns about privacy, security, fairness, accountability, and openness. What are our rights and responsibilities in the digital universe? What should be included in a Declaration of Citizen, Machine, and Culture?
The AI for Good Learning day is made up of workshops, tutorials, and educational sessions through three full-fledged tracks targeting businesses, the public sector and youth. Discover the latest AI trends, use cases and solutions, and learn how to leverage AI strategically for your organization, business or career.
The Business & AI track aims to provide workshops with practical and real life use cases addressing the use of AI & ML for businesses. The track includes workshops ranging from Microsoft Azur as a tool, to the implementation of AI chatbots and conversational AI for customer services, passing through the uses of the 5G and AI technologies for good, and also the use of AI for business intelligence and decision making.
The Public Sector & AI track provides workshops exploring various AI policies across different regions in Europe as well as Africa. The track also offers the chance to participate in interactive workshop for policy co-creation using the policy kitchen format, as well as a deep dive into learning practical hands on tools for data collection platform and ML based prediction for humanitarian crisis.
The Youth & AI track aims to explore the youth and younger generations perspectives on AI, in terms of the digital talents and skillset required for the age of AI, and the opportunities and challenges that AI and ML can bring for a sustainable future. The track enables participants to share their ideas and understandings about the common global challenges and provide them with tools to design thinking concepts to create relevant and implementable solutions.
Alex Lustig and Paul Conneally, LiveTiles EMEA.
Using foraus’s new Policy Kitchen crowdsourcing method, the session will identify and assess areas of research and innovation that need stronger international cooperation to ensure trusted, safe, and inclusive development of AI, promote equitable access to its benefits on a global level and can inform the AI global governance processes.
AI has the potential to solve some of the most pressing challenges of emerging economies in core sectors such as agriculture, healthcare and public services. However, ensuring the benefits of AI are within the reach of those who need it the most requires a concerted effort of several stakeholder groups. Having a robust governance framework for AI is only one piece in the puzzle, but it is a fundamental one. Join our workshop to learn from AI developers, governments and users what policies should be considered to capitalize on the AI potential.
The workshop will showcase the applied use of AI to transform humanitarian collaboration, allow participants to explore the DEEP platform in a hands-on session, and discuss with the audience the challenges and opportunities of exploiting advanced Machine Learning (ML) techniques to support humanitarian decision making.
The EQ Eco Sprint is a bold approach to innovation that focuses on actionable ideas for positive environmental change. This workshop brings together a diverse group of people, from scientists to entrepreneurs to brands and policy makers, to THINK, SOLVE and AMPLIFY solutions for critical environmental challenges. Join fellow conference attendees to unpack current global challenges, identify new possible futures and retrocast back to today through the lenses of Socio-Cultural, Technological, Economic, Environmental and Political factors for positive change. The Eco Sprint mission is to develop solutions that can be fostered, implemented, invested in or amplified to create tangible action. | https://aiforgood.itu.int/programme/ |
In conformity with Regulation 20 of the Transparency (Directive 2004/109/EC) Regulations 2007, Mainstay ("Mainstay" or the "Company") announces that:
The total number of Ordinary Shares of nominal value €0.001 each in issue on 04 February 2020 is 13,424,004 corresponding to a total of 13,424,004 voting rights. The Company holds no Ordinary Shares in treasury.
Therefore, the figure which may be used by shareholders as the denominator for the calculations by which they will determine if they are required to notify their interest in, or a change to their interest in the Company under the Transparency (Directive 2004/109/EC) Regulations 2007 and the Transparency Rules is 13,424,004.
Total number of Ordinary Shares outstanding
13,424,004
Total number of theoretical voting rights1
13,424,004
Total number of exercisable voting rights2
13,424,004
4 February 2020
1 The total number of theoretical voting rights (or "gross" voting rights) is calculated on the basis of all shares to which voting rights are attached, including shares whose voting rights have been suspended.
2 The total number of exercisable voting rights (or "net" voting rights) is calculated without taking into account the shares with suspended voting rights.
View source version on businesswire.com: https://www.businesswire.com/news/home/20200204005730/en/
Issues in applied Corporate Finance and Valuation
Estimation of the Cost of Capital:
In recent decades, theoretical breakthroughs in such areas as portfolio diversification, market efficiency, and asset pricing have converged into compelling recommendations about the cost of capital to a corporation. The cost of capital is central to modern finance, touching on investment and divestment decisions, measure of economic profit, performance appraisal, and incentive systems.
Each year in the United States, corporations undertake more than $500 billion in capital expenditures, so how firms estimate the cost of capital is not a trivial matter. A key insight from finance theory is that any use of capital imposes an opportunity cost on investors; namely, funds are diverted from earning a return on the next-best investment of equal risk. Since investors have access to a host of financial market opportunities, corporate use of capital must be benchmarked against these capital market alternatives. The cost of capital provides this benchmark. Unless a firm can earn in excess of its cost of capital, it will not create economic profit or value for investors. A recent survey of leading practitioners reported the following best practices (a short numerical sketch of how these pieces fit together appears after the list):
- Discounted cash flow (DCF) is the dominant investment-evaluation technique.
- Weighted average cost of capital (WACC) is the dominant discount rate used in DCF analyses.
- Weights are based on market, not book, value mixes of debt and equity.
- The after-tax cost of debt is predominantly based on marginal pretax costs, as well as marginal or statutory tax rates.
- The capital asset pricing model (CAPM) is the dominant model for estimating the cost of equity.
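To make those survey findings concrete, the sketch below strings them together: market-value weights for debt and equity, an after-tax marginal cost of debt, and a cost of equity (as might come from the CAPM) feed a single WACC. It is a minimal illustration in Python; every input figure is a hypothetical placeholder rather than an estimate for any real firm.

# Illustrative WACC estimate following the best practices above.
# All inputs are made-up placeholders, not real market data.
equity_value = 700.0          # market value of equity ($ millions)
debt_value = 300.0            # market value of debt ($ millions)
pretax_cost_of_debt = 0.06    # marginal pretax borrowing rate
tax_rate = 0.25               # marginal (statutory) tax rate
cost_of_equity = 0.11         # e.g., estimated with the CAPM

total_value = equity_value + debt_value
wacc = (equity_value / total_value) * cost_of_equity \
     + (debt_value / total_value) * pretax_cost_of_debt * (1 - tax_rate)
print(f"WACC = {wacc:.2%}")   # -> WACC = 9.05%

The resulting rate is what a practitioner would plug into a DCF analysis as the discount rate for the firm's cash flows.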
Discounted cash flow valuation models
The parameters that make up the DCF model are related to risk (the required rate of return) and the return itself. These models use three alternative cash-flow measures: dividends, accounting earnings, and free cash flows. Just as DCF and asset-based valuation models are equivalent under the assumption of perfect markets, dividends, earnings, and free cash-flow measures can be shown to yield equivalent results. Their implementation, however, is not straightforward. First, there is inherent difficulty in defining the cash flows used in these models. Which cash flows and to whom do they flow? Conceptually, cash flows are defined differently depending on whether the valuation objective is the firm’s equity or the value of the firm’s debt plus equity. Assuming that we can define cash flows, we are left with another issue. The models need future cash flows as inputs. How is the cash-flow stream estimated from present data? More important, are current and past dividends, earnings, or cash flows the best indicators of that stream? These pragmatic issues determine which model should be used. Although the dividend model is easy to use, it presents a conceptual dilemma. Finance theory says that dividend policy is irrelevant. The model, however, requires forecasting dividends to infinity or making terminal value assumptions. Firms that presently do not pay dividends are a case in point. Such firms are not valueless. In fact, high-growth firms often pay no dividends, since they reinvest all funds available to them. When firm value is estimated using a dividend discount model, it depends on the dividend level of the firm after its growth stabilizes. Future dividends depend on the earnings stream the firm will be able to generate. Thus, the firm’s expected future earnings are fundamental to such a valuation. Similarly, for a firm paying dividends, the level of dividends may be a discretionary choice of management that is restricted by available earnings. When dividends are not paid out, value accumulates within the firm in the form of reinvested earnings. Alternatively, firms sometimes pay dividends right up to bankruptcy. Thus, dividends may say more about the allocation of earnings to different claimants than valuation. All three DCF approaches rely on a measure of cash flows to the suppliers of capital (debt and equity) to the firm. They differ only in the choice of measurement, with the dividend approach measuring the cash flows directly and the others arriving at them in an indirect manner. The free cash-flow approach arrives at the cash-flow measure (if the firm is all-equity) by subtracting investment from operating cash flows, whereas the earnings approach expresses dividends indirectly as a fraction of earnings.
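As a minimal illustration of the free cash-flow variant described above, the sketch below discounts a short explicit forecast plus a constant-growth terminal value at an assumed rate. The cash flows, growth rate, and discount rate are invented purely for the example and carry no empirical meaning.

# Free cash-flow DCF sketch with a constant-growth terminal value.
fcf_forecast = [100.0, 110.0, 121.0]   # explicit free cash flows, years 1-3
r = 0.09                               # discount rate (e.g., the WACC)
g = 0.03                               # perpetual growth rate after year 3

pv_explicit = sum(cf / (1 + r) ** t for t, cf in enumerate(fcf_forecast, start=1))
terminal_value = fcf_forecast[-1] * (1 + g) / (r - g)         # value at end of year 3
pv_terminal = terminal_value / (1 + r) ** len(fcf_forecast)
firm_value = pv_explicit + pv_terminal
print(round(firm_value, 1))            # -> 1881.7 (value of the operations)

Replacing the free cash flows with expected dividends (and the WACC with the cost of equity) turns the same arithmetic into the dividend discount model discussed above.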
The capital asset pricing model
This is a set of predictions concerning equilibrium expected returns on risky assets. Harry Markowitz established the foundation of modern portfolio theory in 1952. The CAPM was developed twelve years later in articles by William Sharpe, John Lintner, and Jan Mossin. Almost always referred to as CAPM, it is a centerpiece of modern financial economics. The model gives us a precise prediction of the relationship that we should observe between the risk of an asset and its expected return. This relationship serves two vital functions. First, it provides a benchmark rate of return for evaluating possible investments. For example, if we are analyzing securities, we might be interested in whether the expected return we forecast for a stock is more or less than its “fair” return given its risk. Second, the model helps us to make an educated guess as to the expected return on assets that have not yet been traded in the marketplace. For example, how do we price an initial public offering of stock? How will a new investment project affect the return investors require on a company’s stock? Although the CAPM does not fully withstand empirical tests, it is widely used because of the insight it offers and because its accuracy suffices for many important applications. Although the CAPM is quite a complex model, it can be reduced to five simple ideas (a short numerical sketch follows the list):
- Investors can eliminate some risk (unsystematic risk) by diversifying across many regions and sectors.
- Some risk (systematic risk), such as that of global recession, cannot be eliminated through diversification. So even a basket with all of the stocks in the stock market will still be risky.
- People must be rewarded for investing in such a risky basket by earning returns above those that they can get on safer assets.
- The rewards on a specific investment depend only on the extent to which it affects the market basket’s risk.
- Conveniently, that contribution to the market basket’s risk can be captured by a single measure—”beta”—that expresses the relationship between the investment’s risk and the market’s risk.
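The five ideas above collapse into a single pricing equation: expected return = risk-free rate + beta × (expected market return − risk-free rate). The snippet below evaluates it for one hypothetical stock; the risk-free rate, market return, and beta are placeholder values, not market estimates.

# CAPM expected-return calculation with placeholder inputs.
risk_free_rate = 0.03
expected_market_return = 0.08
beta = 1.2   # sensitivity of the stock to market-wide (systematic) risk

expected_return = risk_free_rate + beta * (expected_market_return - risk_free_rate)
print(f"required return = {expected_return:.1%}")   # -> 9.0%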
Finance theory is evolving in response to innovative products and strategies devised in the financial market-place and in academic research centers. | http://bba-mba.net/issues-in-applied-corporate-finance-and-valuation.html |
As per our current Database, Oscar Auerbach died on Jan 15, 1997 (age 92).
He received his MD from New York Medical college in 1929.
Oscar met his future wife while studying in Vienna.
Currently, Oscar Auerbach is 116 years, 11 months and 1 day old. Oscar Auerbach will celebrate his 117th birthday on a Saturday, 1st of January 2022. Below we count down to Oscar Auerbach's upcoming birthday.
Luckily for Jonah Martinez, Cole Smith and Jeremy Prouty, the coins they found were on a public Florida beach. Discover information and vessel positions for vessels around the world. Cruise On A Pirate Ship. Shipwrecking may be deliberate or accidental. 1. Climb aboard up the gangway … Pirate Ship Slides Read More » Gather jumjum, white tea leaf and black coffee bean from the long-abandoned garden nodes in the zone. Outside the Caribbean, finding a traditional vessel is a task. The decisive battle was held offshore of the island, where the pirates achieved victory over Beckett and Davy Jones, who were killed in battle.. A young Norwegian boy in 1850s England goes to work as a cabin boy and discovers some of his shipmates are actually pirates. You may have to switch servers if the Pirate Camps or Shadow Ghosts were recently killed. Shipwrecked Recipe Book. But these questions would go unanswered for the next 260 years, inspiring stories and hopeful treasure seekers, until underwater explorer Barry Clifford found the ship’s remains in 1984. Beneath the waves, among the sea moss and rocks, there lies a hidden treasure on the central Oregon coast. This is an index of lists of shipwrecks (i.e. long-abandoned garden. A Pirate's Adventure: Treasures of the Seven Seas, Pirates of the Caribbean Official Website, Official Pirates of the Caribbean Facebook, LEGO Pirates of the Caribbean: The Video Game Wiki, Pages using duplicate arguments in template calls, Pirates of the Caribbean: Armada of the Damned, Pirates of the Caribbean: The Visual Guide, https://pirates.fandom.com/wiki/Shipwreck_Island?oldid=182210, Coincidentally, there was an island named "Shipwreck Island" in the Spring 2005 issue a. Fortnite Season 9 Weekly Challenge Articles Check Out Season 9 Challenge & Rewards! Explore the secrets of the deep and discover treasures found deep beneath the surface at Branson’s new Shipwrecked Treasure Museum. Fifty-two months of solitude passed before the English ship Duke appeared, searching out pirates under Captain Woodes Rogers. Shipwreck location database with gps coordinates, maps and wreck locations details. This secret sea vessel is a swashbucklers’ paradise! 1 Characters 1.1 Captain Redbeard's crews 1.1.1 Black Seas Barracuda crew (comic book and Ladybird books) 1.1.2 Rest of the Black Seas Barracuda crew (German audio dramas) 1.1.3 Skull's Eye Schooner crew 1.1.4 Redbeard Runner crew (LEGO Mania Magazine) 1.1.5 Brick Bounty crew 1.1.6 Black Seas Barracuda crew (2020 edition) 1.2 Crews of other pirates 1.2.1 Barracuda crew 1.2.2 Blackheart … Hovering your mouse over the sea region will tell you which fish you can catch there. Shipwreck definition, the destruction or loss of a ship, as by sinking. Please see the maps for spawn locations of Cox Pirate Camps, Cox Pirates’ Shadow Ghosts, and Hungry Hekarus and Ocean Stalkers. On February 22, three metal detectorists reaped the rewards of sand that had been swept away from “Treasure Coast”, as this particular area of Florida is known. 
The AmericasBoston – Florida – Louisiana – Mexico – New Orleans – North Carolina Panama – Peru – Savannah – Virginia – Yucatán Peninsula, AsiaBombay – China – Hong Kong – India – Nippon – Shanghai – Singapore, The BahamasAndros – Nassau – New Avalon – New Providence, The Caribbean Antigua – Black rock island – Cuba – Devil's Triangle – Hangman's Bay – HispaniolaIle d'Etable de Porc – Isla De La Avaricia – Isla Cruces – Isla de Muerta – Isla de PelegostosMartinique – Padres Del Fuego – Port Royal – Poseidon's TombPuerto Rico – Saint Martin – Shipwreck Island – Tortuga – Unnamed Island, EuropeCádiz – France – Gibraltar – Great Britain – Holland London – Marseilles – Portugal – Spain, Other locationsDavy Jones' Locker – Farthest Gate – Ice Passage – Isla Sirena. Use an Air Strike in Different Matches. Become a hero among the pirates and challenge yourself to build your own epic town. 3. This expansive two-story museum features items from the Roman Empire through World War II, from real pirate coins to actual artifacts recovered from historic shipwrecks from around the world. There were also many notable inhabitants who lived in the city, particularly pirates and wenches. Most seafarers who heard of it regarded it as nothing more than the rum-soaked invention of tale-spinning pirates. See more. When searching for common fish like Mullet, Skipjack, and Clownfish, I recommend going to a sea region with the least varieties of fish so the chances of your catching the fish you need will increase. Cursed Pirate Ships spawn in the Sea of Silence which is located East / Southeast of Iliya Island. If you have bad gathering luck, the nodes will respawn in about 5 minutes. Online tools and resources that make planning and preparing for Shipwrecked VBS easy-breezy! The brigantine, schooner and frigate were all popular choices for a pirate vessel. It was the location of Shipwreck Cove, which contained Shipwreck City. Much of Shipwreck Island's history is shrouded in mystery. According to nautical lore and legend, Captain Calico Jack Rackham became a pirate when he took control of a sailing ship in 1718. The ship was captured by pirates during its return voyage of the triangle trade. Last updated May 15, 2020 at 10:57PM | Published on May 13, 2020, Use Chopping in the Processing window (L) on, Defeat 20 Cox Pirates infiltrating the islands. The ship moves to one of these locations about every day, as far as I can figure. From the creators of warin.space we bring you ShipWrecked.space a pirate Io style game Take your ship to the high seas and battle your way to the top of the leader board. This will highlight each sea region. But pirates who knew of it could usually find it, though not always. Shipwreck City was located in the crater of a volcano, and its mass was comprised of hundreds of wrecked ships. One of the best known legends on the Spanish Main, it was believed that the island was an impregnable pirate stronghold and sanctuary for hundreds of years. Island Exploration Quests were improved in a patch on 5-13-2020. Search inside to find the tricorn hat , to complete your Commodore Norrington cosplay. The castaway is located on a small unnamed island Southwest of Staren Island off the coast of Olvia. Comment by HazelNutty The Shipwrecked Captive is a Pet Tamer that can be summoned by the Sternfathom's Pet Journal toy in Azsuna. A shipwreck is the remains of a ship that has wrecked, which are found either beached on land or sunken to the bottom of a body of water. 
The only way to enter this island was through the passage known as the Devil's Throat. The Schiedam was a pirate ship for a period of time in between its life in the Dutch East India Company and its time in the English fleet. This expansive two-story museum features items from the Roman Empire through World War II, from real pirate coins to actual artifacts recovered from historic shipwrecks from around the world. You can't fully understand a pirate's life until you sailed on a traditional ship. Explore the secrets of the deep and discover treasures found deep beneath the surface at Branson’s new Shipwrecked Treasure Museum. , Throughout the Age of Piracy, the Brethren Court held many meetings at Shipwreck Island during their existence. Search the MarineTraffic ships database of more than 550000 active and decommissioned vessels. On 2 February 1709, they took him off to safety. Pirate Hall was the chamber where the Pirate Lords assembled within Shipwreck Island to begin the meetings of the Brethren Court. Otherwise, you may spend an hour trying to catch a Clownfish. Another particularly well-known shipwreck is that of the Queen Anne’s Revenge; the ship that belonged to the pirate Blackbeard.The Queen Anne’s Revenge terrorized the seas in the early 1700s, before Blackbeard ran it aground around 1718 (and was then ‘ran aground’ himself a few months later).. About 150 years later, divers visiting the shipwreck uncovered a perfectly preserved bottle of perfume. Pirate Hall was located somewhere within Shipwreck Cove. Its treacherous passage has been known to claim several vessels every year. Shipwreck Island was a tropical island in the Caribbean, located a day's sail off the northeast coast of South America. It's stationed almost to the center of the Lagoon. Charles Vane was the original captain, but a mutiny occurred when the cautious commander refused to attack a French ship; Calico Jack ignored the order, took command and fired the ship's cannons. Find us at 2 Bay St. Tobermory Ontario, in Northern Bruce Peninsula! Selkirk had left the island, but the island never left him. After you catch the required fish, turn the fish in to him and collect your rewards. For more information on Sailors and Bartering check out BDO Sailor Guide or BDO Bartering Guide, Cox Pirates’ Artifact (Parley Beginner) x2. Being a Pirate Hunter. Please see the chart below for details. The Devil's Throat was the only way into the volcanic crater that housed Shipwreck Cove and Shipwreck City. Row out there and you’ll find the big pirate ship on the north of the island – you can’t miss it. Take your favorite fandoms with you and never miss a beat. There were also many notable inhabitants who lived in the city, like Edward Teague and "Stupid" Barnaby. MarineTraffic Live Ships Map. Some said that it had no fixed location, but that it moved. Shipwrecks generate occasionally in all types of oceans and beaches, usually underwater. This fight is a viable way to level pets. One of the few pirate maps that bore correct (at least at some times) coordinates for Shipwreck Island showed it as lying a day’s sail off the northeast coast of South America. sunken or grounded ships whose remains have been located), sorted by region.. By location. Use this at the designated location to summon the Cox Pirates' Ghost. Currently, every Shipwreck takes the form of a Galleon. Lying with the current the vessel is about 130 ft. x24 ft. x 8 ft. with some decking remaining however no rudder or “bow spirit”.
shipwrecked cox pirate ship location
Q:
How to use Python to produce as many subplots of arbitrary size as necessary to accommodate all data?
If I have a method of a Python class that plots a graph using Matplotlib, how can I then use this method within another method to produce subplots of an arbitrary size?
For example
The function:
import matplotlib.pyplot as plt
def plot(x,y):
fig=plt.figure()
plt.plot(x,y)
x=[3,3,5,4,6,7,8,6]
y=[6,5,5,7,6,5,4,6]
plot(x,y)
would produce a graph. If I had, say, 30 sets of (x,y) pairs, how can I use plot in a loop while adding to subfigures, to produce as many subfigures as required? I.e. if we want a 4 by 4 grid of subplots then we need 7 subfigures, with the remainder only containing 2 graphs.
A:
You can use subplot to achieve this. The following example:
import matplotlib.pyplot as plt
import numpy as np
def plot(x,y,i,num):
    snum = int(np.sqrt(num)) + 1  # integer grid side length; plt.subplot() needs integer arguments
ax = plt.subplot(snum,snum,i)
ax.plot(x,y)
numberofplots = 31
for i in range(1,numberofplots+1):
x = np.random.randint(0,100,10)
y = np.random.randint(0,100,10)
plot(x,y,i,numberofplots)
This produces a 6-by-6 grid of subplots, which is enough to hold all 31 plots.
A:
armatita's answer is perfectly fine, and it will always try to arrange for a "square" of subplots (same number of rows and columns where possible).
If you want the subplots to follow a different aspect ratio (e.g. more columns than rows when you have a 16:9 monitor) you can use the following:
import matplotlib.pyplot as plt
import numpy as np
import math
def fitPlots(N, aspect=(16,9)):
width = aspect[0]
height = aspect[1]
area = width*height*1.0
factor = (N/area)**(1/2.0)
cols = math.floor(width*factor)
rows = math.floor(height*factor)
rowFirst = width < height
while rows*cols < N:
if rowFirst:
rows += 1
else:
cols += 1
rowFirst = not(rowFirst)
return rows, cols
numberofplots = 31
rows,cols = fitPlots(numberofplots)
plt.figure()
for i in range(1,numberofplots+1):
x = np.random.randint(0,100,10)
y = np.random.randint(0,100,10)
ax = plt.subplot(rows,cols,i)
ax.plot(x,y)
plt.show()
| |
Why you should invest in an ETF trading company
Posted by Tech Insider News on Tuesday, August 31, 2019 12:12:12. If you're a seasoned trader, you're probably familiar with the process of creating an ETF, or exchange-traded fund.
But there are still a lot of factors to consider when deciding whether or not to buy or sell an ETF.
Here’s what you need to know about ETFs.
1. Who Owns an ETF?
ETFs are often traded by a small group of investors, which means that the amount of money involved is usually low. The amount of each share in an exchange-traded fund is the same, but the trading price is usually determined by a number of factors, such as the price of a specific asset. For example, a company that holds 10,000 shares of a company with a market cap of $1.5 billion could be worth a lot less than one that holds 100,000.

2. When Does an ETF Trade?
ETF trading takes place over a set period of time. When an ETF trades, it goes public. This is when the company's shares are traded for the first time. This allows investors to gain exposure to the company without having to buy their shares individually. Investors can also buy and sell shares on exchanges, but this is a more expensive process and may result in less profit.

3. What is a Fund?
An ETF is a type of exchange-listed stock, like a bond or stock in a bank. These companies often sell shares to individuals or institutional investors. ETFs have certain characteristics that are unique to each company, including the size of their stock portfolio and whether, and how much, they have in cash reserves. For instance, an ETF can hold more than 1.5 million shares of stock, whereas a bank might hold about 1,000 stocks.

4. How Much Does an Exchange-Traded Fund Cost?
An exchange-based ETF usually has a minimum investment of $100,000, and you can usually purchase an ETF with the proceeds from your regular investments. However, an exchange-traded fund generally has an annual cost of between $1,000 and $2,000 depending on the company. In addition, an investor can buy shares of an ETF through a brokerage account or through an investment company.

5. How Many Shares Can an ETF Hold?
The minimum investment required for an ETF is usually a few thousand shares. However, if you invest at a time when the market is low, or when there are few investors with sufficient liquidity, an investment of millions of shares can be possible.

6. What Are the Benefits of an Exchange Traded Fund?
Dr. Lee is the co-director of the Center for Cognition and Sociality, established in July 2012. He earned his B.A. in Chemistry from The University of Chicago before getting his Ph.D. from Columbia University in 2001, and later worked in Department of Pharmacology at Emory University as a postdoctoral fellow. In 2004 he joined KIST as a senior research scientist and later served as the Director of Center for Neuroscience. In 2009, he founded the WCI Center for Functional Connectomics as a part of World Class Institute Program. In 2015 he became the recipient of Creative Research Investigator Award to establish the Center for Glia-Neuron Interaction at KIST to serve as the Director of the Center before taking up his position in IBS.
Investigating the brain function in regulating cognition and sociality
The Center for Cognition and Sociality (CCS) focuses on unraveling the mechanisms of brain function for cognition and sociality, identifying causes of psychiatric disorders and neurodegenerative diseases, and developing novel treatments through research at various levels encompassing molecules, cells, and organisms. The CCS consists of eight laboratories in three groups (Cognitive Glioscience, Social Neuroscience, and Molecular Neuroimaging Groups) according to the research topics. We are conducting multidisciplinary research based on genetics, behavioral genetics, electrophysiology, optogenetics, acoustic neuromodulation, molecular neuro-imaging, brain wave analysis, synthetic biology, and glycomics.
The Cognitive Glioscience Group focuses on research to understand and regulate brain cognitive functions. We have been investigating the role of astrocytes in cognitive functions and seeking to find answers to questions about how various gliotransmitters such as Glutamate, D-serine, GABA, BDNF, and H2O2 are synthesized and secreted from astrocytes to regulate brain cognitive function, how astrocytic volume changes can regulate synaptic plasticity, and how reactive astrocytes can cause neurodegenerative diseases such as Alzheimer's disease, Parkinson’s disease and Huntington’s disease. Through these studies, we aim to clarify the importance of astrocytes in brain cognitive function and discover drug candidates that can regulate the function of astrocytes, thereby presenting a new paradigm for the treatment of various neurodegenerative diseases and neuropsychiatric disorders.
For translational research, we are developing ultrasonic technologies for non-invasive neuromodulation. We are particularly interested in the fundamental molecular and cellular mechanism of how ultrasound is sensed and transduced by various ion channels and transporters. We believe that this approach will provide an unprecedented way for selective and specific control of brain functions to treat neurological and psychiatric diseases.
In addition, we are trying to develop translational and reverse-translational behavioral research paradigms to understand how the human brain, as well as the animal brain, encodes abstract information by processing complex sensory signals. We deploy multiple techniques such as ECoG, EEG, PET/CT, and MRI to understand how various perceptual, cognitive, and conscious processes unfold in time.
The Social Neuroscience Group is studying the brain mechanisms that control a variety of social behaviors and emotions. We are particularly interested in the neural mechanisms of “empathy,” the ability to share and understand the emotional state of others. Using an observational fear model that assesses affective empathy in rodents, we seek to find mechanistic explanation for how the brain generates the affect sharing, and how pathological dysfunction within these brain networks causes abnormal empathic responses. Understanding neural mechanisms of observational fear will provide novel insights into effective treatments for psychiatric disorders associated with empathy.
Our group is also conducting research on neural underpinnings of how the brain recognizes individuals as unique identities during social interactions. We have been developing simplified and precisely controlled novel individual discrimination paradigms. Together with quantitative behavioral measures, we use multiple state-of-the-art techniques including two-photon calcium imaging, miniscope imaging, and Neuropixels recordings to reveal neural mechanisms of social recognition.
In addition, we are pioneering new fields of research to uncover the role of glycosylation of proteins and lipids in brain function for social behaviors. We have been generating a variety of genetically engineered mouse models targeting glycan-modifying enzymes and building a brain map of glycan structures. These efforts will broaden our understanding of the molecular mechanisms of glycosylation to regulate social behaviors and contribute to the development of new diagnostic methods and treatments for various neuropsychiatric disorders such as autism, depression, and schizophrenia.
The Molecular Neuroimaging Group has been developing molecular technologies for real-time visualization or control of brain functions at the molecular level. We are developing genetically encoded fluorescent biosensors to monitor dynamic molecular interactions and the activity of specific proteins in brain cells from freely moving animals. Also, we are designing a variety of optogenetic tools for precise modulation of specific molecules in space and time by light illumination. With the aid of various fluorescence imaging instruments such as confocal microscopes, two-photon microscopes, super-resolution microscopes, light-sheet microscopes, and FACS, we are studying brain functions on various scales from nanometers (10^-9 m) to centimeters (10^-2 m). Currently, our group mainly focuses on the development of optogenetic technologies for control of various channel proteins in the brain and new synthetic approaches for modulating the brain connections to decipher the meaning of communication between brain cells and clarify the structure-function relationship of brain circuits.
A light year, also light-year, abbreviated ly, is the distance light travels in vacuum in one Galactic Standard Year. 3.26 light years make up a parsec, which was a unit of distance that was important in locating star systems in the known galaxy.
Since the Galactic Standard Calendar used a year of 368 days, the length of a Galactic Standard Light Year would have been 9,531,961,160,601,600 meters.
Behind the scenes
The above calculation assumes that the Galactic Standard Day, Hour, etc. and Meter are equal to Earth's, and results in a Galactic light year 0.75% longer than an Earth-based light year.
It is also possible that Galactic days (and hours etc.) were 0.75% shorter than their Earth equivalents, and that the light years are the same length. (see below)
The length of a light year depends on the exact length of one year. On Earth, the International Astronomical Union (IAU) uses a Julian year of 365.25 days, while other sources may use a Gregorian year of 365.2425 days, or another year altogether.
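These lengths follow directly from multiplying the number of days in the year by 86,400 seconds per day and by the speed of light. A quick sketch reproducing the Julian-year and Coruscant figures quoted above, under the same assumption that Galactic days, hours, and meters equal their Earth counterparts:

```python
# Minimal sketch: light-year length as a function of year length (in days),
# assuming Earth-equivalent days of 86,400 s and the SI speed of light.
C = 299_792_458  # speed of light in m/s (exact by definition of the meter)

def light_year_meters(days: float) -> float:
    """Distance light travels in vacuum over `days` 24-hour days, in meters."""
    return days * 86_400 * C

print(light_year_meters(365.25))  # IAU Julian year: 9460730472580800.0 m
print(light_year_meters(368))     # Galactic Standard Year: 9531961160601600.0 m
print(368 / 365.25 - 1)           # ~0.0075, the 0.75% difference noted above
```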
| Source | year (days) | light year (meters) | light year (miles) |
|---|---|---|---|
| IAU | 365.25 | 9,460,730,472,580,800 | 5,878,625,373,184 |
| Gregorian | 365.2425 | 9,460,536,207,068,020 | 5,878,504,662,190 |
|  | 365.242199 | 9.4605284 × 10¹⁵ | 5.87849981 × 10¹² |
| Yahoo | 365.2411‡ | 9.4605 × 10¹⁵ | 5,878,482,164,161 |
| Coruscant | 368 | 9,531,961,160,601,600 | 5,922,886,070,723 |
‡ Note that while Yahoo separately reports a year length of 365.24220 days, its rounding of the light year length to five digits results in a year length of ~365.2411 days. | https://starwars.fandom.com/wiki/Light-year/Legends |
Flanged Immersion Heaters
ProTherm flange heaters are high capacity electric heating elements made for tanks and/or pressurized vessels. They consist of multiple tubular heaters formed into a hairpin shape and brazed to ANSI flanges. The heating elements can be made of copper, steel, stainless steel or Incoloy sheath. Standard flanges are composed of carbon steel rated for 150 or 300 Lbs. Other flange materials and shapes are available. Various types of electrical protection housing, built in thermostats, thermocouple options and high limit switches can be incorporated.
Flange Heater Material Options
In order to meet the heating requirements of your application, and have a safe operation in the environment within which the heater operates, several factors should be taken into consideration in the design of your heating element. The following are a number of criteria that should be considered:
- The pressure rating and the material of a flange.
- The sheath material of the tubular elements. The table below provides recommended tubular sheath and flange materials for different mediums.
- Operating temperature and watt density of the tubular elements that are appropriate for the material being heated. The table below provides the maximum operating temperatures and watt densities recommended for heating various materials.
- Design watt density, flow velocity, and outlet temperature are factors that contribute to the temperature level that the tubular elements will attain.
- Safety issues considering the environment within which the immersion heater will operate.
- Utilization of adequate temperature controlling devices, temperature and pressure high limit switches, low liquid level and flow controllers and other control/safety devices that will control the heating process and protect the heater from excessive heat.
- The classification of the electrical terminal box required (NEMA1, NEMA4, NEMA7 and NEMA12).
- The level of contamination that the immersion heater will be exposed to
- Safety and electrical code considerations
- The possible requirement of baffles that force a gas or a liquid to circulate around heating elements when flanged immersion heaters are used inside circulation tanks.
The table below shows the maximum temperatures that different sheath materials could be subjected to:
| Sheath Material | Maximum Temperature |
|---|---|
| Copper | 360° F (180° C) |
| Stainless Steel | 1200° F (650° C) |
| Steel | 750° F (400° C) |
| Incoloy | 1500° F (815° C) |
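As a rough illustration only, the limits in the table can serve as a first screen when short-listing sheath materials. The sketch below assumes the four materials and temperature limits listed above and ignores the other selection criteria (watt density, medium, flange pressure rating), which must still be applied:

```python
# Minimal sketch: first-pass screening of sheath materials against an operating
# temperature, using the maximum recommended temperatures from the table above (degrees F).
MAX_SHEATH_TEMP_F = {
    "Copper": 360,
    "Steel": 750,
    "Stainless Steel": 1200,
    "Incoloy": 1500,
}

def candidate_sheaths(operating_temp_f: float) -> list[str]:
    """Return the sheath materials whose recommended limit covers the operating temperature."""
    return [m for m, limit in MAX_SHEATH_TEMP_F.items() if operating_temp_f <= limit]

print(candidate_sheaths(300))   # ['Copper', 'Steel', 'Stainless Steel', 'Incoloy']
print(candidate_sheaths(1000))  # ['Stainless Steel', 'Incoloy']
```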
For more information on selecting flange heaters, please contact us. | https://www.tankheaters.com/products/flange-heaters/
New warning labels for cell phone sales in Berkeley about exceeding RF limits when placed in pockets or bra
The Wireless Association took legal action to stop the Berkeley council from requiring warning labels to be put on cell phone packaging (like the cancer warnings on cigarettes). The court of appeals reconfirmed that the council was within its rights.
This and more is publicly available:
“9.96.030 Required notice
A. A Cell phone retailer shall provide to each customer who buys or leases a Cell phone a notice containing the following language:
The City of Berkeley requires that you be provided the following notice:
To assure safety, the Federal Government requires that cell phones meet radio frequency (RF) exposure guidelines. If you carry or use your phone in a pants or shirt pocket or tucked into a bra when the phone is ON and connected to a wireless network, you may exceed the federal guidelines for exposure to RF radiation. Refer to the instructions in your phone or user manual for information about how to use your phone safely.”
Go here to read the rest. | https://healthstronghold.com/cell-phone-radiation-warning-required-berkley/ |
Everybody knows the importance of good nutrition. Healthy nutrition and regular meal times are generally believed to positively affect concentration and endurance and prevent tiredness. Food, therefore, has an effect on both mental well-being and physical health.
In order to operate normally, the body requires many different nutrients, some of which provide energy, while others are important for the growth and maintenance of the body. For most of us, eating a varied diet is sufficient to obtain all the nutrients required by the body.
For pilots, who need to have good endurance and maintain concentration during flights, their diet, both before and during flights, can have a significant impact on concentration, reaction and well-being during and after the flight.
Studies have shown that it takes longer to recover from jet lag if a person is older than middle age, is in poor physical shape and/or has an unhealthy diet. Based on this, one may assume that to reduce the symptoms of jet lag, it is better to be in good physical shape and that a healthy diet is one aspect of maintaining good health.
Drinking water is extremely important to reduce the symptoms of jet lag and to prevent other negative effects that water loss can have on well-being. Humidity in aircraft is often between 2–3%, which can be likened to a desert climate. As the water loss is not accompanied by loss of body salts, people are less thirsty, but this dry air nevertheless dries out the body. As a result, it is important to drink a lot of water during long flights. A good rule of thumb is to drink quite a bit of water every hour during a flight.
In addition to drinking water, coffee intake should be limited, as caffeine is a diuretic and simply increases loss of water from the body. Many use coffee to sharpen their concentration, and if people need to do so during flights, they should do so for that purpose but try to limit their caffeine intake otherwise.
When giving advice on diet, the general rule of thumb is to recommend regular meals and emphasise the importance of not going too long between meals. The importance of eating breakfast is always reiterated, and it is no exaggeration that breakfast is one of the most important meals of the day. Pilots should keep these recommendations in mind, both as regards regular meals as well as regarding breakfast. Breakfast sharpens concentration and better prepares one for the tasks of the day. Blood sugar tends to be low in the morning because no food has been eaten for many hours. There are various indications that such a condition may even slow reactions and concentration. In addition, studies have shown that the diet of those who eat breakfast is generally more nutritious than that of those who forgo this first meal of the day.
If a long time elapses between meals, this will have the same effect, i.e., blood sugar levels will fall and people will feel tired and drowsy. In general, no more than 3–4 hours should elapse between meals. It is important, therefore, especially during long flights, to have something healthy and nutritious to snack on if necessary.
Eating correctly during flights can be difficult, and it is therefore important to plan ahead by taking an easy-to-grab snack with you. This could include dried or fresh fruits, nuts, wholemeal biscuits, sandwiches, yoghurt or skyr-drinks (take care to pick those with less sugar) and energy bars such as protein bars. Be sure to avoid sports snack bars, as they generally contain only carbohydrates.
The best selection for breakfast is carbohydrate-rich foodstuffs such as coarse bread, oatmeal, breakfast cereals with low sugar content and even some fruit. In addition, breakfast should contain some protein and fat. For flights during other times of day, the same applies, i.e., select foodstuffs that are carbohydrate rich rather than high in fat, and try to avoid foodstuffs containing sugar such as sweets or sodas, as such foodstuffs raise blood sugar levels very rapidly, which subsequently fall very rapidly, causing the negative effects described earlier. It is also wise to avoid large, energy-rich meals before a flight, as such meals are often accompanied by drowsiness and tiredness because so much energy is being used to digest the meal.
After longer flights, it is also a good idea to ensure sufficient fluid intake, continue to drink water and eat appropriate-sized meals. Do not select something that is heavy on the stomach and hard to digest, particularly if you are on a layover and may have only a short time before the next flight. A large and heavy meal eaten just before going to sleep can have a negative effect on the quality of sleep and thereby your subsequent well-being.
In light of the above, it must be quite clear that diet and nutrition can be an important aspect in pilot well-being during flights and after longer flights. A healthy diet also plays an important part in physical health and is important to prevent increases in weight and improve health. | https://fittofly.com/nutrition-during-flights/?lang=en |
Stray is not the game for me. And despite there being a Twitter account dedicated to cats being enrapured by it, it’s not for my cat, either.
The game has some immediate problems that just boil down to poor design across the board. No, I didn’t finish the tutorial, and no, I’m probably not going to. As much as everyone else enjoys it, here are the problems I encountered straight out the gate.
My experience
First off, ignoring that all the marketing material makes a big deal of the protagonist being separated from his family, I thought it would be, like, a human family, not a family of strays. There’s nothing wrong with that; strays do congregate into communities. I just thought the impetus was going to be, like, the image of a crying little girl to make it feel urgent that you get home.
The problem with this is that the game starts you off in what looks like "home" with rain outside and, lacking anything else to do, you have to interact with the other 3 cats. Fair enough; I was playing as a cat, so I was thinking like a cat. The rain was not something to explore. I interacted with the other cats and we all went to sleep. Then the next morning came and that’s where the problems started. Ignoring that the other 3 cats had various hard-wired triggers, like if you stopped to drink from a puddle, so did they, the game was more or less okay on the behavior with at least one leading you down various paths, without you necessarily being the last in line. But then it became clear that the level design didn’t make it easy to go back "home." That rang some alarm bells, because, thinking like a cat, this should have been my territory. I shouldn’t be leaving behind perfectly good shelter in a way that made it impossible to get back. Stray cat colonies take shelter within their established territory. This was not adding up. On top of that, it became unclear what we were all doing, because if it was hunting, then there should have been some form of food accessible to us all that didn’t involve trekking through a stream – something cats would normally avoid, and which one of the other cats just did even with perfectly good dry land off to the side, which I myself stuck to, because I’m not some cat weirdo. The game forcing me into the stream to continue felt unnatural.
To make matters worse, one of them scratched a mossy tree – an obvious territory marking behavior – and when a prompt invited me to do the same, I was presented with L2 and R2 button prompts. Yes, you basically have to alternate mashing them to pedal your little paws to scratch at things, apparently, and you get some distracting vibration out of it for your efforts, which felt like punishment, so I stopped. I’m sure that’s intended to feel great on the PS5 triggers, but without the nuance, it was uncomfortable and dissuaded me from doing it.
Of course when they all went conga line on an unstable pipe, the real game began in a cutscene of my player cat slipping as the pipe gave way, sending him trying to claw his way back up and falling basically forever in cat height as the other 3 cats looked down in horror. My cat found himself somewhere dark and damp and he was limping, so, naturally, my gamer instinct kicked in and I realized I needed to find a way to restore my HP, but the puddles in this area were not drinkable. Unable to figure out what to do, I just kept walking the way the camera pointed until my cat blacked out and I figured I’d failed something. But no, that just kicked off a cutscene where a door took 10x as long to open as was necessary and a couple cyclops headcrabs ran through. My cat woke up miraculously fine and I realized I just needed to push forward, despite knowing those are the enemies in this game and feeling like it was a bad idea, but not seeing any alternative.
So here we are with me thinking like a gamer, much more affected by the fact my own cat was acting like I’d abandoned her with her eyes squeezed shut facing away from the game full of cats I’d chosen over her than at the heavy-handed emotional manipulation the game had thrown into my face like a wet towel and apparently a kitty Wolverine as a protagonist complete with claws and a healing factor. After going through a door, I was greeted with various messages in English that indicated I had a mysterious benefactor who was going to lead me around by lighting up various TVs and decrepit signs with arrows on them and began to realize that this game treats red, or at least neon scarlet not dissimilar from our ginger protagonist, as the "good" color. Or at least one of them since all the messages are in white, but running toward red always seemed like a good idea since nothing white was actually bright enough to lead me from afar. Red from the camera lights, red from various neon signage, all of that seemed to lead me forward just enough for some TV to turn on with white text.
The problem of course became apparent in the fan room because there I was with a couple buckets and no idea what my course of action was supposed to be. There were sources of dripping water, so, thinking like a gamer, I decided I needed to fill one to maybe weight something down, but all of my attempts failed because I simply wasn’t accurate enough to place the buckets in the stream. I finally managed to find my way up and saw the fan and decided I needed to short it with the water, so I went down hoping I had maybe filled my bucket up at least a little and brought my (empty) bucket back up to throw into the convenient funnel to the fan, which broke it and brought me to the worst room of the tutorial up until the chase. Knocking paint cans down was one thing and they seemed to all be blue paint, so I knocked one down close to the gap to see if I could maybe clear a way to jump across because it looked for all the world like an artificial wall, but nothing became apparent. Having no other recourse, I ended up knocking all of them down hoping one of them would hit something important, but none of them did, so I started looking at all the various pipes that looked like they might help cross the gap and none of them had any interaction to them. So out of frustration I just started trying to find anything left to interact with and lo and behold, the obvious platform with 4 paint cans blocking it DID have a jump I could use, but I found it completely by accident because I noticed the prompt blip and had to fight with it to get it to register. This room took me the longest of anything in the tutorial to figure out because of how fiddly that prompt was, but once I was over the gap, figuring out the can of blue paint in the giant blue splash over the skylight was going to break the skylight was the easiest puzzle of anything up to that point and ending up in a home felt like an immediate victory! Surely the robots who lived there were going to be upset, what with it looking lived-in with working electronics and a drip pan I could drink out of! Only no, apparently it was abandoned. But hey, blue was good. Starting to see a problem here? You will in a minute.
So after avoiding scratching a carpet to save myself from the displeasure of being ruffled for it by the rumble and giving myself a congratulatory drink from the drip pan, I realized I was supposed to leave the relative safety of this house and almost immediately came across a damaged robot with no limbs left but one good arm who activated and scared my cat out of my control, reaching for me before dying for good. Unable to determine if there was anything I could do to help or activate him, because any semblance of thinking like a cat was long gone, I continued forward, getting a bit nervous at how the horror aspects of the game were ramping up.
Then the chase happened. A bunch of cyclops headcrabs lit up red and I was instructed to outrun them, being guided by various arrow signs. One of them jumped on me and I was prompted to meow them off myself with the Circle button, but in my haste, I was presented with an arrow that I thought meant "forward." Oops. No, apparently an "up" arrow actually means "up," which was only ambiguously established earlier in the tutorial, and I got cornered and eaten in a blue room, because I thought blue was a "good" color, because "neon" certainly had been redefined as a bad one at the beginning of this segment and my screen going red when I took damage decidedly didn’t help that. Blue felt like the safe color and it wasn’t.
It’s at this point I rage quit, because it wasn’t worth it to me to retry the segment when my real cat was upset at me for choosing this interloper over her and frankly so was I.
In summary, the game, despite being billed as a game for cat lovers, does not allow you to think like a cat. I know as someone on the autism spectrum that my ability to think like a cat is probably significantly better than most, but the game has you quickly thinking like a gamer limited by a cat body and all the feline reactions are pure lip service. Cats in this world are humans pretending to be cats, from the way they all jump to do cat things when you do to reacting like humans when one of their own takes a plunge. The whole segment with your cat family is play acting with people who don’t actually fully understand cats even at a basic level, like, you know, not liking getting wet.
The game also just plain has technical problems and even when it doesn’t, it has confused visual messaging and is just too heavy-handed in everything it does. It’s heavy-handed in trying to scrape for your sympathy, it’s heavy-handed in talking to you directly as the player when something more whimsical would have worked better that didn’t immediately show its hand that you had a specific benefactor and could’ve left more question to it, and it’s heavy-handed in the way it expects you to use cat behaviors to solve human puzzles.
Most of all, it just feels like I’m being punished for playing it. Being a cat should be easy, not something I have to rapidly mash the triggers to do only for the controller to play spray bottle in my hands for my efforts.
How do we fix this?
Let me put on my game designer hat for a hot minute and explain where the game fails worst: color messaging.
Let’s be clear, blue is normally the most trustworthy color at your disposal. Blue is calming. Yes, your protagonist is a redhead, but that should be incidental because rules of contrast apply just as well as anything else. Break out your color wheel and let the protagonist stand out. Using blue paint to establish blue as a "good" color quickly came to bite me in the butt when I needed it most.
Red is normally your most obvious "bad" color. In this case it’s used for both good and bad things and that made the messaging in a hectic segment immediately confusing. You cannot expect me to choose a red staircase over a blue room when I’m being chased by enemies with red glowing eyes who are filling my screen with red damage.
White is also normally a "good" color, though here it’s a cold white, which can be a little harsh and off-putting. If it had at least been consistent, though, it would have been SOMETHING.
Green is one that’s a toss-up depending how it’s used, because green light is hard to come by in nature and immediately sets off alarm bells, but your little kitty haven is a beautiful green space. Your zombie robot, unfortunately, presents a green face when it’s scaring you, so once again mixed messaging is in play.
So how do we fix this?
Make green your "good" color.
I know what I said about green light, but if you want to lead the player back to their verdant home, leading with green is a good idea. In the bleak atmosphere you fell into, green would stand out as well as anything else even if it wouldn’t be colorblindness friendly. There’s plenty of opportunity for green neon signage (it exists) and green paint and whathaveyou. If the goal is to get back to my green home, green camera LEDs, green TV messages, and even green-faced robots all would be within messaging that they’re pointing me back to that goal of finding my way home.
In absence of that, just keep the "good" colors to blue and white. For crying out loud, it’s not like white isn’t the next most common neon signage after red neon itself (white, and all the color filters over it that make up any other "neon" signage, are actually argon). Keep your benefactor’s contributions a consistent color. Put white LEDs on the security cameras. Or even blue ones. Blue and white would be an easy contrast to both your feline protagonist and all the red-eyed enemies all over the place. Blue LEDs on the cameras would have also made them less creepy to have looking at you off the bat.
Keep red as your bad color. Your enemies emit it; damage emits it; don’t spend a whole tutorial telling me red is a good thing and then throw me in a situation where it’s both a good and bad thing. It CANNOT be both. When the first red LED on a security camera turned to look at me, I was immediately afraid of what that type of surveillance would result in and had to be taught to trust it for several minutes, only to be utterly betrayed in the end.
Color messaging is a core pillar of any kind of art design and the fact it’s failed so spectacularly here doesn’t give me much interest in the rest of the game, level design issues aside.
What else? Make being a cat easy. I shouldn’t have to furiously trigger both halves of the controller to do a cat thing. Let me interact with a respectful animation that doesn’t base its speed on my ability to coordinate firing twin pistols. And don’t punish me with unpleasant force feedback when I succeed at it. If I’m going to roleplay as a cat, I don’t want to be punished for it. Save the force feedback for the PS5; on PS4 rumble is not a positive tactile experience; it’s usually one that says you’re taking damage. It’s just more mixed messaging.
For that matter, MAKE IT CLEAR WHEN I’M TAKING DAMAGE! Oh, my GOD that’s a basic one! Having to wait until I assume the end of the tutorial to learn that all my limping around was just cutscene BS and that I could expect red on my screen with regenerating health like a cover-based shooter was one of the most aggravating realizations because I spent most of the tutorial unable to tell what a failure state looked like or what I could do to fix it! One would THINK that drinking water would have some gameplay purpose, but apparently no, it’s just "catting." Having to wonder exactly how much survival would be involved and assuming I’d have to drink water or else find a mouse to heal myself or keep myself from starving was, I think, a fairly reasonable concern going into this effectively blind.
Most of all, I was left wondering at what point I was going to get the floating robot companion and at no point did I have any indication it was leading up to that.
And as a final point, which feels minor in relation to everything else, giving the action prompts a little more leeway would have gone a long way to stepping out of the way to let the player solve the puzzles, because the puzzles themselves aren’t actually all that difficult. If you’re not going to have people thinking like a cat, at least let the gamer do the gamer thing without hassle.
The way the game leads the player forward with following the things we all know from the marketing material are the enemies and trying to jumpscare us with the robot allies is just the cherry on top of all the mixed messaging the game does.
The one and only thing the game gets right is that it looks fantastic. The cats move realistically and the verdant area is gorgeous. The world below is bleak, but in a realistic way. Even if the way the world is designed makes zero sense, it’s nice to look at.
Wrapping this up
I know everyone is fawning over this game right now, but as someone who was promised I’d be playing as a cat, I’m walking away disappointed. As a writer I’m walking away disappointed. And as a gamer, I am walking away disappointed. This game has so many problems right out the gate that while I might give it another chance when I’m feeling more charitable if only because I’m not getting that $30 back from PSN, it’s going to be despite its various core design flaws rather than it delivering on what I felt I was promised.
Unless and until I feel that charitable, I have a real cat with a broken heart I need to attend to. Daddy’s girl comes first. | https://blog.bluestarcreations.net/blues-reviews/stray-tutorial-review/ |
The coevolution of staple crops and human society can be traced in the relics of ancient genomes and in population genetic signatures that our interdependence has left on our genomes and those of our crop plants. Patterns of geographical adaptation in the genomes of local crop varieties connect millennia of survival strategies of subsistence farmers with future agricultural improvement in the face of challenges from environmental changes.
A new analysis has characterized a fundamental building block of complex transcribed loci. Constellations of core promoters can generally be reduced to pairs of divergent transcription units, where the distance between the pairs of transcription units correlates with constraints on genomic context, which in turn contribute to transcript fate.
A genome-wide study in Samoans has identified a protein-altering variant (p.Arg475Gln) in CREBRF as being associated with 1.3-fold increased risk of obesity and, intriguingly, 1.6-fold decreased risk of type 2 diabetes. This variant, which is common among Samoans (minor allele frequency = 26%) but extremely rare in other populations, promotes fat storage and reduces energy use in cellular models.
Study of the Greater Middle East (GME), home to approximately 10% of the world's population, has made invaluable contributions to the characterization of rare genetic disease, especially recessive conditions arising from the tradition of consanguinity and large families with multiple children. A new study now reports 1,111 unrelated exomes from the GME and provides a comprehensive view of genetic variation for enhanced discovery of disease-associated genes.
Albert Tenesa and colleagues report an analysis of the heritability of 12 complex diseases in 1,555,906 individuals from the UK Biobank. They find that SNP heritability explains a higher proportion of estimated heritability when shared familial environmental factors are taken into account.
Yun Chen, Albin Sandelin, Torben Heick Jensen and colleagues describe general rules governing the expression of reverse-oriented promoter upstream transcripts (PROMPTs) based on the orientation and proximity of promoter pairs. They characterize how the distance between promoters affects the expression of PROMPTs and the usage of alternate mRNA transcription start sites.
Jonathan Pritchard, Christopher Garcia and colleagues examine associations between different T cell receptor V genes and MHC alleles by eQTL mapping. They find that there are strong associations between MHC variation and T cell receptor gene usage and map these signals to specific MHC amino acids, many of which physically interact with germline-encoded amino acids on the T cell receptor.
Yongfeng Shang and colleagues report that the pioneering factor FOXA1 associates with DNA repair complexes and regulates DNA demethylation at its genomic targets in a DNA polymerase β–dependent manner. They show that FOXA1-associated DNA demethylation is coupled with genomic targeting of estrogen receptor α and estrogen responsiveness in a breast cancer cell line.
Margaret Goodell, Wei Li and colleagues use double-knockout mice for Dnmt3a and Tet2 to model leukemia development. Through epigenetic and transcriptional analyses, they show that loss of DNMT3A and TET2 upregulates lineage-specific transcription factors such as KLF1 in hematopoietic stem cells and accelerates malignancy.
Robbie Waugh, Nils Stein, Gary Muehlbauer and colleagues report the exome sequencing of 267 landraces and wild accessions of barley from diverse regions to study adaptations to different agricultural environments. They observe correlations of days to heading and height with environment and find that variation in flowering-associated genes has strong geographical structuring.
Ashley Winslow, Roy Perlis, David Hinds and colleagues report the identification of 15 genetic loci associated with risk of major depressive disorder in individuals of European descent. They find that several loci are also associated with risk of other psychiatric traits, including schizophrenia and neuroticism.
Jan Veldink and colleagues show that loss-of-function variants in NEK1 are associated with susceptibility to amyotrophic lateral sclerosis (ALS). In addition to finding an excess of rare loss-of-function NEK1 variants in ALS cases, they report a significant association between a specific NEK1 missense variant (p.Arg261His) and disease risk.
Ammar Al-Chalabi, Jan Veldink and colleagues perform a genome-wide association study for amyotrophic lateral sclerosis (ALS) in 15,156 cases and 26,242 controls. They identify three new genome-wide-significant variants and establish ALS as a complex trait with a polygenic architecture, but with a distinct and important role for low-frequency variants.
Stephen McGarvey and colleagues identify a missense variant in CREBRF strongly associated with body mass index in Samoans. This variant is rare in other populations but is common in Samoans and has a much larger effect size than other known common obesity risk variants, including variation in FTO.
Ewan Pearson, Kathleen Giacomini and the Metformin Genetics Consortium perform a genome-wide association study for glycemic response to the antidiabetic drug metformin. They find an intronic allele of the GLUT2 glucose transporter gene that associates with greater metformin action, an effect that is more pronounced in obese individuals.
Matthew Hurles and colleagues report exome sequencing of 1,891 individuals with syndromic or nonsyndromic congenital heart defects (CHD). They found that nonsyndromic CHD patients were enriched for protein-truncating variants in CHD-associated genes inherited from unaffected parents and identified three new syndromic CHD disorders caused by de novo mutations.
Jaume Bertranpetit, Partha Majumder and colleagues analyze whole-genome sequences from Andamanese individuals and compare them to sequences from mainland Indian and other geographically diverse populations. They find evidence of ancestry from an unknown extinct hominin in South Asian populations and show that distinct Andamanese characteristics derive from strong natural selection.
Joseph Gleeson and colleagues report whole-exome sequencing of a cohort of over 1,000 individuals from the Greater Middle East, characterizing common and rare variants. They find evidence of subregional diversity and historical migrations and use the GME Variome to identify disease-causing mutations.
Magnus Nordborg and colleagues report a genomic analysis of all 27 known species in the genus Arabidopsis. They find evidence for a complex speciation history that is not accurately reflected by a traditional bifurcating species tree and identify widespread shared polymorphisms between species.
Rachel Meyer and colleagues use whole-genome resequencing of 93 African rice landraces to generate a SNP map used for population analysis and a genome-wide association study for salt tolerance traits. They find 11 significant loci, some with signatures of positive selection, and evidence for a population bottleneck beginning around 15,000 years ago.
Nils Stein, Ehud Weiss, Tzion Fahima, Johannes Krause and colleagues report the genome sequences of 6,000-year-old barley grains obtained from desert caves in Israel. They compare these to whole-exome sequences of a modern barley diversity panel to explore domestication and migration patterns, finding evidence for prehistoric gene flow between wild and cultivated populations.
Victoria Hore, Jonathan Marchini and colleagues present a method for multiple-tissue gene expression studies aimed at uncovering gene networks linked to genetic variation. They apply their method to RNA sequencing data from adipose, skin and lymphoblastoid cell lines and identify several biologically relevant gene networks with a genetic basis. | https://www.nature.com/ng/volumes/48/issues/9?error=cookies_not_supported&code=6b2e98c4-c7f3-460d-8dcd-d4fa65375272 |
This assignment will define, describe and illustrate the “art movement” or influence of the time period indicated. The goal for this paper is for you to understand the characteristics from the movement or time period chosen and to find a modern example of it. You must submit either a word or pdf of the document ONLY (not rtfs or pages documents)—DO NOT copy and paste your submission into the “write submission” area.
Movements Assignment 1: Design Movements • Due 6/7/20 • (Choose 3)
Graphic Design of the Italian Renaissance
The Epoch of Typographic Originality
Arts & Crafts
Art Nouveau
Glasgow School
Vienna Secession
Project Specifications:
Each movement submission should focus on the individual movements and how they treated or affected graphic design (You may mention other areas affected by the movement – product design, architecture – but the main focus should be graphic design.) You must include the following: | https://essaycreek.com/history-graphic-design/ |
1. The Field of the Invention
The present invention relates to wireless networks, and more specifically, to using directional antennas to increase signal strength and enhance throughput in wireless networks.
2. Background and Relevant Art
Computer systems and related technology affect many aspects of society. Indeed, the computer system's ability to process information has transformed the way we live and work. Computer systems now commonly perform a host of tasks (e.g., word processing, scheduling, and database management) that prior to the advent of the computer system were performed manually. More recently, computer systems have been coupled to one another to form both wired and wireless computer networks over which the computer systems can communicate electronically to share data. As a result, many tasks performed at a computer system (e.g., voice communication, accessing electronic mail, electronic conferencing, web browsing) include electronic communication with one or more other computer systems via wired and/or wireless computer networks.
For example, a number of computer systems can be coupled to a data hub through corresponding wired connections (e.g., category 5 cable) to form a wired network (e.g., an Ethernet segment). Similarly, a number of wireless computer systems (commonly referred to as “stations”) can be coupled to a wireless access point (“AP”) through corresponding wireless connections (e.g., resulting from appropriate communication between radio transmitters and receivers) to form a wireless network (e.g., an IEEE 802.11 network). Further, a data hub and/or an AP can be connected to other data hubs, APs, or other network devices, such as routers, gateways, and switches to form more complex networks (including both wired and wireless connections).
When computer systems communicate electronically, electronic data will often pass through a protocol stack that performs operations on the electronic data (e.g., packetizing, routing, flow control). The Open System Interconnect (“OSI”) model is an example of a networking framework for implementing a protocol stack. The OSI model breaks down the operations for transferring electronic data into seven distinct “layers,” each designated to perform certain operations in the data transfer process. While protocol stacks can potentially implement each of the layers, many protocol stacks implement only selective layers for use in transferring electronic data across a network.
When data is received from a network it enters the physical layer and is passed up to higher intermediate layers and then eventually received at an application layer. The physical layer, the lowermost layer, is responsible for converting electrical impulses, light, or radio waves into a bit stream and vice versa. On the other hand, when data is transmitted from a computer system, it originates at the application layer and is passed down to intermediate lower layers and then onto a network. The application layer, the uppermost layer, is responsible for supporting applications and end-user processes, such as, for example, electronic conferencing software, electronic mail clients, web browsers, etc.
An intermediate layer incorporated by most protocol stacks is the Data Link layer. The Data Link layer decodes data packets (received from higher layers) into bit streams for use by the physical layer and encodes bit streams (received from the physical layer) into data packets for use by higher layers. A sub-layer typically included in the Data Link layer is the Media Access Control (“MAC”) layer, which implements protocols for moving data packets onto a shared channel (e.g., an Ethernet segment or an 802.11 channel).
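As a purely illustrative sketch of the layering just described (not taken from the patent), the following toy example shows application data being wrapped with a header at each lower layer on transmission and unwrapped in reverse order on reception:

```python
# Toy illustration of protocol-stack encapsulation: each layer wraps the payload
# received from the layer above, and the receive path unwraps in reverse order.
LAYERS = ["application", "transport", "network", "data-link"]  # physical layer carries the raw bits

def send(payload: str) -> str:
    """Pass application data down the stack, adding a simple header at each layer."""
    for layer in LAYERS[1:]:
        payload = f"{layer}-hdr|{payload}"
    return payload  # handed to the physical layer for conversion to a bit stream

def receive(frame: str) -> str:
    """Pass a received frame up the stack, stripping one header per layer."""
    for layer in reversed(LAYERS[1:]):
        prefix = f"{layer}-hdr|"
        assert frame.startswith(prefix), f"malformed frame at the {layer} layer"
        frame = frame[len(prefix):]
    return frame  # original application data

frame = send("hello")
print(frame)           # data-link-hdr|network-hdr|transport-hdr|hello
print(receive(frame))  # hello
```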
However, to access a medium a computer system must be able to sense the medium. In a wireless environment, sensing a wireless medium (e.g., an 802.11 channel) can be difficult, and at times impossible, depending on how a station and an access point are physically separated. Access points typically include an omni-directional antenna. Accordingly, when no physical barriers exist (e.g., walls, floors, buildings, etc.), the range of the omni-directional antenna essentially results in a spherical region around the access point. When a station is within a particular range of the access point, the omni-directional antenna enables the access point to meaningfully send data to and receive data from the station. That is, within the particular range, transmitted radio signals have sufficient signal strength such that a physical layer can convert the radio signals into a bit stream.
However, when a station is at or near the range of an omni-directional antenna and/or is separated from an omni-directional antenna by physical barriers, radio signal propagation loss (e.g., in the 2.4 GHz band or 5 GHz band) can significantly reduce the speed and reliability of data transferred between a station and an access point. When the station is outside the range of the access point or when substantial physical barriers exist, meaningful communication between a station and an access point may not be possible. For example, due to propagation loss, the data rate can be significantly reduced, essentially making communication with the omni-directional antenna impossible. Further, while an omni-directional antenna may have sufficient signal strength to detect that radio waves are being transmitted (e.g., from a station to an access point or vice versa), the signal strength may be degraded such that it is difficult, or even impossible, to determine what data is being represented by the radio waves. That is, a physical layer may not be able to generate a bit stream from the degraded radio waves. Therefore, systems, methods, and computer program products for reducing the effects of propagation loss would be advantageous.
| |
The Incidental Propagation of English Language behind the Curtains of Human Affairs: Insights on the Episodic Documentary, The Adventures of English
Language has coexisted with the historic collaborative endeavor of people in meeting human needs and in adapting to drastic social, political, economic, and religious changes. English, as a global language, has been a product of time and of such changes, and has grown so much in structure and in communicative quality. Indeed, it has not escaped the influences and threats of other languages, yet it has evolved productively embracing these influences and threats as opportunities for continuous propagation.
English, as a social means, has inadvertently been defined by social processes and conditions all throughout human history. With the words “ruthless,” “obstinate,” and “tenacious,” consistently used by Melvyn Bragg to qualify English after the subjugation of the native Celts by Germanic tribes, it might be inferred that there was social turbulence, and language became the expression of force and suffering. In the Medieval Ages, cross-cultural marriages and communications helped English gain momentum from the prevalence of French and Latin. During the 20th Century, migration and cultural diversities led to several avenues for varieties of English. American English was influenced by interstate migration of southern blacks to northern cities. Inarguably, the accumulation of foreign vocabularies into the English language stream and the enrichment of language structure equate with the active communication and participation of various cultural groups.
English has also been shaped by political backgrounds. Although French became the official language in most governmental and educational affairs after the Norman Conquest, English, the underground language, survived until the social upheaval that brought it up again to the surface of use and prestige. Moreover, English monarchs used the language in political activities and even advocated its aesthetic role in literature. Profoundly influential, England, during the age of exploration, diffused the language to the different regions of the world. With Western ideologies, some Asian countries even used English as a tool for the attainment of freedom and democracy. The cultivation and incidental spread of the English language have also been attributed to political agendas.
Fortuitously, English has gained much as profitable trades have been transacted. It has benefited from the words and expressions in different languages that merchants and consumers of diverse cultures have used. There has been not just a barter system but a significant cultural and linguistic interchange. With the Industrial Revolution that started in Great Britain, English has reached the busy ports and business centers of the world.
Not only has English become the language of British colonization and of the Industrial Revolution, it has also served as an instrument of religious revolution. With the ambition and the need to spread Christianity throughout Europe, members of religious orders painstakingly translated the Holy Scripture from Latin to English. The Scripture in English even formed the foundation of faith for some groups of people, leading them to become the guardians and protectors of the language. In the same manner, religion has also exerted a great effect on the language, as Latin terms have been added to the English vocabulary. As religion is deemed a universal institution, English, in most religions, is one of the pillars that keep the faith among people.
English is a dynamic language. It has gone through different periods in which words and expressions from different languages have been borrowed. It keeps on undergoing changes alongside the social, political, economic, and religious changes that people have to adapt to. With approximately 341 million people speaking English as a native language and a further 267 million speaking it as a second language in over 104 countries (Marwah, 2010), it is, in an honest assertion, a language that has satisfied the communication needs of most people. | https://www.acadshare.com/adventures-english-reflection-paper/
The objective of this article is to present an overview of the burden, spectrum of diseases, and risk factors for mental illness among subgroups of migrants, namely immigrants, refugees, and individuals with precarious legal status. This expert review summarizes some of the implications for primary care services in migrant receiving countries in the global North.
Methods
A broad literature review was conducted on the epidemiology of mental health disorders in migrants and refugees and on the available evidence on mental health services for this population, focusing on key issues for primary care practitioners in high-income countries.
Results
Although most migrants are resilient, migration is associated with an overrepresentation of mental disorder in specific subpopulations. There is general consensus that stress related disorders are more prevalent among refugee populations of all ages compared to the general population. Relative to refugees, migrants with precarious legal status may be at even higher risk for depression and anxiety disorders. Persistence and severity of psychiatric disorders among migrant populations can be attributed to a combination of factors, including severity of trauma exposures during the migration process. Exposure to stressors after resettlement, such as poverty and limited social support, also impact mental illness. Services for migrants are affected by restricted accessibility, and should address cultural and linguistic barriers and issues in the larger social environment that impact psychosocial functioning.
Conclusion
There is substantial burden of mental illness among some migrant populations. Primary care providers seeking to assist individuals need to be cognizant of language barriers and challenges of working with interpreters as well as sensitive to cultural and social contexts within the diagnosis and service delivery process. In addition, best practices in screening migrants and providing intervention services for mental disorders need to be sensitive to where individuals and families are in the resettlement trajectory. | https://sherpa-recherche.com/publication/mental-health-needs-and-services-for-migrants-an-overview-for-primary-care-providers/ |
As states implement application and process changes for the Patient Protection and Affordable Care Act (ACA), states look to the Food and Nutrition Service (FNS) for technical assistance on the Supplemental Nutrition Assistance Program (SNAP) application issues and policy compatibility. This memorandum provides regional offices with guidance as they work with states to ensure that online and paper SNAP applications meet federal requirements and are user-friendly, understandable and effective.
As part of the Oct. 1, 2013, roll out of open enrollment under ACA, state agencies across the country are reviewing and updating applications to conform to ACA's Medicaid-related provisions. Many states have multi-benefit applications for both SNAP benefits and other health and human services programs like Medicaid. They are now facing the challenging task of implementing changes required by ACA while complying with existing SNAP requirements.
New ACA requirements are significant, but they do not affect existing laws and regulations governing SNAP and do not change SNAP application policies. Since updates to applications can have unintended impacts on SNAP content and functionality, strict adherence to SNAP regulations and guidance is imperative.
FNS encourages the use of multi-benefit applications as they provide inherent administrative, workload, and access advantages to state agencies and SNAP clients. At the same time, faced with limited resources and time constraints, some states are temporarily de-coupling integrated systems and applications in order to stand up new applications in time for the October 1, 2013, deadline. Federal-state collaboration will be a key tool to support continued integration.
While FNS does not approve state applications, all applications should be reviewed for compliance with SNAP requirements when modifications are made; this is particularly important as states implement wide-ranging changes associated with the ACA requirements for Medicaid. The timing of these reviews is up to each regional office working with their states, but all state applications should be examined by the end of Fiscal Year 2014. These reviews can be part of state-level management evaluation review or can be conducted independently.
States must notify FNS whenever existing applications are modified and new online application systems are launched so that FNS can help quickly address any areas that do not comply with SNAP policy. If states do not adhere to SNAP rules and guidelines, action must be taken to correct instances of non-compliance.
To ensure that all FNS offices are measuring compliance using standardized methods, FNS has developed an Online Application Checklist that is enclosed with this memorandum and included in the Program Access Review Guide released in January 2013. (http://www.fns.usda.gov/snap/government/PAR Guide-1212.pdf). This checklist is also provided as an attachment to this memorandum.
In preparing to review states' paper applications, staff may find it useful to review the guidance and technical assistance materials available at SNAP's Program Improvement page at: http://www.fns.usda.gov/snap/government/program-improvement.htm. In addition, it may be helpful to consider the recent guidance provided to states regarding ACA and their Medicaid applications at: https://www.medicaid.gov/federal-policy-guidance/downloads/CIB-06-19-2013dcr.pdf.
Please contact Jessica Dziengowski at [email protected] or Elizabeth Weber at [email protected] if you have any questions regarding this important aspect of SNAP policy and operations.
Lizbeth Silbermann
Director
Program Development Division
Attachment
The contents of this guidance document do not have the force and effect of law and are not meant to bind the public in any way. This document is intended only to provide clarity to the public regarding existing requirements under the law or agency policies. | https://fns-prod.azureedge.net/snap/snap-applications-and-affordable-care-act |
The feeling of depression can be manifested in multiple ways. Learn how the experience of depression may be impacting your functioning in everyday life, and useful ways to cope with and resolve this distress. Group times: Tuesdays 3:00-4:20 pm. Group will be limited to 5-8 members and will begin once filled.
Anxiety is a common response to stress. When anxiety interferes with test results, grades, classes or relationships and other areas of functioning, it can be a problem. Learn how to recognize your anxiety and develop strategies to manage it. Group times: Tuesdays 3:00-4:20 pm. Group will be limited to 5-8 members and will begin once filled.
Contact Health, Counseling and Student Wellness (UC 440) at (859) 572-5650 if you want more information or to schedule a group brief screening appointment.
This is an eight-session group for survivors of sexual violence. Members will learn and practice skills needed to manage negative symptoms resulting from their traumatic experience(s), including skills for increasing tolerance of intense emotional and physiological responses as well as coping with environmental triggers. Group times: Wednesdays 3:00-4:15 pm. Group will be limited to 5-10 members and will begin once filled.
Please contact Health, Counseling, and Student Wellness (UC 440) by phone at 859-572-5650, by walking in to UC 440, or by email at [email protected].
Feeling stressed? Is talking about it hard? De-Stress with Art is a six-session Art Therapy group for students. Participants will engage in art activities to develop skills in managing emotions, stress reduction, and enhancing self-awareness. This group will focus on the process of expressing inner feelings into visual form, not the art product. No prior artistic experience or skill necessary, only a willingness to experiment and try art making. All art supplies provided. Group times: Wednesdays 1:00-2:30pm from 2/13/19-4/3/19 (Dates subject to change). Group will be limited to 5-8 members.
RSVP spaces are limited. Schedule a free 30 minute group screening meeting to find out more by: walking in to UC 440, calling 859-572-5650, or emailing [email protected]. | https://inside.nku.edu/hcsw/counseling/group/group-offerings.html |
This is a divisional of copending application Ser. No. 08/711,122 filed on 9 Sep. 1996.
absorber means for receiving and contacting the H2S-containing gaseous stream with a sorbing liquor comprising a nonaqueous solvent containing dissolved sulfur, and a base having sufficient strength and concentration to drive the reaction converting H2S sorbed by said liquor to a nonvolatile polysulfide which is soluble in the sorbing liquor;
regenerator means downstream of said absorber means for oxidizing said sorbing liquor containing the dissolved polysulfide to convert said polysulfide to sulfur which remains dissolved in said liquor;
sulfur conversion means downstream of said regenerator means for converting at least part of said dissolved sulfur to solid particulate sulfur; and
separating and recycling means downstream of said regenerator means for separating said solid sulfur from the liquor and returning the regenerated liquor to said absorber means for recycling.
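The absorber/regenerator loop recited above can be summarized by the following schematic reactions (a simplified sketch only; B denotes the dissolved base, x the polysulfide chain length, and the exact stoichiometry is not specified in this text):

$$\mathrm{H_2S} + \mathrm{S}_{x-1} + 2\,\mathrm{B} \;\longrightarrow\; \mathrm{S}_x^{2-} + 2\,\mathrm{BH^+} \qquad \text{(absorber)}$$

$$\mathrm{S}_x^{2-} + 2\,\mathrm{BH^+} + \tfrac{1}{2}\,\mathrm{O_2} \;\longrightarrow\; \mathrm{S}_x\ \text{(dissolved)} + \mathrm{H_2O} + 2\,\mathrm{B} \qquad \text{(regenerator)}$$

The net effect over one pass of the loop is the overall liquid redox sulfur recovery reaction:

$$\mathrm{H_2S} + \tfrac{1}{2}\,\mathrm{O_2} \;\longrightarrow\; \mathrm{S} + \mathrm{H_2O}$$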
1. Field of the Invention
This invention relates generally to processes and systems for removing hydrogen sulfide from a gaseous stream. More specifically the invention relates to a process and system for removal of hydrogen sulfide from a gaseous stream using as an oxidizing agent a nonaqueous scrubbing liquor in which are dissolved sulfur and a reaction-promoting base.
2. Description of Prior Art
The presence of hydrogen sulfide in fuel and other gas streams has long been of concern for both the users and the producers of such gas streams. In the instance, e.g., of natural gas, it may be noted that historically about 25% of the said gas produced in the United States has been sour, i.e., containing greater than 4 ppmv H2S (5.7 mg H2S/m³). Aside from the corrosive and other adverse effects that such impurities may have upon equipment and processes with which such gas streams interact, noxious emissions are commonly produced from combustion of the natural gas as a result of oxidation of the hydrogen sulfide. The resulting sulfur oxides are a major contributor to air pollution and may have detrimental impact upon humans, animals, and plant life. Increasingly stringent federal and state regulations have accordingly been promulgated in an effort to reduce or eliminate sulfurous emissions, and a concomitant interest exists in efficiently removing from natural gas streams and the like the hydrogen sulfide that comprises a significant precursor of the emissions.
Among the most widely used methodologies for removing hydrogen sulfide from gaseous streams is the so-called liquid redox sulfur recovery (LRSR) technology. In conventional LRSR processes a redox couple dissolved in a solvent (usually water) is used to scrub hydrogen sulfide from a gas stream and convert it to sulfur that is removed from the system. The redox agent is reduced by the hydrogen sulfide and is regenerated by contacting with air in a separate vessel. This technology dates back to at least the late 1950's in the United Kingdom with the introduction of a continuous process to react H2S to elemental sulfur using an aqueous solution of the sodium salts of the 2:6 and 2:7 isomers of anthraquinone disulphonic acid (ADA). The reaction rates for this original ADA process were very slow, resulting in large volumes of liquor and large reaction tanks. Later, it was learned that addition of sodium vanadate would increase the reaction rates and the Stretford process resulted. Further discussion of the latter is contained in U.S. Pat. No. 5,122,351. The Stretford process was a considerable improvement over the ADA-only process and more than 100 plants were built worldwide, many of which are still in operation.
Several limitations of Stretford and similar aqueous-based LRSR processes utilizing vanadium (e.g., Unisulf, Sulfolin) provided impetus for new technology. For some applications (often those with low levels of CO2 in the feed gas) the extent of conversion of inlet H2 S to sulfur salts (e.g., thiosulfate and sulfate) was such that large solution blowdown was required. For other applications (often those with high-CO2 in the feed gas), the absorber experienced sulfur plugging and poor removal of H2 S. Environmental concerns related to vanadium discharges, especially in the U.S., provided additional opportunity for new technology to be developed and enter the marketplace. Pertinent generally to the Sulfolin process, see Heisel, Michael, (Linde AG), "Operating Experiences with the Direct Oxidation Scrubber Using Sulfolin Liquor at Rheinbraun AG/Berrenrath," Proceedings of the 1989 GRI Liquid Redox Sulfur Recovery Conference, GRI-89/0206, Austin, Tex., May 7-9, 1989, pp 146-164. Pertinent to the Unisulf process, see Gowdy, Hugh W. and M. R. Anderson (UNOCAL Science and Technology Division), "The Commercialization of the Unisulf Process," Proceedings of the 1986 Stretford Users' Conference, GRI 86/0256, Austin, Tex., May 5-6, 1986, pp 104-120.
Over the past decade the commercial marketplace has been dominated by aqueous-based technology in which aqueous polyvalent metal chelates are used as the redox solution, with iron being the most common metal used. In U.S. Pat. No. 4,622,212, McManus et al. provide a summary of the technology area, and this patent along with its cited U.S. and foreign patent documents are incorporated herein by reference.
More than 100 liquid redox plants of the aqueous chelated polyvalent metal type have been built over the last ten years, and numerous patents have issued pertaining to enhancements for this basic approach to LRSR technology. However, there are several fundamental disadvantages of the aqueous-based, polyvalent metal chelate approach to LRSR technology that have generally limited the success of LRSR technology. One of the more serious of these is that the said aqueous-based, polyvalent metal chelate approach tends to convert H2 S to solid sulfur in the absorber, contributing to foaming and plugging in the absorber and downstream equipment. This condition is especially disadvantageous in situations where the feed gas is to be treated at high pressure (e.g., greater than 600 psi). Due to the very small solubility of elemental sulfur in water, some solid elemental sulfur exists in all liquid streams, even the stream of regenerated scrubbing liquor returning to the absorber. As a result, sulfur deposition can occur throughout the system, which results in poor operability and reliability. In addition, having solid sulfur in all liquid streams means that liquid pumps must operate on aqueous slurries of water and sulfur. This condition leads to excessive pump wear and maintenance, especially for high-pressure plants.
Furthermore, the byproduct sulfur salts of sulfate and thiosulfate formed in the aqueous-based, polyvalent metal chelate approach to LRSR technology are soluble in these aqueous systems and cannot be easily removed, requiring expensive purges of valuable solution components and/or undesirable contamination of the product sulfur or sulfur cake.
Additionally, sulfur particles formed in these aqueous systems tend to be of small unit particle size (often less than 5 microns) and are difficult to separate by gravity, filtration, or other means because of their small size and their presence in an aqueous mixture. Surfactants and other additives must also be introduced into the process and maintained in order to induce the sulfur particles in this aqueous environment to float or sink, depending on the process, and to reduce foaming and plugging. Additionally, iron or other metal ions must be added to the solution and maintained there to react with the inlet H2 S; and expensive chelants must be added to the solution and maintained at levels sufficient to keep the metal ion(s) in solution. These chelants are susceptible to attack by free radicals and other species, resulting in degradation rates that are often unacceptably high.
A still further difficulty arising in the aqueous-based, polyvalent metal chelate LRSR technology is that sulfur cake produced by filtering or centrifuging the sulfur particles from the aqueous stream will contain significant quantities of moisture (30 to 60 wt. %) and will be contaminated with solution components, even after washing.
As a result of these problems, the current state of the art of liquid redox sulfur recovery technology is that for low-pressure gas streams (e.g., less than 600 psi) the current technologies can be made to operate, but costs are often higher than desired and operability and reliability are often less than desired. For high-pressure applications (e.g., greater than 600 psi), the operability and reliability of these processes are not adequate to be considered practical.
It has been known for some time that nonaqueous systems may produce sulfur with superior handling properties. However, limitations associated with reaction rates and product conversions have heretofore prevented implementation of a practical nonaqueous approach to LRSR technology. Because of the desirability of forming and handling sulfur in a nonaqueous system, several nonaqueous processes have indeed been proposed to date. For example, the UCB Sulfur-Recovery Process (UCB) proposes a nonaqueous system wherein hydrogen sulfide gas is absorbed in a solvent having a good solvent power for H2 S and a much greater solvent power for sulfur dioxide (SO2), for example, a polyglycol ether. This process is essentially a liquid-phase version of the (gas-phase) Claus reaction. The initial reaction is in the liquid phase and is between H2 S and SO2. One of the key control issues is to maintain the correct ratio of SO2 to H2 S in the reaction zone, as is the case with the Claus process. Water is soluble in the solvents proposed for this system. The UCB process requires the use of equipment to melt the sulfur, a furnace and boiler to react the sulfur with oxygen to form SO2, an SO2 scrubber to dissolve that SO2 in the solution for recycle to the reactor/crystallizer, a solvent stripper to recover lean solvent, a sour water stripper, and other components. See Lynn, Scott, et al. (University of California, Berkeley), "UCB Sulfur-Recovery Process," Proceedings of the 1991 GRI Sulfur Recovery Conference, GRI 91/0188, Austin, Tex., May 5-7, 1991, pp 169-180; and Lynn, U.S. Pat. No. 4,976,935.
Another proposed nonaqueous approach is the HYSULF process of Marathon Oil Company. This process utilizes the solvent n-methyl-2-pyrrolidinone (NMP) to react with H2 S to form a quaternary ion complex, which in turn reacts with an anthraquinone to form sulfur and anthrahydroquinone. The anthrahydroquinone is then passed through a catalytic reactor to form anthraquinone for recycle and byproduct hydrogen gas. Further details of this process appear in Plummer, Mark A. (Marathon Oil Company), "Hydrogen Sulfide Decomposition to Hydrogen and Sulfur," Proceedings of the 1989 GRI Liquid Redox Sulfur Recovery Conference. GRI 89/0206, Austin, Tex., May 7-9, 1989, pp 344-361; and in Plummer, U.S. Pat. Nos. 5,334,363, and 5,180,572.
A process which utilizes molten sulfur to react with H2 S has also been described by Peter Clark of Alberta Sulfur Research Ltd. (termed the "ASR" process here). In that paper a system for removing H2 S from natural gas containing from 10 to 1000 ppm H2 S is described wherein the gas stream is sparged into a vessel containing molten sulfur at temperatures between 130° C. and 150° C. While this process does involve an initial reaction between sulfur and H2 S in the absorber section, the process conditions, operations and chemistry of the ASR process are very different from those of the invention described here, which we have termed the "CrystaSulf™" process. The ASR process operates at around 140° C., versus 50° C. to 70° C. for CrystaSulf, so sulfur in the circulating streams is in the molten state and has the molecular structure, physical properties, chemical properties, and performance characteristics of that state, as well as the consequent reaction pathways and reaction rates. Among other things, it may be noted that the H2 S capacity of ASR's molten sulfur is likely to be equilibrium limited, and is evidently much lower than that of CrystaSulf. The ASR literature indicates a 1000 ppm upper limit for inlet H2 S concentration. Furthermore, in the ASR process the molten sulfur circulating fluid will solidify if it cools much below the target operating range of 130° C. to 150° C., causing major operational problems. Further details of the ASR process can be found in Clark, P. D., E. G. Fitzpatrick, and K. L. Lesage, "The H2 S/H2 Sx /Liquid Sulfur System: Application to Sulfur Degassing and Removing Low Levels of H2 S from Sour Gas Streams," presented at the 1995 Spring National Meeting of the American Institute of Chemical Engineers, Sulfur Removal from Gas Streams, Session 54, Mar. 19-23, 1995.
Now in accordance with the present invention, the foregoing inadequacies of the prior art LRSR technology are overcome by use of a nonaqueous solvent approach, which yields surprising and unexpected benefits. Pursuant to the invention, a sour gas stream containing H2 S is contacted with a nonaqueous sorbing liquor which comprises an organic solvent for elemental sulfur, dissolved elemental sulfur, an organic base which drives the reaction converting H2 S sorbed by the liquor to a nonvolatile polysulfide which is soluble in the sorbing liquor, and an organic solubilizing agent which prevents the formation of polysulfide oil, which can tend to separate as a distinct viscous liquid layer if allowed to form. The sorbing liquor is preferably water insoluble, as this offers advantages where water-soluble salts are desired to be removed. Hydrogen sulfide (H2 S) gas is sorbed into this sorbing liquor, where it reacts with the dissolved sulfur in the presence of the base to form polysulfide molecules. This reaction decreases the equilibrium vapor pressure of H2 S over the solution, thus providing more efficient scrubbing than a physical solvent. The liquor is then sent to a reactor where sufficient residence time is provided to allow the polysulfide-forming reactions to reach the desired degree of completion, i.e., resulting in a nonvolatile polysulfide which is soluble in the sorbing liquor. From the reactor, the liquor flows to a regenerator where the solution is oxidized (e.g., by contact with air), forming dissolved elemental sulfur and water (which, being insoluble, is rejected from the solution either as an insoluble liquid layer or as water vapor exiting the overhead of the regenerator or absorber). The temperature of the liquor, which up to this point is sufficient to maintain the sulfur in solution, is then lowered, forming sulfur crystals, which are easily removed by gravity settling, filtration, centrifugation, or other standard removal methods. Enough sulfur remains dissolved in the liquor following separation of the sulfur crystals that when this solution is reheated and returned to the absorber for recycling in the process, a sufficient amount of sulfur is present to react with the inlet H2 S gas.
The process and system for removal of hydrogen sulfide from a gaseous stream in accordance with this invention thus utilizes a nonaqueous sorbent liquor comprising a solvent having a high solubility for elemental sulfur, maintained at a temperature sufficient that solid sulfur formation does not occur either in the hydrogen sulfide absorber or in the air-sparged regenerator of the system utilized for carrying out the process. In accordance with the invention, the solvent generally can have a solubility for sulfur in the range of from about 0.05 to 2.5, and in some instances as high as 3.0, g-moles of sulfur per liter of solution. The temperature of the nonaqueous solvent material is preferably in the range of about 15° C. to 70° C. Sulfur formation is obtained, when desired, by cooling the liquor proceeding from the air-sparged regenerator. This can, for example, be effected at a sulfur recovery station by cooling means present at the station. The solvent is thereby cooled to a sufficiently low temperature to crystallize enough solid sulfur to balance the amount of hydrogen sulfide absorbed in the absorber. The solubility of elemental sulfur increases with increasing temperature in many organic solvents. The rate of change of solubility with temperature is similar for many solvents, but the absolute solubility of sulfur varies greatly from solvent to solvent. The temperature change necessary to operate the process will vary primarily with the composition of the sorbent, the flow rate of sorbent, and the operating characteristics of the recovery station. For most applications, a temperature difference of 5° C. to 20° C. is appropriate between the temperature of the solvent material at the absorber/reactor and the temperature to which the said solvent is cooled at the sulfur recovery station; but the temperature difference can in some instances be as little as 3° C. or as much as 50° C. The nonaqueous solvent in accordance with one preferred embodiment of this invention comprises a solvent selected from the group consisting of 1,2,3,4-tetrahydronaphthalene, N,N-dimethylaniline, diphenyl ether, dibenzyl ether, terphenyls, diphenylethanes, alkylated polycyclic aromatics, and mixtures thereof.
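By way of illustration, the following sketch estimates how much sulfur would crystallize per liter of liquor for a given cooling swing, assuming a simple linear decrease of sulfur solubility with temperature. The slope and starting concentration used below are hypothetical values chosen only for illustration and are not data from this disclosure; actual solubilities depend strongly on the chosen solvent.

```python
# Sketch: estimate sulfur crystallized per liter of liquor for a given cooling swing.
# The linear solubility-temperature slope is a hypothetical illustration, not data
# from the disclosure; real solvents must be characterized individually.

M_S = 32.06  # g per g-mole of atomic sulfur

def sulfur_crystallized(sol_hot_M, slope_M_per_C, t_hot_C, t_cold_C):
    """Return grams of sulfur crystallized per liter when the liquor is cooled.

    sol_hot_M      -- dissolved sulfur at absorber/regenerator temperature (g-mol/L)
    slope_M_per_C  -- assumed drop in solubility per deg C of cooling (g-mol/L/deg C)
    """
    delta_T = t_hot_C - t_cold_C
    sol_cold_M = max(sol_hot_M - slope_M_per_C * delta_T, 0.0)
    return (sol_hot_M - sol_cold_M) * M_S

# Example: liquor leaves the regenerator at 49 C holding 0.8 M dissolved sulfur and
# is cooled to 27 C; an assumed slope of 0.01 M per deg C gives the yield per liter.
print(sulfur_crystallized(0.8, 0.01, 49, 27))  # ~7.1 g of sulfur per liter of liquor
```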
In order to obtain a measurable conversion of sulfur and hydrogen sulfide to polysulfides, the base added to the solvent must be sufficiently strong and have sufficient concentration to drive the reaction of sulfur and hydrogen sulfide to form polysulfides. Most primary, secondary and tertiary amines are suitable bases for use in accordance with the process of this invention. More particularly, amines which comprise nitrogen connected to alkane groups, alkanol groups, benzyl groups, or hydrogen (but not to phenyl) are suitable for use in the process of this invention. It should be noted that while the solvent utilized in the process of this invention requires the addition of a base to promote the reaction of sulfur and hydrogen sulfide to form polysulfides, the base and the solvent may be the same compound.
In accordance with one preferred embodiment of this invention, the base may be a tertiary amine. We have found that polysulfide compounds formed in the presence of tertiary amines are much more easily converted to sulfur by air during the regeneration step than those formed from primary amines or secondary amines. In accordance with a particularly preferred embodiment of this invention, the base is selected from the group consisting of 2-(dibutylamino) ethanol, N-methyldicyclohexylamine, N-methyldiethanolamine, tributylamine, dodecyldimethylamine, tetradecyldimethylamine, hexadecyldimethylamine, diphenylguanidine, alkylaryl polyether alcohols, and mixtures thereof. The base is present at concentrations of about 0.01M to 2.0M. Of the bases cited, 2-(dibutylamino) ethanol and N-methyldicyclohexylamine are most preferred, and are preferably present at concentrations of about 0.5 to 1.0M.
The nonaqueous sorbing liquor, in addition to including a solvent having a high solubility for sulfur, and a base, comprises an agent suitable for maintaining the solubility of polysulfide intermediates which may otherwise separate when they are formed during operation of the process. Such solubilizing agent is preferably selected from the group consisting of benzyl alcohol, benzhydrol, 3-phenyl-1-propanol, tri(ethylene glycol), and mixtures thereof.
The major chemical reactions for the process of this invention are summarized as follows:
H2 S scrubber: H2 S(g) + 4 S(l) + Base(l) ➝ HBaseHS5 (l)
Regenerator: HBaseHS5 (l) + 1/2 O2 (g) ➝ 5 S(l) + H2 O(g) + Base(l)
Crystallizer: S(l) ➝ S(s)
Overall: H2 S(g) + 1/2 O2 (g) ➝ S(s) + H2 O(g)
In the foregoing equations, the dissolved species HBaseHS5 (l) is thought to be a salt of the protonated amine and the protonated polysulfide. It is to be understood that the nominal S:H2 S stoichiometry and the predominant polysulfide chain length can vary with the actual solvent and base employed and with the physical operating conditions, and that the actual elemental sulfur species is predominantly cyclic S8.
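As a worked illustration of the overall reaction stoichiometry, the sketch below computes the minimum oxygen and air demand per ton of product sulfur. It is straight stoichiometry under ideal-gas assumptions and does not reflect the excess air an actual regenerator would use, which the disclosure does not quantify here.

```python
# Sketch: minimum oxygen/air demand implied by the overall reaction
# H2S(g) + 1/2 O2(g) -> S(s) + H2O(g). Straight stoichiometry only; a real
# regenerator operates with excess air.

MW_S = 32.06          # g per g-mole of sulfur
O2_FRACTION_AIR = 0.21  # mole fraction of O2 in air
MOLAR_VOLUME = 0.022414  # m3 per g-mole at 0 C, 1 atm (ideal gas)

def air_per_ton_sulfur(excess_air_factor=1.0):
    mol_S = 1_000_000 / MW_S                    # g-moles of S in one metric ton
    mol_O2 = 0.5 * mol_S * excess_air_factor    # 1/2 mole O2 per mole of S produced
    mol_air = mol_O2 / O2_FRACTION_AIR
    return mol_air * MOLAR_VOLUME               # Nm3 of air

print(round(air_per_ton_sulfur(), 1))  # ~1664.6 Nm3 of air per ton of sulfur (no excess)
```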
By use of the invention all of the aforementioned difficulties inherent in the prior art are overcome. Solid sulfur exists only at the point where the temperature is intentionally lowered and the product sulfur crystals are produced (solid sulfur is not present elsewhere, as the temperature is elevated, which keeps the sulfur dissolved in solution), thus avoiding plugging and the like, and providing a very operable and reliable process. Where the liquor is water insoluble, byproduct sulfur salts in this process can be easily separated by water washing the solution, since they will migrate to the water and the water is insoluble in the organic solvent. The sulfur crystals formed by crystallization in the nonaqueous environment are large (50 microns or more), do not stick, and settle easily. The solution contains no metal ions (unless added for enhanced operation), chelants, surfactants or other additives, thus eliminating the difficulties generated by the prior art use of metal chelants. And finally, the sulfur crystals produced by this process are yielded at a point and in a manner such that they are not in contact with water or other contaminants. Any residual traces of organic solvent on the sulfur crystals are easily removed with a solvent wash loop, thus eliminating the problem of wet, contaminated sulfur. In laboratory testing, sulfur formed in bench runs where tetralin was the solvent was vacuum filtered to give 10% solvent on sulfur, and then washed with three volumes of methanol to produce a tetralin-free sulfur product with 0.5 weight percent methanol, which can be easily removed and recovered to yield a pure sulfur product.
In the present invention the initial reaction is between dissolved sulfur and H2 S, not (as in much of the prior art) between H2 S and a metal ion. Some of the reactions are catalyzed by the presence of an organic base and occur in the presence of a polysulfide oil solubilizer. Removing the solubilizer can cause a polysulfide oil layer to form. Neither of these constraints exists in an aqueous system. The reactions are carried out in a nonaqueous environment; most of the reacting species would not exist in an acceptable form/configuration in an aqueous environment. Aeration of aqueous polysulfide streams usually produces predominantly sulfur oxyanion salts, not elemental sulfur. In addition, the solution components comprising the sorbing liquor of the present process are insoluble in water. Furthermore, sulfur is formed initially in solution in the dissolved state and becomes a solid only after the solution solubility for sulfur is decreased by lowering the temperature.
In the drawing appended hereto:
The FIGURE is a schematic block diagram of a system operating in accordance with the present invention.
In the Figure, a schematic block diagram appears of a system 20 which may be used in practice of the present invention. In a typical application of the invention, a gaseous stream 22 to be treated by the process and apparatus of the invention is a natural or other fuel gas which typically includes 0.1 volume % to 50 volume % of hydrogen sulfide, which component for environmental and other reasons is desired to be minimized in or substantially removed from the gas stream. A more common parlance in the art is to measure the degree of contamination of a gas stream sought to be treated in terms of its daily production of sulfur. When viewed in this way, the streams to be treated by the invention will generally be those that produce 0.1 to 30 tons/day of sulfur. In a representative case where input stream 22 comprises a natural gas, it is provided to system 20 at a pressure of around 1,000 p.s.i. The stream 22 is passed into and through an absorber 11 where the hydrogen sulfide is effectively removed, so that the output stream 24 is substantially free of hydrogen sulfide; typically, concentrations of hydrogen sulfide in output stream 24 will be less than 4 ppm by volume.
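To relate the tons-per-day measure to a gas flow and H2 S content, the following sketch converts an assumed gas flow of 10 MMSCFD (a figure chosen purely for illustration and not taken from the disclosure) and an H2 S concentration into a daily sulfur production.

```python
# Sketch: relate the sulfur-per-day measure used in the art to a gas flow and its
# H2S content. The 10 MMSCFD example flow is an assumed illustration.

def sulfur_tons_per_day(gas_mmscfd, h2s_ppmv):
    SCF_PER_LBMOL = 379.5        # standard cubic feet per lb-mole (60 F, 1 atm)
    LB_S_PER_LBMOL = 32.06       # one atom of S recovered per molecule of H2S
    lbmol_gas_per_day = gas_mmscfd * 1e6 / SCF_PER_LBMOL
    lbmol_h2s_per_day = lbmol_gas_per_day * h2s_ppmv / 1e6
    lb_s_per_day = lbmol_h2s_per_day * LB_S_PER_LBMOL
    return lb_s_per_day / 2000.0  # short tons per day

# 10 MMSCFD of gas carrying 1 vol% (10,000 ppmv) H2S:
print(round(sulfur_tons_per_day(10, 10_000), 2))  # ~4.22 tons/day of sulfur
```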
Absorber 11 is a conventional liquid-gas contact apparatus at which the input gas stream 22 to be purified is passed in counter-current or other relation to a liquid sorbent liquor 26. Absorber 11 may for example take the form of a tower which is packed with porous bodies so as to provide a high surface area for the gas-liquid contact. Other absorber apparatus as are known in the art can similarly be utilized. Pursuant to the invention, the sorbent liquor 26 comprises a preferably nonaqueous solvent having a high solubility for sulfur, typically in the range of from about 0.05 to 2.5 g-moles of sulfur per liter of solution. Sorbent liquor 26 as provided to absorber 11 includes sulfur dissolved in the nonaqueous solvent in the range of from about 0.05 to 2.5 g-moles of sulfur per liter of solution, together with a base (such as the aforementioned tertiary amines) having sufficient strength and sufficient concentration in respect to that of the hydrogen sulfide and sulfur to drive a reaction between the sulfur and hydrogen sulfide which results in formation of one or more nonvolatile polysulfides which are soluble in the solvent. In order to provide sufficient residence time for the reactions forming the polysulfide, a reactor vessel 15 is preferably provided downstream of the absorber. This vessel can also be physically present in a delay section at the base of the absorber tower. The reactor vessel can be of conventional construction such as a plug flow reactor. Total residence time for the reaction, whether carried out in the absorber alone, in the absorber and the reactor, or in the reactor alone, can be in the range of 5 to 30 minutes, with 15 minutes or so being typical. The polysulfide remains in solution in the solvent, and the spent sorbing liquor including the dissolved polysulfide is conveyed via line 13 to a regenerator 10.
Since it is possible for certain polysulfide intermediates to separate as their concentration increases during practice of the invention (e.g., an amine-polysulfide "red oil" where the aforementioned base is a tertiary amine), a polysulfide solubilizing agent is preferably also present in sorbing liquor 26. Benzyl alcohol is a typical such solubilizing agent; however other agents such as benzhydrol, glycol, and mixtures of these several agents are suitable; and in addition the solubilizing function can be accomplished in some instances by one of the other components of the sorbent, such as the nonaqueous solvent or the base.
It is to be appreciated that the spent sorbing liquor provided to regenerator 10 is entirely provided as a liquid phase. Substantially no solid sulfur particles are present that could cause blockages or other difficulties either at the absorber or in other portions of the system preceding regenerator 10. At regenerator 10, the sorbing liquor, at a temperature in the range of 15° C. to 70° C., is oxidized by contacting with an oxygenating gas, as for example by contacting with a counter-current stream of air, or by other means. Typically, for example, the sorbing liquor can be contacted with an ascending, upwardly sparged air stream from supply line 9, which air is at a temperature of 15° C. to 70° C. Residence time in the regenerator is typically on the order of 15 to 45 minutes, and results (in the presence of the aforementioned base) in the dissolved polysulfide being oxidized into elemental sulfur. One unexpected aspect of the invention is indeed that more than 85% conversion of the polysulfide to elemental sulfur is achieved with the surprisingly short residence times indicated.
Because of the high sulfur solubilizing characteristics of the solvent, and of the temperature of the solvent at regenerator 10, substantially no precipitation of the sulfur occurs at the regenerator, thereby continuing to avoid clogging and similar problems as often occur where slurries are developed. The sorbing liquor is thereupon discharged from the regenerator and proceeds through a line 25 to a sulfur recovery station 14. Air and water vapor are discharged from regenerator 10 at vent 27. This vent stream will likely be of acceptable environmental quality, but can be catalytically combusted if it contains large amounts of benzene or other volatile organic compound contaminant sorbed from the inlet gas.
At or just prior to recovery station 14, the sorbing liquor is cooled to a sufficiently low temperature to enable solid sulfur to be precipitated. The sorbing liquor discharged from regenerator 10 will typically have a temperature between 15° and 70° C. This temperature is reduced as the sorbing liquor proceeds through line 25 but does not reach a temperature at which sulfur precipitation occurs until it approaches or reaches station 14. In any event, station 14 may comprise a cooling means, such as refrigeration or heat exchange, with the objective of reducing the temperature of the sorbent to that needed to precipitate enough sulfur to balance the sulfur being added to the sorbent by the hydrogen sulfide. The precipitated sulfur, as it is formed from a nonaqueous solvent, generally has a larger crystal size, a higher purity, and better handling characteristics than sulfur precipitated from aqueous solution. The precipitated sulfur is separated from the sorbent by separating means which form part of recovery station 14 or which can be immediately downstream of station 14. Separation can be accomplished by filtration, and/or settling, and/or centrifugation, and the now regenerated sorbent is recycled to the absorber 11 for reuse in the cycle.
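As a rough illustration of this sulfur balance, the sketch below estimates the liquor circulation rate required so that the sulfur crystallized per pass matches the H2 S absorbed. Both the H2 S pickup rate and the per-pass solubility swing used here are assumed values, not figures from the disclosure.

```python
# Sketch: circulation rate needed so that the sulfur crystallized at the recovery
# station balances the H2S picked up in the absorber. The H2S rate and the per-pass
# solubility swing are assumed illustrative values.

def liquor_rate_L_per_min(h2s_kg_per_h, delta_solubility_mol_per_L):
    MW_H2S = 34.08
    mol_h2s_per_min = h2s_kg_per_h * 1000 / MW_H2S / 60  # one mole of S per mole of H2S
    return mol_h2s_per_min / delta_solubility_mol_per_L

# 50 kg/h of absorbed H2S and a 0.2 mol/L drop in sulfur solubility per pass:
print(round(liquor_rate_L_per_min(50, 0.2), 1))  # ~122 L/min of circulating liquor
```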
The recovered sulfur at station 14 can be purified at a sulfur purification station 18. Residual traces of organic solvent on the sulfur crystals are removed with a solvent wash loop. Methanol can be used for such purpose, and can be recovered, distilled off, and recycled in the loop. Pumps 12 and 17 are shown positioned in the system 20 to enable circulation of the sorbent in the manner shown; these and/or other pumps can be otherwise situated within the system to sustain the desired circulation. A heating station 16 can be provided between recovery station 14 and absorber 11 to bring the sorbent back to a temperature appropriate for dissolution of the sulfur that remains with the sorbent as it is returned to absorber 11. Supplemental heating means can also be provided at other points in the system to assure that the temperature remains above the sulfur precipitation point, i.e., until the sorbing liquor reaches the point in its circulation where such precipitation is desired. A byproduct sulfur salts removal step may also be employed, as shown for example at station 19. If the sorbing liquor is insoluble in water, then a water or aqueous alkali wash, followed by disposal of the aqueous phase or by removal of the salts from the aqueous phase by crystallization or other means, can be used for this purpose.
The invention is further illustrated by the following Examples, which however are to be considered as exemplary, and not delimitative of the invention, which is otherwise set forth:
EXAMPLE 1

In this Example a system similar to that shown in the Figure was utilized, except that no cooling of the liquid stream from regenerator 10 was used, and no reaction vessel was used between the H2 S absorber and the regenerator. The objective was not to crystallize the sulfur but rather merely to demonstrate the effectiveness of the basic reactions used in the process. Accordingly, an H2 S-containing gaseous stream was contacted in the absorber 11 with a nonaqueous solvent material comprising 65% (v/v) tetralin (1,2,3,4-tetrahydronaphthalene), which has a high solubility for sulfur and a high boiling point, 15% (v/v) of a base, 2-(dibutylamino) ethanol, and 20% (v/v) benzyl alcohol. The benzyl alcohol, which also has a high boiling point, eliminates the formation of a heavy red "oil" which may be an amine-polysulfide "salt." The H2 S absorber was a 1.0 inch diameter column fitted with a 13-inch tall bed of 5 mm Raschig rings. The bed was wetted with sorbent from the top. H2 S-containing gas entered from the bottom. The oxidizer was a bubble column of 24-inch working height holding 840 mL of fluid. Using 1.0 L of this solvent formulation at a liquid flow rate of 20 cc/minute, the system was operated continuously for eight hours while absorbing H2 S from a gas containing 18% H2 S (balance nitrogen) flowing at 100 cc/minute. The outlet concentration of H2 S from the absorber decreased during the run from a high of 65 ppm to a steady value of 21±2 ppm during the last three hours of the run. During this time, the sulfur concentration increased from an initial value of 0.30M up to 0.57M and no sulfur precipitated. The temperature of the system was 24°±2° C. during the run. Air was passed into the oxidizer at 1.0 L/minute. The total alkalinity of the system changed very little, indicating that the amine base was regenerated by aeration in the air-sparged regenerator 10. A secondary release of 300±100 ppm H2 S from the regenerator was noted, but the overall H2 S removal was still better than 98%. The hydrogen sulfide to sulfur conversion efficiency was 73% based on an electrochemical analysis for sulfur in the sorbent solution. As noted previously, no attempt was made to crystallize the sulfur during this run.
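As a rough consistency check on this Example (not part of the original disclosure), the sketch below compares the H2 S fed over the eight-hour run with the measured rise in dissolved sulfur, assuming ideal-gas behavior at the run temperature and neglecting the small H2 S slip from the absorber and regenerator.

```python
# Sketch: rough consistency check on Example 1, comparing the H2S fed over the run
# with the measured rise in dissolved sulfur. Ideal-gas behavior at ~24 C and ~1 atm
# is assumed, and the small H2S slip is ignored.

R, T, P = 0.082057, 297.0, 1.0   # L*atm/(mol*K), ~24 C, ~1 atm

gas_cc_min, h2s_frac, run_min = 100.0, 0.18, 8 * 60
h2s_L = gas_cc_min * h2s_frac * run_min / 1000.0   # liters of H2S fed over the run
h2s_mol = P * h2s_L / (R * T)

sulfur_gain_mol = (0.57 - 0.30) * 1.0              # 1.0 L of sorbent in the system

print(round(h2s_mol, 3), round(sulfur_gain_mol, 3),
      f"{100 * sulfur_gain_mol / h2s_mol:.0f}% apparent conversion")
# ~0.35 mol H2S fed, 0.27 mol sulfur gained -> roughly 76%, consistent with the 73%
# conversion reported from the electrochemical analysis.
```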
EXAMPLE 2

This run was similar to the one described in Example 1 except that a cylindrical vessel having a volume of 200 mL was inserted between the H2 S absorber and the air regeneration vessel. The system was charged with 1.2 L of a sorbent having the same chemical composition as in Example 1. The system was operated at ambient temperature (20° C.) and a liquid flow rate of 20 mL/minute. The sour gas stream characteristics were the same as in Example 1. The air flow rate to the oxidizer was lowered to 0.4 L/minute. The increased residence time (10 minutes) for reaction in the 200 mL reaction vessel produced a significant decrease in the amount of H2 S stripped from the air regenerator even at the reduced air rate. The average H2 S emitted from the air regenerator was 75 ppm as compared to 300 ppm H2 S in Example 1, producing a decrease of about a factor of 10 in the total amount of H2 S lost from the regenerator. The H2 S concentration leaving the H2 S absorber was also lowered to an average of about 18 ppm. Thus, the overall H2 S removal efficiency was 99.8% in this case.
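A back-of-the-envelope check (not part of the original disclosure) of the 99.8% overall removal figure, counting both the absorber slip and the H2 S stripped from the air regenerator at the stated flows, is sketched below; ideal mixing and constant volumetric flows are assumed.

```python
# Sketch: check the 99.8% overall removal quoted for Example 2 from the stated
# absorber slip and regenerator slip. Ideal mixing and constant flows assumed.

h2s_in_cc_min = 100 * 0.18                  # H2S fed with the sour gas (cc/min)
treated_gas_cc_min = 100 - h2s_in_cc_min    # gas leaving the absorber (cc/min)
absorber_slip = 18e-6 * treated_gas_cc_min  # 18 ppm average in the treated gas
regen_slip = 75e-6 * 400                    # 75 ppm average in 0.4 L/min of air

removal = 1 - (absorber_slip + regen_slip) / h2s_in_cc_min
print(f"{100 * removal:.1f}% overall H2S removal")  # ~99.8%
```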
On continued use of this solution, a point was reached where solid sulfur began to precipitate in the solution leaving the air-sparged regenerator as the sulfur concentration reached approximately 0.8M. However, simply warming the solution from 20° C. to approximately 40° C. by directing the output of a heat gun on the regeneration vessel redissolved the sulfur and allowed the run to continue.
EXAMPLE 3

Thus, it is apparent that by lowering the temperature of the solution circulating through the system, the sulfur can be crystallized. This was done initially in a "batch" mode by placing solution drained from the continuous-running apparatus in a refrigerator at 4° C. A batch of very large (up to 0.1 inches long) yellow crystals was obtained. The yellow crystals were found to be sulfur by melting and by x-ray diffraction. The filtrate from this operation was then used as a hydrogen sulfide sorbent for at least seven hours with no further formation of solid sulfur. Subsequent experiments demonstrated hydrogen sulfide to sulfur conversion efficiencies of at least 90% based on comparing weight changes in the hydrogen sulfide cylinder and the weight of sulfur produced from solution on cooling.
EXAMPLE 4

This example describes operation of the process with continuous crystallization and removal of sulfur produced from sorption and oxidation of H2 S. A water-jacketed cylindrical vessel was inserted downstream of the air regenerator to act as a sulfur crystallizer (i.e., for sulfur recovery). Tap water was passed through the outer jacket to lower the temperature of the circulating liquid exiting the regenerator from 49° C. to 27° C. Another vessel was inserted downstream of the crystallizer to allow settling and separation of the sulfur crystals from the slurry exiting the crystallizer. The crystallizer was operated at a liquid volume of 650 mL and the separator was operated at a liquid volume of 500 mL. The liquid flow rate was 20 mL/minute. The 200 mL cylindrical reaction vessel was replaced with a 15-foot long tube with a volume of 150 mL. The H2 S concentration was 15.7%, and the total sour gas flow was again 100 mL/minute. The air flow rate to the regenerator was 600 mL/minute.
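For orientation, the residence times implied by the stated vessel volumes and the 20 mL/minute liquid rate can be computed directly; this arithmetic (plug flow assumed) is illustrative and not part of the original disclosure.

```python
# Sketch: residence times implied by the vessel volumes and the 20 mL/min liquid
# rate quoted for this run (straight volume / flow; plug flow assumed).

flow_mL_min = 20.0
for name, vol_mL in [("reaction tube", 150), ("crystallizer", 650), ("separator", 500)]:
    print(f"{name}: {vol_mL / flow_mL_min:.1f} min")
# reaction tube: 7.5 min, crystallizer: 32.5 min, separator: 25.0 min
```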
The sorbent consisted of 60% (v/v) of Therminol® 59 solvent (a mixture of alkyl diarylethanes supplied by Monsanto Company), 15% (v/v) of Polycat® 12 (consisting primarily of N-methyldicyclohexylamine, supplied by Air Products and Chemicals, Inc.), and 25% (v/v) of benzyl alcohol, with an average elemental sulfur concentration during the run of 0.51M.
The process was operated for 28.8 hours with an average outlet concentration of H2 S from the absorber of 7.8 ppm (99.98% removal) and an average H2 S concentration out of the regenerator of 91 ppm. There was no evidence of a decrease in H2 S absorption efficiency during the run. Based on extraction and analysis of the circulating sorbent, the average total sulfur converted to sulfate was 11.7%, and the average total sulfur converted to thiosulfate was 4.1%. The physical handling and settling properties of the sulfur formed throughout the run were excellent. The crystal size of the product sulfur was greater than 50 microns as measured by scanning electron microscopy.
It is apparent from the foregoing that the process and system of the present invention overcomes the sulfur handling problems of the prior art aqueous liquid redox sulfur recovery processes. At the same time, because the reactants, sulfur and base, are highly soluble in the circulating solution, the process provides a large capacity for hydrogen sulfide absorption, thus permitting low circulation rates and consequently small equipment sizes for both the hydrogen sulfide absorber and the solution regenerator. Low circulation rates are also important in reducing the pumping energy needed to operate a hydrogen sulfide absorber at high pressures, as, for example, for direct treatment of high-pressure natural gas. The efficiencies for simple air regeneration are unexpectedly high and the rates of the air oxidation reaction are unexpectedly fast. While the present invention has been set forth in terms of specific embodiments thereof, it will be evident in the light of the present disclosure that numerous variations upon the invention are now enabled to those skilled in the art, which variations will yet reside within the present teachings. Accordingly, the invention is to be broadly construed, and limited only by the scope and spirit of the claims now appended hereto. | http://www.freepatentsonline.com/5738834.html
The universe is a very complex system to understand and study, owing both to its nature and to the limitations of our own existence. Even so, efforts to understand it continue to progress, and today we have enough technology to approach cosmic phenomena hypothetically: in some situations there is no need to conduct a physical experiment, because we can build the necessary tools inside a computer, create simulations, assign the simulated objects their physical properties, and model what would happen in a given situation.
Technological progress has allowed us to study, under certain conditions, phenomena that are otherwise completely intangible. Recently, after much work, researchers have even produced a virtual universe that can be contained in a really small space.
The Uchuu simulation is the most complex and detailed model of the universe produced to date. Its creators state that it “contains 2.1 trillion ‘dark matter’ particles in a space of 9.6 billion light-years in diameter”, which is impressive in itself. It should also be noted that the simulation does not capture a single fixed point in history such as the present; rather, it seeks to model the behavior of dark matter over a time period exceeding 13 billion years, taking into account the cosmic expansion of the universe.
Ambitious in scope, the authors of the work sought sufficient accuracy to identify groups of galaxies and even huge concentrations of dark matter, rather than focusing only on the formation of stars or planets.
This is because dark matter is a predominant component of the matter that composes the universe and is known to be fundamental in stellar formation and assembly; for this reason, studying these processes through virtual modeling is considered especially relevant to gaining a better understanding of the phenomena that shaped the universe.
This recreation of the universe was a colossal feat, requiring 40,000 computer cores and 20 million computer-hours. The raw model initially weighed in at over 3 petabytes, which equates to more than 3,000 terabytes for a single model, but thanks to heavy compression it can be reduced to only 100 TB.
The team that created Uchuu maintains free access to a platform where the Uchuu simulation can be consulted and explored by Internet users interested in the topic.
Moreover, thanks to this model and the large amount of data it gathers, researchers will be able to perform further relevant simulations in support of astronomical knowledge.
Finally, the study authors note that “future versions will include gravitational lens maps, galactic simulations, X-ray ensembles, and catalogs of active galactic nuclei,” anticipating a major advance in astronomical science with a technology that allows the universe to be simulated more accurately.
The information presented has been published in a journal of the Royal Astronomical Society. | https://www.theclevelandamerican.com/this-is-the-virtual-universe-created-by-scientists-and-you-can-download-it-if-you-have-100-terabytes-teach-me-about-science/
This unique balancing complex of active ingredients contains jasmine, black sesame and magnolia. The carefully selected ingredients merge into a wonderful symbiosis.
For centuries, these active ingredients have successfully been used in Traditional Chinese Medicine and Ayurveda. Their skin-friendly and balancing properties are scientifically recognized as well. The valuable composition of these plant extracts is full of supportive nutrients and vital substances.
The Calming Complex offers soothing relaxation to irritated skin and provides a harmonious, even-toned complexion. Furthermore, it offers anti-inflammatory and moisturizing properties and leaves even very stressed, demanding skin with a balanced and harmonious complexion. | https://www.dalton-cosmetics.com/int_en/explore-dalton/ingredients-library/calming-complex |