Portia, c. 1935, Color Lithograph.
Image size 14 3/4 x 12 1/2 inches (375 x 318 mm); sheet size 17 3/8 x 12 5/8 inches (442 x 321 mm).
A fine impression, with fresh colors, on off-white wove paper; printed to the sheet edges left and right and with margins (1 to 1 5/8 inches), in excellent condition.
Created for the New York City WPA. Collection: NYPL. Ex. collection Audrey McMahon, Director, New York City WPA Art Project.
Wonders of Our Time, 1936, Lithograph.
Edition 50. Signed, dated, titled, and annotated 50 prints in pencil.
Image size 11 5/8 x 15 1/8 inches (295 x 384 mm); sheet size 13 3/8 x 16 3/8 inches (340 x 416 mm).
A fine, rich impression, on cream wove paper, with margins (1/2 to 1 1/16 inches), in excellent condition. Created for the New York City WPA.
Illustrated in: L'Amerique de la Dépression: Artistes Engagés des Années 30, Musée-Gallerie de la Seita, Paris, 1996; In the Eye of the Storm: An Art of Conscience, 1930-1970, Francis K. Pohl, Chameleon Books Inc., New York, 1995; Pressed In Time: American Prints 1905-1950, Henry E. Huntington Library and Art Gallery, San Marino, 2007. | http://keithsheridan.com/abelman.html |
Q:
Show that the inequality is valid for infinitely many terms of a sequence
This question comes from a Brazilian book on real analysis, "Introdução a Análise" (Introduction to Analysis) by Antonio Caminha. The problem is:
Let $(a_n)_{n \in \mathbb{N}}$ be a sequence of positive real numbers. Show that the inequality
$$ 1 + a_n > 2^{1/n}a_{n-1} $$
is true for infinitely many $n \in \mathbb{N}$.
A:
Suppose, for contradiction, that the inequality holds for only finitely many $n \in \mathbb{N}$.
Then, for all except finitely many $n \in \mathbb{N}$,
the reverse inequality holds, i.e.,
$1 + a_n \leq 2^{1/n}a_{n-1} $ -------(1)
Taking limsup on both sides, and using $2^{1/n} \to 1$ (so that $\limsup 2^{1/n}a_{n-1} = \limsup a_{n-1} = \limsup a_n$), we get
$1 + \limsup a_n \leq \limsup a_n$ --- (2)
Case 1) $ \limsup a_{n} < \infty $
Let $\limsup a_n = a$
Then by (2),
$ 1 + a \leq a $
A contradiction, so this case cannot occur.
Case 2) $ \limsup a_n = \infty$
Then by (1), for all except finitely many $n \in \mathbb{N}$,
say for all $n \geq N_0$, we have
$1 + a_n \leq 2^{1/n}a_{n-1}$, i.e. $a_n \leq 2^{1/n}a_{n-1} - 1$.
Applying this twice already gives
$a_{n+1} \leq 2^{1/(n+1)}\big(2^{1/n}a_{n-1} - 1\big) - 1 = 2^{\frac{2n+1}{n(n+1)}}a_{n-1} - 2^{1/(n+1)} - 1$.
Iterating all the way down to $a_{N_0}$, and noting that each subtracted constant is only multiplied by further factors $2^{1/k} \geq 1$ along the way, induction gives, for all $n > N_0$,
$a_n \leq 2^{H(n)}a_{N_0} - (n - N_0)$, where $H(n) = \sum_{k=N_0+1}^{n} \frac{1}{k}$ -------(3)
Since $H(n) \leq \ln(n/N_0)$, we have
$2^{H(n)}a_{N_0} \leq (n/N_0)^{\ln 2}\,a_{N_0}$,
which grows like $n^{\ln 2}$ with $\ln 2 < 1$, whereas the subtracted term $n - N_0$ grows linearly.
Hence (3) forces $a_n < 0$ for all sufficiently large $n$,
a contradiction to the positivity of the sequence.
In both cases the assumption leads to a contradiction, so the inequality must hold for infinitely many $n \in \mathbb{N}$.
Hence proved.
Q:
Correctly substituting a matrix into a scalar equation
I am having trouble correctly substituting a matrix into an (originally) scalar equation.
For example:
A = {{3, 1}, {1, 2}}
cp = CharacteristicPolynomial[A, x]
which produces x^2-5x+5
It is known that the characteristic polynomial must be equal to zero.
Using:
cp /. {x -> A}
We obtain {{-5, 0}, {0, -5}}, but the correct answer is {{0, 0},{0, 0}} or,
MatrixPower[A, 2] - 5 A + 5 IdentityMatrix[2]
I have toyed with the idea of using CoefficientList and an If statement inside of a For loop, then matching the coefficients with their respective MatrixPower and multiplying the constant term Coefficient[cp, x, 0] by the identity matrix. But this seems unnecessarily complex and not general enough to work for all characteristic polynomial forms.
Can anyone think of an elegant way to raise the matrix to the correct power using MatrixPower and also only multiply the constant term by the identity matrix?
A:
This comes straight from the documentation on CharacteristicPolynomial. Look at the section Examples > Properties & Relations.
m = {{3, 1}, {1, 2}};
cp = CharacteristicPolynomial[m, x]
5 - 5 x + x^2
cl = CoefficientList[cp, x];
Sum[MatrixPower[m, j - 1] cl[[j]], {j, 1, Length[cl]}]
{{0, 0}, {0, 0}}
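If the polynomial had many terms, a Horner-style evaluation would avoid computing each matrix power separately. Something along these lines should also work (a sketch of the same idea, not taken from the documentation):

Fold[#1 . m + #2 IdentityMatrix[Length[m]] &, 0 IdentityMatrix[Length[m]], Reverse[CoefficientList[cp, x]]]

{{0, 0}, {0, 0}}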
description:
The Enolmatic bottle filler is a semi-professional solution for beginners. It is small and easy to use, and it can fill up to 200 bottles per hour.
This tool is designed for glass bottles; therefore, it is not recommended for plastic or metal containers.
Specifications:
|SKU reference|001062-40|
|Weight|4000 g|
| https://www.berlinpackaging.eu/en/66/glass-packaging-wine-and-champagne-containers-glass-bottles-bordeaux-wine/863/filling-machine-enolmatic |
The diffusion-driven instability and complexity for a single-handed discrete Fisher equation
For a reaction diffusion system, it is well known that the diffusion coefficient of the inhibitor must be bigger than that of the activator when the Turing instability is considered. However, the diffusion-driven instability/Turing instability for a single-handed discrete Fisher equation with the Neumann boundary conditions may occur and a series of 2-periodic patterns have been observed. Motivated by these pattern formations, the existence of 2-periodic solutions is established. Naturally, period doubling and the chaos phenomenon should be considered. To this end, a simplest two elements system will be further discussed, the flip bifurcation theorem will be obtained by computing the center manifold, and the bifurcation diagrams will be simulated by using the shooting method. It proves that the Turing instability and the complexity of dynamical behaviors can be completely driven by the diffusion term. Additionally, those effective methods of numerical simulations are valid for experiments of other patterns, thus, are also beneficial for some application scientists.
-
Multi-order fractional differential equations and their numerical solution
This article considers the numerical solution of (possibly nonlinear) fractional differential equations of the form y(α)(t) = f(t, y(t), y(β1)(t), y(β2)(t), …, y(βn)(t)) with α > βn > βn−1 > ⋯ > β1, α − βn ≤ 1, βj − βj−1 ≤ 1, and 0 < β1 ≤ 1.
-
Numerical investigation of noise induced changes to the solution behaviour of the discrete FitzHugh-Nagumo equation
In this work we introduce and analyse a stochastic functional equation, which contains both delayed and advanced arguments. This equation results from adding a stochastic term to the discrete FitzHugh-Nagumo equation which arises in mathematical models of nerve conduction. A numerical method is introduced to compute approximate solutions and some numerical experiments are carried out to investigate their dynamical behaviour and compare them with the solutions of the corresponding deterministic equation. | https://chesterrep.openrepository.com/handle/10034/6981/browse?type=journal&value=Applied+Mathematics+and+Computation |
The Volkswagen Polo is a German model of the B class.
This car is offered with gasoline and diesel engines.
The most powerful version of the car has a 1.2-liter gasoline engine (105 hp) with a 7-speed gearbox. With this engine, gasoline consumption is 7 liters per hundred kilometers in the city, 4.4 liters on the highway, and 5.3 liters combined. The capacity of the fuel tank is 45 liters. The Volkswagen Polo has a weight of 1600 kg. The car accelerates to 100 km/h in 9.7 seconds. The maximum speed of the Volkswagen Polo is 190 km/h. The 4-cylinder gasoline engine is located at the front of the car. The front suspension is independent; the rear suspension is semi-independent. The car has ventilated disc brakes on the front wheels and disc brakes at the rear. | https://carsot.com/volkswagen/polo/volkswagen-polo-v-2009-2015-hatchback-3-door.html |
The abundance of pine needles in hilly regions is not only a wasted resource but also a cause of forest fire threat. The primary motivation of the present study is to utilize locally available, abundant pine needles to complement the diesel-based generation/backup unit. In the present study, a techno-economic and environmental assessment of an off-grid dispersed biomass energy system has been carried out to meet the electricity demand of an educational building load currently run by the state grid. Moreover, a comparative analysis of a diesel generator has also been investigated to determine the optimal system configuration for the study area. The biomass gasifier energy system integrated with battery storage was found to be the most favorable configuration, with a total net present cost of $78,964 and cost of energy of 0.192 $/kWh, and it saves 27.7 Mt of CO2/year relative to a diesel-only system. The study will provide insights to designers, researchers, investors, and policy originators in the field of biomass energy systems.
Abbreviations
- HRES: Hybrid renewable energy system
- BG: Biomass gasifier
- d: Day of a year
- t: Hour of a day
- DG: Diesel generator
- SOC: Battery state of charge
- MATLAB: Matrix Laboratory
- i-HOGA: Improved Hybrid Optimization by Genetic Algorithm
- NREL: National Renewable Energy Laboratory
- TRNSYS: Transient Energy System Simulation Program
- RETScreen: Renewable Energy Project Analysis Software
- TNPC: Total net present cost
- COE: Cost of energy
Cite this article
Malik, P., Awasthi, M. & Sinha, S. Techno-economic analysis of decentralized biomass energy system and CO2 reduction in the Himalayan region. Int J Energy Environ Eng (2021). https://doi.org/10.1007/s40095-020-00370-0 | https://link.springer.com/article/10.1007/s40095-020-00370-0?error=cookies_not_supported&code=4f874e5f-e33e-49b5-bc87-b2d780cfcb2e |
How to Build Stairs for a Deck
Building stairs for a deck is a pretty simple procedure, once you’ve built a deck or patio. So far, we’ve covered “How to Build a Deck” and “How to Build a Patio”, but we’ve only brushed over the building of stairs. So to complete our series, here’s “How to Build Stairs for a Deck“.
This is going to be simple, in comparison to the rest of the project. Most of the time, your stairs are only going to require a few steps spaced out evenly, and are hardly the equivalent of building a staircase inside the home. Getting the incline right can be tricky, but in the case of a few steps, a few trial and error calculations should suffice. Learn what the “rise to run ratio” is, then build stairs according to an acceptable ratio.
What Is Rise to Run Ratio?
- Rise – The height between two steps on a staircase.
- Run – The depth of the step on a staircase. Also called the “tread”. Indicates how much room your foot has to step on.
The conventional approach is that the run plus twice the rise (run + 2 × rise) should equal between 24″ and 26″. So if the rise is 8 inches (×2 = 16), then the run should be 8-10 inches. If the rise is 10 inches (×2 = 20), then the run should be 4-6 inches (clearly unacceptable). You’ll quickly see the outer limits to what the rise should be, keeping in mind that the ideal tread depth should be about ten to twelve inches.
Measure the Vertical Distance
When preparing to build your staircase, first measure the vertical distance the stairs need to climb. A typical riser height is 7 inches, so divide your vertical distance by 7 to get the number of stairs you need. This is a rough number, since your vertical distance isn’t likely to divide perfectly by 7.
Calculate the total run of the stairs by multiplying your tread depth by the number of stairs you have, minus one. The -1 is figured in, because the final step is your deck floor. When you get this calculation, you know how far out from the patio your staircase should start.
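If you would rather let a computer do the arithmetic, here is a rough sketch in Python; the 60-inch vertical distance and 10-inch tread depth are made-up example values, so substitute your own measurements.

TARGET_RISER = 7.0  # typical riser height, in inches

def stair_layout(vertical_distance, tread_depth):
    # Round to a whole number of steps, then recompute the exact riser
    # height so all the steps come out even.
    steps = round(vertical_distance / TARGET_RISER)
    rise = vertical_distance / steps
    # Total run excludes the final step, which is the deck floor itself.
    total_run = tread_depth * (steps - 1)
    return steps, rise, total_run

steps, rise, total_run = stair_layout(60, 10)
print(steps, "steps,", round(rise, 2), "inch rise,", total_run, "inch total run")
# Comfort check: run + 2 x rise should land between 24 and 26 inches.
print("comfort value:", round(10 + 2 * rise, 1), "inches")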
Measure the Stringers
The “stringers” are the diagonal boards that support your treads, which rise with the staircase and are nailed to your stairs. Imagine walking up the stairs and stubbing your toe as you do so – the stringer is what you would stub your toe against. Usually, a stringer is a piece of 2×12 lumber.
The stringer is important in measuring your overall rise from the ground to the deck. Using rise and run calculations, make your measurements and mark them along the stringer.
Cut Stringers – Set Stringers into Place
Next, using a hand saw or circular saw, cut the stringers out. Set these boards into place on your deck. Cut them to shape with a hand saw, adding a stringer every 16″ along the width of the stairway.
The bottom of the stringers should rest on a cement footing. This footing can be either a prefab cement slab or a poured cement base.
Install the Treads
Once you have the stringers on your stairs, add the treads to your staircase. The treads are the actual steps. The width of the steps should be easy to figure, since the stringers are already in place.
Screw the treads to the stringers by using galvanized decking screws, using 2×6 boards cut to the proper length.
Building Stairs for a Deck
Mastering how to build stairs for a deck has one or two tricky aspects, but after building a full deck or patio, this should be a pretty simple process in comparison. Be careful to calculate rise and run correctly, because you want to build steady and secure stairs, so people from 3 years to 83 years old can make it up the stairs. | http://www.lifeguides.net/building/stairs-deck/ |
Virtue Ethics is a moral philosophy commonly attributed to Plato and Aristotle. The meaning of the word “virtue” for both was that of excellence. Although there are differences in their individual schools of thought, their outlook on morality is more or less the same.
Both these philosophers came upon their understanding of ethics and morality while attempting to answer some fundamental question. For Plato, the question was, “what is the good life?” And for Aristotle this was, “what do men fundamentally desire?”
In their individual attempts to answer these questions we find their theories of ethics.
Plato described four cardinal virtues in his works: Wisdom, Courage, Temperance, and Justice. The first three refer respectively to the following faculties of the human soul: Reason, Spirits, and Appetites, with Justice being the correct balance of the first three, which according to him was the subservience of spirits and appetites to the faculty of reason. These virtues, when properly exercised, would lead to the development of an organized, well-balanced, and hence virtuous individual. This well-balanced individual would be a happy person.
So, Plato hypothesizes that it is a happy person who is leading a good life (hence, a good life is a happy life). He is happy because he is morally virtuous, morally virtuous because he is guided by reason and reason is knowledge.
We now understand the first part of Plato’s theory that to be happy one must be morally virtuous. This leads to the second part of his theory that reason or that ultimate knowledge which is needed for morality comes from the Idea of Good. It is this Idea of Good which exists in the realm of Ideas, of immutable, unchanging Form which is the source and the final goal of all morality. And this Idea of Good is accessible only to the philosophers.
Aristotle differs from the Platonic view over the concept of “Forms” and that knowledge of morality is a priori. According to him, moral principles are to be discovered through the study of man’s life and his experiences and not from some obscure, formless world of ideals.
In his search for the answer to what men fundamentally desire, Aristotle more or less comes to the same conclusion as Plato, which is the attainment of Eudaimonia, a term used by Aristotle and translated commonly as Happiness. As with Plato, Aristotle also believes that leading a virtuous life will lead to happiness. A virtuous life is one which is governed by reason.
Reason in man has two functions. First is the use of reason (or the rational part of the soul) to control the irrational part of the soul (the appetitive, e.g. emotions, and the vegetative, e.g. breathing). The second is to use reason for the sake of deep analysis to come up with knowledge, which in turn yields laws and principles to govern everyday life.
He further states that virtue in man corresponds to these two functions of reason respectively: moral virtues and intellectual virtues. These moral and intellectual virtues are the mean between two vices. That is these virtues exist as the middle ground between two extremes.
Moral virtues are those which based on rationality are ingrained in a man as his nature and are practiced by him out of habit. Examples of the moral virtues are courage and prudence etc.
On the other hand, the intellectual virtues are those of exercising the rational part of the soul purely for the sake of reasoning, an example of which is wisdom.
The former (moral virtues) are within reach of the ordinary man while the intellectual virtues fall in the domain of a few divinely blessed only.
Finally, according to Aristotle it is the state of character of a person which makes him morally virtuous. This state of character is one of the three components of a man’s personality. The other two being: the passions (e.g. anger or fear) and the faculties (e.g. ability to feel anger).
It is the state of character which propels a man to choose between two extremes. Hence moral virtue is the state of character of a man which leads him to choose the “golden mean”. Let us take an example: proper pride is the mean between empty vanity and undue humility.
To sum up, Aristotle’s philosophy of ethics is that it is within the character of man that the power to choose lies. Hence it is not in the act itself but in the choice made between different forms of that act that morality is evident.
| https://www.ukessays.com/essays/philosophy/the-moral-philosophy-of-virtue-ethics-philosophy-essay.php |
Q:
Convexity of $4u^4x^{2.5}y^{-5}$ over $x>0, y >0$ and $u>0$
I am trying to find if the function $4u^4x^{2.5}y^{-5}$ is convex over $u>0, y>0$ and $x>0$.
The thing which comes to my mind immediately is to check the positive semi-definiteness of the Hessian over the domain, and I tried finding the Hessian using MATLAB. It's very complicated, and I don't think it should be that complicated. Should I use the basic definition of convexity? ($f(\lambda x + (1-\lambda)y) \leq \lambda f(x) + (1-\lambda) f(y)$)
A:
Function defined by $f(x,y,u) \ = \ 4 x^{5/2}y^{-5}u^4$ is NOT convex on this domain, i.e.,
$$f(\lambda X_1+ (1-\lambda) X_2) \ \ \leq \ \ \lambda f(X_1) + (1-\lambda) f(X_2) \ \ \ \ \ (*)$$
is false for certain points and certain values of $\lambda \in (0,1)$.
It suffices for example to take the midpoint ($\lambda=1/2$) of
$$X_1=(x_1,y_1,u_1)=(0.5,1,1) \ \ \ \ and \ \ \ \ X_2=(x_2,y_2,u_2)=(1,1,0.5)$$
The value of the left side of (*) is $0.61\cdots$ whereas the value on the right is $0.47\cdots$.
These cases are not rare, but aren't dominant either: random simulations in the limited range $(0,1)\times(0,1)\times(0,1)$ show that (*) fails in about 1 out of 6 cases.
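For anyone who wants to reproduce the numbers, a quick check in Python (a verification script, not part of the original answer):

f = lambda x, y, u: 4 * x**2.5 * y**-5 * u**4

X1 = (0.5, 1.0, 1.0)
X2 = (1.0, 1.0, 0.5)
mid = tuple((a + b) / 2 for a, b in zip(X1, X2))  # lambda = 1/2

lhs = f(*mid)                # value of f at the midpoint
rhs = (f(*X1) + f(*X2)) / 2  # average of the endpoint values

print(lhs, rhs, lhs <= rhs)  # prints 0.616..., 0.478..., False -> not convex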
The number of hours from sunrise to sunset is the total number of sunlight hours in a 24-hour period. Similarly, peak sun hours are the hours of sunlight in a 24-hour period that are strong enough to provide power when captured by a solar panel. Not every hour of sunlight delivers the same amount of energy: the sunlight at sunrise does not provide as much energy as the sunlight at mid-day. Thus, looking at the average peak sunlight hours for Brazos Bend is valuable for calculating your solar needs.
The equator has a latitude of zero while Brazos Bend has a latitude of 32.5. Any city located on the equator will receive the most sunlight throughout the year because the sunlight arrives at a perpendicular 90 degree angle to the earth at the equator. The further you are from the equator the more your daily sunlight hours can vary.
A tracking mount will increase the average peak sun hours for a solar power system. Think about a panel that tracks the sun across the sky versus a panel that is fixed and not moving: the tracking panel will show a higher production efficiency. A 1-axis mount tracks the sun from east to west, from sunrise to sunset, moving on a single axis of rotation. A 2-axis mount tracks the sun from east to west the same as a 1-axis mount would, but it also tracks the angle of the sun in the sky as it slowly varies season to season. A 2-axis mount is more necessary in high-latitude regions where the angle of the sun in the sky changes dramatically between each equinox.
The sun rises and sets every day as the earth rotates while in orbit around it. Barring any major disasters, this is a very predictable occurrence, and latitude narrows the prediction down to the minute for sunrise and sunset. But some things aren't as predictable and will greatly influence the efficiency of solar panels. Weather and cloud coverage, for example, can greatly diminish peak sun hours on any given day. Thick storm clouds will block a high percentage of the sun's rays, resulting in lower output from your solar panels. Weather needs to be factored into deciding when to use your system, or how much output to expect.
By taking the latitude of Brazos Bend one can get a close estimate of the amount of average peak sun hours per day for the geographical area. It varies with technology and the type of solar panel mount you use, but for a fixed mount solar panel in Brazos Bend one can expect close to 5.4 average peak sun hours per day. With a 1-axis tracking mount you would get 6.6 hours per day, and 7.2 hours per day with a 2-axis tracking mount that tracks the sun everywhere in the sky. | https://www.turbinegenerator.org/solar/texas/brazos-bend/ |
The calibration gives you a number called the calorimeter constant. It’s the amount of heat energy required to raise the temperature of the calorimeter by 1 degree Celsius. Once you know this constant, you can use the calorimeter to measure the specific heat of other materials.
The calorimeter constant is necessary to determine the volume and pressure of the contents of the calorimeter and must be corrected for each time the calorimeter is used. Because the calorimeter is not ideal, it absorbs some of the heat from its contents, and this heat must be corrected for.
Beside the above, what is the calorimeter constant of water? Determine a Calorimeter Constant II (the specific heat capacity of water is 4.184 J g⁻¹ °C⁻¹).
Also to know, how do you calculate the calorimeter constant?
Subtract the energy gained by the cold water from the energy lost by the hot water. This will give you the amount of energy gained by the calorimeter. Divide the energy gained by the calorimeter by Tc (the temperature change of the cold water). This final answer is your calorimeter constant.
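As an illustration, the procedure above can be written out in a few lines of Python; the masses and temperatures below are made-up example values:

C_WATER = 4.184  # specific heat of water, J / (g * degree C)

def calorimeter_constant(m_hot, T_hot, m_cold, T_cold, T_final):
    q_hot = m_hot * C_WATER * (T_hot - T_final)     # heat lost by hot water
    q_cold = m_cold * C_WATER * (T_final - T_cold)  # heat gained by cold water
    q_cal = q_hot - q_cold                          # heat absorbed by the calorimeter
    return q_cal / (T_final - T_cold)               # divide by the cold-water temperature change

# Example: 50 g of water at 60 C poured into 50 g at 20 C, settling at 38 C.
print(calorimeter_constant(50, 60, 50, 20, 38))     # about 46.5 J per degree C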
What is the purpose of the calibration of the calorimeter?
The temperature increase is measured and, along with the known heat capacity of the calorimeter, is used to calculate the energy produced by the reaction. Bomb calorimeters require calibration to determine the heat capacity of the calorimeter and ensure accurate results.
What is a normal calorimeter constant?
A calorimeter constant (denoted Ccal) is a constant that quantifies the heat capacity of a calorimeter. It may be calculated by applying a known amount of heat to the calorimeter and measuring the calorimeter’s corresponding change in temperature.
What is the formula for specific heat?
Specific heat is the amount of heat required to raise one gram of any substance one degree Celsius or Kelvin. The formula for specific heat is the amount of heat absorbed or released = mass x specific heat x change in temperature.
What are the units for QRXN?
The standard enthalpy of reaction is symbolized by ΔHº or ΔHºrxn and can take on both positive and negative values. The units for ΔHº are kilojoules per mole, or kJ/mol. The Standard State: The standard state of a solid or liquid is the pure substance at a pressure of 1 bar (10⁵ Pa) and at a relevant temperature.
What does a negative calorimeter constant mean?
If the calorimeter constant is negative, it means that it will release heat when heated up, or absorb heat when cold water is inserted, which is not logical.
How do you define enthalpy?
Enthalpy is a thermodynamic property of a system. It is the sum of the internal energy added to the product of the pressure and volume of the system. It reflects the capacity to do non-mechanical work and the capacity to release heat. Enthalpy is denoted as H; specific enthalpy denoted as h.
How do you do a calorimeter experiment?
Place the metal into a test tube and place the test tube into the 250 mL beaker containing the boiling water. Empty and dry the calorimeter from Part A, then add about 40 mL of water to the calorimeter. Weigh and record the mass of the cups, cover and water in Data Table B. Calculate and record the mass of the water.
How do you calibrate a calorimeter?
Re:How to calibrate a calorimeter by measuring heat loss from a system? Weigh the empty calorimeter, add about 50mL of cold tap water and reweigh it to get the weight of cold water (W1). Insert the digital thermometer, note the time (let this be time zero) and plot the temperature each minute for 5 minutes.
Can the value for Ccal be negative?
than 100 mL of boiling water, you will arrive at a value of Ccal that is negative, which is impossible.
How does calorimetry work?
A typical calorimeter works by simply capturing all the energy released (or absorbed) by a reaction in a water bath. Thus by measuring the change in the temperature of the water we can quantify the heat (enthalpy) of the chemical reaction. Attached below is a helpful sheet on calorimetry from Dr.
What does a high calorimeter constant mean?
Explanation: The “calorimeter constant” is just the specific heat of the calorimeter and its thermal conductivity. An “ideal” calorimeter would have a very low specific heat and zero thermal conductivity because the point is to conserve energy within the system.
What is Q = mc∆T used for?
Q = mc∆T
Q = heat energy (joules, J)
m = mass of a substance (kg)
c = specific heat (units J/kg∙K)
∆ is a symbol meaning “the change in”
How do you find QCAL?
Calculate Qcal. Measure the change in temperature in degrees Celsius that occurs during the reaction inside the calorimeter. Multiply Ccal (energy/degree Celsius) by the change in temperature that occurred during the reaction in the calorimeter.
What is the symbol for specific heat capacity?
In SI units, specific heat capacity (symbol: c) is the amount of heat in joules required to raise 1 gram of a substance 1 kelvin. It may also be expressed as J/kg·K. Specific heat capacity may be reported in the units of calories per gram degree Celsius, too. | https://unangelic.org/why-do-we-have-to-calculate-a-calorimeter-constant-what-does-the-calorimeter-constant-account-for/ |
"The Silent King" is the fourteenth episode in the second season of Adventure Time. It is the fortieth episode overall.
Synopsis
After Finn deposes the spank-happy king of Goblins, he becomes the king of goblins to prevent strife. But they and their strange rules prove not to his liking.
Plot
The episode starts with Finn and Jake fighting against Xergiok, the goblins' tyrant king. Xergiok tries to cast a fire spell on them, but Finn deflects it with his sword saying "wands are for wimps!" They defeat him and force him to flee. After Xergiok's defeat, both Finn and Jake are taken to the palace of the Goblin Kingdom, where they meet Gummy, the royal goblin chief of staff, who asks Finn to be their king. He also tells them how they are ill-accustomed to any compassionate treatment, since Xergiok loved being a jerk and randomly spanked their butts. At first Finn refuses, explaining that he is an adventurer for life, but quickly changes his mind when he realizes that if there is no king for the goblins, they will start a riot that could eventually destroy them. Jake adds that if Finn is going to be a king, he should also have a queen by his side and thus puts himself in office.
So Gummy begins to show everything that Finn and Jake are entitled to as royalty, such as a goblin birthing chamber, living fountains, an advanced video game system, and a stable of Dragons. When they arrive in the royal bedroom, Jake asks to turn the bed into a bunk bed and immediately takes the top, but as they settle down in their beds, Gummy implores that Finn should listen as he reads from the Book of Royal Rules. While Jake refuses to listen and falls asleep, Finn agrees to hear the rules but eventually falls asleep as well.
The next morning, Finn discovers that most of the rules that Gummy read prohibit him from doing almost anything himself, such as brushing his teeth, cutting and chewing his food, and even helping people being attacked by thieves. As Finn struggles to conform to the rules, Xergiok suddenly returns to the city with an army of evil Earclops in an attempt to take back his rule. Finn believes that this attack is the perfect opportunity to show the goblins that an active king can be a good king, but he must do so without letting them know that he is the one helping them or else they may stop him. Finn leaves Whisper Dan at the palace disguised as himself and charges out onto the battlefield. As he nears Xergiok's army, Finn hops into Jake's mouth and uses him as a muscle suit to easily defeat the earclops army by creating sound waves that hurt their sensitive ear heads. Finn says to Xergiok as he lies trapped under a fallen earclops, "Dude, no one uses earclopses in battle without earplugs." In desperation Xergiok attempts to use his magic against Finn, but he is finally defeated when struck by one of his own fireballs, causing him to drop his wand which Finn then eats.
After their victory, Finn and Jake are carried back to the palace by the goblin warriors where they find that Whisper Dan disguised as Finn is exactly the kind of king that the goblins need: a king who needs and allows others to do everything for him. Finn and Jake then decide to leave the Goblin Kingdom still merged. As they leave, Finn asks Jake why his insides smells like vanilla. Jake responds that a wizard placed a curse on him.
Characters
Major characters
Minor characters
- Goblins
- Thief Goblin
- Royal Speaker Goblin
- Old Lady Goblin
- Walking Goblin
- Tooth Brush Goblins
- Maria
- Soldier Goblins
- Dragons
- Earclopses
- Snail
- Lion
- White Tigers
Locations
Music
Trivia
- The idea of Finn merging with Jake is from an unproduced first season episode "Jakesuit."
- The number "041010" appears in this episode, on the plate of an overturned car that appears briefly during the goblin riot scene.
- In the last scene with the goblin crowd, a goblin with a PHIL FACE appears on the right side of the screen.
- When Finn gets Jake to battle the earclops, he finds him in an isolation tank.
- After Finn says that he will become the Goblin King, Jake mouths the rest of what Finn is saying.
- In the bathroom of the goblin castle, there is a compartment with a ladder leading up to it that is filled with toilet paper.
Episode connections
- In the part when the goblins thank Finn and Jake, a goblin seems to be holding a ball that has a star on it like what the Fluffy Person was holding in "It Came from the Nightosphere."
- Finn uses Jake as a suit once again in the Season 5 episode "Jake Suit".
Cultural references
- When Finn wears Jake's body like a suit, to fight Xergiok and the earclops army, they use an ability similar to Marvel superhero Hulk's Thunder Clap.
- The way the goblins are born from a mud pit is very similar to how orcs are born in The Lord of the Rings.
Production notes
- This episode was previously titled "Finn the King."
- The storyboard for this episode contains what would have been the first mention of the Mushroom War in the series. It is seen when Gummy is reading from the rule book; however, this line was cut from the actual episode.
Errors
- When Finn is battling Xergiok for the first time, the bottom of Finn's hat extends for a frame as he lowers his arm and the space between his fingers is brown.
- There are a few mistakes that happen during the scene when Finn jumps into Jake's mouth:
- Three separate frames show Finn's hat as the same color as the sky behind him.
- One frame has Finn's mouth above his eyes.
- One frame has Finn's hat the same color as Jake's body.
- One frame has the inside of Jake's mouth the same color as the rest of his body.
Censorship
This episode was censored in some countries. See Censorship of Adventure Time for more information. | https://adventuretime.fandom.com/wiki/The_Silent_King |
About Bitter Gourd: Bitter gourd is a perennial vine, and it is a very popular vegetable. It tastes bitter and gives off a distinctive smell. The scientific name of bitter gourd is Momordica charantia, and it is from the Cucurbitaceae family. It is also known as bitter melon, and it comes in many varieties. It is widely cultivated in Asia and Africa.
The vine grows up to 20 to 25 feet long with a spread of up to 5 to 6 feet wide; always give the vines support using a stake or rope. A balcony is a good location for planting.
Plantation or Seedling
How to Plant Seeds at Home: One of the best methods to grow bitter gourd at home is from seeds. Seeds are light brown, 10 to 20 mm long, and easily available in local nurseries and online stores. If you have a ripened bitter gourd, you can remove and use its seeds; a ripened bitter gourd is yellow.
Prepare the Potting Soil: To make a good-quality soil mixture, take normal garden soil (50%), vermicompost (30%), and coco peat (20%). Mix it well and take one flowerpot (at least 6 to 8 inches wide).
- Keep the seeds in water (at least one day).
- Fill the soil mixture into the flowerpot.
- After one day, remove all the seeds from the water.
- In the soil mixture, dig the holes 1 inch deep.
- Put the seeds into those holes.
- Cover all the seeds with the same soil mixture.
- Give water properly and place the flowerpot in full sunlight.
Within 6 to 8 days, germination starts from the seeds, and stems and new leaves appear. Stems are greenish with lots of moisture. Once the vines have grown 7 to 10 inches, it is time to transplant them into a separate flowerpot.
Remember: The USDA hardiness zone should be 9-11, and the soil mixture should be slightly acidic. Check whether drainage holes are present in the flowerpot; if there are no holes at the base, make 5 to 9 holes (drainage holes are necessary). Cover the holes with gravel or stones. Use a clay flowerpot. The distance between the seeds should be 2 to 4 inches.
Transplantation Tips: There is no need to transplant the vine if you planted it in a large flowerpot. If you planted it in a small flowerpot, then you need to transplant it. For transplantation, use the same soil mixture and steps.
Best Caring Tips
- Watering: Give water properly when the top layer of the soil mixture is dried. During the winter season, give water after 3 to 5 days.
- Fertilizer: One of the best fertilizers for bitter gourd vines is cow dung compost. Before the growing season, add a 1-inch layer of cow dung.
- Temperature: Keep the vines in full sunlight for at least 4 to 5 hours, or you can maintain the temperature between 15°C to 30°C.
- Pests & Diseases: Over time, many types of insects and diseases occur in the vines, so always spray the neem oil to kill them.
- Pruning and Cutting: Before the growing season, remove all the faded or dry leaves and weeds (near the root).
Fruiting and Harvesting Time
Season of Fruit: After successful planting, flowering starts from the vines within 3 to 5 days, and after flowering, fruits appear. Fruits are long and green in colour, and the size of the fruit is 3 to 5 inches long.
Harvesting Time: After one month of fruiting, you can harvest them. If you are not satisfied with the fruits’ size, you can wait. | https://gardenontop.com/most-popular-trick-to-plant-bitter-guard-seeds/ |
Q:
Splitting ASCII text files in C
Suppose I have the following code in C:
FILE* a=fopen("myfile.txt","r");
FILE *b, *c;
There is a delimiter line in 'a', which designates the place where I want to split; and I want to split the contents of 'a' into 'b',and 'c'. I want to do this without creating any other files.
Also later, I want to do this dynamically, by creating a pointer array pointing to 'FILE*'s. So the number of delimiter lines will be arbitrary.
For this case, suppose that the delimiter line is any line that has the string 'delim'.
A:
The concept could be:
1) fopen() INFILE and (first) OUTFILE
2) while you can, fgets() lines from INFILE and strncmp() them to the delimiter
2.a) delimiter not found: fputs() the line to OUTFILE
2.b) delimiter found: fclose() OUTFILE and fopen() the next OUTFILE
2.c) end of file: break loop
3) fclose() INFILE and OUTFILE
Or this way:
1) fopen()INFILE
2) fseek() to the end of the stream and use ftell() to get the file position, let's call this N
3) rewind() the stream and fread() N bytes from it into a buffer.
4) fclose()INFILE
5) while you can, strstr() the delimiter in your buffer and fwrite() the data blocks inbetween to OUTFILEs
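Put together, the first approach might look roughly like this (an untested sketch; the file names, the "delim" marker from your question, and the buffer size are illustrative assumptions; note this writes real output files, so if you truly must avoid creating files, tmpfile() streams could replace the fopen() calls):

#include <stdio.h>
#include <string.h>

#define MAXLINE 4096

int main(void)
{
    char line[MAXLINE], name[64];
    int part = 0;

    FILE *in = fopen("myfile.txt", "r");
    if (in == NULL) return 1;

    sprintf(name, "part%d.txt", part);
    FILE *out = fopen(name, "w");

    while (fgets(line, sizeof line, in) != NULL) {
        if (strstr(line, "delim") != NULL) {
            /* delimiter line: close the current part and open the next */
            fclose(out);
            sprintf(name, "part%d.txt", ++part);
            out = fopen(name, "w");
        } else {
            /* ordinary line: copy it through */
            fputs(line, out);
        }
    }

    fclose(in);
    fclose(out);
    return 0;
}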
Q:
Traversing an unusual tree in Python
I have an unusual tree array like this:
[[0, 1], [1, 2], [2, 3], [2, 4], [2, 5], [5, 6],
[4, 6], [3, 6], [0, 7], [7, 6], [8, 9], [9, 6]]
Each element of the array is a pair, which means second one is a follower of the first, e.g.:
[0, 1] - 0 is followed by 1
[1, 2] - 1 is followed by 2
I am trying to extract array like this:
0 1 2 3 6
0 1 2 4 6
0 1 2 5 6
0 7 6
8 9 6
I couldn't code a robust traversal to extract all possible paths like this. How can I do it with Python?
A:
You could do it using a recursive generator function. I assume that the root node in the tree always comes before all its children in the original list.
tree = [[0, 1], [1, 2], [2, 3], [2, 4], [2, 5], [5, 6], [4, 6], [3, 6],
[0, 7], [7, 6], [8, 9], [9, 6]]
paths = {}
for t in tree:
if t[0] not in paths: paths[t[0]] = []
paths[t[0]].append(tuple(t))
used = set()
def get_paths(node):
if node[1] in paths:
for next_node in paths[node[1]]:
used.add(next_node)
for path in get_paths(next_node):
yield [node[0]] + path
else:
yield [node[0], node[1]]
for node in tree:
if tuple(node) in used: continue
for path in get_paths(node):
print path
Output:
[0, 1, 2, 3, 6]
[0, 1, 2, 4, 6]
[0, 1, 2, 5, 6]
[0, 7, 6]
[8, 9, 6]
Explanation: First I construct a list of all possible paths from each node. Then for each node that I haven't used yet I assume it is a root node and recursively find which paths lead from it. If no paths are found from any node, it is a leaf node and I stop the recursion and return the path found.
If the assumption about the order of the nodes does not hold then you would first have to find the set of all root nodes. This can be done by finding all nodes that do not appear as the second node in any connection.
A:
Here you go. Not the nicest code on earth but it works:
inputValues = [[0, 1], [1, 2], [2, 3], [2, 4], [2, 5], [5, 6], [4, 6], [3, 6], [0, 7], [7, 6], [8, 9], [9, 6]]
tree = {}
numberOfChildren = {}
for (f, t) in inputValues:
if not tree.has_key(f):
tree[f] = []
tree[f].append(t)
if not numberOfChildren.has_key(t):
numberOfChildren[t] = 0
numberOfChildren[t] += 1
roots = [c for c in tree if c not in numberOfChildren]
permutations = []
def findPermutations(node, currentList):
global tree
global permutations
if not tree.has_key(node):
permutations.append(currentList)
return
for child in tree[node]:
l = list()
l.extend(currentList)
l.append(child)
findPermutations(child, l)
for r in roots:
findPermutations(r, [r])
print permutations
While 600 million people in India suffer without electricity, hundreds of miners are trapped underground. Officials are attempting a rescue operation.
Digital Journal reported on the power outage in northern India, which initially affected 300 million, but is today affecting 600 million people in the country.
When a massive power failure knocked out electricity to the country, the chief minister of West Bengal state, Mamata Banerjee, announced that hundreds of miners are trapped today in eastern India.
Banerjee said, "We are trying to rescue the coal miners. All efforts are on to resume power supplies. You need power supplies to run the lifts in the underground mines."
She further said that hundreds of miners are trapped in Burdwan, around 180 kilometers northwest of Kolkata. The mines are owned by Eastern Coalfields and are government-run.
If you were to go and do a simple internet search for the phrase “artistic representations of Pi” (and I encourage you do so), you’d find a slew of amazing looking pieces that generally depict what looks like random noise. Sure, the noise may be presented elegantly, but the fact still remains, the digits of Pi, when plotted out, just represent an overall lack of pattern.
You, see, people tend to focus on the actual digits. Before I begin talking about an alternative approach, let’s talk about art in general. Upon the wall of my son’s pre-school classroom is a chart which reads: ELEMENTS OF ART. The elements include: Color, Value, Line, Texture, Shape, Form, and Space. This is a nice list that indicates the sort of elements that I tried to bring to the problem of representing Pi in what I believe are three very interesting ways.
There are a couple of other mathematical concepts that relate to art which I relied on heavily for this project. The first is Symmetry. By imposing symmetry onto Pi, it becomes less about the noise. This I did by folding Pi into a circle. Quite apropos, but it’s been done before.
The second thing was studying which numbers do produce wonderful patterns in art. It turns out that this is done by creating what is called a Sequence. Arguably, the most famous sequence is the Fibonacci Sequence. This is created by starting with 1 and 1 and then serially adding the two previous terms together:
1, 1, 2, 3, 5, 8, 13 . . .
The Fibonacci Sequence is a marvel of the mathematical world as well as appears throughout the natural world in countless permutations.
Another famous sequence is the Primes. A number that can’t be divided by any number except itself and 1:
2,3,5,7, 11, 13, 17, 19 . . .
There is an entire library of sequences.
What is missing is the sequence created by adding the digits of Pi serially. I’ve searched and searched for the sequence’s name and for any research into exploring its properties, but, apparently, I’m laying claim to it.
The sequence turns out to be:
3, 4, 8, 9, 14, 23, 25, 31, 36 . . .
One thing you’ll notice about such sequences is that they almost always grow as the sequence progresses. It is because of the relationship between the numbers, rather than just the numbers themselves, that make sequences a great tool for an artist and a mathematician to explore.
It should also be noted that I chose a 36 degree circle because of its divisibility and the fact that it limits most of the sequences I wanted to explore to a range of 6 to 11 numbers. A very manageable set that can be used for multiple purposes, as you’ll see. So, with the limit set to 36, the serialized Pi sequence we are going to explore by encoding into three schemas is once again:
3, 4, 8, 9, 14, 23, 25, 31, 36
These nine numbers will be turned into 3D Shapes, Colors, and Sounds.
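For the curious, the sequence is easy to generate; here is a small Python sketch using the first few digits of Pi (hard-coded below) and capping the running total at 36:

PI_DIGITS = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]  # leading digits of Pi

sequence, total = [], 0
for d in PI_DIGITS:
    total += d
    if total > 36:       # the circle has 36 nodes, so stop there
        break
    sequence.append(total)

print(sequence)          # [3, 4, 8, 9, 14, 23, 25, 31, 36]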
To understand my approach, it helps to understand how to form a cardioid – a shape within a circle that resembles a heart.
We’ll be using a 36-degree circle, so it helps to understand how these 36 nodes will be counted. For these examples, we’ll always be using the bottom node as our Starting Point (the node in the South direction, or in the 6 o’clock position. Instead of counting the Starting Point as “1”, however, we consider it to be “0”.
Notice that when you count up one side of the circle that the lines grow longer until you reach the number “18”, which is directly opposite node “0”, and then the lines become shorter incrementally again as you count higher. In essence, the numbers 1-17 all have a corollary number 19-35 so that the distance from 0 to 1 is the same in length as 0 to 35. In the same way, 0 to 2 is the same as 0 to 34, etc.
With all this in mind, it will help to view the next two links and learn about the Cardioid.
In creating a Cardioid on a 36-degree circle, each step in the process creates just one instance of a particular chord, following the doubling rule that node n is joined to node 2n. This pattern creates a mirrored spiral across the even numbers where the lines grow toward node 18 and then shrink again as the numbers go higher.
But let us now consider the act of creating a line from every single node of the same number over and over. Take for example the number “9”. When the number “9” is fully mapped around the circle, a smaller circle appears. This circle of the number “9” is actually formed out of the half-way points of every single line.
But also notice that the number “9” corresponds to the number “27”. Before we move on to a method to differentiate such cases of corollary numbers, let’s take a look at what the number sequence of 3, 4, 8, 9, 14, 23, 25, 31, and 36 looks like when plotted in number circles within a 36-degree circle in pencil.
The pencil version looks pretty amazing, but presents a problem with higher numbers that might get confused with or even overlay lower numbers.
To see the issue clearer, let’s keep in mind that the goal is to create an actual 3D version of the circle using thread for the lines. When constructing a 3D version, the thread gets built from the base upwards. In other words, the thread goes on the loom in the reverse order of the numbers. What happens when higher numbers are covered over with thread as the lower numbers get added?
To see the problem clearer, let’s view several sequences in cross-sectional views.
Interestingly enough, from doing this, we see that the Pi Sequence is actually a great sequence to apply to the 36-degree circle. The Prime Sequence turns out to be a bad choice, at least with 36 degrees. The “odd” nature of Prime Sequence causes several numbers to coincide with their corollary, which would hide the higher number and/or artificially reinforce the lower numbers (e.g 5 & 31, 7 & 29, 13 & 23, 17 & 19).
This doesn’t happen with the Cardioid because the even numbers are only “counted” one instance whereas using our method, each number is “counted” 36 times to form the full circle.
An artistic method of differentiating the lower and higher corollaries is to map each number to a color. I chose to use the RGB color wheel because it is already mapped onto a circle. Plus, the values are precisely determined – even though I chose not to seek out thread colors of such exacting values.
By adding color values to the circle we can see quite easily the difference between a “yellow 6-valued circle” and a “magenta 30-valued circle”.
The end result is like looking through a rainbow of the digits of the Pi Sequence.
This next picture illustrates the rainbow color of the cross section which is formed.
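For readers who want to experiment, here is a small Python sketch of the idea; the hue mapping is a simplification for illustration rather than the exact thread colors used in the piece:

import colorsys

def node_color(n):
    hue = (n % 36) / 36.0                     # position on the color wheel
    r, g, b = colorsys.hsv_to_rgb(hue, 1, 1)
    return tuple(round(c * 255) for c in (r, g, b))

for n in (6, 30):                             # a corollary pair
    print(n, node_color(n))                   # 6 -> yellow, 30 -> magenta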
Not wanting to stop there, I chose to apply the sequence to music.
The problem then becomes how to project a musical scale onto a 36-degree circle? It just so happens that 36 can be divided into 12 sets of 3 numbers. The Western musical scale also has 12 note values in an octave for the Chromatic Scale.
This process, unlike the previous two of mapping numbers and colors, cannot be reversed, though. Since numbers 1, 2, and 3, could all become the first note, there is no way to know the first note and from that extrapolate backwards whether that note is exactly 1, 2, or 3. But it is still a fun exercise to see how the Pi Sequence sounds when plotted onto the circle of notes.
For my version, I chose to start with the note A and build from A being the root note.
Note Note Value Original Number
A 0 1-3
A#/Bb 1 4-6
B 2 7-9
C 3 10-12
C#/Db 4 13-15
D 5 16-18
D#/Eb 6 19-21
E 7 22-24
F 8 25-27
F#/Gb 9 28-30
G 10 31-33
G#/Ab 11 34-36
A (again 1 octave higher)
Our Pi Sequence (3, 4, 8, 9, 14, 23, 25, 31, 36) becomes:
A, A#, B, B, C#, E, F, G, G#
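The grouping is mechanical enough to script; here is a short Python sketch of the mapping (assuming A as the root note, as above):

NOTES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def number_to_note(n):
    return NOTES[(n - 1) // 3]   # three circle numbers per semitone

pi_sequence = [3, 4, 8, 9, 14, 23, 25, 31, 36]
print([number_to_note(n) for n in pi_sequence])
# ['A', 'A#', 'B', 'B', 'C#', 'E', 'F', 'G', 'G#']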
This creates 9 notes and I chose to write in the unorthodox time signature of 9/8 time. My method was to break down the notes into three groups of three notes each. I also wrote a return to Pi at the end that emphasized the numbers 3, 1, 4, 1, 5, and then 9 but using the 9-note sequence as a reprise. 314159 then plays over and over until fade out.
Plotted on a midi sequencer, it looks like this when played twice. I lowered the last 3 notes an octave.
Here is the final result.
Here are several pictures of the build. | https://davidmauricegarrett.com/category/pi/ |
---
abstract: |
We study Falconer’s subadditive pressure function with emphasis on analyticity. We begin by deriving a simple closed form expression for the pressure in the case of diagonal matrices and, by identifying phase transitions with zeros of Dirichlet polynomials, use this to deduce that the pressure is piecewise real analytic. We then specialise to the iterated function system setting and use a result of Falconer and Miao to extend our results to include the pressure for systems generated by matrices which are simultaneously triangularisable. Our closed form expression for the pressure simplifies a similar expression given by Falconer and Miao by reducing the number of equations needing to be solved by an exponential factor. Finally we present some examples where the pressure has a phase transition at a non-integer value and pose some open questions.\
\
*Mathematics Subject Classification* 2010: primary: 37D35, secondary: 37C45, 37D20.\
\
*Key words and phrases*: subadditive pressure, analytic function, thermodynamic formalism.
author:
- |
Jonathan M. Fraser\
\
School of Mathematics, The University of Manchester,\
Manchester, M13 9PL, UK\
E-mail: [email protected]
title: Remarks on the analyticity of subadditive pressure for products of triangular matrices
---
Introduction
============
Let $n \in \mathbb{N}$ and $\{A_i\}_{ i \in \mathcal{I}}$ be a finite collection of $n \times n$ non-singular matrices. We define the *subadditive pressure* for this system following Falconer [@affine]. Let $\mathcal{I}^* = \bigcup_{k{\geqslant}1} \mathcal{I}^k$ denote the set of all finite sequences with entries in $\mathcal{I}$ and for $$\textbf{\emph{i}}= \big(i_1, i_2, \dots, i_k \big) \in \mathcal{I}^*$$ write $$A_{\textbf{\emph{i}}} = A_{i_1} \circ A_{i_2} \circ \dots \circ A_{i_k}$$ and $\alpha_1(\textbf{\emph{i}}) {\geqslant}\dots {\geqslant}\alpha_n(\textbf{\emph{i}})>0$ for the singular values of $A_\textbf{\emph{i}}$. The *singular values* of a linear map $A$ are the positive square roots of the eigenvalues of $A^T A$. They are also the lengths of the semi-axes of the image of the unit ball under $A$ and thus correspond to how much $A$ contracts or expands in different directions. For $s \in [0,n)$ the *singular value function* $\phi^s: \mathcal{I}^* \to (0,\infty)$ is defined by $$\phi^s(\textbf{\emph{i}}) =\alpha_1(\textbf{\emph{i}}) \alpha_2(\textbf{\emph{i}}) \cdots \alpha_{m}(\textbf{\emph{i}}) \alpha_{m+1}(\textbf{\emph{i}})^{s-m}$$ where $m \in \{0, \dots, n-1\}$ is the unique non-negative integer satisfying $m {\leqslant}s < m+1$. The singular value function leads us to define the *pressure* $P : [0,n) \to \mathbb{R}$ corresponding to the system $\{A_i\}_{ i \in \mathcal{I}}$ by $$P(s) = \lim_{k \to \infty} \frac{1}{k} \log \sum_{\textbf{\emph{i}} \in \mathcal{I}^k} \phi^s(\textbf{\emph{i}})$$ where the limit exists since the singular value function is submultiplicative in $\textbf{\emph{i}}$, i.e. $$\phi^s(\textbf{\emph{i}}\, \textbf{\emph{j}}) {\leqslant}\phi^s(\textbf{\emph{i}}) \, \phi^s(\textbf{\emph{j}})$$ for all $\textbf{\emph{i}}, \, \textbf{\emph{j}} \in \mathcal{I}^*$, see [@affine Lemma 2.1]. It is convenient to extend the domain of $P$ to $[0,\infty)$ and so we let $$P(s) = \log \sum_{i \in \mathcal{I}} \det(A_i)^{s/n}$$ for $s {\geqslant}n$. Here the pressure is defined without the need for a limit as the determinant is multiplicative. It is easy to see that $P$ is continuous on $[0,\infty)$ and convex on each interval $(m,m+1)$, with $m \in \{0,\dots, n-1\}$, and on $(n, \infty)$. Moreover, it is easy to construct examples where the pressure is not convex on an interval containing an integer; see Section \[examplessection\]. It is a simple consequence of piecewise convexity that $P$ is differentiable at all but at most countably many points and semi-differentiable everywhere. The main focus of this article is to study real analyticity of the pressure and our main application is that the pressure is always piecewise real analytic for products of matrices which are simultaneously triangularisable, see Corollaries \[mainanalytic\] and \[mainanalytictri\]. Moreover, the number of phase transitions, and therefore points where the pressure is not smooth, can be bounded in terms of the spatial dimension and the number of matrices. We also provide examples showing that the pressure can have phase transitions at non-integer values and pose some open questions.\
\
We say a real valued function on some domain $D \subseteq \mathbb{R}$ is *piecewise real analytic* if $D$ can be written as the closure of the union of a finite collection of open (possibly unbounded) intervals with the function being real analytic on each interval. The boundary points of the open intervals which are in the interior of $D$ are called *phase transitions*, provided that the function is not real analytic on any neighbourhood of the point. Note that if a piecewise real analytic function is continuous, then it is completely defined by its values on the open intervals where it is real analytic.\
\
One of the main applications of the subadditive pressure function discussed in this paper is in the study of self-affine fractals. In particular, if the matrices $\{A_i\}_{i \in \mathcal{I}}$ are chosen to be contractions and to each matrix we associate a translation vector $t_i \in \mathbb{R}^n$, then we have an iterated function system $\{A_i+t_i\}_{i \in \mathcal{I}}$, which has a unique non-empty compact attractor $F$, called the self-affine set for the system. Alternatively, assuming some separation conditions, one can view $F$ as the repeller of a uniformly expanding map defined by the inverse branches of the contraction mappings. In either case, the pressure is related to many interesting geometric properties of $F$ and the associated dynamical system. Perhaps most notably the unique zero of the pressure gives an upper bound for the Hausdorff dimension of $F$ and a ‘best guess’ for the actual Hausdorff dimension. These ideas date back to Douady-Oesterlé [@douady] and Falconer [@affine; @affine2]. In [@affine] Falconer proved that the zero of the pressure gives the Hausdorff dimension of $F$ for Lebesgue almost all choices of $\{t_i\}_{i \in \mathcal{I}}$ provided the matrices all have singular values strictly less than $1/3$, which was relaxed to the optimal constant $1/2$ by Solomyak [@solomyak]. Since then the subadditive pressure, and several related functions, have received a lot of attention in the literature on self-affine fractals and non-conformal dynamics. There have also been several extensions of these ideas to nonlinear systems, see Falconer [@falconerrepeller] and Barreira [@barreira]. Due to their focus on upper triangular systems, the papers of Falconer-Miao [@miao], Falconer-Lammering [@lammering], Manning-Simon [@manning] and Bárány [@barany] are particularly relevant to our study.\
\
It is worth remarking that *additive pressure functions* associated to uniformly hyperbolic dynamical systems and self-conformal fractals were studied before the more complicated subadditive analogues, see [@bowen; @bowen2; @ruelle]. The additive setting is rather simpler and if the associated potential is taken to be the appropriate analogue of the singular value function, then the pressure is real analytic on its whole domain. This is a special case of a more general result of Ruelle [@ruelle]. The proof relies on a transfer operator approach, which does not apply in the non-conformal (or self-affine) setting.\
\
One of the reasons the analyticity (or differentiability) of the pressure is interesting is that it is related to the number of ergodic equilibrium measures for the pressure (this was drawn to our attention by Pablo Shmerkin). Indeed such links have been investigated by Feng-Käenmäki [@fengkaenmaki] and Guivarc’h-Le Page [@lepage], albeit in a slightly different context.
Results
=======
Subadditive pressure for diagonal matrices
------------------------------------------
Suppose the matrices $\{A_i\}_{ i \in \mathcal{I}}$ are all diagonal and write $c_1(i), \dots, c_n(i)>0$ for the absolute values of the diagonal entries of $A_i$. Note that the sets $\{c_1, \dots, c_n\}$ and $\{\alpha_1(i), \dots, \alpha_n(i)\}$ are equal but one cannot say anything about the relative ordering. Indeed, once one starts composing diagonal matrices, the order in which the singular values appear down the main diagonal of the matrix can change, which is one of the main difficulties in computing the pressure. For $\textbf{\emph{i}} = (i_1, i_2, \dots, i_k) \in \mathcal{I}^*$ write $c_1(\textbf{\emph{i}}), \dots, c_n(\textbf{\emph{i}})$ for the diagonal entries of $A_\textbf{\emph{i}}$, noting that $$c_l(\textbf{\emph{i}}) = c_l(i_1) \cdots c_l(i_k)$$ for each $l \in \{1, \dots, n\}$. Let $S_n$ be the symmetric group on $\{1, \dots, n\}$ and for each $\sigma \in S_n$ and $s \in [0,n)$ we define the *$\sigma$-ordered singular value function* $\phi^s_\sigma:\mathcal{I}^* \to (0,\infty)$ by $$\phi^s_\sigma(\textbf{\emph{i}}) =c_{\sigma(1)} (\textbf{\emph{i}}) c_{\sigma(2)}(\textbf{\emph{i}}) \cdots c_{\sigma(m)}(\textbf{\emph{i}}) c_{\sigma(m+1)}(\textbf{\emph{i}})^{s-m}$$ where $m \in \{ 0, \dots, n-1\}$ is the unique non-negative integer satisfying $m {\leqslant}s < m+1$. The key advantage of these ordered singular value functions is that they are multiplicative in $\textbf{\emph{i}}$ instead of only submultiplicative, i.e. $$\phi_\sigma^s(\textbf{\emph{i}}\, \textbf{\emph{j}}) = \phi_\sigma^s(\textbf{\emph{i}}) \, \phi_\sigma^s(\textbf{\emph{j}})$$ for all $\textbf{\emph{i}}, \, \textbf{\emph{j}} \in \mathcal{I}^*$ and $\sigma \in S_n$. This allows us to define the associated pressure by means of a closed form expression, without taking a limit. More precisely, we define the *$\sigma$-ordered pressure* $P_\sigma : [0,n) \to \mathbb{R}$ by $$P_\sigma(s) = \log \sum_{i \in \mathcal{I}} \phi_\sigma^s(i)$$ and observe that $$\sum_{\textbf{\emph{i}} \in \mathcal{I}^k} \phi_\sigma^s(\textbf{\emph{i}}) = \Bigg(\sum_{i \in \mathcal{I}} \phi_\sigma^s(i) \bigg)^k$$ for all $k \in \mathbb{N}$. We extend the domain of each $P_\sigma$ to $[0,\infty)$ as before by setting $P_\sigma(s) = P(s)$ for $s {\geqslant}n$, since the ordering of the diagonal entries of a diagonal matrix does not change the determinant. Again, it is easy to see that $P_\sigma$ is continuous on $[0,\infty)$ and convex on each interval $(m,m+1)$, with $m \in \{0,\dots, n-1\}$, and on $(n, \infty)$. Moreover, it is immediate that $P_\sigma$ is piecewise real analytic, with the only possible phase transitions occurring at the points $\{1, \dots, n\}$.
\[mainmax\] For all $s \in [0, \infty)$ we have $$P(s) = \max_{\sigma \in S_n} P_\sigma(s).$$
We will prove Theorem \[mainmax\] in Section \[mainmaxproof\]. In the case of $2 \times 2$ matrices, where the pressure is the maximum of two functions, this can be found in [@manning]. In fact [@manning] dealt with certain nonlinear maps corresponding to upper triangular matrices. The key point of this result is that we have a closed form expression for the pressure, which is very useful for computational purposes and for analysing differentiability and analyticity, since differentiating a function defined by a limit is awkward. First and foremost, by identifying phase transitions in the pressure with zeros of Dirichlet polynomials, we can deduce the following result.
\[mainanalytic\] For products of non-singular diagonal matrices, the pressure is piecewise real analytic.
We will prove Corollary \[mainanalytic\] in Section \[mainanalyticproof\]. We are able to bound the number of phase transitions (and therefore the number of ‘pieces’ in the piecewise decomposition of $P$) in terms of the number of matrices $\lvert \mathcal{I} \rvert$ and the spatial dimension $n$, however we defer discussion of the explicit bound until Sections \[mainanalyticproof\] and \[questions\]. It is now possible to give various sufficient conditions for $P$ to be real analytic on the whole interval $(m,m+1)$, however, we refrain from stating a myriad of different examples because in practice one would simply plot the different ordered pressures and observe which is the maximum. Then on any interval where one ordered pressure is bigger than or equal to all the others, $P$ is real analytic. However, we do state one sufficiency result which we find particularly intuitive.
\[mainanal\] Let $m \in \{0, \dots, n-1\}$. If there exists $\sigma \in S_n$ such that for all $i \in \mathcal{I}$ $$\{\alpha_1(i), \dots, \alpha_{m}(i)\} = \{ c_{\sigma(1)}(i), \dots, c_{\sigma(m)}(i)\}$$ and $$\alpha_{m+1}(i) = c_{\sigma(m+1)}(i),$$ then $$P(s) = P_\sigma(s)$$ for all $s \in [m,m+1]$ and, in particular, the pressure is real analytic on $(m,m+1)$.
We will prove Corollary \[mainanal\] in Section \[mainanalproof\]. Notice that (especially for large $m$) the sufficient condition for analyticity given above is weaker than requiring $\alpha_1(i) = c_{\sigma(1)}(i)$, $\dots$, $\alpha_{m}(i) = c_{\sigma(m)}(i)$ and $\alpha_{m+1}(i) = c_{\sigma(m+1)}(i)$. However, in that more restrictive setting, we get the following precise corollary.
If there exists $\sigma \in S_n$ such that for all $i \in \mathcal{I}$ and $l \in \{1, \dots, n\}$ $$\alpha_l(i) = c_{\sigma(l)}(i),$$ then $$P(s) = P_\sigma(s)$$ for all $s \in [0, \infty)$ and, in particular, the pressure is real analytic on each interval $(m,m+1)$ with $m \in \{0, \dots, n-1\}$.
In light of Theorem \[mainmax\], non-trivial phase transitions, i.e., phase transitions occurring at non-integer values, can only happen at points when the maximum of the ordered pressures ‘changes hands’ between two different ordered pressures. It is not immediately obvious that this is possible, but it does not take long to find such examples. We will present some examples of non-trivial phase transitions in Section \[examplessection\], as well as a simple example where Corollary \[mainanal\] can be applied to certain intervals.\
\
We conclude this section with the combinatorial observation that, despite there being $n!$ different ordered pressures, there are significantly fewer distinct ones. In particular, we choose the first $m$ entries in the ordered singular value functions, with the ordering irrelevant, and then choose the $(m+1)$th entry from the remaining $n-m$ choices. As such, if we are interested in analysing the pressure in the interval $[m,m+1)$, for some $m \in \{0, \dots, n-1\}$, then we have to take the maximum of $$\label{numberof}
\left( \begin{array}{c}
n \\
m
\end{array} \right) \cdot \left( \begin{array}{c}
n -m \\
1
\end{array} \right) \ = \ n \left( \begin{array}{c}
n -1 \\
m
\end{array} \right)$$ (possibly) distinct functions.
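Since Theorem \[mainmax\] expresses the pressure as a finite maximum of closed-form functions, it is straightforward to evaluate numerically. The following sketch is purely illustrative and not part of the paper (the function names are ours); it computes $P(s)$ for a diagonal system from the tuples of absolute diagonal entries:

```python
import itertools
import math

def ordered_pressure(C, sigma, s):
    """P_sigma(s) = log sum_i phi_sigma^s(i); C[i] = (c_1(i), ..., c_n(i))."""
    n = len(C[0])
    total = 0.0
    for c in C:
        if s >= n:  # determinant branch, identical for every permutation
            phi = math.prod(c) ** (s / n)
        else:
            m = math.floor(s)  # unique integer with m <= s < m + 1
            phi = math.prod(c[sigma[l]] for l in range(m)) * c[sigma[m]] ** (s - m)
        total += phi
    return math.log(total)

def pressure(C, s):
    """P(s) = max over permutations of the ordered pressures (Theorem [mainmax])."""
    n = len(C[0])
    return max(ordered_pressure(C, sigma, s)
               for sigma in itertools.permutations(range(n)))
```

Plotting `pressure` on $[0,n]$ for the diagonal entries listed in Section \[examplessection\] should reproduce the graphs shown in the figure there.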
Self-affine sets generated by simultaneously triangularisable matrices
----------------------------------------------------------------------
In this section assume that $\{A_i\}_{ i \in \mathcal{I}}$ are all contracting upper triangular matrices and as before write $c_1(i), \dots, c_n(i) \in (0,1)$ for the absolute values of the diagonal entries of $A_i$. An interesting, and perhaps surprising result, of Falconer and Miao [@miao] is that the pressure in this setting only depends on the diagonal entries. Moreover, they gave a closed form expression for the pressure in the interval $[m,m+1)$ for $m \in \{0, \dots, n-1\}$ as the maximum of functions of the form $$\log \ \sum_{ i \in \mathcal{I}} \big(c_{j_1}(i) \cdots c_{j_{m}}(i) \big)^{m+1-s} \big(c_{j'_1}(i) \cdots c_{j'_{m+1}}(i) \big)^{s-m}$$ over all independent choices of subsets $\{j_1, \dots , j_{m}\}$ and $\{j'_1, \dots , j'_{m+1}\}$ of $\{1,\dots, n\}$, see [@miao Theorem 2.5]. In particular, in the interval $[m,m+1)$, one takes the maximum of $$\left( \begin{array}{c}
n \\
m
\end{array} \right) \cdot \left( \begin{array}{c}
n \\
m+1
\end{array} \right)$$ functions. For related results see [@barany; @lammering; @manning]. Since the pressure does not depend on the non-diagonal entries of the matrices, we can apply Theorem \[mainmax\] also in the upper triangular setting, simply by ignoring the non-diagonal entries. As such and in view of (\[numberof\]) we can reduce the number of functions needed in the interval $[m,m+1)$ by a factor of $$\left( \begin{array}{c}
n \\
m+1
\end{array} \right) / \left( \begin{array}{c}
n-m \\
1
\end{array} \right)$$ which grows exponentially in $n$ in the central intervals. More precisely, applying Stirling’s formula, the above factor is larger than $2^{n}/(n\sqrt{2n})$ for $n {\geqslant}2$ and choosing $m$ to be the integer part of $n/2$. The rest of the results in the previous section carry over to the upper triangular case, or indeed any set of matrices which are simultaneously triangularisable, i.e. there exists a basis with respect to which all of the matrices are either upper or lower triangular. Most notably we have the following general result.
\[mainanalytictri\] For products of contracting non-singular simultaneously triangularisable matrices, the pressure is piecewise real analytic.
Examples {#examplessection}
========
Let $n=3$ and let $T_1$ and $T_2$ be $3 \times 3$ upper triangular matrices with non-zero positive diagonal entries $c_1(1), c_2(1), c_3(1)$ and $c_1(2), c_2(2), c_3(2)$ respectively. Theorem \[mainmax\] and (\[numberof\]) show that the pressure corresponding to this system is given by the maximum of three functions in the interval $[0,1)$, six functions in the interval $[1,2)$ and three functions in the interval $[2,3)$. By choosing the diagonal entries appropriately, we can create a phase transition in each of these intervals. Choosing $$c_1(1) = 0.9, c_2(1)=0.4 , c_3(1) =0.6, c_1(2)=0.1 , c_2(2)=0.4, c_3(2)=0.2$$ gives the pressure a phase transition at the point $s_1 = 0.5 \in (0,1)$ with $P_-'(s_1) \approx -0.916 < -0.655 \approx P_+'(s_1)$. Choosing $$c_1(1) = 0.1, c_2(1)=0.2 , c_3(1) =0.9, c_1(2)=0.9 , c_2(2)=0.4, c_3(2)=0.2$$ gives the pressure a phase transition at a point $s_2 \approx 1.193 \in (1,2)$ with $P_-'(s_2) \approx -1.469 < -0.978 \approx P_+'(s_2)$. Finally, choosing $$c_1(1) = 0.9, c_2(1)=0.5 , c_3(1) =0.8, c_1(2)=0.9 , c_2(2)=0.5, c_3(2)=0.01$$ gives the pressure a phase transition at a point $s_3 \approx 2.156 \in (2,3)$ with $P_-'(s_3) \approx -1.695 < -0.693 \approx P_+'(s_3)$.
![Top row: plots of the ordered pressures in the range $[0,3]$ for each of the three examples described above. The permutations (written as cycles) corresponding to each colour are as follows: black: (1), blue: (23), green: (12), red: (132), pink: (123), yellow: (13). Bottom row: plots of the standard pressure, which is equal to the maximum of the ordered pressures.](graphscolour){width="162mm"}
For our second example, let $n=7$ and let $T_1$ and $T_2$ be given by $$T_1 = \left( \begin{array}{ccccccc}
2 & -6 & 15 & 0 & -2 & 0 & 2 \\
0 &- 1 & 0 & 1 & -6 & 0 & 0 \\
0 & 0 & 10 & 4 & 9 & 6 & 0 \\
0 & 0 & 0 & 8 & -2 & 0 & 1 \\
0 & 0 & 0 & 0 & -5 & -3 & 4 \\
0 & 0 & 0 & 0 & 0 & 7 & 7 \\
0 & 0 & 0 & 0 & 0 & 0 & 4 \\
\end{array} \right) \hspace{5mm} \text{and} \hspace{5mm} T_2 = \left( \begin{array}{ccccccc}
3 & 2 & 5 & 0 & -6 & -4 & 2 \\
0 & 1 & 2 & 8 & 6 & 1 & 6 \\
0 & 0 & -14 & 1 & 1 & 13 & 3 \\
0 & 0 & 0 & 11 & 9 & 0 & 9 \\
0 & 0 & 0 & 0 & 4 & 10 & 1 \\
0 & 0 & 0 & 0 & 0 & -15 & -5 \\
0 & 0 & 0 & 0 & 0 & 0 & 2\\
\end{array} \right)$$ Choosing $$\sigma = \left( \begin{array}{ccccccc}
1 & 2 & 3 & 4 & 5 & 6 & 7 \\
3 & 4 & 6 & 5 & 7 & 1 & 2 \\
\end{array} \right)$$ we can apply Corollary \[mainanal\] in the intervals $(3,4)$ and $(6,7)$ to deduce that the pressure $P(s) = P_\sigma(s)$, and is hence real analytic, in these regions. Of course we could have just plotted all of the ordered pressures and deduced the regions where the maximum was real analytic, however that would involve plotting 140 functions in the interval $(3,4)$, for example.
Proofs
======
Proof of Theorem \[mainmax\] {#mainmaxproof}
----------------------------
\[lemma1\] For all $s \in [0,n)$ and $\textbf{i} \in \mathcal{I}^*$, we have $\phi^s(\textbf{i}) = \max_{\sigma \in S_n} \phi^s_\sigma(\textbf{i})$.
Let $\textbf{\emph{i}} \in \mathcal{I}^*$ and suppose $s \in [m,m+1)$ for some $m \in \{0,\dots, n-1\}$. Clearly $\phi^s(\textbf{\emph{i}})$ is equal to $\phi^s_\sigma(\textbf{\emph{i}})$ for some $\sigma$ and so $\phi^s(\textbf{\emph{i}}) {\leqslant}\max_{\sigma \in S_n} \phi^s_\sigma(\textbf{\emph{i}})$. Also, in trying to maximise $\phi^s_\sigma(\textbf{\emph{i}})$ over $\sigma$, one must choose a permutation for which $$\{\alpha_1(\textbf{\emph{i}}), \dots, \alpha_{m+1}(\textbf{\emph{i}})\} = \{ c_{\sigma(1)}(\textbf{\emph{i}}), \dots, c_{\sigma(m+1)}(\textbf{\emph{i}})\},$$ i.e. a permutation which ‘uses’ the largest $(m+1)$ singular values and excludes the other (smaller) values. Fix such a permutation $\sigma$. Since $\phi^s_\sigma(\textbf{\emph{i}})$ is symmetric in the values $ c_{\sigma(1)}(\textbf{\emph{i}}), \dots , c_{\sigma(m)}(\textbf{\emph{i}})$, the ordering of the first $m$ terms is irrelevant, and so the only question is which singular value to choose as $c_{\sigma(m+1)}(\textbf{\emph{i}})$. Suppose $\sigma$ is such that $c_{\sigma(m+1)}(\textbf{\emph{i}}) \neq \alpha_{m+1}(\textbf{\emph{i}})$. Cancelling common terms we have $$\frac{\phi^s(\textbf{\emph{i}})}{\phi^s_\sigma(\textbf{\emph{i}})} \ = \ \frac{c_{\sigma(m+1)}(\textbf{\emph{i}}) \, \alpha_{m+1}(\textbf{\emph{i}})^{s-m} }{ \alpha_{m+1}(\textbf{\emph{i}}) \, c_{\sigma(m+1)}(\textbf{\emph{i}})^{s-m} } \ = \ \bigg( \frac{ c_{\sigma(m+1)}(\textbf{\emph{i}}) }{ \alpha_{m+1}(\textbf{\emph{i}}) }\bigg)^{m+1-s} \ {\geqslant}\ 1$$ since $c_{\sigma(m+1)}(\textbf{\emph{i}}) {\geqslant}\alpha_{m+1}(\textbf{\emph{i}})$ and $m+1-s > 0$, which gives $\phi^s(\textbf{\emph{i}}) {\geqslant}\max_{\sigma \in S_n} \phi^s_\sigma(\textbf{\emph{i}})$ and completes the proof.
\[lemma2\] For all $s \in [0,n)$, we have $$\Bigg( \max_{\sigma \in S_n} \ \sum_{\textbf{i} \in \mathcal{I}} \phi^s_\sigma(\textbf{i}) \Bigg)^k \ {\leqslant}\ \sum_{\textbf{i} \in \mathcal{I}^k} \phi^s(\textbf{i}) \ {\leqslant}\ n! \ \Bigg( \max_{\sigma \in S_n} \ \sum_{\textbf{i} \in \mathcal{I}} \phi^s_\sigma(\textbf{i}) \Bigg)^k .$$
Observe that by Lemma \[lemma1\] $$\sum_{\textbf{\emph{i}} \in \mathcal{I}^k} \phi^s(\textbf{\emph{i}}) \ = \ \sum_{\textbf{\emph{i}} \in \mathcal{I}^k}\max_{\sigma \in S_n} \phi_\sigma^s(\textbf{\emph{i}}) \ {\geqslant}\ \max_{\sigma \in S_n} \ \sum_{\textbf{\emph{i}} \in \mathcal{I}^k} \phi^s_\sigma(\textbf{\emph{i}}) \ = \ \Bigg( \max_{\sigma \in S_n} \ \sum_{\textbf{\emph{i}} \in \mathcal{I}} \phi^s_\sigma(\textbf{\emph{i}}) \Bigg)^k$$ since the ordered singular value functions are multiplicative. This yields the left hand inequality in the statement of the lemma. To obtain the right hand inequality, observe that $\phi^s(\textbf{\emph{i}}) = \phi^s_\sigma(\textbf{\emph{i}})$ for some $\sigma$ and so $$\sum_{\textbf{\emph{i}} \in \mathcal{I}^k} \phi^s(\textbf{\emph{i}}) \ {\leqslant}\ \sum_{\textbf{\emph{i}} \in \mathcal{I}^k} \ \sum_{\sigma \in S_n} \phi^s_\sigma(\textbf{\emph{i}}) \ = \ \sum_{\sigma \in S_n} \ \Bigg( \sum_{\textbf{\emph{i}} \in \mathcal{I}} \phi^s_\sigma(\textbf{\emph{i}}) \Bigg)^k \ {\leqslant}\ n! \ \Bigg( \max_{\sigma \in S_n} \ \sum_{\textbf{\emph{i}} \in \mathcal{I}} \phi^s_\sigma(\textbf{\emph{i}}) \Bigg)^k$$ again using multiplicativity of the ordered singular value functions.
Theorem \[mainmax\] now follows easily by applying Lemma \[lemma2\] to obtain $$P(s) \ {\geqslant}\ \log \Bigg( \max_{\sigma \in S_n} \ \sum_{\textbf{\emph{i}} \in \mathcal{I}} \phi^s_\sigma(\textbf{\emph{i}}) \Bigg) \ = \ \max_{\sigma \in S_n} \ \log \sum_{\textbf{\emph{i}} \in \mathcal{I}} \phi^s_\sigma(\textbf{\emph{i}}) \ = \ \max_{\sigma \in S_n} \ P_\sigma(s)$$ and $$P(s) \ {\leqslant}\ \lim_{k \to \infty} \frac{1}{k} \log n! \ + \ \log \Bigg( \max_{\sigma \in S_n} \ \sum_{\textbf{\emph{i}} \in \mathcal{I}} \phi^s_\sigma(\textbf{\emph{i}}) \Bigg) \ = \ \max_{\sigma \in S_n} \ \log \sum_{\textbf{\emph{i}} \in \mathcal{I}} \phi^s_\sigma(\textbf{\emph{i}}) \ = \ \max_{\sigma \in S_n} \ P_\sigma(s).$$
Proof of Corollary \[mainanalytic\] {#mainanalyticproof}
-----------------------------------
To prove that $P$ is piecewise real analytic it suffices to show that for a given $m \in \{0, \dots, n-1\}$ and two given permutations $\sigma, \tau \in S_n$, if the ordered pressures $P_\sigma$ and $P_\tau$ are not equal on the entire interval $(m,m+1)$, then their graphs can only intersect a finite number of times. This is equivalent to showing that the function $$\begin{aligned}
E(s) &:=& \sum_{i \in \mathcal{I}} \phi_\sigma^s(i) - \sum_{i \in \mathcal{I}} \phi_\tau^s(i) \\ \\
&=& \sum_{i \in \mathcal{I}} \Bigg( \frac{c_{\sigma(1)} (i) c_{\sigma(2)}(i) \cdots c_{\sigma(m)}(i)}{c_{\sigma(m+1)}(i)^{m}} c_{\sigma(m+1)}(i)^{s} - \frac{c_{\tau(1)} (i) c_{\tau(2)}(i) \cdots c_{\tau(m)}(i)}{c_{\tau(m+1)}(i)^{m}} c_{\tau(m+1)}(i)^{s} \Bigg)\end{aligned}$$ has at most finitely many zeros in the interval $(m,m+1)$, assuming it is not identically zero. However, this is quickly seen to be true since $E(s)$ is a (generalised) Dirichlet polynomial and therefore can have at most $2 \lvert \mathcal{I} \rvert - 1$ zeros in $\mathbb{R}$. Recall that Dirichlet polynomials are functions of the form $$\sum_{i = 1}^N a_i b_i^s$$ with $a_i \in \mathbb{R}$ and $b_i>0$. A classical result, which can be proved by applying Rolle’s Theorem, is that such functions have at most $N-1$ zeros, provided they are not identically zero. For further information on zeros of Dirichlet polynomials and related topics, see Jameson [@jameson].\
\
If we are interested in bounding the number of phase transitions explicitly, then the following crude estimate can be deduced. We can have trivial phase transitions at the points $\{1, \dots, n\}$. For non-trivial phase transitions in the interval $(m,m+1)$ for $m \in \{0, \dots, n-1\}$, we know that each distinct pair of ordered pressures can give rise to at most $2 \lvert \mathcal{I} \rvert - 1$ phase transitions by the above argument and using (\[numberof\]) there are at most $$\left( \begin{array}{c}
n \left( \begin{array}{c}
n -1 \\
m
\end{array} \right)\\
2
\end{array} \right) $$ distinct pairs of ordered pressures. This yields the following upper bound for the total number of phase transitions: $$n \ + \ \big( 2 \lvert \mathcal{I} \rvert -1 \big)\ \sum_{m=0}^{n-1}\left( \begin{array}{c}
n \left( \begin{array}{c}
n -1 \\
m
\end{array} \right)\\
2
\end{array} \right).$$ We can simplify the summation as follows: $$\begin{aligned}
\sum_{m=0}^{n-1}\left( \begin{array}{c}
n \left( \begin{array}{c}
n -1 \\
m
\end{array} \right)\\
2
\end{array} \right)& = & \frac{n^2}{2} \sum_{m=0}^{n-1} \left( \begin{array}{c}
n-1\\
m
\end{array} \right)^2 \ - \ \frac{n}{2} \sum_{m=0}^{n-1} \left( \begin{array}{c}
n-1\\
m
\end{array} \right)\\
\\ \\
& = & \frac{n^2}{2} \left( \begin{array}{c}
2n-2\\
n-1
\end{array} \right) \ - \ \frac{2^nn}{4}
\\ \\
& = & \frac{n^3}{8n-4} \left( \begin{array}{c}
2n\\
n
\end{array} \right)-\frac{2^nn}{4} \\ \\
& \sim & \frac{n \sqrt{n} \, 4^n}{8\sqrt{\pi}} \end{aligned}$$ as $n \to \infty$, where the final line giving the asymptotic value was obtained by applying Stirling’s formula to the binomial coefficient.
Proof of Corollary \[mainanal\] {#mainanalproof}
-------------------------------
Let $m \in \{0, \dots, n-1\}$ and suppose $\sigma \in S_n$ is such that for all $i \in \mathcal{I}$ $$\{\alpha_1(i), \dots, \alpha_{m}(i)\} = \{ c_{\sigma(1)}(i), \dots, c_{\sigma(m)}(i)\}$$ and $$\alpha_{m+1}(i) = c_{\sigma(m+1)}(i).$$ By following the proof of Lemma \[lemma1\], it is easily seen that $\phi_\sigma^s(i) = \max_{\sigma' \in S_n} \phi_{\sigma'}^s(i)$ for all $i \in \mathcal{I}$ and $s \in [m,m+1]$, and therefore by Theorem \[mainmax\] $$P(s) = \max_{\sigma' \in S_n} P_{\sigma'}(s) = P_\sigma(s)$$ for all $s \in [m,m+1]$, completing the proof.
Some open questions and discussion {#questions}
==================================
We have proved that the pressure is piecewise real analytic for products of diagonal matrices and simultaneously triangularisable matrices. However, this falls significantly short of proving this in general and we therefore ask the following question.
Is the pressure always piecewise real analytic or at least piecewise differentiable?
In our setting we can bound the number of phase transitions by $$\label{boundd}
n \ + \ \big( 2 \lvert \mathcal{I} \rvert -1 \big) \ \Bigg(\frac{n^3}{8n-4} \left( \begin{array}{c}
2n\\
n
\end{array} \right)-\frac{2^nn}{4}\Bigg),$$ however, this is very crude. For a fixed spatial dimension, (\[boundd\]) grows linearly in the number of matrices, which seems reasonable, but for a fixed number of matrices it grows as $$\sim \ \frac{ 2 \lvert \mathcal{I} \rvert -1 }{8\sqrt{\pi}} \, n \sqrt{n} \, 4^n$$ as the spatial dimension $n \to \infty$, which seems far too fast and gives poor estimates. For example, for 2 matrices in dimension 5 the explicit bound is 2510. It would be interesting to search for optimal bounds or to just improve (\[boundd\]).
In the setting of upper triangular matrices, what is the optimal bound on the number of phase transitions for the pressure in terms of $\lvert \mathcal{I} \rvert$ and $n$?
It would certainly be possible to reduce the bound (\[boundd\]) via a more careful application of Rolle’s Theorem or Descartes’ rule of signs to the Dirichlet polynomial $E(s)$, but we omit further details. We emphasise that the purpose of this paper is to prove piecewise analyticity and not to study combinatorial issues concerning the sharpness of the bound on the possible number of phase transitions. Another possible problem to consider is the existence and nature of *higher order phase transitions*, i.e. points for which the pressure is $C^k$ but not $C^{k+1}$ for some $k$. We have only been able to exhibit 0th order phase transitions, i.e. points where the pressure is continuous but not differentiable. Since our main result gives an explicit formula for the pressure, it should provide a useful tool in searching for higher order phase transitions, but we have not pursued this here. Finally, we ask a more open ended question.
Is there any interesting geometric or dynamical significance of the ordered pressures in regions where they are strictly less than the subadditive pressure?
**Acknowledgements**
This work was completed while the author was a Research Fellow at the University of Warwick where he was financially supported by the EPSRC grant EP/J013560/1. He thanks Pablo Shmerkin for helpful discussions and for providing some useful references.
[99]{}
B. Bárány. Subadditive pressure for IFS with triangular maps, *Bull. Pol. Acad. Sci. Math.*, [**57**]{}, (2009), 263–278.
L. M. Barreira. A non-additive thermodynamic formalism and applications to dimension theory of hyperbolic dynamical systems, *Ergodic Theory Dynam. Systems*, [**16**]{}, (1996), 871–927.
R. Bowen. *Equilibrium states and the ergodic theory of Anosov diffeomorphisms*, Lecture Notes in Math. 470. Berlin: Springer, 1975.
R. Bowen. Hausdorff dimension of quasicircles, *Inst. Hautes Études Sci. Publ. Math.*, [**50**]{}, (1979), 11–25.
A. Douady and J. A. Oesterlé. Dimension de Hausdorff des attracteurs, *C. R. Acad. Sci. Paris Sr. A*, [**290**]{}, (1980), 1135–1138.
K. J. Falconer. The Hausdorff dimension of self-affine fractals, [*Math. Proc. Camb. Phil. Soc.*]{}, [**103**]{}, (1988), 339–350.
K. J. Falconer. The Hausdorff dimension of self-affine fractals II, [*Math. Proc. Camb. Phil. Soc.*]{}, [**111**]{}, (1992), 169–179.
K. J. Falconer. Bounded distortion for non-conformal repellers, [*Math. Proc. Camb. Phil. Soc.*]{}, [**115**]{}, (1994), 315–334.
K. J. Falconer and B. Lammering. Fractal properties of generalized Sierpiński triangles, *Fractals*, [**6**]{}, (1998), 31–41.
K. J. Falconer and J. Miao. Dimensions of self-affine fractals and multifractals generated by upper-triangular matrices, *Fractals*, [**15**]{}, (2007), 289–299.
D.-J. Feng and A. Käenmäki. Equilibrium states of the pressure function for products of matrices, *Discrete Contin. Dyn. Syst.*, [**30**]{}, (2011), 699–708.
Y. Guivarc’h and E. Le Page. Simplicité de spectres de Lyapounov et propriété d’isolation spectrale pour une famille d’opérateurs de transfert sur l’espace projectif, *Random Walks and Geometry*, Walter de Gruyter GmbH & Co. KG, Berlin, (2004), 181–259.
G. J. O. Jameson. Counting zeros of generalized polynomials: Descartes’ rule of signs and Laguerre’s extensions, *Math. Gazette*, [**90**]{}, (2006), 223–234.
A. Käenmäki and M. Vilppolainen. Dimension and measures on sub-self-affine sets, *Monatsh. Math.*, [**161**]{}, (2010), 271–293.
A. Manning and K. Simon. Subadditive pressure for triangular maps, *Nonlinearity*, [**20**]{}, (2007), 133–149.
D. Ruelle. Repellers for real analytic maps, *Ergodic Theory Dynamical Systems* [**2**]{}, (1982), 99–107.
B. Solomyak. Measure and dimension for some fractal families, [*Math. Proc. Camb. Phil. Soc.*]{}, [**124**]{}, (1998), 531–546.
Get your jars in a boiling water bath to sterilize or you can put them in the dishwasher and run it on sterilize, but I think that uses a lot of electricity.
I just put a towel on the bottom of my pan, set the jars in, fill with water, then get them to boiling. Turn it down to medium and get started on the jam. They are sterile by the time the jam is ready to go in them (10 minute minimum). Put the lids and bands in another small pot, cover with water, get them boiling and turn down on medium.
Add the sugar all at once and stir. Keep stirring until it gets to a full rolling boil like in the picture above. You want it to boil like that for 1 minute. (Keep stirring...you don't want the bottom to scorch.) Then take it off the heat and set a timer for 5 minutes. During that time I take all my jars very carefully out of the water. I use a long skinny wooden spoon and pair of hefty tongs to do this. I set them on a towel on the counter upside down. Then I set the pan of jelly on a hot pad right next to them on the counter. I also set the lids and bands next to those.
When the five minute timer is up I skim the top of the jelly. There will be a bit of foam, but not nearly as much as if you hadn't put the butter in. Set a jar next to the pot, put the funnel on, and funnel jelly into the jar leaving 1/4 - 1/2 inch head space. Move on to the next jar until all are full. (I usually have a little bit left over that goes into a jar that will go in the fridge for breakfast the next morning.)
Once the jars are all full I wipe the rims of the jars with a hot damp dishrag. You may need more than one, depending on how messy you were filling the jars. *smile* Next take a lid out of the water and dry it off with a clean towel, set it on the jar, take out a band, dry it off, then screw it on the jar. Make sure it's tight. Set jars upside down on a clean towel. Repeat until all jars are tended to. Place the jars 1 inch apart. Once the last jar is flipped upside down set the timer for 5 minutes. (This helps seal the lids.) After 5 minutes flip them right side up and in an hour or so you'll hear little pops. Check all the lids in a few hours to make sure the lids are sealed. If you can press on the lid gently and there's some give then you need to pop that one in the fridge and use within a few weeks. There's no need to process the jelly/jam if your jars, lids, and bands are sterile and hot and your jelly/jam is hot. I know what some people say. I have done this for a few years now and never had a problem with my jelly/jam. And I make a lot of jelly and jam! The instructions on the box of Kroger pectin even tell you to do it this way. Why waste the time and energy when you don't need to? Trust me. It works. Enough said? Good. *smile*
Phew. That was a process, but let me tell you it gets easier each time you do it. I would much rather do several batches in a row this way because it becomes like an assembly line. When my jars are inverted for 5 minutes I take the time to wash out the pan for the jelly, wash the jars and get everything set up for the next batch. It's so rewarding when you're done to see all those pretty little jars lined up on the counter. I love it and I hope you do, too! | http://www.oldhousehomestead.com/2011/06/peach-jelly-without-canner.html |
Q:
Approximation in a Sobolev Space
Consider the open unit disk $\mathbb{D}$ in $\mathbb{R}^2$. In my analysis course, we defined the Sobolev space $H^1(\mathbb{D})$ in a somewhat unusual way. More precisely, $H^1(\mathbb{D})$ was defined to be the completion of the space
$$
\left\{ f \in C^1(\mathbb{D}) : \left\Vert{f}\right\Vert_{H^1(\mathbb{D})} < \infty \right\}
$$
where
$$
\left\Vert f \right\Vert_{H^1(\mathbb{D})} = \left( \int_\mathbb{D} \left\vert f \right\vert^2 + \left\vert \nabla f \right\vert^2 \right)^{1/2}.
$$
Now consider the function
$$
f(x) = \ln\left( \ln\left(1 + \frac{1}{\left\vert x \right\vert}\right)\right)
$$
on $\mathbb{D}$. According to the usual ``weak derivative'' definition of Sobolev spaces (i.e. that used in Evans), I can prove that $f \in H^1(\mathbb{D})$. However, I am unsure of how to establish this with the definition we are using.
My attempts so far have involved seeking out functions in $C^1(\mathbb{D})$ converging pointwise a.e. to $f(x)$ and checking whether this sequence also converges to $f$ with respect to the $H^1$-norm. For instance, I considered the sequence in $C^1(\mathbb{D})$ given by
$$
f_n(x)= \ln\left( \ln\left(1 + \frac{1}{\sqrt{x_1^2 + x_2^2 + \frac{1}{n}}}\right)\right).
$$
Here, I am writing $x = (x_1,x_2) \in \mathbb{D}$.
Unfortunately, I was unable to show that the sequence $(f_n)$ was even Cauchy in $H^1(\mathbb{D})$. Is this the right approach? Or would a different sequence work better? I think mollifiers may work but I was hoping to avoid this in favour of an explicit sequence.
A:
Hint: One way to solve the problem is to patch the singularity at the origin.
$f(x)=\log\left(\log\left(1+1/|x|\right)\right).$
$\mathrm{grad}f(x)= \frac{-x}{|x|^3\left(1+1/|x|\right)\log\left(1+1/|x|\right)}.$
Let $z_n = 1/(e^n - 1)$,
then $f(z_n, 0) =\log(n)$ and
$$\mathrm{grad}f(z_n, 0)=(-\frac{(\exp(n/2)-\exp(-n/2))^2}n,0). $$
Define $f_n(x) := f(x)$ when $|x|\geq z_n$. When $|x|<z_n$ let $f_n$ be the paraboloid which matches the gradient of $f$ and the value of $f$ on the circle where $|x|=z_n$, with vertex at the origin. The $f_n$ are $C^1$ and they converge to $f(x)$ in the $H^1$ norm.
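To see why the $H^1$ convergence works, here is a quick sketch of the key estimate. Near the origin $\log(1+1/r) \sim \log(1/r)$, so
$$|\nabla f(x)| \sim \frac{1}{|x|\log(1/|x|)} \quad (x \to 0), \qquad \int_{|x|<\varepsilon} |\nabla f|^2 \, dx \approx 2\pi \int_0^\varepsilon \frac{dr}{r\log^2(1/r)} = 2\pi \int_{\log(1/\varepsilon)}^{\infty} \frac{du}{u^2} < \infty,$$
using the substitution $u = \log(1/r)$. Hence the contribution of the disc $\{|x| < z_n\}$ to $\Vert f - f_n \Vert_{H^1}$ tends to $0$ as $n \to \infty$ (the paraboloid piece contributes even less, since its gradient grows linearly from $0$ up to the matched boundary value, and $f$ itself grows so slowly that its $L^2$ contribution on the shrinking disc also vanishes).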
This calculator is perfect for finding the square yardage of any area, especially for construction, remodeling, and renovation projects such as carpeting, ...
homeguides.sfgate.com/convert-square-footage-yards-8706.html
Fortunately, converting square feet to square yards is a simple calculation, ... Multiply the rectangle's length by its width to determine the area in square feet.
www.howdogardener.com/calculator-square-feet-to-square-yards
Jul 22, 2017 ... Measure the square footage of the room or area (multiply the length times the width), enter it in the calculator, and the area in square yards will ...
www.asknumbers.com/square-feet-to-square-yard.aspx
Square feet to sq. yards (ft2 to yd2) area conversion factor is 0.11111 since there are 9 sq. feet in a sq. yard. To find out how many square yards in square feet, ...
mathforum.org/library/drmath/view/58392.html
I need to know how to change sq. ft. to sq. yds. for carpeting. ... many square yards in a room let's say 8 ft. by 11 ft., would the formula be 8 times ... Checking with Entisoft, we find: 88 foot^2 = 9.77777777777778 yard^2 (area) ...
www.metric-conversions.org/area/square-feet-to-square-yards.htm
Square Feet to Square Yards (ft² to yd²) conversion calculator for Area conversions with additional tables and formulas.
mathcentral.uregina.ca/QQ/database/QQ.09.06/michelle1.html
There are 36 inches in a yard so a region that is 1 yard by 1 yard will be 36 inches ... in square inches you need to divide by 1296 to find its area in square yards.
www.calculatorsoup.com/calculators/construction/cubic-yards-calculator.php
rectangle area for cubic yard calculation ... Cubic Yards Formulas and Images for Different Areas ... Calculate cubic yards using depth in a square area.
www.quora.com/What-is-the-formula-to-convert-square-feet-to-square-yards
Formula to convert Square Feet to Square Yard 1 square yard = 9 square feet, or 1 square feet ... Use this Land Area Converter & Calculator to convert from any land area unit to the other. ... How do I calculate square inches from feet? How do I ... | http://www.ask.com/web?qsrc=6&o=102140&oo=102140&l=dir&gc=1&qo=popularsearches&ad=dirN&q=Formula+to+Figure+Area+in+Square+Yards |
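Putting the formula from these results into one line: since 1 yard = 3 feet, 1 square yard = 3 ft × 3 ft = 9 square feet, so divide square footage by 9. A small sketch of the calculation (the function name is ours):

```python
def sqft_to_sqyd(area_sqft):
    """Convert area in square feet to square yards (1 sq yd = 9 sq ft)."""
    return area_sqft / 9.0

# The 8 ft x 11 ft room from the Dr. Math snippet:
print(sqft_to_sqyd(8 * 11))  # 9.777... square yards
```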
Ad-hoc networks have received extensive research attention in recent years due to their characteristics of self-organization and flexible networking. However, because of the absence of a centralized administration and limited system resources, guaranteeing communication security in ad-hoc networks is quite challenging. Specifically, in a multi-hop environment, since the information needs to be transmitted and relayed multiple times, the threat from information leakage becomes higher and the secrecy guarantee is quite difficult. The traditional methods of interception avoidance are based on encryption technologies, which may not be applicable to emerging ad-hoc networks. For instance, the time-varying network topologies require complicated key management, which is hard to accomplish in decentralized networks. Besides, the computing and processing abilities of the nodes may be limited and cannot afford sophisticated encryption calculations.
On the other hand, physical layer security, an approach to achieving secrecy from the information-theoretic perspective by utilizing the characteristics of wireless channels, has been widely studied for its advantages of low complexity and convenient distributed implementation [2, 3]. As a result, various network models applying physical layer security have been investigated in the literature, such as interference channels [4, 5], broadcast and multi-access channels [6, 7], cooperative relay channels [8, 9, 10], and multi-antenna channels [11, 12, 13, 14]. Physical layer security approaches have also been introduced into multi-hop ad-hoc and relaying networks. For instance, one work compared three commonly-used relay selection schemes for a dual-hop network under secrecy constraints in the presence of eavesdroppers. Another proposed an optimal power allocation strategy for a predefined routing path to maximize the achievable secrecy rate under a maximum power budget. A further study examined the secrecy and connection outage performance under amplify-and-forward (AF) and decode-and-forward (DF) protocols for an end-to-end route, and discussed the trade-off between security and QoS performance. Yet another explored methods to guarantee network security via routing and power optimization for a network with cooperative jamming deployed. In particular, that work assumed over-optimistically that each jammer was located near one malicious eavesdropper to interfere with the wiretapped information, which would hardly be true in practice.
However, in the aforementioned works, perfect channel state information (CSI) and the locations of the eavesdroppers are assumed to be available at the legitimate users, which is often impractical since the eavesdroppers usually work passively and remain silent to hide their existence. As a result, a general framework based on stochastic geometry was proposed to model the uncertainty of the eavesdroppers' locations. In fact, the Poisson point process (PPP) is the most widely adopted distribution and has been used in various studies of network security, e.g., [20, 21, 22]. Under the framework of stochastic geometry, one prior work investigated the secure routing problem with a routing strategy aiming to achieve the highest secure connection probability under the DF relaying protocol; the locations of the eavesdroppers were assumed to follow a homogeneous PPP and both colluding and non-colluding eavesdropping were considered. Yet optimal power allocation was not considered there, and it remained unclear how the power allocation affects the system performance.
We have to point out here that, for routing security in a multi-hop network, the performance of secure communication is coupled with the number of hops and the transmit power of each hop. With higher (lower) transmit power, each hop can support a larger (smaller) transmission distance, so that fewer (more) routing hops are required to successfully relay messages to the destination. However, higher (lower) transmit power increases (decreases) the probability of information leakage, while fewer (more) routing hops decrease (increase) it. Hence a non-trivial trade-off between the number of hops and the transmit power of each hop naturally exists for the secrecy performance. However, none of the above works has revealed this interesting trade-off, which is the main focus of this paper. In particular, we investigate the secure routing and transmit power optimization problem in DF relaying networks with PPP-distributed eavesdroppers. The routing secrecy is evaluated by the minimum connection outage probability (COP) subject to a constraint on the secrecy outage performance. The optimal route achieving the lowest COP is selected from all possible routing paths, and the corresponding transmit power of each hop is also optimized. Moreover, friendly jamming is applied to further improve the security performance. Different from previous studies, we a) consider secure routing under randomly distributed eavesdroppers and optimize the transmit powers jointly, and b) solve the power optimization problem with jamming using both the optimal monotonic optimization and the successive convex approximation (SCA) methods. The main contributions are summarized as follows:
1) The secure routing design for a multi-hop network with PPP-distributed eavesdroppers is formulated as an optimization problem which minimizes the COP under an SOP constraint. The SOP and COP expressions for a given end-to-end path are derived in closed form, and the closed-form expression of the optimal transmit powers is obtained. By analyzing the expression of the minimum achieved COP for a given route and defining the routing weights, the routing problem can be interpreted as finding the route with the lowest sum of weights, which can be solved optimally by Dijkstra's algorithm.
2) A friendly jammer with multiple antennas is introduced to enhance the outage performance. For any fixed jamming power, the transmit power allocation for the legitimate nodes on the obtained route is formulated as a monotonic optimization problem. The outer polyblock approximation with a one-dimensional search algorithm is proposed to achieve the globally optimal solution. Later, to strike a balance between system performance and computational complexity, the SCA method is used to solve the problem in view of its non-convexity. Though the solution derived from the SCA method is not globally optimal, numerical results show that it achieves close-to-optimal performance when given a proper initial point.
3) The trade-off between the number of hops and the transmit power of each hop is discussed for routing security. The distribution of the number of hops derived from simulation indicates that too many or too few hops increase the leakage of information and rarely guarantee the security performance. This accentuates the importance of the joint consideration of transmit powers and secure routing.
The remainder of this paper is organized as follows. In Section II, we present the system model of the multi-hop relaying network and formulate the secure routing as an optimization problem. In Section III, the routing and power optimization method is provided. The power optimization problem taking friendly jamming into account is proposed in Section IV; the outer polyblock approximation algorithm and the SCA algorithm are then given in Section V. Numerical results are presented in Section VI to illustrate the performance of the proposed algorithms. The conclusions are summarized in Section VII.
The following notations are used in this paper. $(\cdot)^{H}$ and $|\cdot|$ represent the Hermitian transpose and the absolute value, respectively. $\Pr(\cdot)$ denotes probability and $\mathbb{E}_{A}[\cdot]$ denotes the mathematical expectation with respect to $A$. $\mathcal{CN}(\mu, \sigma^2)$ represents the circularly symmetric complex Gaussian distribution with mean $\mu$ and variance $\sigma^2$. The union and difference between two sets $\mathcal{A}$ and $\mathcal{B}$ are denoted by $\mathcal{A} \cup \mathcal{B}$ and $\mathcal{A} \setminus \mathcal{B}$, respectively.
II. System Model and Problem Description
We consider a multi-hop wireless ad-hoc network consisting of $N$ legitimate nodes. The distribution of the eavesdroppers in the network follows a homogeneous PPP, denoted as $\Phi_E$, with density $\lambda_E$. Each of the legitimate nodes and eavesdroppers is equipped with a single omnidirectional antenna. One legitimate node (the source $S$) aims to send messages to another (the destination $D$) in the network. In order to transmit information from the source node to the destination node securely, a routing path needs to be found. As we have mentioned before, there exists a non-trivial trade-off between the number of hops and the transmit powers. Therefore the messages can be sent either directly to the destination with a high transmit power, or through multiple hops via several relays. Assuming a routing path contains $K$ relay nodes, it consists of $K+1$ hops, with the transmitter and the receiver of the $i$-th hop denoted as $A_i$ and $B_i$, respectively (so that $A_1 = S$, $B_{K+1} = D$, and $B_i = A_{i+1}$). Then, the entire routing path can be denoted by $\Pi = \{A_1 \to B_1, \dots, A_{K+1} \to B_{K+1}\}$. An illustration of the system model is shown in Fig. 1.
The wireless channels are subject to small-scale Rayleigh fading together with a large-scale path loss. Each Rayleigh fading coefficient $h_{u,v}$ (where $u$ and $v$ denote the transmitter and receiver of the link, respectively) is modeled as independent complex Gaussian with zero mean and unit variance, i.e., $h_{u,v} \sim \mathcal{CN}(0,1)$, and the path loss exponent is $\alpha$. We assume that the CSI and the locations of the legitimate nodes are known, while those of the eavesdroppers cannot be obtained because the eavesdroppers work passively.
Since each route from the source to the destination is composed of several hops, under the DF relaying scheme, a widely adopted protocol in the literature [15, 16, 17, 18], we first consider the transmission of hop $i$ from $A_i$ to $B_i$. Let $x_i$ denote the symbol transmitted by $A_i$; then the received signals at the legitimate receiver and the eavesdroppers are given by

$$y_{B_i} = \sqrt{P_i\, d_{A_i B_i}^{-\alpha}}\, h_{A_i B_i}\, x_i + n_{B_i}, \qquad (1)$$

$$y_{e} = \sqrt{P_i\, d_{A_i e}^{-\alpha}}\, h_{A_i e}\, x_i + n_{e}, \quad e \in \Phi_E, \qquad (2)$$

where $y_{B_i}$ and $y_e$ denote the received signals at receiver $B_i$ and eavesdropper $e$, respectively, $P_i$ denotes the transmit power of node $A_i$ in route $\Pi$, $d_{u,v}$ denotes the distance between $u$ and $v$, and $n_{B_i}$ and $n_e$ are the noises at $B_i$ and $e$ following $\mathcal{CN}(0,\sigma^2)$.

Now we can derive the expressions of the SNR at $B_i$ and at eavesdropper $e$, which are given by

$$\gamma_{B_i} = \frac{P_i\, |h_{A_i B_i}|^2\, d_{A_i B_i}^{-\alpha}}{\sigma^2}, \qquad (3)$$

$$\gamma_{e,i} = \frac{P_i\, |h_{A_i e}|^2\, d_{A_i e}^{-\alpha}}{\sigma^2}, \qquad (4)$$

respectively.
In this paper, we adopt the connection outage probability (COP) and the secrecy outage probability (SOP) as performance metrics to measure the routing security. To improve security, we let the transmit nodes use different codebooks when retransmitting the signal, so that the eavesdroppers cannot combine the wiretapped signals from multiple hops and can only decode these signals individually. (The problem with colluding eavesdroppers requires the CDF of the sum of a random number of independent but non-identically distributed exponential variables, where the number follows a PPP distribution, which is quite complicated; due to space limitations, we focus here on the non-colluding case.) For an entire routing path, an end-to-end connection outage refers to the event that the received SNR at any hop in the route is less than a predefined threshold $\gamma_c$, so that the receiver cannot decode the message successfully; the probability of this event is called the connection outage probability. A secrecy outage occurs when the SNR of at least one eavesdropper at any hop surpasses the predefined threshold $\gamma_s$, so that the message can be intercepted by the eavesdropper(s); the probability of this event is called the secrecy outage probability. (The traditional SOP definition is equivalent to the one adopted here under an appropriate choice of the secrecy threshold.) The COP and SOP of route $\Pi$ are denoted as $p_{co}(\Pi)$ and $p_{so}(\Pi)$, respectively.
We consider the problem of finding the optimal routing path and the transmit powers of each hop to achieve the lowest COP, subject to the constraint that the SOP is no more than a predetermined value. Denote by $\mathcal{L}$ the set of all feasible routing paths from the source to the destination and by $\epsilon$ the maximum tolerable SOP; the optimization problem can be defined as

$$\min_{\Pi \in \mathcal{L}}\; \min_{\{P_i\}}\; p_{co}(\Pi) \quad \text{s.t.} \quad p_{so}(\Pi) \leq \epsilon. \qquad (5)$$

The objective function is equivalent to optimizing the route and the transmit powers sequentially, that is,

$$\min_{\Pi \in \mathcal{L}} \Big( \min_{\{P_i\}:\, p_{so}(\Pi) \leq \epsilon}\; p_{co}(\Pi) \Big). \qquad (6)$$

Therefore, in the following section, a secure routing method is proposed to solve this problem. The method can be divided into two parts: first, we optimize the transmit powers for any given route; then we find the optimal secure route from the source to the destination.
III. Power Optimization and Secure Routing
In this section, we study problem (5) and propose a method for finding a secure route, together with a power optimization strategy, for the considered multi-hop network. The closed-form expressions of the COP and SOP are derived first and then used to facilitate the optimization of the transmit powers. Finally, the optimal secure routing path is obtained.
Iii-a Connection and Secrecy Outage Probabilities
First, we derive the exact expressions of the COP and SOP for a given route. According to the definition of the COP and the assumption of independent fading in Section II, the COP for route $\Pi$, denoted as $p_{co}(\Pi)$, can be written as

$$p_{co}(\Pi) = 1 - \prod_{i=1}^{K+1} \Pr\big(\gamma_{B_i} \geq \gamma_c\big) = 1 - \exp\!\Big(-\sum_{i=1}^{K+1} \frac{\gamma_c \sigma^2 d_{A_i B_i}^{\alpha}}{P_i}\Big), \qquad (7)$$

where (7) holds since the fading gain $|h_{A_i B_i}|^2$ follows an exponential distribution with unit mean.

On the other hand, due to the usage of different codebooks at each hop and since the distribution of the eavesdroppers follows a homogeneous PPP, the SOP for route $\Pi$, denoted as $p_{so}(\Pi)$, is given by

$$p_{so}(\Pi) = 1 - \prod_{i=1}^{K+1} \mathbb{E}_{\Phi_E}\Big[ \prod_{e \in \Phi_E} \Pr\big(\gamma_{e,i} < \gamma_s\big) \Big]. \qquad (8)$$

To facilitate the derivation of a concise SOP expression, the distribution of the eavesdroppers for each hop is assumed to be uncorrelated with that of the other hops, which yields an upper bound on the SOP under the original stationary-eavesdroppers assumption. Hence $p_{so}(\Pi)$ can be reformulated as

$$p_{so}(\Pi) \stackrel{(a)}{=} 1 - \prod_{i=1}^{K+1} \exp\!\Big(-\lambda_E \int_0^{2\pi}\!\!\int_0^{\infty} e^{-\gamma_s \sigma^2 r^{\alpha}/P_i}\, r\, dr\, d\theta\Big) \stackrel{(b)}{=} 1 - \exp\!\Big(-\frac{2\pi \lambda_E \Gamma(2/\alpha)}{\alpha (\gamma_s \sigma^2)^{2/\alpha}} \sum_{i=1}^{K+1} P_i^{2/\alpha}\Big), \qquad (9)$$

where $(a)$ holds by the probability generating functional lemma (PGFL) for the homogeneous PPP, under the assumption that the transmitter of each hop is located at the origin of the polar coordinates, and $(b)$ holds by the integration formula [Gradshteyn & Ryzhik, eq. (3.326.2)] with $\Gamma(\cdot)$ denoting the Gamma function.
Till now, we have obtained the expressions for $p_{co}(\Pi)$ and $p_{so}(\Pi)$ in (7) and (9), respectively. The two formulas indicate that the transmit powers have opposite influences on the COP and the SOP: a higher power leads to less connection outage but a higher probability of information leakage. Therefore, when studying the COP and SOP performance jointly, this trade-off needs to be considered and a careful design is required for the transmit powers. For the sake of conciseness, defining $a_i = \gamma_c \sigma^2 d_{A_i B_i}^{\alpha}$ and $b = \frac{2\pi \lambda_E \Gamma(2/\alpha)}{\alpha (\gamma_s \sigma^2)^{2/\alpha}}$, (7) and (9) can be simplified as

$$p_{co}(\Pi) = 1 - \exp\!\Big(-\sum_{i=1}^{K+1} \frac{a_i}{P_i}\Big), \qquad (10)$$

$$p_{so}(\Pi) = 1 - \exp\!\Big(-b \sum_{i=1}^{K+1} P_i^{2/\alpha}\Big), \qquad (11)$$

respectively.
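For concreteness, the two closed forms (10) and (11) are easy to evaluate numerically; the following sketch (ours, with illustrative parameter names, not code from the paper) does so for given per-hop powers:

```python
import math

def cop_sop(P, hop_dists, alpha, gamma_c, gamma_s, sigma2, lam_E):
    """Evaluate the closed forms (10) and (11) for per-hop powers P.
    a_i = gamma_c * sigma^2 * d_i^alpha; b as defined in the text."""
    a = [gamma_c * sigma2 * d ** alpha for d in hop_dists]
    b = (2 * math.pi * lam_E * math.gamma(2 / alpha)
         / (alpha * (gamma_s * sigma2) ** (2 / alpha)))
    p_co = 1 - math.exp(-sum(ai / Pi for ai, Pi in zip(a, P)))
    p_so = 1 - math.exp(-b * sum(Pi ** (2 / alpha) for Pi in P))
    return p_co, p_so
```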
III-B. Transmit Power Optimization
Now we focus on optimizing the transmit powers of each hop to minimize the COP while satisfying the maximum tolerable SOP constraint. The power optimization problem for any given route can be written as
|(12)|
Substituting (11) into the inequality constraint of (12), we have
|(13)|
which can be further transformed into
|(14)|
Notice that is a non-increasing function of . Since the expression on the left side of the inequality constraint is non-decreasing respect to , this problem reaches its optimum when the inequality constraint is active at the optimal solution. As a result, we can safely replace the inequality sign with an equality sign and (12) can be rewritten as
|(15)|
Problem (15) is not convex since its constraint is not affine (except when , which represents propagation in free space). In order to reformulate (15) into a convex form, defining , we have . Therefore, (15) can be rewritten as
|(16)|
Problem (16) is a convex problem due to its convex objective function and affine equality constraint and its global optimum can be obtained. Applying the Lagrange multiplier method associated with the equality constraint in (16), we have the following function:
|(17)|
where the new scalar variable is the Lagrange multiplier. Then we set the partial derivatives of the Lagrangian with respect to each transformed power to zero, which yields
|(18)|
Substituting (18) into the constraint in (16), the expression of the Lagrange multiplier is derived as
|(19)|
Then, substituting (19) into (18), we have
|(20)|
Finally, reversing the change of variables, we derive the optimal transmit power of each hop as
|(21)|
The influence of the eavesdropper density and the SOP constraint on the transmit power can be observed from (21): a higher eavesdropper density or a tighter SOP threshold leads to a lower optimal transmit power. This result is intuitive, since lowering the transmit power reduces the risk of information leakage, which is necessary to guarantee security in the presence of more eavesdroppers or to satisfy a more stringent SOP constraint.
So far, we have solved the inner optimization of (6). We still need to find the secure route with the minimum COP from all possible paths in the multi-hop network.
III-C Optimal Route Selection
Since the transmit power of each hop on any route under the SOP constraint has the form shown in (21), and the COP is expressed as (10), the secure routing problem can be rewritten as
which is equivalent to
|(22)|
Treating the resulting per-hop quantity as the weight of each hop, expression (22) can be interpreted as finding the route with the minimum sum of weights. This can be solved efficiently by Dijkstra's algorithm. Having obtained the optimal route and calculated the transmit powers for all transmission nodes using (21), the minimum COP can be obtained via (10). The whole optimization procedure is shown in Algorithm 1.
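To make the route-selection step concrete, here is a minimal sketch of finding the minimum-weight route with Dijkstra's algorithm. The graph structure and node names are hypothetical placeholders, and each edge weight stands in for the per-hop weight of (22), which is not reproduced here.

```python
import heapq

def dijkstra_route(graph, source, destination):
    """Return (path, total_weight) of the minimum-weight route.

    graph: dict mapping node -> list of (neighbor, hop_weight) pairs,
    where each hop_weight stands in for the per-hop term in (22).
    """
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == destination:
            break
        for neighbor, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(heap, (nd, neighbor))
    if destination != source and destination not in prev:
        return None, float("inf")  # destination unreachable
    # Reconstruct the path by walking predecessors backwards.
    path, node = [], destination
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path)), dist[destination]
```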
The relation between the optimal route and the system parameters of the network is worth discussing. From (22) we notice that the routing optimization is determined by the hop weight, which is independent of the eavesdroppers' information as well as the constraint on the SOP. This indicates that changing the density of eavesdroppers in the network or the SOP threshold of the optimization problem does not impact the final selection of the secure route. This can be interpreted from the following perspective: since the distribution of eavesdroppers is homogeneous, and their CSI and locations are unknown, eavesdroppers appear homogeneously along any candidate route of the legitimate network. As a result, the expected influence of the eavesdroppers on all route options is equal; in other words, the eavesdroppers' information does not affect the optimal routing design. Hence the SOP constraint, which concerns only the eavesdroppers, has no effect on routing either.
IV Transmission Power Optimization with Friendly Jamming
Algorithm 1 provides us with an effective way to obtain the secure route and the optimal transmit powers in the presence of randomly distributed eavesdroppers. In order to enhance the security performance further, we now consider the existence of a friendly jammer. This scenario is often feasible in practical applications. Take the device-to-device (D2D) communication system as an example: a cellular base station can work as a friendly jammer to interfere with the interception of eavesdroppers and assist the legitimate communication among the D2D users. Hence, in this section, based on the optimal secure route obtained from Algorithm 1, friendly jamming is introduced and the transmit power optimization problem of each transmit node on the secure route is reconsidered.
Based on the system model described in Section II, we assume that there is a jammer equipped with multiple antennas in the network. The channels from the jammer and the transmitters to the legitimate receivers and to the malicious eavesdroppers are assumed uncorrelated with each other. To secure the transmission, the friendly jammer radiates artificial noise isotropically in the nullspace spanned by the channel vectors of the legitimate nodes, to avoid interfering with the legitimate network. The system model is depicted in Fig. 2. With knowledge of the channel fading coefficients from the jammer to the legitimate receivers, the jammer adjusts its beamforming weight vector to suppress the artificial noise at the legitimate receivers according to
|(23)|
Denote the optimal route obtained from Algorithm 1, the transmit power of each node on that route, and the transmit power of the jammer accordingly, with the jammer's beamforming weight vector normalized to unit norm. Due to (23), the expression of the COP is identical to (7). We assume that the interference produced by the jammer is much larger than the noise, so the noise at the eavesdroppers can be neglected and the expression of the SOP is given by
|(24)|
In fact, owing to the neglect of noise, expression (24) is an upper bound on the exact SOP under jamming; that is, it represents a worst case of the exact value.
Due to the nullspace beamforming and the independence between the jamming and eavesdropping channels, the effective jamming channel gain follows an exponential distribution. Therefore we have
|(25)|
Substituting (25) into (24) and using the PGFL, the SOP can be expressed as
|(26)|
where the integration region denotes the distribution area of the eavesdroppers. Expression (26) cannot be written in closed form due to the complexity of the integrand with respect to the eavesdropper location.
The expressions of the COP and SOP with friendly jamming have been derived in (7) and (26). The transmit power optimization problem with the assistance of a multi-antenna jammer, under a total transmit power constraint, can then be written as
|(27)|
Following the same procedure that transformed (12) into (15), and introducing shorthand notation for brevity, (27) is equivalent to
|(28)|
Problem (28) is not a convex optimization problem. Interestingly, however, the objective function is monotonic with respect to the node transmit powers under a fixed jamming power, so (28) turns into a monotonic optimization problem once the jamming power is fixed. Therefore, we propose an outer polyblock approximation algorithm to obtain the globally optimal solution of the inner monotonic optimization problem under a fixed jamming power; the solution of (28) can then be derived by searching within the results obtained from the outer polyblock algorithm for different jamming powers. We also propose an SCA algorithm that reduces the complexity, at the price of obtaining a sub-optimal solution.
V Algorithms for Power Optimization with Jamming
In this section, we present our methods for solving problem (28). The method based on outer polyblock approximation and one-dimensional search is proposed first, and the SCA algorithm is put forward next to reduce the complexity.
V-A Outer Polyblock Approximation and One-Dimensional Search
The power optimization for the transmit nodes under a fixed jamming power is considered first. This problem can be written as:
|(29)|
which is a monotonic optimization problem with respect to the transmit powers. We aim to solve (29) via the outer polyblock approximation algorithm, based on the theory of monotonic optimization. The solution obtained through the proposed iterative algorithm reaches the global optimum.
Now we rewrite (29) into a canonical form of monotonic optimization. In order to simplify the expressions, we define
|(30)|
and rewrite the optimization variables as a transmit power vector, while the power region is defined by
|(31)|
Based on the above definitions, problem (29) can be written in the following form:
|(32)|
In the sequel, we aim to solve (32). The polyblock algorithm is proposed to obtain its globally optimal solution.
1) Preliminaries: In this subsection, we explain that problem (32) is a monotonic optimization problem. First, several definitions are listed as follows to facilitate the presentation [30, 31, 32, 33].
Definition 1
Given any two vectors $\mathbf{x}, \mathbf{y}$, $\mathbf{x} \preceq \mathbf{y}$ denotes that $x_i \le y_i$ for all $i$. If $\mathbf{x} \preceq \mathbf{y}$ and $\mathbf{x} \neq \mathbf{y}$, we say $\mathbf{y}$ dominates $\mathbf{x}$; if $x_i < y_i$ for all $i$, we say $\mathbf{y}$ strictly dominates $\mathbf{x}$ and write $\mathbf{x} \prec \mathbf{y}$.
Definition 2
A function $f$ is called an increasing function if, for any two vectors $\mathbf{x}$ and $\mathbf{y}$, $f(\mathbf{x}) \le f(\mathbf{y})$ can be implied from $\mathbf{x} \preceq \mathbf{y}$. A function $f$ is called strictly increasing if, for any two vectors $\mathbf{x}$ and $\mathbf{y}$, $f(\mathbf{x}) < f(\mathbf{y})$ can be implied from $\mathbf{x} \prec \mathbf{y}$.
Definition 3
A set $\mathcal{G}$ is a normal set if, for all $\mathbf{x} \in \mathcal{G}$, any point dominated by $\mathbf{x}$ also belongs to $\mathcal{G}$.
Definition 4
A point $\mathbf{y}$ is said to be an upper boundary point of a compact normal set $\mathcal{G}$ if $\mathbf{y} \in \mathcal{G}$ and no point in $\mathcal{G}$ strictly dominates $\mathbf{y}$. All the upper boundary points of $\mathcal{G}$ constitute the upper boundary of $\mathcal{G}$, which is denoted by $\partial^{+}\mathcal{G}$.
Definition 5
For a vector $\mathbf{v} \succeq \mathbf{0}$, the hyper-rectangle $[\mathbf{0}, \mathbf{v}]$ is called a box with $\mathbf{v}$ being its vertex. The union of a finite number of boxes is referred to as a polyblock.
Now, we provide an important result on polyblock-based optimization via the following proposition.
Proposition 1
A strictly increasing function reaches its maximal value over a polyblock at a vertex of the polyblock.
Proof 1
Suppose that the function attains its global maximum at a point of the polyblock which is not a vertex. Then there must exist a vertex that dominates this point, and since the function is strictly increasing, its value at that vertex is no smaller, which contradicts the assumption that the maximum is attained at the non-vertex point.
Based on the definitions and the proposition above, we have the following proposition.
Proposition 2
Optimization problem (32) is a monotonic problem: it has an objective function that is increasing with respect to the transmit power vector, and the power region is a compact normal set.
Proof 2
2) Outer Polyblock Generation: Proposition 1 reveals that the maximum of an increasing function can be found by searching among the vertices of a polyblock. Thus, for a monotonic optimization problem, we can gradually approach its feasible region by iteratively generating a series of polyblocks, and find the maximum via searching.
In the following paragraphs, a method to generate the polyblocks is provided. First, we aim to find the vertex achieving the maximal value of the objective on the polyblock. Denote by $\mathcal{P}_k$ the polyblock generated at the $k$-th iteration and by $\mathcal{V}_k$ its vertex set; then the vertex maximizing the objective can be found by searching in the set $\mathcal{V}_k$.
Then we project this vertex onto the upper boundary of the power region along the line segment from the origin through the vertex, and obtain the intersection point. Denoting the scaling parameter along this segment appropriately, the projection operation can be represented as solving the following optimization problem
|(33)|
The intersection point can be calculated by scaling the vertex with the optimal scaling parameter, and the new vertices adjacent to it are generated according to
|(34)|
where the $i$-th new vertex generated at the $k$-th iteration is obtained from the current vertex by replacing its $i$-th element with the corresponding element of the intersection point, and $\mathbf{e}_i$ denotes the $i$-th column of the identity matrix of appropriate size. Then the new vertex set is defined by
|(35)|
The new polyblock is the union of the boxes defined by the vertices in the new vertex set. An illustration of the generation procedure is depicted in Fig. 3.
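As a concrete illustration, here is a minimal sketch of the projection and vertex-update steps just described, following the usual monotonic-optimization recipe: the projection is done by bisection on the scaling parameter, and each new vertex replaces one coordinate of the old vertex with that of the intersection point. The feasibility oracle `in_power_region` is a hypothetical stand-in for the power region defined in (31).

```python
import numpy as np

def project_to_boundary(v, in_power_region, tol=1e-6):
    """Find the scaling lam so that x = lam * v lies on the upper
    boundary of the feasible region, via bisection on lam."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if in_power_region(mid * v):
            lo = mid   # still feasible: push further out
        else:
            hi = mid   # infeasible: pull back in
    return lo * v

def new_vertices(v, x):
    """Generate the vertices adjacent to v after projection to x:
    the i-th new vertex replaces v[i] with x[i]."""
    verts = []
    for i in range(len(v)):
        vi = v.copy()
        vi[i] = x[i]
        verts.append(vi)
    return verts
```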
3) Outer Polyblock Approximation Algorithm: Based on the polyblock generation method, an iterative algorithm is proposed to obtain the optimal solution of problem (32). The algorithm starts by calculating the initial vertex for the first iteration. Clearly, the initial vertex should upper-bound the problem so that the initial box covers the power region. Such an upper bound is obtained when the constraints are relaxed for each element separately, which can be written more specifically as
|(36)|
The solution of (36) acts as the initial vertex. Note that the selection of the initial point does not impact the final result, since the outer polyblock approximation algorithm always converges to the global optimum.
In the $k$-th iteration, the optimal vertex is first found by searching in the vertex set, and the corresponding maximal value over the polyblock is recorded as the current upper bound. Then the scaling parameter and the intersection point on the upper boundary of the power region are derived by solving (33). The best intersection point up to the $k$-th iteration is obtained via
|(37)|
These two quantities are the upper and lower bounds of the optimal value, respectively, so the upper bound never falls below the lower one. If their gap is smaller than a predefined tolerance, the upper bound exceeds the lower bound by no more than that tolerance; we then quit the iteration and call the current point an $\epsilon$-optimal solution to problem (32). Otherwise, a new polyblock is generated and the above procedure is repeated until an $\epsilon$-optimal solution is obtained.
Suppose that the optimal solution is located very close to an axis, in a thin region whose width is a small positive number; then the polyblock algorithm would converge fairly slowly while gradually approaching this region, as depicted in Fig. 3. Thus, in order to guarantee the convergence speed of the algorithm, we shrink the search region by this small margin. The resulting parameter reflects the trade-off between accuracy and computational complexity.
The procedure for solving (32) is summarized in Algorithm 2. The convergence analysis can be found in [References, Theorem 1]. Given the accuracy tolerances, the proposed algorithm terminates after a finite number of iterations, and an approximately optimal solution for problem (32) is obtained.
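Putting the pieces together, a minimal outer-loop skeleton might look as follows. It reuses the two helpers sketched above; `objective` and the initial vertex `v0` from (36) are again hypothetical placeholders, so this is an illustration of the generic polyblock loop rather than the paper's exact Algorithm 2.

```python
import numpy as np

def polyblock_maximize(objective, in_power_region, v0, eps=1e-3, max_iter=1000):
    """Outer polyblock approximation: refine the polyblock until the gap
    between the upper bound and the incumbent value falls below eps."""
    vertices = [np.asarray(v0, dtype=float)]
    best_x, best_val = None, float("-inf")   # incumbent feasible point
    for _ in range(max_iter):
        # The vertex with the largest objective value yields the upper bound.
        idx = max(range(len(vertices)), key=lambda i: objective(vertices[i]))
        v = vertices.pop(idx)
        upper = objective(v)
        # Project the chosen vertex onto the upper boundary of the region.
        x = project_to_boundary(v, in_power_region)
        if objective(x) > best_val:
            best_x, best_val = x, objective(x)
        if upper - best_val < eps:
            break                             # eps-optimal solution reached
        vertices.extend(new_vertices(v, x))   # refine the polyblock
    return best_x, best_val
```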
So far, we have solved (29) through the outer polyblock approximation algorithm and obtained the optimal powers along the route under a fixed jamming power. By varying the jamming power, a series of solutions for (29) can be derived, and the solution of (28) can then be obtained by searching within these solutions.
7.1 Number, Operation, and Quantitative Reasoning. The student represents and uses numbers in a variety of equivalent forms. 7.1B: Convert between fractions, decimals, whole numbers, and percents mentally, on paper, or with a calculator. (STAAR Readiness Standard)
1) Stack 3 sheets of paper ¾ inch apart.
2) Roll up the bottom edges so that all tabs are the same size.
3) Crease and staple along the fold.
4) Write the title, Rational Numbers, and the subtitle, fractions, decimals, percents, and integers, on the front.
5) As you follow along with the presentation, label each tab with the title of the slide.
6) Fill in the space above the tab (on the same sheet of paper) with notes on how to perform the skill.
½ = 50% = .5
According to the meaning of the word percent, it ALWAYS refers to something divided into 100 parts. To find the decimal equivalent of a percent, simply divide the number by 100. For example: 26% → 26 ÷ 100 = .26; 40% → 40 ÷ 100 = .40; 7% → 7 ÷ 100 = .07.
Alternatively, imagine a decimal point in the place of the percent sign, and move the decimal two spaces to the left (the same as dividing by 100): 26% → .26; 40% → .40; 7% → .07.
As we discovered, percents are ALWAYS out of 100. Place the percent number in a fraction with a denominator of 100, then simplify the fraction: 26% = 26/100 = 13/50; 75% = 75/100 = 3/4.
Decimal numbers use a place value system and a decimal point to represent the quantity of the number. To find the percent equivalent of a decimal, multiply by 100: .34 × 100 = 34%; .19 × 100 = 19%; .125 × 100 = 12.5%; .6 × 100 = 60%.
Alternatively, move the decimal point two spaces to the right and add a % symbol (this is the same as multiplying by 100): .34 → 34%; .19 → 19%; .125 → 12.5%; .6 → 60%; 1 → 100%.
Writing Fractions as Percents: divide the numerator by the denominator to get a decimal, then change the decimal to a percent by moving the decimal point two places to the right (multiplying by 100). For example, 6/25 = 0.24 = 24%.
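For readers who like to check such conversions programmatically, here is a tiny sketch of the conversions the slides describe; the function names are ours, not part of the original lesson.

```python
from fractions import Fraction

def percent_to_decimal(p):
    return p / 100             # e.g. 26 -> 0.26

def decimal_to_percent(d):
    return d * 100             # e.g. 0.125 -> 12.5

def fraction_to_percent(num, den):
    return num / den * 100     # e.g. 6/25 -> 24.0

def percent_to_fraction(p):
    return Fraction(p, 100)    # e.g. 75 -> 3/4, automatically simplified

print(percent_to_decimal(26))      # 0.26
print(decimal_to_percent(0.125))   # 12.5
print(fraction_to_percent(6, 25))  # 24.0
print(percent_to_fraction(75))     # 3/4
```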
Benchmarks
What is one as a percent? Justify your answer. (Use what you know about percents. If you need to, go back to the definition of percent and see if you can use that information to help explain your answer.)
Created for the most important concerts, celebrations and receptions (the design was completed in 1976), the V. I. Lenin Tallinn Culture and Sports Palace, now Tallinn City Hall, with its axes, symmetry and emphasis on mathematical beauty, recalls the temple architecture of the great civilizations of Mesopotamia and Egypt.
The total length of the building is about 300 meters and its width about 160 meters; its total area is 27,215 square meters.
In addition to the possibility of walking on the roof, a striking experience was created by the fact that the building was located right by the sea, which in the Soviet era was strictly controlled and off-limits.
Guided tours on Saturday and Sunday at 12:00, 1:00, 2:00, 3:00 and 4:00 pm. Meet on the stairs in front of the building. Unfortunately, it is not possible to enter the building during the tour, as the interior is being treated with chemicals against mould.
Ash Abraham | July 2, 2018
It’s nine o’clock in the morning in Tengeru market, at the base of Mount Meru in northern Tanzania. The dirt road leading into the market is dotted with puddles from last night’s rain. Women pour out of minibuses called dala dalas. Young boys hop on the rooftop of the dalas to untie lumpy bags stuffed with fruits and vegetables. Women fix barrels on their heads, and slosh through the muddy market entrance.
Rows of mats stocked with apples, oranges, and mangos line the market streets. Garlic, ginger, and cassava are piled high next to slouchy sacks of maize and pungent little fish.
Embracing multiple roles is crucial to women’s success at the market. Transitioning from farmer to vendor can help maximize earnings by avoiding slow periods between harvests. And for women who don’t have access to land, selling for someone else may be the only option.
Inside the market, Bozana Ndelilio Kaaya sits on a brightly coloured bucket. She is a banana farmer from Kimundo village. Twice weekly, she travels more than 30 kilometres to the market.
Mrs. Kaaya says: “It’s better if I sell the bananas here in the market than to a shop where I won’t get as much money. When I take the bananas to a shop, I’m not paid right away.” She explains that she will not receive payment until after the shopkeepers sell her bananas—which could take days. She says, “When I sell here in the market, I am paid immediately.”
She offers a customer eight bananas for 1,000 Tanzania shillings (US$0.44). As the day goes on, her prices will drop so she can avoid throwing away unsold bananas.
Ndefisiwa Daniel also grows bananas in Kimundo. She travelled one hour by bus with ten bunches of bananas, then sold all her bananas by 10 a.m. Arriving early is the key to success, so she makes sure her bananas are ready to be sold at 6:30 a.m. In this way, her customers receive the best quality bananas and she gets top price.
Other vendors, such as Mama Joshua, buy produce from farmers and sell it at higher prices to make a profit.
Mrs. Joshua says, “If you come tomorrow, you’ll find me here. If you come the next day, you’ll find me here.” Even on Sundays, she comes to the market after church to sell fruit.
Mrs. Joshua learned how to be a vendor from her mother. Despite her successful business, she will not teach her daughter how to work in the market.
She says, “I don’t want my daughter to sell. I want her to study. I want her to stay in the office and write.”
Mrs. Joshua says each season presents its own challenges. During the rainy season, it’s difficult to get to the market; at other times, she has to battle the sweltering heat.
Market days are often a race against the sun. When food stays out all day, it becomes wilted and less appealing. Rotten or unsold food can be devastating for the women’s families, as most women in Tengeru market use their earnings to support their children’s education.
A vegetable vendor named Flora is from Mbuguni village, about 25 kilometres south of Tengeru. She says, “If you don’t sell everything you buy, it’s a loss. That’s a hardship.”
She gets the best prices in the morning, selling a bunch of peppers for 1,000 shillings (US$0.44). In the evening, she sells the same peppers for 300 shillings (US$0.13). She throws away what she can’t sell.
She says, “I would rather plant [vegetables] by myself, but I don’t have land.”
Rachael Zefania Abraham is from the nearby village of Shimbumbu. As the seasons change, she switches from farmer to vendor. In January, she planted maize, potatoes, and beans. While she waits for the harvest, she sells bananas. She says this ensures that she will make money year round.
In a few weeks, Mrs. Abraham will change roles. She’ll bring her own crops to the market, and sell them to vendors.
Whether as farmers, vendors, or a mix of both, women find year-round employment at Tengeru market.
Ash Abraham is a Uniterra volunteer based in Arusha, Tanzania. She worked with Abraham Godwin on this story. Uniterra helped support the reporting of this story. | https://wire.farmradio.fm/farmer-stories/tanzania-women-switch-from-farming-to-selling-at-tengeru-market/ |
In this lesson, students follow and describe a series of steps to program a floor robot. They plan a route for the robot to follow and write the sequence of steps (an algorithm) that produces it.
Year band: F-2
Curriculum Links
Links with Digital Technologies Curriculum Area
| Strand | Content Description |
| --- | --- |
| Knowledge and Understanding | Recognise and explore digital systems (hardware and software components) for a purpose (ACTDIK001) |
| Processes and Production Skills | Follow, describe and represent a sequence of steps and decisions (algorithms) needed to solve simple problems (ACTDIP004) |
For a detailed explanation of the content descriptions featured in this learning sequence for Digital Technologies, please download this PDF.
Links with other Learning Areas
| Learning Area | Strand and Content Description |
| --- | --- |
| English | Language (Text structure and organisation): Understand that different types of texts have identifiable text structures and language features that help the text serve its purpose (ACELA1463) |
| Mathematics | Measurement and Geometry (Location and transformation): Describe position and movement (ACMMG010) |
For a detailed explanation of the content descriptions featured in this learning sequence, please download this PDF.
Assessment
Assessment rubric
The rubric below is based on the SOLO Taxonomy, which is a great framework to measure the depth of learner understanding against a set of learning outcomes. There are many different rubric types you can choose from; you will find different examples on this site. Whichever rubric scoring system you decide to use, ensure it focuses on learner progress and provides an opportunity for feedback that goes beyond a simple numeric score. There is much research on effective feedback on summative tasks, and Dylan Wiliam suggests that comment-only feedback is the most effective, as learners otherwise focus only on a numeric score and do not heed the advice in the feedback.
In the SOLO model, the first three levels reflect quantity of knowledge, while the last two reflect quality of understanding.

| Criteria | Pre-structural | Uni-structural | Multi-structural | Relational | Extended abstract |
| --- | --- | --- | --- | --- | --- |
| Algorithmic thinking | Learner shows no evidence of understanding an algorithm. | Learner is able to say what an algorithm is. | Learner is able to follow and execute an algorithm. | Learner is able to explain the causes of each element of their algorithm. | Learners are able to generalise their understanding of an algorithm to create their own. |
| Use of Bee-Bots | Learner is unable to make the Bee-Bot work. | Learner is able to make the Bee-Bot complete one command. | Learner is able to make the Bee-Bot complete multiple commands. | Learner is able to string together multiple commands to complete a task/algorithm. | Learners are able to design more complex tasks for the Bee-Bots to complete beyond the scope of the tasks given (innovate). |
| Optional score | 0 | 1 | 2 | 3 | 4 |
Learning hook
Sequence game – whole class activity
Step 1
Give students a floor mat or large surface divided as a grid. The grid can be of any dimension, for example 3 x 3; 4 x 4; 5 x 5, 10 x 10. (A mat with a grid marked out, or some masking tape on the floor to show the grid. Make sure it is highly visible and that the squares are large enough for students to stand or sit in.)
Additional scaffolding:
To further create engagement, you could print out checker/chess boards, have students choose a token (either individually or in pairs), and follow a set of directions.
Limited, low, or no vision:
For those with limited, low, or no vision, ice-cube trays or similar with tactile grid lines could be used instead of a checker/chess boards. Complexity can be increased by joining two ice-cube trays to make a larger grid.
Step 2
Invite a student to volunteer to act in role as a robot. The robot’s task is to locate and retrieve an object placed somewhere on the grid. The robot must start from the bottom left hand corner.
Step 3
Explain that the robot needs to be given a set of instructions in order to retrieve the object. The robot understands words like Forward or Reverse and symbols like arrows and F for Forward, R for Reverse, etc.
Step 4
Invite the students to write a set of instructions for the robot to follow. For example:
FFRR or ↑↑→→
(Forward 2 squares then move right 2 squares)
Additional scaffolding:
You may also wish to model an example set of instructions on the whiteboard to make sure students understand what is expected of them.
Step 5
Share student responses and direct the robot to move according to the sequence. Student acting in role as the robot must follow the directions given.
Step 6
Repeat the activity once or twice, using different volunteer robots and placing the object to be retrieved in a different location each time.
You may choose to increase or decrease the size of the grid.
For background notes about coding, programming and computational thinking download this PDF.
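For teachers who want to demonstrate on screen what the "robot" is doing, here is a minimal sketch that executes an instruction string such as "FFRR" on a grid. The command letters mirror the classroom example above; the code itself is our illustration, not part of the official lesson materials.

```python
MOVES = {
    "F": (0, 1),   # forward: one square up the grid
    "B": (0, -1),  # back/reverse: one square down
    "L": (-1, 0),  # one square to the left
    "R": (1, 0),   # one square to the right
}

def run_program(program, start=(0, 0), size=4):
    """Execute a command string such as 'FFRR' on a size x size grid,
    starting from the bottom-left corner; return the visited squares."""
    x, y = start
    path = [(x, y)]
    for cmd in program:
        dx, dy = MOVES[cmd]
        x, y = x + dx, y + dy
        if not (0 <= x < size and 0 <= y < size):
            raise ValueError(f"command {cmd!r} moves the robot off the grid")
        path.append((x, y))
    return path

print(run_program("FFRR"))  # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]
```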
Learning map and outcomes
Discuss the importance of instructions. Explain that digital technology systems are not magic: they follow instructions.
- Share the learning intentions for the lesson.
For example: Today we will be:
- learning to write a set of directions
- directing a robot (Bee-Bot) to move in different directions
- learning to use directional words.
Learners have a clear understanding of what they are learning and can see how it fits into the bigger context.
WILT chart displayed on the wall (What I am Learning Today).
Learning input
Model the control buttons on a Bee-Bot.
Encourage students to spend some time playing with the Bee-Bots to work out a sequence of steps needed to:
- move the robot from one location to the other
- move to create the perimeter of a square.
Learning construction
Provide students with challenges. For example:
- Code the Bee-Bot to move forward for a specific distance in cm.
- Code the robot to move across a number chart and skip count.
- Code the Bee-Bot to trace differently sized squares.
- Code the robot to move across a world map from one country to another.
- Invent a Bee-Bot challenge.

Learners record their learning, for example, by:

- recording a simple written algorithm they have used (which could be done collaboratively in something like a Google Doc).
Extension Activity
Making a video recording with commentary about the algorithm they have used (Explain Everything app)
Learning demo
Sharing solutions – whole class activity
- Encourage learners to share their solutions to the coding challenges. Ask learners to describe the series of steps they used to program their Bee-Bot.
- Record the solutions.
- Ask students to suggest directional words and/or symbols they used as they participated in the challenges. List these words. | https://www.digitaltechnologieshub.edu.au/teach-and-assess/classroom-resources/lesson-ideas/buzzing-with-bee-bots/ |
Back in 1964, Terry Riley introduced to the world what many consider and herald as the first proper minimalist work, his renowned In C. Composed through a new method in which musicians are allowed to play different notes and rhythms to their liking, it is also an almost aleatoric piece in that its outcome can be decided a number of ways. Looking back and listening to the piece now, it's astonishing that music like this was being made, rehearsed and performed more than 45 years ago.
In a new series, where contemporary music is being combined with classical music, the Grand Valley State University New Music Ensemble's November 8, 2009 performance at New York City's Le Poisson Rouge is being brought to our attention as a resonating reminder that music can be loved by all, anytime. This astounding performance, which features a sixteen-person orchestra (the piece is written for as many performers as desired, though Riley would go on to say that 35 was preferable), is joined by New York producer/composer Dennis DeSantis, who layers his own effects by way of a laptop on top of the musicians' performance. The end result is a fascinating one, where the organic touch of the instruments and the electronic feel of DeSantis' treatments add up to a remarkable fusion of contemporary classical music.
The music itself is the hardest part to describe because of its ever-changing features. The best way I can explain it is that the piece consists of 53 short phrases (known as cells) that may be repeated and played as much as the instrumentalist likes. While each musician has complete control over what they play, they are encouraged to bring forth their ideas at different rates and times, and traditionally there is always one musician acting as "the pulse," or, more rudimentarily, as the tempo. This particular performance lasts for sixty-five minutes, but other performances have been known to last both longer and shorter (sixty-five minutes sits within the norm of 40 minutes to an hour and a half).
As for the actual musicians, the Grand Valley State University New Music Ensemble is made up of some of the finest musicians. Each one is clearly heard with the saxophones shining brightly. Near the beginning, it is a lone sax that brings the music to a rousing climax before the ten-minute mark and later, around the forty-minute mark, the trumpet passes the lead to the sax for another wonderful excerpt.
DeSantis' timely and subtle touches add color and dynamics to the music's minimal movements. His electronics take over most strongly on the back end, where he adds jagged beats and drums on top of the moving melodies. Although the piece is built on a repetitive, looping feel, the electronics provide an even better sense of the music's originality and innovative influence. Memorable moments come from all over the place, with a new entrance, introduction or exit always lodging in your memory bank.
While the piece begins on a C major chord, it is mostly heterophonic in demeanor, characterized by the variation of a single melodic line. That line, coming in the form of a few eighth notes, always shifts from tone to tone and instrument to instrument, but it always exists as the main idea. The polyphony is taken to a new level by the dissimilar ideas DeSantis brings to the table; this alone is one of the many reasons why others like Flying Lotus and Explosions in the Sky have lent their contemporary hands.
I remember when I first studied In C, it took me for a ride because of its tendency towards allowing the musicians to take over. Much like jazz with its improvisation, it's a landmark classical piece, and this new performance by the Grand Valley State University New Music Ensemble is arguably the finest I've heard. The collaborative effort has turned out something truly sublime, one that all fans of music should seek out.
Below are some tips and techniques that can help with solving word problems.
Consider this example: Akshay brought four boxes of chocolate truffles to a party, with six truffles in each box. Every guest at the party ate exactly 3 truffles, and there were none left over. The total number of truffles is 4 times 6; another way of thinking about 4 times 6 is 6 plus 6 plus 6 plus 6, which gives 6, 12, 18, 24. That total must equal the number of guests at the party times 3. And if g times 3 is equal to 24, then 24 divided by 3 must be equal to g, so there were 8 guests.
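As a quick sanity check, the same reasoning can be written out in a couple of lines of code (our illustration, not part of the original walkthrough):

```python
boxes, truffles_per_box, truffles_per_guest = 4, 6, 3
total = boxes * truffles_per_box        # 4 * 6 = 24 truffles in all
guests = total // truffles_per_guest    # 24 / 3 = 8 guests
print(guests)  # 8
```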
The situation in the first example is well-known to most people and may be useful in helping primary school students to understand the concept of subtraction.
The second example, however, does not necessarily feel "real-life" to a high school student, who may find an abstract statement of the problem easier to handle. Word problems are a common way to train and test understanding of underlying concepts within a descriptive problem, instead of solely testing the student's capability to perform algebraic manipulation or other "mechanical" skills.
My mom always told me that one day I would be good at math. I told her, "Mom, the day I become good at math, pigs will fly." I think pigs just flew.

Word problems are nothing new. In early exercises, students had to find lengths of canals dug, weights of stones, lengths of broken reeds, areas of fields, numbers of bricks used in a construction, and so on. One ancient example runs: there are seven houses; in each house there are seven cats; each cat kills seven mice; each mouse has eaten seven grains of barley; each grain would have produced seven hekat. In more modern times, the sometimes confusing and arbitrary nature of word problems has been the subject of satire; Gustave Flaubert wrote one such nonsensical problem, now known as the Age of the Captain, which begins: "Since you are now studying geometry and trigonometry, I will give you a problem."

Your children's reading ability may impact their understanding of a problem, so discuss with them any language or vocabulary they may be unfamiliar with. Solving word problems can be both a challenging and rewarding activity (like many things that are challenging!). Word problems help students to see math in the real world, and they encourage and give students a reason to learn the underlying concepts and operations.
Key points – Heart Failure within Bradford 2011
Order of slides:
1. Prevalence – diagnosed
2. Prevalence – undiagnosed
3. Other risk factors
4. QOF achievement
5. Incidence
6. Admissions
Summary in numbers
Point 1: Prevalence of Heart Failure
Prevalence and total numbers of registered patients have fallen over the last 4 years. Trends within Bradford are following national and regional trends. Prevalence is currently 0.77% (4,129 cases), which is higher than the national average (0.72%).
Prevalence of Heart Failure varies across NHSBA practices, between 0.2% and 1.5%. Older practices (with a higher proportion of the register aged over 65 years) show higher prevalence.
Point 2: Underdiagnosis of Heart Failure
There are undiagnosed cases. We don't know the true number, but we can estimate it. The estimated true prevalence is 1.2%, with approximately 2,500 undiagnosed* cases within Bradford. The majority of unfound cases can be found in the South & West and BANCA alliances.
* Based on the NHS Doncaster QOF Benchmarking model, as used in Ellis C, Gnani S and Majeed A (2001) Prevalence and management of heart failure in General Practice in England and Wales, 1994-1998. Health Statistics Quarterly 11: 17-24.
It is estimated that 62% of true prevalent cases have been diagnosed through QOF. The number of unfound cases ranges between -11 and 150 per practice (mean = 30). Number of diagnosed and potentially undiagnosed Heart Failure patients in the PCT.
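The arithmetic behind these estimates is easy to reproduce. The sketch below assumes the benchmarking logic reduces to dividing the registered cases by the estimated detection rate; that is our reading of the slide, not the full Doncaster model.

```python
registered = 4129        # diagnosed cases on the QOF register
detection_rate = 0.62    # estimated share of true cases diagnosed

estimated_true = registered / detection_rate
undiagnosed = estimated_true - registered
print(round(estimated_true))  # ~6660, close to the 6,640 quoted
print(round(undiagnosed))     # ~2530, close to the ~2,500 quoted
```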
Point 3: Other risk factors
For other risk factors, including Atrial Fibrillation, Coronary Heart Disease and Hypertension, Bradford has shown similar trends to all Yorkshire and Humber PCTs and its most similar ONS PCTs. Over the last 4 years, CHD prevalence has fallen, whilst Hypertension and AF prevalence have risen.
Point 4: QOF achievement
When compared to all PCTs, Bradford has low achievement for QOF indicators HF02 (Heart Failure confirmed by echocardiogram / specialist assessment) and HF03 (ACE inhibitor therapy for patients with heart failure due to left ventricular dysfunction).
Point 5: New incident cases of Heart Failure
A practice with 10,000 patients would expect to diagnose approximately 10-13 new Heart Failure patients per year*. This would vary depending on the age, gender and other risk profiles of the practice. Based on an estimated incidence rate of 1.3 per 1,000 population, there would be approximately 712 new incident cases a year in Bradford (ranging between 1 and 28 new cases per practice). It is not possible to estimate the actual figure in Bradford, as the QOF register is falling and it is not possible to discern new incident cases from prevalent cases.
* Based on Bridging the quality gap: Heart Failure, Sutherland, March 2010, http://www.health.org.uk/publications/bridging-the-quality-gap-heart-failure/
Point 6: Elective and non-elective Heart Failure admissions. The dataset is based on extracts from local admissions, 2009/10; ICD-10 code I50 was used.
There were 633 admissions for Heart Failure in 2009/10 (primary diagnosis only): 28 elective (4.4%) and 605 non-elective (95.6%), an average of 8 admissions per practice, ranging between 0 and 30. Admission rates, based on patients on the HF QOF register, stand at 153 per 1,000; rates range from 0 to 625 per 1,000 patients on the register. There were 2,798 admissions where a primary or secondary diagnosis of Heart Failure was recorded: 315 elective (11.3%) and 2,483 non-elective (88.7%). A primary diagnosis of HF accounts for 22.6% of these admissions.
Ambulatory Care Sensitive Conditions are relatively stable, statistically speaking.
Summary in numbers:
- 4,129 patients on the HF register (prevalence = 0.77%)
- 6,640 estimated true number (prevalence = 1.2%)
- an estimated 2,500 missing patients; 62% of the true population diagnosed
- 712 new incident cases of HF per year; a practice with 10,000 patients can expect 10-13 new diagnoses a year (depending on age profile)
- 633 admissions with a primary diagnosis of HF: 28 elective, 605 non-elective; relatively stable
- 2,798 admissions with a primary or secondary diagnosis of HF: 315 elective, 2,483 non-elective
Newly discovered dinosaur foot is best-preserved theropod fossil in Brazil

July 2, 2019 - A groundbreaking fossil discovery in the southern state of Paraná, Brazil, reveals a new dinosaur species that could balance on single toes. Read the article here.
Transcript

Paleontologists confirmed a new dinosaur species at a dig site in the southern state of Paraná, Brazil. They uncovered a six-inch foot of the carnivorous dinosaur Vespersaurus paranaensis, suspected to have lived during the Cretaceous period roughly 90 million years ago. Once standing 2.5 feet tall and measuring roughly 5 feet long, the bipedal species is considered to be a small, desert-dwelling dinosaur.

The feet of the species are what paleontologists find most fascinating. The three functioning toes of V. paranaensis qualify the species as a theropod, like T. rex. However, it placed all of its weight on its middle toes, like a monodactyl, or one-toed, animal. The remaining two razor-sharp toes on each foot were likely used for hunting prey, possibly pterosaurs and lizards.

With 40% of the dinosaur's skeleton complete, it has become the best-preserved fossil of any theropod found in Brazil to date.
Art September 1993
On Fairfield Porter: An American Painter at the Parrish Art Museum.
“Fairfield Porter: An American Painter” at the Parrish Art Museum, Southampton, New York.
June 27–September 12, 1993
The exhibition that William C. Agee has devoted to the work of Fairfield Porter (1907–1975) is an event of considerable importance. Although, like many American painters of his generation, Porter did not hit his stride as an artist until he was in his forties, he soon developed into one of the best painters of his period. It wasn’t until the last years of his life, however, that the public began to catch up with his achievement, and it is still too little known. While in the 1950s and Sixties he enjoyed the esteem of a small circle of painters and poets—among them, Willem de Kooning, Frank O’Hara, and John Ashbery—and indeed exerted some influence on a younger generation of representational painters, Porter had more of a following as a critic for Art News and The... | https://www.newcriterion.com/issues/1993/9/exhibition-note-4807 |
Your garage door may not need replacing or show any visible need for repairs, yet it may not be functioning quite like it should; perhaps it makes noise, or has trouble opening and closing properly, or will not stay half open at the midpoint. If you are experiencing issues such as these, there are a few things you can attempt on your own.
A noisy garage door does not necessarily indicate that it is breaking down; it may just require a few follow-up maintenance procedures to have it running noise-free again.
With usage over time, vibrations may begin to occur as the garage door lifts to open or close, and these vibrations are the source of the racket you hear. This happens because the screws and bolts holding the garage door panels become loose, so check them and tighten any that have worked free.
Rust and a need for lubrication are also possible causes of a noisy garage door. It is a good idea to lubricate several specific areas of your garage door's moving parts every 4 to 6 months.
Before attempting to lubricate, refer back to your garage door manual to locate the areas in need of lubrication. Remember to carefully wipe off remaining grease before applying lubricant. When you are finished, ensure that you clean off any excess grease or lubricant to prevent spills or drips onto unsuitable surfaces.
Please note: do not use WD40, normal machine grease, or grease heavier than 10W. Lubricants such as these can attract dirt and dust particles, leading to the garage door malfunctioning. Instead, use a lubricant designed for garage doors (your door's manual will list suitable options).
Maintaining your garage door through lubrication will not only silence the noise when opening and closing, but will also extend the life of your door and its opener.
Does your garage door come down too hard when closing? Does it fail to stay at the halfway point on its own? If so, the tension may need adjusting. Maintaining the tension is a serious matter which, if left unchecked, can cause fatal accidents and injuries. Below are two ways you can assess the tension of your garage door:

1. To test the functioning of the door, place a brick under it and allow the door to close. The door should immediately reverse and go back up as it approaches the brick at the bottom. If it does not immediately come up, then you have a problem with the tension on the door opener, in addition to a safety hazard. To fix the adjustment, look for two switches on the garage door opener: the clockwise switch increases the amount of force, and the counterclockwise switch reduces it.

2. Should there still be a tension problem after attempting #1, try the next procedure. First, pull the red disconnect switch; by doing so, you disengage the opener from the garage door. Now you are able to check the garage door tension manually by lifting it. With the correct tension, you should be able to lift the door and have it stay in position at the halfway point. If it does not, the tension is not tight enough. To see where an adjustment is needed, close the garage door and observe. If the tension spring is on the long arm above the garage door, you may need to check and adjust the screws on the pulleys. You will have to open the door completely to lift the tension off the springs. Before doing anything else, clamp a block at the roller to prevent the door from suddenly coming down, as a safety precaution. Release the tension on the cable drum screws, then rotate and tighten the tension spring; this will increase the tension. Do this on both sides, keeping in mind to tighten incrementally, only half a turn or a full turn at a time, and tighten the screws after each adjustment. Finally, check whether the door will hold its position when it is halfway up. When it does, you have the correct tension; if it does not, repeat this procedure until you do.
If you would like to check for solutions yourself before giving us a call, refer to the following list of common issues. Should your garage door not be working, giving you trouble opening up, closing down, or staying open at the midway point:

1. Check the remote batteries, and test by replacing the batteries.

2. Visually locate the garage door's safety sensors. These are generally located close to the ground, on both sides of the door. One side emits a green beam of light, while the other side emits a red beam. Check that nothing is in the way of the "eyes" and that they are positioned correctly and properly aligned. Should they be out of alignment, you may need to tighten the brackets holding them. Clear away anything that could possibly block the "eyes".

3. The circuit breaker could be tripped. To check for that, look for the power feed to your garage door or garage door opener.

4. Take a look at the "up limit" switch setting on your garage door opener, and check the other limit switches too. These control the force with which the door goes up and down.

5. The screws on the hardware that connects the garage door to the opener must be tight; check to ensure that they are. These tend to become loose with frequent use of the door, so they should be checked periodically. Slide the micro switch located on the rail up or down as needed to position it correctly. You will know it is positioned right when the door opens and closes properly.
Coming through the Straits, the ships sailed into the South Sea, which was bigger than they could possibly imagine. The lack of storms led them to name this sea the Pacific, and when the Emperor's secretary reflected on the immensity of the ocean, basing his records on Juan Sebastián's words, he wrote:
Having navigated the infinite expanses of that southern sea for three months and twenty days...
On the far side of the ocean, and after nine men had died from scurvy, they reached the island of Guam. There they took on new supplies of fresh food, but the skill of the natives at appropriating the property of the fleet ended in the expeditionary force having to impose a series of severe punishments. When they eventually left the islands, they mapped them as the "Islands of Thieves".
If you'd like to know more about the tattoos seen in the indigenous tribes met by the expedition during the circumnavigation, take a look at the section dedicated to them.
Magellan was welcomed on arrival at the island of Cebu, and, wanting to secure these isles, decided to attack the neighbouring island of Mactan. As we can see in document number 27, Juan Sebastián was against the plan.
Eight men died on Mactan, and a few days later the king of Cebu set a trap for the expeditionary forces in which, according to Juan Sebastián, 27 men were killed.
A missionary's testimony
“They do not paint all their body at once, but bit by bit. This way, they spend many days painting themselves, and in ancient times, for each part to be painted, new feats and acts of bravery had to be undertaken”.
“They do them with a ruler and a set of compasses, some like one or two finger-width strips of wood, one straight and the other snaking or in zig-zags. First they make the drawing carefully and then take a small comb-like object, as wide as the stripes and made of pin-heads, to prick the skin and remove it to transfer the paint”.
“Then they sprinkle on some black powder made of the soot of a smelly tar they call balong, and once this sacrifice is made they are retained for some nine or ten days because they are often overcome and apt to be taken by the devil".
“The women only paint their hands. They painted the backs of their hands, from their fingers to their wrist, but not the underside. The usual designs were flowers and bows and they were very well done, but thanks be to God, this practice has now ceased."
Many years later it transpired that at least eight of the twenty-seven had survived and had been sold into slavery to a junk returning to China. Amongst the disappeared was the spy Joan da Silva, who was in the pay of the King of Portugal.
Document number 28 is Joan da Silva's payslip, drawn up afterwards by the Contracting House, where we can see that he very nearly managed to do what he had set out to accomplish. That is to say, Magellan had chosen him to remain in charge of Spanish interests on Cebu once the fleet had sailed.
There were now not enough men to sail the three ships, so the Concepción was burnt and everything usable was distributed between the two remaining ships, the Trinidad and the Victoria, and Carvalho was chosen as Captain General. Juan Sebastián took on the role of master once more, as we can see in document number 29. A sailor named Mafra referred to Juan Sebastián as "discrete".
The definition of "discrete" in those days meant a wise man with a sound mind who knows how to deliberate and give each man his due.
Espinosa and Juan Sebastián, along with some other companions, went to the main city of Borneo to make peace with the king. Two enormous elephants came to meet them on the shore, carrying great wooden howdahs on their rumps to take them to the palace. Thanks to Juan Sebastián's curiosity we can see, in document number 30, what the religion of the people of Borneo was like.
While Juan Sebastián and a handful of men stayed in the city, Carvalho and the two remaining ships became embroiled in various battles. They were finally able to escape, but left some companions imprisoned on the island. Among these was Domingo de Barruti, from Lekeitio, Biscay, who we know was still alive two years later.
Carvalho behaved irresponsibly and was dismissed from his post. The men elected Espinosa as captain of the Trinidad and Juan Sebastián took responsibility for the Victoria. Together with the master of the Trinidad, Juan Bautista Ponceroni, they were now in charge of the destiny of the fleet. They were close, very close indeed, to their goal. | https://itsasmuseum.eus/en/sala/al-otro-lado-del-pacifico/ |
Q:
Using chain rule indirectly in multi-parametric system
Problem:
Given
$x=u+v$, $y=u^2+v^2$, $z=u^3+v^3$ ,
Find $\frac{δz}{δx}$ and $\frac {δz}{δy}$.
Now I started off this problem by calculating dx, dy, and dz in terms of u and v from the given equations. But i do not know what to do next.
Can someone please help me?
TIA :)
A:
Just a quick note on notation: you use $\mathrm{d}$ (the 'straight $d$') when the function that you're differentiating only has one variable, and $\partial$ (the 'curly $d$' which is rendered with \partial) when the function that you're differentiating has more than one variable.
Thus, it would be appropriate to write $\partial x$ in this case, and the same for $\partial y$ and $\partial z$.
Now, the chain rule is super simple, and hopefully can be easy to remember: if $f$ is a function of $x$, and $x$ is a function of $u$, then the chain rule is
$$
\frac{\mathrm{d}f}{\mathrm{d}u} = \frac{\mathrm{d}f}{\mathrm{d}x}\frac{\mathrm{d}x}{\mathrm{d}u}.
$$
You introduce $\mathrm{d}x/\mathrm{d}x$ (which is just $1$, so is totally valid) that you group in a particular way so that you can differentiate two things which are easier. Notice that each of $f$ and $x$ are a function in one variable, hence the straight $\mathrm{d}$'s (though some authors might still write $\partial f/\partial x$ to emphasise that $f$ is a function of $x$, but that $x$ is also a function of something else).
If, however, the function $f$ was a function of the variables $x_{1}$, $x_{2}$, and $x_{3}$, with each of $x_{1}$, $x_{2}$, and $x_{3}$ functions of $u$, then
$$
\frac{\mathrm{d}f}{\mathrm{d}u} = \frac{\partial f}{\partial x_{1}}\frac{\mathrm{d} x_{1}}{\mathrm{d} u} + \frac{\partial f}{\partial x_{2}}\frac{\mathrm{d} x_{2}}{\mathrm{d} u} + \frac{\partial f}{\partial x_{3}}\frac{\mathrm{d} x_{3}}{\mathrm{d} u}.
$$
We just sum the derivatives of $f$ with respect to the other variables, and use the chain rule in each case. Observe the use of straight $\mathrm{d}$'s and curly $\partial$'s. I hope it is clear how this extends if $f$ is a function of many other variables.
For your particular example, you have
$$
x=u+v, \hspace{20pt} y=u^2+v^2, \hspace{20pt} z=u^3+v^3.
$$
Each of $x$, $y$, and $z$ is a function of two variables, so we use the curly $\partial$ in their derivatives. You need to calculate
$$
\frac{\partial x}{\partial u},\hspace{20pt} \frac{\partial x}{\partial v},\hspace{20pt} \frac{\partial y}{\partial u},\hspace{20pt} \frac{\partial y}{\partial v},\hspace{20pt} \frac{\partial z}{\partial u},\hspace{20pt} \frac{\partial z}{\partial v},
$$
which are straightforward (and I shall leave to you). Then you want
$$
\frac{\partial z}{\partial x} \hspace{20pt}\text{and}\hspace{20pt} \frac{\partial z}{\partial y}.
$$
Expand these using the multivariate chain rule to get
$$
\frac{\partial z}{\partial x} = \frac{\partial z}{\partial u}\frac{\partial u}{\partial x} + \frac{\partial z}{\partial v}\frac{\partial v}{\partial x}
$$
and
$$
\frac{\partial z}{\partial y} = \frac{\partial z}{\partial u}\frac{\partial u}{\partial y} + \frac{\partial z}{\partial v}\frac{\partial v}{\partial y}.
$$
One caution: unlike the single-variable case, $\partial u/\partial x$ is generally not just the reciprocal of $\partial x/\partial u$ when several variables change together. The safe route is to treat $u$ and $v$ as functions of $x$ and $y$, write out the chain-rule equations, and solve the resulting linear system for the derivatives you need (equivalently, invert the Jacobian matrix of $(x, y)$ with respect to $(u, v)$). With the six partial derivatives above in hand, you have everything required to compute $\partial z/\partial x$ and $\partial z/\partial y$.
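If you want to check your algebra, here is a minimal SymPy sketch (Python) of this Jacobian route; the variable names are my own choices, and sympy is the only dependency:

import sympy as sp

u, v = sp.symbols('u v')
x = u + v
y = u**2 + v**2
z = u**3 + v**3

# Jacobian of (x, y) with respect to (u, v)
J = sp.Matrix([[sp.diff(x, u), sp.diff(x, v)],
               [sp.diff(y, u), sp.diff(y, v)]])

# Row vector of dz/du and dz/dv
grad_z = sp.Matrix([[sp.diff(z, u), sp.diff(z, v)]])

# Chain rule in matrix form: [dz/dx, dz/dy] = [dz/du, dz/dv] * J^(-1)
dz_dx, dz_dy = sp.simplify(grad_z * J.inv())

print(dz_dx)  # -3*u*v
print(dz_dy)  # 3*u/2 + 3*v/2

For this problem the sketch gives $\partial z/\partial x = -3uv$ and $\partial z/\partial y = \frac{3}{2}(u+v)$, valid wherever $u \neq v$ (that is, where the Jacobian is invertible).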
| |
In this project, you will be opening your own specialty cookie company to see how product costing methods and changes in production affect business decisions. You will be creating a series of reports and analyzing the results using the template provided to guide you through the project.The learning objectives of this project are as follows:
- Gain an understanding of product costing (direct materials, direct labor, and overhead).
- Review job order costing.
- Review process costing.
- Make business decisions based on analyzing accounting data.
You will prepare a four- to five-page written report (including spreadsheets) with at least two scholarly sources using the Unit II Project Template. Your report will provide the following information:
Introduction
Part 1: Establish a cookie business selling only one type of specialty cookie with two employees making the cookies.
- Create a name and establish a location for the business.
- Construct a mission statement for the business.
- Decide on the type of cookie you want to make and sell.
Part 2: Develop costing and sales information for 1,000 cookies.
- Estimate and explain the cost per cookie based on job order costing (manufacturing overhead is 30% of direct labor costs). Prepare a job order cost sheet by researching and identifying the top five ingredients and their estimated costs as your direct materials. Research and identify the cost of wages for your two employees as your direct labor. It typically takes two days to make 1,000 cookies. (A minimal computation sketch follows this list.)
- Estimate and explain the cost per cookie based on process costing with 40% conversion costs. Identify the top three processes you feel are needed to make the cookies and prepare a production cost sheet for one of those processes.
- Estimate and explain the sales price you plan to set per cookie based on the cost data.
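To make the job order arithmetic concrete before you build the spreadsheet, here is a minimal Python sketch, as referenced in the first bullet above. Every figure in it (ingredient costs, wage, hours) is a placeholder assumption to be replaced with your researched numbers; only the 30%-of-direct-labor overhead rule and the 1,000-cookie batch size come from the assignment.

# Hypothetical figures for illustration only; replace with researched costs.
ingredient_costs = {          # direct materials for a 1,000-cookie batch
    "flour": 15.00,
    "butter": 40.00,
    "sugar": 12.00,
    "eggs": 10.00,
    "chocolate chips": 35.00,
}
direct_materials = sum(ingredient_costs.values())

hourly_wage = 15.00           # assumed wage per employee
labor_hours = 2 * 8 * 2       # 2 employees x 8 hours/day x 2 days
direct_labor = hourly_wage * labor_hours

overhead = 0.30 * direct_labor          # assignment rule: 30% of direct labor
total_cost = direct_materials + direct_labor + overhead
print(f"Cost per cookie: ${total_cost / 1000:.3f}")

With these placeholder numbers the batch costs $736.00, or roughly $0.74 per cookie, which is the figure your sales price estimate would build on.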
Part 3: Compare and contrast the costing methods used in this project, including which you believe provides the most useful information as a manager.
Part 4: Discuss what will happen to revenue if the number of cookies sold increases or decreases.
Conclusion and Recommendations
Use the Unit II Cookie Project Spreadsheet Templates for your job order and process costing spreadsheets, which are to be embedded in your case study document. Be sure to use APA formatting throughout, and reach out to the Writing Center or the Library for assistance with research, writing, and formatting. Include at least two resources from the CSU Online Library in your report.
Job Order

<Name of Company Here>
Job Order Cost Sheet
Job number: ___

Direct Material: list your five ingredients with the total cost per ingredient.
Direct Labor: for each of your two employees, record the date, hours, rate, and total cost per employee.
Manufacturing Overhead: total cost, calculated as 30% of direct labor costs.

Cost Summary
Direct Materials: 0
Direct Labor: 0
Manufacturing Overhead: 0
Total: 0
Units: 1,000
Cost per unit: 0
Process Costing

<Name of Company Here>
Production Cost Report
Department: <select one department in your cookie-making process>

Costs (columns: Material, Labor, Overhead, Transferred In, Total):
Beginning WIP
Cost incurred
Total

Units:
Units completed
Equivalent units (ending WIP)
Total

Cost per equivalent unit
Title of the Paper Goes Here
Student Name
Institution
ACC 5301 Management Applications of Accounting
Instructor
Date
Abstract
The Abstract is an overview of the paper, written after completion. Other researchers use the abstract to determine if your work will be useful to them. The abstract should include the background, hypothesis or research question, methodology for data collection and analysis, the findings of your research, and conclusions. It should be between 100–150 words. This is done when the paper is complete.
Title of Paper
Remember this part of the paper is double spaced in APA format.
The Introduction should lead readers into the topic and its importance. Introductions typically include the overall topic of the paper, the specific focus of the paper within the larger topic, the main points in the paper, the kind of paper (study, argument, critique, discussion), and the purpose.
Writing tip: The length of the introduction should be in proportion to the length of the paper. Also ask yourself, “With my purpose and my audience, how do I engage my readers best?” In the introduction, you set the tone of the piece, establish your voice, and demonstrate your writing style; be authentic to your purpose and your audience.
Part 1 Establish Cookie Business
Identify the name of your company, location, mission statement for your business, and type of cookie you plan to make. Keep in mind that you are only making one type of cookie for this project.
Part 2 Costing and Sales Information
Analyze and discuss the estimated cost per cookie using job order costing, the estimated cost per cookie using process costing, and the estimated sales price per cookie. Embed your spreadsheets to justify your costs.
Part 3 Compare and Contrast Costing Methods
Analyze and discuss the major differences you see between the types of costing. Which do you believe is more useful for this business, and why?
Part 4 Impact of Increase and Decrease in Sales
Discuss what will happen to revenue if the number of cookies sold increases or decreases.
Conclusions and Recommendations
The Conclusion section should summarize for the readers the topics of importance that led to your final conclusions/analysis regarding this case. Include some specific areas of focus from your analysis to reinforce your conclusion.
References
Include complete references in proper APA format for all of the citations listed in your paper. Be sure to use the library for the required number of sources. Additional sources can be used but should be scholarly. Present your references in alphabetical order.
| https://wridemy.com/2022/10/25/in-this-project-you-will-be-opening-your-own-specialty-cookie-company-to-see-how-product-costing-methods-and-changes-in-production-affect-business-decisions-you-will-be-creating-a-series/
London. A piece of lead from the US with very little obvious aesthetic appeal is due to go on show at Tate Britain this month. It will be joined by marble fragments from Dublin, stained-glass windows from Canterbury Cathedral and a smashed piano, complete with an audio recording of its destruction at the hands of an axe-wielding artist. Aside from the Medieval windows, which have been removed from the cathedral especially for the exhibition and therefore represent a real coup for the Tate, these objects may look out of place in an art museum and, at first glance, seem as if they have little in common. One common feature, however, links them: they are all works of art that have suffered physical attacks. “Art under Attack: Histories of British Iconoclasm” explores 500 years of deliberate destruction in Britain.
“We’re trying to get away from the [misunderstanding] of iconoclasm as just vandalism, but destruction that has an ideology and a purpose,” says the show’s co-curator Tabitha Barber. The exhibition opens with Henry VIII’s Dissolution of the Monasteries beginning in 1536, an act that led to the destruction of countless works of art including the attack on the hyperrealistic pre-Reformation sculpture Statue of a Dead Christ, around 1500-20—one of the stars of the Tate show. Found buried under the floor of London’s Mercers’ Chapel in 1954, the sculpture was probably attacked on the orders of Henry VIII’s son, Edward VI. Henry’s actions laid the foundation for the future destruction of images including the removal of depictions of Christ from Canterbury Cathedral’s windows more than 100 years later.
Political motives are often a driving force behind iconoclasm. For example, the unassuming lump of lead is a fragment of an equestrian statue of George III by Joseph Wilton. It was ceremoniously toppled in New York in 1776 after a public reading of the Declaration of Independence, with the plan to melt it down to make bullets to fire at royalist troops. The marble fragments from Dublin are from Nelson’s Pillar, which was blown up in 1966 by a group affiliated to the IRA. “Although these fragments are unremarkable to look at now, they have value and an afterlife—they live on after iconoclasm with a new power,” Barber says.
The show closes with a section devoted to artists, such as the Chapman Brothers, who use destruction as a means of creating something else. Prominent in the “aesthetics” section of the show is the piano mangled by American artist Raphael Montañez Ortiz in 1966 after the famous Destruction in Art Symposium in London.
• Art under Attack: Histories of British Iconoclasm, Tate Britain, | https://www.theartnewspaper.com/2013/10/01/purposeful-destruction-smashing-art-at-the-tate-britain |
Job Nature:
Contract
Position Level:
Entry Level
Job Category:
Qualification:
Bachelor's / Honours
Salary:
Job Description
Responsibilities:
- Assist in sales, marketing and lead generation efforts
- Assist in developing and communicating marketing plan
- Evaluate and maintain marketing strategy
- Help identify potential new customers
- Develop profitable pricing strategy
- Collaborate with advertising managers to create promotions
- Develop and manage advertising campaigns
- Assist in developing company budgets
- Organize company conferences, trade shows and major events
- Build brand awareness and brand positioning
- Research demand for company's products and services
- Compile lists describing company offerings
- Handle social media, public relations, and content marketing
Requirements:
- Min Degree in Marketing
- Able to commit immediately for 9 months
- Some marketing experience will be preferred
- 5 working days
- Contract (Extendable)
- Location: Macpherson
Interested applicants, please send your resume to [Click Here to Email Your Resume]
(Vanessa Koh Ching Meei)
CEI No.: R1548232
Recruit Express Pte Ltd, EA: 99C4599
We regret that only shortlisted candidates will be contacted. | https://jobscentral.com.sg/job/9-months-sales-and-marketing-assistant-up-to-2500-j3p43p6h1r45xc96vqf |
Q:
Need to add new column in pandas data frame based on some rule on a particular column
I have a data frame in Pandas (using Python 3.7) as shown below:
print("DATA FRAME DATA= \n",bin_data_df_sorted.head(5))
# OUTPUT:
# DATA FRAME DATA=
# actuals probability
# 0 0.0 0.116375
# 1 0.0 0.239069
# 2 1.0 0.591988
# 3 0.0 0.273709
# 4 1.0 0.929855
I need to add extra column named 'bucket' such that:
If probability value in between (0,0.1), then bucket=1
If probability value in between (0.1,0.2), then bucket=2
If probability value in between (0.2,0.3), then bucket=3
If probability value in between (0.3,0.4), then bucket=4
If probability value in between (0.4,0.5), then bucket=5
If probability value in between (0.5,0.6), then bucket=6
If probability value in between (0.6,0.7), then bucket=7
If probability value in between (0.7,0.8), then bucket=8
If probability value in between (0.8,0.9), then bucket=9
If probability value in between (0.9,1), then bucket=10
So, the output should look like this:
# actuals probability bucket
# 0 0.0 0.116375 2
# 1 0.0 0.239069 3
# 2 1.0 0.591988 6
# 3 0.0 0.273709 3
# 4 1.0 0.929855 10
How can we do it?
NOTE: I have tried the code below, but it is not working correctly.
for val in bin_data_df_sorted['probability']:
    if val >= 0.0 and val <= 0.1:
        bin_data_df_sorted['bucket'] = 1  # note: this assigns the whole column on every iteration
    elif val > 0.1 and val <= 0.2:
        bin_data_df_sorted['bucket'] = 2
    elif val > 0.2 and val <= 0.3:
        bin_data_df_sorted['bucket'] = 3
    # ... and so on
A:
You can use pd.cut:
import pandas as pd
import numpy as np

bins = np.arange(0, 1.1, 0.1)  # bin edges: 0.0, 0.1, ..., 1.0
df['bucket'] = pd.cut(df.probability, bins, labels=(bins*10)[1:])  # df is your bin_data_df_sorted
actuals probability bucket
0 0.0 0.116375 2.0
1 0.0 0.239069 3.0
2 1.0 0.591988 6.0
3 0.0 0.273709 3.0
4 1.0 0.929855 10.0
Details
pd.cut bins values from a sequence into discrete intervals. So you need to specify some criteria to bin by. You can do:
bins = np.arange(0,1.1, 0.1)
# array([0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ])
And some labels for the returned bins, which in this case can be generated using the same bins:
(bins*10)[1:]
# array([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.])
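If you would rather have integer bucket labels, matching the desired output exactly, a small variation on the same call works; this assumes every probability lies in (0, 1], since values outside the bins become NaN and would make the integer cast fail:

df['bucket'] = pd.cut(df.probability, bins, labels=range(1, 11)).astype(int)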
| |
THE Federal Territories Ministry is encouraging eateries in Covid-19 red and yellow zones not to offer dine-in yet.
Non-static and roadside food businesses like food truck operators and temporary stalls are also not allowed to operate in yellow and red zones throughout the Federal Territories.
In its FAQs for the conditional movement control order (MCO), the ministry said that in other zones, eateries must practise minimum social distancing of at least one metre as well as record details of customers and limit the number entering at any one time.
It also stated that night markets, open markets and bazaars will not be allowed in Kuala Lumpur, Labuan and Putrajaya.
Meanwhile, a check by StarMetro of various eateries and food courts in Kuala Lumpur showed that tables and chairs remained stacked in some restaurants, while in others the furniture had been rearranged to ensure customers could dine at a safe distance.
Traders offering dine-in service are adhering to standard operating procedures (SOPs); some are opting to open for a few days only, while others are sticking to takeaway and food delivery.

Quiet scenes
At food courts, the scene was mostly quiet with a minimal number of stalls open on the second day of the conditional MCO.
At Medan Selera Kompleks TLK Brickfields, which is operated by Kuala Lumpur City Hall (DBKL) and located at the ground floor of a multi-storey car park, only six stalls were open.
Selvam’s Corner operator Ganesan Selvam said there was only so much they could do to arrange customers’ seating.
“DBKL has bolted the tables and chairs to the floor. So, the most we can do is put an ‘X’ on the places where customers cannot sit.
“I have been running my stall since the second week of the MCO but now there is more work for us such as cleaning and sanitising the tables, chairs and toilets. The cleaners are not coming to work, so we have to do it ourselves.
“We were confused on what could and couldn’t be done on the first day of the conditional MCO. The police came by today to advise us on social distancing and seating arrangements so we have put markers for customers to follow, ” he told StarMetro.
Ganesan added that the car park management was assisting them in collecting personal details of diners and providing hand sanitiser.
Medan Selera Mega in Taman United was also very quiet, with only a handful of food stalls open. A noodles stall trader who only wanted to be known as Chong said the traders had decided to do takeaway only.
A Kuala Lumpur City Hall spokesman said their officers were patrolling daily to check compliance of the guidelines.
“If our officers find any trader not following the requirements, the officers will advise them on the SOP, ” he said.
Complying with SOPs
At Brickfields, Sri Paandi Restaurant manager Shabeer Ahmed rearranged the tables and chairs so those who wanted to do so could dine-in.
Enforcement officers who came to check on his premises had nothing to fault him on.
“We made sure to follow the guidelines, ” he said, adding that customers’ temperature, name and contact details were noted down.
“To keep the business running, we have to let customers in as some would like to sit down and have a meal, ” he said.
Taking added precautions was Sister’s Place Kopitiam in Taman Tun Dr Ismail (TTDI), Kuala Lumpur, which had arranged its tables 2m apart.
Restaurant owner Neoh Lean Huah said the SOP was not too difficult to follow as long as customers complied.
“The conditional MCO is beneficial for businesses like us because we still need to earn a living and pay the bills, ” he said.
Porto Romano Restaurant owner Sonila Hajoederasi, who manages the business in TTDI with her husband, said the move to ease restrictions might seem like it was too soon to some.
“Both restaurants and customers play a big role to flatten the curve of Covid-19 infections. We have done our part, customers should cooperate as well, ” she said.
Sonila said although the number of orders was still quite low, she was grateful for any received.
“Our regular customers are excited about dining in. We even had a call on Sunday for a reservation for two.”
In Bukit Bintang, Sahara Tent had opened for the first time since the MCO and offered dine-in, takeaway and delivery.
Its manager Ahmed Tareq said it would limit its restaurant capacity to only 20.
Partial reopening
Aunty Manju’s Home of Banana Leaf in TTDI said it would open for dine-in for three days only. Its owner Gary Fernandez said they were unsure of their ability to keep the restaurant open for the long-term.
“We have decided to see the response first, ” he said and asked for customers’ cooperation.
“For example, if they see the restaurant is full because of the limited capacity, they should wait awhile.”
He was glad that dining-in has been allowed under the conditional MCO as he was not sure if the business could have survived another month without it.
“We thought of closing down but we need to soldier on for the sake of our workers and shareholders.”
Sri Kortumalai Pillayar Restaurant manager Prakash Nathan also plans to start slow by seeing if there are requests for dine-in before deciding whether to offer it.
“We will observe the number of customers visiting the restaurant for the first three days before making a final decision, ” he said at his restaurant in Brickfields.
TTDI Q Bistro operations manager Francis Xavier Thomas said they decided to offer only takeaway and delivery for the time being.
“We’re going to study the situation for now, ” he said, adding that most of his customers said they would prefer to takeway instead of dine-in.
Happy customers
Lee Chuan Hong, 55, was excited to have his first meal away from home since the MCO as he and his wife found it more convenient to dine out.
He was also happy to do his part in helping his favourite eatery, Sister’s Place Kopitiam, survive the crisis.
Lee said he had no qualms dining out as long as the restaurant follows the SOPs. “As long as the management maintains good hygiene and ensures social distancing is practised, eating out should not be a problem,” he added.
Customers interviewed at Pavilion Kuala Lumpur, were also satisfied with the arrangements at its food court.
Before entering the area, customers had their temperature taken and their particulars recorded.
Tam Nyet Yeen, 40, was glad to have her meal at the food court for the first time since the MCO.
“I still had to come to work and had nowhere to sit during my lunch break, even when I packed food from home, ” she said, adding she was happy with the layout.
“I would say the traders here are handling the matter responsibly. Now, we have to play our part, ” she said.
| https://www.thestar.com.my/metro/metro-news/2020/05/06/different-views-on-wisdom-of-dining-in
You will need a tablet, computer, laptop or phone for today’s work.
You will need to login to your My Maths portal today. Your lesson is Yr2 pictograms. Work through the lesson with your child and then encourage them to complete the online homework independently.
There is no work to email in today for maths as your teacher will be able to see your online homework.
Remember to collect your booklet from school, for Thursday, if you haven’t already.
English:
You will need the handwriting passage, lined paper and a pencil.
You will need to use lined paper for this activity. If you do not have any lined paper at home, there will be some available for collection in a box by the front door of school.
In your very neatest handwriting, you will copy the handwriting passage from The Snail and the Whale. You will need to think about the formation of your letters, take your time, and copy one word at a time.
Ask your adult to send your completed work into school using your class email address.
Read Write Inc:
You will need an iPad/tablet/laptop.
Choose an e-book to read using the below link:
https://www.oxfordowl.co.uk/for-home/library-page/?view=image&query=&type=book&age_group=&level=&level_select=&book_type=&series=#
Reading:
Shared reading for 20 minutes. This can either be a school book or a chosen book from home. You may choose any book and you and your adult can share the story/non-fiction text together.
PE:
This week we challenge you to beat the teacher at the sock target challenge.
Watch the beat the teacher video. Make your own target using the below equipment:
two pieces of A4 paper taped together, and pencils to draw the circles and write the numbers in.
There is no need to colour it or take a long time creating the target. Collect five pairs of socks to start your challenge.
Place your target down, then take six steps backwards. Throw your pairs of socks one at a time, and ask an adult to help you keep score.
Once you have thrown all of your pairs of socks record the final score and send it to your class email.
Good luck
Now ask an adult to open the below link for you. Enjoy your yoga lesson. | https://www.priory-common.milton-keynes.sch.uk/tuesday-7/ |
A fire drill procedure in childcare centres serves as your life support during emergencies. It is the fundamental basis for order and maximum safety in any fire incident. With that said, building managers and safety officers are urged to create an emergency evacuation plan as a guide to all the staff and children.
If you are just venturing into safety practices or looking towards improving your existing plan, here is a basic guide for day care centres.
Fire Drill Procedures
Fire drills are based on regulations, policies, and procedures designed to respond to any given situation involving fire. Though one event may be different from the other, they both end up using a standard basis for response and evacuation as follows:
Identify the Possible Threats
Assess the situation whether the threats are minor or may require immediate evacuation. Usually, a quick assessment can determine the location of the fire and its level of danger, which can tell you what to do next.
Stay Calm and Raise the Alarm
As a role model, panicking will only cause the children to panic, leading to uncontrollable situations. Remember to quickly raise the alarm and give appropriate instructions for the given situation.
Evacuate if Necessary
In circumstances when the fire calls for immediate evacuation, lead the children out of the area. As part of safety measures, close the windows to confine the fire. Once the children have evacuated and gathered in the evacuation area, do a headcount to ensure no one is left behind.
Call the Fire Brigade
Notify the fire brigade immediately by phoning 000 and give the necessary information, such as the location of the fire and the location of your evacuation centre, for immediate assistance.
Check Attendance
It is best to always check attendance to make sure no one has left the premises without notice. Children are grouped together and given full assistance at all times.
Get the Emergency Kits
The kits have to be readily available at the evacuation room for the kids. These usually have the necessities such as water, nappies, parents’ numbers and some toys to divert their attention from the incident.
Wait for Instructions
An authorised incident controller is responsible for declaring the status of the place and whether it is safe to go back or not. Wait for instructions before going out of the evacuation centre.
You might have heard of this many times before, but it is always necessary to keep in mind the acronym “RACE” in responding to fire. If you need a bit of a refresher, take note of these steps:
Rescue people, especially those who need assistance. Children will need more guidance in this stage so it is important to stay alert while doing so.
Activate alarms in all areas. Automatic fire detectors and alarms immediately alert all building occupants and the fire brigade. Ensure that these are well maintained and fully operational.
Confine the fire by preventing it from spreading. Close all door and windows before the fire spreads to neighbouring rooms.
Extinguish the fire if necessary. Using quick and sound judgment, one should be able to determine whether a flame is controllable. If so, immediately extinguish it using the fire suppression equipment available in the area.
Children are quite vulnerable to these types of incidents because they may not fully understand it, and their response to it can be a major setback. Every childcare centre holds the responsibility to establish fire drill procedures and must pass the fire drill evaluation. For assistance, call a fire safety expert today. | https://firesafetydarwin.com.au/guide-fire-drill-procedures-childcare-centres/ |
What is the role of investment banker in Best Effort Basis IPO?
Mahendrasingh
Market Intermediaries
Sep 10 2015 10:06 AM
1 Answer(s)
Jitendra
In this type of arrangement the investment banker acts as a marketing department: it simply tries to sell the subscription to the investor community and does not accept underwriting responsibility.
Sep 10 2015 11:32 AM
| https://www.dezyre.com/questions/4237/what-is-the-role-of-investment-banker-in-best-effort-basis-ipo
Description: Tanya is an elf princess who breaks out from her sheltered life in the Enchanted Forest in hopes to discover adventure in the human world.
Summary: Tanya is the princess of Elves down south in Mississippi. She lives in the Enchanted Forest with her overbearing mother Queen Ilrondelia. Tanya has always wanted to venture from the Enchanted Forest and be out in the real world. But the elves have a deal with the government which prevents any elf from leaving the forest. Tanya takes it upon herself to meet up with the Hunters and breaks out of the forest. She ends up on a great hunting adventure with the Hunters as her mother, the Queen, tries desperately to search for her as Tanya lands herself in a few sticky situations with the Hunters.
Excerpt: The queen of the elves pondered on that while she unwrapped a Ho-Ho and squirted ranch dressing on it. Her people had a sweet deal. The government paid them good money to stay right here in the Enchanted Forest, but some of the younger elves were getting uppity, talking about adventure. They’d been watching too many movies with fancy movie elves in them. They didn’t realize how good they had it here in the Enchanted Forest. | http://www.worldpubliclibrary.org/eBooks/WPLBN0003548830-Tanya-Princess-of-Elves-by-Correia-Larry.aspx |
This paper investigates the possible outcomes of an eruption of Makushin Volcano, specifically its impact on the community of Unalaska. It has been determined that ash would be the most threatening aspect of such an eruption. Falling ash would affect everyday life in Unalaska and pose numerous safety hazards; the health and well-being of its citizens could very well be at stake. Under current city plans, the water and energy sources of Unalaska would be held in abeyance during a volcanic eruption, because falling ash would contaminate and stall the use of these utilities.
To mitigate the outcomes of such volcanic activity, this paper is composed of possible planning measures the City of Unalaska could make. Planning and preparation related to energy, water, infrastructure, and the possibility of an evacuation are crucial in the resilience of the community of Unalaska. Within this plan there is a focus on the importance of the City of Unalaska finding a new water source or better filtration system for the impending water contamination issue, and the investing in another energy source that would not be affected by falling ash. In sum, the creating or adopting an action plan would greatly benefit the City of Unalaska.
Volcanoes form when molten rock, debris, and gases from the planet’s interior are expelled onto the Earth’s crust (Science Learning [SL], 2010). Often, these volcanoes form along the boundaries of the tectonic plates that lie in the lithosphere. These tectonic plates are rigid and float atop the asthenosphere, a much hotter and more viscous layer of the Earth’s interior. When two tectonic plates collide, the plate that is denser will sink, or subduct, into the asthenosphere (SL, 2010). The temperature of the asthenosphere melts the subducted tectonic plate and turns it into molten rock, or magma. This magma then rises to the Earth’s crust and causes an eruption. Repeated eruptions create an accumulation of hardened magma, which is how volcanoes are initially formed (Figure 1).
Subduction zones, such as the Ring of Fire, are notorious for their high rates of volcanism and seismic activity. The Ring of Fire is an area surrounding the Pacific Ocean, where the Pacific Tectonic Plate subducts beneath multiple surrounding tectonic plates. This 40,000 kilometer, nearly continuous series of oceanic trenches, volcanic island chains, and volcanic mountain ranges lines the coasts of the surrounding continents, including South America, North America, Asia, and Australia (SL, 2010). Similar to the Ring of Fire, island arcs are also formed by subduction along plate boundaries. These chains of oceanic islands are associated with intense volcanic and seismic activity, in part because most island arcs are located along the Ring of Fire (SL, 2010). One of the numerous island arcs created by the Pacific Tectonic Plate is the Aleutian Arc, located east of Kamchatka between the Bering Sea and the Pacific Ocean, forming the Aleutian Chain of Alaska (Coats, 1950).
The Aleutian Arc consists of 76 major volcanoes, 36 of which are geologically active, meaning they have erupted in the past 10,000 years (Coats, 1950). Makushin Volcano, located on Unalaska Island, has been labeled 'potentially the most threatening volcano in the Aleutian Arc,' due both to its proximity to the City of Unalaska and to its extensive eruptive history (Lerner, 2010). Makushin Volcano is the highest point on Unalaska Island, standing at 1,800 meters, and is 16 kilometers wide in basal diameter. The volcano itself occupies the majority of the northwest extension of the island. Makushin Volcano is located a mere 28 kilometers from the population center of the City of Unalaska (Beget, Nye & Bean, 2000).
Due to records kept from Russian explorers and traders, it is known that Makushin has had over 17 explosive eruptions since the 1700’s (Coats, 1950). All of these written records document the eruptions as being relatively small, sending ash 3 to 10 kilometers above the volcano’s summit, and depositing ash mainly on the flanks of the volcano. Makushin has most recently erupted in 1995, creating an ash cloud that rose to around 2.5 kilometers above the peak of the volcano (Coats, 1950) (Table 1).
Prehistoric investigation into Makushin's past reveals volcanic activity of a greater magnitude. Geological studies indicate that a much more destructive series of eruptions, known as the Driftwood-Pumice deposits, occurred between 8,000 and 8,800 years ago (Lerner, 2010). Combined, these eruptions created an ash layer reaching a depth of 1.5 meters, and around 1 meter where the City of Unalaska is located, along with numerous pyroclastic flows that traveled down Makushin Valley and into Broad Bay (Lerner, 2010). This series of volcanic activity produced a 4 kilometer diameter crater at the summit of Makushin.
Sedimentary records cannot be taken back much further, due to removal by glacial erosion from the last ice age, 10,000 years ago (Beget, Nye, & Bean, 2000). However, Makushin has been volcanically active for at least a million years, as shown by the radiometric dating of a sample of lava found on an eroded cliff at the base of the volcano (Beget, Nye, & Bean, 2000). The potential for an eruption similar to those in the past are unpredictable and a foreboding thought for the future of Unalaska.
The City of Unalaska has a population of approximately 4,000 people, making it the 12th largest city in the State of Alaska (Dutch Harbor, 2015). Its economy is primarily based on commercial Pollock and crab fishing, as well as seafood processing (Dutch Harbor, 2015). The seafood processing companies on the island provide local employment, but non-residents are flown in during the peak seasons, bringing in over 6,000 transient residents during the peak production seasons. All homes and processing plants are served by the city’s piped water system. Almost every home uses the city’s electrical system, but the processors provide their own electricity (UniSea, 2009). Daily scheduled flights are one of the two ways to get to the island, the second being a ferry that operates bi-monthly during the summer months. The bustling town of Unalaska could be detrimentally affected by volcanic activity caused by Makushin Volcano. The City of Unalaska is currently not prepared for such an eruption, with limited city-specific plans detailing how to cope with such an event.
An eruption of Makushin Volcano would affect the community of Unalaska in a multitude of ways. The first and most important is falling ash and ash clouds. In other communities, where cities are much closer to the volcano, there is also the risk of pyroclastic bombs, lahars, lava flows, and other hazards. The material that ash clouds are composed of ranges in size from microscopic to several meters in diameter, and is collectively called tephra. During an eruption, the finer material can rise more than 20 kilometers from the summit and then be carried by prevailing winds. Tephra can travel through the atmosphere for extended periods of time and across great distances. Depending on the size of the initial eruption, ash clouds have been known to travel hundreds to thousands of kilometers and persist for days or months (Beget, Nye, & Bean, 2000). During this time and within the affected area, turbine-engine aircraft are not able to operate: the volcanic material within the ash cloud, such as volcanic glass, is extremely harmful to an aircraft's safety.
Redoubt Volcano, an active stratovolcano in the Aleutian Range, erupted several times in 1989-1990. These eruptions sent ash at least 12 kilometers into the atmosphere. On December 15, 1989, a Boeing 747 jet flying 240 kilometers from Anchorage flew into an ash cloud and lost power in all four engines. The plane, carrying 231 passengers, fell more than 3,000 meters before the engines were restarted by the crew (Casadevall, 1994). An ash cloud from Makushin Volcano could potentially cause an equally dangerous situation.
Volcanic ash would also pose a threat on land. The accumulation of falling ash on the roofs of buildings may cause less sturdy roofs to collapse. The bulk density of dry ash can range from 400 to 700 kg/m3 (USGS, n.d.). Because the chance of rain in Unalaska is very high, with an average of 223 days of precipitation per year, the possibility of falling ash destroying roofs increases (USGS, n.d.). Ash increases in weight by 50-100 percent or more if it becomes saturated by rain, and its density can exceed 2,000 kg/m3 (USGS, n.d.).
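To put these figures in perspective, here is a minimal back-of-the-envelope sketch in Python; the 10 centimeter ash depth is an assumed example, while the density values follow the USGS ranges quoted above.

def roof_load_kg_per_m2(ash_depth_m, density_kg_m3):
    # Static load of an ash layer: depth multiplied by bulk density.
    return ash_depth_m * density_kg_m3

depth = 0.10  # assumed example: 10 cm of accumulated ash
for label, density in [("dry ash, ~700 kg/m3", 700),
                       ("rain-saturated ash, ~1,400 kg/m3", 1400)]:
    print(f"{label}: about {roof_load_kg_per_m2(depth, density):.0f} kg per square meter of roof")

Even this simple product shows how rain roughly doubles the load a roof must bear.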
A prime example of ash damaging buildings is the eruption of Mount Pinatubo in the Philippines in 1991 (USGS, n.d.). The volcanic ash that settled on roofs of buildings accumulated and many roofs could not withstand the weight, and were damaged by 5-10 centimeters of wet ash. Especially buildings with long-span roofs, like warehouses, were susceptible to collapsing under the excessive weight of dry and wet ash (USGS, n.d.). A similar situation could occur in Unalaska, and the accumulation of wet ash would destroy homes and businesses. In such a situation, the canneries and housing facilities would probably see the most damage because, like the warehouses in the Philippines, their elongated roofs would make them more susceptible to damage (USGS, n.d.).
Most importantly the health effects of ash should be duly noted. After an ash fall, there are potential respiratory symptoms from the inhalation of volcanic ash. The abrasive property of ash particles will typically also cause eye and skin discomfort or irritation (USGS, 2010).
Common respiratory complications include nasal discharge (runny nose), throat irritations followed by dry coughing, airway irritation, and uncomfortable breathing. Those with preexisting respiratory problems are especially susceptible to such complications. Short exposure to volcanic ash usually does not create serious health problems, but one should still be careful to limit their exposure. An example of a community affected by ash fall is the 1995-96 eruption of Mount Ruapehu in New Zealand. After the eruption an increase in bronchitis cases was noted, even though the ash fall was light (USGS, 2010). Other than respiratory symptoms, after ash fall people will typically experience eye irritations. People may feel that there are “foreign particles” in their eyes, which can lead to great discomfort and pain. There are not many complications related to short term exposure to ash, long term can have adverse effects on a person’s health (USGS, 2010).
Preparation is vital for Unalaska to bounce back from an eruption of Makushin Volcano. Without a plan in place, any efforts to mitigate the situation would be inefficient and potentially hazardous. Proper steps need to be in place to insure that in the case of a volcanic eruption, the community of Unalaska would be prepared.
The State of Alaska has the “Alaska Interagency Operating Plan for Volcanic Ash Episodes” that outlines what agencies are responsible for certain duties, should a volcano in Alaska erupt (Office of the Federal Coordinator for Meteorology [OFCM], 2014). The plan is revised every two years, and its intent is to provide an “overview of an integrated, multi-agency response to the threat of volcanic ash in Alaska”. The Alaska Volcano Observatory (AVO) is a joint program of the U.S. Geological Survey, the University of Alaska Fairbanks Geophysical Institute, and the Alaska Division of Geological and Geophysical Surveys. The AVO monitors seismic activity in Alaska and studies volcanic activity. In this situation, the AVO would notify the Division of Homeland Security and Emergency Management. This information is then passed down to all relevant agencies, such as the Federal Aviation Administration, the United States Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA). Following notification, those agencies use available tools to alert the public and organizations that might be affected from volcanic activity. Meaning each agency uses what communication portals they have to distribute alerts and safety information. For example, the National Weather Service uses NOAA Weather Wire, marine High Frequency and Very High Frequency radio, NOAA Weather Radio, the statewide Alaska television weathercast, and the Emergency Alert System to get their alerts out to the public. By using a diverse methods to disperse information, this plan quickly and efficiently reaches as many people as possible (OFCM, 2014).
An official emergency response plan for volcanic activity has not been written for the community of Unalaska. Currently, the only information the community possesses about proper conduct during a volcanic eruption is the boxes of emergency-situation pamphlets at the local Department of Public Safety office. The community needs to be made aware of the potential danger of a volcanic eruption, and a comprehensive emergency response plan should be written. Like any emergency response plan, it should be written by experts; nonetheless, the plan that follows proposes elements that should be included.
The Unalaska Department of Public Safety currently has 5,000 dust masks to distribute in case of an eruption, and 50 radios to give to local businesses so that they can stay informed of recovery plans.
A stockpile of emergency preparedness items are essential for before an ash fall. Dust masks and eye protection are necessary, because ash can damage a person’s respiratory system and eyes. People should also have 72 hours’ worth of water and nonperishable food, because food and water may not be available during an emergency situation. Cleaning supplies, such as a broom, vacuum cleaner, and shovels, are needed to remove ash from roofs and the inside of a home. General preparations include buildings equipped with emergency supply kits. The supply kit should include “non-perishable foods, water, a battery-powered or a hand cranked radio, extra flashlights and batteries” and also, “a pair of goggles and a disposable breathing mask” for each person (Volcanoes, 2015).
During an ash fall, all people are advised to stay indoors with windows closed. Because of the grave consequences of inhaling excess volcanic material, those who suffer from respiratory problems should take particular care not to spend excessive time outside, and if going outside is necessary, a protective dust mask should be worn. The elderly and children should also limit their time outside while ash is falling, and even for the following days. Ash in any form, whether falling or on the ground, can cause lung and/or throat irritation, since settled ash can be swept up by winds or otherwise stirred up (USGS, 2009).
Operating motor vehicles is also not advised during an ash fall. Heavy ash fall can make visibility poor, and in extreme cases complete darkness may ensue after a volcanic eruption. When ash mixes with rain it forms a mud-like layer that can make cars lose traction and consequently cause a vehicular accident. Driving should only be done if absolutely necessary, and anyone in this situation should take heed of the dangerous conditions. In the community of Unalaska, the maximum speed limit is 30 miles per hour, and during an ash fall, drivers should reduce their speed to 10 to 20 miles per hour. In addition, motorists should wait until the roads have been swept of ash before driving on them again. This will help to facilitate the cleanup of ash and limit the stirring up of fallen ash (USGS, 2009).
An eruption of Makushin Volcano could catastrophically affect the community of Unalaska. This is why preparation and proper planning are vital in a speedy recovery. The four primary things that would need to be addressed in a recuperation effort are the City of Unalaska’s possible evacuation plan, water source, energy source, and infrastructure.
An ash fall could cut off Unalaska’s air transportation because of ash clouds and a slippery runway. The issue of a slippery runway can be solved if you properly clean the runway, but the problem of an ash cloud is much more complex.
If an ash cloud is blown in flight paths to Unalaska, planes may not fly which could interfere with emergency medical evacuations and could strain the limited resources at the local clinic. In addition, emergency supplies that are unavailable in Unalaska would have limited means to arrive in Unalaska.
The City of Unalaska's primary source of power is a 16.6 megawatt powerhouse (Fitch, n.d.). In the event of a severe ash fall, the city generator could fail because it requires a constant stream of clean air to cool its moving parts. The generators the City of Unalaska uses burn diesel fuel to produce rotational mechanical energy, which is then turned into electrical energy by a generator (Rozenblat, 2014). When ash accumulates on and inside this machinery, it stops functioning efficiently and eventually slows down. Although proper maintenance can get the generator back up and running, restoring it to proper working condition takes time (Loehlein, 2007). Theoretically, this would leave the City of Unalaska without power for an unforeseeable amount of time after a volcanic eruption.
If the city had a more diverse source of energy, resiliency would be easier to achieve. In addition to the maintenance of current power sources, the community of Unalaska could also look into energy that would not be affected during an ash fall. This would include wind power, geothermal energy, and fuel cells.
The City of Unalaska’s primary source of water is from a reservoir. In the event of an ash fall, that source of water could be contaminated with ash which would affect water quality. Harmful water-soluble substances called leachates, which are mostly acids and salts, are usually found in ash. Turbidity, the quality of suspended particles in water, may become a concern. Suspended particles in the water protect micro-organisms and stimulate bacterial growth (USGS, n.d.). Additionally, the chemicals in ash are hazardous if ingested. This would include substances like fluoride, which can be hazardous if excess amounts are ingested. Long-term exposure to fluoride can increase the risk of developing both dental and skeletal fluorosis (Wilson et al., 2011).
Ash leachates are typically acidic, “due to the presence in volcanic plumes of strong mineral acids … and consequently may acidify receiving waters” (Wilson et al., 2011). However, acidification is not a significant worry, because “the majority of surface waters have sufficient alkalinity to buffer against significant pH change” (Wilson et al., 2011).
Following an eruption, the water reservoirs should be closed before turbidity and acidity levels become excessive. Turbidity can be managed using filters, but constant cleaning of the filters may become a problem. To ensure the safety of the drinking water, a more efficient method of filtration must be developed and implemented for use in an emergency. Alternatively, a new water source could be found, one that would not be contaminated by falling ash; a source that is not exposed to the open air would be most effective in keeping ash out. In sum, the City of Unalaska would need to invest in finding a new water source and/or filtration method.
Unfortunately, there isn’t any way to rebuild the roads to make them less susceptible to ash falls, but proper maintenance can help to mitigate the issue. In this situation, proper training of road maintenance crews is imperative for safety and efficiency of ash clean up. Ash cannot be simply swept off the roads, and in some situation can even exacerbate conditions. Road sweepers and passing cars can cause settled ash to get stirred up or billow and create an ash cloud that can last for weeks or months. Ash should not be swept to the side of the road for this very reason. Instead, ash should be lightly moistened to facilitate the process of cleaning up, and motorized graders to scrape and blades to move ash to the middle of the roads. This volcanic material should then be loaded into trucks and hauled to appropriate ash disposal zones. Disposal zones should not be in the proximity of homes or businesses that could be affected by ash. This designated area should be where the ash would later not have to be moved and be somewhat undisturbed by the population (USGS, 2010).
The community should be made aware of these disposal zones and should be encouraged to help clean up ash as well, for example through organized community cleaning activities in which participants make a concentrated effort to clear ash from their neighborhoods. Most importantly, ash should be removed from the roofs of homes and other buildings. This is another instance where the community would have to be informed about proper ash removal: when clearing roofs, wetting the ash would cause more damage, because most roofs would not be able to handle the additional weight of wet ash. As in any other situation, the more informed the community is, the more efficient the cleanup and resilience effort will be.
The city of Unalaska, in its current state, is largely unprepared for an eruption of Makushin Volcano. However, a cohesive action plan will help the community to bounce back more quickly in the event of an eruption. If the city were to implement the use of an energy source that would not be affected by falling ash, this would significantly help in the response and maintenance after an eruption. In addition, a secondary source of water would greatly benefit the community. If ash were to pollute the current water source, the community of Unalaska would be without clean water for an unforeseeable amount of time. Being able to filter this tainted water, or find a new source of water that would not be as susceptible to pollution would greatly benefit the City of Unalaska.
Taking into consideration both the proximity of Makushin Volcano to the City of Unalaska and its eruptive history, the possibility of an eruption should not be taken lightly. Creating or adopting an action plan is the first step in this process.
| https://www.sciencebuzz.com/bouncing-back-after-an-eruption-of-makushin-volcano/
For people looking for a career that balances creativity with precision, a technical drafting position may be a good option. Read on to find out more details regarding some possible career paths, such as civil drafting and electronics drafting, and to learn about educational requirements and job prospects. Schools offering Building Information Modeling degrees can also be found in these popular choices.
Technical drafters are responsible for transforming rough sketches, ideas and specifications into well-constructed blueprints. Using both manual and computer-based tools, these professionals create the plans necessary to complete a variety of projects. Some possible technical drafting professions include mechanical drafter, architectural drafter, civil drafter and electronics drafter.
Mechanical drafting involves creating detailed drawings and computer simulations that support the production of a variety of mechanical devices. Such a career involves the use of both computer-aided design (CAD) software programs and traditional drafting techniques. These professionals must also be familiar with many manufacturing processes, materials and metallurgy.
Architectural drafters support construction projects by drawing the architectural and structural features of a building. These workers often specialize in a particular type of building, typically residential or commercial. Furthermore, architectural drafters provide expertise in the specific materials used, such as steel, wood, or reinforced concrete.
Creating the topographical and relief maps used in a variety of engineering projects, civil drafters employ both traditional drafting techniques and computer-aided design. Employees in this career may be involved with the construction of many civil engineering projects, such as bridges, highways and water control systems. These professionals are also responsible for accurately representing topographical contours and elevations.
Working with other engineers to determine the specifications, electronics drafters are responsible for making diagrams of wiring and circuit board assemblies for electronics. They also create layout drawings that others will use when they install or repair equipment. They might also be involved with circuit board testing to make sure the boards work properly.
According to the Bureau of Labor Statistics (BLS), drafters generally complete a technical program, with an associate's degree being the most common (www.bls.gov). Both community colleges and technical institutes have relevant programs, although the BLS notes that technical institutes are more likely to offer diploma and certificate programs in drafting. While you can expect to study computer-aided design and drafting and sketching at either type of institute, the BLS mentions that you can also expect to take general education classes if you choose to attend a community college.
You can also choose to pursue optional certification from the American Design Drafting Association (ADDA). For example, you might get certified in mechanical or civil drafting. While you generally won't need drafting certification to find employment, getting certified might help you stand out to employers.
The BLS notes that there will be a 3% decline in drafter employment between 2014 and 2024. However, growth does vary by specialty, with electrical and electronics drafters having better career prospects and a 5% projected employment growth from 2014 through 2024. While architectural and civil drafter employment won't see much of a change, mechanical drafter employment will decline by 7%.
According to the BLS, the median annual salary of a mechanical drafter was $52,200 in May 2014, while an electronics or electrical drafter made a median wage of $58,790. Architectural and civil drafters earned less, with median annual earnings of $49,970. | https://learn.org/articles/What_are_Some_Common_Technical_Drafting_Professions.html |
As of 2018, San Felipe Del Rio Consolidated Independent School District has received an overall performance rating of "B."
District Accountability Summary
Texas Academic Performance Report
The Texas Academic Performance Reports (TAPR) pull together a wide range of information on the performance of students in each school and district in Texas every year. Performance is shown disaggregated by student groups, including ethnicity and low income status. The reports also provide extensive information on school and district staff, programs, and student demographics. More Info
2016-2017 District TAPR Report
Community and Student Engagement Ratings
House Bill 5’s community and student engagement requirements allow districts to showcase areas of excellence and success outside the limited scope of standardized test scores. In accordance with the Texas Education Code (TEC), §39.0545(a), each school district shall assign performance ratings to the district and each campus for community and student engagement indicators based on locally determined criteria. The nine factors evaluated include fine arts, wellness and physical education, community and parental involvement, 21st century workforce development program, second language acquisition program, digital learning environment, dropout prevention strategies, educational programs for GT students and compliance with statutory reporting and policy requirements.
2016-2017 San Felipe Del Rio CISD Community and Student Engagement Ratings
Federal Report Cards
The No Child Left Behind Act of 2001 (NCLB) requires federal accountability and reporting for all public school districts, campuses, and the state. The Federal Report Cards include the following information:
Part I: Percent Tested and Student Achievement by Proficiency Level
Part II: Student Achievement and State Academic Annual Measurable Objectives (AMOs)
Part III: Priority and Focus Schools
Part IV: Teacher Quality Data
Part V: Graduates Enrolled in Texas Institution of Higher Education (IHE)
Part VI: Statewide National Assessment of Educational Progress (NAEP) Results
District Level Report - 2016-17 Federal Report Card for San Felipe Del Rio CISD
Texas Consolidated School Rating Report
The Texas Consolidated School Rating (TCSR) report combines performance information from several different sources and reports it for each Texas public school district and campus. It includes:
San Felipe Del Rio CISD Ratings
The Financial Integrity Rating System of Texas (Schools FIRST) is administered by the Texas Education Agency and calculated on information submitted to TEA via the district's Public Education Information Management System (PEIMS) submission each year. The Schools FIRST accountability rating system assigns one of four financial accountability ratings to Texas school districts, with the highest being “Superior Achievement,” followed by “Above-Standard Achievement,” “Standard Achievement” and “Substandard Achievement.” For the 13th year in a row, the San Angelo Independent School District has earned a rating of “Superior Achievement” with a perfect score on the rating criteria. | https://www.sfdr-cisd.org/about-us/district-accountability/ |
Q:
Python3 Pandas - Create new DataFrame based on array of objects
I have an array of objects. I'm trying to loop through that array and create a new dataframe, then save that to a spreadsheet.
My object variables are like this:
def __init__(self, question, total):
self.question = str(question)
self.total = float(total)
self.answers = {}
question is a string of the question text
total is a number of the total votes the question received
answers is a dictionary containing data like: {'Yes': 5, 'No': 2, 'Maybe': 1}, a string for the answer choice and a number for the number of votes an answer received
I am trying to loop through the q_array of Question objects, append the question and total, then in a for loop below go through the answer items and append those as additional rows.
Here's the desired output/sheet:
Question Answer Total Percent
What color is the sky? 22
Red 8 36.4%
Green 2 9.1%
Blue 12 54.5%
Here's my current code:
writer = pd.ExcelWriter('master.xlsx')
sdf = pd.DataFrame(columns=('Question', 'Answer', 'Total', 'Percent'))
for data in q_array:
sdf.append({'Question': data.get_question(), 'Total': data.get_total()}, ignore_index=True)
for answer, number in data.get_answers().items():
sdf.append({'Answer': answer, 'Total': number, 'Percent': number_to_percent(number, data.get_total())}, ignore_index=True)
sdf.to_excel(writer, 'stats', index=False)
writer.save()
I'm trying to use .append() to add the new rows and select what data goes in the row. But when I print sdf it's empty and in the spreadsheet it has the columns but the rest of the data is missing. What am I doing wrong? Thanks for any help provided!
A:
The answer was simple: instead of calling sdf.append() and discarding the result, I needed to assign the result back to the DataFrame, i.e. sdf = sdf.append(...)
Here's the correct code:
writer = pd.ExcelWriter('master.xlsx')
sdf = pd.DataFrame(columns=('Question', 'Answer', 'Total', 'Percent'))
for data in q_array:
sdf = sdf.append({'Question': data.get_question(), 'Total': data.get_total()}, ignore_index=True)
for answer, number in data.get_answers().items():
sdf = sdf.append({'Answer': answer, 'Total': number, 'Percent': number_to_percent(number, data.get_total())}, ignore_index=True)
sdf.to_excel(writer, 'Stats', index=False)
writer.save()
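For context, DataFrame.append returns a new DataFrame rather than modifying the original in place, which is why the reassignment above is required. Note that DataFrame.append was later deprecated (pandas 1.4) and removed (pandas 2.0), so a forward-compatible sketch, assuming the same q_array objects and number_to_percent helper from the question, is to collect plain dicts in a list and build the frame once:
import pandas as pd

rows = []
for data in q_array:
    # One row for the question itself, then one row per answer choice
    rows.append({'Question': data.get_question(), 'Total': data.get_total()})
    for answer, number in data.get_answers().items():
        rows.append({'Answer': answer,
                     'Total': number,
                     'Percent': number_to_percent(number, data.get_total())})

sdf = pd.DataFrame(rows, columns=['Question', 'Answer', 'Total', 'Percent'])
with pd.ExcelWriter('master.xlsx') as writer:  # context manager saves on exit
    sdf.to_excel(writer, sheet_name='Stats', index=False)
This is also considerably faster than appending row by row, since each append copies the entire frame.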
| |
01 December 2021 - Computer Science PhD student Ma Yunshan, Kwan Im Thong Hood Cho Temple Chair Professor Chua Tat Seng and their collaborators have won the Best Student Paper Award at the ACM International Conference on Multimedia Retrieval (ICMR) 2021.
The conference is a premier scientific conference for multimedia retrieval and was held online from 16 – 19 November this year.
Ma, Prof Chua, Ding Yujuan, a visiting research intern at NUS Computing, and Professor Wong Wai Keung from the Institute of Textile and Clothing at Hong Kong Polytechnic University, presented their paper on Leveraging Two Types of Global Graph for Sequential Fashion Recommendation at the conference.
In the paper, the team proposed a more efficient and effective approach to develop a sequential fashion recommendation model. This refers to a fashion recommender system capable of recommending items that take into consideration a shopper’s sequential actions when shopping online.
Recommender systems used in online shopping platforms - an important part of the fashion industry - can improve users’ shopping experience and give retailers a boost in sales volume by directing users to their preferred product items.
However, building an effective fashion recommender system is not an easy feat, explained Ma on behalf of the team. They had to consider two important factors when developing a better system: shoppers’ long-term and short-term fashion preferences.
Long-term preferences refer to users’ individual fashion tastes (for example, a preference for edgy streetwear over vintage clothes), while short-term preferences refer to the purchases that may sequentially correlate to the items a shopper picks in that moment (for example, buying a matching skirt and blouse set, or looking for shirts that are currently trending).
As such, Prof Chua and his team proposed using two types of global graphs - the user-item interaction graph and item-item transition graph - to capture both long-term and short-term preferences in shoppers.
“In addition, we took advantage of an improved GNN model, named LightGCN, as the graph learning kernel to learn both user and item representations,” said Ma. “We conducted experiments on two public datasets from Alibaba iFashion and Amazon Fashion, and the experimental results in our paper demonstrate the effectiveness of our method.”
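(For readers unfamiliar with LightGCN: it is a simplified graph convolutional recommender in which embeddings are repeatedly smoothed over the normalized user-item interaction graph, without feature transformations or nonlinearities, and the layer outputs are averaged. The following is a schematic toy sketch of that propagation rule, not the authors' code; all shapes and data here are illustrative assumptions.)
import numpy as np

n_users, n_items, dim, n_layers = 4, 5, 8, 3
R = (np.random.rand(n_users, n_items) < 0.4).astype(float)  # toy interaction matrix

# Symmetric bipartite adjacency and its normalization D^-1/2 A D^-1/2
A = np.block([[np.zeros((n_users, n_users)), R],
              [R.T, np.zeros((n_items, n_items))]])
deg = A.sum(axis=1)
d_inv_sqrt = np.zeros_like(deg)
nz = deg > 0
d_inv_sqrt[nz] = deg[nz] ** -0.5
A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

E = np.random.randn(n_users + n_items, dim)  # layer-0 embeddings (learned in practice)
layers = [E]
for _ in range(n_layers):
    layers.append(A_norm @ layers[-1])       # linear propagation, no weights
E_final = np.mean(layers, axis=0)            # average over layers

users, items = E_final[:n_users], E_final[n_users:]
scores = users @ items.T                     # user-item recommendation scores
print(scores.shape)  # (4, 5)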
Said Prof Chua, who is also Director of the NUS-Tsinghua Extreme Search Center (NExT++), of the team’s win: “This paper rightly observes that a person's fashion taste is influenced by a myriad of factors. It is not only affected by individual tastes, which is a long term preference, but because of the need to be accepted by the community, it is also affected by global taste - the choice of other users and trends in the fashion industry. Our work transforms this observation into a graph-neural-network (GNN) formulation based on two types of graphs to capture these two types of preferences. The process of transforming observations to solutions based on the latest graph-based formulation is a key contribution of this paper. It is comprehensive, and makes our paper stand out against other related projects.”
For the paper, the researchers relied on their different areas of expertise to develop a cohesive solution.
“The challenging aspect is in coming up with a solution that incorporates all factors in a seamless and effective manner, and demonstrating that the solution works on large-scale real-world datasets. This work incorporates fashion knowledge from Yujuan and Wai Keung, industry knowledge from ViSenze, and state-of-the-art techniques from our group,” Prof Chua explained. ViSenze is an AI-powered product search software used by large retail stores globally, and was co-founded by Prof Chua.
“We had help from ViSenze in contributing their industry knowledge and market insights, as well as research support from other researchers in our group, especially on graph neural network,” he said. | https://www.comp.nus.edu.sg/news/2021-icmr-best-student-paper-award/ |
We produce games that we hope will bring people together in a social way and they may learn some new stuff along the way.
Website: http://www.thinkark.co.uk/playark
Location: Cardiff
Members: 56
Latest Activity: Feb 6
We wanted to do something extra for playARK Festival and Talks 2013, so we decided that this year we would run a hack week that would bring together nine individuals from different disciplines to…
Tags: gaming, technology, pervasive, creativity, hacking
Started by Alison John Oct 23, 2013.
This year we have some fantastic games for you to come and play. We have scoured the world for the best in street games and computer assisted games. All games are free apart from our headline…
Tags: digital, technology, reclaim, Festival, playARK
Started by Alison John Oct 8, 2013.
We've got a great line-up for playARK talks this year. The talks will be exploring the theme of reclaimed and will look at how story, design, technology, games and play is influencing new ways of…
Tags: reclaim, technology, art, design, playark
Started by Alison John Sep 19, 2013.
So we are hugely into collecting and mapping. We love the idea of learning new things and gaining new insights into what could be perceived as mundane. That's why we love games and play. So what are…
Started by Julian Sykes. Last reply by Julian Sykes Jul 6, 2011.
Aspiring private investigators, amateur sleuths and detectives we need you! We are looking for people to come and play test a new prototype game experience this weekend. If you are free and interested come and join us on Sunday 10th Feb at Wales Millennium Centre 2pm - 4pm. #whodunit #coldcase #playtest
A creative morning that delves into ways in which place, experience and story can be explored through digital and analogue technologies. Against the noisy backdrop of investment in VR and AR, can we create new dialogues with audiences that arouse the senses, engage the emotions, and are attuned to the environment?
The event will feature presentations from creative organisations and artists that have produced work at St Fagans National Museum of History.
The event is aimed at anyone who is interested in creating digital and/or analogue site-specific projects that connect people, story and place.
The day will be split into two parts:
10.30am -1.30pm
Talks and presentations focusing on exploration and discovery through creative practice. This will feature contributions from St Fagans' artists in residence, members of the digital team and researchers on immersive media. The morning session will be followed by lunch and networking.
2pm - 4pm [optional]
In the afternoon guests will be invited to take part in St Fagans' new immersive storytelling experience Traces (Olion) or can attend a games making workshop with yello brick producer Alison John (places for the workshop are limited to the first 20 sign ups).
THE ORGANISERS
Alison John, Yello brick
Jenny Kidd, Cardiff University
Sara Huws, Amgueddfa Cymru - National Museum Wales
The February Edition of playARK After Dark is nearly here!
This month we have some gaming guests who will be bringing some exciting games to playARK. Craig Quat from No Fit State will be introducing some circus games; Steve Donnelly, our partner in crime, will be running a new game he has been developing called DriftMob; and yello brick will be introducing you to a big game of the old school classic Werewolf!
As always you don’t need to know any of the rules before you arrive, all you need to bring is yourself and a sense of adventure.
Make sure you book a place!
https://www.eventbrite.co.uk/e/playark-after-dark-february-edition-...
playARK After Dark proudly presents Santa's Snow Dash! A Christmas Street Game.
Santa’s sleigh has crash-landed in Cardiff and the presents have been scattered all over town in the mayhem. You must help Santa and his reindeer to find all the presents in time and save Christmas. But…be careful: not all his reindeer are as helpful as they might seem, and some of Cardiff’s weirder inhabitants want the presents for themselves. Meet sleepy polar bears, mischievous penguins, and a couple of other surprises in your quest to retrieve all the presents.
In teams of six Santa's Snow Dash will be played out on the streets of Cardiff City Centre. The game will last approximately 1 hour and 30 minutes, and don’t worry if you don’t have a team of six, we will place you in a team with some other fun loving individuals.
After the game we will have a Christmas Party at the Urban Tap on Westgate Street with entertainment, mince pies and a prize for the best Christmas fancy dress.
Tickets are £7 in total: £5.86 is the price of the ticket and £1.14 is the handling fee that goes to Eventbrite. You can buy tickets here: Eventbrite
Please meet at the Urban Tap House at 19:00 to start the game.
YOU HAVE BEEN ACTIVATED. We have launched our latest game today. It's a follow up to our Secret Agent game EYE SPY. You can get tickets from our eventbrite page www.eyespy2.eventbrite.com
More talks from #playARK2013, Nick Tandavanijt from Blast Theory talks about using technology to create new forms of performance #performance #technology #interactive #theatre #digital
Nick Tandavanijt from yellobrick on Vimeo.
Missed last year's playARK Talks? We'll be releasing videos of all our speakers each week. First up is Alison Norrington talking about Reclaiming Stories:
Alison Norrington - playARK 2013 from yellobrick on Vimeo.
playARK is an annual two day festival that takes place in Cardiff. It explores playful approaches to learning, living and working now and in the future. The talks feature interesting playful projects (both physical and digital) taking place in the UK and beyond. We are interested in hearing about practices that subvert the norm in some way and push the boundaries of the predefined. The theme of the conference last year was RECLAIMED, looking in particular at story, traditional learning, urban environments, craft/hacking and looking how games and playful experiences exist within these areas.
The games day showcases some of the best talent in street and pervasive games, immersive theatre and interactive projects nationally and internationally.
Interesting course taking place in April:
Connected Stories
1st April - 3rd April 2014
------------
Connected Stories is a three day workshop event that will be an opportunity for individuals from different disciplines and methodologies to come together to learn, develop and explore what the narrative possibilities are within multiplatform storytelling. During the three days the participants will be taken through a process that will open discussion on transmedia with examples of best practice and lessons learned from leading industry professionals, in addition to inspirational sessions that explore and analyse successful case studies. The programme will also offer important opportunities to connect with others from different work practices to form the beginnings of new and exciting networks.
This short course is intended as an introduction to the world of transmedia and focuses on the idea of knowledge exchange, open experimentation and collaboration between practitioners of different media.
-----------
Course Venue:
The course will take place Cardiff Bay (Suite 6, Second Floor, 33 - 35 West Bute Street, CF10 5LH) and there are bursaries available for travel and accommodation for those who are not based within a 55 mile radius of the Cardiff area.
Who should apply?:
The course will offer 20 individuals the opportunity to participate and will be open to applications from creatives, writers, game designers and producers, technologists, linear media producers and directors (theatrical, film/TV and radio) who are interested in working creatively with storytelling and working collaboratively with others from different disciplines.
APPLY HERE
www.connectedstories.eventbrite.co.uk
and fill out a short application form. Deadline for application is Friday 21st March 2014.
If you have any queries please contact:
[email protected]
CASTING CALL for Audio Narrative
1 x actor and 1 x actress required for voice recording as part of a location based audio narrative experience.
Characters are roughly in their 30’s so we are looking for voices that fit that age range. The script is bilingual so applicants MUST be Welsh speakers.
About the project -
We are currently developing an audio based experience that can be accessed on the local cycle networks in the South Wales Valleys. Told through two voices, the narrative will guide participants on a unique adventure that aspires to inject a little extra imagination into journeys. The project aims to encourage new audiences to engage and offer an unique experience to cyclists and other path users.
Actors will be required for two and a half days at the end of February/beginning of March for:
Rehearsal - (half a day)
Audio Recording - (two days)
Fee for each performer is £375.00
Auditions will take place on Wednesday 12th February 2014 in Cardiff.
Please send a performance CV and voice reel (if you have one) to [email protected] by 12 Noon Friday 7th February 2014.
Only two weeks to go until playARK Festival and Talks 2013.
The Talks have a fantastic line up this year from an artist who uses augmented reality to create stories around her pottery, companies who are using the urban environment to create new theatrical experiences to exciting innovations in technology used in the classroom including the use of robots to connect children with autism to learning.
We have a vast amount of games on the Games day including our headline game Block Party - a roaming theatrical game experience that takes place in Cardiff Bay. | https://community.nationaltheatrewales.org/group/playark |
Since tomorrow is the 4th of May, here's a little Star Wars themed post to prepare you mentally to all the bad jokes coming tomorrow.
During a session of the galactic senate all the senators are sitting in an n*n grid. A sudden outbreak of JarJar flu (which lasts forever and causes the infected to speak like JarJar Binks) causes some of the senators to get infected.
After that the infection begins to spread step by step. Two senators are adjacent if they share a whole edge on the grid (i.e., top,bottom,right,left), which means we exclude diagonals.
Your code doesn't need to handle invalid inputs like a list greater than n*n or coordinates that aren't distinct.
n being the side of the grid (which means the grid will be an n*n grid), and the list of couples of integers being the coordinates of the cells of the initially infected senators.
The bottom left of the grid is [0,0] and the top right is [n-1,n-1]. The top left is [0,n-1].
Remember that this is code-golf, thus the shortest answer in bytes wins !
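Since the exact spread rule is not restated above, here is an ungolfed reference simulation in Python, assuming the rule implied by the answers below: a healthy senator becomes infected once at least two adjacent senators are infected (which is why some grids never become fully infected).
def steps_to_full_infection(n, infected):
    # Grid of booleans; coordinates are (x, y) with (0, 0) at the bottom left.
    # Orientation does not affect the step count, since adjacency is symmetric.
    grid = [[False] * n for _ in range(n)]
    for x, y in infected:
        grid[x][y] = True
    steps = 0
    while not all(all(row) for row in grid):
        newly = [(i, j)
                 for i in range(n) for j in range(n)
                 if not grid[i][j] and sum(
                     grid[i + di][j + dj]
                     for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= i + di < n and 0 <= j + dj < n) >= 2]
        if not newly:      # infection has stalled; the grid never fills
            return None
        for i, j in newly:
            grid[i][j] = True
        steps += 1
    return steps

print(steps_to_full_infection(3, [(0, 0), (1, 1), (2, 2)]))  # -> 2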
Takes the enclosed matrix as its argument and returns the step number at which infection completes, i.e. 1 = already fully infected, 0 = does not fully spread; this is just 1 + the OP's numbering.
Uses 1-indexing for the output.
It raises an error if it isn't possible.
Where \n represents the literal newline character. Takes input as a string of 0s and 1s in a newline-delimited array. Returns NaN if the grid will never become fully infected.
| https://codegolf.stackexchange.com/questions/118731/may-the-fourth-be-with-flu |
Stratified squamous epithelial cells are found in a number of organs, including the skin epidermis and the thymus. The progenitor cells of the developing epidermis form a multi-layered epithelium and appendages, like the hair follicle, to generate an essential barrier to protect against water loss and invasion of foreign pathogens. In contrast, the thymic epithelium forms a three-dimensional mesh of keratinocytes that are essential for positive and negative selection of self-restricted T cells. While these distinct stratified epithelial tissues derive from distinct embryonic germ layers, both tissues instruct immunity, and the epithelial differentiation programs and molecular mechanisms that control their development are remarkably similar. In this review, we aim to highlight some of the similarities between the thymus and the skin epidermis and its appendages during developmental specification.
© 2014 Wiley Periodicals, Inc.
| https://www.ncbi.nlm.nih.gov/pubmed?term=25176390 |
The ‘housing first’ approach is a well-developed strategy to successfully and swiftly end homelessness. It concentrates on moving individuals and families experiencing homelessness from the streets or temporary shelters directly into stable housing.
This approach requires no preconditions to access to stable housing like community-compliance requirements and treatment clearance, and is centered on personal choice and lifelong recovery.
The ‘housing first’ program is complemented with support intervention programs to reassure the stability of housing, hence, preventing people from returning to homelessness.
Through added support services, overall quality of living is improved among the recipients of the housing first program. Most of all, they are taught on how to foster self-sufficiency and personal responsibility.
The housing first approach, combined with these complementary support services, has proven to be very effective in resolving the problem of homelessness, particularly chronic homelessness.
It has resulted in improved physical and behavioral health, permanence in residency, and the reduced dependency on emergency services like rehabilitation centers, treatment facilities, jails, and other emergency departments. | https://www.makepovertyhistory.ca/housing-first/ |
Finally, it provides management recommendations, or key action statements, for lower-risk infants. A BRUE is diagnosed only when there is no explanation for a qualifying event after conducting an appropriate history and physical examination. By using this definition and framework, infants younger than 1 year who present with a BRUE are categorized either as (1) a lower-risk patient on the basis of history and physical examination for whom evidence-based recommendations for evaluation and management are offered or (2) a higher-risk patient whose history and physical examination suggest the need for further investigation and treatment but for whom recommendations are not offered.
This clinical practice guideline is intended to foster a patient- and family-centered approach to care, reduce unnecessary and costly medical interventions, improve patient outcomes, support implementation, and provide direction for future research.
Each key action statement indicates a level of evidence, the benefit-harm relationship, and the strength of recommendation. This clinical practice guideline applies to infants younger than 1 year and is intended for pediatric clinicians. The guideline has 3 primary objectives. First, it recommends the replacement of the term apparent life-threatening event (ALTE) with a new term, brief resolved unexplained event (BRUE).
Second, it provides an approach to patient evaluation that is based on the risk that the infant may have a recurring event or a serious underlying disorder. Third, it provides evidence-based management recommendations, or key action statements, for lower-risk patients whose history and physical examination are normal. It does not offer recommendations for higher-risk patients whose history and physical examination suggest the need for further investigation and treatment (because of insufficient evidence or the availability of clinical practice guidelines specific to their presentation).
This clinical practice guideline also provides implementation support and suggests directions for future research. In some cases, the observer fears that the infant has died. First, under the ALTE definition, the infant is often, but not necessarily, asymptomatic on presentation.
The evaluation and management of symptomatic infants (eg, those with fever or respiratory distress) need to be distinguished from that of asymptomatic infants. Second, the reported symptoms under the ALTE definition, although often concerning to the caregiver, are not intrinsically life-threatening and frequently are a benign manifestation of normal infant physiology or a self-limited condition.
A definition needs enough precision to allow the clinician to base clinical decisions on events that are characterized as abnormal after conducting a thorough history and physical examination. For example, a constellation of symptoms suggesting hemodynamic instability or central apnea needs to be distinguished from more common and less concerning events readily characterized as periodic breathing of the newborn, breath-holding spells, dysphagia, or gastroesophageal reflux (GER).
Furthermore, events defined as ALTEs are rarely a manifestation of a more serious illness that, if left undiagnosed, could lead to morbidity or death. Yet, the perceived potential for recurring events or a serious underlying disorder often provokes anxiety in caregivers and clinicians. A more precise definition could prevent the overuse of medical interventions by helping clinicians distinguish infants with lower risk.
For these reasons, a replacement of the term ALTE with a more precise term could improve clinical care and management. In this clinical practice guideline, a more precise definition is introduced for this group of clinical events: brief resolved unexplained event (BRUE). The authors of this guideline recommend that the term ALTE no longer be used by clinicians to describe an event or as a diagnosis.
For example, the presence of respiratory symptoms or fever would preclude classification of an event as a BRUE. Similarly, an event characterized as choking or gagging associated with spitting up is not included in the BRUE definition, because clinicians will want to pursue the cause of vomiting, which may be related to GER, infection, or central nervous system (CNS) disease. Clinicians should use the term BRUE to describe an event occurring in an infant younger than 1 year. Moreover, clinicians should diagnose a BRUE only when there is no explanation for a qualifying event after conducting an appropriate history and physical examination (Tables 2 and 3).
Historical Features To Be Considered in the Evaluation of a Potential BRUE
Physical Examination Features To Be Considered in the Evaluation of a Potential BRUE
Differences between the terms ALTE and BRUE should be noted. First, the BRUE definition has a strict age limit. Second, an event is only a BRUE if there is no other likely explanation.
Clinical symptoms such as fever, nasal congestion, and increased work of breathing may indicate temporary airway obstruction from viral infection. Events characterized as choking after vomiting may indicate a gastrointestinal cause, such as GER.
Although such perceptions are understandable and important to address, such risk can only be assessed after the event has been objectively characterized by a clinician. Episodes of rubor or redness are not consistent with BRUE, because they are common in healthy infants. Seventh, because choking and gagging usually indicate common diagnoses such as GER or respiratory infection, their presence suggests an event was not a BRUE.
For infants who have experienced a BRUE, a thorough history and physical examination are necessary to characterize the event, assess the risk of recurrence, and determine the presence of an underlying disorder (Tables 2 and 3).
In the absence of identifiable risk factors, infants are at lower risk and laboratory studies, imaging studies, and other diagnostic procedures are unlikely to be useful or necessary. However, if the clinical history or physical examination reveals abnormalities, the patient may be at higher risk and further evaluation should focus on the specific areas of concern.
Patients who have experienced a BRUE may have a recurrent event or an undiagnosed serious condition (eg, child abuse, pertussis, etc) that confers a risk of adverse outcomes. Although this risk has been difficult to quantify historically, and no studies have fully evaluated patient-centered outcomes (eg, family experience survey), the systematic review of the ALTE literature identified a subset of BRUE patients who are unlikely to have a recurrent event or undiagnosed serious conditions, are at lower risk of adverse outcomes, and can likely be managed safely without extensive diagnostic evaluation or hospitalization.
Nonetheless, most events were less than one minute. By consensus, the subcommittee established a duration threshold, but it was unclear how the need for CPR was determined.
Therefore, the committee agreed by consensus that the need for CPR should be determined by trained medical providers. To be designated lower risk, the following criteria should be met (see Fig 1, "Diagnosis, risk classification, and recommended management of a BRUE"): no concerning historical features (see Table 2) and no concerning physical examination findings (see Table 3). Infants who have experienced a BRUE who do not qualify as lower-risk patients are, by definition, at higher risk.
Unfortunately, the outcomes data from ALTE studies in the higher-risk population are unclear and preclude the derivation of evidence-based recommendations regarding management.
Thus, pending further research, this guideline does not provide recommendations for the management of the higher-risk infant.
Nonetheless, it is important for clinicians and researchers to recognize that some studies suggest that higher-risk BRUE patients may be more likely to have a serious underlying cause, a recurrent event, or an adverse outcome.
For example, infants younger than 2 months who experience a BRUE may be more likely to have a congenital or infectious cause and be at higher risk of an adverse outcome. Infants who have experienced multiple events or have a concerning social assessment for child abuse may warrant increased observation to better document the events or contextual factors.
A list of differential diagnoses for BRUE patients is provided in Supplemental Table 6. In July 2013, the American Academy of Pediatrics (AAP) convened a multidisciplinary subcommittee composed of primary care clinicians and experts in the fields of general pediatrics, hospital medicine, emergency medicine, infectious diseases, child abuse, sleep medicine, pulmonary medicine, cardiology, neurology, biochemical genetics, gastroenterology, environmental health, and quality improvement. All panel members declared potential conflicts on the basis of the AAP policy on Conflict of Interest and Voluntary Disclosure.
Subcommittee members repeated this process annually and upon publication of the guideline. All potential conflicts of interest are listed at the end of this document.
| http://super-bahis-guvenilir.xyz/dominant-eye/novartis-news.php |
ASOS PLC said Wednesday that it swung to a pretax loss for fiscal 2022 on higher sales costs, and distribution and administrative expenses, and said the business environment remains volatile going into the new year.
The online fashion retailer ASC, +11.22% posted a pretax loss for the year ended Aug. 31 of 31.9 million pounds ($36.1 million) compared with a profit of GBP177.1 million for a year earlier.
Its gross margin decreased by 1.8 percentage points to 43.6% while revenue rose to GBP3.94 billion from GBP3.91 billion for the year before. The company said it saw a change in consumer behavior as customers facing inflation and disposable income-pressure increasingly returned items, leading to lower-than-expected second half income.
ASOS said the business environment going into fiscal 2023 remained volatile, with September showing a slight improvement from August. Within the U.K., the company expects a decline in the apparel market over the next year but is confident it can take share in that environment.
The company said it expects to report a loss in the first half of fiscal 2023, driven by normal profit phasing and exacerbated by elevated markdowns to clear stock from a change in commercial model. It further expects a non-cash stock write-off of GBP100 million to GBP130 million. | https://outperformdaily.com/dow-jones-newswires-asos-swings-to-loss-predicts-volatile-start-to-2023/ |
Our school is continuing an exciting “journey” as we implement another year of a schoolwide program of Positive Behavior Interventions and Supports, also known as PBIS. PBIS is a school-wide system that provides:
- A common purpose and approach to discipline
- A clear set of positive expectations and behaviors
- Procedures for teaching expected behaviors
- A continuum of procedures for encouraging expected behaviors
- A continuum of procedures for discouraging inappropriate behaviors
- Procedures for ongoing monitoring and evaluation
Our theme, “Lions That ROAR”, will be used to implement PBIS.
Lions that ROAR are:
Respectful
On Task
Always Prepared
Responsible
Students will be taught specifically what those expectations mean in all areas of the school. You will know what our expectations are because you will hear “Lions That ROAR” many times! Your child will be excited to participate in PBIS because expectations are clearly defined. Students will be encouraged through positive reinforcement of expected behavior, and they will not want to miss the school-wide activities planned. Please ask your child to share what is happening at NSES as they become one of our “Lions That ROAR”!
You can see our school-wide matrix and Daily Discipline Flow map under the “For Students’ tab on our website. The matrix gives you an outline of what your child is expected to do and the flow map show the consequences that result when not meeting our school-wide expectations. | https://nses.greenwood52.org/apps/pages/index.jsp?uREC_ID=2736199&type=d&pREC_ID=2313730 |
Are we being taken by beings from outer space or another dimension? A troubling phenomenon has been occurring for centuries – countless individuals have very real memories of being taken secretly against their will by Alien entities, many being subjected to complex physical and psychological procedures.
| http://u-p-o.com/index.php/2022/09/02/abducted-by-aliens-ufo-encounters-of-the-4th-kind/ |
7-Year-Old Makes Food Allergy Video With Puppets
Third-grader Corey Shive recently won an award from the Food Allergy and Anaphylaxis Network for a YouTube video he wrote, directed, and starred in. The video is a puppet show that educates viewers about food allergies. Corey, who is allergic to nuts, recently explained to the Pennsylvania Times Herald: "It’s an award for trying to make food allergies better. I made a puppet show for people to learn about food allergies."
The puppet show runs three and a half minutes, and is titled "Dolphin and Dog talk about food allergies." Corey is Dolphin, while his friend, fellow student Caitlin Laska, plays the role of Dog. During the kid-friendly puppet show, Dog asks questions about food allergies and Dolphin answers them.
The two puppets discuss the basic principles of food allergies, how they work, and what they mean for the person with food allergies. Dog asks questions like "How can you tell you’re having an allergic reaction?", while Dolphin explains, "Food allergies are when someone gets an allergic reaction to food. An allergic reaction is when your body thinks the food you ate is bad, but it really isn’t."
The writing and research was all Corey's. He explains that he checked out several library books about food allergies, taking notes as he read them. He then created the puppets, and built the stage from cardboard and construction paper. After viewing the video on YouTube, the Food Allergy & Anaphylaxis Network awarded him with a Youth Special Achievement Award. He will formally receive that award on April 28, in a FAAN conference in Tarrytown, New York. Watch Corey's video at http://youtu.be/iCpwfqY6kbY.
| https://www.peanutallergy.com/news/food-allergy-news/7-year-old-makes-food-allergy-video-with-puppets |
Cooper journeys through works of three 19th century master composers.
Elisabeth Murdoch Hall, Melbourne Recital Centre
August 26, 2014
On Tuesday evening at the Melbourne Recital Centre, Imogen Cooper led an attentive audience on a journey through the works of three master composers of the 19th century. This was an impressive recital right from the beginning to the very last note of the encore. Brahms’ arrangement for solo piano of the Theme and Variations from his own first string sextet opened the program. This work has a noble theme followed by variations which contain increasingly florid figurations before the work winds down to a gentle place of rest. This emotional shape was to be a familiar thread throughout the recital and in the Brahms, Cooper gleaned tremendous moments of beauty from each variation and bookended them with a delivery of the opening theme and concluding bars which made the work grow and subside with a powerful unifying spirit.
The next work on the program was the Davidsbündlertänze by Schumann. This piano cycle is very unique amongst Schumann’s other famous examples such as the Carnaval, Fantasiestücke or Kreisleriana. In general, the Carnaval has a mix of shorter and longer pieces that mingle to form a whole. In the Fantasiestücke and Kreisleriana, every piece is on a larger scale. But in the Davidsbündlertänze, the general trend (with some exceptions) is towards shorter pieces in the first nine and then the remaining nine are more extended. And there were many fine moments in the first nine movements as Cooper delved into the jumpy figures and tender replies of the opening number. There was more beautiful playing in No 5, a gripping No 7 followed by a whirlwind of virtuosity in Nos 8 and 9 that concluded the first half of the piece. But the most wonderful playing occurred after this in the more extended pieces of the second half including No 4 where the central choral was delivered with such lyrical beauty that one was fortunate to have it come back twice as a result of the repeat; also No 5 was played with great tenderness as was the sweeping central melody of No 6. In the second to last number, Schumann brings back music from the beginning, this time couched in new harmonies and with a coda. Cooper played this number with an elevated intensity and then brought the cycle to a natural place of rest with a simple but touching conclusion.
After the interval, Cooper played the Novellette, Op. 21 No 2 in D major by Schumann. This piece is very similar in structure and spirit to many of the movements from the second half of the Davidsbündlertänze and Cooper was able to bring the same combination of virtuosity and lyricism here as she did in the previous examples. The night’s program ended with a majestic performance of the Schubert B-flat major Sonata where Cooper once again impressed with her sensitive and finely gradated playing. From the intimate delivery of the opening theme, to the musical silences and outbursts of the concluding material of the exposition, through the mystery and drama of the development section, and to the whispering bass trills before the recapitulation, Cooper was once again a powerful unifying agent for the music’s vast landscape. And this continued in the second movement where the musical sunshine and darkness of Schubert’s key changes were sensitively conveyed. After a charming dash through the Scherzo, Cooper gave a fiery account of the finale. This is an extended movement where the musical struggle occurs right in the middle and only finds rest in the triumphant conclusion. Once again, Cooper was able to combine moments of beauty and a powerful driving force towards these crucial moments that framed the movement. After extended applause from the audience, Cooper gave another soulful account of Schubert, this time the Allegretto in C minor. | https://limelightmagazine.com.au/reviews/review-imogen-cooper-musica-viva/ |
We can find it very easy to feel consumed by our jobs, especially as nonprofit leaders. If you’re like me, you’re extremely passionate about executing your mission, one you consider crucial in today’s world. Yet you may also feel there’s never enough time to do it all, and fires constantly popping up further increase your time demands. Your job’s pull can quickly take over all your time – both professional and personal.
Clients frequently ask me for advice on how to find the elusive balance between career and home life. I always share what I call the “Power of No” – a lesson I learned myself seven years ago.
I’ve always prided myself on being the "go-to guy." I was the one who would work whatever hours necessary to satisfy a client or agree to jump on a plane with 24 hours' notice to make a meeting. In my second year as a CEO, the difficulty of integrating my work and personal life had become very apparent, and this way of being was becoming increasingly unsustainable.
That year marked the arrival of my first son, Benjamin. At the end of my wife's maternity leave, I decided that I was no longer going to consult on Mondays but rather use it as a “Daddy Day” with my son. (This tradition continues today with our second boy, Finn). Around that time, a colleague asked me about the most important lesson I had learned as a parent. Without thinking, I quickly responded, "The power of no." This wisdom of knowing when to say “no” to professional demands has helped me effectively integrate work and life, and I promise it can help you, too. “Integration” means ensuring that you are able to accommodate both work and life as best as possible. Sometimes you have to engage in life when other people are traditionally working, and vice-versa.
I learned that reserving Mondays as “Daddy Day” and functioning in a household with two working parents meant I needed to start saying “no” to some client asks, and I quickly became quite adept at it. Consequently, my organization continued to grow larger, I was happier, and I felt like I had avoided short-changing time with my children as well as productive work-time. To accomplish this, I have adopted three primary strategies that I consider crucial to helping me realize life/work integration.
I’m happy to share the 3 key ways I’ve discovered to better integrate your personal and professional life:
- Set Boundaries – The first step to saying “no” is to set clear boundaries and metrics for yourself and your organization. The boundaries can be obvious, such as my saying "no Mondays," but I also recommend metrics for other things important to your organization’s culture.
Here's a prime example that comes up all the time: travel. When most people meet me, they assume that as a consultant, I'm constantly on the road. In truth, I'm not at all – I rarely travel more than 10 days a year for work. This is a conscious decision I made, not only to keep me connected to my family, but also because it positively impacts profitability.
For many nonprofit leaders, more time traveling means less time to devote to actively leading the organization. When speaking with a leader who is concerned about travel, I always suggest setting a metric that aligns to their business model. For example, “No more than 10 days traveling per year” or “No more than five days a month.” Having such a metric will help you to assess when you should say “no” to a new commitment or suggest a date for the following month instead.
- Maximize Flexibility – Another strategy I use involves staying flexible about where and when I work. When we talk about work-life integration versus work-life balance, the key difference lies in when and how you're doing your work. Work-life balance means making clear divisions between home and work life, whereas integration is about maximizing the time for each and interweaving them. This might mean being very cognizant of which tasks you must do during normal business hours – even if they are not the most urgent – and holding off on other tasks until evening hours or weekends. This trade-off could allow you to leave space during prime daytime weekday hours to pursue the activities most personally important to you.
- Rely on Your Team – Finally, it proves important to consider how you can best use your team to complement your time. For example, if you know you’re going to be traveling or focused on a particular project more than usual, ensure you have a plan for who will provide any needed support during this time. That way, if something urgent comes up or a meeting needs to be called, the need can be addressed without alterations to your plans. In other words, always make sure you have good back-up. The result from the customer, funder, or stakeholders’ perspective is seamless service – they might not even know you’re not available.
Two last thoughts: First, know that there is a cost to life-work integration. Leaders sometimes have to say “no” to some projects or additional scope if it impacts their organization’s life-work integration culture. In most cases, customers will respect your commitment to maintaining a positive work environment for your organization and may even see it as a good sign of strong character and self-worth. In my own case, I’ve only had one client ever say no to working with me because of my decision to stay home on Mondays with my son. By and large, there has been no measurable negative impact on my business.
Second, this is not an exhaustive list – other seasoned leaders have developed many other effective strategies to successfully integrate their personal and professional lives. I simply ask you to focus on the main question: How are you saying “no” and integrating life and work?
Comment below and help your peers (and me) benefit from your ideas. | https://www.civstrat.com/blog/the-power-of-no |
Available as an ebook at:
Amazon Kindle
Apple iBooks
Barnes & Noble Nook
Chegg Inc
eBooks Corp.
Google Play
Poe Abroad: Influence Reputation Affinities
University of Iowa Press, 1999
eISBN: 978-1-58729-321-4 | Paper: 978-1-58729-363-4 | Cloth: 978-0-87745-697-1
Library of Congress Classification PS2638.P62 1999
Dewey Decimal Classification 818.309
ABOUT THIS BOOK
Perhaps no one would be more shocked at the steady rise of his literary reputation, on a truly global scale, than Edgar Allan Poe himself. Poe's reputation has climbed steadily since his death in 1849.
In Poe Abroad, Lois Vines has brought together a collection of essays that document the American writer's influence on the diverse literatures—and writers—of the world. Over twenty scholars demonstrate how and why Poe has significantly influenced many of the major literary figures of the last 150 years.
Part One includes studies of Poe's popularity among general readers, his influence on literary movements, and his reputation as a poet, fiction writer, and literary critic. Part Two presents analyses of the role Poe played in the literary development of specific writers representing many different cultures.
Poe Abroad commemorates the 150th anniversary of Poe's death and celebrates his worldwide impact, beginning with the first literal translation of Poe into a foreign language, “The Gold-Bug” into French in 1845.
Poe died knowing only that some of his stories had been translated into French. He probably never would have imagined that his work would be admired and imitated as far away as Japan, China, and India or would have a lasting influence on writers such as Baudelaire, August Strindberg, Franz Kafka, Jorge Luis Borges, Julio Cortázar, and Tanizaki Junichiro.
As we approach the sesquicentennial of his death, Poe Abroad brings together a timely one-volume assessment of Poe's influence throughout the world.
| https://www.bibliovault.org/BV.book.epl?ISBN=9781587293634 |
Code P0421 ("Warm Up Catalyst Efficiency Below Threshold, Bank 1") means the powertrain control module senses that the catalytic converter system is not operating as it should during warm-up. This period covers the time from when the vehicle is first started to approximately five to ten minutes later.
The powertrain control module uses data from the oxygen sensors upstream and downstream of the catalytic converter and compares the two readings. A healthy catalyst stores and releases oxygen, so the downstream signal should be much steadier than the upstream one; if both readings are identical or very similar, the catalyst is not doing its job, the Check Engine light will illuminate, and a code P0421 will be stored. P0421 specifically indicates that this problem occurs during the warm-up period of the vehicle.
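To make the comparison concrete, here is a small, hypothetical Python sketch of this kind of efficiency check. It illustrates the principle only: the function name, threshold and voltage samples are invented, and a real powertrain control module uses manufacturer-specific algorithms.

def catalyst_looks_inefficient(upstream, downstream, threshold=0.8):
    """Return True if the downstream O2 trace tracks the upstream one
    too closely, i.e. the catalyst is not damping the oscillations."""
    n = min(len(upstream), len(downstream))
    if n == 0:
        return False
    # Mean absolute difference between the two signals; a healthy
    # catalyst yields a much steadier downstream trace.
    diff = sum(abs(u - d) for u, d in zip(upstream, downstream)) / n
    spread = max(upstream) - min(upstream)  # size of the upstream swings
    similarity = 1.0 - (diff / spread if spread else 0.0)
    return similarity >= threshold

# Illustrative voltage samples: the downstream sensor mirrors the
# upstream one, which is suspicious during warm-up, so this prints True.
upstream_v = [0.1, 0.8, 0.2, 0.9, 0.1, 0.8]
downstream_v = [0.1, 0.7, 0.2, 0.8, 0.2, 0.7]
print(catalyst_looks_inefficient(upstream_v, downstream_v))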
What are the causes of the P0421 code?
Some potential causes of the P0421 code are as follows:
- Faulty catalytic converter (most likely cause, unless other codes are also recorded).
- Defective oxygen sensor
- Faulty oxygen sensor circuit
- Faulty powertrain control module.
What are the symptoms of the P0421 code?
Symptoms of the P0421 code that the driver may notice include:
- The check engine light will come on.
- The engine may not start.
- The engine may lack power or hardly accelerate.
- Strange noises may be heard while driving.
How can a mechanic diagnose the P0421 code?
If P0421 is the only code stored in the system, the mechanic can begin diagnosing the problem by looking at the exhaust system. A visual inspection is always the best start for any car diagnostics.
There are a few things the mechanic can do to check the condition of the catalytic converters: smell the exhaust gases for signs of excess fuel, watch the converters with the engine running to see if they glow red, and road-test the vehicle to confirm the customer's symptoms.
If the visual test is OK, the mechanic can proceed to testing the oxygen sensors and the powertrain control module, starting with the sensors. If any of the oxygen sensors are damaged, they will be replaced with the customer's approval.
The most common errors when diagnosing the P0421 code
A common mistake a mechanic can make when diagnosing the P0421 code is to skip all diagnostics and simply replace the catalytic converter. While a failed converter is the most likely cause of the P0421 code, it is not the only one, and all other possibilities should be ruled out before any parts are replaced. This matters all the more because catalytic converters are often the most expensive part of the entire exhaust system.
How serious is the P0421 code?
The P0421 code can be very serious. If the catalytic converter has failed and the engine is not operating properly, severe engine damage can result if the vehicle continues to be driven. In order for the engine to function properly, it must be able to breathe properly. If the catalytic converter has molten internals or is clogged with carbon deposits, the engine will not be able to breathe properly and will therefore run poorly.
What repairs can fix the code P0421?
Repairs that can fix the P0421 code may include:
- Replacing the catalytic converter
- Replacing the oxygen sensor
- Repairing or replacing the wiring related to the oxygen sensor
- Replacing the powertrain control module
Additional comments about the P0421 code?
It is important that if the catalytic converter is damaged, it is replaced with an OEM (Original Equipment Manufacturer) quality part. Some aftermarket catalytic converter manufacturers produce cheap parts that can fail prematurely. Since catalytic converter replacement usually comes with high labor costs, it is a good idea to invest in a high-quality part to ensure that the job only needs to be done once.
Mallory Sapoff, PsyD, is a clinical psychologist at Madison Park Psychological Services. Dr. Sapoff utilizes a creative, personable, and flexible approach to the therapeutic relationship with an understanding that all clients enter treatment with unique perceptions and experiences of the world. She has been trained in psychodynamic, cognitive-behavioral, relational, and humanistic therapeutic modalities. Her approach to psychotherapy is centered on the individual’s particular needs and goals throughout the treatment process. She is committed to providing a safe and accepting therapeutic environment, and the tools needed for desired change and progress. Dr. Sapoff has experience working with clients from diverse backgrounds and of all ages. She has additional familiarity and extensive experience with adults experiencing anxiety, depression, family conflict, major life transitions, and relationship issues.
Dr. Sapoff earned her master’s in Counseling Psychology from Pace University in 2009. She subsequently worked for the Mental Health Association of Westchester providing clinical services to diverse populations of adults and children in inpatient and outpatient settings. She then successively earned her master’s in School Psychology and doctorate in Clinical Psychology at Pace University. She is currently an Adjunct Professor at Pace University, where she teaches more than 7 classes. Prior to joining Madison Park Psychological Services, she provided psychotherapy and complete neuropsychological assessment at the William Alanson White Institute, McShane Center for Psychological Services, and Four Winds Hospital.
| https://www.madisonparkpsych.com/MallorySapoffPsyD.en.html |
ORDINARY Western Australians will be surveyed on their views on farmers and farming before the end of the year as a precursor to a more proactive push back against anti-agriculture activism.
A quantitative community perceptions survey will aim to establish formal baseline data on what the general urban-based WA community thinks about those involved in primary industries - it is also likely to canvass views on commercial fishing - and current industry practices.
It will seek to establish how much the non-farming and fishing community actually knows about primary industries, how accurate its perceptions of them are and identify where misconceptions came from.
Information garnered by the survey will be used to better and earlier target future factual, educational and possibly personalised responses explaining to the general community the reasons why farmers do what they do.
It is hoped the survey data will better position WA's primary industries to more effectively counter emotive, shock image, often industry-damaging misinformation tactics and threats employed by activists, particularly on social media platforms.
Although proposed as a one-off as part of a Trust In Primary Production pilot project, the survey could establish a benchmark data set against which the effectiveness of pro-primary industry activities, designed to connect with urban consumers and build community trust, could be gauged.
Comparing results of similar surveys conducted at regular intervals into the future could establish how successful primary industries are in countering activism, building trust and protecting markets, a proponent of the survey said last week at the Grower Group Alliance (GGA) Annual Forum in Fremantle.
Future surveys and results comparisons could also monitor the level of influence on consumer perceptions and purchasing decisions vegan and other activists actually have with campaigns in relation to primary industries' impact on climate, chemical usage, genetic engineering, animal welfare, sustainability and other issues.
Food Alliance WA is driving the Trust In Primary Production pilot project.
The Department of Primary Industries and Regional Development (DPIRD) is a Food Alliance WA member, as are WAFarmers, Grains Industry Association of WA (GIWA) and the West Australian Fishing Industry Council.
The GGA, WinesWA, VegetablesWA and the Kimberley Pilbara Cattlemen's Association are also involved.
Trust In Primary Production will dovetail in with a number of national community trust-building or exploring initiatives like the National Farmers' Federation (NFF) Telling Our Story, launched last month in conjunction with Meat and Livestock Australia (MLA) and business leaders to try to overcome the rural-urban disconnect between farmers and consumers.
Agrifutures Australia is also managing a scoping study for building and maintaining community trust in primary industries on behalf of research and development corporations, including Australian Egg Corporation, Australian Pork Ltd, Cotton Research and Development Corporation, Dairy Australia, Fisheries Research and Development Corporation, Forest and Wood Products Australia, Grains Research and Development Corporation and MLA.
The CSIRO this month launched its Voconiq offshoot to "scale up" its community insights service to specifically help the agriculture and mining sectors build community trust, having identified social licence as one of the greatest risks to business they face.
National body GrainGrowers has also launched its Behind Australian Grain project to assess economic, social and environmental challenges to the grains industry and, in conjunction with KPMG, develop a sustainability framework for the industry into the future addressing these issues.
Locally, there have been projects to personalise agriculture as a way of connecting with consumers, including the Visible Farmer Project which launched the first of 15 web documentaries showcasing WA women farmers yesterday.
GIWA chief executive officer Larissa Taylor told the forum the Trust In Primary Production survey results would be available in December and be widely shared.
She told the forum "seed funding for a little pilot project" had grown out of ongoing discussions with DPIRD, going back to 2015 and previous director general Rob Delane, about the "culture" of agriculture.
"What the survey is hoping to measure is how the community perceives agriculture," Ms Taylor said.
She said key figures in agriculture "need to practise the trust conversation".
Earlier Ms Taylor said agriculture needed to have an "open and honest" conversation in explaining to the community what it does and why it does it.
As an example, she said, farmers asked about chemical usage should explain the "whole story".
"We have a weed burden which requires herbicide use as a direct result of decisions taken 30 years ago on the type of farming we do," Ms Taylor said.
"With the introduction of no-till we don't rip up our paddocks and watch our top soils blow away anymore, instead we put down a thin line and put our seed and fertiliser into that, but it has created a weed burden.
"The good news is, we don't spray the whole crop, we have the technology now to just spot spray the weeds.
"People will accept a reasonable explanation."
Later DPIRD stakeholder engagement director Karen Carrierio said Food Alliance WA was a cross-sector working group comprising WA agrifood industry associations and industry representatives.
"DPIRD is supporting Food Alliance WA to undertake pilot market research to explore consumer sentiment and key drivers of opinion on the topic of trust in food and social license to operate across the broad WA agriculture and fishing sectors," Ms Carrierio said.
"Part of this work includes information gathering across different sectors and stakeholders and a survey of key influencers in the field of creating trust and social licence in primary production.
"This work is important in supporting WA's reputation as a reliable producer of premium and safe food, products and services.
"The first part of this work is expected to be complete by the end of the year," she said.
A number of speakers told Friday's session of the GGA forum that individual primary industries, operating in isolation and producing and protecting their own data, could not hope to combat the widespread influence on public perceptions of well-organised and well-resourced activism, which relied on graphic images and sensationalist claims to command attention and get its message across.
Some of the activism expected to come to WA had international connections and tactics that have been developed and refined with considerable success in Europe, Canada and other primary producer nations, members of farm improvement groups from across WA's agricultural area were told.
The general impression, outside of primary industry, of a farmer being "an old guy on a tractor" needed rectifying, they were told.
Predominantly, WA farmers are tertiary-educated managers aged under 50 running multi-faceted businesses with annual turnover in the millions of dollars, supplying quality-driven and at times complex domestic and export markets.
KPMG head of markets and agrifood tech sector Ben van Delden, Australian Farm Institute executive director Richard Heath and South Australian farmer, 2019 AgriFutures South Australian Rural Women's Award winner, Churchill Fellow and AgCommunicators managing director Deanna Lush all said it was vital primary industries established their own trusted stories with consumers before activists tried to tell consumers a different story.
Mr Heath and Ms Lush said overseas experience and studies in Canada and elsewhere had clearly established that attempting to refute graphic images and sensationalist claims with bland scientific data was ineffectual.
Studies had shown the general public was "not up to speed" on the science behind modern agriculture and particularly so on complex issues like genetic engineering, so was not equipped to accept scientific explanation.
But the public could easily understand and align with intended assumptions conveyed by shocking images and slogans on Facebook, they said.
Mr Heath pointed out studies had confirmed "extreme opponents of genetically modified foods knew the least about the science, but were convinced they knew the most".
This has broader implications for primary industries attempting to combat activism, he said, and was a reason industries needed to work together on "inoculation theory" - getting in first with easily understood explanations of why they operate in the way they do to build a bridge of trust with the community.
"It's incredibly important that we get all this right and become a trusted partner," Mr Heath said.
Both he and Ms Lush pointed out activism had potential to severely hamper primary industry's aim of expanding the value of its contribution to the national economy to $100 billion by 2030.
Only a small proportion of the general public aligned with activists' views, while the rest were generally supportive of primary producers and usually ambivalent about industry practices unless confronted by horrifying images or descriptions, they said.
Their message was instead of targeting a tiny group of activists, as primary industries had in the past, they needed to concentrate on keeping the bulk of the public supporting them.
As part of a discussion panel with Ms Taylor, Ms Lush and WAFarmers chief executive officer Trevor Whittington, MLA community engagement manager Jax Baptista said farmers often felt as though "they are under attack" from the community for producing its food.
But this was because a tiny group of activists had been successful in creating an out-of-proportion impression that their views were widely held and supported, she said.
"The fact is, farmers are not under attack from the general community," Ms Baptista said.
Statistics showed 78 per cent of the population ate meat, the number of people identifying as vegetarian had not changed in four years and vegans comprised just 0.9pc of the population and their number might grow to 1.02pc, she said.
"We need to be more proud of what we do," Ms Baptista told the forum. | https://www.farmweekly.com.au/story/6352775/bold-plan-for-ag-industry-to-push-back/ |
The newest tutorial is a continuation of part one, and is mostly an overview of the Vignette plugin interface, at the same time showing how you can use this filter to lighten part of the scene.
Summary: Always apply your 8-bit effects as the last ones in the pipeline.
A few years ago Karl Soule wrote a short explanation of how the 8-bit and 32-bit modes work in Premiere Pro. It's an excellent overview, although it is a bit convoluted for my taste (says who), and does not sufficiently answer the question of when to use or not to use 8-bit effects, and what the gains and losses of introducing them in the pipeline are. Shying away from an Unsharp Mask is not necessarily an ideal solution in the long run. Therefore I decided to run a few tests of my own. I created a simple project file, which you can download and peruse (8-bit vs 32-bit Project file).
In essence, the 32-bit mode affects two issues: the smoothness of gradients (banding) and the clipping of out-of-range values.
For the purposes of testing them, in a Full HD sequence with Max Bit Depth enabled, I created a black video, and with a Ramp plugin I created a horizontal gradient from black to white, to see how the processing would affect the smoothness and clipping of the footage.
Next I applied a stack of test effects, mixing 8-bit and 32-bit ones (see the project file for the exact list).
To assess the results I advise opening a large reference monitor window in the YC Waveform mode. Looking at the Program monitor will not always be the best way to check for problems in the video. You should see the diagonal line running through the whole scope (note that the resolution of Premiere's scopes is pretty low).
Now experiment with the order of the effects and with the Max Bit Depth switch, and observe the results in the scope.
This simple experiment allows us to establish a simple best practice for applying 8-bit effects in Premiere Pro: apply them as the last effects in the pipeline, after all 32-bit processing is done.
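To see why this ordering matters, here is a small numpy sketch that mimics the experiment outside of Premiere. It only illustrates the general principle, not Premiere's actual pipeline; the gain value and ramp size are arbitrary.

import numpy as np

ramp = np.linspace(0.0, 1.0, 1024, dtype=np.float32)  # black-to-white gradient

def boost_then_restore(x, gain=1.5, eight_bit_between=False):
    x = x * gain                      # first "effect": raise the levels
    if eight_bit_between:
        # An 8-bit effect forces the image through a clipped 0-255 range.
        x = np.round(np.clip(x, 0.0, 1.0) * 255.0) / 255.0
    return x / gain                   # second "effect": bring them back down

float_path = boost_then_restore(ramp, eight_bit_between=False)
eight_path = boost_then_restore(ramp, eight_bit_between=True)

print(float_path.max())              # ~1.0: overbrights survive the round trip
print(eight_path.max())              # ~0.667: everything above 1.0 was clipped
print(len(np.unique(eight_path)))    # far fewer unique levels, i.e. banding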
And that’s it. I hope this sheds some light on the mysteries of Premiere Pro’s 32-bit pipeline, and that your footage will always look great from now on.
Since it’s the Christmas season, I hope you’ll appreciate the new addition to the Creative Impatience toolbox: meet the Power Window filter.
After creating the Vignette plugin, I decided that even though it did most of the things that I wanted it to, there were still some image manipulations which were pretty hard to achieve. For example, the simple operation of lightening the inside of a selected shape turned out to be pretty problematic to perform in a decent manner.
Therefore, I set out to create a variant of the CI Vignette which would directly manipulate the lift, gamma, gain and saturation values of the pixels inside and outside of the shape. Most of the code was reused from the Vignette, and the rest was pretty uncomplicated to write. Frankly, I spent most of the time trying to figure out how to circumvent something that I perceive to be a bug in Premiere. But then, this is the life of a software developer. We have to live with what we are given.
Without further ado: the Power Window plugin for After Effects and Premiere is up and running. Be sure to visit the download page for the file, read the instructions on how to install it, and if you have problems operating the plugin, take a look at Instructions and Tutorials.
Hopefully some day I will manage to create a decent videocast on how to use these tools. In the meantime, feel free to experiment, and let me know how it goes.
Merry Christmas, and a Happy New Year!
Quite recently I commented on what kind of features are, in my opinion, important for the popularity of SpeedGrade to rise. This interview with the creators and developers of SG is proof that they understand which key feature needs fixing first.
There are three things in this interview that I wanted to take a closer look at.
One, it’s excellent that sending frames from the GPU will not require a major rewrite of FrameCycler. This was the basic hurdle, and it looks like it’s going to be amended pretty soon.
Two, it's great to see that Photoshop does allow LUTs to be applied to an image. In fact, it's a very cool technique. Create an adjustment layer "Color Lookup", and in the properties panel for the layer you can select or load any .look, CUBE or 3DL LUT. For some reason I seem to have a problem with SpeedGrade's .look files, but it's a great tool nevertheless. One that is similar to Apply Color LUT that can be found in After Effects, and I hope is coming to Premiere as well.
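For the curious, this is roughly what applying such a LUT boils down to. A minimal Python/numpy sketch, assuming a simple lut[r, g, b] table layout and nearest-neighbour lookup for brevity; real implementations typically interpolate (for example trilinearly), and the actual ordering of .cube data depends on the reader.

import numpy as np

def apply_3d_lut(image, lut):
    """image: float32 array (H, W, 3) with values in 0..1;
    lut: (N, N, N, 3) table indexed here as lut[r, g, b]."""
    n = lut.shape[0]
    # Scale 0..1 values to LUT grid indices and snap to the nearest node.
    idx = np.clip(np.round(image * (n - 1)).astype(int), 0, n - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

# Identity LUT with 17 nodes per axis: output should equal input.
n = 17
grid = np.linspace(0.0, 1.0, n, dtype=np.float32)
r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
identity_lut = np.stack([r, g, b], axis=-1)

img = np.random.rand(4, 4, 3).astype(np.float32)
out = apply_3d_lut(img, identity_lut)
print(np.abs(out - img).max())  # only a small quantisation error remains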
Three, and most important, it is clear that they understand the next logical step for color grading – democratization.
Up until recently – and most colorists will likely argue that even now – color grading has been serious business that required proper hardware, proper monitoring, and a proper place. Grading suites are still among the most expensive facilities in post, even though the cost of components has dropped dramatically in recent years. And the prevailing opinion is that if you attempt to do it on lesser equipment, you might as well not do it at all, because you're never going to get good results.
But you know what? The same argument was being made some time ago with regard to pre-press and photo correction. You need a calibrated, expensive monitor to see all the nuances of color, you need profiles and color management, you need a properly lighted room, Pantone color guides, proofs, etc. And at a certain level you want and need all of that. But for most publications that see the light of day, you don't. The reality is that even on a $299 24″ IPS monitor one can get a decent match in color that will allow you to output great material. Heck, Dan Margulis, an acclaimed Photoshop expert, claims that you can color correct pictures even if you're color-blind or using a monochromatic monitor. And if you know what you are doing, most people will not see the difference.
Granted, a video signal is a bit special, and you do need some kind of hardware to output it to your monitor to see the possible artifacts. And DI, projection or film is another league altogether. But at the same time, unless you are heading for a theatrical projection (and in some cases even then) you have no control over how your movie is going to be watched, and what an improperly set up TV or laptop screen will do to it. Even broadcast these days, with higher and higher compression ratios, is not what it used to be. The question then becomes, what is the real entry level, and what kind of deviation from your reference point are you willing to accept.
And SpeedGrade's creators seem to understand this simple fact: to pick up color grading tools, you no longer need million-dollar equipment and software. You can try it at home, just as you can try your best using Photoshop or Lightroom to correct your photos, Premiere to edit your videos, or a word processor to write your novels. Does it mean that just because you have access to a tool, you automatically become a great colorist? Or that the fruits of your attempts will be as great as those of the master colorists? No more than each of us is a successful, popular, and talented writer.
But somewhere in the realms of the high-end entertainment industry the message of having fun is being lost. Creativity is the ultimate freedom of exploration. It does not respect borders or limitations. And playing with ideas is an integral part of it. To experiment, to play, you don't necessarily need high-end tools. You need toys and imagination. And toys for aspiring colorists are what we need. Now. Especially when your home PC can handle HD footage with color correction in real time without a problem.
The sad part is that Adobe is not a hardware company, so I guess I won’t expect them to make an affordable color grading surface to play with anytime soon. We still have to wait for BlackMagic Design or some other party, even less invested in the grading market, to fill this niche, and earn millions of dollars. And I do believe that it will happen sooner or later.
The craft of color grading is expanding. More and more people know about it, and more and more people like to do it, find it interesting and fun. Of course, the professional colorist is not going to disappear, just as professional editors did not disappear when NLEs became something one could run on a home computer. But I'm going to agree with Lawrence Lessig, Philip Hodgetts and Terence Curren – video is the new literacy. And color grading is an important part of it.
In the end, such democratization will only benefit the craft, even though it might make some craftsmen seem more like human beings, and less like gods and magicians. The change is inevitable. And it’s exciting to see some players embracing it.
SpeedGrade seems like a very promising addition to the Adobe Creative Suite, which I have already mentioned. However, after playing with it for a short moment, I found with regret that it does not fit our current infrastructure and workflows. Below is a short list of the changes that I consider most important. These requests seem to be quite common among other interested parties, judging by the comments and questions asked during the Adobe SpeedGrade webinar.
First, as of now the only way to output a video signal from SpeedGrade is via a very expensive SDI daughterboard for nVidia Quadro cards. This is a pretty uncommon configuration in most post facilities. These days a decent quality monitoring card can be bought for a tenth of the price of the nVidia SDI board. If the software is to gain wider popularity, this is the issue to be addressed.
Adobe seems to have been painfully aware of its importance, even before the release. I’m sure that had it been an easy task, it would have been accomplished long ago. Unfortunately, the problem is rooted deep in the SpeedGrade architecture. Its authors say, that SG “lives in the GPU”. This means that obtaining output on other device might require rewriting a lot – if not most – of an underlying code – similarly to what Adobe did in Premiere Pro CS5 when they ditched QuickTime and introduced their own Mercury Playback Engine. Will they consider the rewrite worthwhile? If not, they might just as well kill the application.
Second, as of now SG supports a very limited number of color surfaces. Unless the choice is widened to include at least the Avid Artist Color and the new Tangent Element, it will push the application again into the corner of obscurity.
Third, the current integration with Premiere is very disappointing. It requires either using an EDL, or converting the movie into a sequence of DPX files. Its choice of input formats is also very limited, which means that in most cases you will have to forget about one of the main selling points of Premiere – native editing. Or embrace an offline-online workflow, which is pretty antithetical to the flexible spirit of other Adobe applications.
The integration needs to be tightened, and (un)fortunately Dynamic Link will not be the answer. DL is good for single clips, but a colorist must operate on the whole material to be effective. Therefore SG will have to read whole Premiere sequences, and work directly with Premiere's XML (not to be confused with FCP XML). It also means that it will have to read all the file formats and render all the effects and transitions that Premiere does. Will it be done via Premiere becoming a frame server for SpeedGrade, as After Effects is for Premiere when DL is employed? Who knows; after all, Media Encoder already runs a process called PremiereProHeadless, which seems to be responsible for rendering without the Premiere GUI being open. A basic structure seems to be in place already. How much will it conflict with SpeedGrade's own frame server? How will effects be treated to obtain real-time playback? Perhaps SpeedGrade could use Premiere's render files as well?
An interesting glimpse of what is to come can also be seen in an obscure effect in After Effects which allows a custom look from SpeedGrade to be applied to a layer. Possibly something like this is in store for Premiere Pro, where an SG look will be applied to graded clips. The question remains whether the integration will follow the way of Baselight's plugin, with the possibility to make adjustments in Premiere's effect panel, or whether we will have to reopen the project in SG to make the changes.
This tighter integration also means that export will most likely be deferred to Adobe Media Encoder, which will solve the problem of pretty limited choice of output options presently available in SpeedGrade.
As of now SpeedGrade does not implement curves. Even though the authors claim that any correction done with curves can be achieved with the other tools present in SG, curves are sometimes pretty convenient and allow some problems to be solved in a more efficient manner. They will also be more familiar to users of other Adobe applications like Photoshop or Lightroom. While not critical, introducing various curve tools would allow SG to widen its user base, and would make it more appealing.
Talking about appeal, some GUI redesign is still in order, to make the application more user friendly and Adobe-like. I don’t think a major overhaul is necessary, but certainly a little would go a long way. Personally I don’t have problems with how the program operates now, but for less technically inclined people, it would be good to make SpeedGrade more intuitive and easier to use.
These are my ideas on how to improve the newest addition to the Adobe Suite. As you can see, I am again touting the idea of the container format for video projects – and Premiere Pro's project file, being an XML, is a perfect candidate. Frankly, if SpeedGrade is not reading .prproj files by the next release, I will be very disappointed.
Recently I was doing a small editing job for a friend, and ran into a few interesting problems.
The footage provided was shot partially on a Canon PowerShot, which saves it as an AVCHD MTS stream. My computer is not really up to editing AVCHD, so I decided to transcode the clips into something less CPU intensive. The final output would be delivered in letterboxed 640x480p25 because of the limitations of the second camera, so the quality loss was of little concern. Having had decent experience with Avid's DNxHD codecs, I decided to convert it to the 1080p25 36 Mbps version. And then, the problems began.
Even though Premiere Pro imported the file without a problem, Adobe Media Encoder hung right after opening the file for transcoding. I decided to move the footage to Avid, thinking that perhaps it would be a good project to hone my skills on this NLE, but it complained about the Dolby encoding of the audio and didn't want to import the footage. I then tried to use Sorenson Squeeze to convert it, but it also threw an error and crashed. Even the trusty MPEG Streamclip did not help.
I was almost going to give up, but then came up with the idea to use Premiere's internal renderer to transcode the footage by putting it on an XDCAM HD422 timeline, rendering it (Sequence -> Render Entire Work Area), and then exporting it with the switch that I almost never use – "Use previews". I figured that once the problematic footage was already converted, Media Encoder would handle the reconversion using previews without problems. I was happily surprised to have been proven correct. And because Premiere's internal renderer was able to cope with the footage without a glitch, it all worked like a charm.
Afterwards the edit itself was relatively swift. I encountered another roadblock when I decided to explore DaVinci Resolve for color grading, and exported the project via XML. Resolve fortunately allows custom resolutions, so setting up a 640×480 project was not a problem. I also had to transcode the files again, this time to an MXF container. This was a minor issue, and went relatively fast. However, due to the fact that some media was 480p and some 1080p, and I had done quite a lot of resizing and rescaling of the latter, I wanted to use this information in Resolve. Unfortunately, Resolve did not want to cooperate. Its handling of resizing was very weird, and every time I clicked on a resized clip to grade it, it crashed. I'm certain that the scaling/panning was responsible, because when I imported the XML without this information, everything worked great. It might have something to do with the fact that I was running it on an old GTX260, but still, I was not able to use the software for this gig.
In the end I graded the whole piece in Premiere Pro on its timeline. Here's the whole thing for those of you who are interested.
Peter Chamberlain from BlackMagic Design did deny any rumors (guess which ones?) that they are working on a cheaper control surface, believing that the segment is well saturated by other manufacturers. This is of course based on the assumption that the lowest segment is the price range that Avid, Tangent and JL Cooper are targeting, i.e. around $1500-$2000. I must admit that the release of the Tangent Element, with the basic control surface at a cost of about $1200, is interesting; however, it is still far above what I would consider the real democratization barrier – around $500-$700.
I understand all the limitations of such pricing, including the fact that this kind of surface would be looked on by professionals as a toy, which it would indeed be, out of the necessity of using cheap materials. I still believe it can be done if R&D costs can be covered, and that it would introduce more people to color grading than all the plugins combined.
It might of course be my wish to have at my disposal something that I'm currently not able to afford. But I also can't help but notice certain wording in Peter's message. Namely:
…we have no plans for a cheaper panel at NAB. (emphasis added)
So… will anyone pick up the challenge? Or is my premise inherently flawed, and does the future of color grading lie somewhere else?
This is my latest production. It’s a promotional spot for a non-profit organization that is dedicated to another passion of mine – historical personal combat.
What follows is an overview of the production of this short movie, including how the screenplay changed during production, a breakdown of my editing process, and a few techniques that we used in post-production to achieve the final result.
It was a collaborative, voluntary effort, and included cooperation from parties from various cities in Poland. The Warsaw sequences (both office and training) were shot with a Sony EX-1R, 1080i50, with the exception of slow-motion shots that were recorded at 720p60. Sequences from Wroclaw and Bielsko-Biala were shot with DSLRs at 1080p25. Therefore the decision was made to finish the project in 720p25, especially since the final distribution would be via YouTube.
The most effort went into filming the Warsaw training, where we even managed to bring a small crane on set. Out of the two shots that we filmed, only one was partially used in the final cut – the one where all the people are running across the open clearing. We envisioned it as one of the opening shots. As a closing shot we filmed, from the same place, the goodbyes and people leaving the clearing, while the camera was moving up and away. It seemed a good idea at the time, one that would be a nice closure of the whole sequence, and perhaps of the movie as well.
We had some funny moments when Michal Rytel-Przelomiec (the camera operator and the DOP) climbed up a tree to shoot running people from above, and after a few takes he shouted that he could last only one more, because the ants had definitely noticed his presence and started their assault. What a brave and dedicated guy!
A few days later we were able to shoot the office sequence. The first (and back then still current) version of the screenplay involved a cut after the text message was sent to what was supposedly a reminiscence from another training, and finished up with coming back to the office, where Maciek (the guy in the office) would pick up a sword and rush at the camera. Due to the spatial considerations on set (we were filming in Maciek's office after hours), we decided to alter the scenario, especially since we had already filmed the training sequences, including the farewell closing shot. Therefore, instead of Maciek picking up a sword and attacking the camera, he actually rushed away to training, leaving the office for something dearer to his heart. It was also Michal's idea to shoot the office space with 3200K white balance to create a more distant, cold effect, and it worked really well.
All footage (about 2 hours' worth) was imported into Adobe Premiere CS5, which allowed skipping transcoding and working with the source files from the beginning right to the end. After Effects CS5 and Dynamic Link were used for the modest city titles only, although perhaps they could have been used to improve a few retimed shots. Music and effects were also mixed in Premiere.
The promo was in production for over half a year, mostly because we were waiting for footage from other cities, some of which never materialized, and we decided to finish the project with what we had. The actual cutting was pretty quick, and mostly involved looking for the best sequences to include from other cities. Some more time was spent on coming up with a desired final look for the short movie.
The general sequence of events was laid out by the screenplay written by Maciek Talaga. At first the clip started immediately with the corporate scene. We were supposed to have some similar stories from other cities, and I was ready to use a dual or even quadruple split screen for parallel action, but since the additional footage never materialized, I decided to pass on this idea. In the end it allowed us to focus more on Maciej Zajac, and made him the main hero of our story, which was not planned from the start.
After leaving the office we had to transition to the training, and preferably to another place. Wroclaw had a nice gathering sequence and a completely different atmosphere (students, backpacks, friendship and warmth), which constituted an excellent contrast to the cool corporate scenes from Warsaw, presenting another kind of people involved in pursuing the hobby.
The order of the following cuts was determined by the fact that we had very little material from Bielsko-Biala, and it all involved the middle of the warm-up. We had excellent opening shots from Warsaw, which were great for setting the mood and adding some more mystery. I used them all, and even wanted to transition to push-ups and other exercises; however, since the guys had already stopped running, coming back to it in the Bielsko sequence ruined the natural tempo of the event. Therefore with great regret I had to shorten the crane shot to the extent that it most likely does not register as a crane shot at all, and transition to Bielsko for the remaining part of the warm-up.
Coming back to Warsaw seemed a little odd, so I decided to cut to Wroclaw to emphasize the diversity, with a short sequence of a few shots of a warm-up with swords. Here I especially like the last two cuts: one that cuts on action with the move of the sword, underlined by the camera move in the next shot, and then the one that moves the action back to Warsaw, where a guy exits the frame with a thrust. I was considering using a wipe here, but it looked too cheesy, so I decided to stick to a straight cut.
As an alternative to this choice, I could have first come back to Warsaw and moved the Wroclaw sequence between the warm-up and the sparring, but this would have created an alternating cadence of Warsaw-other place-Warsaw, and I wanted to break this rhythm and avoid that. Therefore I was stuck in Warsaw for the remainder of the movie, even though it had at least two distinctive parts left. We had an ample selection of training footage from Wroclaw; however, it was conducted in a gym, and including it would ruin the overall mood and the contrast of closed office space vs. open training space, so in the end we decided against it.
Unfortunately we did not have any footage of gearing up, so the transition between the flourish part in Warsaw and the sparring is one of the weakest parts of this movie, and I would love to have had something else to show. I did not come up with anything better than the cut on action though.
The sparring sequence is mostly a selection of the most dynamic and most spectacular actions from our shoot (not choreographed in any way), cut to music, including a few speed manipulations here and there to make sword hits land at proper moments or to emphasize a few nice actions, including the disarm at the end. There were a few lucky moments during shooting, where Michal zoomed in on a successful thrust, and I tried to incorporate them as much as I could, to obtain the best dynamics and to convey as much of the atmosphere of competitive freeplay as possible.
The sequence ends on a positive note with fighters removing masks and embracing each other. I tried to avoid cutting in the middle of this shot, but it was too long, and I wanted to have both the moment where the fencing masks come off and the glint on the blade of the sword at the end (which was not added in post). In the end the jump cut is still noticeable, but it defends itself. There is a small problem with the music at the end, because I had to cut it down and extend it a little bit to hold it for the closing sequence, but it is minor, and does not distract too much from the overall story.
Apart from the serious and confrontational aspect of the training, we wanted to stress the companionship, and I believe that both the meeting sequence in Wroclaw and the final taking off of the masks and embrace conveyed the message well.
During cutting I realized that regardless of the added production value of the crane farewell shot, there was no way to include it at the end. It was too long, it lessened the emotional content, and it paled in comparison to the final slow-motion shots that I decided to use, including the final close-up of Maciek, which constituted the ellipse present in the first version of the screenplay. Therefore it had to go, regardless of our sentiment towards it.
The feedback from early watchers was that Maciej Zajac was not easily recognizable to people who did not know him, which made us wish for something more. The idea of beginning with sounds and no picture came from Maciek Talaga, and I only tweaked it a little bit. We first thought about putting the shot where Maciej takes off the fencing mask first; however, it did not look good at all, and the transition to the office scene was awkward at best. In the end I proposed the closing close-up as the first shot, which in our opinion nicely tied the whole thing together, being both an introduction of Maciek, setting the focus on him as a person, and also nicely contrasting the "middle ages dream or movie" with his later work at the office. Excellent brief textual messages authored by Maciek Talaga added a lot to the whole idea as well.
All color correction was done in Premiere Pro with the use of standard CC filters and blending modes. I experimented with the look in the midst of editing, trying to come up with something that would best convey the mood. I started with a high-contrast, saturated theme, and moved quickly to a variation of bleach bypass with a slightly warmer, yellowish shift in the midtones. However, it still lacked the necessary punch, and in the end I decided to over-emphasize the red color (an important one for the organization as well) with a slight Pleasantville effect. It gave the movie this slightly unreal, mysterious feeling, and the contrast underlined the seriousness of the effort.
The office sequence did not need much more than the variation of bleach bypass, not having anything actually red in it. The increase in contrast and slight desaturation were mostly enough to bring it to the desired point, thanks to Michal's idea of shooting it at a lower Kelvin. The Warsaw sequence required an additional layer with the "Leave Color" effect, where everything apart from red was partially desaturated, plus a little more push towards yellow in the highlights and midtones, all blended in Color mode over the previous bleach bypass stack. I will do a detailed breakdown of the color correction I used in a separate entry, although perhaps with the introduction of SpeedGrade in Adobe CS6 this technique might become obsolete.
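For those curious how such a pass works under the hood, here is a rough numpy approximation of the idea: keep pixels close to red, partially desaturate the rest. It only illustrates the principle; Premiere's effect has its own tolerance and edge-softness controls, and the mask scaling and strength values below are arbitrary.

import numpy as np

def leave_red(image, desat_amount=0.8):
    """image: float32 (H, W, 3) RGB in 0..1; desat_amount is how strongly
    the non-red areas are pulled toward grayscale."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    luma = 0.299 * r + 0.587 * g + 0.114 * b      # Rec. 601 weights
    gray = np.stack([luma] * 3, axis=-1)
    # Crude "redness" mask: how much red dominates the other channels.
    redness = np.clip(r - np.maximum(g, b), 0.0, 1.0)
    mask = np.clip(redness * 4.0, 0.0, 1.0)[..., None]
    # Partially desaturate everything, then restore color where red dominates.
    desat = gray + (image - gray) * (1.0 - desat_amount)
    return desat + (image - desat) * mask

frame = np.random.rand(8, 8, 3).astype(np.float32)
graded = leave_red(frame)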
Michal also suggested a clearer separation between the various cities, so I pushed Wroclaw more towards blue, as it involved more open air, and Bielsko more towards yellowish-green, to emphasize its more "wild" aspect. In the end, I had the most trouble with the footage from this place, because as shot it was dark, had a bluish tint, and involved pretty heavy grading, which on H.264 is never pleasant. Overall I'm satisfied with the results, although there are a few places that could benefit from perhaps a few more touches.
The blooming highlight on the fade-out of the opening and closing shot was a happy accident and a result of fading out all corrected layers simultaneously, mixed with "Lighting Effects", at first intended only for vignetting (as mentioned in another entry of mine).
I like the overall result. I also enjoyed the production every step of the way, and even though it could still perhaps be improved here and there, I am happy. It was an excellent teamwork effort, and I would like to thank all the people who contributed to its final look.
Yesterday BlackMagic released an upgrade to the free version of its industry-standard grading tool, DaVinci Resolve. The biggest and most influential change was surely removing the limit of 2 nodes that was present in the previous Lite version. This bold move essentially makes professional color correction software available to everyone for free. I am still waiting for the announced Windows version, which would make it even more accessible, but it's almost a given at the beginning of next year.
There still are limitations – you can at most output at HD resolution (even though you can work with footage that is much bigger than that), you won't get noise reduction, and you are limited to a single GPU. That said, most of the people to whom this version of the software is directed hardly ever think about projects in 2K and above, and have not considered buying a second GPU except perhaps for gaming purposes. However you choose to look at it, BlackMagic did surprise everyone by providing an amazing piece of truly professional software for free. This kind of democratization of grading tools is certainly terrific, and unexpected. It is, however, not yet disruptive enough. What will BlackMagic's next move be?
I see this release as a preemptive strike against Adobe (see my previous post on Adobe acquiring Iridas) and as following Apple's recent "prosumerisation" trend. In Adobe CS6 we will almost certainly see the integrated SpeedGrade color-correction software – to many it means that they will get this tool almost for free (for the price of an upgrade, but you would most likely want to upgrade anyway). To attempt to win new users, there was little else that BlackMagic could do. However, the question still remains: why would BlackMagic voluntarily give up some part of their income? Why not sell the newly unlocked Lite version for $99 or $199 and profit handsomely? What's in it for them, apart from perhaps profiting from the monitoring interfaces that they already sell? Let's speculate a little bit.
One of the things that distinguishes "real" from "would-be" colorists is a control surface. It's a tool dedicated to increasing the speed and ease with which the software can be operated. All companies that provide serious grading software also sell special panels that go with it. This hardware is extremely expensive, costing anywhere from ten thousand to several hundred thousand dollars. BlackMagic does have its own model, which costs about $20k. Of course, in the world of high-turnover, high-end productions, such costs are quite quickly recovered. But this highly demanding pro world is relatively small, and the competing companies are rather numerous: BlackMagic, Digital Vision (former Nucoda), Baselight, Autodesk, Quantel, to name a few important ones.
Certainly no home-grown editor and would-be colorist will shell out $20k for a tool that will sit idle 90% of their working time. Towards this end companies like Euphonix (now Avid) and Tangent Devices developed less sophisticated models that cost about $1500. For a pro it is often a very reasonable price for an entry-level piece of hardware that will pay for itself pretty quickly. However, for a prosumer it is still at least two to three times too much, especially considering the very limited use of said tool. Regular consumers are willing to pay $499 for a new iPhone, avid gamers usually spend this much on a new GPU, and I guess this is about the limit that a prosumer color-grading surface would have to cost to catch on big time.
From a business perspective, selling 10,000 pieces of hardware costing $500 each earns you more than selling 10 pieces at $20k each. Apple knew that when they released Final Cut Pro X (regardless of what you think about the program). The professional market is quite saturated, and there is not much to be gained there. It is also very demanding. Prosumers are much easier to appease, and their tools do not have to withstand the amount of abuse that pros require. Following the Apple model – giving the tool to prosumers – is a surer promise of profit than appealing to the demanding pros.
The question is – who will make this move? Two years ago I would have said that Apple might be one of the best candidates, but after the weird color controls introduced in Final Cut Pro X, and their focus on touch panels, I'm pretty sure they are not the ones. I don't expect Tangent Devices or Avid to undercut the sales of their relatively low-cost models, especially after Tangent recently revamped their panels. BlackMagic is the most likely candidate, because right now they only have their high-end model. Creating a new version takes a lot of R&D resources, both time and money, and it is pretty hard to compete in this segment. BlackMagic has also always appealed to those with lower budgets, and this kind of disruptive move is the easiest to expect from this company.
Therefore I am waiting for a simple control surface that will cost about $500-$700, will be sturdy enough to last me two years of relatively light to moderate use, and will be sensitive enough for the kind of color grading that I presently do – nowhere near a truly professional level, but sometimes quite demanding nevertheless. I understand the big problem is producing decent color wheels, but I don't lose hope that somebody will come up with some neat idea and implement it. And no, a multitouch panel will not do. If you wonder why, read another of my articles on the importance of tactile input. The whole point of a control surface is that you don't have to look at it while grading.
Finally, is the realm of professional colorists in any danger from the newcomers? To a certain extent perhaps. The field will certainly become more competitive and even more dynamic, and perhaps a few players will drop out of the market. On the other hand, more people will be educated about the quality of a good picture, more will require this quality, and more will be able to appreciate the excellent work that most of the professionals do. All in all it will probably influence the job of an editor more than that of a colorist, bringing the two even closer together – editors will be required to learn color correction to stay in business. In high-end productions not very much will change; the dedicated professionals will still be sought both for training and for expertise. Perhaps some of the rates will go down, but most likely in the middle range. In the end I think it will have a net positive effect on what we do and love.
Will we then see a new product during NAB 2012 or IBC 2012? I would certainly be the first in line with my credit card. And if we do – you heard it here first. | https://www.creativeimpatience.com/category/color-grading/page/3/ |
This exhibition brings inspiration through the various forms of colour manipulated by artists and explores the facets and associations of colour that shape our world.
The history of colour is fascinating not only because of the surprising materials used to create the pigments but also the personal journeys made by artists in their pursuit of new hues. Ultramarine blue for instance, which comes from lapis lazuli, a gemstone that for centuries could only be found in a single mountain range in Afghanistan, was as coveted as gold upon its discovery. An easy summarisation of how many contemporary artists came to use colour in a more dynamic and less descriptive or naturalistic way can be found in a quote from Henri Matisse who famously said “When I put down green it doesn’t mean grass, and when I put down blue it doesn’t mean sky”.
Scientifically speaking, to understand colour it is necessary to understand light. Visible light to the human eye is subdivided into seven major colours—red, orange, yellow, green, blue, indigo, and violet. The objective components of colour are a source of radiant energy, like the sun or a light bulb, and a medium through which the energy travels. The psychology behind colours states that warm colours, such as red, yellow and orange, can spark a variety of emotions such as comfort and warmth, while cool colours, such as green, blue and purple, often spark feelings of calmness. How we perceive colours can depend on whether an artist has set out to create harmony within a painting, stark contrasts, a unity, or even a rhythm or motion. While the source of colour inspiration may be observation of nature, either conscious or subconscious, the artists in this exhibition all have a very different approach to colour. Making colour, whether developing it in overlapping layers or building the pigments from scratch as Jane Goodwin does, is a sort of alchemy. Painting with a restricted palette of three blues, colour is the pulse of her painting, both an energising and life-giving force. Lydia Mammes is a colourist with an entirely different approach. Through careful transparent layering of diverse tones and hues she creates floating fields of colour that can radiate both energy and a meditative calmness, depending on how the light falls on them.
Filtered through memory and imagination, Cecile van Hanja's tropical shades, which fill some of her architectural compositions with light and warmth, are perhaps a subconscious nod to her childhood spent in Corsica. When she is working she goes out into urban spaces, photographs architecture and then transforms those images into paintings, altering colour, light and atmosphere. Roy Osborne's paintings come from an entirely theoretical standpoint. Having written countless books and lectured all over the world on colour theory, Osborne makes paintings that are informed and meticulously precise. In them we see how the simplest of forms can, when arranged according to certain scientific principles of colour, create a sense of movement and rhythm, and an optical illusion of depth and space.
In the purest tradition of abstract painting we also have artists like Bernd Mechler, who empties his canvases of any objects in order to focus on the rhythm and motion of colour. His use of colour is highly sensory, with multiple layers of alternating dark and bright tones, energised by planes of intense colour with intervals of softer, more atmospheric pastel shades. And Lars Rylander, who walks around the canvas and whimsically but harmoniously composes a lyrical arrangement of colourful forms abstracted from the nature around him in his native Sweden. Or Michael Luther with his sublime 'colour landscapes', which, although absent of any recognisable subject matter, are driven by a focus on the process of painting itself and rich, fluid brushwork.
So many of us are afraid of colour, but these artists allow us to dive in to a whole brighter, fresher world. Ultimately the role of colour in our lives is neatly summarised in the words of Christopher Le Brun, president of the Royal Academy: "colour is the medium in which we swim". | https://wsimag.com/art/40088-chromotopia |
The Directorate is an administrative institution in the field of education and its main objective is to improve quality and support progress in education in accordance with law and government policies, best evidence and international standards.
1. Knowledge and understanding
1.1. Students have a precise knowledge and clear understanding of at least one specialisation within philosophy.
1.2. Students have an overview of the different perspectives, methods and ideas used in research of the area or areas addressed in their own research.
1.3. Students have systematically acquired an understanding of the most recent knowledge in their specialisations within the study of philosophy.
1.4. Students are able to apply their knowledge and understanding in their research and can take a reasoned stance on philosophical issues.
2. Type of knowledge
2.1. Students are familiar with the major subjects and problems featuring in international discourse on their specialisation.
2.2. Students have acquired knowledge of their specialisation both through participation in specialised courses, seminars or symposiums and through their own research.
3. Practical skills
3.1. Students are able to acquire, analyse and evaluate scientific data.
3.2. Students are able to write academic articles for publication in journals.
3.3. Students have developed independent research practices and are well prepared to write scholarly texts on philosophy, alone or in collaboration with others, to be published in an appropriate outlet.
3.4. Students are able to understand and tackle complex subject matter in an academic context.
4. Theoretical skills
4.1. Students are able to independently judge when different analytical methods and complex academic issues are applicable.
4.2. Students are able to demonstrate a deeper and further understanding and broader overview of their specialisation than conferred by study at lower levels.
4.3. Students are able to place their own projects and research in a wider context, independently assess debates within the discipline and compare their own conclusions with those made by other scholars.
4.4. Students demonstrate in their Master's thesis that they are capable of independently conducting extensive and thorough research on a topic that is significant to the progression of academic discourse.
5. Communication skills and computer literacy
5.1. Students are able to take the initiative on projects within their specialisation, manage them and take responsibility for the work of groups and individuals.
5.2. Students are able to participate in debates on philosophical issues and show respect and understanding for the views of others.
5.3. Students are able to explain reasoned, scholarly findings within their specialisation in philosophy, either independently or with others, to experts or the general public.
5.4. Students are aware of the main opportunities for disseminating philosophical information in contemporary society.
5.5. Students are able to use software suited to philosophical research, especially research in their own specialisation.
6. Academic skills
6.1. Students are aware of the primary methods of maintaining their knowledge and expertise and are able to acquire further knowledge in their field. Students have acquired the independent working practices necessary to be able to take on doctoral studies in their field. | https://ec.europa.eu/ploteus/en/content/philosophy-ma-0 |
Introduction {#s1}
============
MicroRNAs (miRNAs) are an abundant class of small non-protein-coding RNAs that have emerged as key post-transcriptional regulators of gene expression in animals and plants [@pone.0011387-Bartel1], [@pone.0011387-Plasterk1]. Metazoan miRNA genes are transcribed by either RNA polymerase II or RNA polymerase III into primary miRNA transcripts (pri-miRNAs) as single genes or in clusters [@pone.0011387-Bartel1], [@pone.0011387-Borchert1], [@pone.0011387-Cai1], [@pone.0011387-Lee1]. The pri-miRNAs contain stem-loop structures (hairpins) that harbor the miRNAs in the 5\' or 3\' half of the stem. These primary miRNA gene transcripts are typically, but not always, recognized and cut by the endonuclease Drosha in the cell nucleus to produce miRNA hairpin precursors that are then exported to the cytosol, where the hairpin structures are cut by the endonuclease Dicer at relatively fixed positions and released as short double-stranded RNA duplexes [@pone.0011387-Filipowicz1], [@pone.0011387-Friedlander1], [@pone.0011387-Grishok1], [@pone.0011387-Hutvagner1], [@pone.0011387-Lund1], [@pone.0011387-Schwarz1], [@pone.0011387-Schwarz2]. Although both strands of duplexes are necessarily produced in equal amounts by transcription, their accumulation is asymmetric at steady state [@pone.0011387-Okamura1]. Based on the thermodynamic stability of each end of this duplex, one of the strands is thought to be a biologically active miRNA, and the other is considered an inactive carrier strand called miRNA\* (miRNA star) or passenger strand [@pone.0011387-OToole1]. Generally, the miRNA\* strand is typically degraded, whereas the mature miRNA strand is taken up into the microribonucleoprotein complex (miRNP) [@pone.0011387-Filipowicz1] ([Figure 1A](#pone-0011387-g001){ref-type="fig"} and [Figure 1B](#pone-0011387-g001){ref-type="fig"}). The mature miRNA strand is used as a guide to direct negative post-transcriptional regulation through the binding of its 5\'-seed (nucleotides 2--8) and anchor (nucleotides 13--16) to target sequences in the 3\' untranslated region (UTR) of cognate mRNAs [@pone.0011387-Bartel1], [@pone.0011387-Grimson1]. Once bound to Ago proteins, miRNAs are more stable than average mRNAs and the half-life of most miRNAs is greater than 14 hours [@pone.0011387-Hwang1]. They may be produced by the 5\' (left) or 3\' (right) arms of the miRNA precursors, and the nonrandom nature of miRNA strand selection might reflect an active process that minimizes the population of silencing complexes with illegitimate miRNA\* species [@pone.0011387-Okamura1] ([Figure 1](#pone-0011387-g001){ref-type="fig"}). The mechanism of strand selection may correlate with the relative free energies of the duplex ends [@pone.0011387-Schwarz1], [@pone.0011387-Okamura1], [@pone.0011387-Khvorova1].
However, recently, some miRNA\* sequences were reported as mature functional miRNAs with abundant expression, and miRNA/miRNA\* ratios may vary dramatically among developmental stages [@pone.0011387-Okamura1], [@pone.0011387-Ro1], [@pone.0011387-Jagadeeswaran1]. Most *Drosophila* miRNAs are bound to Ago1, and miRNA\* strands accumulate bound to Ago2 [@pone.0011387-Ghildiyal1]. The rarer partner of the mature miRNA has been recognized both for increasing the complexity of regulatory networks and for governing miRNA and messenger RNA evolution [@pone.0011387-Okamura1], [@pone.0011387-Jagadeeswaran1], [@pone.0011387-Jazdzewski1], [@pone.0011387-Wheeler1], [@pone.0011387-Liu1]. Some hairpins produce miRNAs from both strands at comparable frequencies because strand selection is often not a stringent process [@pone.0011387-Kim1]. These abundant miRNA\* species are often present at physiologically relevant levels and can associate with Argonaute proteins [@pone.0011387-Okamura1] ([Figure 1](#pone-0011387-g001){ref-type="fig"}). During Drosophilid evolution, more than 40% of miRNA\* sequences resist nucleotide divergence, and at least half of these well-conserved miRNA\* species select for conserved 3\' untranslated region seed matches well above background noise [@pone.0011387-Okamura1]. The miRNA\* species diverge much more slowly than miRNA terminal loops, and conserved miRNA\* sequences are preserved almost as perfectly as mature miRNA sequences [@pone.0011387-Okamura1], [@pone.0011387-Wheeler1]. According to the miRBase database (version 14.0, <http://www.mirbase.org/>), about 80 kinds of human miRNA precursors can yield two kinds of abundant mature miRNAs (left-arm, miR-\#-5p; right-arm, miR-\#-3p) with different seed sequences and target mRNAs, while most miRNA precursors only yield abundant mature miRNAs from left arms or right arms plus rare miRNA\* sequences. Most miRNA\* species still show low sequence counts even though their mature miRNA counterparts are detected at higher expression levels by high-throughput methods [@pone.0011387-Guo1]. Therefore, those specific miRNA precursors that yield two kinds of abundant functional miRNAs from different arms may reflect evolutionary implications of miRNA gene evolution. Although evolutionary patterns of miRNA\* are consistent with their regulatory potential across Drosophilid evolution [@pone.0011387-Okamura1], little is known about the evolutionary patterns of miRNA/miRNA\*, especially across different animal species.
miRNAs are evolutionarily conserved across broad phylogenetic distances [@pone.0011387-LagosQuintana1], [@pone.0011387-Lau1], [@pone.0011387-Lee2], and they have gained considerable attention in evolutionary, genetic and phylogenetic analysis [@pone.0011387-Grimson1], [@pone.0011387-Liu1], [@pone.0011387-Chen1], [@pone.0011387-Guo2], [@pone.0011387-Hertel1], [@pone.0011387-Niwa1], [@pone.0011387-Sempere1]. The non-coding small RNAs are strongly conserved in primary sequence and rarely secondarily lost once integrated into a gene regulatory network [@pone.0011387-Wheeler1], [@pone.0011387-Hertel1], [@pone.0011387-Heimberg1]. A recent study suggested an explosive increase in the miRNA repertoire in vertebrates [@pone.0011387-Bompfunewerer1]. Some miRNAs in a single animal species are similar in sequence and produce the same or similar mature miRNA sequences, and these miRNAs compose a miRNA gene family. These family members may be derived from an ancestral miRNA gene directly or indirectly through duplication, but the duplication process may be complex and unclear based on the limited miRNA data across animal species. Nonetheless, miRNA gene evolution might provide potential implications for the selection of the miRNA and the fate of the miRNA\*. Whether a miRNA\* strand shows a lower expression level because of degradation, or serves as a functional mature miRNA with abundant clones, may be illuminated by analyzing its evolutionary patterns. In this study, we intended to discover a potential relationship between evolutionary pattern and selection of the mature miRNA by analyzing miRNA/miRNA\* based on miRNA gene families and single miRNA genes across vertebrates. Simultaneously, we also analyzed a complex miRNA gene family from a single animal species to study divergence trends of miRNA/miRNA\* and discover potential evolutionary implications across evolution. Finally, because different miRNAs showed different distribution spectrums and evolutionary patterns across vertebrates, evolutionary analysis of miRNA/miRNA\* based on single miRNA genes was performed across the same kinds of typical animals.
Results {#s2}
=======
Divergence patterns of miRNA/miRNA\* based on miRNA gene families {#s2a}
-----------------------------------------------------------------
Mature miRNAs were always highly conserved across vertebrates, especially in seed sequences (nucleotides 2--8) and anchor sequences (nucleotides 13--16), while their passenger strands showed higher nucleotide divergence ([Figure 2](#pone-0011387-g002){ref-type="fig"}). Some miRNA\* species were well conserved across vertebrates although they showed a higher level of nucleotide divergence than their partners ([Figure 2A](#pone-0011387-g002){ref-type="fig"} and [Figure 2B](#pone-0011387-g002){ref-type="fig"}). The divergence mainly resulted from divergence among different animal species, homologous miRNA genes and multicopy hairpin precursors. For example, in complex miRNA gene families, such as the let-7 family, miRNA\* sequences were less conserved because of their wide distribution spectrum in vertebrates, multiple homologous genes and multicopy precursors. Different miRNA\* sequences showed different levels of nucleotide divergence. miR-124\* sequences were well conserved, while miR-100\* and miR-10\* sequences showed greater nucleotide divergence than their miRNAs ([Figure 2A](#pone-0011387-g002){ref-type="fig"} and [Figure 2B](#pone-0011387-g002){ref-type="fig"}). Even at positions 2--8, some miRNA\* sequences involved nucleotide substitutions. According to the miRBase database, some miRNA precursors were reported to generate two kinds of abundant miRNAs (miR-\#-5p and miR-\#-3p). Intriguingly, despite involving homologous genes and multicopy precursors (such as the mir-142 family and mir-129 family), many of these miR-\#-5p/miR-\#-3p sequences were well conserved ([Figure 2C](#pone-0011387-g002){ref-type="fig"}). However, miR-\#-5p and miR-\#-3p showed different levels of nucleotide divergence although both always had conserved seed sequences.
Although miRNA gene families may involve a complex evolutionary history across the animal kingdom and within a single animal species, miRNA/miRNA\* in a single animal species might show different levels of nucleotide divergence and imply different fates. Here, we took the example of the let-7 family in *Homo sapiens*, which includes several homologous members ([Figure 3A](#pone-0011387-g003){ref-type="fig"}). Some of these members have multicopy precursors; for example, hsa-let-7a can be produced by hsa-let-7a-1, hsa-let-7a-2 and hsa-let-7a-3. Mature hsa-let-7 sequences were produced by the 5p (left) arms and were well conserved, especially in the seed sequences (nucleotides 2--8) and anchor sequences (nucleotides 13--16), while hsa-let-7\* showed a higher level of nucleotide divergence even at positions 2--8 ([Figure 3A](#pone-0011387-g003){ref-type="fig"}). These multicopy precursors could yield the same mature miRNA sequences, but their loop sequences and miRNA\* strands showed greater divergence than the miRNAs ([Figure 3A](#pone-0011387-g003){ref-type="fig"}). Interestingly, a similar trend of nucleotide divergence of miRNA and miRNA\* could be detected across vertebrates ([Figure 3B](#pone-0011387-g003){ref-type="fig"}). The phylogenetic network of the hsa-let-7 family was split into several clades based on different miRNA genes ([Figure 4](#pone-0011387-g004){ref-type="fig"}). Multicopy precursors for a single miRNA, such as hsa-let-7a-1, hsa-let-7a-2 and hsa-let-7a-3, might be reconstructed in different clusters ([Figure 4](#pone-0011387-g004){ref-type="fig"}).
Divergence patterns of miRNA/miRNA\* based on single miRNA gene {#s2b}
---------------------------------------------------------------
We observed different amounts of nucleotide divergence between miRNA and miRNA\* sequences based on single miRNA genes, such as miR-125a-5p/miR-125a-3p and miR-210/miR-210\* ([Figure S1](#pone.0011387.s001){ref-type="supplementary-material"}). Generally, more sites of miRNA\* involved divergence even though the miRNAs were highly conserved ([Figure S1](#pone.0011387.s001){ref-type="supplementary-material"}). Different levels of divergence were also detected in mammalian-specific miRNAs. Similarly, loop sequences showed different levels of divergence between various miRNA genes ([Figure S1](#pone.0011387.s001){ref-type="supplementary-material"}). According to the human miRNAs in the miRBase database, 80 kinds of miRNA precursors were reported to yield two kinds of abundant miRNAs (miR-\#-5p and miR-\#-3p). Sequence analysis based on miRNA precursor sequences revealed that \>80% of these miR-\#-5p and miR-\#-3p sequences maintained conserved seed sequences throughout evolution.
Because different miRNAs have different distribution spectrums across the animal kingdom, we also analyzed miRNA and miRNA\* across several kinds of typical vertebrate animals: *Danio rerio* (Pisces), *Homo sapiens* (Mammalia), *Gallus gallus* (Aves) and *Xenopus tropicalis* (Amphibia). Mature miRNAs were highly conserved across these animal species, while their passenger strands showed different evolutionary patterns ([Figure 5](#pone-0011387-g005){ref-type="fig"}). Some miRNA\* sequences were less conserved even at positions 2--8, such as miR-31\*, miR-100\* and miR-125b\*, while their terminus regions (5\' and 3\') were more conserved than their central regions. Other miRNA\* sequences were well conserved, similar to mature miRNAs, such as miR-18a\*, miR-18b\*, miR-17-3p and miR-455-3p ([Figure 5](#pone-0011387-g005){ref-type="fig"}). Although mature miR-100 and miR-125b were highly conserved, their star sequences showed greater divergence across species. Some well-conserved miRNA\* species were reported as functional guide miRNAs with abundant expression, and these were well conserved particularly in seed and anchor sequences ([Figure 5C](#pone-0011387-g005){ref-type="fig"}). Homologous miRNA genes sometimes showed different divergence patterns in the same kinds of animals, such as miR-18a\* and miR-18b\* ([Figure 5B](#pone-0011387-g005){ref-type="fig"}). The loop sequences also showed different divergence trends, although they often showed greater divergence than miRNA and miRNA\* ([Figure 5](#pone-0011387-g005){ref-type="fig"}).
Discussion {#s3}
==========
Mature miRNAs (miR-\#-5p or miR-\#-3p) were evolutionarily conserved across the animal kingdom [@pone.0011387-LagosQuintana1], [@pone.0011387-Lau1], [@pone.0011387-Lee2], while their passenger strands, either as typically degraded miRNA\* or as abundant mature miRNAs, showed conservation across vertebrates with higher nucleotide divergence than their partners ([Figure 2](#pone-0011387-g002){ref-type="fig"}). Different miRNA\* sequences showed various divergence patterns. Data analysis revealed that some mature miRNAs and their passenger strands were well conserved, especially in their seed sequences ([Figure 2](#pone-0011387-g002){ref-type="fig"}). For example, miR-124 ([Figure 2A](#pone-0011387-g002){ref-type="fig"}), a phylogenetically conserved miRNA from *Caenorhabditis* to *Homo*, is one of the most abundantly expressed miRNAs in the nervous system and contributes to the development of the nervous system [@pone.0011387-Cheng1], [@pone.0011387-Nelson1], [@pone.0011387-Smirnova1]. However, some miRNA\* sequences showed a higher level of nucleotide divergence, even at positions 2--8, such as miR-10\* and miR-100\* ([Figure 2B](#pone-0011387-g002){ref-type="fig"}). Even when multicopy hairpin precursors yielded the same mature miRNA sequences, the other products, the miRNA\* species, often diverged, such as hsa-let-7a-1\* and hsa-let-7a-2\*, or hsa-let-7f-1\* and hsa-let-7f-2\* ([Figure 3A](#pone-0011387-g003){ref-type="fig"}). Despite greater divergence than miRNAs, we also found that miRNA\* diverged much more slowly than terminal loops, which may strongly aid the identification of functional animal miRNA hairpins as "saddle" structures [@pone.0011387-Berezikov1], [@pone.0011387-Lai1]. Interestingly, similar divergence trends of human let-7/let-7\* could be detected by sequence analysis across vertebrates ([Figure 3](#pone-0011387-g003){ref-type="fig"}), which might reveal historical miRNA gene divergence and similar evolutionary trends across different animals. High divergence levels could be detected among these homologous miRNA genes ([Figure 3](#pone-0011387-g003){ref-type="fig"} and [Figure 4](#pone-0011387-g004){ref-type="fig"}). Although the divergence mainly resulted from the loop regions, the divergence of miRNA\* strands also contributed partly to the high divergence level ([Figure 3A](#pone-0011387-g003){ref-type="fig"} and [Figure 4](#pone-0011387-g004){ref-type="fig"}). On the other hand, we selected several typical vertebrate animals to analyze miRNA/miRNA\* sequences because different miRNAs had different distribution spectrums across the animal kingdom. Similarly, some miRNA\* strands were highly conserved, but others were less conserved even though their mature miRNAs were well conserved ([Figure 5](#pone-0011387-g005){ref-type="fig"}). Those miRNAs for which both miR-\#-5p and miR-\#-3p were reported as mature functional miRNAs were always well conserved, especially in seed and anchor sequences, such as miR-17, miR-140 and miR-455 ([Figure 5](#pone-0011387-g005){ref-type="fig"}). Nevertheless, some miRNA\* strands diverged even in their seed sequences, such as miR-31\*, miR-100\* and miR-125b\* ([Figure 5](#pone-0011387-g005){ref-type="fig"}). Therefore, across miRNA gene evolution, functional mature miRNAs were well conserved, especially in their seed sequences, while miRNA\* sequences showed various evolutionary patterns.
Some miRNA\* showed high divergence levels between different precursors, even between different multicopy precursors, but others maintained well-conserved seed sequences, especially for those miRNA genes that generated abundant miRNAs from both arms of their hairpins ([Figure 2](#pone-0011387-g002){ref-type="fig"}, [Figure 5](#pone-0011387-g005){ref-type="fig"} and [Figure S1](#pone.0011387.s001){ref-type="supplementary-material"}). Evolutionary conservation of the passenger strand might have two plausible explanations. Firstly, its evolution would be constrained because it contributes to the stable stem-loop structure of the miRNA hairpin precursor. Secondly, the strong conservation of the passenger strand might afford it an opportunity to act as a mature miRNA and bind target mRNAs like its partner. Therefore, the evolutionary patterns of miRNA\* might carry a pivotal implication (discussed below).
According to miRNA biogenesis, as miRNA partners, the miRNA passenger strands should be more tightly constrained at their 3\' ends, which pair with the miRNA seed sequences (nucleotides 2--8). However, similar to Okamura et al. [@pone.0011387-Okamura1], systematic analysis showed that some miRNA\* sequences were notably analogous to miRNA strands: well conserved in seed (nucleotides 2--8) and anchor sequences (nucleotides 13--16) ([Figure 2](#pone-0011387-g002){ref-type="fig"}, [Figure 5](#pone-0011387-g005){ref-type="fig"} and [Figure S1](#pone.0011387.s001){ref-type="supplementary-material"}). They also showed patterns of nucleotide divergence that were consistent with their selection for regulatory activity [@pone.0011387-Okamura1]. Therefore, the evolutionary pattern of miRNA\* affords an opportunity to become abundant functional guide miRNAs based on well-conserved seed sequences that reflect their sequence-based, trans-regulatory activity [@pone.0011387-Okamura1], [@pone.0011387-Lewis1]. Indeed, earlier computational efforts in miRNA gene finding hinted at the possibility of trans-acting activity for miRNA\* species [@pone.0011387-Lai1]. Some miRNA\* strands were abundant because they were degraded more slowly than others, and the miRNA:miRNA\* ratio of many loci became increasingly skewed as development proceeded [@pone.0011387-Okamura1]. According to the miRBase database, we found that two kinds of mature products (miR-\#-5p from the left arm and miR-\#-3p from the right arm) were reported for some miRNA precursors, such as mir-199a and mir-17. Analysis of miRNAs based on high-throughput sequencing data also showed abundant miRNA\*, although less abundant than their partners [@pone.0011387-Guo1]. A recent study revealed that miRNA:miRNA\* ratios were flexible in different development stages and that both strands resisted nucleotide divergence across Drosophilid evolution [@pone.0011387-Okamura1]. The expression level of a miRNA passenger strand mainly relies on its degradation degree and degradation rate, because both strands of the miRNA duplex are necessarily produced in equal amounts by transcription. We found that different miRNA\* showed various divergence patterns even though their mature miRNAs were highly conserved ([Figure 2](#pone-0011387-g002){ref-type="fig"}, [Figure 5](#pone-0011387-g005){ref-type="fig"} and [Figure S1](#pone.0011387.s001){ref-type="supplementary-material"}). Generally, those less conserved miRNA\* strands were not reported as mature functional miRNAs. The divergence of less-conserved miRNA\* always resulted from individual animals and/or multicopy precursors ([Figure 5](#pone-0011387-g005){ref-type="fig"} and [Figure S1](#pone.0011387.s001){ref-type="supplementary-material"}). Evolutionary trends of the miRNA\* strands might carry potential implications for their final fates: degradation as by-products, or acting as functional regulatory molecules like mature miRNAs. It is plausible that non-functional miRNA\* strands may involve higher rates of nucleotide substitution during evolution, while functional miRNA\* sequences would be strictly conserved at the positions critical for binding target mRNAs. The correlation between the evolutionary constraint of miRNA\* and their expression levels might reflect their potential function as endogenous regulatory RNAs. Some miRNA\* strands might become functional guide strands, and these were phylogenetically conserved similar to their mature miRNAs.
Those well-conserved miRNA\* strands might also play important roles in regulatory networks at different development stages, but the limited miRNA data cannot yet provide enough experimental evidence. Therefore, evolutionary patterns of many miRNA\* strands were consistent with their regulatory potential [@pone.0011387-Okamura1], [@pone.0011387-Liu1], and the final fate, degradation as a mere carrier strand or becoming a potential functional guide miRNA, may be inferred from miRNA gene evolution. Some passenger strands were well conserved at positions 2--8, similar to their mature miRNAs, and the phylogenetic conservation of miRNA\* may indicate their potential to become abundant guide miRNAs and play important roles in particular developmental contexts at specific times. This systematic evolutionary analysis may broaden our understanding of miRNA\* strands, especially of those potential regulatory miRNA\* species.
Materials and Methods {#s4}
=====================
All the miRNA and miRNA\* sequences, and their miRNA precursor sequences from different animal species, were obtained from the miRBase database (version 14.0, <http://www.mirbase.org/>). We denote the miRNA precursors by mir-\#, the mature miRNAs by miR-\#, and miRNA\* (miRNA star) by miR-\#\*, in accordance with the convention in the miRBase database. If a miRNA\* strand was reported as an abundant mature miRNA, it is denoted miR-\#-5p or miR-\#-3p. In this study, miR-\#-5p and miR-\#-3p were identified according to the human miRNAs in the miRBase database. These sequences were aligned with Clustal X 2.0 [@pone.0011387-Larkin1] using multiple sequence alignment. The phylogenetic network of miRNA genes was reconstructed using the neighbor-net method [@pone.0011387-Bryant1] based on the Jukes-Cantor model as implemented in SplitsTree 4.10 [@pone.0011387-Huson1]. For the human let-7 family, we attempted to reconstruct the evolutionary history from the gene tree and discover potential evolutionary implications of let-7 and let-7\*. All gaps/missing data were deleted in the phylogenetic network analysis.
Because miRNA\* sequences are typically degraded, only limited miRNA\* sequences are available in the miRBase database. In order to discover detailed evolutionary information, we analyzed predicted consensus sequences as miRNA\* sequences, derived from known miRNA\* and their precursor sequences. Because of imprecise and alternative cleavage by Dicer and Drosha, multiple isomiRs, the population of variants of known miRNAs, have been identified from sequencing data obtained with high-throughput DNA sequencing technologies [@pone.0011387-Guo1], [@pone.0011387-Kuchenbauer1], [@pone.0011387-LagosQuintana2], [@pone.0011387-Morin1], [@pone.0011387-Ruby1]. Therefore, in this study, we only analyzed nucleotide substitutions in the internal sequences of miRNA and miRNA\*, without considering gaps/missing sites in the terminus regions. The percentage of nucleotide substitution at each position (from 1 to ∼22) was estimated for miRNAs and miRNA\* sequences by analyzing all the miRNA precursors from the miRBase database. In order to estimate substitution trends more precisely, we selected the most abundant nucleotide at each position as the reference nucleotide.
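To make the per-position estimate concrete, here is a minimal sketch of the calculation described above. It is not the authors' actual pipeline: the aligned let-7-like strings below are toy inputs invented for the example, and real input would be a Clustal alignment of miRBase sequences.

```python
from collections import Counter

# Toy aligned mature-miRNA sequences (equal length, gaps already removed);
# real input would come from a Clustal alignment of miRBase sequences.
aligned = [
    "UGAGGUAGUAGGUUGUAUAGUU",
    "UGAGGUAGUAGGUUGUGUAGUU",
    "UGAGGUAGUAGGUUGUAUGGUU",
]

def substitution_percentages(seqs):
    """Per-position % of sequences differing from the most abundant nucleotide."""
    n = len(seqs)
    percentages = []
    for column in zip(*seqs):  # iterate over alignment columns
        _, ref_count = Counter(column).most_common(1)[0]  # reference nucleotide
        percentages.append(100.0 * (n - ref_count) / n)
    return percentages

for pos, pct in enumerate(substitution_percentages(aligned), start=1):
    if pct > 0:
        print(f"position {pos}: {pct:.1f}% substituted")  # positions 17 and 19
```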
Supporting Information {#s5}
======================
###### Figure S1
Patterns of nucleotide divergence of miRNA and miRNA\* across vertebrates. (A) and (B) show well-conserved miR-\#-5p and miR-\#-3p based on miRNA gene families. (C) and (D) show divergence patterns of miR-\#-5p/miR-\#-3p and miRNA/miRNA\* based on single miRNA genes.
(3.99 MB TIF)
**Competing Interests:**The authors have declared that no competing interests exist.
**Funding:**The work is supported by the project 30871393 from National Natural Science Foundation of China and funded by Tsinghua National Laboratory for Information Science and Technology (TNList) Cross-discipline Foundation. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
[^1]: Conceived and designed the experiments: LG ZL. Analyzed the data: LG. Contributed reagents/materials/analysis tools: LG. Wrote the paper: LG.
| |
Weill Cornell Medicine is among the top-ranked clinical and medical research centers in the country, providing compassionate care for the whole patient for their whole life, training the next generation of health care leaders and speeding breakthrough discoveries from the lab bench to the patient bedside.
Weill Cornell Medicine continues to build on its track record of unprecedented growth. Our world-class physicians and scientists are committed to addressing emerging challenges through collaboration and innovation and changing the future of medicine.
Donors are the cornerstone of our work at Weill Cornell Medicine, helping us lead the way in life-saving treatments and pioneering research, pairing medical education with the most advanced technologies.
You are the agents of change in advancing medicine and patient care.
CBSU bibliography search
Automaticity and attentional control in spoken language processing: neurophysiological evidence
Authors: SHTYROV, Y.
Reference: Mental Lexicon, 5(2), 255-276
Year of publication: 2010
CBU number: 7222
Abstract:
A long-standing debate in the science of language is whether our capacity to process language draws on attentional resources, or whether some stages or types of this processing may be automatic. I review a series of experiments in which this issue was addressed by modulating the level of attention on the auditory input while recording event-related brain activity elicited by spoken linguistic stimuli. The overall results of these studies show that the language function does possess a certain degree of automaticity, which seems to apply to different types of information. It can be explained, at least in part, by the robustness of strongly connected linguistic memory circuits in the brain that can activate fully even when attentional resources are low. At the same time, this automaticity is limited to the very first stages of linguistic processing (<200 ms from the point in time when the relevant information is available in the auditory input). Later processing steps are, in turn, more affected by attention modulation. These later steps, which possibly reflect a more in-depth, secondary processing or re-analysis and repair of incoming speech, therefore appear dependent on the amount of resources allocated to language. Full processing of spoken language may thus not be possible without allocating attentional resources to it; this allocation may itself be triggered by the early automatic stages in the first place.
| |
Full article issued by The University of New South Wales.
Smart building ecosystems are now possible thanks to a partnership between the University of New South Wales (UNSW) Sydney and an Australian company, WBS Technology, which was assisted by funding through an ARC Linkage Projects grant.
A wireless solution developed by the researchers is being used by WBS Technology to roll out technology that would allow buildings to monitor themselves, react to their surroundings, follow instructions from afar, and even talk to smartphones. Each exit sign or emergency light acts as a node in the network, passing information back and forth across a building. Other devices can be connected to the network allowing all of them to be controlled and monitored remotely.
The collaboration between the University engineers and the company began under UNSW's TechConnect incubator program, which led to an ARC Linkage project between the two, and finally culminated in an Innovation Connections Grant funding the commercialisation of the technology developed by the University.
Exit sign. Credit: Stuart Cunningham (CC BY 2.0).
Media release issued by the ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav).
An international group of scientists, including dozens of Australians based at OzGrav, has announced the detection of the most massive binary black hole merger yet witnessed in the universe.
The black hole that resulted from this cataclysmic event is more than 80 times as massive as our Sun and comes—along with evidence of nine other black hole mergers—just over one year since scientists announced they had witnessed, for the first time, the violent death spiral of two dense neutron stars via gravitational waves.
A series of papers now published present the full catalogue of observations of binary black hole and binary neutron star mergers from the first two observing runs (2015, 2016-17) of the Advanced LIGO (US) and Advanced Virgo (Italy) gravitational-wave detectors, as well as calculations of mass, spin, and redshift distributions for the first ten binary black hole mergers.
Prior to OzGrav, Australian researchers' involvement in Advanced LIGO received significant support through the ARC's Linkage Infrastructure, Equipment and Facilities scheme.
A new Australian Research Council Discovery Project led by Dr Eloise Foo at the University of Tasmania is receiving $322,000 to build a model of the signals that regulate the formation of root nodules—unique organs which have a vital role in extracting nitrogen from the soil.
Nitrogen is often limited in the soil, and farmers rely on artificial fertilisers to improve a soil’s nitrogen content for agriculture, but this can have adverse environmental consequences. Some plants, however, (mostly legumes) can form root nodules that host nitrogen-fixing bacteria.
Sustainable sources of plant nutrients are increasingly required to ensure food security and minimise the environmental impact of intensive farming, and this project will provide fundamental information on why some species can form nitrogen-fixing nodules by examining the role of plant hormones. The researchers will build a knowledge base to potentially expand this symbiosis into non-legumes, harnessing the huge advantage nodule forming species have in staple crops.
Image: A cross section of root nodules on an alder tree. Source: Wikimedia (Public Domain).
A new Australian Research Council Discovery Project led by Professor Susan Luckman at the University of South Australia has received $333,000 to enhance the future of advanced manufacturing in Australia by mapping intersectional craft work within the Australian economy.
Craft skills embedded and working in collaboration with industries are essential to innovation as Australia looks to develop high-end advanced manufacturing. The research team will identify ways in which the skills of ‘making’—which are required to sustain and grow future manufacturing—can be maintained and extended, supporting not only the survival and updating of current production, but also enabling the kind of fertile ground out of which the innovation necessary for developing advanced manufacturing can grow.
A new Australian Research Council Linkage Infrastructure, Equipment and Facilities (LIEF) grant of $420,000 awarded to a large team led by Professor Hugh Craig at The University of Newcastle will support a time-layered cultural map (TLCMap) of Australia—an online research platform to deliver researcher driven national-scale infrastructure for the humanities, focused on mapping, time series, and data integration.
The TLCMap will expand the use of Australian cultural and historical data for research through sharply defined and powerful discovery mechanisms, enabling researchers to visualise hidden geographic and historical patterns and trends, and to build online resources which present to a wider public the rich layers of cultural data in Australian locations.
TLCMap is not a singular project or software application with a defined research outcome, but infrastructure linking geo-spatial maps of Australian cultural and historical information, adapted to time series and will be a significant contribution to humanities research in Australia. For researchers, it will transform access to data and to visualisation tools and open new perspectives on Australian culture and history. For the public, it will enable increased accessibility to historical and cultural data through visualisations made available online and in print.
New research supported with $460,000 through the Australian Research Council’s Linkage Projects scheme will help Australian endangered mammal populations to recover, by examining data on their genomic and morphological variation. This data will be combined with the results from conservation translocations, where small populations of endangered species are mixed or moved into predator-controlled habitats to promote their recovery.
Led by Professor Craig Moritz at The Australian National University, with partners including the Australian Museum, and the museums of Western Australia and South Australia, the research team will use new genomics methods to measure precisely the effects of small population size on genetic diversity and mutations, with a focus on seven intensively managed marsupial species.
Using evidence on genomic and phenotypic divergence of remnant and translocated populations, the project will give valuable guidance on when and how to mix populations for fauna restoration projects, and evaluate the risks of these conservation activities.
A project led by Dr Lenore Adie at the Australian Catholic University aims to assess the dependability of teacher judgement using psychometric scaling and online moderation, with $511,658 in new funding from the Australian Research Council’s Linkage Projects scheme.
The project will use an innovative approach to connect achievement standards and judgement-practice for middle-year students in the Australian education system. Partnering with the Federal Department of Education and Training, and the Western Australian School Curriculum and Standards Authority, among others, the project is expected to provide significant benefits in building teachers’ assessment capabilities, shaping policy and teacher preparation.
Expected outcomes include the development of scaled work samples exemplifying A-E standards of achievement, refined methods for the consistency and comparability of assessment decisions, and a new approach to moderating teacher judgements.
A new research project led by Professor Chris Sarra at the University of Canberra, and funded with $583,199 through the Australian Research Council’s Discovery Indigenous scheme, aims to generate new knowledge about high-achieving Indigenous students from low socioeconomic backgrounds.
The longitudinal study will provide evidence on success factors that are most effective in improving academic achievement and experience of Indigenous students, and inform government and Aboriginal communities in developing evidence-based policy and practice.
The project is expected to provide a unique opportunity to unveil how Indigenous students’ network characteristics formed in family, school, and community interplay with their psychological constructs in shaping their academic success longitudinally. The project will also represent a significant advancement in the global research field of Indigenous education.
Image: Aboriginal education. Credit: Public Domain Files. | https://www.arc.gov.au/news-publications/media/research-highlights?page=5 |
Executive Committee Member
Chick-fil-A, Inc.
Since March 2019, Emily has been with Chick-fil-A as their Principal Program Lead, Public Affairs. Prior to this role, she served as a member of the U.S. Mission to the United Nations as then-Secretary Haley's senior advisor on foreign policy issues related to Africa, Europe, East Asia, the Western Hemisphere, and Peacekeeping. Her role included coordinating with U.S. embassies, other U.S. government agencies, foreign governments, and non-governmental organizations to identify emerging global political and security events and develop and implement appropriate responsive courses of action within the United Nations Security Council and General Assembly, as well as organizing the High Level Summit on the World Drug Problem hosted by President Donald J. Trump during the 73rd Session of the United Nations General Assembly and attended by Heads of Delegations from over 130 other countries.
Published on 06 Aug 2019.
RAM Ratings has assigned a final AA3/Stable rating to Telekosang Hydro One Sdn Bhd’s (TH1) RM470 mil ASEAN Green SRI Sukuk under the Shariah principle of Wakalah Bi Al-Istithmar (2019/2037) (Senior Sukuk). Concurrently, we have also assigned a final A2/Stable rating to TH1’s RM120 mil ASEAN Green Junior Bonds (2019/2039) (Junior Bonds).
The Senior Sukuk represents the world’s first rated mini-hydro project sukuk that is aligned with the requirements of Securities Commission Malaysia’s Sustainable and Responsible Investment Sukuk Framework, the ASEAN Green Bond Standards, and the globally recognised Green Bond Principles. TH1’s Green Sukuk Framework has been reviewed by RAM Consultancy Services Sdn Bhd.
The Project Companies (comprising TH1 and Telekosang Hydro Two Sdn Bhd (TH2)) have each signed 21-year Renewable Energy Power Purchase Agreements (REPPAs) with Sabah Electricity Sdn Bhd (SESB), in relation to two small hydropower plants with a combined installed capacity of 40 MW in Tenom, Sabah (the Projects). The Scheduled Feed-in Tariff Commencement Date (FiT CD) for the Projects is 31 July 2021, with an expected 24-month construction period.
While TH1 is the issuer of both the Senior Sukuk and the Junior Bonds, the Project Companies will be joint signatories for the relevant financing agreements, thereby ensuring their due performance in supporting TH1’s financing obligations. The total estimated project cost of RM577.85 mil will be funded via the issuance proceeds from the Senior Sukuk (80%), the Junior Bonds (3%) and redeemable preference shares (17%).
In assigning the final ratings, RAM has reviewed all the relevant transaction documents and assumptions applied. We find them to be in line with our expectations when the preliminary ratings had been assigned (published on 31 May 2019). Please refer to our final rating rationale on TH1 for further details on the assigned ratings.
Analytical contact
Yip Chee Meng
(603) 3385 2516
[email protected]
Media contact
Padthma Subbiah
(603) 3385 2577
[email protected]
The credit rating is not a recommendation to purchase, sell or hold a security, inasmuch as it does not comment on the security’s market price or its suitability for a particular investor, nor does it involve any audit by RAM Ratings. The credit rating also does not reflect the legality and enforceability of financial obligations.
RAM Ratings receives compensation for its rating services, normally paid by the issuers of such securities or the rated entity, and sometimes third parties participating in marketing the securities, insurers, guarantors, other obligors, underwriters, etc. The receipt of this compensation has no influence on RAM Ratings’ credit opinions or other analytical processes. In all instances, RAM Ratings is committed to preserving the objectivity, integrity and independence of its ratings. Rating fees are communicated to clients prior to the issuance of rating opinions. While RAM Ratings reserves the right to disseminate the ratings, it receives no payment for doing so, except for subscriptions to its publications.
Similarly, the disclaimers above also apply to RAM Ratings’ credit-related analyses and commentaries, where relevant. | https://www.ram.com.my/pressrelease/?prviewid=5052 |
Competency Allocation And Institutional Revenue: Political And Economic Analysis Of Institutional Change In The Asia - Pacific Region After The Cold War
Posted on: 2016-10-31
Degree: Doctor
Type: Dissertation
Country: China
Candidate: C C Ye
GTID: 1106330461966109
Subject: World Economy
Abstract/Summary:
As part of the systemic change of the international order after the Cold War, the regional order in the Asia-Pacific also changed. Facing different assessments of the profits of institutions, major powers adopt different policies. This dissertation answers these major questions: 1) What are the features of institutional change in the Asia-Pacific after the Cold War? 2) What factors determine institutional change? 3) What are the features of major-power games and how do they shape regional institutions? 4) Why does the same institution develop differently in different periods, and why do different institutions develop differently in the same period? The main hypothesis of this dissertation is that major powers' behavior in institution construction depends on the capacity distribution of the regional system, which refers to the survival and security pressure induced by the relative position of a country in a regional system, and on their cognition of the profit of regional institutions. This dissertation adopts these two independent variables to explain the dependent variable, regional institutional change in the Asia-Pacific. Profit of institutions and capacity distribution affect major powers' behavior in institutional games, which promotes regional institutional change. From the perspective of the changing capacity and profit in the contest between China and the U.S., this dissertation divides institutional change into three periods, arguing that China has moved through integrating into, participating in, and leading the regional system, while the United States has moved through constructing the system, neglecting it, and rebalancing toward the Asia-Pacific. Each period also has three steps, namely institution demand, institution game and institution maintenance. In order to test the hypothesis, the qualitative approaches of structured, focused comparison and process tracing are applied, and three cases are discussed respectively: 1) Asia-Pacific Economic Cooperation (APEC) and the ASEAN Regional Forum in the 1990s; 2) the ASEAN Plus Three (APT) and APEC during the war on terror from 2001 to 2008; and 3) the America-led TPP and the hedging institutions founded by China after the global financial crisis. The major findings of this study are as follows: 1) The first period was a unipolar system; China and the United States had a strong willingness to promote regional institutions, but China and ASEAN were not willing to accept U.S. dominance, so the result of the game was a weakly constrained institution characterized by "open regionalism". 2) The second period was still unipolar; the U.S. willingness to supply institutions declined while China showed more interest but lacked sufficient power, so the outcome of the game was the stagnation of the US-led APEC and setbacks for the China-led APT. 3) The last period was a bipolar system; due to the Sino-US balance of power in the Asia-Pacific region, after Obama's rebalancing strategy the United States had more incentive to promote regional institutions that were more exclusive and valued relative gains. After the relative rise of Chinese power, China took a more aggressive posture, founding a series of institutions to hedge against it. So the result of the game is a balance of power between the US and China. The theoretical contribution of this dissertation lies in three aspects. First, it uses capacity distribution and institution profit to explain regional institutional change and divides institutional change into three phases to avoid the paradigm debate between different schools.
Second, it distinguishes state actors from the resulting regional regimes, and adopts game theory to explain the relationship between national policy behavior and the resulting institutional change. And finally, beyond static analysis of the variables, this study provides a historical institutionalism perspective for understanding regional institutional change. In conclusion, under the framework of neo-liberal institutionalism, this dissertation reveals the change of capacity distribution and institution profit between the hegemonic power and the rising power. It also uses game theory to investigate the effect of these changes on power behaviors and game results, and therefore enriches the analysis of regional institutional change.
Keywords/Search Tags: capacity distribution, institution profit, regional institution, institutional change
First, what is a cybersecurity awareness program? It is a structured approach to managing an organization’s human risk. You can gauge and measure the maturity of an awareness program by using the Security Awareness Maturity Model. Mature awareness programs manage human risk by answering three key questions in this order.
- Human risks: You cannot manage all human risk. As such, you must assess, identify and prioritize your organization’s top human risks. This should be a data-driven process in partnership with key groups within security such as the incident response, security operations, cyber threat intelligence or risk management teams.
- Behaviors: We need to prioritize behaviors: the fewer behaviors we focus on, the more likely people are to change them, and at a lower cost to your organization.
- Change: How do we motivate and enable people to change those behaviors? One of my favorite behavior change models is the BJ Fogg Behavior model.
Over time, technology, threats and business requirements change. As such, an organization’s human risks, in coordination with its security team, should be reviewed and updated annually.
What to measure
Once you look at cybersecurity awareness and managing human risk through this lens, it becomes easier to identify which metrics you should be focusing on. Measure what you care about: your top human risks and the behaviors that most effectively manage those risks. I’ve been hesitant to suggest to organizations exactly what risks and behaviors they should focus on, as risks are often unique to each organization.
The concern is that too many organizations simply don’t have the data/resources to identify their top human risks, and as such they don’t know where to start. I’m seeing that in many cases it doesn’t matter, as almost all the data resources I have been researching, such as the annual Verizon DBIR Report, CISA Essentials, and this year’s NCSA/CybSafe Report, point to the same finding: most organizations share the same top three human risks – phishing, passwords and updating. As such, I’m going to define these risks, the behaviors that manage these risks, and how to measure those behaviors.
One thing you should decide beforehand is whether you want to measure and track behavior by individual or by role/department/business unit. If tracking at the individual level, be sure you are taking measures to protect the information and privacy of every individual. Depending on the size of your organization and the amount of data you are collecting, you may also need to partner with someone in your organization who specializes in data analytics/business intelligence to help you normalize/analyze findings.
Phishing
Phishing has, for three years now, been the number one driver of breaches at a global level (2021 Verizon DBIR Report – p15). No matter the number of technical controls we throw at this problem, cyberattackers simply adapt and bypass them. As such, we need to teach people how to identify and report these attacks. So, what do we measure? After people have been trained, measure their susceptibility to phishing attacks. Of our top human risks this one is the simplest to measure, which is why it is such a common metric. A minimal sketch of computing these rates from simulation data follows the list below.
- Click rates: Measure the overall click rate of your organization. When you first roll out phishing training this number will drop fast, perhaps from a 20% click rate to less than 2% click rate for more basic phishing templates. Once you are at around 2-3% click rate you may need to start using more difficult / targeted phishing templates. Most phishing vendors support a tiered approach enabling you to use different categories of phishing difficulty. Remember, your goal is not a 0% click rate, as once you hit 2% or less click rate with basic, beginner level phishing lures, your first-time clickers are primarily new hires, and this is a training event for them.
- Repeat click rates: For many organizations this is their most valuable phishing metric as this measures your repeat clickers – the people who are not changing behavior and represent a far greater risk to your organization.
- Reporting rates: If you are training and enabling your workforce to report suspected phishing emails, this helps develop your Human Sensor network. For this, it’s not so much the number of people that report that is key, but how fast your security team gets the first reports. The sooner people report a suspected incident, the faster the security team can respond and manage potential incidents. People who report represent the most resilient of your workforce, as they are not only identifying attacks, but enabling the security team to respond and secure the entire organization more proactively.
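To make these three metrics concrete, here is a minimal sketch in Python of how they might be computed from exported simulation results. It is a hypothetical illustration, not any vendor's API: the record fields (`user`, `campaign`, `clicked`, `reported`) are invented, since each phishing platform exports its own schema.

```python
from collections import defaultdict

# Hypothetical export rows from a phishing simulation platform;
# real field names vary by vendor.
events = [
    {"user": "alice", "campaign": "q1", "clicked": True,  "reported": False},
    {"user": "bob",   "campaign": "q1", "clicked": False, "reported": True},
    {"user": "alice", "campaign": "q2", "clicked": True,  "reported": False},
    {"user": "bob",   "campaign": "q2", "clicked": False, "reported": True},
]

def click_rate(rows):
    """Overall click rate across all delivered simulation emails."""
    return sum(r["clicked"] for r in rows) / len(rows)

def repeat_clickers(rows, threshold=2):
    """Users who clicked in `threshold` or more campaigns -- the higher-risk group."""
    clicks = defaultdict(int)
    for r in rows:
        if r["clicked"]:
            clicks[r["user"]] += 1
    return {u for u, n in clicks.items() if n >= threshold}

def report_rate(rows):
    """Share of simulation emails that were reported as suspicious."""
    return sum(r["reported"] for r in rows) / len(rows)

print(f"click rate: {click_rate(events):.0%}")        # 50%
print(f"repeat clickers: {repeat_clickers(events)}")  # {'alice'}
print(f"report rate: {report_rate(events):.0%}")      # 50%
```

In practice you would also record the timestamp of each report, so you can track how quickly the security team receives its first report per campaign, which is the part of the reporting metric that matters most.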
Passwords
For several years now, passwords have continued to be a primary driver of breaches. Cyber attackers have changed their TTPs (Tactics, Techniques and Procedures), moving from gaining access or lateral movement by continually hacking into and infecting systems to using legitimate accounts to more easily pivot and traverse through a victim organization while avoiding detection. As such, both strong passwords and the secure use of those passwords have become key.
- Strong passwords: Ensure people are adopting and using strong passwords. Length is the new entropy; passphrases are now highly encouraged. This can be tested by running brute force / cracking solutions against password databases.
- Password manager adoption: We have in many ways made passwords difficult, confusing, and even intimidating for people with various rules and policies. As such, organizations are starting to adopt password managers to make passwords simpler for their workforce. If your organization has deployed password managers, measure the password manager adoption and use rate. What percentage of your workforce is using password managers? You should be able to pull this data from whichever department is deploying/managing password managers.
- Multi-factor authentication adoption: Like password managers, if you have rolled out MFA, attempt to identify how much of your workforce has adopted it. MFA is especially important for critical or sensitive accounts. Once again, this information should be accessible from whoever is responsible for deploying the MFA solution, is responsible for the logging of authentication systems, leads Identity and Access Management, or is part of Operations or Security.
- Password reuse/password sharing: Are people reusing the same password across different work accounts (or, even worse, reusing work and personal accounts)? Or are people sharing their passwords with fellow co-workers? While this behavior sounds difficult to measure, you can effectively measure both behaviors with a security behavior/culture survey. The key is using a scientific approach to how you both write the survey and measure its results. For example, one way to measure password sharing would be to ask your workforce an anonymized question such as "Have you shared a work password with a co-worker in the past year?" and track how the answers change over time. A sketch of measuring reuse from an authorized password audit follows this list.
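As a concrete illustration of the reuse measurement, the sketch below assumes an *authorized* password audit has already recovered plaintexts per system (for example, by running a cracking rig against exported hashes); the data and system names are invented, and recovered plaintexts should never be retained longer than the audit requires.

```python
# Illustrative only: per-system plaintexts recovered by an authorized audit.
cracked = {
    "vpn":   {"alice": "Summer2024!", "bob": "correct horse battery staple"},
    "email": {"alice": "Summer2024!", "bob": "a different passphrase"},
}

def reuse_rate(cracked_by_system):
    """Fraction of audited users seen with the same password on 2+ systems."""
    seen = {}        # user -> set of passwords observed so far
    reused = set()   # users caught reusing a password
    for system, users in cracked_by_system.items():
        for user, pw in users.items():
            if pw in seen.get(user, set()):
                reused.add(user)
            seen.setdefault(user, set()).add(pw)
    return len(reused) / len(seen) if seen else 0.0

print(f"password reuse rate: {reuse_rate(cracked):.0%}")  # alice reuses -> 50%
```

Survey responses about sharing can be scored the same way: compute the percentage answering "yes" and watch the trend over time rather than the absolute number.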
Updating
Of the three human risks we cover, this one may not apply. We want to ensure the computers and devices people are using, and the applications and apps installed on them, are updated and current. For some organizations this is not an issue, as people do not have admin rights or control over work-issued devices; instead, their devices are actively patched by IT. However, for many organizations this is an issue, as so many people are now working remotely from home and are often using personal devices or home networks for work access. There are several ways to measure this; a small parsing sketch for one of them follows the list below.
- For any devices your organization issues, your operations, IT, or perhaps even vulnerability management teams should be able to remotely track the update status of those devices. In some cases, solutions such as MDM (mobile device management) may be installed on personal devices which can also track updating status.
- Your learning management system (LMS) or phishing platform may be able to automatically track the device, operating system and browser version of any device that connects to them.
- Assess and survey your workforce to determine if they understand the importance of updating and are actively updating their personal devices, to include enabling automatic updating.
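However the device data is collected, the roll-up itself is simple. A minimal sketch assuming a hypothetical MDM CSV export that lists the installed and latest-available OS version per device:

```python
import csv
from io import StringIO

# Hypothetical MDM export: one row per enrolled device, with the installed
# OS version and the latest version available to that device.
mdm_export = """device_id,os_version,latest_available,last_checkin
A100,16.4,16.5,2023-05-01
A101,16.5,16.5,2023-05-02
A102,15.7,16.5,2023-03-20
"""

rows = list(csv.DictReader(StringIO(mdm_export)))
current = [r for r in rows if r["os_version"] == r["latest_available"]]
print(f"{len(current)}/{len(rows)} devices fully updated "
      f"({len(current) / len(rows):.0%})")
```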
Strategic metrics
Once you start collecting metrics on people's behaviors, you can use this data to better understand and manage your overall human risk. Three key uses include:
- Identify what regions, departments, or business units have the fewest secure behaviors and represent the greatest risk to the organization (a small roll-up sketch follows this list).
- Identify what regions, departments, or business units are most successfully changing behavior…and why. Use lessons learned to apply to your less secure departments or regions.
- When an incident does happen, understand whether that individual was trained. Was the department they were in one of the most secure or least secure departments or business units?
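A minimal sketch of the department-level roll-up referenced in the first item above, assuming per-recipient phishing-simulation results; all names and numbers are illustrative:

```python
from collections import defaultdict

# Hypothetical per-recipient phishing-simulation results: (department, clicked).
results = [
    ("Finance", True), ("Finance", False), ("Finance", True),
    ("Engineering", False), ("Engineering", False),
    ("Sales", True), ("Sales", False),
]

totals = defaultdict(lambda: [0, 0])  # department -> [clicks, recipients]
for dept, clicked in results:
    totals[dept][0] += clicked  # bool counts as 0/1
    totals[dept][1] += 1

# Rank departments from highest to lowest click rate.
for dept, (clicks, n) in sorted(totals.items(),
                                key=lambda kv: kv[1][0] / kv[1][1],
                                reverse=True):
    print(f"{dept}: {clicks / n:.0%} click rate across {n} recipients")
```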
You can also demonstrate the strategic value of your program to leadership by aligning behavior with what leadership really cares about.
- Number of incidents: As people change behavior, the overall number of incidents should go down, such as the number of devices infected because people fell victim to phishing attacks, or account takeovers due to weak passwords.
- Attacker dwell time: The time it takes to detect a successful cyber attacker in your organization should decrease as you develop a Human Sensor network. The less time an attacker is on your network (dwell time) the less damage they can do.
- Cost of incidents: By reducing the number of incidents, and the dwell time of successful attackers, we can reduce overall costs.
- Policy and audit violations: As behaviors change we should see a reduction in the number (or severity) of policy and audit violations.
| https://www.industrialcybersecuritypulse.com/cybersecurity-awareness-metrics-what-to-measure-and-how/
Today we’d like to introduce you to John L Hill.
John L, please share your story with us. How did you get to where you are today?
I am a Celebrity Portrait Artist. I became interested in art at an early age since my father was an artist before me. I began practicing artwork consistently when I was nine years old, mainly creating cartoons and competing with my peers to see who could draw specific characters the best. By the time I was ten years old, I began creating portraits and started developing the skills it took to create one successfully.
For a long time, I would use pencils, colored pencils and pens to create my work. When I became of age, I got into tattooing, and then after a few years of that, I moved back into creating work with only pen on canvas. It didn’t take long for me to realize that my work was highly skillful but not so creative, so I moved into painting in 2018 to express myself through color. This transition allowed me to step outside of what I was used to when creating and opened the door for my creative juices to flow. Painting and developing creativity with paint has led me to create for hundreds of amazing collectors including a number of celebrities.
Has it been a smooth road?
Being an artist and making it your career has not been a smooth road. At the end of 2016, I quit my day job so that I could pursue art full time as a pen artist. At first, it was tough finding people who wanted to buy my artwork. Eventually, the excessive time that it took to create large drawings with pen, and the small number of collectors who I could reach were part of the driving force that led me to start painting. When I started painting, I was introduced to a wider range of collectors, but it was hard for me to break out of my “perfectionist” shell and be creative. I had a hard time accepting that there really weren’t rules to creating art and expressing myself. I had to practically “unlearn” what I had known all of my life about art and become an entirely new artist. This allowed me to become more creative, which was an amazing addition to the skill that I already had.
Please tell us more about your art.
My business, John L. Hill Art, revolves around myself as an artist, the things that I create, and the experiences that I give. I specialize in creating artwork, mainly portraits, and I am known for creating pieces that are both realistic and also very creative. Whenever I create, I do not want to create just another “realistic picture,” I want to create something that looks like a painting. I want to create something that a camera cannot capture. All of my artwork is the sum of everything that I have ever learned as an artist, and that is one thing that sets me apart from others. I pour myself into my work and I express the love and passion that I have for art in everything that I do and every piece that I create. I am most proud of the growth that I have made over my lifetime and the success that I have found in influencing people through my work. My ultimate goal is to become a highly in-demand artist who positively influences everyone who looks at my creations.
Is our city a good place to do what you do?
Yes, I’ve been able to work with so many celebrities in L.A. I never set out to appeal to that audience; however, many of L.A.’s finest actually own some of my pieces.
Contact Info:
- Website: jlhillart.com
- Phone: 346.907.6376
- Email: [email protected]
- Instagram: jlhill_art
Image Credit:
John L Hill
| http://voyagela.com/interview/meet-john-l-hill-artist-los-angeles/
By: Rolf Garcia-Gallont
This week, in Xing Yang Yang v. Holder, the Fourth Circuit vacated a Board of Immigration Appeals (“BIA”) decision that erroneously upheld an inadmissibility decision based on an adverse credibility ruling.
Background
Xing Yang Yang, a native of China, entered the United States without inspection in January 1993. In March of the same year, Yang applied to the Immigration and Naturalization Service (“INS”) for asylum and withholding of removal, but in 1997 he was ordered deported in absentia. In March 2001, Yang’s mother, a lawful permanent resident in the United States and qualified relative, petitioned for an immigration visa on Yang’s behalf.
Yang filed various applications for relief from deportation under the Immigration and Nationality Act (“INA”), including a request for asylum and withholding of removal, protection under the Convention Against Torture, and adjustment of status based on his mother’s visa petition.
Initial IJ Decision: Adverse credibility ruling and denial of asylum
In June 2008, an immigration judge (“IJ”) conducted an evidentiary hearing on Yang’s asylum application. The IJ rendered an adverse credibility determination, finding that Yang’s demeanor undermined his credibility. The IJ observed that Yang had taken notes with him to the witness stand, and appeared to refer to them during his testimony; that he had signaled his mother before and during her testimony, and that her testimony had changed after the signal; and that there were several inconsistencies between Yang’s asylum application and his testimony at the hearing. The IJ then denied Yang’s asylum application.
Second IJ Decision: Denial of adjustment of status due to willful misrepresentation
Under 8 U.S.C § 1182(a)(6)(C)(i), an alien who seeks to procure an immigration benefit by “fraud or willfully misrepresenting a material fact” is inadmissible. The Second IJ Decision denied Yang’s adjustment application partly because the IJ determined that Yang had engaged in fraud and willful misrepresentation to procure an immigration benefit, and was thus ineligible for adjustment. The IJ justified the willful misrepresentation ruling by invoking the Initial IJ Decision’s credibility ruling.
Yang appealed both the Initial and Second decisions to the BIA, and the BIA affirmed both. Yang then petitioned the Fourth Circuit for review of the BIA’s decision.
The Second IJ Decision Committed Legal Error by Using an “Adverse Credibility” Ruling as the Equivalent of “Willful Misrepresentation”
Adverse credibility and willful misrepresentation are distinct legal concepts and require separate analyses. An adverse credibility determination does not require any deliberate and voluntary misrepresentation – inconsistencies between a petitioner’s application and subsequent testimony may suffice. On the other hand, a determination that an alien made a willful misrepresentation requires that deliberate and voluntary misrepresentation be shown by clear and convincing evidence.
The Fourth Circuit held that the Second IJ Decision had committed legal error because it based its willful misrepresentation ruling solely on the credibility ruling of the Initial IJ Decision, without any other basis for finding the deliberate and voluntary requirements.
To Render a Petitioner Inadmissible Under 8 U.S.C. § 1182(a)(6)(C)(i), the Government Must Show by Clear and Convincing Evidence that the Petitioner’s Willful Misrepresentation Was Used to Seek an Immigration Benefit
The Fourth Circuit then went on to review the record to see if it contained any factual basis on which the Second Decision could have reached the conclusion that Yang was inadmissible under 8 U.S.C. § 1182(a)(6)(C)(i).
In his original asylum application, Yang stated that he feared harm from the Chinese government based on his political participation in the 1989 student protests at Tiananmen Square, and his association with the Falun Gong group, which had been persecuted by the Chinese government.
At the Initial IJ Hearing, Yang explained that a “travel service” had prepared his original asylum application forms because he did not speak English at the time. Reviewing the application at the hearing, he clarified that he had participated in demonstrations in Fuzhou supporting the Tiananmen student protests, and that while he had contact with Falun Gong, he was not relying on these contacts for purposes of his asylum application.
The Fourth Circuit remarked that while a comparison of Yang’s asylum application and his Initial IJ Hearing testimony did show contradictory statements, the record did not show that these statements had been knowing and deliberate misrepresentations to gain an immigration benefit. In fact, to the extent it contradicted his asylum application, the testimony harmed his prospects of gaining the immigration benefit he was seeking – asylum. Additionally, the language barrier could explain the variations between the application and his testimony.
The Record Did Not Contain Clear and Convincing Evidence that Yang Attempted to Procure an Immigration Benefit by Deliberately and Voluntarily Making False Statements.
Willful misrepresentation to procure an immigration benefit must be shown by clear and convincing evidence in order to render an alien inadmissible under 8 U.S.C. § 1182(a)(6)(C)(i). Because the record lacked substantial evidence that would have supported such a determination, the Court held that the Second IJ Decision erred in determining that Yang was inadmissible under § 1182(a)(6)(C)(i), and the BIA erred in affirming in that respect. | http://www.wakeforestlawreview.com/2014/11/fourth-circuit-clarifies-lack-of-credibility-not-the-same-as-willful-misrepresentation/ |
A manual transmission (also known as a manual gearbox; abbreviated as MT and sometimes called a standard transmission in Canada and the United Kingdom) is a multi-speed motor vehicle transmission system, where gear changes require the driver to manually select the gears by operating a gear stick and clutch (which is usually a foot pedal for cars or a hand lever for motorcycles).
Early automobiles used sliding-mesh manual transmissions with up to three forward gear ratios. Since the 1950s, constant-mesh manual transmissions have become increasingly commonplace and the number of forward ratios has increased to 5-speed and 6-speed manual transmissions for current vehicles.
The alternative to a manual transmission is an automatic transmission; common types of automatic transmissions are the hydraulic automatic transmission (AT), and the continuously variable transmission (CVT), whereas the automated manual transmission (AMT) and dual-clutch transmission (DCT) are internally similar to a conventional manual transmission, but are shifted automatically.
Alternatively, there are semi-automatic transmissions, which facilitate manual gear selection while automating clutch operation. These systems are based on the design of a conventional manual transmission, with a gear shifter, and still require the driver's input to change gears; the clutch system, however, is completely automated. The mechanical linkage for the clutch pedal is replaced by an actuator, servo, or solenoid and sensors, which operate the clutch automatically when the driver touches or moves the gearshift, removing the need for a physical clutch pedal. | https://worddisk.com/wiki/Manual_transmission/
Sunday, August 7, 2016
Wisconsin - Milwaukee, Harley-Davidson Museum and Headquarters
Harley-Davidson
Milwaukee
Wisconsin
The Harley-Davidson Museum opened in July 2008 and displays a large collection of Motorcycles along with history about Harley-Davidson. The Iron Horse Hotel, a boutique hotel catering to motorcycle enthusiasts, is located one block south of the museum.
Photos by William H. Jacobs
2016
- - - - - - -
The Harley-Davidson Museum opened on July 12, 2008 and is a celebration of the more than 110-year history of Harley-Davidson motorcycles. The 130,000-square-foot (12,000 m2) three building complex on 20 acres (81,000 m2) contains more than 450 Harley-Davidson motorcycles and hundreds of thousands of artifacts from the Company's history.
- - -
Here is one more from Milwaukee...
Milwaukee
Wisconsin
Harley Davidson Motorcycle
With the corporate headquarters of Harley Davidson Motorcycles in Milwaukee, it is not surprising to see a number of these cycles cruising the streets.
| |
by Brendan Rowland, Student Public Relations Writer
Jim Mellick’s artistic expression is far more than elaborate wooden sculptures of canines with prosthetic limbs; each of his Wounded Warrior Dogs allegorizes the plight of wounded veterans by depicting man’s best friend.
His exhibit “Wounded Warrior Dogs and Other Parables” will open in Cedarville University’s Stevens Student Center art gallery on October 6 and will culminate in a closing reception the evening of Veterans Day, November 11. Mellick will attend the closing reception to talk through the stories his art represents with the veterans whose courage and sacrifice he celebrates.
After winning the $200,000 Grand Prize by popular vote at Art Prize 2016 in Grand Rapids, Michigan, the Wounded Warrior Dogs have toured the country while being exhibited in most of the major military museums in the eastern United States.
Mellick, recipient of Cedarville University’s faculty scholarship award in 2012-2013, retired from teaching studio art at Cedarville in 2014. Since his retirement, Mellick has devoted his time and energy to his Wounded Warrior Dogs.
Though Mellick has been carving wood sculptures since 1976, and canine allegories since 1985, the Wounded Warrior Dogs sculptures are a recent project.
“In 2015, I was moved by the images of veterans returning from Iraq and then Afghanistan with amputations from the new warfare of roadside bombs and IEDs,” said Mellick.
Mellick is not a veteran, but he is a patriot.
The sculptures are allegories for their human counterparts, so each dog in the series bears an injury – a missing leg, or a prosthesis – that appears more human than canine. Each dog’s collar displays a service ribbon representing its respective war.
Mellick’s pieces require 160-250 hours to sculpt. He employs six types of wood for different breeds, representing different wars from World War II to the global war on terror. The dogs are all life-sized or larger.
His latest series of sculptures, “K9 War Stories,” honors the sacrifice of certain K9 teams, representing the bond between dog and handler by depicting the injured or fallen dog as closely as possible, set in front of wall text documenting the heroic event.
“Giving form to these stories had a healing effect on the surviving handlers and was a blessing to the surviving families,” Mellick said. “I try to meet family members at museums where the story of their sacrifice is being told.”
Mellick’s exhibit has not been seen in Ohio since its successful 4-month display at the National Museum of the United States Air Force at the end of 2019.
Mellick is thrilled to be able to bring his art back to his former employer due to renovations that added an art gallery within the Stevens Student Center.
Prof. Aaron Huffman, chair of the department of art, design and theatre, is also excited about this addition. “We can now bring in professional caliber work that can be experienced by the campus body and the surrounding community,” he said.
Huffman is proud to host his friend and former colleague’s work. “Jim’s personality, passion for art, sense of humor, deep thought and excellent craftsmanship have all been missed – and we can't wait to have him back on campus in person!”
Cedarville’s exhibit will be the last chance to view the Wounded Warrior Dogs locally, as Florida’s new Marco Island Art Museum has purchased the sculptures; the collection will become the museum’s permanent central exhibit beginning March 2023.
His exhibit at Cedarville will also feature award-winning installations and sculptures spanning his career back to the 1980s. These other works provide a colorful context to the development of his headlining Wounded Warrior Dogs.
Mellick is grateful for the response his work has evoked. “The tears, the hugs and handshakes and the breaking of attendance records have shown that I’ve reached people profoundly,” he concluded.
“I’ve realized that the Wounded Warrior Dogs are more than a project – they are a mission.”
Located in southwest Ohio, Cedarville University is an accredited, Christ-centered, Baptist institution with an enrollment of 5,082 undergraduate, graduate, and online students in more than 150 areas of study. Founded in 1887, Cedarville is one of the largest private universities in Ohio, recognized nationally for its authentic Christian community, rigorous academic programs, including its Bachelor of Arts in Studio Art, strong graduation and retention rates, accredited professional and health science offerings, and high student engagement ranking. For more information about the University, visit cedarville.edu. | https://www.cedarville.edu/news/2022/wounded-warrior-dogs-represent-veterans |
In recent years, the Financial Crimes Enforcement Network (“FinCEN”) and federal regulators of the financial services industry have more aggressively enforced the Bank Secrecy Act (“BSA”) and the economic sanctions imposed by the US Treasury’s Office of Foreign Assets Control (“OFAC”). While this should in and of itself be a matter of particular attention to the directors and officers of those entities in the financial services industry, so too should the recent trend toward increased scrutiny of directors and officers who fail to address alleged BSA or OFAC compliance shortfalls. An August 2014 agreement reached by FinCEN and a former casino official permanently barring the official from working in any financial institution drives the point home: When it comes to liability for BSA or OFAC violations, FinCEN and federal regulators might not limit penalties to the entity actually committing violations, and may instead also penalize the individual directors and officers of those entities.
Even before FinCEN’s August 2014 bar of the casino official, a number of enforcement actions assessed personal monetary penalties against financial institution directors and officers over the past few years. In February 2009, the directors of Sykesville Federal Savings Association were collectively fined $10,500 in non-reimbursable civil money penalties for multiple violations of a consent order to cease and desist. In January 2013, the Office of the Comptroller of the Currency (the “OCC”) levied civil money penalties against five directors and officers of Security Bank for up to $20,000 per person in connection with violations including failure to ensure an effective BSA compliance and suspicious activity reporting (“SAR”) system. In September 2013, the Justice Department charged the CEO of Public Savings Bank with criminal failure to file a SAR and maintain adequate anti-money laundering controls in connection with an $86,400 wire transfer of suspected drug money.
And while most directors and officers are often covered by D&O liability insurance, the Federal Deposit Insurance Corporation (“FDIC”) has taken an increasingly strong position that a financial institution’s insurance policies may not indemnify directors and officers for civil money penalties. In 2011, the FDIC cited several financial institutions for D&O liability insurance policies that covered civil money penalties, and in October 2013 the FDIC published a Financial Institution Letter explicitly prohibiting insured depository institutions or their holding companies from purchasing insurance policies that would indemnify institution-affiliated parties against civil money penalties.
The directors and officers of financial industry participants are ultimately responsible for ensuring that their entities maintain effective BSA/OFAC compliance programs, which must be approved by the board of directors and noted in the board minutes. FinCEN and the regulators’ increasingly aggressive enforcement tactics are aimed at forcing these executives and directors to prioritize compliance, thereby providing more support to compliance officers. But FinCEN and the regulators should tread carefully, as the approach could have some negative, unintended consequences. For example, qualified personnel might avoid compliance, director, or officer positions at financial institutions due to the risk of personal liability, especially given the prohibition on institution-provided D&O civil money penalty insurance coverage. Also, financial institutions might respond by “de-risking” their activities, terminating or eliminating financial relationships with complete groups of customers or lines of business considered high risk under BSA or OFAC standards. As a result, FinCEN and the regulators should take a balanced approach to enforcement in the context of personal liability, focusing on knowing or willful, or major and systemic violations, as opposed to honest mistakes, errors in judgment, or minor compliance failures. | https://www.thesecuritiesedge.com/2014/12/directors-and-officers-beware-you-could-be-individually-liable-for-your-entitys-bank-secrecy-act-or-office-of-foreign-assets-control-violations/
Psoriasis is a disease whose main symptom is gray or silvery flaky patches on the skin which are red and inflamed underneath. In the United States, it affects 2 to 2.6 percent of the population, or between 5.8 and 7.5 million people. But can it be stopped?
By Amanda Gardner HealthDay Reporter
(HealthDay is the new name for HealthScoutNews.)
(HealthDayNews) -- For the nearly 5 million Americans who suffer from psoriasis, life can be a painful odyssey of pills, creams and even light therapy. Worse yet, the condition can spread beyond large swaths of skin to the joints, leading to debilitating arthritis.
To get the word out about this potentially disabling disorder, the National Psoriasis Foundation has designated August as Psoriasis Awareness Month.
Fortunately, knowledge about the condition is increasing and, with it, the stock of available remedies. The most promising are new biologic treatments that work with few side effects, experts say.
"I'd say [psoriasis] is very treatable. It's just not curable," says Dr. Ted Daly, director of paediatric dermatology at Nassau University Medical Centre in East Meadow, N.Y.
Psoriasis is immune-mediated, meaning that abnormal immune system responses are somehow involved.
"There's no question that the immune system plays a role in the development of the disease," says Dr. Mark Lebwohl, professor and chairman of the department of dermatology at the Mount Sinai School of Medicine in New York City and president of the medical board of the psoriasis Foundation.
Other than that, no one is sure what causes the disease, although there does seem to be a genetic component. "We have not identified the psoriasis gene, but [the disease] seems to be a combination of genes and external factors," Lebwohl says.
While the precise causes are unclear, some triggers have been identified, including strep throat, cold weather, being out of the sun and even the drug lithium, which is commonly prescribed for bipolar disorder.
What is certain, however, is the suffering it can cause. Psoriasis manifests as an uncomfortable, itchy thickening of the skin, with red patches and silvery scales. These abnormal patches are really areas of extra skin cells. Inside the body, defective immune systems trigger a series of events that lead to the skin's outer layer growing at a much faster rate than normal.
"Instead of being a month, the turnover of cells happens within a week or even less," Daly explains. There's not enough time for the dead cells to slough off, so these build-ups occur. The red comes from the excess blood supply needed by the rapidly growing cells.
For some people, the rash is confined to a small part of the body, such as elbows, knees or scalp. Others aren't so lucky. The scaly area can spread to cover a much greater area. "It can go from a patch or two to 100 percent of the body," Daly says.
And the disease can strike at any age. "Just because you don't have it now doesn't mean you won't get it in the future," Lebwohl explains. "It can come as early as birth and as late as 100 years of age."
Luckily, recent treatment advances are making life easier for many sufferers.
Most exciting is the development of biologic treatments. "These drugs target specific receptors on molecules or specific chemicals without affecting the entire immune system," Lebwohl says. That means they have far fewer side effects than conventional treatments.
The U.S. Food and Drug Administration (FDA) approved the first biologic treatment for psoriasis in March: alefacept (brand name Amevive). A study published in the June issue of the Archives of Dermatology found that people taking 15 milligrams of the drug had a 75 percent reduction in their Psoriasis Area and Severity Index (PASI), a measure of the severity of the condition. Alefacept stops the overproduction of skin cells by destroying the defective immune cells that are responsible for the abnormality.
"That's one of at least five new agents that are in development for psoriasis and probably more will be coming after that and probably we haven't even seen the best of them," Lebwohl says. "As time goes on, we're going to see better and better molecules."
Enbrel, or etanercept, another biologic agent, was approved for psoriatic arthritis in January 2002 and is currently being considered by the FDA for moderate to severe psoriasis.
Many patients still rely on the traditional arsenal of treatments, many of which have been around for decades. "Those treatments are still very useful," Lebwohl says.
Some, such as cyclosporine, may be more effective than the biologics, but can entail severe side effects. Cyclosporine can damage the kidneys, and methotrexate, a chemotherapy drug, can cause liver damage. Both of these drugs are for more severe forms of the disease, doctors say.
A variety of treatments are available for milder versions of psoriasis, including creams you put directly on your skin (for instance, steroid creams and topical vitamin D). People with larger affected areas might benefit from light therapy, or even a combination of this and topical creams.
| https://beatpsoriasis.com/stop-psoriasis.htm
All pupils at Moreton School are now studying the New Primary Curriculum (2014).
At Moreton, our curriculum is designed to meet the needs of the child and this extends to learning and enrichment beyond the classroom walls. Additional activities are offered by the school such as lunchtime and after school clubs, educational visits and special events during the school year.
We aim to deliver a balanced and broad-based curriculum which prepares children for the 21st Century as well as retaining the best of the accumulated knowledge of history.
Our curriculum is both skills and knowledge based. It promotes cross curricular links; it is relevant to the local environment and the national and international context which the modern world provides. The curriculum is developmental and ever changing to meet the needs of children.
Certain parts of the curriculum are known as core subjects. These subjects are English, Science and Mathematics. Religious Education is also a core subject. English and Mathematics learning takes place every day in every class in the school. Science is taught in each class, twice each week. As a Voluntary Aided Church School, the child's spiritual development is at the heart of learning.
Some skills are subject specific for example, map reading, but cross- curricular links are made so that children make sense of their learning and it is always purposeful. Some of the ‘technical’ parts of English (Handwriting, Spelling, Grammar, Individual and Guided Reading, including Phonics, where we use Read Write Inc as a phonics scheme throughout EYFS and KS1, with its linked spelling scheme throughout KS2) have their own specific lessons per week to ensure effective learning in small groups.
Foundation subjects are linked to the topic when relevant. The topics can be history, geography, science, PSCHE, PE, music or art-led. This encourages children to transfer the skills and knowledge they learn to different situations. The topics are designed to fulfil the requirements of the new National Curriculum and the themes are based on a two year rolling programme for each Key Stage phase. Therefore, each child will experience every topic in the school once. Computing is an important part of the curriculum in its own right and ICT also supports learning across the whole curriculum.
Each term Moreton welcomes visitors to the school and visits out of school to deliver some aspects of the curriculum and to enhance other areas, e.g. sport, music, dance and drama.
At Moreton we also encourage children to go beyond our curriculum. To this end, we offer: cycling proficiency; cookery; debating; community assistance; all helping develop the skills of problem solving, communication, motivation, creative thinking and reasoning as well as teaching the skills and knowledge specified in the National Curriculum.
We believe it is very important for parents/carers to understand what and how their children are being taught. We communicate with parents in the following ways:
- Meet the teacher / parents’ evenings in the Autumn and Spring Terms
- End of Year Reports
- Termly curriculum overview
- Newsletters and regular updates on events
- Class and curriculum pages on the school website
- Parents Forum
- Email and texting service
Please contact the school office if you would like more information on any aspect of our curriculum. | https://www.moretonceprimaryschool.co.uk/our-curriculum |
Creating A Great Customer Experience is essential if you want to retain customers for the long term, reduce staff turnover, and build an effective customer-centric business. But how can you start training your representatives to give exceptional support?
That is why we have designed this training course as a complete and practical program, combining theory with case studies and team activities.
The Objective:
Module 1: The depth of experience mapping
- Building a team
- Internal Investigation
- Assumption Formulation
- External Research
Module 2: Building Empathy Maps
- Spend some time discussing what each of the different quadrants corresponds to: seeing, hearing, saying, and thinking.
- In order to reinforce the idea of using real data to build out the maps, have one person in the pair be an interviewer and the other person be the interviewee.
- Provide interviewers with a relevant experience to interview their partner about. A good generic topic for different groups is asking about the last major purchase the interviewee made.
- Interviewers should use the empathy map as a notetaking tool.
Module 3: Building Customer Journey Maps
- Spend time discussing the sentiment and sequential characteristics of the customer journey map.
- In order to reinforce further the idea of using real data and practice interviewing, have the interviewer and interviewee switch roles.
- Provide interviewers with a relevant and recent experience to interview their partner about. Try to gauge your audience to make sure the topic is recent for the majority of the people in the room. Interviewees do less recollecting and generalizing about an experience if it is fresh in their mind.
- Interviewers should use the customer journey map template as a notetaking tool.
Module 4: Brainstorming Opportunity Areas
- Spend time discussing how to derive insights from the peaks and valleys of a customer journey map. What parts of the high points along the experience can be applied to improve the low parts?
- Teams should use the opportunity area worksheet to translate problem areas into design opportunities.
- Always timebox the activity between 5–10 minutes.
- Wrap up the activity by asking pairs what they found helpful about the tool and what they found the least helpful. Sometimes the insights gained from this discussion can be beneficial to your mapping practice.
Module 5: Bringing it all together
- A great way to incorporate research findings and opportunity areas into a product’s overall strategy and design is through collaborating with a product owner or manager to build a product roadmap with the customer in mind.
- Themes should be based on an organization’s overall business strategies. I like to find a list of a company’s strategic goals to help colleagues in the workshop begin to see how they can incorporate this type of roadmap into their processes. These answer the ‘why are we doing this?’ question across the company.
- Strategic features should be actionable items that are less detailed than a task but more detailed than an overall initiative. They should be built from opportunity areas identified in the generative research phase of a project.
- Trello is a great, free, online tool to build out this type of roadmap. I also recommend adding user stories to the different feature cards to provide specifics. Cards in the pipeline don’t need specifics, but cards in the now column should always have a user story attached to narrow the scope of the feature. | https://valdus.net/courses/creating-a-great-customer-experience/ |
The UK housing market slowed again in August with the biggest monthly price fall since July 2012, according to Nationwide’s latest house price index, as experts warned a no-deal Brexit could hit London prices “like a sledgehammer”.
The average price was £214,745, down 0.5 per cent from £217,010 the previous month, while annual growth slowed to 2 per cent from 2.5 per cent.
Prices are still expected to rise by around 1 per cent this year, said Nationwide’s chief economist Robert Gardner.
He added: “Looking further ahead, much will depend on how broader economic conditions evolve, especially in the labour market, but also with respect to interest rates.
“Subdued economic activity and ongoing pressure on household budgets is likely to continue to exert a modest drag on house price growth and market activity this year, though borrowing costs are likely to remain low.”
Meanwhile, Nationwide’s index showed help to buy accounted for around 8 per cent of mortgages in England in the year to March 2018, an increase of 21 per cent compared to the same period last year.
Mr Gardner said it was “unclear how much help-to-buy activity represents additional demand and how much has simply replaced activity that would already have taken place”.
“The scheme has, however, been a key source of demand for newly built homes in recent years,” he said.
Jonathan Samuels, chief executive of property lender Octane Capital, said: “You can’t help but think the government is boxing itself into a corner on help to buy.
“Help to buy is certainly enabling many more people to get onto the property ladder but serious question marks remain over the longer term impact of what is an artificial stimulus.”
Mr Samuels said it was not surprising that prices fell in August, which is traditionally a quiet month for the property market.
However, he warned that a no-deal Brexit could prompt a severe shift in prices.
“There is a blanket of uncertainty covering the UK property market at present,” he said.
“While the employment market remains strong, stubbornly high inflation, the potential for another rate rise, overstretched household finances and the growing possibility of a no-deal Brexit are seeding serious doubt in the minds of prospective buyers.
“A Brexit no-deal could hit prices in the capital, especially at the higher end, like a sledgehammer.”
Lucy Pendleton, founder and director of estate agents James Pendleton, said Nationwide’s prediction of 1 per cent growth in the market this year would mean prices finish on just over £213,000, a level not seen since April and only around £1,500 lower than current prices.
| |
We’re hiring: Communication & policy traineeship to start in September for 6 months
Based in the heart of Brussels, close to the EU-institutions, FEAD offers you the opportunity to work with a European trade association representing the private waste management industry. As a team member of FEAD, you will be in the frontline of our organisation, taking care of day-to-day communications to enhance our visibility and outreach, ensuring that FEAD’s policy proposals and campaign messages reach the right eyes and ears to make a difference. You will also assist FEAD in its day-to-day activities, attend meetings with members and EU policymakers, and research key EU policies.
The selected candidate will join a young, international, small, and dynamic team advocating for the European waste management sector.
Tasks
Communication :
- Manage all external-facing communications channels (website and social media)
- Draft and give support to publications, including press releases, newsletters, and social media posts
- Organise events, both online and in-person
- Offer advice on FEAD’s content and communication outreach and engagement strategy
- Show initiative and propose FEAD’s communications activities
Policy:
- Conduct research and policy analysis to support and strengthen our advocacy on issues related to circular economy, legal and market issues, recycling, organic recovery, REACH, energy recovery, landfill, and hazardous waste
- Monitor the relevant EU institutions in areas such as Environment, Industry, Transport, Budget, and Trade
- Assist with the coordination of FEAD’s Committee meetings
- Assist the team with other ad hoc tasks and work closely in a supporting role
Requirements:
- Communication and/or policy background (European affairs or environmental sciences)
- Comfortable with networking, with good social skills and an ability to navigate intercultural settings
- Strong interest in environmental issues and waste management policies
- Experience in communications and/or working with the EU institutions and/or public authorities, good knowledge of the EU institutions and procedures are a strong plus
- Excellent English is a must; French, German, or any other language is considered an asset
- Excellent oral and written skills
- Excellent organisational skills
- Fully computer literate and knowledge of graphic design would be a plus
- Ability to take initiative and adapt to a small and highly motivated team
The ideal candidate would be able to join FEAD in September, with interviews taking place on a rolling basis until the position has been filled. Due to the expected high number of applications, only short-listed candidates matching the above profile will be contacted for interviews. Our traineeships are remunerated in accordance with Belgian law (in addition to private hospital insurance, daily lunch vouchers, and public transport).
To apply, please send us your CV and cover letter to: [email protected]. Please state ‘Communications traineeship application’ in the subject line and indicate your name clearly on all attachments. | https://fead.be/were-hiring-communication-policy-traineeship-to-start-in-september-for-6-months/ |
General Motors Canada
About GM
There’s never been a more exciting time to work for General Motors.
To achieve our vision of a world with Zero Crashes, Zero Emissions and Zero Congestion, we need people to join us who are passionate about creating safer, better and more sustainable ways for people to get around.
This bold vision won’t happen overnight, but just as we transformed how the world moved in the last century, we are committed to transforming how we move today and in the future.
Why Work for Us
Our culture is focused on building inclusive teams, where differences and unique perspectives are embraced so you can contribute to your fullest potential as you pursue your career.
Our locations feature a variety of work environments, including open work spaces and virtual connection platforms to inspire productivity and flexible collaboration.
And we are proud to support our employees’ volunteer interests, and make it a priority to join together in efforts that give back to our communities.
Job Description
Key Responsibilities:
- Ownership of feature/capability strategy by defining the technology and enablers roadmap and cross-collaborating with multiple teams to ensure feasibility and viability.
- Single point of contact for feature execution, working with the various program teams, leadership, and engineering teams for a successful and robust launch.
- Providing regular and key updates to leadership and team leaders to ensure synchronization and smooth flow of information.
- Lead other system capability engineers by providing direction in all aspects of systems development such as requirements definition, analysis, and development of their applicable features/systems/capabilities.
- Continuously work towards improving efficiency in the capability domain group by improving processes and quality and mentoring the team members.
- Lead/participate in software peer reviews to ensure compliance to requirements.
- Provide technical leadership for advanced technology development.
- Collaborate with other System Engineers and Subject Matter Experts to define and negotiate key functional and performance requirements.
- Root cause all applicable system related issues during testing and validation.
- Participate in Pre-Production Builds (IV Builds & CTF) at PPO and provide feedback.
- Communicate information to both internal and external stakeholders.
- Ensure deliverables are complete as per milestones with excellence.
- Responsible for compliance with GM processes and safety procedures.
- Represent the team globally as a key stakeholder to the product.
- Support and/or lead activities for customer outreach and education in these technologies and their respective use cases.
- Talent development within the team and the Systems Engineering group.
Additional Job Description
Required Skills:
- Deep knowledge of Systems Engineering principles and a proven track record of using them successfully
- Strong technical leadership skills and an ability to work in a highly collaborative environment with colleagues local and at remote sites
- High level of oral and written communication skills
- Very strong project management skills
- 8 or more years of experience with automotive or similar electronic modules and systems engineering
- Demonstrated proficiency in writing and comprehending software and hardware requirements and interfaces
- Experience with requirement management tools such as DOORS, DNG, JAMA, etc.
- Ability to read and interpret engineering drawings and specifications
- Comfortable with ambiguity and has a passion to shape the future
- Cumulative travel requirements of 2 to 6 weeks a year are typical, primarily to Michigan and other locations in the US
- Creative, disciplined, with a strong sense of responsibility, delivery, and schedule commitment
- Understanding of major automotive vehicle systems such as Advanced Driver-Assistance (ADAS), Motion Control, Automotive Network, and Powertrain
- Experience in performing system- and component-level DFMEAs
- DFSS Blackbelt certification
- Design/development of safety-critical systems/components (ISO 26262)
- Experience in automotive product release and specification process
Ability to read and interpret engineering drawings and specifications Comfortable with ambiguity and has a passion to shape the future Cumulative travel requirements of 2 to 6 weeks a year are typical, primarily to Michigan and other locations in the US Creative, disciplined, strong sense of responsibility, delivery and schedule commitment Understanding of major automotive vehicle systems such as Advanced Driver-Assistance (ADAS), Motion Control, Automotive Network, and Powertrain Experience in performing system and component level DFMEAs DFSS Blackbelt certification Design/Development of safety critical systems/components (ISO26262) Experience in automotive product release and specification process Education and Training: Master’s Degree in Engineering Advanced degrees preferred Diversity Information General Motors is committed to being a workplace that is not only free of discrimination, but one that genuinely fosters inclusion and belonging.
We strongly believe that workforce diversity creates an environment in which our employees can thrive and develop better products for our customers.
We understand and embrace the variety through which people gain experiences whether through professional, personal, educational, or volunteer opportunities.
We encourage interested candidates to review the key responsibilities and qualifications and apply for any positions that match your skills and capabilities.
Equal Employment Opportunity Statement
Accommodation is available for applicants with disabilities.
Should you be contacted by General Motors of Canada, please advise if you require accommodation.
General Motors of Canada values diversity and is an equal opportunity employer. | https://www.teachingcareer.ca/teaching-education/feature-execution-manager-lead-system-capability-engineer-automated-driving-c5eb60/ |
Carrollton’s Leisure Connections recreation magazine is now online. The magazine is normally printed and available at Rosemeade Recreation Center, Crosby Recreation Center, Josey Ranch Lake Library, Hebron & Josey Library, the Carrollton Senior Center, City Hall, and other City facilities, but due to COVID-19 Stay-at-Home restrictions issues are only available digitally on the City's website. The September-December 2020 issue includes details regarding City events, activities, and programs during those months. To view the magazine online, visit cityofcarrollton.com/leisuremag.
City-sponsored activities at recreation centers, Libraries, and other leisure facilities include community events, athletic league opportunities, fitness classes, courses for adults and children, and gymnastics classes. Registration for residents began on Monday, August 3; for non-residents, beginning Monday, August 17. Classes start Monday, August 24. | https://www.cityofcarrollton.com/Home/Components/News/News/3938/27?backlist=%2Fdepartments%2Fdepartments-g-p%2Flibrary |
Cranston, Rhode Island has one of the strongest economies in the state and the country. Just 6.0% of Cranston residents live in poverty, less than half of Rhode Island’s 12.8% poverty rate and well below the 14.0% U.S. rate.
Cranston is safer than its neighboring cities. The city’s violent crime rate of 153.0 per 100,000 people is much lower than the rate across Rhode Island of 239.0 incidents per 100,000 people. There were also 1,862 property crimes in Cranston per 100,000 people, compared to 2,451 property crimes per 100,000 people nationwide. | http://learningbrooke.com/news/cranston-named-best-city-in-ri/ |
Featured Article:
Has New Zealand Identified the Causes of Crime?
Completing the Journey to The 5 Drivers of Crime
The term ‘drivers of crime’ was clearly defined by an unnamed Ministry of Justice official in a December 2009 policy paper as "the underlying causes of offending and victimisation". However, that paper itself conflated causation and correlation, by also referring to circumstances merely “associated with” offending (MOJ, 2009a, pp. 2-3). Despite the apparent clarity of definition, the transformative process by which correlation was seemingly replaced with a causative explanation remained uncertain.
A 2010 Cabinet paper referred to the ‘priority areas’ as “risk factors” having a "demonstrated link to" or "direct influence on" crime (Cabinet, 2010, pp. 1, 2), indicating correlation, and consistent with the literature noted above. However, the paper also introduced "initiatives [that] address the drivers of crime" as “working on the causes of crime" (Cabinet, 2010, pp. 1, 2, emphasis added). The same inconsistency appeared in the context of performance measures. Two of the (then) four priority areas (low-level offending, and alcohol and drugs) were referred to as only influencing or contributing to crime. However, the remaining factors (youth conduct and behaviour, and maternity and early parenting) were more assertively claimed as drivers of crime (Cabinet, 2010, p. 2), a distinction apparently reinforcing the causal meaning ascribed to the term 'drivers'. A later report co-presented by the Ministers of Justice and Police also characterised the 'drivers of crime' and related programs as "[addressing] the causes of offending" (MOJ, 2011, p. 15, emphasis added).
It was in this inconsistent context that the 'strategic change portfolio' known as Policing Excellence was mentioned at Cabinet, with stated aims "to prevent crime, provide better outcomes for victims,.. and reduce the growth rate of the criminal justice system" (Cabinet, 2010, p. 3). NZ Police quickly took up the challenge, reflecting a government "commitment to a police force that is well-trained, well-resourced and has the legislative authority to tackle the key drivers of crime" (NZ Police, 2009b, p. 4). A comprehensive Policing Excellence change program commenced in late 2010 was joined a year later by a complementary Prevention First operating strategy (NZ Police, 2012a, 2013a), both supplemented the following year with a series of crime rate reduction targets set under the government's Better Public Services program (Cabinet, 2012a).
Notwithstanding the Minister of Justice publicly describing “the drivers of crime strategy” as addressing “the underlying causes of crime” (Power, 2011), there remained some inconsistency how Police expressed the (now) five factors' connections with crime, ie whether by causation or correlation. Alcohol, for example, was noted only as "a factor in most incidents", and the 'drivers of crime' were sometimes referred to simply as "drivers of demand on the criminal justice system" (NZ Police, 2011, pp. 8, 16, emphasis added). NZ Police's 2012 Statement of Intent described the Addressing the Drivers of Crime program as targeting "social factors that contribute to offending" (NZ Police, 2012b, p. 15, note 5, emphasis added). Police failed to mention the program in its 2013 Statement of Intent, referring only to police involvement with an "all-of-government response to organised crime, Youth Crime Action Plan, Children's Action Plan, social sector trials and other Police initiatives". Several of these initiatives, however, were described as "addressing the causes of crime" (NZ Police, 2013b, pp. 13, 15).
Notwithstanding such inconsistencies, the five drivers’ transformation from correlation to causation appeared complete by 2014. The Police Statement of Intent that year described crime prevention activities "targeting its action to the drivers of crime: youth, alcohol, organised crime, dysfunctional families, and high-risk driving behaviours", described as "the underlying causes of offending and victimisation" (NZ Police, 2014c, pp. 19, 34). Similar statements in a series of high-level documents (including the 2014 Statement of Intent, 2014 Annual Report, and 2015 Four Year Plan 2015-2019) reinforced the purported causal connection initially defined in the Ministry of Justice policy paper noted above (MOJ, 2009a).
The 'drivers of crime' phraseology is confused. That is part of the problem. By at least 2014, however, its enumerated factors were repeatedly portrayed as representing the underlying causes of offending. The following section constructively critiques the causal connection asserted by The 5 Drivers of Crime.
The Causality Deficit
This article does not suggest that the constituent elements of The 5 Drivers of Crime are not relevant, significant and meaningful in understanding the criminal environment. They are (Brown, Esbensen, & Geis, 2010; Hagan, 2011). Nor does it contend that well-implemented policies and activities based on those factors cannot generate positive crime prevention results, or that none of the factors have any causal effect. One such possibility is illustrated below. This section simply contends that The 5 Drivers of Crime as currently expressed does not reflect the underlying causes of offending as it purports, nor does it present a unified theory of criminal causation as its title alluringly suggests.
Enumerating issues relevant to his generation, many of which still resonate today (as evident by 'the 5 drivers of crime' itself), Cantor illustrated the critical importance of distinguishing between correlation and causation:
By way of illustration, selecting one of the nominated 'drivers of crime', youth offenders are disproportionately represented in crime statistics (St Thomas of Canterbury College, 2015). ‘Youth’ is a well-recognised risk factor, and young adults are more likely to be involved in criminal offending than other age groups. However, being 18 years old is neither causative nor indicative of criminality. Many young people commit crime. A great many others do not. Policymakers may designate youth offending a priority area for policy development, as New Zealand did in 1992/1993 and 2009/2010, and well-implemented policies may have a positive crime prevention impact. The research conducted for New Zealand’s earlier national crime prevention initiative also indicated that predictive ‘risk factors’ such as age, gender and dysfunctional family relationships contribute to the likelihood of offending, at least in the presence of social, economic and environmental ‘triggers’, but “these factors are not causes of crime” (Crime Prevention Action Group, 1992, pp. 4, 29, emphasis in original).
Illicit drugs offer a more nuanced example from the 'drivers of crime' list. A drug addict described as "a desperate man in need of help" may commit a string of burglaries to pay for his next purchase (eg Hutt News, 2015), suggesting a causative glimmer of truth in The 5 Drivers of Crime. But what does it tell us about higher order criminality? What about the dealer who supplied the drugs? The organised crime group ‘cook’ who manufactured them? The trafficker who imported the ingredients? Or local and overseas representatives of the cartels responsible for exporting vast quantities of illicit drugs? Although listed as a ‘driver of crime’, ‘drugs’ likely has little bearing on the underlying causes of serious offending committed by drug dealers, drug traffickers, and the organised crime groups associated with its trade. Illicit drugs is the product they choose to manufacture, distribute and sell. It is not what causes them to do so. Thus, even when aspects of constituent elements of The 5 Drivers of Crime carry some causal resonance, such as theft and burglary to support drug use, the seemingly self-contained ‘drivers of crime' construct remains largely silent in explaining the underlying causes of a range of higher-order serious criminal activities that create significant economic and social harm.
Moreover, although 'drugs' is presented as an underlying cause of offending, in practice NZ Police has long recognised a more nuanced bifurcated reality. Even before The 5 Drivers of Crime was introduced into operational policing, the Police Commissioner distinguished between "burglaries committed by drug users [and] drug dealing carried out by organised criminal groups" (NZ Police, 2009a, p. 2). After a recent $800,000 methamphetamine seizure, Detective Sergeant McNeill also succinctly differentiated crime caused by drug users themselves ("the ripple effect it causes, the burglaries, the car theft, aggravated robbery") from likely causal factors motivating the offending of drug suppliers: "we don't believe the alleged offenders here are actual users of the drug, this is purely a financial transaction" (Stuff, 2015).
Police appear also to have added 'organised crime' (and 'road policing', and sometimes 'gangs') to four priority areas arrived at by a ‘drivers of crime’ committee process sanctioned by Ministers, but much like the fact of being a youth, the existence of an organised crime group is of itself not a ‘driver’ of crime in a causative sense. It may be said that organised crime groups are responsible for much serious crime. Of course they are. It is what they do. By definition. The suggestion that organised criminal groups cause crime is, however, so irredeemably tautological as to be useless in a criminological context. Identifying a section of society likely to commit crime gets us no closer to understanding "the underlying causes of offending" apparently intended by the term 'drivers of crime'.
A final hypothetical example that traverses all five segments of the ‘drivers of crime’ quintet illustrates the causality deficit of the definition ascribed to The 5 Drivers of Crime. Whether youth members of an organised crime group responsible for distributing illicit drugs have dysfunctional family backgrounds, consume drugs and alcohol irresponsibly, and drive much too fast, is unlikely to have much bearing on the underlying causes of their offending.
These examples suggest that The 5 Drivers of Crime also misses important elements that may arguably be described as causative, one of which is discussed in the following section.
Profit Motive Omission Reveals Gaps in ‘The 5 Drivers’
Money is often described as a key motivator of “rational, calculating crimes” (Coleman, 1992) spanning a broad expanse of serious offending such as trafficking in arms, drugs and people, corruption, fraud and extortion (Brown et al., 2010; Hagan, 2011). Money laundering is also an “inevitable accessory” (Fréchette, 2000) to all large-scale acquisitive offending, “through which to preserve illicitly gained funds while at the same time incentivising the overall profitability of crime” (Gilmour & Ridley, 2015, p. 293).
Money as “the foremost reason” (Stankiewicz, 2015) for engaging in unlawful activity may be drawn from across the spectrum of criminological discourse. For example:
Studies on the impact of proceeds of crime enforcement also recognise that the nature of organised crime includes "profit [as] the primary motive of such businesses" (McFadden, O'Flaherty, Boreham, & Haynes, 2014, p. 4). Australasian policing guidelines refer to "underlying social and economic causes of crime" (Australian Institute of Criminology, 2012, p. 4, emphasis added), and a recent report on the cost of serious and organised crime records the “relentless pursuit of illicit profit” at the core of offending (ACC, 2015, p. 2). Material gain is fundamental to some of the world's most serious criminal activities, as reflected in the Palermo Convention definition of "organised criminal group":
Similarly in New Zealand, "organised criminal group" includes "3 or more people who have as [an] objective ... obtaining material benefits from the commission of [serious] offences" ("Crimes Act," 1961, s98A(2)). Countless news reports reflect the experience of law enforcement agencies worldwide, such as Europol observing criminals switching between activities: "very lucrative" profits see "criminals who would normally deal with drugs,...[or] other forms of crime [use the] opportunity [to make] criminal profits out of the migrant crisis" (Ganley, 2015).
Money as a possible causal ‘driver’ of crime, and the harmful effects of profit-motivated offending, were also regarded as "key messages" by at least one working group in the April 2009 Drivers of Crime Ministerial Meeting. Participants expressed a need for "a wider definition" of criminal drivers in the policymaking process to “include both violent crime as well as the crime that occurs more secretly (eg white-collar crime that is equally damaging to society and families)" (MOJ, 2009b, p. 43). As the indispensable companion of white-collar crime and many other serious offences, money laundering is arguably the world's apex profit-motivated crime. It supports, enables and perpetuates profit-motivated offending, and “sustains every criminal activity engaged in for profit, which is to say all crime but crimes of passion or vengeance" (Noble, 1993, p. 3). However, the profit motive central to some of the most serious offending "equally damaging to society" failed to appear in the 'priority areas' that ultimately became The 5 Drivers of Crime at the core of police focus on "the underlying causes of offending and victimisation.” In practice, however, this gap is not lost on NZ Police, as outlined in the following section. The subsequent section then suggests why it matters if there is a gap between what is said and done.
| http://www.inquiriesjournal.com/articles/1349/2/has-new-zealand-identified-the-causes-of-crime |
The human connectome from an evolutionary perspective.
The connectome describes the comprehensive set of neuronal connections of a species' central nervous system. Identifying the network characteristics of the human macroscale connectome and comparing these features with connectomes of other species provides insight into the evolution of human brain connectivity and its role in brain function. Several network properties of the human connectome are conserved across species, with emerging evidence also indicating potential human-specific adaptations of connectome topology. This review describes the human macroscale structural and functional connectome, focusing on common themes of brain wiring in the animal kingdom and network adaptations that may underlie human brain function. Evidence is drawn from comparative studies across a wide range of animal species, and from research comparing human brain wiring with that of non-human primates. Approaching the human connectome from a comparative perspective paves the way for network-level insights into the evolution of human brain structure and function.
| |
In recent years, various vehicular navigation systems capable of informing the driver of the current location of the vehicle have been developed. One known navigation system of this type detects the distance traveled and the relative travel direction of the vehicle at prescribed time intervals, successively calculates the vector sum of these detection results, and displays the current location of the vehicle by means of an appropriate display device on the basis of the calculated result and the given initial vehicle location information.
This kind of navigation system requires detection of the relative direction of the vehicle as described above, and for this reason there is a well-known arrangement that detects the relative travel direction from the difference between the rotational angles of the left and right wheels which arises when the vehicle turns, by means of rotational sensors mounted on a pair of wheels.
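For illustration, the principle can be sketched as follows: the distance rolled by each wheel is derived from its sensor count, and the difference between the two distances, divided by the track width, approximates the change in heading. This is a minimal sketch only; the wheel radius, sensor resolution and track width used here are hypothetical parameters, not values taken from this disclosure.

import math

def heading_change(ticks_left, ticks_right, wheel_radius, ticks_per_rev, track_width):
    # Distance rolled by each wheel, derived from its rotational sensor count.
    circumference = 2.0 * math.pi * wheel_radius
    d_left = ticks_left / ticks_per_rev * circumference
    d_right = ticks_right / ticks_per_rev * circumference
    # During a turn the outer wheel travels farther; the difference divided
    # by the track width approximates the heading change in radians.
    return (d_right - d_left) / track_width

# Hypothetical example: two extra ticks on the right wheel over one sample.
print(math.degrees(heading_change(1000, 1002, 0.3, 480, 1.4)))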
Japanese Patent Application Public Disclosure No. Sho 62-298716, for example, discloses a method for detecting the relative travel direction of a vehicle, in which information concerning rotational angles of the left and right front wheels is obtained by means of rotational speed sensors mounted on left and right front wheels, information concerning the average distance traveled by the rear wheels is also obtained, and on the basis of this information, information concerning the relative travel direction is obtained free from the detection error owing to the steering mechanism.
However, this proposed method has a disadvantage: since at least one sensor is needed for detecting the average distance traveled by the rear wheels, in addition to the two speed sensors for detecting the wheel rotation angles of the two front wheels, the number of sensors required is large and the processing circuit for the signals generated by the sensors is complicated.
To overcome these disadvantages, an arrangement making use of the rotation of the cable of the already installed speedometer can be used. According to this method, although the number of sensors is not increased, the accuracy is inevitably unsatisfactory, so that it is difficult to detect the location of the vehicle with high accuracy.
It is an object of the present invention to provide an improved relative direction detecting method for a vehicle which is capable of detecting the relative travel direction of the vehicle with high accuracy only by detecting information concerning the rotational angles of the left and right front wheels.
| |
ARLINGTON, Va., Sept. 15, 2022 (GLOBE NEWSWIRE) — Two out of three U.S. employers (67%) plan to prioritize controlling rising healthcare benefit costs over the next three years. And with many employers expecting costs to rise steadily in the foreseeable future, they are pursuing several initiatives to manage costs and make benefits more affordable for employees. These are among the key findings in a new survey by leading global advisory, broking and solutions company WTW (NASDAQ: WTW).
The survey found U.S. employers project their healthcare costs will jump 6.0% next year compared with an average 5.0% increase they are experiencing this year. Most employers see little relief in sight, as seven in 10 (71%) expect moderate to significant increases over the next three years. Additionally, over half of respondents (54%) expect their costs will be over budget this year. On top of managing costs, 42% cite managing employee affordability as a top priority. To address a higher-cost environment, 52% will implement programs or switch to vendors that will reduce total costs; one in four (24%) will shift costs to employees through higher premium contributions.
“With no end in sight to projected cost increases, the need to manage healthcare costs and address employee affordability has never been greater,” said Courtney Stubblefield, Insights & Solutions leader, Health & Benefits, WTW. “Yet, with so many potential actions, employers must focus on changes that go beyond addressing their employees’ needs to also support efforts to attract and retain talent during a tight labor market.”
The survey of 455 U.S. employers revealed several actions employers implemented or used this year, or expect to pursue, to manage costs and enhance employee affordability. These include:
- Health plan budget boost: Two in 10 employers (20%) added dollars to their healthcare plan without reallocating funds from other benefits or pay. Another 30% expect to do so in the next two years.
- Defined contributions: Four in 10 employers (41%) reported using a defined contribution strategy with a fixed dollar amount provided to all employees that differs by employee tier. Another 11% are planning or considering doing so in the next two years.
- Evaluate employee contributions by income: The number of employers that examine employee health payroll contributions as a percent of total compensation or income as the basis for benefit design decisions is expected to more than double from 13% this year to 32% in the next two years.
- Contribution banding: More than a quarter (28%) structured payroll contributions to reduce costs for targeted groups, such as low-wage employees, or by job class. Another 13% are planning or considering doing so in the next two years.
- Low-deductible plan: Three out of 10 (32%) offered a plan with low member cost sharing (e.g., no more than a $500 deductible for a single preferred provider organization plan) this year; another 7% are planning or considering doing so in the next two years.
- Fraud, waste and abuse: A quarter of respondents (27%) used programs to combat fraud, waste and abuse. Another 22% expect to do so by 2024.
- Out-of-pocket costs: Nearly a quarter (23%) implemented higher out-of-pocket costs for use of less efficient services or site of service, such as use of non-preferred labs, high-cost facilities for imaging or mandated centers of excellence. Another 19% are planning or considering doing so by 2024.
- Concierge navigation: Two in 10 (21%) offered concierge navigation even if it requires movement from a full-service health plan to a third-party administrator. Another 25% are planning or considering doing so by 2024.
- Voluntary benefits: Over a third of respondents (35%) added or enhanced voluntary benefits and vendor solutions in case of a catastrophic event. Another 27% are planning or considering doing so by 2024.
“Employers that act now to predict, plan and implement solutions and strategies that balance employee affordability objectives with escalating prices can avoid having to take desperate measures in a rising healthcare cost environment,” said Tim Stawicki, chief actuary, Health & Benefits, WTW. “Without question, employers face difficult challenges in the next few years. And with limited budgets, the challenge of making decisions that consider healthcare affordability and engagement is exponentially greater.”
About the survey
A total of 455 U.S. employers participated in the 2022 Best Practices in Health Care Survey, which was conducted in August 2022. Respondents employ 8.2 million workers.
About WTW
At WTW (NASDAQ: WTW), we provide data-driven, insight-led solutions in the areas of people, risk and capital. Leveraging the global view and local expertise of our colleagues serving 140 countries and markets, we help organizations sharpen their strategy, enhance organizational resilience, motivate their workforce and maximize performance.
Working shoulder to shoulder with our clients, we uncover opportunities for sustainable success—and provide perspective that moves you. Learn more at wtwco.com.
Media contact: | https://b2bchief.com/u-s-employers-double-down-on-controlling-healthcare-costs/ |
Q:
How to find out what changes applied to integral?
I have the following integral $$\int{\frac{\sqrt{x^2+1}}{x+2}dx}$$ and with Maple I got something like this:
$$\int\frac{1}{2} + \frac{1+3u^2+4u^3}{-2u^2+2u^4-8u^3}\,du$$ I want to know how this change of variables was achieved.
I tried WolframAlpha, but its solution is even scarier. This integral was for the Gaussian quadrature method, so its analytic solution is horrible.
A:
We can use the Euler substitution $t=\sqrt{x^{2}+1}-x$ to obtain a rational fraction in terms of $t$
$$\begin{eqnarray*}
I =\int \frac{\sqrt{x^{2}+1}}{x+2}\mathrm{d}x=\frac{1}{2}\int \frac{1+2t^{2}+t^{4}}{t^{2}\left( -1+t^{2}-4t\right) }\mathrm{d}t.
\end{eqnarray*}$$
Since the integrand is a rational fraction, we can expand it into partial fractions and integrate each fraction.
$$\begin{equation*}
\frac{1+2t^{2}+t^{4}}{t^{2}\left( -1+t^{2}-4t\right) }=1-\frac{1}{t^{2}}+
\frac{4}{t}+\frac{20}{t^{2}-4t-1}.
\end{equation*}$$
Added. Detailed evaluation. From $t=\sqrt{x^{2}+1}-x$, we get $x=\dfrac{1-t^{2}}{2t}$ and $\dfrac{dx}{dt}=-\dfrac{t^{2}+1}{2t^{2}}$. So we have
$$\begin{eqnarray*}
I &=&\int \frac{\sqrt{x^{2}+1}}{x+2}\mathrm{d}x=\int \frac{t+\frac{1-t^{2}}{2t}}{\frac{1-t^{2}}{2t}+2}\left( -\frac{t^{2}+1}{2t^{2}}\right) \mathrm{d}t
\\
&=&\frac{1}{2}\int \frac{1+2t^{2}+t^{4}}{t^{2}\left( -1+t^{2}-4t\right) }
\mathrm{d}t.
\end{eqnarray*}$$
Expanding into partial fractions as above, we obtain
$$
\begin{eqnarray*}
2I &=&\int 1-\frac{1}{t^{2}}+\frac{4}{t}+\frac{20}{t^{2}-4t-1}\mathrm{d}t \\
&=&\int 1\mathrm{d}t-\int \frac{1}{t^{2}}\mathrm{d}t+4\int \frac{1}{t}
\mathrm{d}t+20\int \frac{1}{t^{2}-4t-1}\mathrm{d}t \\
&=&t+\frac{1}{t}+4\ln \left\vert t\right\vert -2\sqrt{5}\ln \frac{\sqrt{5}t-2
\sqrt{5}+5}{5-\sqrt{5}t+2\sqrt{5}}+C \\
&=&\sqrt{x^{2}+1}-x+\frac{1}{\sqrt{x^{2}+1}-x}+4\ln \left( \sqrt{x^{2}+1}
-x\right) \\
&&-2\sqrt{5}\ln \frac{\sqrt{5}\left( \sqrt{x^{2}+1}-x\right) -2\sqrt{5}+5}{5-\sqrt{5}\left( \sqrt{x^{2}+1}-x\right) +2\sqrt{5}}+C.
\end{eqnarray*}$$
Therefore the given integral is
$$\begin{eqnarray*}
I &=&\frac{1}{2}\left( \sqrt{x^{2}+1}-x\right) +\frac{1}{2}\frac{1}{\sqrt{
x^{2}+1}-x}+2\ln \left( \sqrt{x^{2}+1}-x\right) \\
&&-\sqrt{5}\ln \frac{\sqrt{5}\left( \sqrt{x^{2}+1}-x\right) -2\sqrt{5}+5}{5-\sqrt{5}\left( \sqrt{x^{2}+1}-x\right) +2\sqrt{5}}+C.
\end{eqnarray*}$$
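As a quick sanity check of the final antiderivative, one can differentiate it and compare with the original integrand, for instance with SymPy (a verification sketch only, assuming SymPy is available; it plays no part in the derivation itself):

import sympy as sp

x = sp.symbols('x', positive=True)
t = sp.sqrt(x**2 + 1) - x  # the Euler substitution t = sqrt(x^2 + 1) - x

# The antiderivative obtained above (constant of integration omitted).
F = (sp.Rational(1, 2)*t + sp.Rational(1, 2)/t + 2*sp.log(t)
     - sp.sqrt(5)*sp.log((sp.sqrt(5)*t - 2*sp.sqrt(5) + 5)
                         / (5 - sp.sqrt(5)*t + 2*sp.sqrt(5))))

integrand = sp.sqrt(x**2 + 1)/(x + 2)

# F'(x) should reproduce the integrand; check at a sample point.
residual = (sp.diff(F, x) - integrand).subs(x, sp.Rational(13, 10))
print(sp.N(residual))  # expect a value on the order of zero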
| |
What are the key elements of EAP presentations?
This is the second of three lessons about Presentations. To complete this course, read each lesson carefully and then unlock and complete our materials to check your understanding.
– Introduce the five elements of academic presentations
– Discuss each element in turn to help guide the reader
– Link to other useful resources to encourage extended learning
Lesson 2
In the second lesson of this short course on presentations, we focus more specifically on the key elements of an academic presentation for students using English for Academic Purposes (EAP). While there are many elements and skills that can be improved and honed by presenters, we’ve grouped these elements into five categories, which are content, display, organisation, language and delivery. Any student completing an assessed presentation at university may find this information useful, particularly those trying to improve confidence, delivery and final grades.
Element 1: Content
Before the day of the presentation, it’s very important that a presenter spends considerable time researching, selecting and editing their content so that their presentation is as convincing, up-to-date and engaging as possible. This is particularly true for assessed academic presentations, in which your assessor will be paying careful attention to the quality of your sources, the strength of your arguments, and how you’ve interpreted and summarised the relevant concepts. After spending considerable effort on narrowing down your sources, don’t forget to include clear citations and references so that your assessor can verify them.
Element 2: Display
How both you and your presentation are displayed to the audience is another critical aspect of a successful academic performance. Start by wearing smart and appropriate clothing, and make sure to also spend a good number of hours creating some kind of visual aid, such as a PowerPoint or Prezi, to support your spoken word. A well-displayed presentation will create a good impression, leading both the audience and assessor to believe that they’re about to watch something of high quality.
Element 3: Organisation
Of course, it doesn’t matter how smart and neatly presented both you and your presentation slides are; if your performance lacks any clear structure, it may seem confusing or illogical to your audience. Remember to divide your presentation into obvious sections such as an introduction, a body, a conclusion and questions and answers, and then use clear headings on your PowerPoint slides to indicate this throughout the presentation. You may also wish to include and describe a contents page that can be introduced towards the beginning of your performance.
Element 4: Language
In addition to having organisational cues written directly on your PowerPoint slides, presenters may also wish to be mindful of the type of language structures they use, as these can be equally helpful in guiding the audience. As our short courses on presentation language and listening and lecture cues indicate, such language structures can inform the audience of when transitions are being made by the presenter, when evidence and sources are being introduced, and when examples are being provided. By using these formulaic structures, students will be helping the audience to predict their content, reducing some of the strain of comprehension.
Element 5: Delivery
The final element of a presentation that academics should focus on is delivery, which is perhaps also the most challenging of all the elements. While the previous aspects can all be mastered from the comfort of your own home, the actual delivery of the presentation is very public and requires practice, confidence and experience. This includes aspects of body language, such as gesture, posture and facial expressions, as well as aspects of vocal delivery, such as volume, pitch and memorisation. Ultimately, while poor content and visual aids may be somewhat masked by a strong delivery, the same cannot be said in reverse.
In our third and final lesson, we next provide ten tips for presentation success. Complete the Lesson 2 activities first and then move on to Lesson 3.
| https://academicmarker.com/academic-guidance/assignments/presentations/what-are-the-key-elements-of-eap-presentations/ |
The broader investment community has known for some time that disruption is coming. A recent survey of operations professionals, however, suggests it has already arrived. It may not be as obvious as the upheaval that occurred across the media landscape or even as dire as Amazon’s threat to traditional retail, but based on growing adoption rates of new technologies – from alternative data and robotics process automation (RPA) to artificial intelligence (AI) and machine learning (ML) – it’s evident that many of the most hyped innovations in recent years are now being deployed regularly across the investment landscape.
This sweeping change, however, isn’t necessarily obvious from the outside looking in. That’s because most buy-side operating professionals have a very realistic view as to how disruptive technologies can help their business today. Take robotic process automation or RPA: the technology isn’t necessarily changing the product set so much as it is altering how investment firms manage and perform back-office functions. Certain “swivel chair” responsibilities – such as reconciliations, data cleansing, trade processing or other fund-administration tasks – are increasingly being automated through RPA applications. The impact may not be immediately evident to clients, but the efficiencies gained are certainly material. And this will allow firms to redeploy valuable resources to client-facing functions or directly into their core investment operations.
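As a simplified illustration of the kind of "swivel chair" task a bot can absorb, consider an automated position reconciliation between two systems. This is a minimal sketch under assumed record layouts (a dictionary mapping security ID to position); real RPA deployments wrap far more connectivity, logging and exception workflow around the same idea.

def reconcile(custodian_positions, internal_positions, tolerance=0.01):
    # Both inputs map a security identifier to a position quantity.
    breaks = []
    for sec_id in set(custodian_positions) | set(internal_positions):
        cust = custodian_positions.get(sec_id, 0.0)
        internal = internal_positions.get(sec_id, 0.0)
        if abs(cust - internal) > tolerance:
            breaks.append((sec_id, cust, internal))
    return breaks  # only the exceptions are routed to a human

# Hypothetical example: one position disagrees between the two systems.
print(reconcile({"US0378331005": 1500.0}, {"US0378331005": 1450.0}))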
The study, “Reaping the Benefits of Disruptive Technology,” polled approximately 100 decision makers at global investment firms. Even as the findings point to a pronounced trend in which a growing number of organizations are leveraging disruptive technologies, many still face daunting obstacles, such as data hygiene, that stand in the way. This hints at a future in which the industry will soon be divided between the haves, who are able to deploy disruptive technologies, and the have-nots, whose legacy infrastructure creates a distinct disadvantage that could begin to impede not only performance, but also their ability to address evolving client demands in the years ahead.
According to the research, conducted by both Eagle Investment Systems and strategic consulting firm Hegarty Group, a nearly unanimous 99% of respondents said their organization is already using alternative data in some form, with eight out of every ten identifying it as either a core part of their investment process or a strong adjunct to it. RPA, similarly, has also experienced widespread adoption, although the most ardent users generally reside in operations, where COOs are deploying bots to hold back-office and IT footprints static or minimize them.
While the adoption rates of AI or ML are not yet as high as alternative data or RPA, the survey demonstrated that these technologies are being deployed more broadly to improve research and risk-management functions, create back-office efficiencies, or automate and optimize discrete tasks such as client onboarding or sales efforts. Among those polled, exactly half said their organization is using AI and ML in these or other related functions.
But how firms are currently deploying these technologies only represents a part of the story and, really, only offers a snapshot of a movement that is evolving rapidly in real time. Perhaps more important is what’s preventing those who may be behind the technology curve from keeping pace as adoption grows and new applications for these and other technologies emerge.
In many ways, the growing deployment of RPA, AI and ML capabilities reflects business-transformation efforts in recent years that have been largely premised on reducing the complexity and cost of IT and gaining more value from the organization’s data. Nearly a third of respondents also noted that the overriding goal of digital transformation was to either accelerate their ability to deploy new innovations or improve the overall customer experience.
The caveat, however, is that the required groundwork to facilitate a digital transformation can itself be disruptive to the larger organization. Based on the survey, the biggest obstacles that stand in the way of adopting the aforementioned technologies are the current maturity of existing solutions, broader organizational interest, and the current state of data hygiene.
The first two obstacles are certainly related. With limited organizational interest, awareness around how these technologies can help buy-side firms will remain limited. The front office, in an era of shrinking margins, can also be hesitant to take on the costs that accompany large-scale transformations knowing that the payoff may be years down the road.
But the most daunting challenge, given the nature of the advancing technology, is the need for buy-side firms to address current data hygiene. This challenge only becomes more overwhelming the longer organizations sit on their hands. It typically entails the implementation of a centralized, next-generation data platform that can deliver auditable fit-for-purpose data across a global enterprise. A data governance program is also essential, particularly as front-office data needs often differ significantly from what’s required for back-office functions.
From a cultural perspective alone, there needs to be an organizational commitment to regard data as a true business asset. More importantly, data governance provides the foundation to meet the voluminous but strict data demands of AI and ML technologies.
Other considerations will also become paramount. To continually leverage these advances as AI and other new capabilities advance, buy-side firms will likely have to embrace the extensibility of the cloud. In addition to the requisite computing power to manage and process big data, the continual software delivery of cloud-native platforms will drive improved quality and system resiliency, facilitate seamless software upgrades, and allow for faster technology adoption.
Along these same lines, an extensive API framework will provide avenues into multiple technology ecosystems, while service-abstraction design principles allow critical business services to move out of single-data center environments to be absorbed by a set of providers and SaaS capabilities. In other words, the next business transformation may be the last, assuming the architects encourage flexibility and agility above all else.
The transition over time to a composable business paradigm will allow firms to co-create with vendors to tailor new, business-led applications and will enable a plug-and-play accessibility. This accessibility will help to open organizations up to the rapid innovation occurring across an expansive and growing FinTech universe. The range and scale of the opportunity set, however, are offset by the risk of falling so far behind the technology curve that the disadvantages become apparent to those outside the organization in the form of lower profit margins, deficient client service, inadequate transparency and, ultimately, underperformance. But once buy-side firms can get their data management and governance houses in order, they’re generally in position to move quickly to embrace disruption as it occurs.
As Chief Technology Officer of Eagle Investment Systems, Steve Taylor drives the software, technology and architecture decisions across Eagle’s investment management suite and the Eagle ACCESSSM private cloud platform. | https://www.marketsmedia.com/data-as-a-disruption-enabler/ |
"The intimate community of life and love which constitutes the married state has been established by the creator and endowed by him with it's own proper laws. God himself is the author of marriage. The vocation of marriage is written in the very nature of man and woman as they came from the hand of the creator.
Marriage is not a purely human institution despite the many variations it may have undergone through the centuries in different cultures, social structures and spiritual attitudes. These differences should not cause us to forget its common and permanent characteristics. Although the dignity of this institution is not transparent everywhere with the same clarity, some sense of the greatness of the matrimonial union exists in all cultures.
"The well-being of the individual person and of both human and Christian Society is closely bound up with the healthy state of conjugal and family life." (Catechism 1603)
The celebration of marriage is a special moment when a couple wish to declare before God and their friends their love and their wish to seek God's blessing on them and those who will be encouraged by their witness. By this wonderful sacrament, God gives the husband and wife His grace and help, so that they might face life's journey with great zeal.
If it is part of the Lord's plan this love will be further proclaimed by the gift of children.
Please see the Information Centre, in the porch at the back of the Church, for marriage forms. Some of the information included in the form is available here for download; please press this link. | http://stmarysgemchurch.co.uk/Sacrament-of-Matrimony--Marriage-.php
BACKGROUND OF THE INVENTION
Field of the Invention
Related Background Art
SUMMARY OF THE INVENTION
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[Description of a printer (Figs. 1 and 2)]
[Another Embodiment]
[Description of the Printer (Figs. 4 and 5)]
The present invention relates to an output apparatus for changing a dot density and recording into a recording medium.
Hitherto, a plurality of kinds of font patterns have been provided as the font pattern data output to a printer, so as to cope with various character types; when a character code to be printed is input, the font pattern corresponding to the designated character is read out of a font memory and printed. In such a printer, the density of dots to be printed is generally set to a fixed value.
Among the above font patterns, in the case of the pattern data for English or European languages, in which the number of character types is small, the patterns are formed so as to be printed at high quality using a printer of a high print dot density. By contrast, in the case of the pattern data for Japanese, in which Chinese characters (Kanji) or the like are printed, the number of characters is large, so the patterns are set to be recorded at a relatively low resolution because of the limited capacity of the font ROM.
Therefore, to print both an English or European language and Japanese with the conventional printer, whose dot density is constant, it is necessary to prepare completely separate printers, one dedicated to Japanese and one dedicated to the English or European language.
There are various application programs for printing by using such printers. In those programs, a special printer control code is generated through a host computer and the printing is executed. In such a printing apparatus, printing by other control codes is realized by a control program, that is, an emulation program for printing by a control code different from the printer's inherent control code. Generally, since the emulation program is formed in correspondence to the recording dot density peculiar to the printing apparatus, the sizes or positions of output images differ from the sizes or positions desired by the application program depending on the difference between the recording dot densities of the printing apparatuses, so that there is a problem that normal printing cannot be executed.
In consideration of the above problems, it is an object of the present invention to provide an output apparatus of a recording control apparatus for receiving code information and developing into a bit map and recording, comprising developing means for developing the code information into image information of a bit pattern form; recording means which can record the image information onto a recording medium and can record an image which is formed onto the recording medium while changing a dot density of the image; and changing means for changing the dot density of the recording means in correspondence to a code system which was designated by the code information.
In consideration of the above problems, it is another object of the invention to provide an output apparatus of recording means in which code information is developed into image information of the bit pattern form, the image information is recorded onto a recording medium, and a dot density of an image which is formed onto the recording medium is changed and the image can be recorded, wherein the output apparatus operates so as to change the dot density of the recording means in correspondence to a code system designated by the code information.
The present invention is made in consideration of the above conventional example and it is still another object of the invention to provide an output apparatus in which a recording dot density can be automatically changed to the recording dot density corresponding to the print control program such as an emulation program or the like.
In consideration of the above problems, further another object of the invention is to provide an output apparatus of a recording control apparatus for receiving code information and developing into a bit map and recording, comprising: developing means for developing the code information into the image information of the bit pattern form; recording means which can receive the image information and record onto a recording medium and can record an image which is formed onto the recording medium while changing a recording dot density of the image; and changing means for changing the recording dot density by the recording means in correspondence to a control program to execute the recording process by using the recording means.
In consideration of the above problems, further another object of the invention is to provide an output apparatus in which code information is developed into image information of a bit pattern form, the image information is input and recorded onto a recording medium, and an image which is formed onto the recording medium can be recorded while changing a recording dot density of the image, wherein the recording dot density to record onto the recording medium can be changed in correspondence to a control program to execute the recording process.
Fig. 1 is a block diagram showing a schematic construction of a printer of an embodiment of the invention;
Fig. 2 is a flowchart showing printing processes in a printer in the embodiment;
Fig. 3 is a diagram showing a code system of the ISO code;
Fig. 4 is a block diagram showing a schematic construction of a printer of an embodiment; and
Fig. 5 is a flowchart showing printing processes in the printer of the embodiment.
A preferred embodiment of the present invention will be described in detail hereinbelow with reference to the drawings.
Fig. 1 is a block diagram showing a schematic construction of a printer 100 of an embodiment.
Reference numeral 101 denotes a host computer. The host computer 101 outputs a character code and various control codes to the printer 100 of the embodiment and indicates the printing. The code which is output from the host computer 101 includes code information such as a character or the like, information to indicate the size (for example, a point number), type style, and the like thereof, and the like.
A construction of the printer 100 will now be described hereinbelow.
Reference numeral 103 denotes a controller of the printer 100 and 104 indicates a printing section to actually execute the printing operation. The printing section 104 can print at a different dot density while changing the printing dot density by a command which is output from the controller 103. Reference numeral 111 denotes a CPU to control the whole printer 100. The CPU 111 outputs various control signals onto a bus 120 in accordance with control programs (for instance, Fig. 2) and various kinds of data stored in a ROM 112, thereby controlling each section, which will be explained hereinlater. Reference numeral 112 denotes the program ROM in which the control programs of the CPU 111, various kinds of data, and the like are stored. Reference numeral 113 denotes a RAM which is used as a work area of the CPU 111. Various kinds of data are temporarily stored into the RAM 113. The present printing dot density in the printing section 104 is stored, for instance as a dpi value (the number of dots per inch), in a DDS (Dot Density) memory area.
Reference numeral 114 denotes a host interface section to execute the input and output controls of various data between the host computer 101 and the printer 100. Reference numeral 115 indicates a page buffer to store input print data. The page buffer 115 is also used for a page edition, which will be explained hereinlater. Reference numeral 116 denotes a font ROM to store dot pattern information in correspondence to character codes and the like. Reference numeral 117 denotes a bit map memory. Image information of at least one page which was developed into a bit map pattern with reference to the font ROM 116 on the basis of the character codes or the like stored in the page buffer 115 is stored in the bit map memory 117.
Reference numeral 119 indicates a printer interface section to control the interface between the printing section 104 and the CPU 111 or the like. Reference numeral 118 represents a clock generator to generate image clocks corresponding to the dot density in the printing section 104. The printer interface section 119 serially outputs print data to the printing section 104 synchronously with the image clocks. The printing section 104 corresponds to the recording section to actually execute the recording in the printer 100. A mechanism section and the like of the recording system are included in the recording section.
The operation in the above construction will now be described. When print data is sent from the host computer 101, it is input through the host interface section 114, and the CPU 111 is notified that the data has been input. Thus, each time data is received, the CPU 111 transfers the data to the page buffer 115 and stores it. In parallel with this process, the commands or data stored in the page buffer 115 are sequentially read out. In accordance with the command, the kind of character and the font, the number of characters, the character pitch, and the like to be selected are interpreted, and the page edition is executed in an area 115a in the page buffer 115.
The code data which was read out of the area 115a is developed into a bit map by referring to the pattern data in the font ROM 116 and is developed as a pattern into the bit map memory 117. When the print data of a predetermined amount, for instance, one page is developed into the bit pattern, the image clocks which are generated from the clock generator 118 are set on the basis of the present dot density stored in the DDS memory area and the printing dot density is designated in the printing section 104. The bit pattern data is output to the printing section 104 and the image information which was developed into a dot pattern is printed.
When the dot density designating command is input to the printing section 104, the internal mode of the printing section 104 is set so as to change the printing dot density to the designated density. After that, the printing is performed at the designated printing dot density.
The printing control is executed in a manner such that the CPU 111 outputs a control signal to the printing section 104 through the printer interface section 119 and a signal is input/output to/from the printing section 104. When a horizontal sync signal is received from the printing section 104, the print data is sent as video data to the printing section 104 synchronously with the image clocks which are generated from the clock generator 118. The print data is transferred from the bit map memory 117 to the printer interface section 119. The printer interface section 119 converts the parallel data into the serial data and the video data is formed.
The fundamental outputting operation has been described above. A changing process of a dot density as a feature of the embodiment will now be described hereinbelow.
Particularly, as a general example, explanation will now be made with respect to the case where the printing is executed while changing the printing dot density in accordance with the English and Japanese character code systems.
It is now assumed that the character code system of certain English is set to A and the character code system of Japanese is set to B. Since the number of character fonts of the English character code system is small, the printing dot density is set to α (for instance, 300 dpi) and the printing is executed. In the case of the Japanese character code system, since the number of kinds of characters such as Chinese characters or the like is large, the printing dot density is set to β (for example, 240 dpi) and the printing is performed. There is a relation of α > β, and such a relation is typical of known printing apparatus of this kind.
In the embodiment, before the outputting operation to the printing section 104 mentioned above, the printing command or print data stored in the page buffer 115 is read out by the CPU 111. The page edition is executed in the area 115a of the page buffer on the basis of the command. At this time, a check is made to see if the command to designate the character code system B, that is, the Japanese character code or the character code of Japanese exists in the relevant page or not.
If the character code system B does not exist in the relevant page, an instruction is made to the printing section 104 so as to print at the printing dot density α. The character font, character pitch, and the like are selected so as to output a character at the character size designated by the command and at the dot density indicated and are developed into the bit map memory 117.
On the other hand, if the command to designate the code system B of Japanese or the character code of Japanese exists, in order to print at the printing dot density β, the character font and character pitch are selected so as to output the character at the size designated by the command and at the dot density β and are developed into the bit map memory 117. Upon printing, as mentioned above, the printing dot density of the printing section 104 is switched to the dot density α or β and the printing is executed.
Fig. 2 is a flowchart showing the printing processes in the embodiment. The control program to execute the printing processes is stored in the ROM 112.
In Fig. 2, steps S1 to S7 show one step in the printing processes of one page. If those processes can be also performed even for the data in other page or the data in the same page, the processes in the above steps can be also executed in parallel.
In step S1, the print data from the host computer 101 is received by the host interface section 114. In step S2, the data received in step S1 is stored into the page buffer 115. In step S3, the data in the page buffer 115 stored in step S2 is read out by the CPU 111, the command or data is interpreted, and the page edition is executed. After completion of the edition of one page, step S4 follows and a check is made to see if the command to designate the character code system B (of Japanese) or the character code exists in the page or not. If the character code system B does not exist, step S5 follows; if it does exist, step S8 follows.
In step S5, in order to print at the dot density α, or in step S8, in order to print at the dot density β, the character dot data is developed into the bit map memory 117 so as to print at the size, character style, pitch, and font kind which were requested by the command from the host computer 101 and at each dot density. At the same time, the value in the DDS memory area is also updated to the latest dot density.
In steps S6 and S9, the dot density is switched to the corresponding dot density α or β. Such a switching process of the dot density is accomplished by changing the clock rate of the clock generator 118 or by outputting the dot density information to the printing section 104. In step S7, the printing dot density is output to the printing section 104 and the printing is performed.
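In outline, the branch of steps S4 to S9 amounts to the following selection logic. This is an illustrative sketch only, not code from the disclosure; the 300 and 240 dpi values echo the α and β of the example above, and the predicate for recognising code system B is a stand-in.

ALPHA_DPI = 300  # density alpha for the English/European code system A
BETA_DPI = 240   # density beta for the Japanese code system B

def density_for_page(page_commands, designates_code_system_b):
    # Step S4: scan the edited page for a designation of code system B
    # (Japanese) or for a Japanese character code.
    uses_japanese = any(designates_code_system_b(cmd) for cmd in page_commands)
    # Steps S5/S8 develop the bit map at the matching density and update
    # the DDS memory area; steps S6/S9 switch the printing section.
    return BETA_DPI if uses_japanese else ALPHA_DPI

# Hypothetical example: a page containing one Kanji designation.
print(density_for_page(["ESC $ B", "..."], lambda cmd: cmd.startswith("ESC $")))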
As described above, in the embodiment, by discriminating the code system to be used on the basis of the printing command or print data or the like from the host computer 101, the English output and Japanese output can be automatically printed at different dot densities. A method of actually designating the character codes of English and Japanese will now be described.
Fig. 3 is a diagram showing the code system of the command system ISO.
In Fig. 3, the numerical values written in the upper row denote the values of the upper three bits of the 7-bit code, and the numerical values written at the left edge portion denote the lower four bits of the 7-bit code. In the diagram, the block in the bold frame denotes the character code system for each code, and a few kinds of character code systems such as USA, JIS, and the like are provided. The numerical values written outside the bold frame indicate codes which are used for commands. The switching among the character code systems is indicated by the data subsequent to an escape command "ESC"; the codes which follow are interpreted according to the character code system designated there. As the code systems, for instance, ASCII, UK, JIS, and the like can be mentioned.
Similarly, if data indicative of a 2-byte code follows the "ESC" command, it is assumed that Chinese characters (Kanji) have been designated. The subsequent data is interpreted as 2-byte codes and the character code system B for Chinese characters is selected.
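A minimal sketch of such detection might look as follows. The two escape sequences shown (ESC $ @ and ESC $ B) are the well-known ISO 2022 designations for 2-byte JIS Kanji sets; treating their presence as "code system B detected" is an illustrative simplification of the interpretation described above.

ESC = 0x1B

def contains_two_byte_designation(data: bytes) -> bool:
    # Scan the print data for ESC followed by a 2-byte (Kanji) set
    # designation, e.g. ESC $ @ or ESC $ B.
    kanji_designators = (b"$@", b"$B")
    for i in range(len(data) - 2):
        if data[i] == ESC and data[i + 1:i + 3] in kanji_designators:
            return True
    return False

# Hypothetical example: a stream that switches to a 2-byte code system.
print(contains_two_byte_designation(b"Hello \x1b$B..."))  # True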
As mentioned above, a check is made to see whether the designated code system is one for English or for Japanese, so that the printing can be executed at the optimum printing dot density for the character code system included in the data from the host computer 101.
[I] In the above embodiment, the printing dot density has been selected in dependence on English or Japanese and the printing has been executed at the selected density. However, the dot density can also be switched in accordance with the ratio which each code system occupies in one page. Alternatively, those factors can be distinguished by an arbitrary code system irrespective of the special character code systems of English or Japanese. On the other hand, although the above embodiment has been described with respect to the case of characters, the invention is not limited to such a case. The invention can also be applied to the case of recording ordinary images, figures, or the like.
[II] In the above embodiment, the printing dot density has been switched between two kinds by the character code system. However, a few kinds of printing dot densities can also be distinguished by a few kinds of character code systems.
[III] In the above embodiment, each character code system to be distinguished has been treated as a single system. However, the printing dot density can also be distinguished between groups, each group being a set of a few kinds of character code systems.
In the above item [III], a few such groups can also correspond to a few kinds of printing dot densities.
In the embodiment, the printing dot density has been recognized on a page unit basis. However, it can also be discriminated on the basis of one job or a few jobs.
[IV] Although the explanation has been made for only the character codes, the printing dot density can also be switched and the printing executed in accordance with a command system such as ISO, DIABLO, EBCDIC, or shift JIS, a command system of the vector expression such as a VDM, a program of a page description language, or the like.
[V] Although the description has been made with respect to only the font of the dot construction, data can be also developed in the bit map memory by using an outline font of the vector expression.
For instance, in the case of a laser beam printer, the changing process of the dot density in the printing section can be realized by changing the period of image clocks, changing the scanning speed or scanning period of a laser beam, or changing the conveying speed (pitch) or the like of a recording paper. In the case of a serial printer or the like, the dot density change can be realized by changing the scanning pitch (speed) of a serial head and the conveying pitch of a recording paper.
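To illustrate one of these mechanisms: the horizontal dot density and the image clock are directly related, so at a fixed scan rate a lower density means a proportionally lower clock. The scan width and line rate below are hypothetical mechanism constants, not values from the embodiment.

def image_clock_hz(dpi, scan_width_inch, lines_per_second):
    # Dots emitted per scan line, times the number of scan lines per second.
    dots_per_line = dpi * scan_width_inch
    return dots_per_line * lines_per_second

# Switching 300 dpi -> 240 dpi at a fixed scan rate lowers the image
# clock in the same 5:4 ratio (the vertical density is instead governed
# by the paper conveying pitch).
print(image_clock_hz(300, 8.5, 600))  # 1530000.0 Hz
print(image_clock_hz(240, 8.5, 600))  # 1224000.0 Hz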
As described above, according to the embodiment, there is an advantage such that by automatically switching the printing dot density and printing in accordance with the command system or character code system in the print data, the printing can be performed at the necessary optimum printing dot density in correspondence to the characteristics of each character without the aid of an operator.
On the other hand, the printing can be executed at the optimum dot density by using the character fonts of a conventional printing apparatus of fixed printing dot density, without changing those fonts.
As described above, according to the invention, there is an advantage such that the dot density upon recording can be changed to the optimum density in correspondence to the code system for recording.
A preferred embodiment of the present invention will now be described in detail hereinbelow with reference to the drawings.
Fig. 4 is a block diagram showing a schematic construction of the printer 100 in the embodiment, in which the same parts and components as those shown in Fig. 1 are designated by the same reference numerals.
Reference numeral 101 denotes a host computer for outputting character codes and various control codes to the printer 100 in the embodiment and instructing the printing. The code which is output from the host computer 101 includes the code information of character or the like, information to indicate the size (for instance, the point number), type style, and the like of such a character, and the like.
A construction of the printer 100 will be described hereinbelow.
Reference numeral 103 denotes the controller of the printer 100 and 104 indicates the printing section to actually execute the printing operation. The printing dot density can be changed by a command which is output from the controller 103 and the printing can be performed at a different dot density. Reference numeral 111 indicates the CPU to control the whole printer 100. The CPU 111 outputs various control signals onto the bus 120 and controls each section, which will be explained hereinlater, in accordance with the control programs or various data stored in the ROM 112. Reference numeral 112 denotes the program ROM in which the control programs of the CPU 111 and various kinds of data and the like are stored. Reference numeral 113 indicates the RAM which is used as a work area of the CPU 111. The RAM 113 temporarily stores various data. The present printing dot density in the printing section 104 is stored in the DDS (Dot Density) memory area as a value of, for instance, dpi (the number of dots per inch).
Reference numeral 114 indicates the host interface section to execute the input/output control of various data between the host computer 101 and the printer 100. Reference numeral 115 represents the page buffer to store the input print data. The page buffer 115 is also used for a page edition, which will be explained later. Reference numeral 116 indicates the font ROM to store dot pattern information corresponding to the character codes and the like. Reference numeral 117 denotes the bit map memory. The image information of at least one page which was developed into a bit map pattern with reference to the font ROM 116 on the basis of the character codes or the like stored in the page buffer 115 is stored in the bit map memory 117.
Reference numeral 119 denotes the printer interface section to control the interface between the printing section 104 and the controller 103. Reference numeral 118 denotes the clock generator to generate image clocks in correspondence to the dot density in the printing section 104. The printer interface section 119 serially outputs the print data to the printing section 104 synchronously with the image clocks. The printing section 104 corresponds to the recording section to actually execute the recording by the printer 100. A mechanism section and the like of the recording system are included in the recording section. A few kinds of dot densities can be set by an instruction from the controller 103 and the recording can be executed.
Reference numeral 121 denotes an input section. Through the input section 121, the operator can instruct whether the program in the ROM 112 is executed or one of the emulation programs A and B is selected and executed, and the like. Reference numeral 122 denotes a built-in ROM in which the emulation program B is stored; 123 indicates a detachable external connecting memory which is connected to the controller 103 by a connector or the like and in which the emulation program A is stored; and 120 the system bus to connect each section mentioned above as shown in the diagram and to transmit an address signal, a data signal, and various control signals.
The operation of the above construction will now be described. When print data is sent from the host computer 101, it is input through the host interface section 114, and the CPU 111 is notified that data has arrived. Each time data is received, the CPU 111 transfers and stores it into the page buffer 115. In parallel with these processes, the commands and data stored in the page buffer 115 are sequentially read out; the kind of character, the font, the number of characters, the character size, the spacing, and the like to be selected are interpreted in accordance with the commands; and the page is edited in the area 115a of the page buffer 115 according to a predetermined format.
The code data stored in the area 115a is developed, with reference to the pattern data in the font ROM 116 and on the basis of the information on printing position, pitch, type style, size, and the like, into a bit map pattern in the bit map memory 117. When a predetermined amount of print data, for instance one page, has been developed into a bit pattern, the clock generator 118 sets the image clocks on the basis of the current dot density stored in the DDS memory area, and the printing dot density is designated to the printing section 104. A print start instruction and a vertical sync signal are then output to the printing section 104.
When the vertical sync signal is input, the printing section 104 generates a horizontal sync signal with its printing mechanism and outputs it to the printer interface section 119. The printer interface section 119 reads out the bit image data from the bit map memory 117 synchronously with the horizontal sync signal, converts it into a serial signal, and outputs it as the video signal to the printing section 104. The printing section 104 receives the video signal and prints by scanning, for instance, a laser beam. In the case where the control program stored in the ROM 112 or one of the emulation programs A and B has been selected, the above-described printing operation is controlled in accordance with the selected program.
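The role of the printer interface section 119, namely fetching one raster line per horizontal sync pulse and shifting its bits out serially, can be pictured with the short generator below. It is only a sketch; the bitmap layout (one sequence of bytes per scan line, MSB first) and the callback name are assumptions, not details from the patent.

```python
def video_stream(bitmap, wait_for_hsync):
    """Yield the serial video bits for each scan line of the bitmap.

    bitmap: one sequence of bytes per scan line (assumed layout).
    wait_for_hsync: blocks until the next horizontal sync pulse.
    """
    for line in bitmap:
        wait_for_hsync()                 # stay in step with the print mechanism
        for byte in line:
            for bit in range(7, -1, -1):
                yield (byte >> bit) & 1  # serial video data, MSB first
```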
When a dot density designation command is input, the printing section 104 sets its internal conditions so as to change the printing dot density to the designated value. Thereafter, printing is executed at the designated dot density.
The fundamental printing operation has been described above. The dot density changing process, which is a feature of the embodiment, will now be described.
In the embodiment, it is assumed that the emulation programs A and B are connected and that both of them use command systems different from the one stored in the ROM 112.
When the power source is turned on or a reset command is input from the inputting section 121, the CPU 111 initializes the system on the basis of the program in the ROM 112. At this time, the CPU recognizes that an emulation program has been connected by referring, through the system bus 120, to the ID written at a special address of the emulation program A (in the case where the auxiliary memory is connected).
When the execution of the emulation program A 123 is instructed from the inputting section 121, the printing dot density is read on the basis of the ID of the emulation program A, and a dot density changing command is output to the printing section 104 through the printer interface section 119 so as to change the printing dot density to the designated value. At the same time, the clock rate of the clock generator 118 is changed so as to output the image clocks corresponding to the printing dot density of the printing section 104. After that, control is shifted from the program in the ROM 112 to the emulation program A 123 and the printing process is executed.
If information indicating the printing dot density does not exist in the emulation program, the printing dot density is designated with reference to a table, stored beforehand in the ROM 112, which holds the printing dot density corresponding to each emulated printer.
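This two-step resolution, first the density recorded in the program's ID block and then the ROM table as a fallback, could look like the following minimal Python sketch. The table contents and all names are hypothetical placeholders, not values from the patent.

```python
# Hypothetical dot-density table held in the ROM 112, keyed by the
# emulated printer; the dpi values are made-up examples.
DENSITY_TABLE_DPI = {"EMULATION_A": 240, "EMULATION_B": 300}

def resolve_dot_density(program_name, density_from_id=None):
    """Prefer the density found in the emulation program's ID block;
    otherwise fall back to the table stored in the ROM."""
    if density_from_id is not None:
        return density_from_id
    return DENSITY_TABLE_DPI[program_name]

print(resolve_dot_density("EMULATION_A"))       # 240, via the ROM table
print(resolve_dot_density("EMULATION_B", 600))  # 600, via the ID block
```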
On the other hand, in the case where the emulation program A 123 is not connected externally, or where the execution of the emulation program B 122 is instructed from the inputting section 121, the printing dot density is similarly changed to the one corresponding to that emulation program.
Fig. 5 is a flowchart showing the printing process in the embodiment. The control program to execute the printing process is stored in the ROM 112.
The printing process is started by turning on the power source of the apparatus, by inputting a reset command from the inputting section 121, or the like. First, in step S1, a check is made to see whether the execution of an emulation program has been instructed from the inputting section 121. If NO, step S2 follows and the control program stored in the ROM 112 is executed.
If the execution of an emulation program has been instructed, step S3 follows and a check is made to see whether the execution of the auxiliary memory 123 (emulation program A) has been instructed from the inputting section 121. If the execution of the emulation program B 122 has been instructed, step S4 follows and the ID of the emulation program B (built-in ROM) is read.
If the execution of the emulation program A in the auxiliary memory 123 has been instructed in step S3, step S5 follows and a check is made to see whether the auxiliary memory is connected. If NO, step S2 follows and the control program in the ROM 112 is executed; under its control, processes such as displaying the absence of the auxiliary memory are executed.
If the auxiliary memory is connected in step S5, step S6 follows and the ID of the emulation program A 123 in the auxiliary memory is read. Then, step S7 follows and a check is made, on the basis of the ID information read in step S4 or S6, to see whether the printing dot density of the emulation program A or B has been designated. If the printing dot density has been designated, step S8 follows and the dot density is read and stored into the DDS memory area in the RAM 113. If information indicating the dot density does not exist in the emulation program, step S9 follows and the printing dot density is determined with reference to the table (not shown) stored in the ROM 112.
When the printing dot density has been determined, step S10 follows and the dot density information stored in the DDS memory area in the RAM 113 is output to the printing section 104 through the printer interface section 119. Due to this, the printing dot density of the printing section 104 is changed to the designated value. Then, step S11 follows and an instruction signal to change the clock rate is output to the clock generator 118, so that image clocks corresponding to the dot density set in the printing section 104 are supplied to the printer interface section 119. The subsequent printing processes are executed synchronously with these image clocks. In step S12, the designated emulation program is executed and the processing routine is finished.
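Steps S1 to S12 condense into the following Python-style sketch of the control flow. Every method and attribute name on the hypothetical ctrl object (emulation_requested, read_id, and so on) is an invented stand-in for the checks described above; the patent specifies only the flowchart, not an API.

```python
def printing_process(ctrl):
    # S1: was execution of an emulation program instructed?
    if not ctrl.emulation_requested():
        return ctrl.run_rom_program()                   # S2
    if ctrl.auxiliary_memory_requested():               # S3
        if not ctrl.auxiliary_memory_connected():       # S5
            return ctrl.run_rom_program()               # S2: reports absence
        prog = ctrl.read_id("A")                        # S6: ID of program A
    else:
        prog = ctrl.read_id("B")                        # S4: ID of program B
    if prog.dot_density is not None:                    # S7: density designated?
        ctrl.dds = prog.dot_density                     # S8: store in DDS area
    else:
        ctrl.dds = ctrl.rom_density_table[prog.name]    # S9: ROM table fallback
    ctrl.printer_interface.set_density(ctrl.dds)        # S10: inform printing section
    ctrl.clock_generator.set_rate_for(ctrl.dds)         # S11: adjust image clocks
    ctrl.execute_emulation(prog)                        # S12: run the program
```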
Since printing is performed at the printing dot density corresponding to the emulation program being executed, there is the advantage that printing can also be carried out using an emulation program designed for a different printing dot density.
In the above embodiment, the emulation programs are all selected for execution from the inputting section 121. However, it is also possible to assign priorities to the auxiliary memories so that, when an auxiliary memory is connected, its emulation program is executed automatically.
Although the embodiment has been described with respect to emulation programs, the invention is not limited to such a case; it can be similarly applied to control programs corresponding to other command systems.
As described above, according to the embodiment, by recognizing the external or internal emulation programs and automatically switching the printing dot density to the density designated by the selected program, the optimum printing dot density can be set and printing can be executed, without any operator intervention, for control programs corresponding to various emulation programs or other command systems.
Thus, even emulation programs which previously could not be executed because of a difference in the dot density of the printing section can now be used.
As described above, according to the invention, since the recording dot density is automatically changed to the optimum recording dot density for the control program, such as an emulation program, there is the advantage that a disabled program-execution state or incorrect recording due to a difference in recording dot density can be avoided.
When managing pupils’ behavior, all staff will need to be aware of school policies.
The majority of children and young people do not present challenging behavior, and they attend a range of educational settings in environments which are conducive to learning appropriate behaviors. It is essential to ensure that behavior which does not meet the school/setting’s expectations is responded to through management strategies that do not rely upon any form of physical or abusive …
What the law now provides
Section 91 of the Education and Inspections Act 2006 introduces, for the first time, a statutory power for teachers and certain other school staff to discipline pupils.
The power covers those issues on which schools are most likely to face any legal challenge as regards their disciplinary authority. In particular, the Act specifies a power for teachers and certain other school staff to enforce disciplinary penalties. The penalty could be for failing to follow a school rule or an instruction given by a member of staff of the school, or for any other reason that causes the pupil’s behavior to fall below the standard which could reasonably be expected. | https://www.educationindex.com/essay/Strategies-for-Promoting-Positive-Behaviour-According-with-F35TL7EEY
When a North African pumping station required modernization, ANDRITZ supplied the first high-pressure pumps of their kind to Morocco for the drinking water supply. Enormous savings on energy costs resulted in the investment paying for itself within a short time.
Very limited natural groundwater reserves, strong population growth, various socio-economic developments, rising demand for drinking water, and increasingly long periods of drought are all placing a burden on Morocco’s water resources.
Characterized by a semi-arid climate, the North African kingdom has been concerned with its water supply since the 1960s. At that time, a dedicated program was adopted to mobilize surface water by building numerous reservoirs. These resources currently provide 65% of the country’s drinking water.
Following Morocco’s independence from France in 1956, the water distribution system was nationalized. Various public as well as private companies and institutions in the country’s larger cities, so-called ‘régies’, are responsible for distribution and maintenance.
The national power and water authority ONEE is playing an increasingly important role in ensuring the water supply and hygiene in Morocco. Established in 1995 under the patronage of UNESCO, ONEE is today responsible for the production of 80% of Moroccan drinking water and sells it to the various ‘régies’ and other concession holders. In addition, ONEE provides direct water distribution to around 500 small towns.
Water shortage

In spite of this controlled system, Morocco is listed in the World Map of Water-Stressed Countries from the World Resources Institute as one of the 45 countries that are confronted with an increasing water shortage. Although the African state is not affected by an extreme situation, i.e. prolonged drought, the current status is already giving cause for alarm.
According to information provided by the UN, the annual water resources available in 1960 amounted to 2,500 m3 per head of the population. In the meantime, they have dropped by 80% to 500 m3 per head per year. In addition to households, it is mainly agriculture and industry that are responsible for the excessive consumption.
In order to master these challenges successfully and avert an actual drought, the Moroccan government has developed a strategy extending over several years. This national schedule is intended to secure the water supply and its availability. An investment sum of 200 million Moroccan dirhams (approximately €18 million) has been earmarked for this purpose up to 2030.
In the course of this national strategy, ONEE will develop new water sources on the one hand, and on the other conduct extensive maintenance programs on existing sources in order to guarantee more effective production and distribution in the future.
Additionally, various surveys are being conducted, water quality checks are being improved, and pumps and generators are being adapted and refurbished. As a result of these measures, water deficits in the 10 densely populated areas would be eliminated sustainably by the end of 2018, in 15 more urban centers by 2019, and in the remaining 17 urban centers after that.
Drinking water supply

The focus here lies on the Bab Louta dam and reservoir and the attached pumping plant in the Fez region. Located in the Sebou River basin, the plant has supplied the city of Taza for more than 10 years and also provides parts of Fez, the oldest city in Morocco and a UNESCO World Cultural Heritage site, with 100,000 m3 of drinking water every day. The Sebou river rises in the Atlas Mountains and, in terms of volume, is the largest river in North Africa, with a length of 496 km and a flow of 137 m3 per second.
Thanks to its good existing customer relationship, satisfactory pump deliveries to this station in the past, and the pumps’ outstanding efficiency, ANDRITZ was once again awarded the contract for further modernization of the equipment. The order comprises delivery of a total of three high-pressure pumps from the HP 43 series to the EPC contractor and installer Novelli Pumps.
These are the very first high-pressure pumps of this kind in North Africa. Their uniquely high efficiency of 85% also gives them a distinctly ecological profile: it means enormous savings on energy costs, so the investment pays for itself within a short time. Each pump conveys 113 liters of water per second over a head of 117 m into the drinking water pipes. To achieve this, each unit is fitted with a 400 kW motor.
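As a rough plausibility check on these figures, the standard hydraulic power formula P = rho * g * Q * H, divided by the pump efficiency, gives the required shaft power. The short Python snippet below is only a back-of-the-envelope estimate; real motor sizing also covers mechanical losses, duty margins, and off-design operation, none of which the article details.

```python
RHO = 1000.0  # density of water, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def shaft_power_kw(flow_m3_s, head_m, efficiency):
    """Shaft power needed to pump flow_m3_s against head_m of head."""
    hydraulic_w = RHO * G * flow_m3_s * head_m  # hydraulic power in watts
    return hydraulic_w / efficiency / 1000.0    # shaft power in kW

# Figures from the article: 113 l/s = 0.113 m^3/s, 117 m head, 85% efficiency
print(round(shaft_power_kw(0.113, 117.0, 0.85)))  # ~153 kW per pump
```

On these numbers, each 400 kW motor runs with a comfortable margin over the roughly 153 kW of shaft power the stated duty point demands.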
“I have never seen pumps that operate so smoothly and quietly. What impresses me even more is, of course, their efficiency of 85%. This helps us save a lot of money on energy consumption. In any case, we needed these high-grade pumps! That’s why we had to obtain a special permit from the Ministry of Water in order to avoid a monopoly situation,” explains Youssef Bahri, project manager at ONEE.
100 years of experience

Behind the pumps of the HP 43 series lie more than 100 years of product experience, a technology network operating worldwide, the latest simulation and test stand technology, and several years of development work. The modular machine can be extended in stages as required and is offered in both a horizontal and a vertical design. The arrangement of the suction and discharge branches can be varied depending on the intended purpose. The high-pressure pumps are available in ductile cast iron as well as in high-alloy stainless steel.
Depending on the design selected, the pump shaft runs in grease-lubricated anti-friction bearings or in plain bearings lubricated by the pumped medium. Several options are offered for the shaft seal: in the standard design, the pump is fitted with either a non-relieved or a relieved mechanical shaft seal, depending on the delivery pressure.
A cartridge seal or a stuffing box packing is available as an option. In order to guarantee long pump running times, wear parts such as protection sleeves, impeller rings, and casing rings made of top-grade materials are used.
The pumps were delivered and installed by March 2019. As a result, a sustainable and energy-efficient drinking water supply for the Fez region can be guaranteed from the summer months onward and for the coming decades. | https://www.worldpumps.com/content/features/andritz-fights-drought-with-high-pressure