Australia as a continent is divided into a mainland part (Australia) and an island part comprising Tasmania, New Guinea, Seram and Timor. The Australian continent is the smallest of the seven traditional continents, though Australia as a country is one of the largest in the world. The full official name of the country is the Commonwealth of Australia, and it includes the mainland part of the continent, the island state of Tasmania and numerous small islands. The territory, originally inhabited by Indigenous Aboriginal people, was first reached by Dutch explorers in 1606. Later, in 1770, it was claimed by Great Britain and the penal colony of New South Wales was established. The population has grown steadily ever since and today exceeds 23 million people. Australia has grown into a highly developed liberal democracy with one of the largest economies and a high income per capita. The capital city is Canberra, located in the Australian Capital Territory, where most of the governmental buildings of this constitutional monarchy are found. The country is divided into six states (New South Wales, Victoria, Queensland, South Australia, Western Australia and Tasmania) and two territories (the Northern Territory and the Australian Capital Territory); the territories function much like states, the difference being that the Commonwealth Parliament can override their legislation. Australia sits on the Indo-Australian Plate and is surrounded by several bodies of water (the Indian and Pacific oceans and the Timor, Arafura, Coral and Tasman seas). Because of its vast area and the influence of ocean currents, the Australian climate ranges from tropical in the north, through Mediterranean in the south-west, to temperate in Tasmania and the south-east. Thanks in part to its more than 34,000 kilometres of coastline and its isolated location, Australia is home to some of the world's most famous natural highlights, including the Great Barrier Reef and Ayers Rock (Uluru) in the desert interior known as the outback. Australian cities (Melbourne, Sydney, Perth) have repeatedly been ranked among the most liveable in the world. The country, sometimes called "Down Under", is also home to unique and rare animals such as the kangaroo, koala, platypus and wombat.
https://travellerhints.com/place/au-nz/australia/
SUMMARY OF THE INVENTION The present invention relates to novel arylsulfonamido- substituted hydroxamic acids, as matrix metalloproteinase inhibitors, methods for preparation thereof, pharmaceutical compositions comprising said compounds, a method of inhibiting matrix-degrading metalloproteinases and a method of treating matrix metalloproteinase dependent diseases or conditions in mammals which are responsive to matrix metalloprotease inhibition, using such compounds or pharmaceutical compositions comprising such compounds of the invention. Matrix-degrading metalloproteinases, such as gelatinase, stromelysin and collagenase, are involved in tissue matrix degradation (e. g. collagen collapse) and have been implicated in many pathological conditions involving abnormal connective tissue and basement membrane matrix metabolism, such as arthritis (e.g. osteoarthritis and rheumatoid arthritis), tissue ulceration (e.g. corneal, epidermal and gastric ulceration), abnormal wound healing, periodontal disease, bone disease (e. g. Paget's disease and osteoporosis), tumor metastasis or invasion, as well as HIV-infection (as reported in J. Leuk. Biol. 52 (2): 244-248, 1992). The compounds of the invention are inhibitors of stromelysin, gelatinase and/or collagenase activity, inhibit matrix degradation and are expected to be useful for the treatment of gelatinase, stromelysin and collagenase dependent pathological conditions in mammals, such as those cited above, including rheumatoid arthritis, osteoarthritis, tumor metastasis, periodontal disease, corneal ulceration, as well as the progression of HIV-infection and associated disorders. DETAILED DESCRIPTION OF THE INVENTION Particularly the invention relates to (a) the compounds of formula I ##STR2## wherein Ar is carbocyclic or heterocyclic aryl; R is hydrogen, lower alkyl, carbocyclic aryl-lower alkyl, carbocyclic aryl, heterocyclic aryl, biaryl, biaryl-lower alkyl, heterocyclic aryl- lower alkyl, mono- or poly-halo-lower alkyl, C.sub.3 - C.sub.7 - cycloalkyl, C.sub.3 -C.sub.7 -cycloalkyl-lower alkyl, hydroxy- lower alkyl, acyloxy-lower alkyl, lower alkoxy-lower alkyl, lower alkyl- (thio, sulfinyl or sulfonyl)-lower alkyl, amino, mono- or di-lower alkylamino)- lower alkyl, acylamino-lower alkyl, (N-lower alkyl- piperazino or N-aryl- lower alkylpiperazino)-lower alkyl, or (morpholino, thiomorpholino, piperidino, pyrrolidino, piperidyl or N-lower alkylpiperidyl)-lower alkyl; R.sub.1 is hydrogen, lower alkyl, carbocyclic aryl-lower alkyl, carbocyclic aryl, heterocyclic aryl, biaryl, biaryl-lower alkyl, heterocyclic aryl-lower alkyl, mono- or poly-halo-lower alkyl, C.sub.3 - C.sub.7 -cycloalkyl, C.sub.3 -C.sub.7 -cycloalkyl-lower alkyl, hydroxy- lower alkyl, acyloxy-lower alkyl, lower alkoxy-lower alkyl, (carbocyclic or heterocyclic aryl)-lower alkoxy-lower alkyl, lower alkyl-(thio, sulfinyl or sulfonyl)-lower alkyl, (amino, mono- or di-lower alkylamino)- lower alkyl, (N-lower alkyl-piperazino or N-aryl-lower alkylpiperazino)- lower alkyl, (morpholino, thiomorpholino, piperidino, pyrrolidino, piperidyl or N-lower alkylpiperidyl)-lower alkyl, acylamino-lower alkyl, piperidyl or N-lower alkylpiperidyl; R.sub.2 is hydrogen or lower alkyl; and pharmaceutically acceptable prodrug derivatives; and pharmaceutically acceptable salts thereof. 
(b) Compounds of formula I wherein R and R.sub.1 together with the chain to which they are attached form an tetrahydro isoquinoline, piperidine, oxazolidine, thiazolidine or pyrrolidine ring, each optionally substituted by lower alkyl; and Ar and R.sub.2 have meaning defined above; which can be represented by formula Ia ##STR3## wherein X represents methylene or 1,2-ethylene each optionally substituted by lower alkyl, or X represents oxygen, sulfur, or 1,2-phenylene; pharmaceutically acceptable prodrug derivatives; and pharmaceutically acceptable salts thereof; and also (c) compounds of formula I wherein R.sub.1 and R.sub.2 together with the carbon atom to which they are attached form a ring system selected from C.sub.3 -C.sub.7 -cycloalkane optionally substituted by lower alkyl, oxacyclohexane, thia-cyclohexane, indane, tetralin, piperidine or piperidine substituted on nitrogen by acyl, lower alkyl, carbocyclic or heterocyclic aryl-lower alkyl, (carboxy esterified or amidated carboxy) lower alkyl or by lower alkylsulfonyl; and Ar and R have meaning as defined above; which can be represented by formula Ib ##STR4## wherein Y is a direct bond, C.sub.1 -C.sub.4 -straight chain alkylene optionally substituted by lower alkyl, CH.sub.2 OCH.sub.2, CH. sub.2 SCH.sub.2, 1,2- phenylene, CH.sub.2 -1,2-phenylene or CH.sub.2 N(R. sub.6)-CH.sub.2 in which R.sub.6 represents hydrogen, lower alkanoyl, di lower alkylamino- lower alkanoyl, aroyl, lower alkyl, carbocyclic or heterocylic aryl-lower alkyl, (carboxy, esterified or amidated carboxy)- lower alkyl or lower alkylsulfonyl; pharmaceutically acceptable prodrug derivatives; and pharmaceutically acceptable salts thereof. A preferred embodiment thereof relates to the compounds of formula Ic ##STR5## in which Y' represents oxygen, sulfur, a direct bond, methylene or methylene substituted by lower alkyl, or NR.sub.6 ; R.sub. 6 represents hydrogen, lower alkanoyl, all-lower alkylamino-lower alkanoyl, aryl-lower alkanoyl, lower alkyl, carbocyclic or heterocyclic aryl-lower alkyl, (carboxy, esterified or amidated carboxy)-lower alkyl or lower alkylsulfonyl; Ar and R have meaning as defined herein; pharmaceutically acceptable prodrug derivatives; and pharmaceutically acceptable salts thereof. Preferred are said compounds of formula I, Ia, Ib and Ic wherein Ar is monocyclic carbocyclic aryl such as phenyl or phenyl mono-, di- or tri- substituted by C.sub.1 -C.sub.10 -alkoxy, hydroxy, aryl-lower alkoxy, C. sub.3 -C.sub.7 -cycloalkyl-lower alkoxy, (lower alkyl, aryl- lower alkyl or C.sub.3 -C.sub.7 -cycloalkyl-lower alkyl)-thio, lower alkyloxy- lower alkoxy, halogen, lower alkyl, cyano, nitro, trifluoromethyl, lower alkyl- (sulfinyl or sulfonyl), amino or mono- or di-lower alkylamino; or Ar is phenyl substituted on adjacent carbon atoms by C.sub.1 -C.sub.2 - alkylene-dioxy or oxy-C.sub.2 -C.sub.3 -alkylene; or Ar is heterocyclic monocyclic aryl such as thienyl or thienyl substituted by lower alkyl; the other symbols have meaning as defined; pharmaceutically acceptable prodrug derivatives thereof; and pharmaceutically acceptable salts thereof. 
A particular embodiment of the invention relates to the compounds of formula II ##STR6## wherein R is hydrogen, lower alkyl, carbocyclic aryl-lower alkyl, carbocyclic aryl, heterocyclic aryl, biaryl, biaryl-lower alkyl, heterocyclic aryl- lower alkyl, mono- or poly-halo-lower alkyl, C.sub.3 - C.sub.7 - cycloalkyl, C.sub.3 -C.sub.7 -cycloalkyl-lower alkyl, hydroxy- lower alkyl, acyloxy-lower alkyl, lower alkoxy-lower alkyl, lower alkyl- (thio, sulfinyl or sulfonyl)-lower alkyl, amino, mono- or di-lower alkylamino)- lower alkyl, acylamino-lower alkyl, (N-lower alkyl- piperazino or N-aryl- lower alkylpiperazino)-lower alkyl, or (morpholino, thiomorpholino, piperidino, pyrrolidino or N-lower alkylpiperidyl)-lower alkyl; R.sub.1 is hydrogen, lower alkyl, carbocyclic aryl-lower alkyl, carbocyclic aryl, heterocyclic aryl, biaryl, biaryl-lower alkyl, heterocyclic aryl-lower alkyl, mono- or poly-halo-lower alkyl, C.sub.5 - C.sub.7 -cycloalkyl, C.sub.5 -C.sub.7 -cycloalkyl-lower alkyl, hydroxy- lower alkyl, acyloxy-lower alkyl, lower alkoxy-lower alkyl, lower alkyl- (thio, sulfinyl or sulfonyl)-lower alkyl, (amino, mono- or di-lower alkylamino)-lower alkyl, (N-lower alkyl-piperazino or N-aryl-lower alkylpiperazino)-lower alkyl, (morpholino, thiomorpholino, piperidino, pyrrolidino, piperidyl or N-lower alkylpiperidyl)-lower alkyl, piperidyl, N-lower alkylpiperidyl, or acylamino-lower alkyl represented by R.sub.3 - CONH-lower alkyl; R.sub.2 is hydrogen; R.sub.3 in R.sub.3 -CONH-lower alkyl is lower alkyl, carbocyclic or heterocyclic aryl, di-lower alkylamino, N-lower alkylpiperazino, morpholino, thiomorpholino, piperidino, pyrrolidino, N- alkylpiperidyl, or (di-lower alkylamino, N-lower alkylpiperazino, morpholino, thiomorpholino, piperidino, pyrrolidino, pyridyl or N-lower alkylpiperidyl)-lower alkyl; R.sub.4 is hydrogen, lower alkoxy, hydroxy, aryl-lower alkoxy, lower alkylthio or aryl-lower alkylthio, lower alkyloxy-lower alkoxy, halogen, trifluoromethyl, lower alkyl, nitro or cyano; R.sub.5 is hydrogen, lower alkyl or halogen; or R.sub.4 and R.sub.5 together on adjacent carbon atoms represent methylenedioxy, ethylenedioxy, oxyethylene, or oxypropylene; pharmaceutically acceptable prodrug derivatives; and pharmaceutically acceptable salts thereof. Another preferred embodiment of the invention relates to the compounds of formula II wherein R and R.sub.1 together with the chain to which they are attached form an tetrahydro isoquinoline, piperidine, thiazolidine or pyrrolidine ring; and R.sub.2, R.sub.4 and R.sub.5 have meaning as defined above; pharmaceutically acceptable prodrug derivatives; and pharmaceutically acceptable salts thereof. Such compounds correspond to compounds of formula Ia wherein Ar is optionally substituted phenyl as defined above. Another preferred embodiment of the invention relates to the compounds of formula II wherein R.sub.1 and R.sub.2 together with the carbon atom to which they are attached form a ring system selected from cyclohexane, cyclopentane, oxacyclohexane, thiacyclohexane, indane, tetralin, piperidine or piperidine substituted on nitrogen by acyl, lower alkyl, carbocyclic or heterocyclic aryl-lower alkyl, or by lower alkylsulfonyl; and R, R.sub.4 and R.sub.5 have meaning as defined above; pharmaceutically acceptable prodrug derivatives; and pharmaceutically acceptable salts thereof. Such compounds correspond to compounds of formula Ib wherein Ar is optionally substituted phenyl as defined above. 
Particularly preferred are the compounds of formula III ##STR7## wherein R represents lower alkyl, trifluoromethyl, C.sub.5 -C. sub.7 - cycloalkyl, biaryl, carbocyclic monocyclic aryl or heterocyclic monocyclic aryl; R.sub.1 represents hydrogen, lower alkyl, C.sub.5 -C. sub.7 -cycloalkyl, monocyclic carbocyclic aryl, carbocyclic aryl-lower alkyl, heterocyclic aryl-lower alkyl, lower alkoxy-lower alkyl, lower alkyl-(thio, sulfinyl or sulfonyl)-lower alkyl, di-lower alkylamino- lower alkyl, (N-lower alkylpiperazino, morpholino, thiomorpholino, piperidino or pyrrolidino)-lower alkyl or R.sub.3 -CONH-lower alkyl; R. sub.3 represents lower alkyl, carbocyclic aryl, heterocyclic aryl, di- lower alkylamino, N-lower alkylpiperazino, morpholino, thiomorpholino, piperidino, pyrrolidino, N-alkylpiperidyl, or (di-lower alkylamino, N- lower alkylpiperazino, morpholino, thiomorpholino, piperidino, pyrrolidino or N-alkylpiperidyl)-lower alkyl; R.sub.4 represents lower alkoxy or aryl-lower alkoxy; pharmaceutically acceptable prodrug derivatives; and pharmaceutically acceptable salts thereof. Further preferred are compounds of formula III wherein R represents monocyclic carbocyclic aryl or monocyclic heterocyclic aryl; R. sub.1 and R.sub.4 have meaning as defined above; pharmaceutically acceptable prodrug derivatives; and pharmaceutically acceptable salts thereof. More particularly preferred are said compounds of formula III wherein R represents heterocyclic monocyclic aryl selected from tetrazolyl, triazolyl, thiazolyl, imidazolyl and pyridyl, each optionally substituted by lower alkyl; or R represents phenyl or phenyl substituted by lower alkyl, lower alkoxy, halogen or trifluoromethyl; R. sub.1 represents lower alkyl, cyclohexyl, or R.sub.3 -CONH-lower alkyl wherein R.sub.3 represents (di-lower alkylamino, N-lower alkylpiperazino, morpholino, thiomorpholino, piperidino, pyrrolidino or N- alkylpiperidyl)- lower alkyl; and R.sub.4 represents lower alkoxy or aryl- lower alkoxy; pharmaceutically acceptable prodrug derivatives; and pharmaceutically acceptable salts thereof. A further preferred embodiment relates to said compounds of formula III wherein R represents 2- or 3-pyridyl or phenyl; R.sub.1 represents C. sub.1 -C.sub.4 -alkyl, cyclohexyl, or R.sub.3 -CONH-C.sub.1 -C.sub.4 - alkyl wherein R.sub.3 represents di-C.sub.1 -C.sub.4 - alkylamino-C.sub.1 -C.sub.4 -lower alkyl; and R.sub.4 represents lower alkoxy; pharmaceutically acceptable prodrug derivatives; and pharmaceutically acceptable salts thereof. Pharmaceutically acceptable prodrug derivatives are those that may be convertible by solvolysis or under physiological conditions to the free hydroxamic acids of the invention and represent such hydroxamic acids in which the CONHOH group is derivatized in form of an O-acyl or an optionally substituted O-benzyl derivative. The compounds of the invention depending on the nature of the substituents, possess one or more asymmetric carbon atoms. The resulting diastereoisomers and enantiomers are encompassed by the instant invention. Preferred are the compounds of the invention wherein the asymmetric carbon in the above formulae (to which are attached R.sub.1 and/or R.sub. 2) corresponds to that of a D-aminoacid precursor and is assigned the (R)- configuration. The general definitions used herein have the following meaning within the scope of the present invention, unless otherwise specified. 
The term "lower" referred to above and hereinafter in connection with organic radicals or compounds respectively defines such as branched or unbranched with up to and including 7, preferably up to and including 4 and advantageously one or two carbon atoms. A lower alkyl group is branched or unbranched and contains 1 to 7 carbon atoms, preferably 1-4 carbon atoms, and represents for example methyl, ethyl, propyl, butyl, isopropyl, isobutyl and the like. A lower alkoxy (or alkyloxy) group preferably contains 1-4 carbon atoms, advantageously 1-3 carbon atoms, and represents for example, ethoxy, propoxy, isopropoxy, or most advantageously methoxy. Halogen (halo) preferably represents chloro or fluoro but may also be bromo or iodo. Mono- or poly-halo-lower alkyl represents lower alkyl preferably substituted by one, two or three halogens, preferably fluoro or chloro, e. g. trifluoromethyl or trifluoroethyl. Aryl represents carbocyclic or heterocyclic aryl. Prodrug acyl derivatives are preferably those derived from an organic carbonic acid, an organic carboxylic acid or a carbamic acid. An acyl derivative which is derived from an organic carboxylic acid is, for example, lower alkanoyl, phenyl-lower alkanoyl or unsubstituted or substituted aroyl, such as benzoyl. An acyl derivative which is derived from an organic carbonic acid is, for example, alkoxycarbonyl which is unsubstituted or substituted by an aromatic radical or is cycloalkoxycarbonyl which is unsubstituted or substituted by lower alkyl. An acyl derivative which is derived from a carbamic acid is, for example, amino-carbonyl which is substituted by lower alkyl, aryl- lower alkyl, aryl, lower alkylene or lower alkylene interrupted by O or S. Prodrug optionally substituted O-benzyl derivatives are preferably benzyl or benzyl mono-, di-, or tri-substituted by e.g. lower alkyl, lower alkoxy, amino, nitro, halogen, trifluoromethyl and the like. Carbocyclic aryl represents monocyclic or bicyclic aryl, for example phenyl or phenyl mono-, di- or tri-substituted by one, two or three radicals selected from lower alkyl, lower alkoxy, hydroxy, halogen, cyano and trifluoromethyl or phenyl disubstituted on adjacent carbon atoms by lower-alkylenedioxy, such as methylenedioxy; or 1- or 2- naphthyl. Preferred is phenyl or phenyl monosubstituted by lower alkoxy, halogen or trifluoromethyl. Heterocyclic aryl represents monocyclic or bicyclic heteroaryl, for example pyridyl, quinolyl, isoquinolyl, benzothienyl, benzofuranyl, benzopyranyl, benzothiopyranyl, furanyl, pyrrolyl, thiazolyl, oxazolyl, isoxazolyl, triazolyl, tetrazolyl, pyrrazolyl, imidazolyl, thienyl, or any said radical substituted by lower alkyl or halogen. Pyridyl represents 2-, 3- or 4-pyridyl, advantageously 2- or 3-pyridyl. Thienyl represents 2- or 3-thienyl, advantageously 2-thienyl. Quinolyl represents preferably 2-, 3- or 4-quinolyl, advantageously 2-quinolyl. lsoquinolyl represents preferably 1-, 3- or 4-isoquinolyl. Benzopyranyl, benzothiopyranyl represent preferably 3-benzopyranyl or 3- benzothiopyranyl, respectively. Thiazolyl represents preferably 2- or 4- thiazolyl, advantageously 4-thiazolyl. Triazolyl is preferably 1-, 2- or 5-(1,2,4-triazolyl). Tetrazolyl is preferably 5-tetrazolyl. Imidazolyl is preferably 4-imidazolyl. Biaryl is preferably carbocyclic biaryl, e.g. biphenyl, namely 2, 3 or 4-biphenyl, advantageously 4-biphenyl, each optionally substituted by e.g. lower alkyl, lower alkoxy, halogen, trifluoromethyl or cyano. 
C.sub.3 -C.sub.7 -Cycloalkyl represents a saturated cyclic hydrocarbon optionally substituted by lower alkyl which contains 3 to 7 ring carbons and is advantageously cyclopentyl or cyclohexyl optionally substituted by lower alkyl. Carbocyclic aryl-lower alkyl represents preferably straight chain or branched aryl-C.sub.1 -C.sub.4 -alkyl in which carbocyclic aryl has meaning as defined above, e.g. benzyl or phenyl-(ethyl, propyl or butyl), each unsubstituted or substituted on phenyl ring as defined under carbocyclic aryl above, advantageously optionally substituted benzyl. Heterocyclic aryl-lower alkyl represents preferably straight chain or branched heterocyclic aryl-C.sub.1 -C.sub.4 -alkyl in which heterocyclic aryl has meaning as defined above, e.g. 2-, 3- or 4- pyridylmethyl or (2-, 3- or 4-pyridyl)-(ethyl, propyl or butyl); or 2- or 3-thienylmethyl or (2- or 3-thienyl)-(ethyl, propyl or butyl); 2-, 3- or 4-quinolylmethyl or (2-, 3- or 4-quinolyl)-(ethyl, propyl or butyl); or 2- or 4- thiazolylmethyl or (2- or 4-thiazolyl)-(ethyl, propyl or butyl); and the like. Cycloalkyl-lower alkyl represents preferably (cyclopentyl- or cyclohexyl)-(methyl or ethyl), and the like. Biaryl-lower alkyl represents preferably 4-biphenylyl-(methyl or ethyl and the like. Acyl is derived from an organic carboxylic acid, carbonic acid or carbamic acid. Acyl represents preferably lower alkanoyl, carbocyclic aryl- lower alkanoyl, lower alkoxycarbonyl, aroyl, di-lower alkylaminocarbonyl, di- lower alkylamino-lower alkanoyl, and the like. Acylamino represents preferably lower alkanoylamino, lower alkoxycarbonylamino and the like. Acylamino-lower alkyl in R and R.sub.1 is R.sub.3 -CONH-lower alkyl in which R.sub.3 represents preferably lower alkyl, lower alkoxy, aryl-lower alkyl, aryl-lower alkoxy, carbocyclic or heterocyclic aryl, di- lower alkylamino, N-lower alkylpiperazino, morpholino, thiomorpholino, piperidino, pyrrolidino, N-alkylpiperidyl, or (di-lower alkylamino, N- lower alkylpiperazino, morpholino, thiomorpholino, piperidino, pyrrolidino, pyridyl or N-lower alkylpiperidyl)-lower alkyl, and the like. Lower alkanoyl represents preferably C.sub.2 -C.sub.4 -alkanoyl such as acetyl or propionyl. Aroyl represents preferably benzoyl or benzoyl mono- or di- substituted by one or two radicals selected from lower alkyl, lower alkoxy, halogen, cyano and trifluoromethyl; or 1- or 2-naphthoyl; also pyridylcarbonyl. Lower alkoxycarbonyl represents preferably C.sub.1 -C.sub.4 - alkoxycarbonyl, e.g. ethoxycarbonyl. Lower alkylene represents either straight chain or branched alkylene of 1 to 7 carbon atoms and represents preferably straight chain alkylene of 1 to 4 carbon atoms, e.g. a methylene, ethylene, propylene or butylene chain, or said methylene, ethylene, propylene or butylene chain mono- substituted by C.sub.1 -C.sub.3 -alkyl (advantageously methyl) or disubstituted on the same or different carbon atoms by C.sub. 1 -C.sub.3 - alkyl (advantageously methyl), the total number of carbon atoms being up to and including 7. Esterified carboxyl is for example lower alkoxycarbonyl, benzyloxycarbonyl and the like. Amidated carboxyl is for example aminocarbonyl, mono- or di- lower alkylaminocarbonyl. 
Pharmaceutically acceptable salts of the acidic compounds of the invention are salts formed with bases, namely cationic salts such as alkali and alkaline earth metal salts, such as sodium, lithium, potassium, calcium and magnesium salts, as well as ammonium salts, such as ammonium, trimethylammonium, diethylammonium, and tris-(hydroxymethyl)methylammonium salts. Similarly, acid addition salts, such as those of mineral acids, organic carboxylic acids and organic sulfonic acids, e.g. hydrochloric acid, methanesulfonic acid or maleic acid, are also possible provided a basic group, such as pyridyl, constitutes part of the structure. The novel compounds of the invention exhibit valuable pharmacological properties in mammals and are particularly useful as inhibitors of matrix-degrading metalloproteinase enzymes, such as stromelysin and/or collagenase, and are therefore particularly useful in mammals as agents for the treatment of e.g. osteoarthritis, rheumatoid arthritis, corneal ulceration, periodontal disease, tumor metastasis and HIV-infection related disorders. Illustrative of the matrix-degrading metalloproteinase inhibitory activity, compounds of the invention prevent the degradation of cartilage caused by exogenous or endogenous stromelysin in mammals. They inhibit, e.g., the stromelysin-induced degradation of aggrecan (large aggregating proteoglycan), link protein or type IX collagen in mammals. Beneficial effects are evaluated in pharmacological tests generally known in the art, and as illustrated herein. The above-cited properties are demonstrable in in vitro and in vivo tests, using advantageously mammals, e.g. rats, guinea pigs, dogs, rabbits, or isolated organs and tissues, as well as mammalian enzyme preparations. Said compounds can be applied in vitro in the form of solutions, e.g. preferably aqueous solutions, and in vivo either enterally or parenterally, advantageously orally, e.g. as a suspension or in aqueous solution. The dosage in vitro may range between about 10.sup.-5 molar and 10.sup.-10 molar concentrations. The dosage in vivo may range, depending on the route of administration, between about 0.1 and 50 mg/kg. One test to determine the inhibition of stromelysin activity is based on its hydrolysis of Substance P, using a modified procedure of Harrison et al. (Harrison, R. A., Teahan, J., and Stein, R., A semicontinuous, high-performance chromatography-based assay for stromelysin, Anal. Biochem. 180, 110-113 (1989)). In this assay, Substance P is hydrolyzed by recombinant human stromelysin to generate a fragment, Substance P 7-11, which can be quantitated by HPLC. In a typical assay, a 10 mM stock solution of a compound to be tested is diluted in the assay buffer to 50 µM, mixed 1:1 with 8 µg recombinant human stromelysin (mol. wt. 45-47 kDa, 2 Units; where 1 Unit produces 20 mmoles of Substance P 7-11 in 30 minutes) and incubated along with 0.5 mM Substance P in a final volume of 0.125 ml for 30 minutes at 37° C. The reaction is stopped by adding 10 mM EDTA and Substance P 7-11 is quantified on RP-8 HPLC. The IC.sub.50 for inhibition of stromelysin activity and the K.sub.i are calculated from the control reaction without the inhibitor. Illustrative of the invention, N-hydroxy-2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-3-methylbutanamide hydrochloride exhibits a K.sub.i of 17 nM in this assay. Stromelysin activity can also be determined using human aggrecan as a substrate.
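The description states that the IC.sub.50 and K.sub.i are calculated from the control reaction run without inhibitor, but does not spell out the arithmetic. The short sketch below shows one conventional way such numbers are derived: fractional activity is interpolated to the 50% point, and K.sub.i is then estimated from IC.sub.50 via the Cheng-Prusoff relation under an assumed competitive mechanism. The activity readings and the Substance P K.sub.m used here are illustrative placeholders, not values taken from this disclosure.

```python
# Hedged sketch: estimate IC50 by log-linear interpolation of fractional
# activity versus inhibitor concentration, then convert IC50 to Ki with the
# Cheng-Prusoff equation, assuming competitive inhibition. The Km below is a
# placeholder, not a value given in the patent.
import math

def ic50_from_activities(concs_nM, activities, control_activity):
    """Return the concentration giving 50% of the control activity."""
    frac = [a / control_activity for a in activities]
    for (c1, f1), (c2, f2) in zip(zip(concs_nM, frac), zip(concs_nM[1:], frac[1:])):
        if f1 >= 0.5 >= f2:  # the 50% point lies between c1 and c2
            t = (0.5 - f1) / (f2 - f1)
            return 10 ** (math.log10(c1) + t * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% inhibition not bracketed by the tested concentrations")

def ki_cheng_prusoff(ic50_nM, substrate_uM, km_uM):
    """Ki = IC50 / (1 + [S]/Km) for a competitive inhibitor (assumption)."""
    return ic50_nM / (1.0 + substrate_uM / km_uM)

# Hypothetical peak areas of the Substance P 7-11 fragment (control = 100).
concs = [1, 10, 100, 1000]   # nM inhibitor
areas = [95, 70, 30, 5]
ic50 = ic50_from_activities(concs, areas, control_activity=100)
ki = ki_cheng_prusoff(ic50, substrate_uM=500, km_uM=800)  # 0.5 mM Substance P; Km assumed
print(f"IC50 ~ {ic50:.0f} nM, Ki ~ {ki:.0f} nM")
```

With these placeholder readings the script reports an IC.sub.50 near 32 nM and a K.sub.i near 19 nM, i.e. the same order of magnitude as the 17 nM value quoted for the exemplified compound; the real values depend, of course, on the measured chromatographic data.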
This assay allows the confirmation in vitro that a compound can inhibit the action of stromelysin on its highly negatively-charged natural substrate, aggrecan (large aggregating proteoglycan). Within the cartilage, proteoglycan exists as an aggregate bound to hyaluronate. Human proteoglycan aggregated to hyaluronate is used as the enzyme substrate. The assay is set up in 96-well microtiter plates, allowing rapid evaluation of compounds. The assay has three major steps: 1) Plates are coated with hyaluronate (human umbilical cord, 400 µg/ml), blocked with BSA (5 mg/ml), and then proteoglycan (human articular cartilage D1, chondroitinase ABC-digested, 2 mg/ml) is bound to the hyaluronate. Plates are washed between each step. 2) Buffer + inhibitor (1 to 5,000 nM) + recombinant human stromelysin (1-3 Units/well) are added to the wells. The plates are sealed with tape and incubated overnight at 37° C. The plates are then washed. 3) A primary (3B3) antibody (mouse IgM, 1:10,000) is used to detect remaining fragments. A secondary antibody, peroxidase-linked anti-IgM, is bound to the primary antibody. OPD is then added as a substrate for the peroxidase and the reaction is stopped with sulfuric acid. The IC.sub.50 for inhibition of stromelysin activity is graphically derived and the K.sub.i is calculated. Illustrative of the invention, N-hydroxy-2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-3-methylbutanamide hydrochloride exhibits a K.sub.i of 55 nM in this assay. Collagenase activity is determined as follows: ninety-six-well, flat-bottom microtiter plates are first coated with bovine type I collagen (35 µg/well) over a two-day period at 30° C. using a humidified and then dry atmosphere; plates are rinsed, air dried for 3-4 hours, sealed with Saran wrap and stored in a refrigerator. Human recombinant fibroblast collagenase and a test compound (or buffer) are added to the wells (total volume = 0.1 ml) and plates are incubated for 2 hours at 35° C. under humidified conditions; the amount of collagenase used per well is that causing approximately 80% of maximal digestion of collagen. The incubation media are removed from the wells, which are then rinsed with buffer, followed by water. Coomassie blue stain is added to the wells for 25 minutes, removed, and the wells are again rinsed with water. Sodium dodecyl sulfate (20% in 50% dimethylformamide in water) is added to solubilize the remaining stained collagen and the optical density at 570 nm wavelength is measured. The decrease in optical density due to collagenase (from that of collagen without enzyme) is compared to the decrease in optical density due to the enzyme in the presence of test compound, and the percent inhibition of enzyme activity is calculated. IC.sub.50 values are determined from a range of concentrations of inhibitors (4-5 concentrations, each tested in triplicate), and K.sub.i values are calculated. Illustrative of the invention, N-hydroxy-2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-3-methylbutanamide hydrochloride exhibits a K.sub.i of 380 nM in this assay.
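The collagenase readout just described reduces to comparing the enzyme-caused loss of stained collagen (the drop in OD at 570 nm) with and without the test compound. A minimal sketch of that percent-inhibition arithmetic follows; every optical-density value in it is an invented placeholder, since the patent describes only the method, not raw data.

```python
# Hedged sketch of the percent-inhibition arithmetic for the collagen-coated
# plate assay: collagenase removes stained collagen and lowers OD570, so
# inhibition is the fraction of the enzyme-caused OD drop that is prevented.
# Every reading below is an invented placeholder, not data from the patent.

def percent_inhibition(od_no_enzyme, od_enzyme, od_enzyme_plus_compound):
    drop_enzyme = od_no_enzyme - od_enzyme                      # maximal digestion
    drop_with_compound = od_no_enzyme - od_enzyme_plus_compound
    return 100.0 * (1.0 - drop_with_compound / drop_enzyme)

od_collagen_only = 1.90      # no enzyme: all stained collagen retained
od_enzyme_only = 0.95        # ~80% of maximal digestion, as specified
readings = {50: 1.10, 200: 1.35, 800: 1.60, 3200: 1.80}  # nM inhibitor -> mean OD570 (triplicates)

for conc_nM in sorted(readings):
    pct = percent_inhibition(od_collagen_only, od_enzyme_only, readings[conc_nM])
    print(f"{conc_nM:>5d} nM: {pct:5.1f}% inhibition")
# The IC50 would then be read off this concentration-response series and a
# Ki calculated, as the text describes.
```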
The effect of compounds of the invention in vivo can be determined in rabbits. Typically, four rabbits are dosed orally with a compound up to four hours before being injected intra-articularly in both knees (N=8) with 40 Units of recombinant human stromelysin dissolved in 20 mM Tris, 10 mM CaCl.sub.2, and 0.15 M NaCl at pH 7.5. Two hours later the rabbits are sacrificed, synovial lavage is collected, and keratan sulfate (KS) and sulfated glycosaminoglycan (S-GAG) fragments released into the joint are quantitated. Keratan sulfate is measured by an inhibition ELISA using the method of Thonar (Thonar, E. J.-M. A., Lenz, M. E., Klintworth, G. K., Caterson, B., Pachman, L. M., Glickman, P., Katz, R., Huff, J., Kuettner, K. E., Quantitation of keratan sulfate in blood as a marker of cartilage catabolism, Arthr. Rheum. 28, 1367-1376 (1985)). Sulfated glycosaminoglycans are measured by first digesting the synovial lavage with Streptomyces hyaluronidase and then measuring DMB dye binding using the method of Goldberg (Goldberg, R. L. and Kolibas, L., An improved method for determining proteoglycan synthesized by chondrocytes in culture, Connect. Tiss. Res. 24, 265-275 (1990)). For an i.v. study, a compound is solubilized in 1 ml of PEG-400, and for a p.o. study, a compound is administered in 5 ml of fortified corn starch per kilogram of body weight. Illustrative of the invention, N-hydroxy-2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-3-methylbutanamide hydrochloride produces a 72% and 70% inhibition, respectively, in the release of KS and S-GAG fragments into the joint when given to rabbits at a dose of 30 mg/kg, 4 hours prior to the injection of human recombinant stromelysin. The compounds of formula I can be prepared by condensing a carboxylic acid of formula IV ##STR8## or a reactive functional derivative thereof, wherein R, R.sub.1, R.sub.2 and Ar have meaning as defined above, with hydroxylamine of formula V NH.sub.2-OH (V), optionally in protected form, or a salt thereof. The above process is carried out while, if necessary, temporarily protecting any interfering reactive group(s), and then liberating the resulting compound of the invention; and, if required or desired, a resulting compound of the invention is converted into another compound of the invention, and/or, if desired, a resulting free compound is converted into a salt or a resulting salt is converted into a free compound or into another salt; and/or a mixture of isomers or racemates obtained is separated into the single isomers or racemates; and/or, if desired, a racemate is resolved into the optical antipodes. In starting compounds and intermediates which are converted to the compounds of the invention in a manner described herein, functional groups present, such as amino, carboxyl and hydroxy groups, are optionally protected by conventional protecting groups that are common in preparative organic chemistry. Protected amino, carboxyl and hydroxy groups are those that can be converted under mild conditions into the free amino, carboxyl and hydroxy groups without the molecular framework being destroyed or other undesired side reactions taking place. The purpose of introducing protecting groups is to protect the functional groups from undesired reactions with reaction components under the conditions used for carrying out a desired chemical transformation. The need and choice of protecting groups for a particular reaction is known to those skilled in the art and depends on the nature of the functional group to be protected (hydroxy group, amino group, etc.), the structure and stability of the molecule of which the substituent is a part, and the reaction conditions. Well-known protecting groups that meet these conditions and their introduction and removal are described, for example, in J. F. W.
McOmie, "Protective Groups in Organic Chemistry", Plenum Press, London, New York, 1973, T. W. Greene, "Protective Groups in Organic Synthesis", Wiley, New York, 1991. In the processes cited herein, reactive functional derivatives of carboxylic acids represent, for example, anhydrides especially mixed anhydrides, acid halides, acid azides, lower alkyl esters and activated esters thereof. Mixed anhydrides are preferably such from pivalic acid, or a lower alkyl (ethyl, isobutyl) hemiester of carbonic acid; acid halides are for example chlorides or bromides; activated esters for example succinimido, phthalimido or 4-nitrophenyl esters; lower alkyl esters are for example the methyl or ethyl esters. Also, a reactive esterified derivative of an alcohol in any of the reactions cited herein represents said alcohol esterified by a strong acid, especially a strong inorganic acid, such as a hydrohalic acid, especially hydrochloric, hydrobromic or hydroiodic acid, or sulphuric acid, or by a strong organic acid, especially a strong organic sulfonic acid, such as an aliphatic or aromatic sulfonic acid, for example methanesulfonic acid, 4-methylbenzenesulfonic acid or 4- bromobenzenesulfonic acid. A said reactive esterified derivative is especially halo, for example chloro, bromo or iodo, or aliphatically or aromatically substituted sulfonyloxy, for example methanesulfonyloxy, 4- methylbenzenesulfonyloxy (tosyloxy). In the above processes for the synthesis of compounds of the invention can be carried out according to methodology generally known in the art for the preparation of hydroxamic acids and derivatives thereof. The synthesis according to the above process (involving the condensation of a free carboxylic acid of formula IV with an optionally hydroxy protected hydroxylamine derivative of formula V can be carried out in the presence of a condensing agent, e.g. 1,1'-carbonyldiimidazole, or N-(dimethylaminopropyl)-N'-ethylcarbodiimide or dicyclohexylcarbodiimide, with or without 1-hydroxybenzotriazole in an inert polar solvent, such as dimethylformamide or dichloromethane, preferably at room temperature. The synthesis involving the condensation of a reactive functional derivative of an acid of formula IV as defined above, e.g. an acid chloride or mixed anhydride with optionally hydroxy protected hydroxylamine, or a salt thereof, in presence of a base such as triethylamine can be carded out, at a temperature ranging preferably from about -78° C. to +75° C., in an inert organic solvent such as dichloromethane or toluene. Protected forms of hydroxylamine (of formula V) in the above process are those wherein the hydroxy group is protected for example as a t-butyl ether, a benzyl ether or tetrahydropyranyl ether. Removal of said protecting groups is carried out according to methods well known in the art, e.g. hydrogenolysis or acid hydrolysis. Hydroxylamine is preferably generated in situ from a hydroxylamine salt, such as hydroxylamine hydrochloride. The starting carboxylic acids of formula IV can be prepared as follows: An amino acid of formula VI ##STR9## wherein R.sub.1 and R.sub. 2 have meaning as defined herein, is first esterified with a lower alkanol, e.g. methanol, in the presence of e.g. thionyl chloride to obtain an aminoester which is treated with a reactive functional derivative of the appropriate arylsulfonic acid of the formula VII ArSO.sub.3 H (VII) wherein Ar has meaning as defined hereinabove, e.g. 
with the arylsulfonyl chloride, in the presence of a suitable base such as triethylamine using a polar solvent such as tetrahydrofuran, toluene, acetonitrile to obtain a compound of the formula VIII ##STR10## wherein R. sub.1, R.sub.2 and Ar have meaning as defined herein and R.sub.6 is a protecting group, e.g. lower alkyl. Treatment thereof with a reactive esterified derivative of the alcohol of the formula IX R-CH.sub.2 OH (IX) wherein R has meaning as defined herein, such as the halide, e. g. the chloride, bromide or iodide derivative thereof, in the presence of an appropriate base, such as potassium carbonate or sodium hydride, in a polar solvent such as dimethylformamide. The resulting compound corresponding to an ester of a compound of formula IV can then be hydrolyzed to the acid of formula IV, using standard mild methods of ester hydrolysis, preferably under acidic conditions. For compounds of formula Ia (wherein R and R.sub.1 of formula I are combined) the starting materials are prepared by treating a carboxylic acid of formula X ##STR11## or an ester thereof, wherein R.sub.2 and X have meaning as defined above, with a reactive functional derivative of a compound of the formula ArSO.sub.3 H (VII) under conditions described for the preparation of a compound of formula VIII. The starting materials of formula VI, VII, IX and X are either known in the art, or can be prepared by methods well-known in the art or as described herein. The above-mentioned reactions are carried out according to standard methods, in the presence or absence of diluent, preferably such as are inert to the reagents and are solvents thereof, of catalysts, condensing or said other agents respectively and/or inert atmospheres, at low temperatures, room temperature or elevated temperatures (preferably at or near the boiling point of the solvents used), and at atmospheric or super- atmospheric pressure. The preferred solvents, catalysts and reaction conditions are set forth in the appended illustrative examples. The invention further includes any variant of the present processes, in which an intermediate product obtainable at any stage thereof is used as starting material and the remaining steps are carried out, or the process is discontinued at any stage thereof, or in which the starting materials are formed in situ under the reaction conditions, or in which the reaction components are used in the form of their salts or optically pure antipodes. Compounds of the invention and intermediates can also be converted into each other according to methods generally known per se. The invention also relates to any novel starting materials and processes for their manufacture. Depending on the choice of starting materials and methods, the new compounds may be in the form of one of the possible isomers or mixtures thereof, for example, as substantially pure geometric (cis or trans) isomers, optical isomers (antipodes), racemates, or mixtures thereof. The aforesaid possible isomers or mixtures thereof are within the purview of this invention. Any resulting mixtures of isomers can be separated on the basis of the physico-chemical differences of the constituents, into the pure geometric or optical isomers, diastereoisomers, racemates, for example by chromatography and/or fractional crystallization. Any resulting racemates of final products or intermediates can be resolved into the optical antipodes by known methods, e.g. 
by separation of the diastereoisomeric salts thereof, obtained with an optically active acid or base, and liberating the optically active acidic or basic compound. The hydroxamic acids or carboxylic acid intermediates can thus be resolved into their optical antipodes e.g. by fractional crystallization of d- or 1-(alpha-methylbenzylamine, cinchonidine, cinchonine, quinine, quinidine, ephedrine, dehydroabietylamine, brucine or strychnine)-salts. Finally, acidic compounds of the invention are either obtained in the free form, or as a salt thereof. Acidic compounds of the invention may be converted into salts with pharmaceutically acceptable bases, e.g. an aqueous alkali metal hydroxide, advantageously in the presence of an ethereal or alcoholic solvent, such as a lower alkanol. From the solutions of the latter, the salts may be precipitated with ethers, e.g. diethyl ether. Resulting salts may be converted into the free compounds by treatment with acids. These or other salts can also be used for purification of the compounds obtained. In view of the close relationship between the free compounds and the compounds in the form of their salts, whenever a compound is referred to in this context, a corresponding salt is also intended, provided such is possible or appropriate under the circumstances. The compounds, including their salts, can also be obtained in the form of their hydrates, or include other solvents used for their crystallization. The pharmaceutical compositions according to the invention are those suitable for enteral, such as oral or rectal, transdermal and parenteral administration to mammals, including man, to inhibit matrix- degrading metalloproteinases, and for the treatment of disorders responsive thereto, comprising an effective amount of a pharmacologically active compound of the invention, alone or in combination, with one or more pharmaceutically acceptable carriers. The pharmacologically active compounds of the invention are useful in the manufacture of pharmaceutical compositions comprising an effective amount thereof in conjunction or admixture with excipients or carriers suitable for either enteral or parenteral application. Preferred are tablets and gelatin capsules comprising the active ingredient together with a) diluents, e.g. lactose, dextrose, sucrose, mannitol, sorbitol, cellulose and/or glycine; b) lubricants, e.g. silica, talcum, stearic acid, its magnesium or calcium salt and/or polyethyleneglycol; for tablets also c) binders e.g. magnesium aluminum silicate, starch paste, gelatin, tragacanth, methylcellulose, sodium carboxymethylcellulose and or polyvinylpyrrolidone; if desired d) disintegrants, e.g. starches, agar, alginic acid or its sodium salt, or effervescent mixtures; and/or e) absorbants, colorants, flavors and sweeteners. Injectable compositions are preferably aqueous isotonic solutions or suspensions, and suppositories are advantageously prepared from fatty emulsions or suspensions. Said compositions may be sterilized and/or contain adjuvants, such as preserving, stabilizing, wetting or emulsifying agents, solution promoters, salts for regulating the osmotic pressure and/or buffers. In addition, they may also contain other therapeutically valuable substances. Said compositions are prepared according to conventional mixing, granulating or coating methods, respectively, and contain about 0.1 to 75%, preferably about 1 to 50%, of the active ingredient. 
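To put the stated 0.1 to 75% (preferably 1 to 50%) active-ingredient loading in concrete terms, the following sketch converts those percentages into milligrams of compound per dosage unit; the 300 mg tablet mass is an arbitrary assumption made only for illustration and is not taken from the text.

```python
# Hedged sketch: translate the stated active-ingredient percentage range into
# milligrams per dosage unit. The 300 mg total tablet mass is an assumption
# made only for illustration; it does not come from the patent.
TABLET_MASS_MG = 300.0

for pct in (0.1, 1.0, 50.0, 75.0):
    mg_active = TABLET_MASS_MG * pct / 100.0
    print(f"{pct:5.1f}% active ingredient -> {mg_active:6.1f} mg per 300 mg tablet")
```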
Suitable formulations for transdermal application include an effective amount of a compound of the invention with a carrier. Advantageous carriers include absorbable pharmacologically acceptable solvents to assist passage through the skin of the host. Characteristically, transdermal devices are in the form of a bandage comprising a backing member, a reservoir containing the compound optionally with carriers, optionally a rate-controlling barrier to deliver the compound to the skin of the host at a controlled and predetermined rate over a prolonged period of time, and means to secure the device to the skin. Suitable formulations for topical application, e.g. to the skin and eyes, are preferably aqueous solutions, ointments, creams or gels well-known in the art. The pharmaceutical formulations contain an effective matrix-degrading metalloproteinase inhibiting amount of a compound of the invention as defined above, either alone or in combination with another therapeutic agent, e.g. an anti-inflammatory agent with cyclooxygenase inhibiting activity, each at an effective therapeutic dose as reported in the art. Such therapeutic agents are well-known in the art. Examples of anti-inflammatory agents with cyclooxygenase inhibiting activity are diclofenac sodium, naproxen, ibuprofen, and the like. In conjunction with another active ingredient, a compound of the invention may be administered either simultaneously with, before or after the other active ingredient, either separately by the same or a different route of administration or together in the same pharmaceutical formulation. The invention further particularly relates to a method of inhibiting matrix-degrading metalloproteinase activity in mammals including man, and of treating diseases and conditions responsive thereto, such as arthritic conditions and others disclosed herein, which comprises administering to a mammal in need thereof an effective amount of a compound of the invention or of a pharmaceutical composition comprising a said compound in combination with one or more pharmaceutically acceptable carriers. More particularly the invention relates to a method of inhibiting tissue matrix degradation and of treating gelatinase, stromelysin and collagenase dependent pathological conditions in mammals, such conditions including rheumatoid arthritis, osteoarthritis, tumor metastasis, periodontal disease, corneal ulceration, as well as the progression of HIV-infection and associated disorders. The dosage of active compound administered is dependent on the species of warm-blooded animal (mammal), the body weight, age and individual condition, and on the form of administration. A unit dosage for oral administration to a mammal of about 50 to 70 kg may contain between about 25 and 250 mg of the active ingredient. The following examples are intended to illustrate the invention and are not to be construed as being limitations thereon. Temperatures are given in degrees Centigrade. If not mentioned otherwise, all evaporations are performed under reduced pressure, preferably between about 15 and 100 mm Hg. The structure of final products, intermediates and starting materials is confirmed by standard analytical methods, e.g. microanalysis and spectroscopic characteristics (e.g. MS, IR, NMR). Abbreviations used are those conventional in the art.
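As a quick cross-check before the examples, the oral unit dose quoted above (about 25 to 250 mg for a 50 to 70 kg mammal) can be re-expressed in mg/kg and compared with the 0.1 to 50 mg/kg in vivo range mentioned earlier in the description; this is only arithmetic on figures already stated, not additional dosing guidance.

```python
# Hedged sketch: express the quoted oral unit dose (25-250 mg for a 50-70 kg
# mammal) as mg/kg and compare it with the 0.1-50 mg/kg in vivo range cited
# earlier. Pure arithmetic on figures already given in the description.
unit_dose_mg = (25.0, 250.0)
body_weight_kg = (50.0, 70.0)

low = unit_dose_mg[0] / body_weight_kg[1]    # smallest dose in the heaviest subject
high = unit_dose_mg[1] / body_weight_kg[0]   # largest dose in the lightest subject
print(f"Unit dose corresponds to roughly {low:.2f}-{high:.1f} mg/kg")

in_vivo_range = (0.1, 50.0)
consistent = in_vivo_range[0] <= low and high <= in_vivo_range[1]
print("Within the 0.1-50 mg/kg range cited earlier:", consistent)
```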
EXAMPLE 1 (a) N-(t-Butyloxy)-2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-3-methylbutanamide (4.1 g, 9.13 mmol) is dissolved in dichloroethane (150 mL) containing ethanol (0.53 mL, 9.13 mmol) in a round bottom flask, and the reaction is cooled to -10° C. Hydrochloric acid gas (from a lecture bottle) is bubbled through for 30 minutes. The reaction is sealed, allowed to slowly warm to room temperature, and stirred for 2 days. The solvent is reduced to 1/3 volume by evaporation and the mixture is triturated with ether. The mixture is filtered, and the filter cake is collected and dried in vacuo to provide N-hydroxy-2(R)-[[4-methoxybenzenesulfonyl]-(3-picolyl)amino]-3-methylbutanamide hydrochloride as a white solid, m.p. 169°-170° C. (dec), and having the following structure: ##STR12## The starting material is prepared as follows: To a solution of D-valine (15.0 g, 128.0 mmol) in 1:1 dioxane/water (200 mL) containing triethylamine (19.4 g, 192.0 mmol) at room temperature is added 4-methoxybenzenesulfonyl chloride (29.0 g, 141.0 mmol), and the reaction mixture is stirred at room temperature overnight. The mixture is then diluted with methylene chloride and washed with 1N aqueous hydrochloric acid and water. The organic layer is washed again with brine, dried (Na.sub.2 SO.sub.4), and the solvent is evaporated to provide N-[4-methoxybenzenesulfonyl]-(D)-valine as a crude product. A solution of this crude product (15.0 g) in toluene (100 mL) containing N,N-dimethylformamide di-t-butyl acetal (50 mL, 206.5 mmol) is heated to 95° C. for 3 hours. The solvent is then evaporated. The crude product is purified by silica gel chromatography (30% ethyl acetate/hexanes) to provide N-[4-methoxybenzenesulfonyl]-(D)-valine t-butyl ester. To a solution of N-[4-methoxybenzenesulfonyl]-(D)-valine t-butyl ester (4.38 g, 13.0 mmol) in dimethylformamide (200 mL) is added 3-picolyl chloride hydrochloride (2.3 g, 14.0 mmol) followed by potassium carbonate (17.94 g, 130.0 mmol). The reaction mixture is stirred at room temperature for 2 days. The mixture is then diluted with water and extracted with ethyl acetate. The combined organic extracts are washed with brine, dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The crude product is purified by silica gel chromatography (ethyl acetate) to give t-butyl 2(R)-[N-[4-methoxybenzenesulfonyl](3-picolyl)amino]-3-methylbutanoate. t-Butyl 2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-3-methylbutanoate (5.3 g, 12.2 mmol) is dissolved in methylene chloride (150 mL) and cooled to -10° C. Hydrochloric acid gas is bubbled into the solution for 10 minutes. The reaction mixture is then sealed, warmed to room temperature and stirred for 4 hours. The solvent is then evaporated to provide 2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-3-methylbutanoic acid hydrochloride. 2(R)-[[4-Methoxybenzenesulfonyl](3-picolyl)amino]-3-methylbutanoic acid hydrochloride (5.0 g, 12.06 mmol), 1-hydroxybenzotriazole (1.63 g, 12.06 mmol), 4-methylmorpholine (6.6 mL, 60.31 mmol), and O-t-butylhydroxylamine hydrochloride (54.55 g, 36.19 mmol) are dissolved in methylene chloride (200 mL). N-[Dimethylaminopropyl]-N'-ethylcarbodiimide hydrochloride (3.01 g, 15.68 mmol) is added, and the reaction is stirred overnight. The reaction is then diluted with water and extracted with methylene chloride. The combined organic extracts are washed with brine, dried (Na.sub.2 SO.sub.4), and the solvent is evaporated.
The crude product is purified by silica gel chromatography (2% methanol/methylene chloride) to give N-(t- butyloxy)-2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-3- methylbutanamide. (b) L-tartaric acid salt, m.p. 114°-116° C. (c) Methanesulfonic acid salt, m.p. 139°-141.5° C. (d) Maleic acid salt, m.p. 133°-134° C. EXAMPLE 2 The following compounds are prepared similarly to Example 1. a) N-Hydroxy-2(S)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]- 3- methylbutanamide hydrochloride, m.p. 170.5°-171° C., by starting the synthesis with L-valine, and carrying out the subsequent steps as described above. (b) N-hydroxy-2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]- 4- methylpentanamide hydrochloride, m.p. 128°-129° C. The first two steps are carried out as described in example 1, except the synthesis was started with D-leucine. The alkylation step is different, as described below. To a solution of t-butyl 2(R)-[[4-methoxybenzenesulfonyl]amino]- 4- methylpentanoate (10.0 g, 27.92 mmol)in dimethylformamide (250 mL) at room temperature is added 3-picolyl chloride hydrochloride (4.81 g, 29. 32 mmol) followed by sodium hydride (2.79 g, 69.80 mmol, 60% in oil). The reaction mixture is stirred at room temperature for 48 hours. The mixture is quenched with water and extracted with ethyl acetate. The combined organic extracts are washed with brine, dried (Na.sub.2 SO.sub. 4), and the solvent is evaporated. The crude product is purified by silica gel chromatography (45% ethyl acetate/hexanes) to provide t-butyl 2(R)-[[4- methoxybenzenesulfonyl](3-picolyl)amino]-4-methylpentanoate. All of the following steps are carried out as described above in example 1. (c) N-Hydroxy-2(R)-[[4-methoxybenzenesulfonyl](6- chloropiperonyl) amino]-4-methylpentanamide, m.p. 85°-87° C. , by starting the synthesis with D-leucine and alkylating with 6- chloropiperonyl chloride in the third step. (d) N-Hydroxy-2(R)-[[4-methoxybenzenesulfonyl](piperonyl)amino]- 4- methylpentanamide, m.p. 145°-147° C., by starting the synthesis with D-leucine and alkylating with piperonyl chloride in the third step. (e) N-Hydroxy-2(R)-[[4-methoxybenzenesulfonyl](2-picolyl)amino]- 4- methylpentanamide, m.p. 89°-90° C., by starting the synthesis with D-leucine and alkylating with 2- picolyl chloride in the third step. (f) N-Hydroxy-2(R)-[[4-methoxybenzenesulfonyl](2-picolyl)amino]- 3- methylbutanamide hydrochloride, m.p. 140°-142° C., by starting the synthesis with D-valine and alkylating with 2-picolyl chloride in the third step. (g) N-Hydroxy-2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]- 4, 4- dimethylpentanamide, hydrochloride m.p. 130°-150° C. (slow melt), by starting the synthesis with D-t-butylalanine and alkylating with 3-picolyl chloride in the third step. (h) N-Hydroxy-2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]- 2- cyclohexylacetamide hydrochloride, m.p. 149.5°-152.0° C. , by starting the synthesis with (D)-cyclohexylglycine hydrochloride. The starting amino acid is prepared as follows: (D)-phenylglycine (10.0 g, 66.2 mmol) is suspended in 2N hydrochloric acid (100 mL) containing platinum (IV) oxide hydrate (267 mg) . The mixture is shaken in a Parr hydrogenation apparatus for 24 hours under a hydrogen pressure of 50 psi. The resultant suspended crystalline material, (D)-cyclohexylglycine hydrochloride, was used without further purification. (i) N-Hydroxy-2(R)-[[(2,3-dihydrobenzofuran)-5-sulfonyl](3- picolyl) amino]3-methylbutanamide hydrochloride, m.p. 150.0°-153.0. degree. 
C., by starting the synthesis with 2,3-dihydrobenzofuran-5-sulfonyl chloride. The starting sulfonyl chloride is prepared as follows: 2,3-dihydrobenzofuran (6.0 g, 49.94 mmol) is added over 20 minutes to chlorosulfonic acid (29.09 g, 249.69 mmol) at -20° C. The reaction mixture is quenched by addition of ice followed by water (20 mL). The mixture is then extracted with ethyl acetate. The combined organic extracts are washed with brine, dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The crude product is purified by silica gel chromatography (30% ethyl acetate/hexane) to give 2,3-dihydrobenzofuran-5-sulfonyl chloride (3.3 g). (j) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-3-methylbutanamide hydrochloride, m.p. 139.5°-142° C., by starting the synthesis with DL-valine. (k) N-Hydroxy-2(R)-[[4-ethoxybenzenesulfonyl](3-picolyl)amino]-3-methylbutanamide hydrochloride, [α].sub.D.sup.25 =+34.35 (c=5.84, CH.sub.3 OH). EXAMPLE 3 2(R)-[[4-Methoxybenzenesulfonyl](benzyl)amino]-4-methylpentanoic acid (4.38 g, 11.2 mmol) is dissolved in methylene chloride (56.0 mL). To this solution is added oxalyl chloride (1.95 mL, 22.4 mmol) and dimethylformamide (0.86 mL, 11.2 mmol), and the reaction is stirred at room temperature for 90 minutes. Meanwhile, in a separate flask, hydroxylamine hydrochloride (3.11 g, 44.8 mmol) and triethylamine (9.36 mL, 67.1 mmol) are stirred in tetrahydrofuran (50.0 mL) and water (3.5 mL) at 0° C. for 15 minutes. After 90 minutes, the methylene chloride solution is added in one portion to the second flask, and the combined contents are stirred for three days as the flask gradually warms up to room temperature. The reaction is then diluted with acidic water (pH about 3), and extracted several times with ethyl acetate. The combined organic layers are dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (1% methanol/methylene chloride) to give N-hydroxy-2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-4-methylpentanamide, m.p. 48°-52° C. The starting material is prepared as follows: (D)-leucine (7.1 g, 53.9 mmol) is dissolved in dioxane (60.0 mL) and water (60.0 mL). To this solution is added triethylamine (11.3 mL, 80.9 mmol) and 4-methoxybenzenesulfonyl chloride (12.25 g, 59.3 mmol), and the reaction is stirred at room temperature overnight. The reaction is then diluted with methylene chloride and washed successively with 2.5N hydrochloric acid, water, and brine. The organic phase is dried (Na.sub.2 SO.sub.4), and the solvent is evaporated to give N-[4-methoxybenzenesulfonyl]-(D)-leucine, which is used without further purification. N-[4-Methoxybenzenesulfonyl]-(D)-leucine (14.0 g, 46.5 mmol) is dissolved in toluene (100.0 mL) and heated to 90° C. N,N-Dimethylformamide di-t-butyl acetal (45.0 mL, 186.0 mmol) is added dropwise over 20 minutes, and then the reaction is kept at 90° C. for another 2 hours. After cooling back down, the reaction is diluted with ethyl acetate and washed successively with saturated sodium bicarbonate, water, and brine. The organic phase is dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (20% ethyl acetate/hexane) to give N-[4-methoxybenzenesulfonyl]-(D)-leucine t-butyl ester. To a suspension of sodium hydride (0.68 g, 14.1 mmol) in dimethylformamide (60.0 mL) is added N-[4-methoxybenzenesulfonyl]-(D)-leucine t-butyl ester (5.02 g, 14.06 mmol) in dimethylformamide (10.0 mL).
After stirring at room temperature for 20 minutes, benzyl bromide (1.67 mL, 14.06 mmol) is added, and the reaction is stirred overnight at room temperature. The reaction is then partitioned between ethyl acetate and acidic water (pH=5), the organic layer is dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (10% ethyl acetate/hexane) to give t-butyl 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-4-methylpentanoate.
t-Butyl 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-4-methylpentanoate (5.38 g, 12.02 mmol) is dissolved in methylene chloride (100.0 mL). Hydrochloric acid gas (from a lecture bottle) is bubbled through the solution for 20 minutes. The reaction is sealed and stirred overnight at room temperature. The solvent is then evaporated to give 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-4-methylpentanoic acid.
EXAMPLE 4
The following compounds are prepared similarly to example 3.
(a) N-Hydroxy-2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-2-phenylacetamide, m.p. 128°-129° C., by starting the synthesis with (D)-phenylglycine, and carrying out the subsequent steps as described in example 3.
(b) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](benzyl)amino]-2-t-butylacetamide, m.p. 69°-73° C., by starting the synthesis with t-butylglycine, and carrying out the subsequent steps as described in example 3.
(c) N-Hydroxy-2(R)-[[4-methoxybenzenesulfonyl](4-fluorobenzyl)amino]-4-methylpentanamide, m.p. 48°-51° C., by starting the synthesis with (D)-leucine, and carrying out the subsequent steps as described in example 3, with the exception that 4-fluorobenzyl bromide is used in place of benzyl bromide.
(d) N-Hydroxy-2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-3-methylbutanamide, m.p. 179°-180° C., by starting the synthesis with (D)-valine, and carrying out the subsequent steps as described in example 3.
(e) N-Hydroxy-2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-4,4-dimethylpentanamide, by starting the synthesis with (D)-neopentylglycine, and carrying out the subsequent steps as described in example 3.
EXAMPLE 5
3-[4-Methoxybenzenesulfonyl]-5,5-dimethylthiazolidine-4(S)-carboxylic acid (2.0 g, 6.0 mmol) is dissolved in methylene chloride (30.0 mL). To this solution is added oxalyl chloride (1.1 mL, 12.1 mmol) and dimethylformamide (0.50 mL, 6.0 mmol), and the reaction is stirred at room temperature for 2 hours. Meanwhile, in a separate flask, hydroxylamine hydrochloride (1.74 g, 25.0 mmol) and triethylamine (5.0 mL, 36.0 mmol) are stirred in tetrahydrofuran (25.0 mL) and water (2.0 mL) at 0° C. for 15 minutes. After 2 hours, the methylene chloride solution is added in one portion to the second flask, and the combined contents are stirred overnight as the flask gradually warms up to room temperature. The reaction is then diluted with acidic water (pH=˜3), and extracted several times with ethyl acetate. The combined organic layers are dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (60% ethyl acetate/hexane) to give N-hydroxy-3-[4-methoxybenzenesulfonyl]-5,5-dimethylthiazolidine-4(S)-carboxamide, m.p. 68°-71° C.
The starting material is prepared as follows:
(D)-5,5-Dimethylthiazolidine-4-carboxylic acid (1.0 g, 6.2 mmol) is dissolved in dioxane (10.0 mL) and water (10.0 mL).
To this solution is added triethylamine (1.3 mL, 9.3 mmol) and 4-methoxybenzenesulfonyl chloride (1.41 g, 6.82 mmol), and the reaction is stirred at room temperature for three days. The reaction is then diluted with ethyl acetate and washed successively with 2.5N hydrochloric acid, water, and brine. The organic phase is dried (Na.sub.2 SO.sub.4), and the solvent is evaporated to give 3-[4-methoxybenzenesulfonyl]-5,5-dimethylthiazolidine-4(S)-carboxylic acid, which is used without further purification.
EXAMPLE 6
1-[4-Methoxybenzenesulfonyl]-pyrrolidine-2(R)-carboxylic acid (1.12 g, 3.93 mmol) is dissolved in methylene chloride (40.0 mL). To this solution is added oxalyl chloride (0.69 mL, 7.85 mmol) and dimethylformamide (0.30 mL, 3.93 mmol), and the reaction is stirred at room temperature for 30 minutes. Meanwhile, in a separate flask, hydroxylamine hydrochloride (1.1 g, 15.7 mmol) and triethylamine (3.3 mL, 23.5 mmol) are stirred in tetrahydrofuran (20.0 mL) and water (4.0 mL) at 0° C. for 15 minutes. After 30 minutes, the methylene chloride solution is added in one portion to the second flask, and the combined contents are stirred overnight as the flask gradually warms up to room temperature. The reaction is then diluted with acidic water (pH=˜3), and extracted several times with ethyl acetate. The combined organic layers are dried (MgSO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (50% ethyl acetate/hexane) to give N-hydroxy-1-[4-methoxybenzenesulfonyl]-pyrrolidine-2(S)-carboxamide, m.p. 163.5°-165.5° C.
The starting material is prepared as follows:
(D)-proline (0.78 g, 6.77 mmol) is suspended in methylene chloride (25.0 mL). To this solution is added triethylamine (1.13 mL, 8.12 mmol) and 4-methoxybenzenesulfonyl chloride (1.4 g, 6.77 mmol), and the reaction is stirred at room temperature for two days. The reaction is then diluted with methylene chloride and washed successively with 1N hydrochloric acid, water, and brine. The organic phase is dried (MgSO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (10% methanol/ethyl acetate) to give 1-[4-methoxybenzenesulfonyl]-pyrrolidine-2(R)-carboxylic acid.
EXAMPLE 7
N-(t-Butyloxy)-2-[[4-methoxybenzenesulfonyl](benzyl)amino]-2-[2-(4-morpholino)ethyl]acetamide (2.65 g, 5.1 mmol) is dissolved in methylene chloride (30.0 mL) and ethanol (1.0 mL) in a glass sealed tube, and the reaction is cooled to 0° C. Hydrochloric acid gas (from a lecture bottle) is bubbled through the solution for 20 minutes, and then the tube is sealed and kept at room temperature for 3 days. After that time, the solvent is removed, and the reaction is partitioned between ethyl acetate and saturated sodium bicarbonate. The organic phase is dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (2% methanol/methylene chloride) to give N-hydroxy-2-[[4-methoxybenzenesulfonyl](benzyl)amino]-2-[2-(4-morpholino)ethyl]acetamide, m.p. 56°-60° C.
The starting material is prepared as follows:
N-(2-Chloroethyl)morpholine hydrochloride (12.0 g) is dissolved in water (200 mL) and made basic with ammonium hydroxide (100.0 mL) to a pH=˜11. The aqueous layer is then extracted several times with ether, the combined organic layers are dried (Na.sub.2 SO.sub.4), and the solvent is evaporated to yield an oil which is used immediately.
Diethyl acetamidomalonate (11.4 g, 57.08 mmol) is added to a freshly prepared solution of sodium ethoxide in ethanol (made from Na (1.32 g, 57.1 mmol) added to ethanol (34.0 mL)), and the reaction is refluxed for 30 minutes. The reaction is then adjusted to 55° C., and potassium iodide (0.14 g, 0.8 mmol) and dimethylformamide (0.2 mL) are added. Finally, the N-(2-chloroethyl)morpholine (8.9 g, 59.6 mmol) prepared above is added in ethanol (14.0 mL), and the reaction is maintained at 55° C. for 24 hours. The reaction is diluted with ethyl acetate and filtered through Celite to remove salts. The filtrate is evaporated, and then partitioned between ethyl acetate and brine. The organic layer is dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (first 50% ethyl acetate, then 5% methanol/methylene chloride) to give diethyl [2-(4-morpholino)ethyl]acetamidomalonate.
Diethyl [2-(4-morpholino)ethyl]acetamidomalonate (8.0 g, 25.6 mmol) is dissolved in ethanol (128.0 mL). Sodium hydroxide (4.55 mL of a 6N aqueous solution, 27.35 mmol) is added, and the reaction is stirred at room temperature for 24 hours. The ethanol is then evaporated, the residue is diluted with water and washed several times with ether, and then the aqueous phase is acidified with concentrated hydrochloric acid to pH=˜5. The solution is evaporated to dryness, then suspended in toluene (300.0 mL) and refluxed for 3 hours. After cooling to room temperature, the reaction is diluted with chloroform (300.0 mL), and the mixture is filtered through Celite. The filtrate is evaporated to give ethyl 2-(acetamido)-2-[2-(4-morpholino)ethyl]acetate.
Ethyl 2-(acetamido)-2-[2-(4-morpholino)ethyl]acetate (4.2 g, 16.28 mmol) is dissolved in 6N hydrochloric acid (100.0 mL), and the reaction is refluxed for 4.5 hours. The water is then evaporated, and the product is azeotroped dry using toluene to give 2-amino-2-[2-(4-morpholino)ethyl]acetic acid dihydrochloride.
2-Amino-2-[2-(4-morpholino)ethyl]acetic acid dihydrochloride (4.0 g, 15.33 mmol) is dissolved in a solution of methanol (100.0 mL) and acetyl chloride (5.0 mL), and the reaction is refluxed for 24 hours. The solvent is then evaporated to give methyl 2-amino-2-[2-(4-morpholino)ethyl]acetate dihydrochloride.
Methyl 2-amino-2-[2-(4-morpholino)ethyl]acetate dihydrochloride (6.0 g, 21.82 mmol) is dissolved in chloroform (110.0 mL) and triethylamine (9.12 mL, 65.46 mmol). To this solution is added 4-methoxybenzenesulfonyl chloride (4.51 g, 21.82 mmol), and the reaction is refluxed for 4 hours. After cooling, the reaction is diluted with more chloroform and washed with saturated sodium bicarbonate, the organic layer is dried (Na.sub.2 SO.sub.4), and the solvent is evaporated to give methyl 2-(4-methoxybenzenesulfonyl)amino-2-[2-(4-morpholino)ethyl]acetate.
To a suspension of sodium hydride (1.03 g, 21.5 mmol) in dimethylformamide (108.0 mL) is added methyl 2-(4-methoxybenzenesulfonyl)amino-2-[2-(4-morpholino)ethyl]acetate (8.0 g, 21.5 mmol) in dimethylformamide (10.0 mL). After stirring at room temperature for 30 minutes, benzyl bromide (2.56 mL, 21.5 mmol) is added, and the reaction is stirred overnight at room temperature. The reaction is then partitioned between ethyl acetate and acidic water (pH=˜5), the organic layer is dried (Na.sub.2 SO.sub.4), and the solvent is evaporated.
The product is purified by silica gel chromatography (3% methanol/methylene chloride) to give methyl 2-[[4-methoxybenzenesulfonyl](benzyl)amino]-2-[2-(4-morpholino)ethyl]acetate.
Methyl 2-[[4-methoxybenzenesulfonyl](benzyl)amino]-2-[2-(4-morpholino)ethyl]acetate (7.33 g, 15.86 mmol) is dissolved in methanol (80.0 mL). To this solution is added sodium hydroxide (17.5 mL of a 1N aqueous solution, 17.5 mmol), and the reaction is stirred at room temperature for 8 hours. The reaction is then acidified to pH=˜3 using 2.5N hydrochloric acid, and then the solvent is evaporated. The residue is suspended in ethanol, the inorganic salts are filtered away, and the filtrate is evaporated to give 2-[[4-methoxybenzenesulfonyl](benzyl)amino]-2-[2-(4-morpholino)ethyl]acetic acid hydrochloride.
2-[[4-Methoxybenzenesulfonyl](benzyl)amino]-2-[2-(4-morpholino)ethyl]acetic acid hydrochloride (4.24 g, 8.75 mmol), 1-hydroxybenzotriazole (1.34 g, 8.75 mmol), 4-methylmorpholine (3.85 mL, 35.02 mmol), and O-t-butylhydroxylamine hydrochloride (1.10 g, 8.75 mmol) are dissolved in methylene chloride (44.0 mL), and the reaction is cooled to 0° C. To this solution is added N-[dimethylaminopropyl]-N'-ethylcarbodiimide hydrochloride (3.35 g, 17.5 mmol), and the reaction is allowed to warm up to room temperature and stir overnight. The reaction is diluted with more methylene chloride, and the organic layer is washed with saturated sodium bicarbonate and brine, dried (MgSO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (2% methanol/methylene chloride) to give N-(t-butyloxy)-2-[[4-methoxybenzenesulfonyl](benzyl)amino]-2-[2-(4-morpholino)ethyl]acetamide.
EXAMPLE 8
The following compounds are prepared similarly to example 7.
(a) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](isobutyl)amino]-2-[2-(4-morpholino)ethyl]acetamide, m.p. 62°-64° C., using isobutyl bromide in the alkylation step in place of benzyl bromide.
(b) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](2-picolyl)amino]-2-[2-(4-morpholino)ethyl]acetamide dihydrochloride, m.p. 150°-154° C., using 2-picolyl chloride in the alkylation step in place of benzyl bromide.
(c) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-2-[2-(4-morpholino)ethyl]acetamide dihydrochloride, m.p. >210° C., using 3-picolyl chloride in the alkylation step in place of benzyl bromide.
(d) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](2-methylthiazole-4-methyl)amino]-2-[2-(4-morpholino)ethyl]acetamide dihydrochloride, m.p. 180° C., using 4-chloromethyl-2-methylthiazole in the alkylation step in place of benzyl bromide.
(e) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](benzyl)amino]-2-[2-(4-thiomorpholino)ethyl]acetamide, m.p. 50°-52° C., by starting the synthesis with N-(2-chloroethyl)thiomorpholine, and carrying out the subsequent steps as described in example 7.
(f) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](benzyl)amino]-2-[(2-methyl-4-thiazolyl)methyl]acetamide, m.p. 79°-81° C., by starting the synthesis with 4-chloromethyl-2-methylthiazole hydrochloride, and carrying out the subsequent steps as described in example 7.
(g) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](benzyl)amino]-2-[6-chloropiperonyl]acetamide, m.p. 70°-74° C., by starting the synthesis with 6-chloropiperonyl chloride, and carrying out the subsequent steps as described in example 7.
(h) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](benzyl)amino]-2-[(1-pyrazolyl)methyl]acetamide, m.p. 130°-131° C., by starting the synthesis with β-pyrazol-1-yl-alanine (prepared following the procedure of J. Am. Chem. Soc., 110, p. 2237 (1988)), and carrying out the subsequent steps as described in example 7.
(i) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-2-[3-picolyl]acetamide dihydrochloride, m.p. >220° C., by starting the synthesis with 3-picolyl chloride, and carrying out the subsequent steps as described in example 7, but in addition, using 3-picolyl chloride in the alkylation step in place of benzyl bromide in example 7.
(j) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](benzyl)amino]-2-[(1-methyl-4-imidazolyl)methyl]acetamide hydrochloride, m.p. >200° C., by starting the synthesis with N-τ-methylhistidine dihydrochloride (prepared following the procedure of Recueil, 97, p. 293 (1978)), and carrying out the subsequent steps as described in example 7.
(k) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](isobutyl)amino]-2-[(1-methyl-4-imidazolyl)methyl]acetamide hydrochloride, m.p. 194°-195° C., by starting the synthesis with N-τ-methylhistidine dihydrochloride and carrying out the subsequent steps as described in example 7, using isobutyl iodide in the alkylation step in place of benzyl bromide.
(l) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-2-[(1-methyl-4-imidazolyl)methyl]acetamide hydrochloride, m.p. >220° C., by starting the synthesis with N-τ-methylhistidine dihydrochloride and carrying out the subsequent steps as described in example 7, using 3-picolyl chloride in the alkylation step in place of benzyl bromide.
(m) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](2-picolyl)amino]-2-[(1-methyl-4-imidazolyl)methyl]acetamide hydrochloride, m.p. 162°-164° C., by starting the synthesis with N-τ-methylhistidine dihydrochloride and carrying out the subsequent steps as described in example 7, using 2-picolyl chloride in the alkylation step in place of benzyl bromide.
(n) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](2-methylthiazole-4-methyl)amino]-2-[(1-methyl-4-imidazolyl)methyl]acetamide hydrochloride, m.p. 160°-163° C., by starting the synthesis with N-τ-methylhistidine dihydrochloride and carrying out the subsequent steps as described in example 7, using 4-chloromethyl-2-methylthiazole in the alkylation step in place of benzyl bromide.
(o) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](piperonyl)amino]-2-[(1-methyl-4-imidazolyl)methyl]acetamide hydrochloride, m.p. 195° C., by starting the synthesis with N-τ-methylhistidine dihydrochloride and carrying out the subsequent steps as described in example 7, using piperonyl chloride in the alkylation step in place of benzyl bromide.
EXAMPLE 9
(a) Methyl 2-[[4-methoxybenzenesulfonyl](benzyl)amino]propionate (2.1 g, 6.01 mmol) is dissolved in methanol (20.0 mL). To this solution is added hydroxylamine hydrochloride (0.84 g, 12.0 mmol), followed by the addition of sodium methoxide (7.0 mL of a 4.37M solution). The reaction is stirred overnight at room temperature. The reaction is worked up by first removing all the solvent, and partitioning between ethyl acetate/hexane (2/1) and saturated sodium bicarbonate. The aqueous phase is extracted well with ethyl acetate/hexane, the combined organic layers are dried (MgSO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (ethyl acetate) to give N-hydroxy-2-[[4-methoxybenzenesulfonyl](benzyl)amino]propionamide, m.p. 149°-151° C.
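The hydroxylaminolysis above is run with an excess of both hydroxylamine hydrochloride and sodium methoxide relative to the ester. The short sketch below is an illustrative aid only, not part of the original procedure; it simply restates the quantities given in example 9(a) as molar equivalents, and the variable names are arbitrary.
    # Illustrative only: molar ratios for the ester-to-hydroxamic acid conversion of example 9(a),
    # computed from the amounts stated above.
    ester_mmol = 6.01                    # methyl 2-[[4-methoxybenzenesulfonyl](benzyl)amino]propionate
    hydroxylamine_hcl_mmol = 12.0        # hydroxylamine hydrochloride
    sodium_methoxide_mmol = 7.0 * 4.37   # 7.0 mL of a 4.37M solution
    print(round(hydroxylamine_hcl_mmol / ester_mmol, 1))   # about 2.0 equivalents
    print(round(sodium_methoxide_mmol / ester_mmol, 1))    # about 5.1 equivalents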
The starting material is prepared as follows:
D,L-Alanine (27.0 g, 300.0 mmol) is dissolved in a solution of methanol (100.0 mL) saturated with HCl gas, and the reaction is refluxed for 2 hours. The solvent is then evaporated, and the residue is triturated with ethyl acetate to give alanine methyl ester hydrochloride.
Alanine methyl ester hydrochloride (7.0 g, 50.0 mmol) is dissolved in methylene chloride (100.0 mL) and triethylamine (20.0 mL, 143.0 mmol). To this solution is added 4-methoxybenzenesulfonyl chloride (10.3 g, 50.0 mmol), and the reaction is stirred at room temperature briefly. The reaction is made basic with 1N sodium hydroxide, and washed with methylene chloride. The combined organic layers are dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. Hexane is added to the residue and the precipitate is collected to give N-[4-methoxybenzenesulfonyl]-alanine methyl ester.
To a suspension of sodium hydride (0.60 g, 11.0 mmol) in dimethylformamide (20.0 mL) is added N-[4-methoxybenzenesulfonyl]-alanine methyl ester (2.6 g, 10.0 mmol) in dimethylformamide (10.0 mL). After stirring at room temperature for 30 minutes, benzyl bromide (1.22 mL, 10.0 mmol) is added, and the reaction is stirred for two hours at room temperature. The reaction is then partitioned between ether and brine, the organic layer is dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (20% ether/hexanes) to give methyl 2-[[4-methoxybenzenesulfonyl](benzyl)amino]propionate.
(b) Similarly prepared is N-hydroxy-2-[[4-methoxybenzenesulfonyl](benzyl)amino]-4-thiomethylbutyramide, m.p. 104°-106° C., by starting the synthesis with D,L-methionine, and carrying out the subsequent steps as described above.
EXAMPLE 10
A solution of methyl 2-[[4-methoxybenzenesulfonyl](benzyl)amino]-4-(methylsulfonyl)butyrate (900 mg, 2.0 mmol), sodium methoxide previously generated from sodium metal spheres (100.0 mg, 4.5 mmol), and hydroxylamine hydrochloride (280.0 mg, 4.0 mmol) is refluxed for 2 days. The mixture is cooled to room temperature, concentrated in vacuo, diluted with water, acidified with citric acid, and extracted with ethyl acetate. The combined organic extracts are dried (MgSO.sub.4) and the solvent is evaporated. The product is purified by silica gel chromatography (ethyl acetate) to give N-hydroxy-2-[[4-methoxybenzenesulfonyl](benzyl)amino]-4-(methylsulfonyl)butyramide, [M+1]=157.
The starting material is prepared as follows:
To a solution of racemic methionine methyl ester (1.98 g, 10.0 mmol) in methylene chloride (25 mL) containing triethylamine (2.0 mL, 14.3 mmol) is added 4-methoxybenzenesulfonyl chloride (2.1 g, 10.2 mmol). After stirring for 2 hours at room temperature, the mixture is diluted with 1N hydrochloric acid. The organic layer is removed and the aqueous layer is extracted with ether. The combined organic layers are washed with brine, dried (MgSO.sub.4), and the solvent is evaporated. The concentrated solution is triturated with ether, and the product is collected by filtration to give methyl 2-[[4-methoxybenzenesulfonyl]amino]-4-(thiomethyl)butyrate.
To a solution of methyl 2-[[4-methoxybenzenesulfonyl]amino]-4-(thiomethyl)butyrate (2.1 g, 6.2 mmol) in dimethylformamide (15 mL) containing potassium carbonate (4.0 g, 29.0 mmol) is added benzyl bromide (1.5 mL, 12.6 mmol). The reaction mixture is stirred for 1 hour at room temperature. The mixture is quenched with water and extracted with ether. The organic extracts are washed with brine, dried (MgSO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (30% ethyl acetate/hexanes) to give methyl 2-[[4-methoxybenzenesulfonyl](benzyl)amino]-4-(thiomethyl)butyrate.
A solution of methyl 2-[[4-methoxybenzenesulfonyl](benzyl)amino]-4-(thiomethyl)butyrate (925.0 mg, 2.17 mmol) in 25% peracetic acid (5 mL) is stirred overnight at room temperature. The mixture is concentrated in vacuo, diluted with water, and extracted with ethyl acetate. The combined organic extracts are dried (MgSO.sub.4) and the solvent is evaporated to give methyl 2-[[4-methoxybenzenesulfonyl](benzyl)amino]-4-(methylsulfonyl)butyrate.
EXAMPLE 11
(a) To a solution of 2(R)-[[(4-methoxybenzene)sulfonyl](benzyl)amino]propionic acid (1.04 g, 2.98 mmol) in methylene chloride (50 mL) containing dimethylformamide (230 μL, 2.98 mmol) at room temperature is added oxalyl chloride (520 μL, 5.96 mmol) dropwise over 5 minutes. The mixture is stirred for 30 minutes at room temperature, then added to a pre-formed mixture of hydroxylamine hydrochloride (828 mg, 11.92 mmol) and triethylamine (2.5 mL, 17.9 mmol) in tetrahydrofuran (20 mL)/water (1.5 mL) at 0° C. The reaction mixture is stirred for 45 minutes at 0° C., then slowly warmed to room temperature over 15.5 hours. The mixture is acidified with 1N hydrochloric acid and extracted with methylene chloride. The combined organic extracts are washed with brine, dried (MgSO.sub.4), and the solvent is evaporated. The crude product is recrystallized from diethyl ether/ethyl acetate (1:1) to give N-hydroxy-2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]propionamide, m.p. 127°-129° C.
The starting material is prepared as follows:
To a solution of D-alanine methyl ester hydrochloride (3.0 g, 21.5 mmol) in methanol (10 mL) is added benzaldehyde (2.3 mL, 22.6 mmol). The reaction mixture is stirred at room temperature for 3 hours. The solvent is then evaporated. To the resultant residue is added acetic acid (15 mL) and methanol (1 mL) followed by portionwise addition of sodium cyanoborohydride (1.35 g, 21.5 mmol) at room temperature. The mixture is stirred overnight, and then the solvent is evaporated. The remaining residue is diluted with water (75 mL) and basified with Na.sub.2 CO.sub.3. The mixture is extracted with ethyl acetate (3×75 mL). The combined organic extracts are washed with brine (50 mL), dried (Na.sub.2 SO.sub.4), and the solvent is evaporated to give N-benzyl-D-alanine methyl ester.
To a solution of N-benzyl-D-alanine methyl ester (˜2 g) in methylene chloride (40 mL) containing triethylamine (2.47 mL, 17.7 mmol) is added 4-methoxybenzenesulfonyl chloride (2.44 g, 11.8 mmol). The reaction mixture is stirred overnight at room temperature. The mixture is acidified with 1N HCl and extracted with methylene chloride. The combined organic extracts are washed with brine, dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The crude product is purified by silica gel chromatography (10%→20% ethyl acetate/hexanes) to provide methyl 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]propionate.
To a solution of methyl 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]propionate (1.05 g, 2.89 mmol) in tetrahydrofuran (60 mL) at room temperature is added 1N aqueous sodium hydroxide (8.6 mL, 8.67 mmol). The reaction mixture is stirred for 19 hours at room temperature. The tetrahydrofuran is then evaporated.
The remaining residue is acidified with 1N hydrochloric acid and extracted with ethyl acetate. The combined organic extracts are washed with brine, dried (Na.sub.2 SO.sub.4), and the solvent is evaporated to give 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]propionic acid.
(b) Similarly prepared is N-hydroxy-2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-2-benzylacetamide, [M+1]=441, by starting with (R)-phenylalanine, and carrying out the previously described steps.
EXAMPLE 12
(a) To a solution of N-(t-butyloxy)-2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-6-(N,N-dimethylamino)hexanamide (2.13 g, 4.21 mmol) in 1,2-dichloroethane (140 mL) is added ethanol (250 μL, 4.21 mmol). The solution is cooled to -10° C. and hydrogen chloride gas is bubbled in for 30 minutes. The reaction mixture is then sealed and allowed to warm to room temperature, stirring for 2 days. At this time point, the reaction mixture is cooled to -10° C. and hydrogen chloride gas is bubbled in for an additional 30 minutes. The reaction mixture is sealed, warmed to room temperature, and stirred for 24 hours. The mixture is reduced in volume by 1/2 in vacuo and triturated with ether. The mother liquor is removed and the remaining white solid is dried in vacuo to provide N-hydroxy-2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-6-(N,N-dimethylamino)hexanamide hydrochloride salt, m.p. 175°-177° C.
The starting material is prepared as follows:
To a solution of ε-N-CBZ-(R)-lysine methyl ester hydrochloride (15.0 g, 45.10 mmol) in methylene chloride (250 mL) containing triethylamine (15.72 mL, 112.75 mmol) is added 4-methoxybenzenesulfonyl chloride (10.25 g, 49.61 mmol) at 0° C. The reaction mixture is warmed to room temperature and stirred overnight. The reaction mixture is diluted with methylene chloride and washed with 1N hydrochloric acid. The organic layer is washed with brine, dried (Na.sub.2 SO.sub.4), and concentrated in vacuo to yield a yellow oil. The product is purified by silica gel chromatography (50% ethyl acetate/hexanes) to give methyl 2(R)-[[4-methoxybenzenesulfonyl]amino]-6-(N-benzylcarbamoyl)hexanoate.
To a solution of methyl 2(R)-[[4-methoxybenzenesulfonyl]amino]-6-(N-benzylcarbamoyl)hexanoate (12.4 g, 26.5 mmol) in dimethylformamide (100 mL) is added potassium carbonate (7.5 g, 52 mmol) and benzyl bromide (3.3 mL, 28.0 mmol), and the reaction is stirred for 24 hours at room temperature. The mixture is partitioned between water and 50% diethyl ether/ethyl acetate. The aqueous layer is removed and extracted with 50% diethyl ether/ethyl acetate. The combined organic layers are washed with brine, dried (MgSO.sub.4) and the solvent is evaporated. The crude product is purified by silica gel chromatography (50% ethyl acetate/hexanes) to give methyl 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-6-(N-benzylcarbamoyl)hexanoate.
To a solution of methyl 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-6-(benzylcarbamoyl)hexanoate (8.61 g, 15.53 mmol) in 95% ethanol (150 mL) is added 1N hydrochloric acid (15.5 mL, 15.53 mmol) followed by 10% Pd/C (4.0 g). The reaction mixture is stirred at room temperature under 1 atmosphere of hydrogen gas for 2 hours. The mixture is filtered through Celite and the solvent is evaporated to provide methyl 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-6-aminohexanoate.
To a solution of methyl 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-6-aminohexanoate (5.05 g, 12.02 mmol) in refluxing formic acid (120 mL) containing sodium formate (2.45 g, 36.07 mmol) is added 37% aqueous formaldehyde (2.70 mL, 36.07 mmol). While continuing to reflux the reaction mixture, three more aliquots of 37% aqueous formaldehyde (2.70 mL, 36.07 mmol each aliquot) are added at 10 minute intervals. The mixture is concentrated in vacuo to yield a yellow oil. The crude product is purified by silica gel chromatography (10:1:0.5 ethyl acetate/methanol/ammonium hydroxide) to provide methyl 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-6-(N,N-dimethylamino)hexanoate. This procedure is repeated and the combined product is used in the next reaction.
To a solution of methyl 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-6-(N,N-dimethylamino)hexanoate (4.55 g, 10.7 mmol) in tetrahydrofuran (100 mL) is added 1N aqueous lithium hydroxide (20 mL, 20.33 mmol). The reaction mixture is stirred at room temperature overnight. The reaction mixture is directly concentrated to dryness in vacuo to give the lithium salt of 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-6-(N,N-dimethylamino)hexanoic acid.
To a solution of 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-6-(N,N-dimethylamino)hexanoic acid lithium salt (4.42 g, 10.18 mmol) in methylene chloride (100 mL) containing N-methylmorpholine (6.73 mL, 61.06 mmol), 1-hydroxybenzotriazole monohydrate (1.64 g, 10.687 mmol) and O-t-butylhydroxylamine hydrochloride (1.41 g, 11.20 mmol) is added N-[dimethylaminopropyl]-N'-ethylcarbodiimide hydrochloride (3.90 g, 20.36 mmol) at 0° C. The reaction mixture is allowed to warm to room temperature and stirring is continued overnight. The mixture is diluted with methylene chloride, washed with saturated sodium bicarbonate, then with brine, dried (Na.sub.2 SO.sub.4) and the solvent is evaporated. The crude product is purified by silica gel chromatography (10:1:0.5 ethyl acetate/methanol/ammonium hydroxide) to provide N-(t-butyloxy)-2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-6-(N,N-dimethylamino)hexanamide.
(b) Similarly prepared is N-hydroxy-2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-6-(N,N-dimethylamino)hexanamide dihydrochloride, m.p. 179°-180° C. The first step is carried out as described above. The alkylation step is carried out as follows:
To a solution of methyl 2(R)-[[4-methoxybenzenesulfonyl]amino]-6-(benzylcarbamoyl)hexanoate (10.48 g, 22.43 mmol) in dimethylformamide (220 mL) at 0° C. is added 3-picolyl chloride hydrochloride (3.86 g, 23.55 mmol) followed by sodium hydride (2.24 g, 56.07 mmol, 60% in oil). The reaction mixture is warmed to room temperature and stirred for 24 hours. The reaction mixture is quenched with water and extracted with ethyl acetate. The combined organic extracts are washed with brine, dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The crude product is purified by silica gel chromatography (75% ethyl acetate/hexanes) to provide methyl 2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-6-(benzylcarbamoyl)hexanoate. All of the following steps are carried out as described above.
(c) Similarly prepared is N-hydroxy-2(R)-[[4-methoxybenzenesulfonyl](2-picolyl)amino]-6-(N,N-dimethylamino)hexanamide dihydrochloride, m.p. 134°-136° C., by alkylating with 2-picolyl chloride in the second step and carrying out the subsequent steps as described above.
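The carbodiimide coupling that installs the O-t-butylhydroxylamine in example 12 (couplings of the same type appear again in examples 13, 15 and 20) also relies on fixed molar ratios. The sketch below is an illustrative aid only, not part of the original procedure; it merely re-expresses the amounts quoted for example 12(a) as equivalents relative to the lithium carboxylate, and the variable names are arbitrary.
    # Illustrative only: equivalents used in the EDC/HOBt coupling of example 12(a),
    # relative to the lithium carboxylate (10.18 mmol).
    acid_mmol = 10.18
    reagents_mmol = {
        "N-methylmorpholine": 61.06,
        "1-hydroxybenzotriazole monohydrate": 10.687,
        "O-t-butylhydroxylamine hydrochloride": 11.20,
        "EDC hydrochloride": 20.36,
    }
    for name, mmol in reagents_mmol.items():
        print(f"{name}: {mmol / acid_mmol:.2f} equiv")
    # -> about 6.00, 1.05, 1.10 and 2.00 equivalents, respectively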
EXAMPLE 13
N-(t-Butyloxy)-2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-6-[(N,N-dimethylglycyl)amino]hexanamide (2.17 g, 3.86 mmol) is dissolved in dichloroethane (12 mL) containing ethanol (0.22 mL, 3.86 mmol), and the reaction is cooled to -10° C. Hydrochloric acid gas is bubbled through this solution for 30 minutes. The reaction is sealed, warmed to room temperature and stirred for 2 days. The solution is reduced to 1/2 volume by evaporating solvent, and triturated with ether. The resulting solid is removed and dried in vacuo to provide N-hydroxy-2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-6-[(N,N-dimethylglycyl)amino]hexanamide hydrochloride, m.p. 105°-108° C.
The starting material is prepared as follows:
To a solution of methyl 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-6-aminohexanoate hydrochloride (7.5 g, 16.44 mmol) in methylene chloride (170 mL) is added 1-hydroxybenzotriazole monohydrate (2.64 g, 17.26 mmol), N-methylmorpholine (5.44 mL, 49.34 mmol), and N,N-dimethylglycine (1.86 g, 18.08 mmol), and the reaction is cooled to 0° C. N-[Dimethylaminopropyl]-N'-ethylcarbodiimide hydrochloride (6.30 g, 32.88 mmol) is added at 0° C. The reaction mixture is warmed to room temperature and stirred overnight. The mixture is diluted with methylene chloride and washed with saturated aqueous sodium bicarbonate, and then with brine. The organic layer is dried (Na.sub.2 SO.sub.4), filtered, and the solvent is evaporated. The crude product is purified by silica gel chromatography (10/0.5/0.5 ethyl acetate/methanol/ammonium hydroxide) to provide methyl 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-6-[(N,N-dimethylglycyl)amino]hexanoate (6.04 g).
To a solution of methyl 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-6-[(N,N-dimethylglycyl)amino]hexanoate (3.95 g, 7.82 mmol) in tetrahydrofuran (75 mL) at 0° C. is added 1N lithium hydroxide (15.64 mL, 15.64 mmol). The reaction mixture is warmed to room temperature and stirred overnight. The tetrahydrofuran is removed and the remaining aqueous layer is acidified with 1N hydrochloric acid. The mixture is evaporated to dryness to yield 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-6-[(N,N-dimethylglycyl)amino]hexanoic acid hydrochloride.
To a solution of 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-6-[(N,N-dimethylglycyl)amino]hexanoic acid hydrochloride (4.12 g, 7.82 mmol) in methylene chloride (78 mL) and dimethylformamide (5 mL) is added 1-hydroxybenzotriazole monohydrate (1.26 g, 8.21 mmol), N-methylmorpholine (2.58 mL, 23.45 mmol), and O-t-butylhydroxylamine hydrochloride (1.08 g, 8.60 mmol). The reaction is cooled to 0° C., and N-[dimethylaminopropyl]-N'-ethylcarbodiimide hydrochloride (3.0 g, 15.64 mmol) is added. The reaction mixture is warmed to room temperature and stirred overnight. The mixture is then diluted with methylene chloride and washed with saturated aqueous sodium bicarbonate, and then with brine. The organic layer is dried (Na.sub.2 SO.sub.4), filtered, and the solvent is evaporated. The crude product is purified by silica gel chromatography (10/0.5/0.5 ethyl acetate/methanol/ammonium hydroxide) to provide N-(t-butyloxy)-2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-6-[(N,N-dimethylglycyl)amino]hexanamide.
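In the t-butyl hydroxamate cleavage at the start of example 13 (and likewise in example 12(a)), ethanol is charged in roughly a one-to-one molar ratio with the substrate. The check below is an illustrative aid only, not part of the original procedure; it converts the stated 0.22 mL of ethanol to millimoles using the ordinary density and molecular weight of ethanol, and the variable names are arbitrary.
    # Illustrative only: the ethanol charge in example 13 corresponds to roughly one
    # molar equivalent relative to the 3.86 mmol of t-butyl hydroxamate.
    ethanol_volume_ml = 0.22
    ethanol_density_g_per_ml = 0.789        # typical density of ethanol
    ethanol_mw_g_per_mol = 46.07
    ethanol_mmol = ethanol_volume_ml * ethanol_density_g_per_ml / ethanol_mw_g_per_mol * 1000
    print(round(ethanol_mmol, 1))           # about 3.8 mmol, consistent with the stated 3.86 mmol
    print(round(ethanol_mmol / 3.86, 2))    # about 0.98, i.e. roughly one equivalent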
EXAMPLE 14
(a) To a solution of 4-[[4-methoxybenzenesulfonyl](benzyl)amino]-4-carboxy-tetrahydrothiopyran (413.0 mg, 1.0 mmol) in methylene chloride (10 mL) containing dimethylformamide (80.0 mg, 1.1 mmol) is added a 2N solution of oxalyl chloride in methylene chloride (1.0 mL, 2.0 mmol) at -10° C. The mixture is allowed to warm to 20° C. for 30 minutes. This mixture is added dropwise to a pre-stirred mixture of hydroxylamine hydrochloride (280.0 mg, 4.0 mmol) in tetrahydrofuran (10 mL)/water (1 mL) containing triethylamine (650.0 mg, 6.0 mmol) at 0° C. The reaction mixture is allowed to slowly warm to room temperature and stirring is continued for 1.5 days. The reaction is worked up by partitioning between 1N hydrochloric acid and ethyl acetate. The aqueous layer is removed and repeatedly extracted with ethyl acetate. The combined organic layers are dried (Na.sub.2 SO.sub.4) and the solvent is evaporated. The crude product is purified by silica gel chromatography (2% methanol/methylene chloride) to give 4-[N-hydroxy-carbamoyl]-4-[[4-methoxybenzenesulfonyl](benzyl)amino]-tetrahydrothiopyran, m.p. 179°-181° C.
The starting material is prepared as follows:
A solution of tetrahydrothiopyran-4-one (4.64 g, 40.0 mmol) in methanol (10 mL) is added to a mixture of sodium cyanide (2.0 g, 40.0 mmol) and ammonium chloride (2.36 g, 44.0 mmol) in water (8 mL). The reaction mixture is heated to reflux for 14 hours. The mixture is diluted with water, basified with potassium carbonate, and extracted with diethyl ether. The organic extract is dried (MgSO.sub.4) and filtered. The solution is acidified with hydrochloric acid saturated with methylene chloride. The resulting precipitate is filtered off, providing the 4-amino-4-cyano-tetrahydrothiopyran hydrochloride salt.
A solution of 4-amino-4-cyano-tetrahydrothiopyran (5.4 g, 30.3 mmol) in 6N aqueous hydrochloric acid (250 mL) is heated to reflux for 24 hours. The mixture is triturated by addition of methanol/toluene, and filtered. To the crude product, 4-amino-4-carboxy-tetrahydrothiopyran, is added methanol (40 mL) followed by careful addition of thionyl chloride (3.0 mL, 41.1 mmol). The reaction mixture is heated to reflux for 12 hours, cooled to room temperature, and concentrated in vacuo to a reduced volume. The remaining mixture is triturated with ethyl acetate/diethyl ether, and the product is collected by filtration to give 4-amino-4-carbomethoxy-tetrahydrothiopyran hydrochloride.
To a solution of 4-amino-4-carbomethoxy-tetrahydrothiopyran hydrochloride (3.1 g, 15.0 mmol) in methylene chloride (75 mL) containing triethylamine (3.5 g, 330.0 mmol) is added 4-methoxybenzenesulfonyl chloride (4.1 g, 20.0 mmol) at room temperature. The reaction mixture is stirred at room temperature for 18 hours. The mixture is diluted with water and the organic layer is removed. The aqueous layer is extracted with diethyl ether and the organic extracts are washed with brine, dried (MgSO.sub.4) and the solvent is evaporated. The product is purified by silica gel chromatography (50% ethyl acetate/hexanes) to provide 4-[[4-methoxybenzenesulfonyl]amino]-4-carbomethoxy-tetrahydrothiopyran.
To a solution of 4-[[(4-methoxybenzene)sulfonyl]amino]-4-carbomethoxy-tetrahydrothiopyran (690.0 mg, 2.0 mmol) in dimethylformamide (20 mL) at 0° C. is added sodium hydride (100.0 mg, 2.5 mmol, 60% in oil) and benzyl bromide (0.5 mL, 4.2 mmol). The reaction mixture is allowed to warm to room temperature and stirred for 16 hours.
The mixture is quenched by addition of water and extracted with 50% ethyl acetate/diethyl ether. The combined organic extracts are dried (MgSO.sub.4), filtered, and the solvent is evaporated. The product is purified by silica gel chromatography (50% diethyl ether/hexanes) to provide 4-[[4-methoxybenzenesulfonyl](benzyl)amino]-4-carbomethoxy-tetrahydrothiopyran.
To a solution of 4-[[4-methoxybenzenesulfonyl](benzyl)amino]-4-carbomethoxytetrahydrothiopyran (800.0 mg, 1.9 mmol) in methanol (50 mL) is added 1N sodium hydroxide (25 mL). The mixture is heated to reflux for 10 hours, and then solid sodium hydroxide is added (3.0 g, excess) and refluxing is continued for 18 hours. The mixture is concentrated to a volume of approximately 30 mL and acidified with citric acid (pH=5). The mixture is partitioned between ethyl acetate and water. The organic layer is removed, washed with brine, dried (MgSO.sub.4), and the solvent is evaporated to give 4-[[4-methoxybenzenesulfonyl](benzyl)amino]-4-carboxytetrahydrothiopyran.
(b) Similarly prepared is 4-[N-hydroxy-carbamoyl]-4-[[4-methoxybenzenesulfonyl](benzyl)amino]-tetrahydropyran, m.p. 137°-140° C., by starting with tetrahydropyran-4-one in the first step, and carrying out the subsequent steps as described above.
(c) Similarly prepared is 1-[N-hydroxy-carbamoyl]-1-[[4-methoxybenzenesulfonyl](benzyl)amino]-cyclohexane, m.p. 149°-151° C., by using commercially available 1-aminocyclohexanecarboxylic acid in the second step, and carrying out the subsequent steps as described above.
(d) Similarly prepared is 1-[N-hydroxy-carbamoyl]-1-[[4-methoxybenzenesulfonyl](benzyl)amino]-cyclopentane, m.p. 67.0°-68.0° C., by using commercially available 1-aminocyclopentanecarboxylic acid in the second step, and carrying out the subsequent steps as described above.
(e) Similarly prepared is 1-[N-hydroxy-carbamoyl]-1-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-cyclohexane, m.p. 115° C., by using 1-aminocyclohexanecarboxylic acid in the second step, alkylating 1-[carbomethoxy]-1-[[(4-methoxybenzene)sulfonyl]amino]-cyclohexane with 3-picolyl chloride in the third step, and carrying out the other steps as described above.
(f) Similarly prepared is 1-[N-hydroxy-carbamoyl]-1-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-cyclopropane hydrochloride, m.p. 205°-207° C., starting with 1-amino-1-cyclopropanecarboxylic acid.
EXAMPLE 15
4-[N-t-Butyloxycarbamoyl]-4-[[4-methoxybenzenesulfonyl](benzyl)amino]-1-[benzyl]-piperidine is dissolved in dichloroethane (60 mL) and ethanol (1.0 mL) in a glass sealed tube. Hydrochloric acid gas (from a lecture bottle) is bubbled through the solution for 30 minutes at -10° C. The tube is sealed, gradually warmed to room temperature, and stirred overnight. At this point, hydrochloric acid gas is again bubbled through the reaction mixture as done previously, and stirring is continued at room temperature for an additional 24 hours. The reaction mixture is reduced to 1/3 volume in vacuo and triturated with diethyl ether. The solid is filtered off and dried in vacuo to provide 4-[N-hydroxy-carbamoyl]-4-[[4-methoxybenzenesulfonyl](benzyl)amino]-1-[benzyl]-piperidine, m.p. 135.5°-142° C.
The starting material is prepared as follows:
A mixture of N-carboethoxy-4-piperidone (88.6 g, 517.2 mmol), sodium cyanide (30.0 g, 612.1 mmol) in water (54 mL), ammonium chloride (34.0 g, 635.5 mmol) in water (72 mL), and ammonium hydroxide (76 mL) is heated to 60°-65° C. for 5 hours, and then stirred at room temperature overnight. The resulting solid is filtered off, dissolved in methylene chloride, and washed with a small amount of brine. The organic layer is dried (MgSO.sub.4), concentrated in vacuo to 1/2 volume, and triturated with hexane. The resulting precipitate is collected by filtration and dried under vacuum to give N-carboethoxy-4-amino-4-cyanopiperidine.
A solution of N-carboethoxy-4-amino-4-cyanopiperidine (82.0 g) in water (700 mL) containing concentrated hydrochloric acid (800 mL) is stirred at room temperature for 4 days. The solvent is then evaporated to give 4-amino-4-carboxypiperidine dihydrochloride.
Into a heterogeneous mixture of 4-amino-4-carboxypiperidine dihydrochloride (61.0 g, 0.34 mol) in methanol (600 mL) is bubbled hydrogen chloride gas at room temperature. The reaction mixture is concentrated to dryness in vacuo, dissolved in 1,4-dioxane (200 mL), and concentrated in vacuo. The residue is redissolved in methanol (1600 mL) into which hydrogen chloride gas is bubbled for 45 minutes. The reaction mixture is refluxed for 18 hours. Most of the solvent is then evaporated, and the product is collected by filtration and washed with ethyl acetate to give 4-amino-4-carbomethoxypiperidine dihydrochloride.
To a mixture of 4-amino-4-carbomethoxypiperidine dihydrochloride (6.60 g, 28.7 mmol) and potassium carbonate (18.8 g, 143.5 mmol) in dioxane/water (350 mL/176 mL) at 0° C. is added di-t-butyl dicarbonate (8.14 g, 37.31 mmol) in dioxane (60 mL) over 2 hours. The reaction mixture is warmed to room temperature and stirred for 8 hours. To this mixture is added a solution of 4-methoxybenzenesulfonyl chloride (7.71 g, 37.31 mmol) in dioxane (60 mL) at 0° C. The reaction mixture is stirred at room temperature overnight. An additional portion of 4-methoxybenzenesulfonyl chloride (7.71 g, 37.31 mmol) in dioxane (60 mL) is added to the mixture at 0° C. The reaction mixture is allowed to warm to room temperature and stirred overnight. The mixture is concentrated in vacuo, diluted with water, and extracted with ethyl acetate. The aqueous layer is removed, saturated with sodium chloride, and re-extracted with ethyl acetate. The combined extracts are dried (MgSO.sub.4), and the solvent is evaporated. The crude product is purified by silica gel chromatography (50% ethyl acetate/hexane) to provide 4-[[4-methoxybenzenesulfonyl]amino]-1-[t-butoxycarbonyl]-4-[carbomethoxy]-piperidine, contaminated with a small amount of 4-methoxybenzenesulfonic acid.
To a solution of 4-[[4-methoxybenzenesulfonyl]amino]-1-[t-butoxycarbonyl]-4-[carbomethoxy]-piperidine (4.0 g, 9.30 mmol) in dimethylformamide (150 mL) at 0° C. is added sodium hydride (1.12 g, 28.0 mmol, 60% in oil) followed by benzyl bromide (4.8 g, 28.0 mmol). The reaction mixture is allowed to warm to room temperature for 1 hour. The mixture is quenched with water and extracted with diethyl ether. The organic extract is dried (MgSO.sub.4) and the solvent is evaporated. The crude product is purified by silica gel chromatography (50% ethyl acetate/hexanes) to provide 4-[[4-methoxybenzenesulfonyl](benzyl)amino]-1-[t-butoxycarbonyl]-4-[carbomethoxy]piperidine.
To a solution of 4-[[4-methoxybenzenesulfonyl](benzyl)amino]-1-[t-butoxycarbonyl]-4-[carbomethoxy]-piperidine (1.8 g, 3.47 mmol) in ethyl acetate (10 mL) is added a hydrogen chloride gas saturated methylene chloride solution (15 mL). The reaction mixture is stirred for 4 hours at room temperature.
The mixture is concentrated in vacuo to give 4-[[4-methoxybenzenesulfonyl](benzyl)amino]-4-[carbomethoxy]-piperidine.
To a solution of 4-[[4-methoxybenzenesulfonyl](benzyl)amino]-4-[carbomethoxy]piperidine (1.0 g, 2.39 mmol) in dimethylformamide (160 mL) is added sodium hydride (287.0 mg, 7.18 mmol, 60% in oil) at 0° C., followed by benzyl bromide (450.0 mg, 2.63 mmol). The reaction mixture is slowly warmed to room temperature and stirred overnight. The mixture is quenched with water and extracted with ethyl acetate. The combined organic layers are washed with brine, dried (Na.sub.2 SO.sub.4) and the solvent is evaporated to give 4-[[4-methoxybenzenesulfonyl](benzyl)amino]-1-[benzyl]-4-[carbomethoxy]-piperidine.
A heterogeneous mixture of 4-[[4-methoxybenzenesulfonyl](benzyl)amino]-1-[benzyl]-4-[carbomethoxy]-piperidine (1.2 g, 2.26 mmol) in 50% aqueous sodium hydroxide (10 mL) and methanol (50 mL) is heated to reflux for 16 hours. The methanol is evaporated and the residue is neutralized with 4N hydrochloric acid. The aqueous solution is extracted with ethyl acetate. The combined organic extracts are dried (Na.sub.2 SO.sub.4) and the solvent is evaporated to give 4-[[4-methoxybenzenesulfonyl](benzyl)amino]-1-[benzyl]-4-[carboxy]piperidine.
To a mixture of 4-[[4-methoxybenzenesulfonyl](benzyl)amino]-1-[benzyl]-4-[carboxy]-piperidine (850.0 mg, 1.64 mmol) in methylene chloride (100 mL) containing N-methylmorpholine (0.6 mL, 5.48 mmol) and O-t-butylhydroxylamine hydrochloride (620.0 mg, 4.94 mmol) is added N-[dimethylaminopropyl]-N'-ethylcarbodiimide hydrochloride (1.1 g, 5.74 mmol). The reaction mixture is stirred overnight at room temperature. The mixture is diluted with water and extracted with methylene chloride. The combined organic extracts are dried (Na.sub.2 SO.sub.4) and the solvent is evaporated. The crude product is purified by silica gel chromatography (ethyl acetate) to provide 4-[N-t-butyloxy-carbamoyl]-4-[[4-methoxybenzenesulfonyl](benzyl)amino]-1-[benzyl]-piperidine.
Alternately, 4-[[4-methoxybenzenesulfonyl]amino]-1-[t-butoxycarbonyl]-4-[carbomethoxy]-piperidine is first hydrolyzed with sodium hydroxide to 4-[[4-methoxybenzenesulfonyl]amino]-1-[t-butoxycarbonyl]-4-[carboxy]-piperidine. Treatment with O-t-butylhydroxylamine under conditions described above gives 4-[N-t-butyloxy-carbamoyl]-4-[[4-methoxybenzenesulfonyl](benzyl)amino]-1-[t-butoxycarbonyl]-piperidine. Reaction with 1N hydrochloric acid in ethyl acetate yields 4-[N-t-butyloxy-carbamoyl]-4-[[4-methoxybenzenesulfonyl](benzyl)amino]-piperidine, which is treated with benzyl bromide as described above.
Similarly prepared, starting from 4-[[4-methoxybenzenesulfonyl](benzyl)amino]-4-[carbomethoxy]-piperidine, are the following:
(a) 4-[N-Hydroxy-carbamoyl]-4-[[4-methoxybenzenesulfonyl](benzyl)amino]-1-[dimethylaminoacetyl]-piperidine hydrochloride, m.p. 145° C.;
(b) 4-[N-Hydroxy-carbamoyl]-4-[[4-methoxybenzenesulfonyl](benzyl)amino]-1-[3-picolyl]-piperidine dihydrochloride, m.p. 167° C.;
(c) 4-[N-Hydroxy-carbamoyl]-4-[[4-methoxybenzenesulfonyl](benzyl)amino]-1-[carbomethoxymethyl]-piperidine hydrochloride, m.p. 183.5°-185° C.;
(d) 4-[N-Hydroxy-carbamoyl]-4-[[4-methoxybenzenesulfonyl](benzyl)amino]piperidine trifluoroacetate;
(e) 4-[N-Hydroxy-carbamoyl]-4-[[4-methoxybenzenesulfonyl](benzyl)amino]-1-[t-butoxycarbonyl]-piperidine;
(f) 4-[N-Hydroxycarbamoyl]-4-[[4-methoxybenzenesulfonyl](benzyl)amino]-1-[methylsulfonyl]-piperidine;
(g) 4-[N-Hydroxycarbamoyl]-4-[[4-methoxybenzenesulfonyl](benzyl)amino]-1-[methyl]piperidine hydrochloride, m.p. 185.5°-187° C.;
(h) 4-[N-Hydroxycarbamoyl]-4-[[4-methoxybenzenesulfonyl]amino]-1-[morpholinocarbonyl]piperidine, m.p. 89°-91° C.;
(i) 4-[N-Hydroxycarbamoyl]-4-[[4-methoxybenzenesulfonyl](benzyl)amino]-1-[4-picolyl]piperidine dihydrochloride, m.p. 168° C.
EXAMPLE 16
Ethyl 2-[[4-methoxybenzenesulfonyl](benzyl)amino]acetate (11.20 g, 30.9 mmol) is dissolved in methanol (100 mL). To this solution is added hydroxylamine hydrochloride (4.31 g, 62.0 mmol), followed by the addition of sodium methoxide, freshly prepared from sodium (2.14 g, 93.0 mmol) dissolved in methanol (55 mL). The reaction is stirred overnight at room temperature. The reaction is worked up by partitioning between dilute hydrochloric acid (pH=˜3) and ethyl acetate. The aqueous phase is extracted well with ethyl acetate, the combined organic layers are dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (75% ethyl acetate/hexane) to give N-hydroxy-2-[[4-methoxybenzenesulfonyl](benzyl)amino]acetamide, m.p. 112°-114° C.
The starting material is prepared as follows:
Benzylamine (16.0 mL, 145.2 mmol) is dissolved in chloroform (110 mL), and the solution is cooled to 0° C. To this solution is added 4-methoxybenzenesulfonyl chloride (10.0 g, 48.4 mmol). The reaction is stirred at room temperature for 1 hour, and then refluxed for 1 hour. After cooling back to room temperature, the reaction is washed three times with 4N hydrochloric acid (200 mL), twice with water (100 mL), once with brine (50 mL), then dried (Na.sub.2 SO.sub.4), and the solvent is evaporated to give N-[4-methoxybenzenesulfonyl]-benzylamine.
Sodium hydride (1.56 g of a 50% oil dispersion, 33.0 mmol) is suspended in tetrahydrofuran (85 mL). To this is added a solution of N-[4-methoxybenzenesulfonyl]benzylamine (9.0 g, 32.5 mmol) also in tetrahydrofuran (85 mL), and the reaction is stirred for 30 minutes at room temperature. Then ethyl bromoacetate (5.40 mL, 48.8 mmol) is added, and the reaction is stirred overnight at room temperature. The reaction is quenched with a small amount of water, and all the solvent is removed. The crude mixture is partitioned between ethyl acetate and water, the aqueous phase is extracted several times with ethyl acetate, the combined organic layers are dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (30% ethyl acetate/hexane) to give ethyl 2-[[4-methoxybenzenesulfonyl](benzyl)amino]acetate.
EXAMPLE 17
The following compounds are prepared similarly to Example 16.
(a) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](isobutyl)amino]acetamide, m.p. 133°-134° C., by coupling isobutylamine with 4-methoxybenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(b) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](cyclohexanemethyl)amino]acetamide, m.p. 145°-146° C., by coupling cyclohexanemethylamine with 4-methoxybenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(c) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](cyclohexyl)amino]acetamide, m.p. 148°-149° C., by coupling cyclohexylamine with 4-methoxybenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(d) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](phenethyl)amino]acetamide, m.p. 137°-138° C., by coupling phenethylamine with 4-methoxybenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(e) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](3-methylbutyl)amino]acetamide, m.p. 108° C., by coupling 1-amino-3-methylbutane with 4-methoxybenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(f) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](sec-butyl)amino]acetamide, m.p. 138° C., by coupling (sec)-butylamine with 4-methoxybenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(g) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](tert-butyl)amino]acetamide, m.p. 150°-151° C., by coupling (tert)-butylamine with 4-methoxybenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(h) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](4-fluorobenzyl)amino]acetamide, m.p. 115°-119° C., by coupling 4-fluorobenzylamine with 4-methoxybenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(i) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](4-chlorobenzyl)amino]acetamide, m.p. 121°-123° C., by coupling 4-chlorobenzylamine with 4-methoxybenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(j) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](isopropyl)amino]acetamide, m.p. 139°-141° C., by coupling isopropylamine with 4-methoxybenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(k) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](4-methylbenzyl)amino]acetamide, m.p. 133°-135° C., by coupling 4-methylbenzylamine with 4-methoxybenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(l) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](3-phenyl-1-propyl)amino]acetamide, by coupling 3-phenyl-1-propylamine with 4-methoxybenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(m) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](4-phenylbutyl)amino]acetamide, m.p. 109°-112° C., by coupling 4-phenylbutylamine with 4-methoxybenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(n) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](2-cyclohexylethyl)amino]acetamide, m.p. 143°-144° C., by coupling 2-cyclohexylethylamine with 4-methoxybenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(o) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](4-phenylbenzyl)amino]acetamide, by coupling 4-phenylbenzylamine with 4-methoxybenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(p) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](2,2,2-trifluoroethyl)amino]acetamide, m.p. 142°-143° C., by coupling 2,2,2-trifluoroethylamine with 4-methoxybenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(q) N-Hydroxy-2-[[benzenesulfonyl](isobutyl)amino]acetamide, m.p. 130°-131° C., by coupling isobutylamine with benzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(r) N-Hydroxy-2-[[4-trifluoromethylbenzenesulfonyl](isobutyl)amino]acetamide, m.p. 130°-131° C., by coupling isobutylamine with 4-trifluoromethylbenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(s) N-Hydroxy-2-[[4-chlorobenzenesulfonyl](isobutyl)amino]acetamide, m.p. 126°-127° C., by coupling isobutylamine with 4-chlorobenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(t) N-Hydroxy-2-[[4-methylbenzenesulfonyl](isobutyl)amino]acetamide, m.p. 138°-140° C., by coupling isobutylamine with 4-methylbenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(u) N-Hydroxy-2-[[4-fluorobenzenesulfonyl](isobutyl)amino]acetamide, m.p. 144°-146° C., by coupling isobutylamine with 4-fluorobenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(v) N-Hydroxy-2-[[2-thiophenesulfonyl](isobutyl)amino]acetamide, by coupling isobutylamine with 2-thiophenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(w) N-Hydroxy-2-[[benzenesulfonyl](benzyl)amino]acetamide, m.p. 90°-93° C., by coupling benzylamine with benzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(x) N-Hydroxy-2-[[4-nitrobenzenesulfonyl](isobutyl)amino]acetamide, m.p. 128°-130° C., by coupling isobutylamine with 4-nitrobenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(y) N-Hydroxy-2-[[4-(tert)-butylbenzenesulfonyl](isobutyl)amino]acetamide, m.p. 113°-114° C., by coupling isobutylamine with 4-(tert)-butylbenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(z) N-Hydroxy-2-[[4-methylsulfonylbenzenesulfonyl](isobutyl)amino]acetamide, m.p. 159°-161° C., by coupling isobutylamine with 4-methylsulfonylbenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(aa) N-Hydroxy-2-[[3-trifluoromethylbenzenesulfonyl](isobutyl)amino]acetamide, m.p. 140°-141° C., by coupling isobutylamine with 3-trifluoromethylbenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(bb) N-Hydroxy-2-[[2,4,6-trimethylbenzenesulfonyl](isobutyl)amino]acetamide, m.p. 142°-143° C., by coupling isobutylamine with 2,4,6-trimethylbenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(cc) N-Hydroxy-2-[[2,5-dimethoxybenzenesulfonyl](isobutyl)amino]acetamide, m.p. 50°-53° C., by coupling isobutylamine with 2,5-dimethoxybenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
(dd) N-Hydroxy-2-[[3,4-dimethoxybenzenesulfonyl](isobutyl)amino]acetamide, m.p. 146°-148° C., by coupling isobutylamine with 3,4-dimethoxybenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16.
C., by coupling isobutylamine with 3,4-dimethoxybenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16. (ee) N-Hydroxy-2-[[2,4,6- triisopropylbenzenesulfonyl](isobutyl) amino] acetamide, m.p. 131°- 133° C., by coupling isobutylamine with 2,4,6- triisopropylbenzenesulfonyl chloride in the first step, and carrying out the subsequent steps as described above. (ff) N-Hydroxy-2-[[3,5-dimethylisoxazole-4- sulfonyl(benzyl) amino] acetamide, m.p. 140° C., by coupling benzylamine with 3,5- dimethylisoxazole-4-sulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16. (gg) N-Hydroxy-2-[[2,4-dimethylthiazole-5- sulfonyl(benzyl) amino] acetamide, m.p. 55° C., by coupling benzylamine with 2,4- dimethylthiazole-5-sulfonyl chloride in the first step, and carrying out the subsequent steps as described in example 16. EXAMPLE 18 Ethyl 2-[[4-methoxybenzenesulfonyl](4- methoxybenzyl)amino] acetate (0. 90 g, 2.3 mmol) is dissolved in methanol (20 mL). To this solution is added hydroxylamine hydrochloride (0.80 g, 11.5 mmol), followed by the addition of sodium methoxide (5.2 mL of a 2. 67M solution) . The reaction is stirred overnight at room temperature. The reaction is worked up by partitioning between dilute hydrochloric acid (pH=˜3) and ethyl acetate. The aqueous phase is extracted well with ethyl acetate, the combined organic layers are washed with brine, dried (Na.sub.2 SO. sub.4), and the solvent is evaporated. The product is recrystallized from ether/ethyl acetate to give N-hydroxy-2- [[4-methoxybenzenesulfonyl](4- methoxybenzyl)amino]acetamide, m.p. 134. degree.-135.5° C. The starting material is prepared as follows: Glycine ethyl ester hydrochloride (31.39 g, 225.0 mmol) is dissolved in dioxane (150 mL) and water (150 mL), triethylamine (69.0 mL, 495.0 mmol) is added, and the solution is cooled to 0° C. To this solution is added 4-methoxybenzenesulfonyl chloride (51.15 g, 248.0 mmol) over 10 minutes. The reaction is warmed to room temperature and stirred overnight. The next day the mixture is reduced to one-half volume by evaporating solvent, diluted with 1N sodium hydroxide, and extracted well with ether. The combined organic layers are washed with brine, dried (Na. sub.2 SO.sub.4), and the solvent is evaporated. The product is recrystallized from ether/ethyl acetate/hexanes to give ethyl 2-[[4- methoxybenzenesulfonyl]amino]acetate. To a suspension of sodium hydride (0.906 g, 22.67 mmol) in dimethylformamide (50.0 mL), is added ethyl 2-[[4- methoxybenzenesulfonyl] amino]acetate (4.13 g, 15.11 mmol) and 4- methoxybenzyl chloride (2.17 mL, 15.87 mmol), and the reaction is stirred overnight at room temperature. The reaction is cooled to 0. degree. C., quenched with 1N hydrochloric acid, and extracted well with ether. The combined organic layers are washed with brine, dried (Na.sub. 2 SO.sub.4), and the solvent is evaporated. The product is recrystallized from ether/hexanes to give ethyl 2-[[4- methoxybenzenesulfonyl](4-methoxybenzyl)amino]acetate. EXAMPLE 19 The following compounds are prepared similarly to example 18. (a) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](2- picolyl)amino] acetamide, m.p. 138.5°-139.5° C., by alkylating ethyl 2-[[4- methoxybenzenesulfonyl]amino]acetate with 2- picolyl chloride in the second step, and carrying out the other steps as described in example 18. (b) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](3- picolyl)amino] acetamide, m.p. 
144°-145° C., by alkylating ethyl 2-[[4-methoxybenzenesulfonyl]amino]acetate with 3-picolyl chloride in the second step, and carrying out the other steps as described in example 18. (c) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](piperonyl)amino]acetamide, m.p. 143°-144° C., by alkylating ethyl 2-[[4-methoxybenzenesulfonyl]amino]acetate with piperonyl chloride in the second step, and carrying out the other steps as described in example 18. (d) N-Hydroxy-2-[[4-methoxybenzenesulfonyl](2-piperidinylethyl)amino]acetamide, m.p. 120°-122° C., by alkylating ethyl 2-[[4-methoxybenzenesulfonyl]amino]acetate with N-(2-chloroethyl)-piperidine in the second step, and carrying out the other steps as described in example 18. EXAMPLE 20 (a) N-(t-Butyloxy)-2-[[4-methoxybenzenesulfonyl](2-quinolinylmethyl)amino]acetamide (1.15 g, 2.42 mmol) is dissolved in methylene chloride (30.0 mL) and ethanol (0.20 mL) in a glass sealed tube. Hydrochloric acid gas (from a lecture bottle) is bubbled through the solution for 20 minutes, and then the tube is sealed and stands at room temperature overnight. The next day, additional hydrochloric acid gas is bubbled through the solution for 20 minutes, more ethanol (0.20 mL) is added, and then the tube is sealed and stands at room temperature for two days. After that time, the solvent is removed. The product is purified by silica gel chromatography (5% to 15% methanol/methylene chloride with ˜1% ammonium hydroxide) to give N-hydroxy-2-[[4-methoxybenzenesulfonyl](2-quinolinylmethyl)amino]acetamide, m.p. 177°-178° C. The starting material is prepared as follows: To a suspension of sodium hydride (0.84 g, 35.0 mmol) in dimethylformamide (120.0 mL), is added ethyl 2-[[4-methoxybenzenesulfonyl]amino]acetate (3.19 g, 11.67 mmol) and 2-(chloromethyl)quinoline (2.62 g, 12.26 mmol), and the reaction is stirred for three days at room temperature. Then, additional NaH (0.46 g, 11.67 mmol) is added, and the reaction is heated to 50° C. for 5 hours. The reaction is cooled to 0° C., quenched with water, and extracted well with ether. The combined organic layers are washed with brine, dried (Na.sub.2 SO.sub.4), and the solvent is removed to give ethyl 2-[[4-methoxybenzenesulfonyl](2-quinolinylmethyl)amino]acetate. Ethyl 2-[[4-methoxybenzenesulfonyl](2-quinolinylmethyl)amino]acetate (4.0 g, 9.63 mmol) is dissolved in tetrahydrofuran (70.0 mL). To this solution is added lithium hydroxide (18.0 mL of a 1N aqueous solution, 18.0 mmol), and the reaction is stirred at room temperature overnight. The tetrahydrofuran is evaporated, the reaction is then acidified to pH=˜3 using 1N hydrochloric acid, and extracted well with ethyl acetate. The combined organic layers are dried (Na.sub.2 SO.sub.4), and the solvent is evaporated to give 2-[[4-methoxybenzenesulfonyl](2-quinolinylmethyl)amino]acetic acid hydrochloride. 2-[[4-Methoxybenzenesulfonyl](2-quinolinylmethyl)amino]acetic acid hydrochloride (1.49 g, 3.35 mmol), 1-hydroxybenzotriazole (0.539 g, 3.52 mmol), 4-methylmorpholine (1.55 mL, 14.9 mmol), and O-t-butylhydroxylamine hydrochloride (0.464 g, 3.70 mmol) are dissolved in methylene chloride (50.0 mL), and the reaction is cooled to 0° C. To this solution is added N-[dimethylaminopropyl]-N'-ethylcarbodiimide hydrochloride (1.35 g, 7.04 mmol), and the reaction is allowed to warm up to room temperature and stir overnight.
The reaction is diluted with more methylene chloride, and the organic layer is washed with saturated sodium bicarbonate, brine, dried (MgSO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (1% methanol/methylene chloride) to give N-(t-butyloxy)-2-[[4- methoxybenzenesulfonyl](2-quinolinylmethyl)amino]acetamide. (b) Similarly prepared is N-hydroxy-2-[[4- methoxybenzenesulfonyl](4- picolyl)amino]acetamide, hydrochloride m.p. 193° C., by alkylating ethyl 2-[[4- methoxybenzenesulfonyl]amino] acetate with 4-picolyl chloride in the second step, and carrying out the other steps as described above. EXAMPLE 21 (a) 2-[[4-Methoxybenzenesulfonyl](6-chloropiperonyl)amino] acetic acid (1.87 g, 4.51 mmol) is dissolved in methylene chloride (45.0 mL). To this solution is added oxalyl chloride (0.784 mL, 9.02 mmol) and dimethylformamide (0.35 mL, 4.51 mmol), and the reaction is stirred at room temperature for 60 minutes. Meanwhile, in a separate flask, hydroxylamine hydrochloride (1.25 g, 18.04 mmol) and triethylamine (3.77 mL, 27.06 mmol) are stirred in tetrahydrofuran (20.0 mL) and water (5.0 mL) at 0° C. for 15 minutes. After 60 minutes, the methylene chloride solution is added in one portion to the second flask, and the combined contents are stirred overnight as the flask gradually warms up to room temperature. The reaction is then diluted with acidic water (pH=. about.3) , and extracted several times with ethyl acetate. The combined organic layers are dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The product is recrystallized from ethyl acetate/methanol/acetone to give N- hydroxy-2-[[4- methoxybenzenesulfonyl] (6-chloropiperonyl)amino]acetamide, m.p. 168. degree.-169° C. The starting material is prepared as follows: To a suspension of sodium hydride (1.08 g, 27.06 mmol) in dimethylformamide (180.0 mL), is added ethyl 2-[[4- methoxybenzenesulfonyl]amino]acetate (4.93 g, 18.04 mmol) and 6- chloropiperonyl chloride (3.88 g, 19.0 mmol), and the reaction is stirred overnight at room temperature. The reaction is cooled to 0. degree. C., quenched with 1N hydrochloric acid, and extracted well with ether. The combined organic layers are washed with brine, dried (Na.sub. 2 SO.sub.4), and the solvent is evaporated. The product is recrystallized from ether/hexanes to give ethyl 2-[[4- methoxybenzenesulfonyl](6- chloropiperonyl)amino]acetate. Ethyl 2-[[4-methoxybenzenesulfonyl](6- chloropiperonyl)amino] acetate (2.12g, 4.79 mmol) is dissolved in tetrahydrofuran (40.0 mL). To this solution is added lithium hydroxide (10.0 mL of a 1N aqueous solution, 10. 0 mmol), and the reaction is stirred at room temperature overnight. The tetrahydrofuran is evaporated, the reaction is then acidified to pH =. about.3 using 1N hydrochloric acid, and extracted well with ethyl acetate. The combined organic layers are dried (Na.sub.2 SO. sub.4), and the solvent is evaporated to give 2- [[4- methoxybenzenesulfonyl](6- chloropiperonyl)amino]acetic acid. (b) Similarly prepared is N-hydroxy-2-[[4- methoxybenzenesulfonyl](3,4, 5-trimethoxybenzyl)amino]acetamide, m.p. 116. degree.-118° C., by alkylating ethyl 2-[[4- methoxybenzenesulfonyl] amino]acetate with 3,4,5- trimethoxybenzyl chloride in the second step, and carrying out the other steps as described above. (c) Similarly prepared is N-hydroxy-2-[[4- methoxybenzenesulfonyl](3- methoxybenzyl)amino]acetamide, m.p. 
118° - 119° C., by alkylating ethyl 2-[[4- methoxybenzenesulfonyl]amino] acetate with 3- methoxybenzyl chloride in the second step, and carrying out the other steps as described above. EXAMPLE 22 Ethyl 2-[[4-methoxybenzenesulfonyl](2-[4- morpholino]ethyl) amino] acetate (7.1 g, 18.4 mmol) is dissolved in ethanol (100 mL), followed by the addition of sodium spheres (1.1 g). To this solution is added hydroxylamine hydrochloride (2.47 g, 35.5 mmol). The reaction is refluxed overnight. The reaction is worked up by removing most of the solvent, and partitioning between saturated sodium bicarbonate and ethyl acetate. The aqueous phase is extracted well with ethyl acetate, the combined organic layers are washed with brine, dried (MgSO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (80% ethyl acetate/16% methanol/4% acetic acid). The solvent is removed to give the product containing residual acetic acid. The product is partitioned between ethyl acetate and water (pH =7.1), the organic phase is dried (MgSO.sub.4), and the solvent is concentrated and then triturated with ether to give N-hydroxy-2-[[4- methoxybenzenesulfonyl](2- [4-morpholino]ethyl)amino]acetamide, m.p. 108. degree.-112° C. The starting material is prepared as follows: Ethyl 2-[[4-methoxybenzenesulfonyl]amino]acetate (13.7 g, 50.0 mmol) is dissolved in ethanol (500 mL), followed by the addition of sodium spheres (2.5 g, 109.0 mmol). To this solution is added N-(2- chloroethyl) morpholine hydrochloride (10.0 g, 53.7 mmol), the reaction is stirred at room temperature for 2 hours, and then refluxed for 1.5 hours. The reaction is partitioned between ethyl acetate and brine. The aqueous phase is extracted well with ethyl acetate, the combined organic layers are dried (MgSO.sub.4), and the solvent is evaporated to give ethyl 2-[[4- methoxybenzenesulfonyl](2-[4-morpholino]ethyl)amino]acetate. EXAMPLE 23 N-Hydroxy-2-[[4-aminobenzenesulfonyl](isobutyl)amino]acetamide, m. p. 50°-55° C., is obtained by hydrogenation of N- hydroxy-2- [[4-nitrobenzenesulfonyl](isobutyl)amino]acetamide, m.p. 128. degree.-130. degree., using 10% palladium on carbon. The starting material is prepared according to example 16 by coupling isobutylamine and 4-nitrobenzenesulfonyl chloride in the first step thereof. EXAMPLE 24 N-Hydroxy-2-[[4- dimethylaminobenzenesulfonyl](isobutyl)amino] acetamide, m.p. 127°- 129° C., is obtained by methylation of N- hydroxy-2-[[4- aminobenzenesulfonyl](isobutyl)amino]acetamide using the procedure from Synthesis p. 709, 1987. EXAMPLE 25 Ethyl 2-[[4-hexyloxybenzenesulfonyl](isobutyl)amino]acetate (1. 22 g, 3.05 mmol) is dissolved in methanol (15 mL). To this solution is added hydroxylamine hydrochloride (0.43 g, 6.11 mmol), followed by the addition of sodium methoxide, freshly prepared from sodium (0.35 g, 15.3 mmol) dissolved in methanol (5 mL). The reaction is stirred for 36 hours at room temperature. The reaction is worked up by partitioning between dilute hydrochloric acid (pH=˜3) and ethyl acetate. The aqueous phase is extracted well with ethyl acetate, the combined organic layers are dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The product is crystallized from hexane/ethyl acetate and collected by filtration to give N-hydroxy-2-[[4- hexyloxybenzenesulfonyl](isobutyl)amino]acetamide, m.p. 108°-110. degree. C. The starting material is prepared as follows: A solution of ethanethiol (15 mL) and methylene chloride (15 mL) is cooled to 0° C. 
Aluminum trichloride (9.62 g, 72.2 mmol) is added (the solution turns green), and the reaction is warmed to room temperature. Ethyl 2-[[4-methoxybenzenesulfonyl](isobutyl)amino]acetate (4.75 g, 14.44 mmol) is added in methylene chloride (5 mL), and the reaction is stirred for 3.5 hours at room temperature. The reaction is then slowly quenched with water, and the crude reaction is partitioned between water and methylene chloride. The aqueous layer is extracted well with methylene chloride, the combined organic layers are dried (Na. sub.2 SO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (25% to 50% ethyl acetate/hexane) to give ethyl 2-[[4-hydroxybenzenesulfonyl](isobutyl)amino]acetate. Ethyl 2-[[4-hydroxybenzenesulfonyl](isobutyl)amino]acetate (1.0 g, 3. 17 mmol) is dissolved in dimethylformamide (16 mL). Cesium carbonate (1. 03 g, 3.17 mmol) is added, followed by 1-iodohexane (0.47 mL, 3.17 mmol), and the reaction is stirred overnight at room temperature. The reaction is then partitioned between water and ethyl acetate, the aqueous layer is extracted well with ethyl acetate, the combined organic layers are dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (10% ethyl acetate/hexane) to give ethyl 2-[[4- hexyloxybenzenesulfonyl](isobutyl) amino]acetate. EXAMPLE 26 The following compounds are prepared similarly to example 25. (a) N-Hydroxy-2-[[4- ethoxybenzenesulfonyl](isobutyl)amino] acetamide, by using ethyl iodide in the cesium carbonate alkylation step, and carrying out the subsequent steps as described in example 25. (b) N-Hydroxy-2-[[4- butyloxybenzenesulfonyl](isobutyl)amino] acetamide, m.p. 125°-127. degree. C., by using iodobutane in the cesium carbonate alkylation step, and carrying out the subsequent steps as described in example 25. (c) N-Hydroxy-2-[[4-(3- methyl)butyloxybenzenesulfonyl] (isobutyl)amino] acetamide, m.p. 93° -96° C., by using 1-iodo-3- methylbutane in the cesium carbonate alkylation step, and carrying out the subsequent steps as described in example 25. (d) N-Hydroxy-2-[[4- heptyloxybenzenesulfonyl](isobutyl)amino] acetamide, m.p. 120°-123. degree. C., by using 1-iodoheptane in the cesium carbonate alkylation step, and carrying out the subsequent steps as described in example 25. (e) N-Hydroxy-2-[[4- (cyclohexylmethoxy)benzenesulfonyl] (isobutyl) amino]acetamide, m.p. 75. degree.-80° C., by using cyclohexylmethyl bromide in the cesium carbonate alkylation step, and carrying out the subsequent steps as described in example 25. (f) N-Hydroxy-2-[[4- isopropyloxybenzenesulfonyl](isobutyl) amino] acetamide, m.p. 65°- 66° C., by using isopropyl bromide in the cesium carbonate alkylation step, and carrying out the subsequent steps as described in example 25. (g) N-Hydroxy-2-[[4- ethoxyethoxybenzenesulfonyl](isobutyl) amino] acetamide, m.p. 111°- 114° C., by using 2-bromoethyl ethyl ether in the cesium carbonate alkylation step, and carrying out the subsequent steps as described in example 25. EXAMPLE 27 (a) N-(t-butyloxy)-2-[[4-methoxybenzenesulfonyl](benzyl)amino]- 2- [(2- methyl-5-tetrazolyl)methyl]acetamide (0.77 g, 1.55 mmol) is dissolved in methylene chloride (2.0 mL) and ethanol (0.1 mL) in a glass sealed tube, and the reaction is cooled to 0° C. Hydrochloric acid gas (from a lecture bottle) is bubbled through the solution for 20 minutes, and then the tube is sealed at room temperature for 3 days. 
After that time, the solvent is removed, and the reaction is partitioned between ethyl acetate and saturated sodium bicarbonate. The organic phase is dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (2% methanol/methylene chloride) to give N-hydroxy-2-[[4-methoxybenzenesulfonyl](benzyl)amino]-2-[(2-methyl-5-tetrazolyl)methyl]acetamide, m.p. 72°-75° C. The starting material is prepared as follows: D-asparagine (13.2 g, 100.0 mmol) is dissolved in dioxane (175.0 mL) and water (125.0 mL), triethylamine (21.0 mL, 150.0 mmol) is added, and the solution is cooled to 0° C. To this solution is added 4-methoxybenzenesulfonyl chloride (22.7 g, 110.0 mmol) over 10 minutes. The reaction is warmed to room temperature and stirred for 3 days. The precipitate is then filtered off, the filtrate is acidified to pH=˜4, and extracted well with ethyl acetate. A first crop of pure product precipitates from the ethyl acetate and is collected by filtration. A second crop is obtained by evaporating off the ethyl acetate, and rinsing the solid obtained with water to remove inorganic salts. The two crops are combined to give N-[4-methoxybenzenesulfonyl]-(D)-asparagine. N-[4-Methoxybenzenesulfonyl]-(D)-asparagine (10.1 g, 33.3 mmol) is dissolved in dimethylformamide (167.0 mL). Cesium carbonate (5.43 g, 16.66 mmol) is added, followed by the addition of methyl iodide (2.22 mL, 33.3 mmol), and the reaction is stirred overnight. The reaction is then diluted with saturated ammonium chloride (366.0 mL), and extracted well with ethyl acetate. The combined organic extracts are washed with brine, dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The crude product is recrystallized from toluene to provide N-[4-methoxybenzenesulfonyl]-(D)-asparagine methyl ester. To a suspension of N-[4-methoxybenzenesulfonyl]-(D)-asparagine methyl ester (8.54 g, 27.0 mmol) in methylene chloride (47.0 mL) is added pyridine (10.9 mL, 135.0 mmol). Para-toluenesulfonyl chloride (10.3 g, 54.0 mmol) is added, and the reaction mixture is allowed to stand without stirring at room temperature overnight. The next day, saturated sodium bicarbonate is added (125.0 mL), and the mixture is stirred for 1 hour. The mixture is then diluted with water and extracted well with ethyl acetate. The combined organic extracts are washed with brine, dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The crude product is recrystallized from 20% tetrahydrofuran/methanol to provide methyl 2(R)-[[4-methoxybenzenesulfonyl]amino]-4-cyano-propionate. To a suspension of sodium hydride (0.93 g, 23.2 mmol) in dimethylformamide (95.0 mL), is added methyl 2(R)-[[4-methoxybenzenesulfonyl]amino]-4-cyano-propionate (6.92 g, 23.2 mmol) in dimethylformamide (10.0 mL). After stirring at room temperature for 20 minutes, benzyl bromide (3.1 mL, 25.5 mmol) is added, and the reaction is stirred overnight at room temperature. The reaction is then partitioned between ethyl acetate and acidic water (pH=˜5), the organic layer is dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (40% ethyl acetate/hexane) to give methyl 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-4-cyano-propionate. To a solution of methyl 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-4-cyanopropionate (1.34 g, 3.47 mmol) in dimethylformamide (5.4 mL) is added triethylamine hydrochloride (0.95 g, 6.93 mmol) and sodium azide (0.45 g, 6.93 mmol).
The reaction is stirred at 110° C. overnight. The next day, the solvent is evaporated, the residue is acidified with 1N hydrochloric acid (16.0 mL), and extracted well with ethyl acetate. The combined organic extracts are washed with brine, dried (Na.sub.2 SO.sub.4) , and the solvent is evaporated to yield methyl 2(R)-[[4- methoxybenzenesulfonyl](benzyl)amino]-2-[(5-tetrazolyl) methyl]acetate. This crude tetrazole is dissolved in dimethylformamide (17.4 mL) . Cesium carbonate (0.56 g, 1.73 mmol) is added, followed by the addition of methyl iodide (0.23 mL, 3.47 mmol), and the reaction is stirred overnight. The reaction is then diluted with brine and extracted well with ethyl acetate. The combined organic extracts are washed with brine, dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The product is purified by silica gel chromatography (40% ethyl acetate/hexane) to give separately the two regioisomers: methyl 2(R)-[[4- methoxybenzenesulfonyl] (benzyl)amino]-2-[(1-methyl-5- tetrazolyl)methyl]acetate (0.50 g); and methyl 2(R)-[[4- methoxybenzesulfonyl](benzyl)amino]-2-[(2-methyl-5- tetrazolyl)methyl]acetate. Methyl 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-2-[(2- methyl-5- tetrazolyl)methyl]acetate (1.0 g, 2.27 mmol) is dissolved in tetrahydrofuran (11.3 mL) and water (11.3 mL). To this solution is added lithium hydroxide hydrate (0.095 g, 2.27 mmol), and the reaction is stirred at room temperature for 2 hours. The reaction is then acidified to pH=˜3 using 1N hydrochloric acid, and extracted well with ethyl acetate. The combined organic extracts are washed with brine, dried (Na. sub.2 SO.sub.4), and the solvent is evaporated to provide 2(R)-[[4- methoxybenzenesulfonyl](benzyl)amino]-2-[(2-methyl-5- tetrazolyl)methyl] acetic acid (0.96 g). 2(R)-[[4-methoxybenzenesulfonyl](benzyl)amino]-2-[(2-methyl-5- tetrazolyl)methyl]acetic acid (0.96 g, 2.24 mmol), 1- hydroxybenzotriazole (0.30 g, 2.24 mmol), 4-methylmorpholine (0.86 mL, 7. 89 mmol), and O-t-butylhydroxylamine hydrochloride (0.30 g, 2.24 mmol) are dissolved in methylene chloride (75.0 mL). N-[dimethylaminopropyl]- N'-ethylcarbodiimide hydrochloride (0.86 g, 4.48 mmol) is added, and the reaction is stirred overnight. The reaction is then diluted with water and extracted with methylene chloride. The combined organic extracts are washed with brine, dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The crude product is purified by silica gel chromatography (50% ethyl acetate/hexane) to give N-(t-butyloxy)-2-[[4- methoxybenzenesulfonyl](benzyl)amino-2-[(2-methyl-5-tetrazolyl) methyl] acetamide. (b) Similarly prepared is the other tetrazole regioisomer, N- hydroxy- 2-[[4-methoxybenzenesulfonyl](benzyl)amino]-2-[(1-methyl-5- tetrazolyl) methyl]acetamide, m.p. 92°-96° C., by completing the synthesis as described above. EXAMPLE 28 Oxalyl chloride (106 mL, 1.22 mol) is added over 1 hour to dimethylformamide (92 mL) in methylene chloride (1250 mL) at 0° C. To this is added a solution of 2(R)-[[4-methoxybenzenesulfonyl](3- picolyl) amino]-3-methylbutanoic acid hydrochloride (248 g, 0.6 mol) in dimethylformamide (450 mL) over 1 hour, maintaining the temperature at 0. degree. C. This solution is stirred an additional 2 hours at room temperature, and then added dropwise to a mixture of hydroxylamine (460 g of a 50% aqueous solution, 6.82 mol) in tetrahydrofuran (2400 mL). The reaction is stirred an additional 3 hours at 5° C., and then at room temperature overnight. 
The reaction mixture is filtered, the organic layer is collected, and the solvent is evaporated. The crude product is re-dissolved in methylene chloride (2 L), washed with water (2×1 L), saturated sodium bicarbonate (4×1 L), brine (1 L), dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The product is dissolved in ethyl acetate (700 mL) and diluted with ether (1400 mL) to induce precipitation. The pure product is collected by filtration to provide N- hydroxy-2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-3- methylbutanamide. The starting material is prepared as follows: To a solution of D-valine (2000 g, 17.09 mol) in water (16.9 L) and acetone (9.5 L), cooled to 5° C., is added triethylamine (4769 mL, 34.22 mol), and the reaction is stirred for 30 minutes. Then a solution of 4-methoxybenzenesulfonyl chloride (3524 g, 18.48 mol) in acetone (7.4 L) is added over 30 minutes, and the reaction is stirred at room temperature overnight. Most of the acetone is evaporated off, and the pH is adjusted to pH=8.25 with 6N sodium hydroxide. The crude product is washed with toluene (2×10 L), and then the pH is re- adjusted to pH=2.2 with 6N hydrochloric acid. The mixture is then extracted with methylene chloride (3×12 L), the combined organic layers are washed with 2N hydrochloric acid, water, dried (Na.sub.2 SO. sub.4), and the solvent is evaporated to provide N-[4- methoxybenzenesulfonyl]-(D)-valine. To a solution of N-[4-methoxybenzenesulfonyl]-(D)-valine (8369 g, 29. 13 mol) in methanol (30 L) at 5° C. is added thionyl chloride (2176 mL, 29.7 mol) over 2.5 hours. After stirring for 3 hours at 5. degree. C., the reaction is stirred for 36 hours at room temperature. Most of the solvent is evaporated, and the chide product is dissolved in toluene (80 L). The toluene layer is then washed with water (20 L), saturated sodium bicarbonate (20 L), water again (20 L), 2N hydrochloric acid (20 L), brine (20 L), dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The solid obtained is dissolved in ethyl acetate (8 L) and heptane (16 L) is added to induce crystallization. The precipitated product is collected by filtration to provide methyl 2(R)-[[4- methoxybenzenesulfonyl]amino]-3-methylbutanoate. To a solution of methyl 2(R)-[[4-methoxybenzenesulfonyl]amino]- 3- methylbutanoate (1662 g, 5.52 mol) in dimethylformamide (10.9 L) is added 3-picolyl chloride hydrochloride (947.3 g, 5.77 mol) followed by powdered potassium carbonate (2409.9 g, 17.36 mol). The reaction mixture is stirred at room temperature for 2 days. At that time, additional quantities of 3-picolyl chloride hydrochloride (95 g) and powdered potassium carbonate (241 g) are added, and the reaction is stirred for 3 more days. The solids are then filtered away, the crude product is poured into water (22 L), and the pH is adjusted to pH=8 with 6N sodium hydroxide. This solution is extracted well with toluene (4×10 L), the combined organic layers are washed with water (2×12 L), and then with 6N hydrochloric acid (3×1600 mL). This aqueous layer is then re-adjusted to pH=8 with 6N sodium hydroxide, extracted with toluene (4×10 L), dried (Na.sub.2 SO.sub.4), and the solvent is evaporated. The oil obtained is re-dissolved in ethyl acetate (12 L), cooled to 5. degree. C., and to this is added methanolic HCl (834 mL). After stirring for 2 hours, the precipitated product is collected by filtration to give methyl 2(R)-[[4-methoxybenzenesulfonyl](3- picolyl)amino]-3- methylbutanoate hydrochloride. 
Methyl 2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-3-methylbutanoate hydrochloride (7164 g, 16.7 mol) is added to a solution of water (27 L) and concentrated hydrochloric acid (9 L), and heated to 120° C. for 3 days. After cooling down to room temperature, charcoal (350 g) is added, stirring is continued for 45 minutes, the reaction is filtered, and the solvent is evaporated. The crude solid is re-dissolved in methanol (7.1 L) and ethyl acetate (73 L), and cooled to 3° C. for 2 hours. The precipitated product is collected by filtration to give 2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-3-methylbutanoic acid hydrochloride. EXAMPLE 29 N-Benzyloxy-2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-3-methylbutanamide is reacted with hydrogen in the presence of 10% palladium on charcoal catalyst at room temperature and atmospheric pressure to yield N-hydroxy-2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-3-methylbutanamide. The starting material is prepared as follows: 2(R)-[[4-Methoxybenzenesulfonyl](3-picolyl)amino]-3-methylbutanoic acid hydrochloride is reacted with O-benzylhydroxylamine hydrochloride under conditions described for reaction with O-t-butylhydroxylamine hydrochloride to yield N-(benzyloxy)-2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-3-methyl-butanamide, m.p. 74.5°-76° C. EXAMPLE 30 N-(t-Butyloxy)-2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-3(R)-(3-picolyloxy)butanamide (1.3 g, 2.4 mmol) is dissolved in methylene chloride (50 mL) containing ethanol (0.14 mL, 2.4 mmol) in a round bottom flask, and the reaction is cooled to -10° C. Hydrochloric acid gas (from a lecture bottle) is bubbled through for 20 minutes. The reaction is sealed, allowed to slowly warm to room temperature, and stirred for two days. The solvent is reduced to 1/3 the volume by evaporation and the residue is triturated with ether. The mixture is filtered, the filter cake is removed and dried in vacuo to provide N-hydroxy-2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-3(R)-(3-picolyloxy)butanamide dihydrochloride as a white solid; [α].sub.D.sup.25 =+35.26° (c=5.58, DMSO). The starting material is prepared as follows: To a solution of D-threonine (5.0 g, 0.042 mol) in water (50 mL) and dioxane (50 mL) containing triethylamine (8.9 mL, 0.063 mol) at room temperature is added 4-methoxybenzenesulfonyl chloride (9.54 g, 0.046 mol). The reaction mixture is stirred overnight at room temperature. Most of the dioxane is evaporated off, and the pH is adjusted to pH=2 with 1N HCl. The mixture is then extracted with ethyl acetate. The combined organic extracts are washed with brine, dried (Na.sub.2 SO.sub.4), and concentrated in vacuo to provide N-[4-methoxybenzenesulfonyl]-(D)-threonine. N-[4-Methoxybenzenesulfonyl]-(D)-threonine (4.0 g, 13.84 mmol), 1-hydroxybenzotriazole (1.87 g, 13.84 mmol), 4-methylmorpholine (7.9 mL, 69.2 mmol), and O-t-butylhydroxylamine hydrochloride (5.22 g, 41.52 mmol) are dissolved in methylene chloride (100 mL). To this solution is added N-[dimethylaminopropyl]-N'-ethylcarbodiimide hydrochloride (3.45 g, 17.99 mmol), and the reaction is stirred overnight. The mixture is then diluted with water and extracted with methylene chloride. The combined organic extracts are washed with brine, dried (Na.sub.2 SO.sub.4), and concentrated in vacuo. The crude product is purified by silica gel chromatography (ethyl acetate) to give N-(t-butyloxy)-2(R)-[[4-methoxybenzenesulfonyl]amino]-3(R)-hydroxybutanamide.
To a solution of N-(t-butyloxy)-2(R)-[[4-methoxybenzenesulfonyl]amino]-3(R)-hydroxybutanamide (3.04 g, 8.44 mmol) in dimethylformamide (150 mL) is added 3-picolyl chloride hydrochloride (1.45 g, 8.87 mmol) followed by potassium carbonate (11.65 g, 84.4 mmol). The reaction mixture is stirred at room temperature overnight, then heated to 45° C. for 5 hours. An additional amount of 3-picolyl chloride hydrochloride (692.0 mg, 4.23 mmol) is added at this point. The reaction mixture is stirred at 45° C. for 10 hours. The reaction mixture is diluted with water and extracted with ethyl acetate. The combined organic extracts are washed with brine, dried (Na.sub.2 SO.sub.4), and concentrated in vacuo. The crude product is purified by silica gel chromatography (ethyl acetate, then 5% methanol/methylene chloride) to give N-(t-butyloxy)-2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)amino]-3(R)-(3-picolyloxy)butanamide. EXAMPLE 31 (c) Preparation of 3000 capsules each containing 25 mg of the active ingredient, for example, N-hydroxy-2(R)-[[4-methoxybenzenesulfonyl](3-picolyl)-amino]-3-methylbutanamide hydrochloride.
______________________________________
Active ingredient                        75.00 g
Lactose                                 750.00 g
Avicel PH 102
  (microcrystalline cellulose)          300.00 g
Polyplasdone XL
  (polyvinylpyrrolidone)                 30.00 g
Purified water                              q.s.
Magnesium stearate                        9.00 g
______________________________________
The active ingredient is passed through a No. 30 hand screen. The active ingredient, lactose, Avicel PH 102 and Polyplasdone XL are blended for 15 minutes in a mixer. The blend is granulated with sufficient water (about 500 mL), dried in an oven at 35° C. overnight, and passed through a No. 20 screen. Magnesium stearate is passed through a No. 20 screen, added to the granulation mixture, and the mixture is blended for 5 minutes in a mixer. The blend is encapsulated in No. 0 hard gelatin capsules each containing an amount of the blend equivalent to 25 mg of the active ingredient.
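As a quick arithmetic check on the batch quantities quoted above (this check is not part of the patent text, only an illustrative sketch), the per-capsule amounts follow by dividing each batch weight by the 3000-capsule lot size; the granulating water is driven off on drying and so does not count toward the fill weight.

```python
# Per-capsule breakdown for the 3000-capsule batch of Example 31.
# Batch weights are the values quoted in the example; water is removed on drying.

batch_size = 3000  # capsules
batch_g = {
    "active ingredient": 75.00,
    "lactose": 750.00,
    "Avicel PH 102 (microcrystalline cellulose)": 300.00,
    "Polyplasdone XL (polyvinylpyrrolidone)": 30.00,
    "magnesium stearate": 9.00,
}

for name, grams in batch_g.items():
    mg_per_capsule = grams / batch_size * 1000.0
    print(f"{name}: {mg_per_capsule:.1f} mg per capsule")

# 75.00 g of active ingredient / 3000 capsules = 25.0 mg per capsule,
# matching the stated dose; the total dry fill is about 388 mg per capsule.
print(f"total fill: {sum(batch_g.values()) / batch_size * 1000.0:.0f} mg per capsule")
```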
The UNESCO World Heritage Site programme catalogues, names and conserves sites of outstanding cultural or natural importance to the common heritage of humanity. There are 1,073 sites currently listed, including cultural and natural sites as well as mixed properties. Around 20 new places make the list each year. London is one of the few cities in the world that can lay claim to having 4 separate UNESCO World Heritage sites. These sites represent the most significant cultural properties in the greater London area:
Palace of Westminster, Westminster Abbey and Saint Margaret's Church
Together these historic buildings showcase the growth of the English monarchy and have been the setting for many of the events that have shaped the British nation. The Palace of Westminster was originally the site of a royal palace, and the primary London residence of English monarchs from the 11th century until 1512, when a fire destroyed much of the complex. Today it's more commonly called the Houses of Parliament, as it's home to the British parliament. Its elaborate clock tower, known as Big Ben, helps make it one of the most popular tourist attractions in London. Many famous historical events occurred there, including the failed Gunpowder Plot of 1605 and the subsequent execution of Guy Fawkes and his fellow conspirators. In 1812, Prime Minister Spencer Perceval was assassinated there, still the only British prime minister to have met that fate. With a rich history of royal coronations, burials and weddings, Westminster Abbey is one of the most identifiable churches in the world. People worldwide watched the wedding of the Duke and Duchess of Cambridge held there in April 2011. It is also a great study in the phases of English Gothic art, as it has been renovated and added to over the past 9 centuries. The Abbey was home to Benedictine monks until the 1500s, when they were finally removed by Elizabeth I. Several buildings from this period have survived, including the Chapter House, the great dormitory (now the Abbey Library and the Great Hall of Westminster School), the monks' gardens and the cloisters. Their influence can also be seen in the existence of Saint Margaret's Church. Distracted by the locals attempting to attend their services, the monks established Saint Margaret's as a separate place of worship for the Abbey's neighbors. Several notables are buried at Saint Margaret's, including Sir Walter Raleigh, who was efficiently tried, executed and buried in Westminster.
Royal Botanic Gardens at Kew
At Kew Gardens you'll find the world's largest, most diverse collection of plants. Since 1759 Kew Gardens has served an important role in the understanding of the plant kingdom. Their Millennium Seed Bank contains seeds from thousands of plant species for reintroduction to their natural habitats or for scientific study. Kew Gardens' glasshouses allow visitors to experience different environments. The Temperate House is the garden's largest, with specimens including the world's tallest indoor plant. In stark contrast, the Bonsai House displays miniature trees. Other glasshouses include the Davies Alpine House, the Evolution House, the historic Palm House and Rose Garden, the Princess of Wales Conservatory, the Secluded Garden and the Waterlily House. The Treetop Walkway offers one of the most unique experiences. At 18 metres above ground, it allows visitors a bird's eye view of the forest. During your scenic journey you might feel the structure swaying slightly in the breeze.
The Tower of London
The Tower of London lies on the bank of the River Thames and was first built by William the Conqueror in the 11th century as a palace and royal residence. The Tower of London has played an important role in British history. Redeveloped over the years, with evolving building techniques, its use has changed many times. While most castles were used to imprison people for short lengths of time, the Tower of London gained a reputation for torture and imprisonment. It held important prisoners (like the future Elizabeth I), common soldiers and prisoners of war as late as World War II. It is also home to the Crown Jewels, and you can see the largest cut diamond in the world there. The Tower of London has long been said to be haunted by spirits. Most famously, the ghost of Anne Boleyn is said to walk around the White Tower holding her head under her arm.
Maritime Greenwich
Home of the Royal Greenwich Observatory and the National Maritime Museum, Maritime Greenwich is an interesting place to visit. One of the most popular things for tourists to do is stand astride the prime meridian - the Earth's line of 0 degrees longitude - with a foot in both the eastern and western hemispheres at the same time. You can also witness the ball drop at the top of the Greenwich Observatory at 1pm daily, a tradition which has occurred every day since 1833. The Greenwich Observatory is now a museum containing John Harrison's original timekeeping devices, which helped solve the problem of determining longitude at sea and secured Greenwich's central role in timekeeping as the home of Greenwich Mean Time. The Observatory also contains a planetarium housing Britain's first digital planetarium projector. Also in Maritime Greenwich you'll find the National Maritime Museum, which houses many important historical and nautical artifacts.
https://www.visitbritain.com/ae/ar/node/1091
I got 3.93 × 10^3 g of CaCO3 (about 3.93 kg). The reaction is CaCO3(s) → CaO(s) + CO2(g) under Δ, where Δ indicates high heat; that is, CaCO3 (calcium carbonate) has decomposed into CaO(s) (calcium oxide) and CO2(g) (gaseous carbon dioxide). You can start by assuming CO2(g) is an ideal gas. At STP, we are at a temperature of 0 °C, or 273.15 K, and 1 atm of pressure. We know the volume produced is 881 L of gas. We are not told anything else, so we have to determine the volume of 1 mol (the molar volume) of ideal gas to figure out how many moles of CO2 we made. Back-calculation then gives the moles of CaCO3 needed, and therefore the mass using its molar mass, where molar mass is just the sum of the atomic masses of each atom in the compound.
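For readers who want to reproduce the arithmetic, here is a minimal Python sketch of the calculation. It assumes a molar volume of 22.414 L/mol at STP (0 °C, 1 atm) and a molar mass of 100.09 g/mol for CaCO3; these are standard constants, not values quoted in the answer above.

```python
# Back-calculate the mass of CaCO3 needed to liberate 881 L of CO2 at STP,
# assuming ideal-gas behaviour and the 1:1 mole ratio in CaCO3 -> CaO + CO2.

V_CO2 = 881.0          # L of CO2 collected at STP
V_molar = 22.414       # L/mol, molar volume of an ideal gas at STP (0 °C, 1 atm)
M_CaCO3 = 100.09       # g/mol, molar mass of calcium carbonate

n_CO2 = V_CO2 / V_molar          # moles of CO2 produced
n_CaCO3 = n_CO2                  # 1 mol of CaCO3 gives 1 mol of CO2
mass_CaCO3 = n_CaCO3 * M_CaCO3   # grams of CaCO3 required

print(f"{n_CO2:.1f} mol CO2 -> {mass_CaCO3:.0f} g CaCO3 (about 3.93 x 10^3 g)")
```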
https://studydaddy.com/question/in-the-following-reaction-what-mass-of-calcium-carbonate-caco3-would-be-required
Aim and Scope
Traditional control, monitoring, and optimization methods for large-scale electrical networks are becoming obsolete due to the rapid transformative changes energy systems are undergoing. The high penetration of variable generation and distributed energy resources and the advent of advanced metering infrastructure are dramatically increasing the potential of electric networks but are also challenging utilities to realize optimal and resilient operations in electric grids. In fact, control, optimization, and monitoring tasks need to be performed in real-time and in a decentralized fashion. Energy is constantly being produced and used, and this balance requires fast decision-making capabilities along with comprehensive situational awareness. As more players are added, the complexity of controlling and optimizing energy systems is rapidly growing, which renders conventional methods ineffective under provisioned operational conditions. Heterogeneous sensors, e.g., smart meters or PMUs, provide huge amounts of data that can be utilized to infer accurate network states and models and to enable judicious optimization and control of available resources. This workshop will explore the potential benefits of adopting the paradigm of autonomous energy grids (AEGs), which relies on novel distributed control and optimization mechanisms to accurately monitor and optimally operate energy systems.
Call for Contributions
This workshop welcomes contributions that tackle challenges in real-time monitoring and control of complex energy systems. The goal of this workshop is to push forward the Autonomous Energy Grid paradigm, which relies on novel distributed control and optimization methods to accurately monitor and optimally operate energy systems. Further, novel disruptive approaches, both technical and market-oriented, for supporting resilient operation of cyber-physical systems are welcome. The workshop therefore covers the following research topics, with applications in power systems, transportation, and buildings:
- Autonomous grid operation
- Distributed and scalable optimization methods
- Stochastic and/or nonlinear control
- State estimation methods for energy systems
- Real-time optimization of large-scale systems
- Data-driven control approaches
- System dynamics control
- Resilient cyber-physical systems
Important Dates
Submission Deadline: July 10, 2020 (extended from June 23, 2020)
Authors Notification: July 28, 2020
Camera-Ready Paper Due: August 7, 2020
Workshop Date: October 6, 2020
Submission Guidelines
Prospective authors are invited to submit original papers (standard two-column IEEE format, up to six pages) using EDAS (www.edas.info, under the track "SmartGridComm 2020" and sub-track "Workshop on Autonomous Energy Grids: A Distributed Optimization and Control Perspective") on all topics related to the workshop research areas.
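To make the "distributed and scalable optimization" theme concrete, below is a minimal illustrative sketch, not drawn from the workshop materials: decentralized gradient descent (DGD) on a toy resource-coordination problem in which each agent holds a private quadratic cost and exchanges only its current iterate with its ring neighbors. The ring topology, mixing weights, and cost functions are invented purely for the example.

```python
# Toy decentralized gradient descent (DGD): n agents minimize sum_i a_i*(x - b_i)^2
# while each agent communicates only with its two ring neighbors.
# With a doubly stochastic mixing matrix and a small constant step size, the local
# iterates converge to a neighborhood of the global minimizer x* = sum(a_i*b_i)/sum(a_i).

import random

n = 8
random.seed(0)
a = [random.uniform(0.5, 2.0) for _ in range(n)]   # private cost curvatures
b = [random.uniform(-1.0, 1.0) for _ in range(n)]  # private setpoints (e.g., local measurements)
x = [0.0] * n                                      # each agent's local estimate
alpha = 0.02                                       # step size

for _ in range(2000):
    # mixing step: average with ring neighbors (weights 1/3 each -> doubly stochastic)
    mixed = [(x[(i - 1) % n] + x[i] + x[(i + 1) % n]) / 3.0 for i in range(n)]
    # local gradient step on the private cost f_i(x) = a_i*(x - b_i)^2
    x = [mixed[i] - alpha * 2.0 * a[i] * (x[i] - b[i]) for i in range(n)]

x_star = sum(ai * bi for ai, bi in zip(a, b)) / sum(a)
print("agents:", [round(v, 3) for v in x], "centralized optimum:", round(x_star, 3))
```

The point of the sketch is only that agreement on a network-wide decision can be reached without any agent seeing the others' cost data, which is the flavor of method the workshop solicits.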
https://sgc2020.ieee-smartgridcomm.org/workshop/ws-4-autonomous-energy-grid-distributed-optimization-and-control-perspective
The bustling indoor courtyard of YCIS's Regency Park Campus was recently transformed into a spectacular showcase of student talent and hard work during the annual Primary Art Show. The three-day show was a fantastic display of creativity and skill, with more than 800 pieces of artwork on display, representing all students in Years 1-4. Throughout the art show, students, parents, and faculty were impressed by the high quality of the artwork displayed and were awed by the skill level the students had achieved. From intricate dinosaur fossil collograph prints to lifelike animal collages, vibrantly coloured clay pinch pots, and much more, the show featured an abundance of art materials and techniques. To create such a large variety of artwork, each year level spent time studying different famous artists from a variety of backgrounds throughout the school year. The students also learned about the special techniques or unique styles of these artists and were tasked with creating several works of art inspired by their studies. The Primary students also learned about the "Elements of Art" – colour, form, line, shape, texture, value, and space – and they incorporated multiple elements into their projects. Ms Anita Dai, Primary Art Teacher at the Regency Park Campus, uses her art classes to teach and inspire students to create remarkable works of art like the ones on display at the recent exhibit. She says, "I'm thrilled that our visitors were so pleased with the colourful displays of art at the show. I also hope that viewers were able to appreciate the amount of work and effort the students poured into their projects. And always, I strive to inspire in my students the work ethic and positive, open attitude that will allow them to tackle any difficult task, whether it is art or anything else in life, with gusto, enthusiasm, and determination!" As part of the school's holistic education model, art is a dynamic part of the curriculum at YCIS. To learn more about the subject at the international school in Shanghai, click here.
https://www.ycis-sh.com/en/article/school-news/2117-on-campus-art-show-highlights-ycis-primary-students-talent
Narcissism takes its name from Narcissus, who in Greek myth was a pathologically self-absorbed young man who fell in love with his own reflection in a pool. Narcissism is a concept in psychoanalytic theory, which was popularly introduced in Sigmund Freud's essay On Narcissism (1914). Sigmund Freud, who popularized the term narcissism, believed that some narcissism is an essential part of all of us from birth. According to Andrew P. Morrison, a reasonable amount of healthy narcissism allows the individual's perception of his needs to be balanced in relation to others. Self-regulation in narcissists involves striving to make oneself look and feel positive and important. Narcissism is considered a social or cultural problem. It is a factor in trait theory used in various self-report studies of personality. Narcissism is not the same as egocentrism. Acquired situational narcissism is a form of narcissism that develops in late adolescence or adulthood, brought on by wealth, fame and the other trappings of celebrity. The term was coined by Robert B. Millman, Professor of Psychiatry at Cornell University. Acquired situational narcissism differs from conventional narcissism in that it develops after childhood and is triggered and supported by a celebrity-obsessed society. Narcissism does not necessarily represent a surplus of self-esteem or of insecurity; more accurately, it encompasses a hunger for appreciation or admiration, a desire to be the center of attention, and an expectation of special treatment reflecting perceived higher status. A high level of narcissism can be damaging in romantic, familial, or professional relationships. People with narcissistic personality disorder may be generally unhappy and disappointed when they're not given the special favors or admiration they believe they deserve. Narcissistic personality disorder causes problems in many areas of life, such as relationships, work, school or financial affairs. High levels of narcissism can manifest in a pathological form as narcissistic personality disorder, whereby the patient overestimates his or her abilities and has an excessive need for admiration and affirmation. Narcissistic personality disorder is a condition defined in the Diagnostic and Statistical Manual of Mental Disorders. According to Freud, the love of the parents for their child and their attitude toward their child could be seen as a revival and reproduction of their own narcissism.
The Ego Revisited - Understanding and Transcending Narcissism
Karen M. Peoples, Ph.D., and Bert Parlee, M.A. While recognizing the complex interweave of prepersonal, personal, and transpersonal dimensions of narcissism, the usefulness of various meditation practices adapted to the client's structural organization is examined.
The "Why" and "How" of Narcissism: A Process Model of Narcissistic Status Pursuit
Stathis Grapsas, Eddie Brummelman, Mitja D. Back, and Jaap J. A. Denissen
Abstract: We propose a self-regulation model of grandiose narcissism. This model illustrates an interconnected set of processes through which narcissists, individuals with relatively high levels of grandiose narcissism, pursue social status in their moment-by-moment transactions with their environments. The model demonstrates how narcissism manifests itself as a stable and consistent cluster of behaviors in pursuit of social status and how it develops and maintains itself over time.
Vulnerable and Grandiose Narcissism Are Differentially Associated With Ability and Trait Emotional Intelligence Marcin Zajenkowski, Oliwia Maciantowicz, Kinga Szymaniak and Paweł Urban. The aim of the present study was a deeper understanding of the association between narcissism and EI. Nowadays, an increasing tendency to describe narcissism as a non–clinical personality trait is being observed among psychologists (e.g., Paulhus and Williams, 2002). Empirical data show that narcissism is connected to a variety of psychological variables such as aggression (e.g., Krizan and Johar, 2015), self–esteem and well–being (e.g., Sedikides et al., 2004; Dufner et al., 2012). We examined the association between two types of narcissism, grandiose and vulnerable, and self-reported as well as ability emotional intelligence (EI). Grandiose narcissism is characterized by high self–esteem, interpersonal dominance and a tendency to overestimate one’s capabilities, whereas vulnerable narcissism presents defensive, avoidant and hypersensitive attitude in interpersonal relations.
http://sociologyindex.com/narcissism.htm
CROSS-REFERENCE TO RELATED APPLICATIONS BACKGROUND OF THE INVENTION SUMMARY OF THE INVENTION DETAILED DESCRIPTION OF THE INVENTION This application claims priority to the U.S. Provisional patent application No. 62/074,624 filed in the United States Patent and Trademark Office on Nov. 3, 2014. The specification of the above referenced patent application is incorporated herein by reference in its entirety. 1. Field of the Invention The present invention generally relates to the technical field of bags and more particularly relates to customizable bags with an easily interchangeable outer cover. 2. Description of Related Art Bags are manufactured in diverse sizes, shapes, colors and designs using a variety of materials suiting different purposes. These bags include messenger bags, hand bags, duffel bags, satchels, shoulder bags, tote bags, backpacks, trolley bags, briefcases and the like. Usage of different types of bags has become increasingly common among a vast range of consumers belonging to diverse age groups and constitutes a commonly adopted means for sorting, storing and carrying their belongings. Conventional bags are typically designed and manufactured with fixed aesthetic and utility features. Consumers tend to purchase a multitude of bags in every color, texture and pattern according to their changing preferences and different occasions or usage purposes. For example, one obvious problem is the cost and time involved in frequent purchasing of different bags such as backpacks because of ever-changing aesthetic qualities and improved utility features. Another common problem is the inability to clean the exterior of the backpack without soiling the interior or other compartments of the backpack during cleaning or washing. Conventional backpacks which exist in the art merely show improvements in handbags or pocket books by adding an additional foundation layer or an external covering, which are again limited to fixed designs. Other types of improvement that exist in the art include interchangeable strap facade systems in backpacks for enhancing aesthetic features, interchangeable foundation bags and modular backpacks with removable compartments. Options for customizing bags according to consumers' personal preferences would be a great additional feature to the existing bags with fixed designs. Customizing the exteriors of bags using modular components not only improves aesthetic qualities but also enhances the utility features through the addition of one or more storage compartments. Moreover, customized covers or outer panels for bags enable the consumer to express their uniqueness and individuality and also to create their own personality and style. Customized bags with interchangeable skins also cater to the users' need for self-expression. The interchangeable exteriors may comprise customized designs, color, images, text, icons, texture and the like, which reflect the mood or personality of the consumer. Accordingly, there exists a need in the art for a customizable bag with an interchangeable outer cover which not only adds to the aesthetic quality but also enhances the utility features of the bag. The present invention relates to a customizable bag comprising an interchangeable outer panel which consists of one or more fastening members configured to releasably engage with one or more fastening members fixed to an exterior surface of the bag.
The customizable bag further comprise a storage space defined between the interchangeable outer panel and the exterior surface of the bag, accessible by unfastening one of the fastening members. In an embodiment, the interchangeable outer panel comprises at least two fastening members disposed substantially along the perimeter, for detachably attaching to corresponding fastening members fixed to an exterior surface of the customizable bag. The interchangeable outer panel attached to the exterior of bag not only adds variety and aesthetic value but also defines a receiving space between the exterior of bag and interior of outer cover thus forming an additional storage compartment. The receiving space is accessible by unfastening one of the two fastening members. In one embodiment, the interchangeable outer panel comprise an upper fastening member and a lower fastening member. The storage space can be accessed by unfastening the upper fastening member, whereas the interchangeable outer panel can be detached completely by unfastening both the upper fastening member and the lower fastening member. The fastening members can be selected from a group consisting zipper fastener, button type fastener, hook and loop type fastener and snap fastener. In an embodiment, the upper and lower fastening members, each comprises a pair of zipper fasteners with one of the zipper portion is attached to the exterior of the bag and its corresponding zipper portion attached along the perimeter of interchangeable outer panel. The customizable bag further comprises an elastic tab can be attached to at least one end of each zipper portion fixed to the exterior surface of the bag body, the elastic tab facilitates pulling up the zipper portion from the bag body for easily engaging with the corresponding zipper on the interchangeable outer panel. The customizable bag comprises messenger bag, hand bag, duffel bag, satchel, shoulder bag, tote bag, backpack, trolley bag, briefcase, computer bag and the like. In one embodiment, the zipper portions of the bag are configured to be operated in the same direction. The lower zipper portion begins adjacent to the point where the upper zipper portion ends and the upper zipper portion begins adjacent to the point where the lower zipper portion ends. The above stated zipper configuration facilitates selective unfastening of upper zipper portion for accessing the storage space defined between the interchangeable outer panel and the exterior surface of the bag body. Customizable bags with interchangeable skins or outer panels caters to the users' need for self-expression. The interchangeable outer panels may comprise customized designs, color, images, text, icons, texture and the like, which reflects the mood or personality of the consumer. A description of different embodiments of the present invention will now be given with reference to the Figures. It is expected that the present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope. 
The present disclosure relates to a customizable bag with an interchangeable outer panel which consists of one or more fastening members configured to releasably engage with one or more fastening members fixed to an exterior surface of the bag. The interchangeable outer panel attached to the exterior of bag not only adds variety and aesthetic value but also defines a receiving space between the exterior of bag and interior of outer cover thus forming an additional storage compartment. The fastening members can be selected from a group consisting zipper fastener, button type fastener, hook and loop type fastener and snap fastener. FIG. 1 100 110 112 114 110 112 114 110 110 100 112 114 102 104 100 102 104 100 110 100 116 102 104 108 100 116 102 104 112 114 118 102 104 108 100 112 114 110 Referring to , which shows a perspective view of a customizable back pack with an interchangeable outer panel comprising an upper zipper portion and a lower zipper portion disposed substantially along the periphery of the outer panel . In an embodiment, the upper zipper portion is attached along the periphery of the top half portion and the lower zipper portion is attached along the periphery of the bottom half portion of the interchangeable outer panel . The outer cover is detachably attached to the exterior surface of the backpack by releasably engaging the upper, lower zipper portion , with the corresponding zipper portions , fixed on the exterior surface of the backpack . The zipper portions , are arranged on the exterior surface of the backpack in such a way to conform to the shape and size of the interchangeable outer panel . The customizable back pack further comprises and elastic tab attached to at least one end of the zipper portions and disposed on the exterior surface of the bag . The elastic tab allows the end portion of the zippers , to be gently pulled off the bag surface for easily engaging with the zipper portions and . A pull tab for zipper is attached either to the zipper portions , on the exterior surface of bag or to the zipper portion , on the outer panel . FIG. 2A FIG. 2B 100 110 110 112 102 106 110 108 100 110 108 100 112 114 illustrates the customizable backpack with the interchangeable outer panel in a partially detached position. The interchangeable outer panel is opened or partially detached by unfastening the upper zipper portion from the corresponding zipper portion . Further, a storage space is defined between the interchangeable outer panel and the exterior surface of the bag . illustrates the interchangeable outer panel being detached from the exterior surface of the bag by unfastening both the upper zipper portion and lower zipper portion . 112 114 100 114 112 112 114 112 106 110 108 100 FIG. 3 According to an embodiment, the zipper portions , of the customizable bag are configured to be operated in the same direction. For example, the lower zipper portion begins adjacent to the point where the upper zipper portion ends and the upper zipper portion begins adjacent to the point where the lower zipper portion ends . The above stated zipper configuration facilitates selective unfastening of the upper zipper portion for accessing the storage space defined between the interchangeable outer panel and the exterior surface of the bag , as shown in . 110 110 The outer panel can be interchanged by choosing from a variety of designs, colors, texture and fabric types customized according to user preferences. 
The exterior surface of the outer cover 110 may comprise images, pictures, text, abstract designs, icons and the like, reflecting the personality or mood of the user of the backpack. In addition, the interchangeable outer panel can also be self-designed by users to reflect styles that are currently trending, helping them stay updated with changing trends of fashion. Moreover, bags with customized outer panels enable users to express their uniqueness and individuality and to create their own personality and style.

Each of the upper zipper 112 and lower zipper 114 comprises a complementary zipper portion 102, 104 fixed onto the exterior surface 108 of the bag 100. In an exemplary embodiment, the zipper portions 102 and 112 are held together by a clasp lock comprising a pull tab for fastening or unfastening the upper zipper. Similarly, the zipper portions 104 and 114 are held together by a clasp lock comprising a pull tab for fastening or unfastening the lower zipper. The pull tab can be attached either to the zipper portions 112, 114 on the outer panel or to the zipper portions 102, 104 on the exterior surface 108 of the bag 100.

In an embodiment, the outer panel 110 may extend to cover the lateral sides of the exterior surface 108 of the backpack 100. The interior surface of the outer panel 110 may also comprise one or more pouches or compartments for sorting and storing the user's belongings. For example, the outer panel 110 may comprise a pen holder, cup holder, bottle holder, mesh pockets, key rings, earphone slots, coin pouch, etc.

The interchangeable outer panel 110 can also be designed to fit other types of bags such as computer bags, rucksacks, duffel bags, sling bags, tote bags, hand bags, messenger bags, diaper bags, etc., using a fastening means for removably attaching the outer panel 110 to the exterior surface of the bags. The outer panel is designed to substantially cover the exterior surface area of the above bags and is detachably attached using one or more fastening means, wherein the storage space defined by the area between the outer cover and the exterior of the bag can be accessed by operating one of the fastening means.

FIG. 4A shows a perspective view of a customizable hand bag 100 according to an embodiment of the present invention. FIGS. 4B and 4C illustrate a side view and a top view, respectively, of the customizable hand bag 100. The customizable hand bag 100 comprises an interchangeable outer panel 110 removably attached to the exterior surface 108 of the bag 100 via one or more fastening members such as zipper portions 112 and 114. A storage space (not shown) is defined between the exterior surface 108 of the bag 100 and the interchangeable outer panel 110.

FIG. 5A shows a perspective view of a customizable duffle bag 100 according to an embodiment of the present invention. FIGS. 5B and 5C show a front view and a top view, respectively, of the customizable duffle bag 100. The customizable duffle bag 100 comprises one or more interchangeable outer panels 110 removably attached to the exterior surface 108 of the bag 100 via one or more fastening members comprising zipper portions 112 and 114. A storage space (not shown) is defined between the exterior surface 108 of the bag 100 and the interchangeable outer panel 110. The fastening members may also comprise a button-type fastener, hook-and-loop fastener and snap fastener.
FIG. 6 illustrates the interchangeable outer panel 110 comprising the upper zipper portion 112 and the lower zipper portion 114 substantially disposed along the periphery of the outer panel 110.

Although specific embodiments of the invention have been disclosed, those having ordinary skill in the art will understand that changes can be made to the specific embodiments without departing from the spirit and scope of the invention. The scope of the invention is not to be restricted, therefore, to the specific embodiments.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 shows a perspective view of a customizable backpack with an interchangeable outer panel according to an embodiment of the present invention.

FIG. 2A illustrates a perspective view of the customizable backpack with the interchangeable outer panel partially opened.

FIG. 2B illustrates a perspective view of the customizable backpack with the interchangeable outer panel detached.

FIG. 3 illustrates accessing of the storage space defined between the interchangeable outer panel and an exterior surface of the backpack.

FIG. 4A shows a perspective view of a customizable hand bag according to an embodiment of the present invention. FIGS. 4B and 4C illustrate a side view and a top view, respectively, of the customizable hand bag.

FIG. 5A shows a perspective view of a customizable duffle bag according to an embodiment of the present invention. FIGS. 5B and 5C show a front view and a top view, respectively, of the customizable duffle bag.

FIG. 6 illustrates an interchangeable outer panel.
Q: Increasing Font Size in a JButton

I am at a loss as to what to do for the finalization of my term project. I am working on a Connect Four game and I'd like to increase the font size inside of a JButton. I'm relatively new to programming and I haven't worked with fonts at all yet. I'd just like to at least double the font inside of the button to make it more visible during gameplay. Can someone help me, or point me in the direction of finding a solution? Thanks! My code is below.

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class Connect implements ActionListener {
    private JFrame window = new JFrame("Connect Four by Steven and Anthony");
    private JPanel myPanel = new JPanel();
    private JPanel myPanelB = new JPanel();
    private JButton[][] myButtons = new JButton[6][7];
    private JButton[] buttons = new JButton[7];
    private boolean win = false;
    private int count = 5;
    private int count2 = 5;
    private int count3 = 5;
    private int count4 = 5;
    private int count5 = 5;
    private int count6 = 5;
    private int count7 = 5;
    private int countA = 0;
    private String letter = "";

    public boolean checkHorizontalWin(String letter) {
        for (int y = 0; y < myButtons.length; y++) {
            for (int x = 0; x < myButtons[y].length - 3; x++) {
                if (myButtons[y][x].getText().equals(letter)
                        && myButtons[y][x + 1].getText().equals(letter)
                        && myButtons[y][x + 2].getText().equals(letter)
                        && myButtons[y][x + 3].getText().equals(letter)) {
                    return true;
                }
            }
        }
        return false;
    }

    public boolean checkVerticalWin(String letter) {
        for (int y = 0; y < myButtons.length - 3; y++) {
            for (int x = 0; x < myButtons[y].length; x++) {
                if (myButtons[y][x].getText().equals(letter)
                        && myButtons[y + 1][x].getText().equals(letter)
                        && myButtons[y + 2][x].getText().equals(letter)
                        && myButtons[y + 3][x].getText().equals(letter)) {
                    return true;
                }
            }
        }
        return false;
    }

    public boolean checkDiagonalToTheLeftWin(String letter) {
        for (int y = 0; y < myButtons.length - 3; y++) {
            for (int x = 0; x < myButtons[y].length - 3; x++) {
                if (myButtons[y][x].getText().equals(letter)
                        && myButtons[y + 1][x + 1].getText().equals(letter)
                        && myButtons[y + 2][x + 2].getText().equals(letter)
                        && myButtons[y + 3][x + 3].getText().equals(letter)) {
                    return true;
                }
            }
        }
        return false;
    }

    public boolean checkDiagonalToTheRightWin(String letter) {
        for (int y = 0; y < myButtons.length - 3; y++) {
            for (int x = 3; x < myButtons[y].length; x++) {
                // The body of this check was truncated in the original post;
                // it is reconstructed here to mirror the other three checks,
                // scanning a diagonal of four in the opposite direction.
                if (myButtons[y][x].getText().equals(letter)
                        && myButtons[y + 1][x - 1].getText().equals(letter)
                        && myButtons[y + 2][x - 2].getText().equals(letter)
                        && myButtons[y + 3][x - 3].getText().equals(letter)) {
                    return true;
                }
            }
        }
        return false;
    }

    public Connect() {
        window.setSize(800, 700);
        window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        myPanel.setLayout(new GridLayout(1, 7));
        myPanelB.setLayout(new GridLayout(6, 7));
        for (int i = 0; i < buttons.length; i++) {
            buttons[i] = new JButton();
            myPanel.add(buttons[i]);
            buttons[i].addActionListener(this);
        }
        for (int i = 0; i < 6; i++) {
            for (int j = 0; j < 7; j++) {
                myButtons[i][j] = new JButton();
                myPanelB.add(myButtons[i][j]);
            }
        }
        window.add(myPanel, BorderLayout.NORTH);
        window.add(myPanelB, BorderLayout.CENTER);
        window.setVisible(true);
    }

    public void actionPerformed(ActionEvent e) {
        countA++;
        if (countA % 2 == 0)
            letter = "X";
        else
            letter = "O";
        if (e.getSource() == buttons[0]) {
            myButtons[count][0].setText(letter);
            count--;
        }
        if (e.getSource() == buttons[1]) {
            myButtons[count2][1].setText(letter);
            count2--;
        }
        if (e.getSource() == buttons[2]) {
            myButtons[count3][2].setText(letter);
            count3--;
        }
        if (e.getSource() == buttons[3]) {
            myButtons[count4][3].setText(letter);
            count4--;
        }
        if (e.getSource() == buttons[4]) {
            myButtons[count5][4].setText(letter);
            count5--;
        }
        if (e.getSource() == buttons[5]) {
            myButtons[count6][5].setText(letter);
            count6--;
        }
        if (e.getSource() == buttons[6]) {
            myButtons[count7][6].setText(letter);
            count7--;
        }
        if (myButtons[0][0].getText().equals("O") || myButtons[0][0].getText().equals("X")) {
            buttons[0].setEnabled(false);
        }
        if (myButtons[0][1].getText().equals("O") || myButtons[0][1].getText().equals("X")) {
            buttons[1].setEnabled(false);
        }
        if (myButtons[0][2].getText().equals("O") || myButtons[0][2].getText().equals("X")) {
            buttons[2].setEnabled(false);
        }
        if (myButtons[0][3].getText().equals("O") || myButtons[0][3].getText().equals("X")) {
            buttons[3].setEnabled(false);
        }
        if (myButtons[0][4].getText().equals("O") || myButtons[0][4].getText().equals("X")) {
            buttons[4].setEnabled(false);
        }
        if (myButtons[0][5].getText().equals("O") || myButtons[0][5].getText().equals("X")) {
            buttons[5].setEnabled(false);
        }
        if (myButtons[0][6].getText().equals("O") || myButtons[0][6].getText().equals("X")) {
            buttons[6].setEnabled(false);
        }
        if (checkHorizontalWin(letter) || checkVerticalWin(letter)
                || checkDiagonalToTheLeftWin(letter) || checkDiagonalToTheRightWin(letter)) {
            win = true;
            if (win == true) {
                JOptionPane.showMessageDialog(null, letter + " has won!");
                System.exit(0);
            }
        }
    }

    /**
     *
     * @param args
     */
    public static void main(String[] args) {
        new Connect();
    }
}

A: You can use:

button.setFont(new Font("Arial", Font.PLAIN, 40));

"Arial" is obviously the name of the font being used. Font.PLAIN means plain text (as opposed to bold or italic). 40 is the font size (using the same numbering system for font size as Microsoft Word).

Javadoc for JComponent.setFont()
Javadoc for java.awt.Font

A: I'm not sure if this will work, but looking at the JButton docs, there is a setFont(Font font) method you can call. You can try passing it a Font created with the font size you'd like using the Font(String name, int style, int size) constructor.
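To tie the suggestion above back to the code in the question, here is a minimal sketch of where the call could go; it assumes you simply want a larger font on every button, and the SansSerif face and 40-point size are arbitrary choices you can tune:

// Inside the Connect() constructor, after the buttons have been created
// and added to their panels. The font chosen here is only an example.
Font bigFont = new Font("SansSerif", Font.BOLD, 40);
for (int i = 0; i < buttons.length; i++) {
    buttons[i].setFont(bigFont);              // the top row of "drop" buttons
}
for (int i = 0; i < myButtons.length; i++) {
    for (int j = 0; j < myButtons[i].length; j++) {
        myButtons[i][j].setFont(bigFont);     // the 6x7 playing grid
    }
}

Alternatively, buttons[i].setFont(buttons[i].getFont().deriveFont(40f)) keeps the look-and-feel's default font family and only changes its size.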
Whiplash 2009, Race #2 Wickenburg 2-22-2009

I am still buzzing over this weekend's race. The senior pro class is full this year and it is a lot of fun. The Wickenburg course is my favorite so I really wanted to win this one. I was lined up next to Rick Ellsworth and his new KTM; we were on the 2nd row for our class. TJ Miller and John Baj were on row 1. The green flag dropped, Rick and I both got a good launch and we were side by side through 1st and 2nd gear, and luckily I pulled a bit in 3rd gear so I had clean air in front of me. I caught up to John in the rocky hills section. He waved me by on one of the tight hilly turns but I spun the tires and went wide, and by then it was too late to get by; in the process I ran over a small thorny cactus with both right tires, so now I was nervous about getting a flat. John moved over after a bit and I got by, and I had clean air until I caught TJ around mile 18. He let me by and I had clean air for the rest of the lap. I considered stopping to have my tires checked after lap 1 but they seemed ok so I kept on going. After 1 mile my pit told me on the radio that TJ was 1 minute behind me. Lap 2 was uneventful and clean. I stopped for fuel after lap 2 and exited the pits making sure not to break the pit speed limit. After about 2 miles I heard some noise on my radio but I couldn't hear what they were saying. So I just kept up a good pace. Around mile 8 I looked back and could see some dust in the distance, and I didn't know who it was so I just went for it the rest of the lap. I knew some fast guys had started behind me and I didn't want to lose the race on overall time. I caught and passed a few 900's, all of whom moved over quickly. After the 1st powerline section I caught up to a Pro who wasn't interested in letting me by. I was yelling for him to move but he held his line for a bit. I think he believed we were in the same class; it was at this point I wished the senior pros had different colored number backgrounds or something. The rest of the lap was uneventful; I just concentrated on keeping a solid pace and hoped nothing strange would happen with the quad. I crossed the finish line first in my class but I had to wait a while to see how close behind me the others were. The noise I had heard on the radio at the start of lap 3 was my pits telling me that Jason Fritz was about a minute behind me. Jason came in a few minutes later and then I knew I had won. It was a great feeling to get my first win in the new class. While I was loading my quad I noticed a crack in my fancy LSR race frame, so that sucks. I really don't want to tear this quad apart again.

Thanks again to all of my sponsors. The quad is performing great. I still can't believe I am running a stock rear shock; the GT Thunder revalve is awesome. Jim at Motowoz rebuilt my front shocks prior to this race and they are really excellent, and the turnaround time on the rebuild was great. This DFR engine has 30 hours on it now and it still runs awesome. And of course my Flexx bars are still holding up great; I don't even know how many hours I have on them. The JD Performance A-arms are great too. Thanks to BRM for the awesome graphics package; I really like the way the quad looks.

Photos by DGP Photography (I shrunk them down for the web); results photo by Jeff @ http://jandsphoto.com/.
http://kendallrace.com/stories/wick1-2009.php
Search Results

- A comparison of passive monitoring methods for gray wolves (Canis lupus) in Alberta, Canada (Spring 2019). Remote camera traps are often applied to large mammal conservation and management programs because they are cost-effective, allow for repeat surveys, and can be deployed for long time periods. Additionally, statistical advancements in calculating population metrics, such as density, from camera...
- An investigation of sex differences in acoustic features of the chick-a-dee call of black-capped chickadees (Poecile atricapillus) (Fall 2015). The chick-a-dee call of the black-capped chickadee (Poecile atricapillus) is composed of four main note types (A, B, C, and D) that occur in a fixed order. Sex differences have been identified in a number of black-capped chickadee vocalizations (including tseet calls and fee-bee songs) and in the...
- Bioacoustic analyses of the chick-a-dee call of the Mexican chickadee (Poecile sclateri) and the boreal chickadee (Poecile hudsonica) (Fall 2009). To understand the communicative functions of any vocalization it is important to first classify, describe, and measure the elements of that vocalization. Mexican (Poecile sclateri) and boreal (P. hudsonica) chickadees both produce a name-sake chick-a-dee call. Here, the note types present in...
- Information contained within a simple acoustic signal: The fee-bee song of the black-capped chickadee (Poecile atricapillus) (Fall 2015). Black-capped chickadees (Poecile atricapillus) are a common North American songbird that produce numerous types of vocalizations with various functions. The vocal repertoire of black-capped chickadees have been the focus of numerous vocal production and perception studies. Black-capped chickadees...
- (Fall 2018). For male breeding songbirds, song rate varies throughout the breeding season and tends to be correlated with breeding-cycle stages. Although these patterns have been well documented, to our knowledge, this relationship has not been used to predict a bird's breeding status through acoustic...
- Predictive Mapping of Yellow Rail (Coturnicops noveboracensis) Density and Abundance in the Western Boreal Forest via Ground and Satellite Remote Sensors (Fall 2019). The Yellow Rail (Coturnicops noveboracensis) is a small, secretive, wetland bird, which is apparently rare throughout most of its range. Almost nothing is known about its abundance and density in the wetlands of the western boreal forest. Emerging technologies have enabled us to effectively...
- The Neural Encoding of Heterospecific Vocalizations in the Avian Pallium: An Ethological Approach (Fall 2011). Songbirds (order Passeriformes, suborder Oscines) have captured the attention of scientists and non-scientists alike with their vocal signals. The black-capped chickadee (genus Poecile) uses its namesake call, chick-a-dee, to convey a variety of information. In Chapter 2 and 3, I examine the...
https://era.library.ualberta.ca/search?facets%5Ball_subjects_sim%5D%5B%5D=Bioacoustics
Echolocation with Light: A New Form of Active Sensing in Fish?

Live date: Jul 22nd, 2021

Description

The brightest part of many fish species is their iris. This effect can be caused by specular reflection, light focusing, and fluorescence. Why send out light from the iris? Radiating light close to and parallel to one's own gaze is the ideal configuration to generate and detect eyeshine in the pupils of other organisms (e.g., cat's eyes). The research goal of Prof. Michiels' team is to demonstrate that even weak light reflections in the pupils of predator and prey species can be strong enough to be perceptible by the initial light sender/observer. We call this process "active photolocation". To test its presence and functionality, we use visual modelling. For this, we collect data on ocular properties (size, retinal map), contrast sensitivity and spatial acuity (Striatech's OptoDrum), spectral sensitivities of the observer (microspectrometry), spectrometric properties of all the relevant structures, and the natural light field. We also control the ability to redirect light by attaching small shading hats (or transparent controls). We assume that this form of active detection is mainly functional over very short distances and is used to detect otherwise highly cryptic organisms. Our model system is a small (< 5 cm) triplefin, Tripterygion delaisi, an active, bottom-dwelling species. It feeds on mm-sized prey, and is prey itself to cryptic, motionless, bottom-dwelling predators, such as the scorpionfish Scorpaena porcus. This work mainly takes place at the marine research station STARESO in Calvi, Corsica. Our latest results indicate that triplefins can detect gammarid crustaceans (prey) as well as scorpionfish (predator) over meaningful distances using active photolocation.

Key Topics
- Fish can redirect downwelling sunlight sideways by actively controlled eye movement.
- Over short distances, this is sufficient to induce perceptible eyeshine in other species.
- Our hypothesis is that it facilitates detection of cryptic prey and predators.
- First results suggest that the mechanism is functional in triplefins.
- It is likely that it is present and functional in many more species as well.

Learning Objectives
- Eyes are not just passive receptors: eyes (i.e. mainly the irides) are also actively used as light reflectors, turning them into a local light source.
- Eyes are also a weak point in species that try to be invisible by camouflage: good eyes can be revealed by shining light at them.

Background Reading

A context analysis of bobbing and fin-flicking in a small marine benthic fish. Santon M, Deiss F, Bitton P-P, Michiels NK. Ecology and Evolution 11(3), 1254–1263 (2021). doi: 10.1002/ece3.7116.

Redirection of ambient light improves predator detection in a diurnal fish. Santon M, Bitton P-P, Dehm J, Fritsch R, Harant UK, Anthes N, Michiels NK. Proc. R. Soc. B 287, 20192292 (2020). doi: 10.1098/rspb.2019.2292.

The contrast sensitivity function of a small cryptobenthic marine fish. Santon M, Münch TA, Michiels NK. J Vis 19(2):1 (2019). doi: 10.1167/19.2.1.

Visual modelling supports the potential for prey detection by means of diurnal active photolocation in a small cryptobenthic fish. Bitton P-P, Yun Christmann SA, Santon M, Harant UK, Michiels NK. Scientific Reports 9(1), 8089 (2019). doi: 10.1038/s41598-019-44529-0.

Daytime eyeshine contributes to pupil camouflage in a cryptobenthic marine fish.
Santon M, Bitton P-P, Harant UK, Michiels NK. Scientific Reports 8(1), 7368 (2018). doi: 10.1038/s41598-018-25599-y.

About the speaker

Prof. Nico K. Michiels, Professor in Animal Evolutionary Ecology and Director of the Institute for Evolution and Ecology, University of Tübingen, Germany

Prof. Nico Michiels was born and educated in Belgium. He did a PhD on the reproductive ecology of dragonflies, with a focus on copulation, genital morphology, and sperm competition, and continued as a postdoc in Hassel (Belgium) and at Brown University (USA). He then became interested in the sexual mechanisms of hermaphrodites using planarians at the University of Sheffield, followed by a group leader position at the MPI for Behavioural Physiology, Seewiesen (Germany). After moving to a professorship in zoology at the University of Münster, he widened his scope and added earthworms, nematodes, marine free-living flatworms, and sea slugs to the hermaphrodite repertoire. In 2004 he moved to Tübingen. Since then, his research focus has shifted to the visual ecology of marine fish, prompted by the accidental discovery of strong fluorescence in fish in 2007. He dives 2-3 months per year in the Mediterranean Sea, the Red Sea and in Sulawesi, Indonesia. He is an experienced diver and underwater photographer.
https://stria.tech/journal-club-echolocation-with-light/
One of the key leaders of the INC was a man named Gopal Krishna Gokhale, known for his restraint and moderation. This method of retaliation inspired Gandhi to use it but, importantly, to adapt the process to make it look solely Indian. The British authorities ordered a search of Gandhi because they needed to acknowledge the threat he posed to British rule. Gandhi was, however, deemed a harmless, peculiar man who posed no threat. Gandhi then began a Satyagraha campaign which aimed to help poor Indians in Bihar. The nonviolence policy worked because Gandhi did not give in, he accepted punishment and responsibility, and his followers were loyal. When Gandhi returned to India from Europe in 1896 he was sickened. The British Raj had formally taken over, so he decided to make a change and stop the unfair treatment of everyone in India. As Gandhi said himself, "My ambition is no less than to convert the British people"

Kamehameha and Mohandas Gandhi were both effective leaders because they were persuasive, they both had a crucial effect on society, and they both were very convincing to other people. Gandhi came from a low-class family in India; his father was a chief minister of Porbandar and his mother was a practitioner of Vaishnavism. Gandhi was appalled by the discrimination that he experienced during his immigration to South Africa. Kamehameha was raised by his uncle, Kalani`opu`u, who was the former ruler of the Big Island. Kamehameha's conquest was to unite all the Hawaiian islands, and he was able to succeed. Gandhi originally went to Africa on business with his job as a lawyer, but instead Gandhi found his calling both spiritually and politically. This led to many acts of non-violence and civil disobedience and ultimately India's Independence Movement in 1947. Gandhi's implementation of civil disobedience was heavily influenced by Henry David Thoreau's essay Civil Disobedience, which depicts Thoreau's resistance towards the government. Gandhi's system of non-violence and peace was called Satyagraha, which translates in English to "truth force". Gandhi thought of this as "living a life of love and compassion".

He is willing to do anything to try to protect his ideals, even if that requires him to kill a man. The violent act does nothing though, as it becomes evident that people have accepted the Western lifestyle against Okonkwo's desire: "He knew that Umuofia would not go to war. He knew because they let the other messengers escape," (Achebe 189). If the population had not let the other messengers escape, then it would mean that they did not prefer to be Christians and preferred to have control over the land again, instead of the British government having control. That was not the case, which means that they had accepted the newly introduced ideals.

His speeches that I heard in this movie impressed me with their sincerity and dedication to his own race, but at the same time respect for the white. He said that we are not to hate those who hate us. King believed that one day Afro-Americans would get what they are fighting for. That's why they had to start preparing by being ethical, sane, reasonable. His tolerance and acceptance towards others impressed

Similarly, he sent murderers to kill him as he was fearful of the loss of his kingship. Macbeth's relationship with Banquo had more significance to him than his relationship with Duncan; thus, his betrayal of Banquo affected him with a greater impact than his betrayal of Duncan.
This is evidenced by how his guilt takes the form of Banquo and not

Even though Gandhi was small in size, his impact on India's independence was tremendous. Gandhi showed moral courage in fighting for India's independence. Secondly, when fighting for India's independence, he used non-violent protesting. Finally, because he used non-violence, his moral courage cost him his life. By examining Gandhi's fight for India's independence, his non-violent protesting, and his moral courage costing him his life, it is clear that he was a beautiful and courageous man who fought for what he believed in.

Macbeth wanted the blame to be placed on someone other than himself so that the people would show loyalty to him as the new king. In order to successfully gain the people's respect, he planted the evidence on the guards. Then, he murdered these guards to show the people of Scotland that he was negatively impacted by the death of the king. This demonstrates the manipulative side of

Macbeth is tempted to kill Duncan because he likes the idea of being king. Once he informs Lady Macbeth of the prophecy, she is hooked on the idea. She tells him to stop being a coward. She wants to do it herself so that she feels more like a man. At first, Macbeth was hesitant about killing the king because he knew that King Duncan was a great ruler.
https://www.ipl.org/essay/Why-Did-Gandhis-Nonviolent-Movement-Work-PJTLJZCVYT
Epigenetic modification of the chromatin plays crucial roles in maintaining cellular states and genomic stability, and in ensuring proper gene transcription and DNA repair[@b1][@b2]. Methylation of cytosine residues in DNA is carried out by a class of enzymes, the DNA methyltransferases (Dnmts), that tightly regulate the initiation and the maintenance of these methyl marks[@b3][@b4]. Dnmt3a and Dnmt3b are responsible for establishing the *de novo* patterns of methylation during embryogenesis, while Dnmt1 is responsible for the propagation of methylation patterns. Errors in this enzymatic machinery have profound effects on development and disease[@b5]. All three Dnmts are required for embryonic development, with previous studies reporting *Dnmt1*^−/−^ and *Dnmt3b*^−/−^ mice to be embryonically lethal[@b6][@b7]. While *Dnmt3a*^−/−^ mice survive to full term, they die around 4 weeks of age[@b7]. Surprisingly, embryonic stem cells (ESCs) can be derived without Dnmts and maintain their stem cell characteristics[@b8]. Studies have also highlighted critical roles for DNA methylation in adult stem cells. Loss of Dnmt1 in neuronal progenitors leads to global genomic hypomethylation and neonatal death[@b9], while *Dnmt1*^−/−^ fibroblasts undergo growth arrest and widespread apoptosis[@b10]. In the hematopoietic system, Dnmt3a and Dnmt3b deficiency leads to defects in self-renewal of hematopoietic stem cells, but has no effect on cellular differentiation[@b11]. However, the loss of Dnmt1 resulted in self-renewal and differentiation abnormalities in the same cell type[@b12]. Studies in the mammalian epidermis reported a premature differentiation of epidermal progenitor cells and tissue loss in the absence of Dnmt1, further emphasizing a role of DNA methylation in maintaining the undifferentiated progenitor cell state[@b13]. In skeletal muscle, the importance of DNA methylation in the control of myogenesis was shown in the regulation of the master myogenic transcription factor *MyoD*. *MyoD* is selectively expressed in skeletal muscle cells and its expression in non-muscle cells is suppressed by DNA methylation[@b14]. Demethylating agents such as 5-azacytidine can induce *MyoD* transcription and lead to myogenic conversion of non-muscle cells, such as the C3H10T1/2 and NIH3T3 cell lines[@b15][@b16][@b17]. These studies suggest that epigenetic marking through DNA methylation plays a significant role in determining cell fate during muscle development. Limited data exists on the function of Dnmts during myogenesis. It has been reported that *DNMT1* mRNA expression in human myoblasts decreased with differentiation, and this coincided with increases in myogenic differentiation gene expression[@b13]. While these results are consistent with the notion that DNA methylation plays a role in the self-renewal of skeletal muscle stem cells, more detailed analyses are needed to formally establish this. In the present study, we have analysed the effect of Dnmt1 depletion during murine myogenesis. We demonstrate that Dnmt1 is essential for proper myogenic differentiation and cell fate transition. Absence of Dnmt1 in myoblasts leads to reduced myogenic gene expression and defects in myoblast fusion that result in the formation of smaller myotubes. Mice with muscle-specific deletion of *Dnmt1* are runted and display smaller body weights.
We show that loss of Dnmt1 results in hypomethylation of the *Id-1* promoter, a negative regulator of myogenesis, leading to an increased propensity of myoblasts to transdifferentiate into the osteogenic lineage. These studies will lead to a better understanding of the epigenetic mechanisms that regulate muscle development, with implications for interventions aimed at combating musculoskeletal disorders such as muscular dystrophy.

Results
=======

Loss of Dnmt1 leads to the formation of immature myotubes
----------------------------------------------------------

We first determined the expression pattern of Dnmt1 during differentiation using the well-established myogenic cell line, C2C12 ([Supplementary Fig. 1](#S1){ref-type="supplementary-material"}). We show that the *Dnmt1* transcript level was downregulated during myogenic differentiation ([Fig. 1A](#f1){ref-type="fig"}). To study the functional significance of Dnmt1 in myogenesis, we generated a stable Dnmt1 knockdown C2C12 cell line in which the myoblasts were transduced with lentiviral shRNA to target Dnmt1 and selected using puromycin (shDnmt1). Efficient reduction of Dnmt1 expression was observed in the transduced C2C12 cells ([Supplementary Fig. 2A](#S1){ref-type="supplementary-material"}). Furthermore, knockdown of Dnmt1 was highly specific and did not affect Dnmt3a or Dnmt3b expression ([Supplementary Fig. 2B](#S1){ref-type="supplementary-material"}). Myf5 and MyoD are required for myogenesis and are expressed in both myoblasts and myotubes. Myogenin (Myog) is an early marker of myoblasts entering the differentiation pathway[@b18][@b19], and myosin heavy chain 2 (Myh2) is highly expressed when myoblasts differentiate and fuse to form myotubes[@b20]. We detected no difference in *Myf5* transcript levels in the Dnmt1-depleted C2C12 cultures, while *MyoD*, *Myog* and *Myh2* expression were significantly reduced ([Fig. 1B](#f1){ref-type="fig"}). The ability of shDnmt1 myoblasts to undergo myogenic differentiation was assessed 7 days after differentiation media was added to confluent myoblast cultures. Consistent with the qPCR data, we observed a decreased number of Myog- and Myh2-expressing cells in the knockdown cells, which translated to a significant reduction in the number of myotubes formed in the shDnmt1 cultures ([Fig. 1C](#f1){ref-type="fig"}). We found this difference was not due to differences in cellular proliferation between the control and knockdown cultures, as the number of cells was similar between groups ([Supplementary Fig. 3](#S1){ref-type="supplementary-material"}), but rather to a defect in the ability of the shDnmt1 cells to fuse ([Fig. 1D,E](#f1){ref-type="fig"}). These data indicated that knockdown of Dnmt1 prevented differentiation of C2C12 myoblasts into myotubes.

Dnmt1 loss coincides with a loss of myogenic gene expression in myoblasts
-------------------------------------------------------------------------

To further establish a role for Dnmt1 in myogenic differentiation, we conditionally deleted Dnmt1 expression in murine myoblasts by crossing the *Dnmt1*^fl/fl^ mice[@b12] with a transgenic line expressing Cre recombinase under the regulation of the human skeletal alpha-actin 1 promoter (*Acta1*-cre)[@b21]. The *Acta1* promoter is transcribed specifically in skeletal muscle at post-coitum day 9.5 (E9.5), coinciding with the earliest stages of skeletal muscle development and differentiation[@b22].
*Acta1*-cre^+^: *Dnmt1*^fl/fl^ mice were viable and born in Mendelian ratios; however, the conditional knockout mice were born runted, with smaller body weights compared to their littermate controls ([Fig. 2A](#f2){ref-type="fig"}). This smaller body size persisted to adulthood ([Fig. 2B](#f2){ref-type="fig"}). We were able to isolate myoblasts with \>98% purity based on gene expression and staining for myogenic markers ([Supplementary Fig. 4](#S1){ref-type="supplementary-material"}). We verified by qPCR that *Dnmt1* was efficiently deleted in the myoblasts isolated from the *Acta1*-cre^+^: *Dnmt1*^fl/fl^ mice, and that this deletion had no effect on *Dnmt3a* and *Dnmt3b* expression ([Supplementary Fig. 5](#S1){ref-type="supplementary-material"}). Consistent with our Dnmt1-depleted C2C12 cells, we observed an attenuated ability of myoblasts from the *Acta1*-cre^+^: *Dnmt1*^fl/fl^ mice to form myotubes *in vitro* ([Fig. 2C](#f2){ref-type="fig"}). This coincided with reduced expression levels of myogenic genes in the knockout mice compared to littermate controls (*Acta1*-cre^+^: *Dnmt1*^+/+^) at baseline ([Supplementary Fig. 5](#S1){ref-type="supplementary-material"}). To confirm that these observations were indeed caused by an absence of *Dnmt1* in myoblasts, myoblasts isolated from the *Acta1*-cre^+^: *Dnmt1*^fl/fl^ mice were transduced with a *Dnmt1*-overexpressing retrovirus, and myogenic gene expression was then re-analyzed. *Dnmt1* levels following transduction were restored to levels similar to those of the littermate controls. Similarly, expression of *Myf5*, *MyoD*, *Myog* and *Myh2* was also increased in *Acta1*-cre^+^: *Dnmt1*^fl/fl^ myoblasts with *Dnmt1* overexpression ([Supplementary Fig. 6](#S1){ref-type="supplementary-material"}). However, this restoration of Dnmt1 was not able to increase myotube formation in the *Acta1*-cre^+^: *Dnmt1*^fl/fl^ myoblasts ([Fig. 2D](#f2){ref-type="fig"}). Together, these data confirm an important role of Dnmt1 in regulating myogenesis.

Loss of Dnmt1 leads to increased activation of Inhibitor of DNA binding
-----------------------------------------------------------------------

In addition to the basic helix-loop-helix (bHLH) myogenic factors (*MyoD*, *Myf5* and *Myog*) that are known to have critical roles in orchestrating the muscle phenotype, the *myocyte enhancer factor 2* (*Mef2*) family of genes is also required for the regulation of myogenic gene expression[@b23]. In line with the function of *Mef2c* during muscle maturation, we show that *Mef2c* expression is markedly increased with C2C12 differentiation ([Fig. 3A](#f3){ref-type="fig"}). However, *Mef2c* expression in Dnmt1-depleted cells was almost 4-fold lower compared to the scrambled control at baseline, and remained low throughout the duration of differentiation (more than 7-fold lower than shCtrl cultures over the differentiation time course) ([Fig. 3A](#f3){ref-type="fig"}). The *inhibitor of DNA binding* family of HLH proteins (*Id1-4*) is thought to affect the balance between cell growth and differentiation by negatively regulating the function of bHLH transcription factors[@b24]. *Id-1* is a negative regulator of *MyoD*[@b25], and its overexpression impairs the ability of myoblasts to differentiate into myotubes[@b24][@b26]. Similar to previous reports, *Id-1* expression decreases with myogenic differentiation ([Fig. 3B](#f3){ref-type="fig"}).
However, we observed elevated *Id-1* baseline expression in the shDnmt1 myoblasts compared to shCtrl cells, as well as increasing levels of *Id-1* in the shDnmt1 cells as the cells underwent differentiation ([Fig. 3B](#f3){ref-type="fig"}). The presence of lower expression of *Mef2c* together with higher levels of *Id-1* gene expression following differentiation could in part explain the observed arrest in cellular differentiation and the failure to form myotubes seen in the *Dnmt1*-deficient cells. Dnmt1 is known to promote DNA methylation. Therefore, we sought to evaluate the chromatin status of the *Id-1* locus in *Dnmt1*-deficient myoblasts. Chromatin immunoprecipitation (ChIP) assays were performed in shCtrl and shDnmt1 C2C12 cells with antibodies directed against H3K4me3 (active) and H3K27me3 (inactive) chromatin marks. ChIP-qPCR analysis showed similar levels of H3K4me3 and H3K27me3 between the shCtrl and shDnmt1 myoblasts at the *Id-1* promoter prior to differentiation ([Fig. 3C](#f3){ref-type="fig"}). Following induction of myogenic differentiation, significant enrichment for the H3K27me3 mark at the *Id-1* promoter in the shCtrl cells was observed ([Fig. 3D](#f3){ref-type="fig"}), consistent with the decreased *Id-1* expression in shCtrl myoblasts following differentiation as seen by qPCR. In stark contrast, the *Id-1* promoter remained in the euchromatic state in the shDnmt1 myoblasts 7 days after initiation of myogenic differentiation, as demonstrated by the extensive enrichment for the H3K4me3 mark ([Fig. 3D](#f3){ref-type="fig"}). These results are consistent with the differentiation defects observed with *Dnmt1* knockdown in C2C12 myoblasts. Therefore, these results demonstrate that Dnmt1 in myoblasts is required for methylation of *Id-1*, and that a diminished level of Dnmt1 prevents transcriptional repression of *Id-1*, leading to dysfunctional myogenesis.

Deficiency of muscle-specific Dnmt1 leads to increased osteogenesis
-------------------------------------------------------------------

Muscle cells have the potential to transdifferentiate into the osteogenic lineage in the presence of bone morphogenetic proteins (BMPs)[@b27]. BMPs induce the expression of *Id-1*, which negatively regulates myogenesis[@b25]. To test whether the increased expression of *Id-1* as a result of Dnmt1 deficiency can affect the ability of myoblasts to undergo osteogenic differentiation, shCtrl and shDnmt1 myoblasts were cultured in media in the absence or presence of BMP-4. In agreement with previously published reports, addition of BMP-4 decreased the expression of myogenic genes in shCtrl myoblasts, and this reduction was further exacerbated in shDnmt1 myoblasts in the presence of BMP-4 ([Fig. 4A](#f4){ref-type="fig"}). Concomitant with this decrease in myogenic genes was an increase in expression of early (*Runx2*, *alkaline phosphatase* (*Alp*) and *osterix* (*Osx*)) and late (*bone sialoprotein* (*Bsp*) and *osteocalcin* (*Ocn*)) osteogenic markers ([Fig. 4B](#f4){ref-type="fig"}). Induction of osteogenic gene expression was greater in the shDnmt1 cells compared to shCtrl cultures, as demonstrated by ALP staining and activity, and by the ability to form mineralized nodules as assessed by Alizarin Red S ([Fig. 4C](#f4){ref-type="fig"}). In agreement with the osteogenic differentiation data, a more transcriptionally active *Ocn* promoter was evident in shDnmt1 cells compared to shCtrl myoblasts at both baseline and following osteogenic differentiation ([Fig. 4D](#f4){ref-type="fig"}).
Taken together, these results demonstrate that Dnmt1 safeguards myoblasts against transdifferentiation into alternative lineages such as the osteogenic lineage.

Discussion
==========

It was hypothesised more than 30 years ago in two independent seminal papers by Riggs[@b28] and Holliday and Pugh[@b29] that DNA methylation could alter gene expression by influencing the binding affinities of transcription factors or other proteins to DNA. DNA methylation has been well studied in embryos and during development, but only recently has the role of Dnmts been examined in somatic cells. Concentrating on the muscular system, we found that the abrogation of Dnmt1 reduced myogenic gene expression and differentiation capacity in myogenic cells ([Figs 1](#f1){ref-type="fig"} and [2](#f2){ref-type="fig"}). Previous studies have raised the possibility that *de novo* methylation carried out by *Dnmt3a* and *Dnmt3b* may also contribute to maintenance methylation in the absence of *Dnmt1*[@b30][@b31]. However, we did not find any compensatory up-regulation of these *de novo* methyltransferases when Dnmt1 was depleted in our cells ([Fig. 3](#f3){ref-type="fig"}, [Supplementary Figs 1 and 2](#S1){ref-type="supplementary-material"}), suggesting that the observed effects were specific to the enzymatic actions of Dnmt1. Interestingly, when we overexpressed Dnmt1 back into Dnmt1-deficient myoblasts, we were able to increase myogenic gene expression. However, we were unable to rescue their inability to form multinucleated myotubes. These data suggest that, in addition to affecting myogenic gene expression, some other critical factors during myogenesis are controlled by Dnmt1.

During development, mesenchymal cells can undergo myogenic, adipogenic, osteogenic or chondrogenic differentiation. Stable modifications made to the methylation pattern of DNA activate lineage-specific genes and prevent the transcription of genes from other lineages. Treatment of mouse C3H10T1/2 fibroblasts with 5-azacytidine[@b32], or the overexpression of antisense RNA against *Dnmt1*[@b33], induced a myogenic program in this cell type, which does not normally undergo myogenesis. This effect on lineage specificity associated with the absence of Dnmt1 is consistent with our data, in which Dnmt1 deficiency in C2C12 myoblasts altered their cellular identity and led to enhanced differentiation into the osteogenic lineage ([Fig. 4](#f4){ref-type="fig"}). To confirm an effect of Dnmt1 in regulating cell fate, the *Id* genes were examined for their role in transdifferentiation[@b34][@b35]. We further show that the *Id-1* promoter is hypomethylated in Dnmt1 knockdown cells.

Errors in DNA methylation have been linked to a number of human diseases[@b1]. Aberrant methylation of tumor suppressor genes or oncogenes is frequently linked to the metastatic potential of many tumour types. Mutations in DNMT3B have been linked to human ICF syndrome[@b36][@b37], and abnormalities in genomic methylation patterns of DNMT3L have been linked to infertility[@b38]. Mutations in DNMT1 have also recently been implicated in neurodegenerative diseases[@b39], while hypomethylation of the A161 allele is associated with the pathogenesis of facioscapulohumeral muscular dystrophy[@b40][@b41]. We have shown here that Dnmt1 plays a functional role in myogenesis, and that the *Acta1-cre*^*+*^*: Dnmt1*^*fl/fl*^ mice display a runted, dystrophic-like phenotype.
Altogether, these results indicate that Dnmt1 is necessary to maintain the correct level of myogenic differentiation, and to prevent promiscuous transdifferentiation into alternate lineages. This work provides a new direction for the study of myogenesis, and it will be interesting in future studies to determine whether Dnmt1 loss translates to premature aging of the tissue or muscular dystrophy.

Materials and Methods
=====================

Mouse lines
-----------

The *Dnmt1*^fl/fl^ mice were generously provided by Dr. Stuart Orkin (Children's Hospital Boston, Harvard Stem Cell Institute, Boston, MA, USA)[@b12] and the *Acta1*-Cre transgenic mice were purchased from the Jackson Laboratory, with the strain originally published by Miniou and colleagues[@b21]. All mice were maintained on a predominantly C57BL/6J background. *Acta1*-cre^+^: *Dnmt1*^fl/fl^ mice were generated by breeding mice heterozygous for the transgenes; *Acta1*-cre^+^: *Dnmt1*^+/+^ littermates were used as controls. Genotyping was performed on genomic DNA extracted from tail samples, and PCR was performed using previously published primers and PCR conditions. All experiments were approved by and carried out in accordance with the guidelines of Yale University's Institutional Animal Care and Use Committee.

Cell culture
------------

C2C12 myoblasts were cultured in DMEM supplemented with 20% FBS. Primary myoblasts were isolated from 8--12 week old mice as previously described[@b27]. Briefly, hindlimb muscles were enzymatically digested with 0.25% pronase at 37 °C for 1 hour and digestion was terminated with the addition of 10% horse serum. Cells were cultured in DMEM containing 20% FBS and antibiotics. Myogenic differentiation was induced by culturing cells in 2% horse serum. Osteogenic differentiation was induced by culturing cells in osteogenic media[@b27] and treating them with 0 or 50 ng/ml BMP-4 (120-05, Peprotech). MTS assays were performed according to the manufacturer's protocol (Promega).

Lentiviral production and shRNA knockdown
-----------------------------------------

Dnmt1 shRNA (shDnmt1) and scrambled control (shCtrl) lentiviruses were generated using standard protocols. Briefly, plasmids containing the control or Dnmt1 constructs were transfected into HEK293 cells using Fugene 6, and the viral supernatant was collected and concentrated by ultracentrifugation. Cells were transduced for 48 hours and selected using puromycin (2 μg/ml). Surviving cells were then sequentially passaged to establish stable cell lines, and lines in which Dnmt1 was knocked down by more than 70% were used for subsequent studies.

Semi-quantitative and quantitative reverse transcriptase-PCR analysis
---------------------------------------------------------------------

Total RNA was extracted from cells using the RNeasy mini kit (Qiagen) and quantified by Nanodrop. cDNA was prepared using the first strand cDNA synthesis kit (Invitrogen) according to the manufacturer's instructions. Quantitative real-time PCR was performed with SYBR green PCR master mix (Biorad) using the Biorad C1000 thermal cycler. Samples were run in triplicate and normalised to β-actin. Primer sequences used are listed in [Table 1](#t1){ref-type="table"}.

Histochemical staining and immunocytochemistry staining
-------------------------------------------------------

Cellular viability was determined using the CellTitre 96 Aqueous One Solution Cell Proliferation Assay kit (Promega) according to the manufacturer's instructions. Alkaline phosphatase activity and staining were detected as previously described[@b42].
Calcium deposits were assessed by Alizarin Red S staining[@b27]. For immunofluorescence staining, cells were fixed with 3.7% paraformaldehyde and permeabilized with 0.1% Triton X, followed by washing and blocking with 10% FBS. Cells were then incubated with the following primary antibodies overnight at 4 °C: Myh2 (1:100) and Myog (1:50), both from DSHB. Samples were incubated with Alexa 488- and 555-conjugated goat anti-mouse IgG (Invitrogen, 1:250) and stained with DAPI.

Western blots and chromatin immunoprecipitation (ChIP)
------------------------------------------------------

Cell lysates were used for immunoblotting, run on 12.5% SDS-PAGE gels and transferred to PVDF membranes. Membranes were blocked with 5% skim milk and incubated in primary antibodies overnight. Secondary antibodies were incubated for 30 min. Primary antibodies used included β-actin (Santa Cruz, sc-1616), Dnmt1 (Abcam, \#13537), Dnmt3a (Abcam, \#13888) and Dnmt3b (Abcam, \#13604). Rabbit anti-mouse-horseradish peroxidase (Sigma, A0168) or goat anti-mouse-horseradish peroxidase (Fisher, 62--6520) secondary antibodies were used. Chromatin immunoprecipitation (ChIP) was performed as previously described[@b43]. Briefly, 1 × 10^7^ cells were crosslinked and used for each immunoprecipitation. DNA was sheared to 200--750 bp by sonication. Protein G Dynabeads (Invitrogen) were used to immunoprecipitate the antibody-antigen complexes, with antibodies against H3K4me3 and H3K27me3. H3 and IgG were also included as positive and negative controls, respectively. Following cross-link reversal and proteinase K treatment, immunoprecipitated DNA was extracted with phenol-chloroform, ethanol precipitated and eluted. Recovered DNA was purified with the PCR Purification Kit (Qiagen) and analysed by quantitative PCR. Primers spanning the promoter regions of *Ocn* and *Id-1* were used to detect amplification of input and immunoprecipitated DNA. Primer sequences are listed in [Table 2](#t2){ref-type="table"}. All analysis was performed relative to % input.

Statistical analysis
--------------------

Statistical analyses were performed with unpaired Student's t test, with *P* \< 0.05 considered significant.

Additional Information
======================

**How to cite this article**: Liu, R. *et al*. Dnmt1 regulates the myogenic lineage specification of muscle stem cells. *Sci. Rep.* **6**, 35355; doi: 10.1038/srep35355 (2016).

Supplementary Material {#S1}
======================

###### Supplementary Information

We thank Dr. Yifei Liu for expert advice on ChIP assays. R.L. was the recipient of the Sir Keith Murdoch Fellowship from the American Australian Association. I.-H.P. was supported in part by the Charles Hood Foundation, NIH (GM0099130-01, GM111667-01), CSCRF (12-SCB-YALE-11, 13-SCB-YALE-06), KRIBB/KRCF (NAP-09-3) and CTSA Grant UL1 RR025750 from the National Center for Advancing Translational Science (NCATS), a component of the National Institutes of Health (NIH), and NIH roadmap for Medical Research. Its contents are solely the responsibility of the authors and do not necessarily represent the official view of NIH.

**Author Contributions** R.L. performed and designed the experiments, and wrote the manuscript. K.-Y.K. and Y.-W.J. performed experiments. I.-H.P. supervised the project, discussed the results and wrote the manuscript.

![Dnmt1 knockdown adversely affects myotube formation.\ (**A**) Dnmt1 expression following differentiation of C2C12 cells. \**P* \< 0.05 over D0.
(**B**) qPCR analysis for myogenic genes in the shCtrl compared to shDnmt1 myoblasts. \**P* \< 0.05 over shCtrl cultures. (**C**) Myog and Myh2 immunofluorescence in Dnmt1 knockdown cells upon induction of myogenic differentiation. Quantitation of the staining results, showing the number of Myog^+^ and Myh2^+^ cells in the knockdown cultures compared to shCtrl, is shown in the bar graph. Scale bar = 50 μm. (**D**) Fusion indexes were calculated after 7 days of differentiation. \**P* \< 0.05 over shCtrl cells. (**E**) Analysis of the number of nuclei calculated in Myh2-stained myotubes. \**P* \< 0.05 over shCtrl myoblasts.](srep35355-f1){#f1}

![Loss of Dnmt1 results in attenuated myogenesis.\ (**A**) Appearance of the *Acta1*-cre^+^: *Dnmt1*^f/f^ mice and littermate controls (*Acta1*-cre^+^: *Dnmt1*^+/+^) during postnatal development. Asterisks indicate *Acta1*-cre^+^: *Dnmt1*^f/f^ mice. (**B**) Weight measurements of mice during development. \**P* \< 0.05 over WT and Het mice of same age. (**C**) Myotube formation in the littermate control and the *Acta1*-cre^+^: *Dnmt1*^f/f^ mice. The number of myotubes per field was counted and graphed on the right. Five individual fields over three independent cultures were analysed. \**P* \< 0.05 over *Acta1*-cre^+^: *Dnmt1*^+/+^ littermate controls. Scale bar = 100 μm. (**D**) *Acta1*-cre^+^: *Dnmt1*^f/f^ myoblasts were transduced with retroviruses expressing EGFP or overexpressing (OE) Dnmt1 for 48 hours, and cultured in differentiation media for 5 days. Scale bar = 100 μm. The number of myotubes per field (five individual fields over three independent experiments) counted in the EGFP and Dnmt1 O/E cultures is shown in the bar graph.](srep35355-f2){#f2}

![Absence of Dnmt1 leads to hypomethylation of the *Id-1* promoter.\ (**A**) qPCR analysis of *Mef2c* expression in shCtrl and shDnmt1 myoblasts over 7 days of differentiation. \**P* \< 0.05 over D0 shCtrl; ^\#^*P* \< 0.05 over D0 shDnmt1. (**B**) qPCR analysis of *Id-1* expression in shCtrl and shDnmt1 myoblasts over 7 days of differentiation. \**P* \< 0.05 over D0 shCtrl; ^\#^*P* \< 0.05 over D0 shDnmt1. (**C,D**) ChIP-qPCR was performed with H3K4me3 or H3K27me3 antibodies on chromatin obtained from day 0 myoblasts (**C**) or day 7 myotubes (**D**). The precipitated DNA was amplified by qPCR using specific primers targeting a region of the *Id-1* promoter. ^\*^*P* \< 0.05 over shCtrl.](srep35355-f3){#f3}

![Dnmt1 knockout myoblasts undergo enhanced osteogenic differentiation.\ (**A**) shCtrl and shDnmt1 C2C12 myoblasts were grown in osteogenic media in the absence or presence of BMP-4 to stimulate osteogenic differentiation, and expression of the myogenic markers *MyoD* and *Myog* was assessed by qPCR. \**P* \< 0.05 over shCtrl - BMP-4; ^\#^*P* \< 0.05 over shDnmt1 - BMP-4. (**B**) qPCR analysis of early (*Runx2*, *ALP, Osx*) and late (*Bsp*, *Ocn*) osteogenic genes in shCtrl and shDnmt1 myoblasts without and with BMP-4. \**P* \< 0.05 over shCtrl - BMP-4; ^\#^*P* \< 0.05 over shDnmt1 - BMP-4. (**C**) ALP staining performed 4 days following osteogenic differentiation, and Alizarin Red S staining performed 7 days following differentiation, in the shCtrl and shDnmt1 cells. ALP activity in shCtrl and shDnmt1 cells after 4 days of BMP-4 treatment is shown in the bar graph. \**P* \< 0.05 over shCtrl - BMP-4. (**D**) ChIP-qPCR was performed with H3K4me3 or H3K27me3 antibodies on chromatin obtained from day 0 myoblasts or day 7 myotubes.
The precipitated DNA was amplified by qPCR using specific primers targeting a region in the *Ocn* promoter. ^\*^*P* \< 0.05 over shCtrl.](srep35355-f4){#f4}

###### Primers used for qPCR.

| Gene | Forward Primer | Reverse Primer |
|------|----------------|----------------|
| *β-actin* | TGAAGTGTGACGTGGACATC | GGAGGAGCAATGATCTTGAT |
| *Dnmt1* | CCCGGCCATCCACCTCCTCA | ATGCGCACTGGTTCTGCGCT |
| *Dnmt3a* | GCTGCAGGGCAGAAGGGTGG | ATGGGTCGCTGACGGAGGCT |
| *Dnmt3b* | GGGCCCGGTACTTCTGGGGT | GGCAGTCCTGCAGCTCGAGC |
| *MyoD* | CCAGCATAGTGGAGCGCATCTCC | GGAGGCGACTCTGGTGGTGCATC |
| *Myf5* | CTCCGTGTCCAGCTTGGATTGCTT | CTGAAGAGCCAGCTCGGATGGC |
| *Myog* | ACCTTCCTGTCCACCTTCAGGGC | CTCGGGCTTCCGGGCTTAAGC |
| *Myh2* | AGTCGTGGAGTCCATGCAGA | CATGCGGTTGGAGTGGTTCA |
| *Runx2* | CGTCAGGCATGTCCCTCGGC | GGGGTAGGGTGGTGGCAGGT |
| *Alp* | CTGCGCCATGAGACCCACGG | AAGCAGGTGTGCCATCGGGC |
| *Ocn* | TGGCCCAGACCTAGCAGACACC | AACCCGGAGGACACATACCTGTGAG |
| *Id-1* | CCCGCTCAGCACCCTGAACG | TGGAACACATGCCGCCTCGG |
| *Mef2c* | CTGCCAGTGCGCTCCACCTC | GAGGGCAGATGGCGGCATGT |
| *Ccne1* | CCTTTCAGTCCGCTCCAGAA | GCTGACTGCTATCCTCGCTT |

###### Primers used for ChIP-qPCR.

| Promoter | Forward Primer | Reverse Primer |
|----------|----------------|----------------|
| *Ocn* | CTAATTGGGGGTCATGTGCT | CCAGCTGAGGCTGAGAGAGA |
| *Id-1* | CTTATAAAAGACTGGCTCCAGC | GGAGGCTGAGAACAGAAACAGAGTGTG |
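As a purely illustrative aside on the two quantification steps referred to in the Methods (qPCR data normalised to β-actin, and ChIP-qPCR signals expressed relative to % input), the sketch below shows the arithmetic that is commonly used for these calculations. The comparative Ct (2^-ΔΔCt) formula, the 1% input fraction and all Ct values are assumptions for illustration and are not taken from the study.

// Illustrative sketch, not the authors' analysis pipeline: relative expression
// by the comparative Ct method normalised to beta-actin, and ChIP-qPCR
// enrichment expressed as percent of input. All Ct values are hypothetical.
public class QpcrSketch {

    // Fold change of a target gene in a sample relative to a control sample,
    // with each sample normalised to its own beta-actin reference.
    static double foldChange(double ctTarget, double ctActin,
                             double ctTargetCtrl, double ctActinCtrl) {
        double dCtSample  = ctTarget - ctActin;
        double dCtControl = ctTargetCtrl - ctActinCtrl;
        return Math.pow(2.0, -(dCtSample - dCtControl));
    }

    // ChIP-qPCR signal as a percentage of input chromatin. The input Ct is
    // first adjusted for the fraction of chromatin kept as input
    // (dilutionFactor = 100 corresponds to a 1% input, an assumed value).
    static double percentInput(double ctInput, double ctIp, double dilutionFactor) {
        double adjustedInputCt = ctInput - Math.log(dilutionFactor) / Math.log(2.0);
        return 100.0 * Math.pow(2.0, adjustedInputCt - ctIp);
    }

    public static void main(String[] args) {
        // Hypothetical triplicate-averaged Ct values
        System.out.printf("Fold change vs. control: %.2f%n",
                foldChange(24.1, 17.3, 22.8, 17.2));
        System.out.printf("ChIP signal (%% input): %.3f%n",
                percentInput(25.0, 27.5, 100.0));
    }
}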
To slow the effects of climate change, it is essential that the potential impact of infrastructure schemes is considered right from the start. Failure to address carbon emissions at the earliest project design stages can significantly hamper efforts to reduce their environmental impact, because the choices available to the delivery team diminish as the project progresses, says Lewis Barlow of Sweco.

This principle underpinned the delivery of £130m of infrastructure projects in the Glasgow City Region. A key goal of the Glasgow infrastructure schemes is to create high-quality green travel routes that promote active travel and better connect existing transport systems. Careful planning in the earliest stages meant carbon accounting was built into the process from the project's inception, and the scheme's overall carbon impact was cut by more than 35% compared with an initial baseline assessment. Further savings of over 20% are expected through the incorporation of carbon reduction into the procurement process.

Each project was informed by the guidelines of Publicly Available Specification (PAS) 2080: Carbon Management in Infrastructure, a framework developed in response to the UK Treasury's 2013 Infrastructure Carbon Review. Informing industry best practice in managing whole-life carbon, its guidance can steer the decision-making process throughout the design and planning of any infrastructure project. Adopting PAS 2080 principles allows developers to slash a project's carbon impact and deliver significant cost savings. In Renfrewshire, it informed a detailed carbon management assessment during the early design stages, with results shared across teams to ensure a proportionate, consistent focus on carbon reduction. These early stages are where the greatest potential to reduce embodied carbon lies, as a meaningful assessment of every development stage can still be made.

Analysis of carbon impacts in the Glasgow projects identified immediate improvements to be made in terms of air quality, emission reduction and congestion mitigation. But the findings also influenced the final design to help make the project resilient to future climate change. Design changes needn't be drastic to be impactful. Small tweaks in the planning stages can shape the long-term environmental impact of the project. Features such as public cycleways and footpaths will encourage more sustainable travel, and incorporating lower-carbon construction materials, such as recycled aggregates, will enable a cleaner, greener construction process.

Once construction starts, however, there are fewer levers available for project teams to pull to influence the carbon outcomes of a project, with options often limited to minor changes designed to minimise waste. After commissioning, this diminishes further, with efficient operations and maintenance strategies the only remaining avenues for influencing carbon emissions.

There is a vital need to reconcile the twin goals of supporting economic growth and meeting carbon-reduction targets. This can only be achieved if the latter becomes a central component of the decision-making process for infrastructure projects from beginning to end. All too often, project teams miss the biggest opportunities to reduce a scheme's carbon impact by leaving it too late to carry out a thorough impact assessment and make decisions based on its findings.
But utilising key learnings from the projects Sweco delivered for Renfrewshire Council, informed by industry standards like PAS 2080, can simplify the process and deliver clear carbon and cost savings. Ultimately, carbon targets are fast becoming a fundamental consideration for all major construction projects, and it is high time they were treated that way.
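The article quotes a design-stage cut of more than 35% and a further saving of over 20% through procurement; how these figures combine depends on whether the second percentage is measured against the original baseline or against the already-reduced total. Below is a small illustrative sketch, assuming the latter and using a made-up baseline figure.

```python
# Illustrative only: combining sequential carbon reductions against a baseline.
# Assumption: the procurement-stage saving applies to the design-reduced figure,
# not to the original baseline; the percentages are the article's rounded values
# and the baseline quantity is hypothetical.

baseline_tco2e = 100_000          # hypothetical baseline embodied carbon (tCO2e)
design_stage_cut = 0.35           # >35% saved through early design decisions
procurement_cut = 0.20            # >20% expected through low-carbon procurement

after_design = baseline_tco2e * (1 - design_stage_cut)
after_procurement = after_design * (1 - procurement_cut)

total_reduction = 1 - after_procurement / baseline_tco2e
print(f"Remaining: {after_procurement:.0f} tCO2e, total cut: {total_reduction:.0%}")  # roughly 48%
```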
https://constructionclimatechallenge.com/2019/03/19/pas-2080-helps-lead-the-way-in-low-carbon-infrastructure/
Travelling and living in the exciting and beautiful city of Melbourne is a once-in-a-lifetime experience. Whether you are travelling to Melbourne or going to live in Melbourne for a short while, be armed with these...

- 15 Fun Facts about Brisbane, Australia: Brisbane is a city on the Pacific coast of Australia, in the extreme southeast of Queensland. Besides being Queensland's administrative, educational, and cultural capital, the city is also the hub of its transportation...
- Can You See The Southern Lights From Australia? Aurora Australis: We look at the question, can you see the Southern Lights from Australia? Plus a little bit more about the Aurora Australis and exactly how to see the Southern Lights.
- 5 Poisonous and Venomous Snakes, Spiders, Bugs And Creatures In New Zealand: New Zealand is a country of such diversity that there is no other place like it in the world. The beauty of the country is only marred by the insects and animals which can cause harm and fatalities.
- Moving to Australia: You live in the Northern Hemisphere; winter is closing in and the dream of beautiful, clean, empty Australian beaches beckons. Or maybe the housing market is rubbish, the cost of living is ridiculous and you want to escape...
http://hubpages.com/travel/australasia/5846
SCIENCE 9 - Unit B | Matter & Chemical Change

Guiding questions:
- What are the properties of materials?
- What happens to them during chemical change?
- What evidence do we have of chemical change?
- What ideas, theories, or models help us explain that evidence?

General KO 1: Investigate materials, and describe them in terms of their physical and chemical properties
- Investigate and describe properties of materials (e.g. investigate and describe the melting point, solubility and conductivity of materials observed)
- Describe and apply different ways of classifying materials based on their composition and properties, including:
  - distinguishing between pure substances, solutions and mechanical mixtures
  - distinguishing between metals and nonmetals
- Identify conditions under which properties of a material are changed, and critically evaluate if a new substance has been produced

General KO 2: Describe and interpret patterns in chemical reactions
- Identify and evaluate dangers of caustic materials and potentially explosive reactions
- Observe and describe evidence of chemical change in reactions between familiar materials, by:
  - describing combustion, corrosion and other reactions involving oxygen
  - observing and inferring evidence of chemical reactions between familiar household materials
- Distinguish between materials that react readily and those that do not (e.g. compare reactions of different metals to a dilute corrosive solution)
- Observe and describe patterns of chemical change, by:
  - observing heat generated or absorbed in chemical reactions, and identifying examples of exothermic and endothermic reactions
  - identifying conditions that affect rates of reactions (e.g. investigate and describe how factors such as heat, concentration, surface area and electrical energy can affect a chemical reaction)
  - identifying evidence for conservation of mass in chemical reactions, and demonstrating and describing techniques by which that evidence is gathered

General KO 3: Describe ideas used in interpreting the chemical nature of matter, both in the past and present, and identify example evidence that has contributed to the development of these ideas
- Distinguish between observation and theory, and provide examples of how models and theoretical ideas are used in explaining observations (e.g. describe how observations of electrical properties of materials led to ideas about electrons and protons; describe how observed differences in the densities of materials are explained, in part, using ideas about the mass of individual atoms)
- Use the periodic table to identify the number of protons, electrons and other information about each atom; and describe, in general terms, the relationship between the structure of atoms in each group and the properties of elements in that group (e.g. use the periodic table to determine that sodium has 11 electrons and protons and, on average, about 12 neutrons; infer that different rows (periods) on the table reflect differences in atomic structure; interpret information on ion charges provided in some periodic tables; see the short sketch following this outline)
- Demonstrate understanding of the origins of the periodic table, and relate patterns in the physical and chemical properties of elements to their positions in the periodic table, focusing on the first 18 elements
- Distinguish between ionic and molecular compounds, and describe the properties of some common examples of each

General KO 4:
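The sodium example in the outline above (11 protons and electrons, roughly 12 neutrons) follows directly from the atomic number and the rounded standard atomic mass. The sketch below illustrates that arithmetic for a handful of the first 18 elements; the element data are standard periodic-table values and are not part of the curriculum document.

```python
# Sketch: reading proton, electron and approximate neutron counts off a periodic table.
# For a neutral atom: protons = electrons = atomic number (Z);
# neutrons ≈ round(standard atomic mass) - Z.
# Only a few of the first 18 elements are listed here as an illustration.

elements = {
    # symbol: (atomic number Z, standard atomic mass)
    "H":  (1, 1.008),
    "C":  (6, 12.011),
    "O":  (8, 15.999),
    "Na": (11, 22.990),
    "Cl": (17, 35.45),
}

for symbol, (z, mass) in elements.items():
    neutrons = round(mass) - z
    print(f"{symbol}: {z} protons, {z} electrons, ~{neutrons} neutrons")
# Na prints: 11 protons, 11 electrons, ~12 neutrons, matching the curriculum example.
```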
https://coggle.it/diagram/XXlZCkd5XSK4vp8q/t/science-9-unit-b-matter-us-explain-that-evidence
Creativity is a complex and compelling psychological phenomenon. To understand creativity is to understand the varied individual, social, cultural, and historical factors that impinge on it. Although creativity has always been a topic that has engaged the interest and imagination of researchers and laypeople alike, it has had a somewhat turbulent history in the field of psychology. Indeed, the psychological study of creativity remained something of a niche until the 1990s. This is no longer the case. The steady efforts of creativity scholars—working since the mid-20th century and around the globe—have greatly expanded our understanding of creativity. In the early 21st century the field of creativity studies represents one of the most active, challenging, and important areas of psychological inquiry. Classic Works Research psychologists largely neglected creativity as an area of serious study until the 1950s. The catalyzing event for the psychological study of creativity was Joy P. Guilford’s presidential address to the American Psychological Association (Guilford 1950), in which he noted that creativity was a topic in need of serious scientific study and charged research psychologists with the goal of understanding the nature of creativity and discovering how best to cultivate it. In the years following Guilford’s address, creativity research proliferated. The establishment of several national centers and institutes greatly advanced the psychological study of creativity. Two key examples are the Institute of Personality Assessment and Research (IPAR), directed by Donald W. MacKinnon, at the University of California, Berkeley (see MacKinnon 1975), and the Center for the Study of Creativity and Mental Health, directed by Morris I. Stein, first at the University of Chicago. In addition to the development of institutes and centers, key historical events included a series of influential Utah creativity conferences, starting in 1955, organized by Calvin W. Taylor (see Taylor 1956). These conferences brought together some of the most influential creativity researchers and published their papers in conference proceedings. The infrastructure provided by these centers, institutes, and conferences afforded the context, resources, and opportunities necessary for early creativity researchers to make lasting contributions to the field. Ravenna Helson, Frank X. Barron, and E. Paul Torrance are examples of early creativity researchers who leveraged these early opportunities into contributions that continue to reverberate throughout the field. Helson 1999, for example, engages in pioneering longitudinal research, dating back to 1957, that explores the intersection of creativity, personality, and gender. Barron 1963 reports on a program of research, also starting in the 1950s, in which the author and his colleagues examined the relationship between creativity and personality by studying accomplished creative writers, architects, research scientists, and mathematicians. Torrance, an educational psychologist, was another influential creativity research pioneer. Torrance 1963 developed a program of research that looked at the nurturance (and suppression) of creativity in educational settings. Torrance’s work resulted in the creation of one of the most popular, albeit contested (see Theoretical Perspectives: Psychometric), sets of measurement instruments used in contemporary creativity research. 
The earnest efforts of these early pioneers helped legitimize the psychological study of creativity and laid down the fertile soil from which creativity research could take root. It is no wonder that contemporary creativity researchers often refer to these early efforts as the golden age of creativity research.

Barron, Frank. 1963. Creativity and psychological health: Origins of personal vitality and creative freedom. Princeton, NJ: Van Nostrand. Summarizes Barron's program of research in the 1950s and early 1960s, including detailed description of several studies and findings from the "lived in" assessments that pertain to creativity, personality, and psychological health.

Guilford, J. P. 1950. Creativity. American Psychologist 5.9: 444–454. DOI: 10.1037/h0063487. Guilford's presidential address to the American Psychological Association, which served as a catalyst for the serious psychological study of creativity.

Helson, Ravenna. 1999. A longitudinal study of creative personality in women. Creativity Research Journal 12.2: 89–101. DOI: 10.1207/s15326934crj1202_2. This article reports on a pioneering thirty-year longitudinal study exploring the creative potential and personality of one hundred college women.

MacKinnon, Donald W. 1975. IPAR's contribution to the conceptualization and study of creativity. In Perspectives in creativity. Edited by Irving A. Taylor and J. W. Getzels, 60–89. Chicago: Aldine. Gives a historical account of the role that IPAR played in conceptualizing the facets of creativity and outlining major research questions for systematic psychological study of creativity.

Taylor, Calvin W., ed. 1956. The 1955 University of Utah Research Conference on the Identification of Creative Scientific Talent, Held at Alpine Rose Lodge, Brighton, Utah, August 27–30, 1955. Salt Lake City: Univ. of Utah Press. This text includes papers and committee reports that were presented at the first Utah conference organized by Taylor. The majority of the papers focus on the nature and measurement of creativity.

Torrance, E. Paul. 1963. Education and the creative potential. Modern School Practices. Minneapolis: Univ. of Minnesota Press. An early text, in which Torrance summarizes his theoretical and empirical work.
https://www.oxfordbibliographies.com/view/document/obo-9780199828340/obo-9780199828340-0139.xml
It might not be the oldest dish in Hanoi, but "Chả Cá Lã Vọng" is definitely the most distinctive among the best-known dishes of the capital city.

(Photo by Tri Nguyen)

History of Cha Ca La Vong

The origin of this dish dates back to the early 1900s, when Vietnam was a French colony. The Đoàn family, who lived at 14 Hàng Sơn Street, often hosted secret meetings for the resistance army. They decided to sell this home-cooked fish dish as a cover, and to earn some extra money. People called it "Chả Cá Lã Vọng" after the statue of "Lã Vọng", a Chinese poet and revolutionist, sitting at the gate of the house. The delicious taste of "Chả Cá" soon made it so popular that the whole street was later renamed after it.

(Photo by Viethavvh)

How to make Cha Ca La Vong?

The fish used in this dish is Hemibagrus, a type of catfish caught in the rivers of the northern mountainous area. It is one of the largest river fish, so it is easy to remove all the bones and obtain a large fillet. The fish is cut into matchbox-sized pieces and marinated in galangal and turmeric along with other spices. The spiced fish pieces are then placed in bamboo clips and grilled over charcoal until both sides are almost cooked. After that, the fish is briefly fried in a pan of hot oil together with dill and spring onion. "Chả Cá" must be served hot with rice vermicelli, fried peanuts and coriander, all dipped into Vietnamese dipping sauce (fish sauce, vinegar, salt, sugar and garlic) or shrimp paste ("mắm tôm") mixed with lime juice. The grilled fish pieces must not be broken or too dry; they should be golden, tasty, fresh and fatty.

(Photo by Lionel Ng)

Imagine that you are one of the guests…

As you sit down at the table, the waiter lays out the seasonings, including a bowl of well-stirred shrimp paste sauce mixed with lemon. After adding a few drops of liquor, he decorates the bowl with a few slices of fresh red pimento, a plate of golden grilled peanuts, and various mint herbs and onions cut into small white slices. To many customers, the sight of such seasonings alone greatly stimulates the appetite. A few minutes later the fried fish, yellow in colour and fragrant, is brought in on a plate of dill. But that is not all. Moments later, as soon as a cauldron of boiling fat is brought in, the waiter pours it over each bowl of grilled fish, producing white smoke and a sputtering noise. Now is the time to pick and choose what you like from the dishes on the table and add it to your bowl; everything should be eaten together. Let's taste…

Where to eat Cha Ca La Vong in Hanoi?

Although "Chả Cá" can be found in many restaurants, even around the world, the original "Chả Cá Lã Vọng" is still open to locals and tourists after five generations. It is in the Old Quarter, at 14 Chả Cá Street. You can also visit the restaurant at No. 87 Nguyen Truong To Street, Hanoi. You can search for the address on Google Maps or join our Hanoi city tour to explore Hanoi and enjoy this unique dish.
https://www.viettravelmagazine.com/2015/12/cha-ca-la-vong-hanoi-tumeric-fish-with.html
Daniel Levinthal is the Reginald H. Jones Professor of Corporate Strategy at the Wharton School, University of Pennsylvania. Levinthal has published extensively on questions of organizational adaptation and industry evolution, particularly in the context of technological change, with 70 articles and book chapters that have received some 20,000 citations. He is a Fellow of both the Strategic Management Society and the Academy of Management. In addition, he is a past winner of the Strategic Management Society's Best Paper prize and has received the Distinguished Scholar Award from the Organization and Management Theory Division of the Academy, as well as the Outstanding Educator Award from the Business Policy Division of the Academy. He currently serves as Editor-in-Chief of Strategy Science and has previously served as Editor-in-Chief of Organization Science. He has received honorary doctorates from the University of Southern Denmark, Tilburg University, and the University of Warwick and has held visiting professorships at the Harvard Business School (Bower Fellow), the Sant'Anna School of Advanced Studies, University of Pisa (Philip Morris Visiting Professor), and the University of New South Wales (Michael Crouch Visiting Professor).

Daniel A Levinthal and Andrea Contigiani (2018), Situating the Construct of Lean Startup: Adjacent "Conversations" and Possible Future Directions, Industrial and Corporate Change, Forthcoming.

Daniel A Levinthal (2017), Mendel in the C-Suite: Design and the evolution of strategies, Strategy Science, 2 (4), pp. 282-287. Abstract: A "Mendelian" executive is proposed as an image of strategy making that lies intermediate between the godlike powers of intentional design of rational choice approaches and a Darwinian process of random variation and market-based differential selection. The Mendelian executive is capable of intentional design efforts in order to explore possible adjacent strategic spaces. Furthermore, the argument developed here highlights the role of intentionality with respect to the selection and culling of strategic initiatives. The firm is viewed as operating an "artificial selection" environment in contrast to selection as the direct consequence of the outcome of competitive processes. Examining the nature of the processes generating these experimental variants and the bases of internal selection, and how these selection criteria may themselves change, is argued to be central to the formation of strategy in dynamic competitive environments.

Daniel A Levinthal (2017), Resource allocation and firm boundaries, Journal of Management, 43 (8), pp. 2580-2587. Abstract: In a modern economy, much of the allocation of financial and nonfinancial resources is mediated by organizations. This essay points to three general features of this mediating role of organizations in the resource allocation process. One line of argument relates to the distinct opportunities and opportunity costs that an organization faces. The set of investment opportunities for organizations differs as a result of their privileged access to different investment opportunities. The second line of argument considers the impact of differential beliefs and perspectives on the resource allocation process. The diversity of independent budgetary entities, both internal to and external to the organization, is argued to importantly influence the heterogeneity of the bases of selection among alternative investment opportunities.
Lastly, this mediation of resource allocation by the firm plays a particularly important role with respect to the allocation of resources over time on a given initiative. Organizations do not simply buffer initiatives from selection but potentially provide different bases for interim selection processes.

Victor Bennett and Daniel A Levinthal (2017), Firm lifecycles: Linking employee incentives and firm growth dynamics, Strategic Management Journal, 38 (10), pp. 2005-2018. Abstract: While the economic advantages of scale are well understood, implications of the rate of firm growth are arguably less appreciated. Since firms' growth rate influences employees' promotion opportunities, the growth rate can have significant implications for the incentives employees face. Rapid growth, by creating more promotion opportunities, motivates employees to engage in extra-role behaviors that might result in promotion should an opportunity arise. Building on this argument, we develop a formal model linking the design of firms' incentive structure to their rate of growth. The associated dynamics lead to three distinct epochs of firms' lifecycle: rapid growth and high-powered incentives driven by frequent promotion opportunities; moderate growth with infrequent promotion opportunities, but large salary increases contingent on promotion; and finally, stagnant firms with low-powered incentives.

Thorbjorn Knudsen, Daniel A Levinthal, Sidney G Winter (2017), Systematic differences and random rates: Reconciling Gibrat's Law with firm differences, Strategy Science, 2 (2), pp. 111-120. Abstract: A fundamental premise of the strategy field is the existence of persistent firm-level differences in resources and capabilities. This property of heterogeneity should express itself in a variety of empirical "signatures," such as firm performance and arguably systematic and persistent differences in firm-level growth rates, with low-cost firms outpacing high-cost firms. While this property of performance differences is a robust regularity, the empirical evidence on firm growth and Gibrat's law does not support the latter conjecture. Gibrat's law, or the "law of proportionate effect," states that, across a population of firms and over time, firm growth at any point is, on average, proportionate to the size of the firm. We develop a theoretical argument that provides a reconciliation of this apparent paradox. The model implies that in the early stages of an industry's history, firm growth may have a systematic component, but for much of an industry's and a firm's history it should follow a random pattern consistent with the Gibrat property. The intuition is as follows. In a Cournot equilibrium, firms of better "type" (i.e., lower cost) realize a larger market share, but act with some restraint on their choice of quantity in the face of a downward-sloping demand curve and recognition of their impact on the market price. If firms are subject to random firm-specific shocks, then in this equilibrium setting a population of such firms would generate a pattern of growth consistent with Gibrat's law. However, if the broader evolutionary dynamics of firm entry, and the subsequent consolidation of market share and industry shake-out, are considered, then during early epochs of industry evolution one would tend to observe systematic differences in growth rates associated with firms' competitive fitness. Thus, it is only in these settings far from industry equilibrium that we should see systematic deviations from Gibrat's law.
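Gibrat's "law of proportionate effect" described in the abstract above has a simple statistical signature that a toy simulation can make concrete: when each firm's proportional growth shock is drawn independently of its size, size and growth rate are uncorrelated, even though larger firms grow more in absolute terms. The sketch below only illustrates that baseline property; it is not the Cournot-equilibrium model developed in the paper.

```python
# Toy illustration of Gibrat's law: growth proportionate to size, i.e. growth
# rates independent of size. Illustrative sketch only, not the model from
# Knudsen, Levinthal and Winter (2017); all numbers are arbitrary.
import random

random.seed(1)
sizes = [random.uniform(10, 1000) for _ in range(5000)]         # initial firm sizes
growth_rates = [random.gauss(0.03, 0.10) for _ in sizes]        # size-independent proportional shocks
new_sizes = [s * (1 + g) for s, g in zip(sizes, growth_rates)]  # growth proportionate to size

def corr(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    sx = (sum((x - mx) ** 2 for x in xs) / len(xs)) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / len(ys)) ** 0.5
    return cov / (sx * sy)

abs_growth = [n - s for n, s in zip(new_sizes, sizes)]
print("corr(size, growth rate) ≈", round(corr(sizes, growth_rates), 3))  # near 0: the Gibrat property
print("corr(size, absolute growth) ≈", round(corr(sizes, abs_growth), 3))  # positive: big firms add more
```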
Felipe Csaszar and Daniel A Levinthal (2016), Mental representation and the discovery of new strategies, Strategic Management Journal, 37, pp. 2013-2049. Abstract: Managers' mental representations affect the perceived payoffs and alternatives that managers consider. Thus, mental representations affect how managers search for profitable strategies as well as the quality of strategies they discover. To study how mental representation and search interact, we formally model the dual search over possible representations and over policy choices of a strategy "landscape." We analyze when it is preferable to emphasize searching for the best policies rather than the best mental representation, and vice versa. We show that, in the long run, a balance between the two search modes not only results in better expected performance, but also reduces the variation in performance. Additionally, the article describes conditions under which increased accuracy of mental representations can actually worsen firm performance.

Daniel A Levinthal and Alessandro Marino (2015), Three facets of organizational adaptation: Selection, variety, and plasticity, Organization Science, 26 (3), pp. 743-755. Abstract: When considering the adaptive dynamics of organizations, it is important to account for the full set of adaptive mechanisms, including not only the possibility of learning and adaptation of a given behavior but also the internal selection over some population of routines and behaviors. In developing such a conceptual framework, it is necessary to distinguish between the underlying stable roots of behavior and the possibly adaptive expression of those underlying templates. Selection occurs over expressed behavior. As a result, plasticity, the capacity to adapt behavior, poses a trade-off as it offers the possibility of adaptive learning but at the same time mitigates the effectiveness of selection processes to identify more or less superior underlying roots of behavior. In addition, plasticity may mitigate the reliability with which practices are enacted. These issues are explored in the context of a computational model, which examines the interrelationship among processes of variation, selection, and plasticity.

Daniel A Levinthal and Maciej Workiewicz (Working), Nearly decomposable systems and organizational structure: The adaptive properties of the multi-authority form.

Daniel A Levinthal and Claus Rerup (Working), Grey zones and the variegated quality of success and failure: Deconstructing the interpretation of experience in the process of organizational learning.

Michael D. Cohen, Daniel A Levinthal, Massimo Warglien (2014), Collective performance: Modeling the interaction of habit-based actions, Industrial and Corporate Change, 23, pp. 329-360. Abstract: Recurring patterns of action are essential in our efforts to explain central properties of business firms and other organizations. However, the development of systematic theory has been hampered by the difficulty of adequately specifying foundational assumptions. We address this problem by defining a concept of collective performance, which brings together a range of recurring organizational action patterns that have been studied under labels such as "routine," "practice," standard operating procedure, or "genre of action." All these forms of organizational action are based on human habit to a significant degree. We propose a conceptual framework for such habit-based organizational action patterns.
The framework is a set of core principles and desirable model properties that can serve as a guide in the development of formal models of collective performance. It provides micro-foundations for the modeling of collective performance that are aligned with contemporary developments in psychology. Finally, we present a series of examples, developed in Supplementary Materials, that shows how our framework leads to new classes of formal models that can aid the analysis of collective performance.

This course encourages students to analyze the problems of managing the total enterprise in the domestic and international setting. The focus is on the competitive strategy of the firm, examining issues central to its long- and short-term competitive position. Students act in the roles of key decision-makers or their advisors and solve problems related to the development or maintenance of the competitive advantage of the firm in a given market. The first module of the course develops an understanding of key strategic frameworks using theoretical readings and case-based discussions. Students will learn concepts and tools for analyzing the competitive environment, strategic position and firm-specific capabilities in order to understand the sources of a firm's competitive advantage. In addition, students will address corporate strategy issues such as the economic logic and administrative challenges associated with diversification choices about horizontal and vertical integration. The second module will be conducted as a multi-session, computer-based simulation in which students will have the opportunity to apply the concepts and tools from module 1 to make strategic decisions. The goal of the course is for students to develop an analytical tool kit for understanding strategic issues and to enrich their appreciation for the thought processes essential to incisive strategic analysis.

This course offers students the opportunity to develop a general management perspective by combining their knowledge of specific functional areas with an appreciation for the requirements posed by the need to integrate all functions into a coherent whole. Students will develop skills in structuring and solving complex business problems. The management of large, established enterprises creates a range of multi-faceted challenges for the general manager. A general manager needs to understand the internal workings of a firm, how to assess and create a strategy, and how to take into account increasing globalization. While these issues are distinct, they are very much intertwined. As a result, this course will provide you with an integrated view of these challenges and show you that effective management of an established enterprise requires a combination of insights drawn from economics, sociology, psychology and political economy.

This course examines some of the central questions in management with economic approaches as a starting point, but with an eye to links to behavioral perspectives on these same questions. It is not a substitute for a traditional microeconomics course. Economics concerns itself with goal-directed behavior of individuals interacting in a competitive context. We adopt that general orientation but recognize that goal-directed action need not take the form of maximizing behavior and that competitive processes do not typically equilibrate instantaneously. The substantive focus is on the firm as a productive entity. Among the sorts of questions we explore are the following: What underlies a firm's capabilities?
How does individual knowledge aggregate to form collective capabilities? What do these perspectives on firms say about the scope of a firm's activities, both horizontally (diversification) and vertically (buy-supply relationships)? We also explore what our understanding of firms says about market dynamics and industry evolution, particularly in the context of technological change. A central property of firms, as with any organization, is the interdependent nature of activity within them. Thus, understanding firms as "systems" is quite important. Among the issues we explore in this regard are the following. Organizational "systems" have internal structure, in particular elements of hierarchy and modularity. Even putting aside the question of individual goals and objectives and how they may aggregate, the question of organizational goals is non-trivial. To say that a firm's objective is to maximize profits is not terribly operational. How does such an overarching objective get decomposed to link to the actual operating activities of individual subunits, including individuals themselves? This issue of goals has links to some interesting recent work that links the valuation process of financial markets to firm behavior. Financial markets are not only a reflection of firm value, but may guide firms' initiatives in systematic ways.

Innovation is currently a hot topic. The popular stereotype holds that firms that are not innovating, or that are not at least saying that they are innovating, are backward at best and at risk of extinction at worst. Behind the hype is more than half a century of theory and research on organizational and managerial innovation, and the seminar will focus on this domain. A useful overview of the domain is available in Fariborz Damanpour, "Organizational Innovation," in the Oxford Research Encyclopedia of Business and Management (Oxford University Press, 2018). We will examine a number of theoretical and empirical approaches to understanding the phenomenon that have emerged during this period and identify key questions explored, principal contributions to the literature, and remaining questions and areas for future research. At the same time, we will embark on a journey in the sociology of knowledge, to encourage you to understand how the evolution of these approaches is linked to changes in the larger world. The seminar is also designed to allow you to pursue an angle on the subject matter that is of particular interest to you, and the deliverables are intended to allow you to investigate a topic that you wish to explore in some depth. Both will be discussed at the beginning of the course and will be fleshed out in subsequent discussions together.

This is an introductory doctoral seminar on research methods in management. We examine basic issues involved in conducting empirical research for publication in scholarly management journals. We start by discussing the framing of research questions, theory development, the initial choices involved in research design, and basic concerns in empirical testing. We then consider these issues in the context of different modes of empirical research (including experimental, survey, qualitative, archival, and simulation). We discuss readings that address the underlying fundamentals of these modes as well as studies that illustrate how management scholars have used them in their work, separately and in combination.

Compared to other tech giants such as Amazon, Google, Tesla and Microsoft, Apple seems to be falling behind on innovation.
But that impression is based on a limited understanding of what innovation means, Wharton experts say.
https://mgmt.wharton.upenn.edu/profile/dlev/
Tips From a Private San Diego College Tutor: 5 Tips for College Finals

College students are finishing up another year of university-level coursework that has kept them busy since last August. They are just a few short days away from heading off for summer vacation and taking a mental break from all the hard work they've done. Before they can enjoy some rest and relaxation, they need to ace their final exams, which will entail an intense amount of studying, essay writing, and review. Finals week can be horrid and overly stressful or somewhat manageable, depending on how efficient the student's study habits are and how intense their coursework is, but there are some tips and tricks that can help them survive. Our in-home San Diego college tutors are here to help you score high on your finals.

1. Talk with the professor or TA

It's really important for students to visit the professor or TA during office hours ahead of time so they have an opportunity to ask any questions or clarify any assignment details before the due date. Office hours fill up very quickly in the days leading up to finals, so students are encouraged to book ahead of time. Many students discover that they have follow-up questions or need further clarification from the instructor, so they should not wait until the last minute to communicate.

2. Re-read the syllabus

College students are also encouraged to reread their syllabus for important information about final exams. Sometimes a final exam is worth as little as 5% of the final grade, making it a pretty low-stress situation, whereas in other cases a final exam can be worth 25% or even 50% of the final grade. That means a student's entire letter grade could be decided based on one day. The syllabus might also offer clues as to where students should look for study materials or where they might find potential test questions within the reading or class notes. Instructors often put bonus-point questions within the syllabus just to see if students read it (READ: 5 Signs You Need an Irvine Math Tutor in College).

3. Limit social activities to studying

It's important to have an overall balance and a fun social life while in college, but finals week is not the time to be meeting with friends. Social activities should cease while studying for finals, except for working with a study group and helping each other succeed. Students will have plenty of time to meet with friends over the summer once they have aced all of their difficult exams.

4. Find a quiet study space

During finals week it seems like every library cubicle, every couch in the dorm common room, and every bench under a tree is filled with a student anxiously studying for exams. This leaves many students stressed out and unable to find the peace and quiet they need to focus on their studies. Students may need to be creative and venture to a coffee shop or library off campus, or they may need to book a private study space at the school library in advance. Students who simply can't find a quiet spot should use earplugs or anything else that limits distractions from classmates and the environment.

5. Prioritize

Finals week is also a good time for students to think about priorities. Which classes are more important than others? Do they need to maintain an overall GPA to meet the requirements of their financial aid package? What are their grades like in their potential major field of study? What grades do they have going into the final?
Taking 10 or 15 minutes to look at where students need to put the bulk of their time and effort can help them thrive through this difficult week. Getting an A+ in one course might not help if a student receives an F in another. Perhaps they’re better off with two Bs. Students need to plan ahead to make sure they meet their specific goals and have an overall successful semester. It’s not too late to book your private San Diego college tutor for finals. Our tutors work around your busy schedules. Call TutorNerds for more information.
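The prioritization point above, that an A+ in one course may not offset an F in another, is easy to check with a quick grade-point calculation. This is only a sketch on a common unweighted 4.0 scale; actual grade scales and credit-hour weights vary by school.

```python
# Quick GPA comparison for the prioritization example above.
# Assumes an unweighted 4.0 scale where A+/A = 4.0, B = 3.0, F = 0.0;
# real scales and credit-hour weights differ between schools.

grade_points = {"A+": 4.0, "A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def gpa(grades):
    return sum(grade_points[g] for g in grades) / len(grades)

print(gpa(["A+", "F"]))  # 2.0: one great grade and one failed course
print(gpa(["B", "B"]))   # 3.0: two solid grades beat the split outcome
```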
https://tutornerds.com/learning-blog/college_finals-5-tips-from-a-private_san_diego_college_tutor/
Evidence has recently emerged that Martin Heidegger read Julius Evola. In an article entitled "Ein spirituelles Umsturzprogramm" ("A Spiritual Revolution Program") published in the Frankfurter Allgemeine Zeitung, December 30, 2015, Thomas Vasek reports on an important document he discovered:

Julius Evola, the ultra-fascist Italian cultural philosopher, was eagerly read not only by Gottfried Benn, but also by Martin Heidegger, as an unpublished note shows. The keyword of Martin Heidegger's note is "race"; below that appears, in the handwriting of the philosopher, the following sentence: "Wenn eine Rasse die Berührung mit dem, was allein Beständigkeit hat und geben kann — mit der Welt des Seyns — verloren hat, dann sinken die von ihr gebildeten kollektiven Organismen, welches immer ihre Größe und Macht sei, schicksalhaft in die Welt der Zufälligkeit herab." ["If a race has lost contact with what alone has and can give resistance — with the world of Beyng — then the collective organisms formed from it, whatever be their size and power, sink fatefully down into the world of contingency."] The quotation is taken verbatim from the book Revolt Against the Modern World, which was first published in German in 1935; only the spelling of "Being" has been Heideggerized. The author of the work was the Italian cultural philosopher and esotericist Julius Evola (1898-1974) — a racist and anti-Semite who revered the SS as an elite order, developed a Fascist racial doctrine, and wrote a Preface to the Protocols of the Elders of Zion. After the war, the Italian fascists revered him. To this day he is considered a leading figure of the extreme Right across Europe. [. . .] The as yet unpublished excerpt could give new direction to the ongoing Heidegger controversy. Evola's name does not appear in Heidegger's published writings, and Heidegger scholarship has taken little notice of him. Even the Italian philosopher Donatella di Cesare does not mention Evola in her book Heidegger, the Jews, the Shoah (2015). Yet textual comparisons suggest that Heidegger had not only read Evola, as this note indicates, but was also influenced by his ideas from the mid-thirties on, from his critique of science and technology, his anti-humanism and rejection of Christianity, to his "spiritual" racism. If this thesis is correct, then perhaps one could view the late Heidegger as a radical fascist esotericist who hoped that rule by a spiritual elite would bring about the reappearance of the gods.

Of course this single note establishes only that Heidegger read one of Evola's books, not that he read it "eagerly." Nor does it indicate what Heidegger thought of Evola. But it is still an important discovery. It could lead nowhere. (It could be Heidegger's sole reference to Evola.) Or it could be the tip of an iceberg. An Evola connection could end up throwing a great deal of light on Heidegger's interests and associations. No matter what the outcome, Vasek's discovery is the beginning of an important academic research project. Are there other references to Evola in the Heidegger papers? Did Heidegger read other works by Evola? Did he annotate Evola's books? Did Heidegger correspond with or meet Evola? (Both thinkers visited one another's homelands.)

I have long wondered if more mainstream thinkers of the Right like Heidegger and Carl Schmitt were aware of the Traditionalist school of Evola and René Guénon. This suspicion was based less on shared doctrines than on shared concerns.
A philosopher’s concerns are, in effect, the questions he is trying to answer; his doctrines are his attempts to answer them. Heidegger and Schmitt shared their Right-wing politics and critical eye on modernity with Evola and Guénon. That alone was sufficient reason to read them, even if they arrived at very different conclusions. Thus I was pleased to learn from Mircea Eliade’s Portugal Journal that Schmitt said, “the most interesting man alive today is René Guénon” and that Eliade agreed, although his conviction sometimes wavered. (Eliade also met Evola, corresponded with him, and read his works.) And now we have positive evidence that Heidegger read Evola. I am skeptical, however, of Vasek’s assertion that Heidegger was influenced by Evola from the mid-1930s on, specifically on such matters as science and technology, anti-humanism, the rejection of Christianity, and race and anti-Semitism. For one thing, Heidegger had rejected Christianity long before the 1930s. I eagerly anticipate Vasek’s “textual comparisons,” but my fear is that they will be superficial. For although both Heidegger and Evola shared a generally Right-wing political outlook and believed that technological modernity was the culmination of a long process of decline going back to antiquity, their ultimate philosophical premises were very different. Evola’s “world of Being” is essentially a Platonic realm of eternal, intelligible truth that stands in opposition to the “world of contingency,” which is intelligible only insofar as it reflects the world of Being. By contrast, Heidegger’s concept of “Beyng” (a rendering of his use of Seyn, the archaic spelling of the German Sein) refers to his concept of “Ereignis,” which is actually an unintelligible contingency that establishes different reigning interpretations of man and world. Beyng is a source of historical meaning that can neither be understood nor controlled. Evola believed that history’s downward trajectory toward technological modernity and cultural decadence was a falling away from the world of Being into the world of contingency. Heidegger, however, regarded Evola’s essentially Platonic outlook as part of the decline itself, indeed as standing very close to its beginning. For Heidegger, the Platonic distillation of Being as pure intelligibility and intellect as the capacity to intuit the intelligible is false because it is an abstraction that overlooks a more fundamental unity, a mutual belonging of historical man and meaningful worlds. For Heidegger, we are too close to things and to ourselves, too involved in them, to fully understand or control them. He believes that metaphysics posits both intelligible Being and a self-transparent intellect out of a drive for mastery. Thus the will to power that comes to fruition in global technological civilization is present at the very beginning of the metaphysical tradition. Heidegger claims that we overlooked this fundamental unity because it, in effect, concealed itself. It is a historical event that cannot exist apart from man but nevertheless was not controlled by man either. The self-concealment of Beyng creates metaphysics. And metaphysics inaugurates the downward course of history, culminating in technological nihilism. Contra Evola, the beginning of decline is not a fall from metaphysics, but a fall into metaphysics. 
In terms of the topic of Heidegger’s unpublished note, namely race, Evola’s objection to biological racism is that it is insufficiently metaphysical, overlooking “races of the soul” and “races of the spirit.” Heidegger, however, had a very different objection to biological race. Throughout his philosophical career, Heidegger battled against false concepts of human nature. The common denominator of these false concepts is that they are universal. In the metaphysical tradition, the essence of man is what all men have in common. What reason tells us we all have in common is reason itself. Man is the rational animal. The rational animal is not, however, a national animal. Because reason is one, humanity is one, so the human community should be one as well. Thus more particular attachments are illegitimate. If man is the rational animal, and reason grasps the universal, then reason is in effect a “view from nowhere” which can take us anywhere. The view from nowhere makes us citizens of everywhere. The rational animal is a citizen of the world; the cosmos is our polis; we have wings not roots. Heidegger’s word for human nature, however, is Dasein, which means “being here/there.” Dasein is not a view from nowhere, but a view from somewhere. Dasein’s outlook on the world is particular, not universal. It is particularized by space and time, and particularized by language and culture, which it shares with other Dasein in its community—but not with all of humanity. Heidegger is a philosopher of distinct identities, of the concrete, of the local, and of belonging, which is a mutual relationship: We belong to our world, and our world belongs to us. (The name of this concrete mutual belongingness is Ereignis.) Heidegger’s concept of Dasein is inherently political. We are not the rational animal but the national animal, and nation is defined by a common history, language, culture, and destiny. The politics of Dasein is, therefore, ethnonationalism. Why not racial nationalism? Heidegger would not deny that race is part of ethnic identity. To be German one must be white. But there is more to being German than being white. Heidegger feared that defining identity in terms of biological race alone was another form of deracinating universalism. Not as universal and deracinating as “humanity,” but with similar consequences. For if whiteness is essential, then it is easy to become indifferent to Germanness and Englishness, which is the road to deracination and homogenization. But for Heidegger this is a form of inauthenticity, a failure to own up to our full identity and carry forward the cultural and linguistic particularities of our heritage. This is very different from Evola’s metaphysical critique of biological racism. For Evola, biological race is too concrete and insufficiently metaphysical. For Heidegger, biological race is too abstract and metaphysical. Nevertheless, despite the deep and fundamental rift between Heidegger’s and Evola’s views, Heidegger still chose to copy down Evola’s words. That means something. The fact that Heidegger also changed “Sein” to “Seyn” was probably no slip of the pen. It means something too. Was Heidegger endorsing Evola’s basic schema that the vitality of a people derives from its contact with a power that transcends its understanding and control? By changing Sein to Seyn, was he transposing Evola’s metaphysical version of this theme into his own anti-metaphysical key? 
Translated into Heideggerian terms, Evola’s schema is that a people, defined not in spiritual but cultural and historical terms, loses its vitality by turning away, not from Platonic ultimate reality, but from its participation in historically evolved practices of meaning—its traditions—and turning toward, not the world of contingency, but the modern mania for certainty and control, expressed in the racial sphere as eugenics and other public health measures. This is certainly consistent with Heidegger’s discussions of race from the 1930s, including remarks from before the publication of the German translation of Evola’s Revolt Against the Modern World. Some take the view that Heidegger’s philosophy was essentially apolitical. Then he blundered into his unfortunate dalliance with National Socialism. Then he returned to an apolitical outlook. This is false. Heidegger’s thought is ethnonationalist to the core. This implies that it never ceased being so. Thus the later Heidegger’s politics simply went underground. It became hidden, occult, esoteric. Which means that Vasek may be on to something when he writes, “perhaps one could view the late Heidegger as a radical fascist esotericist who hoped that rule by a spiritual elite would bring about the reappearance of the gods.” My view is that Heidegger’s disillusionment with National Socialism, which began in the middle of the 1930s, led him to search for a way of defining a post-totalitarian, ethnonationalist critique of globalizing, homogenizing modernity. In short, Heidegger was the first thinker of the New Right. If you want to support Counter-Currents, please send us a donation by going to our Entropy page and selecting “send paid chat.” Entropy allows you to donate any amount from $3 and up. All comments will be read and discussed in the next episode of Counter-Currents Radio, which airs every Friday. Don’t forget to sign up for the twice-monthly email Counter-Currents Newsletter for exclusive content, offers, and news. Notes Greg Johnson, “Mircea Eliade, Carl Schmitt, and René Guénon,” Counter-Currents, July 15, 2013. Michael Bell, “Julius Evola’s Concept of Race: A Racism of Three Degrees,” Counter-Currents, February 6, 2011. See, for instance, Martin Heidegger, Being and Truth [1933–34], trans. Gregory Fried and Richard Polt (Bloomington: Indiana University Press, 2010), p. 138; Mindfulness [1939–1939], trans. Parvis Emad and Thomas Kalary (London: Bloomsbury, 2016), pp. 241–42; Ponderings II–VI: Black Notebooks, 1931–1938, trans. Richard Rojcewicz (Bloomington: Indiana University Press, 2016), p. 266; and especially Ponderings XII–XV: Black Notebooks, 1939–1941, trans. Richard Rojcewicz (Bloomington: Indiana University Press, 2016), p. 44.
https://counter-currents.com/2016/02/notes-on-heidegger-and-evola/
The dreamlike compositions of Lebanese photographer Lara Zankoul are like contemporary explorations of the charm and mystery of the human psyche... Whimsical and playful, the 26-year-old's imagination represents an attempt to invent new worlds, to push against the boundaries of our reality and escape everyday life. In her upcoming second solo exhibition at the Ayyam Gallery Beirut entitled The Unseen, Zankoul presents elaborately composed real-size photographs; two visions within one viewing experience – commenting not only on the surface of things, but also on what may lie hidden beneath. The twelve photographs each display that which is known, and hint at that which is slowly revealed to be the truth through a distorted perspective on the natural order, space and time. Each photograph uses a water tank to divide the surface above water and the space below, with the state of truth to be found just under the surface. The Unseen runs February 10 to March 30; find out more on the Ayyam Gallery website.
https://www.buro247.me/culture/arts/theunseen-lara-zankoul.html
Moving Environments: Affect, Emotion, Ecology, and Film brings together a distinguished group of ecocinema scholars to explore the role of affect and emotion in films that regard natural environments and non-human actors. The essays in Moving Environments, the third volume on ecocinema in Wilfrid Laurier's Environmental Humanities series, ask how environments move us: "How do [films] affect our relationship to the human and more-than-human world and what can we say about their affective or 'passionate' politics?" Motivated by these questions, in 2011 the Rachel Carson Center for the Environment and Society hosted an ecocritical film studies workshop centred on affect, emotion, and non-human nature. The essays collected in the volume are the result of that workshop and reflect the dialogic nature of its origin, as authors refer to other essays in the collection that both expand and complicate their individual positions. The conversation that began at the Rachel Carson Center continued into the publication process, culminating in a volume that functions as a model of collaborative, if not co-authored, scholarship. In addition, this volume dedicates a quarter of its content to the "effects and affects" of animated films. In its consideration of a form and genre in which landscape is usually no more than the backdrop for characters and the plot's progression, the essays draw attention to the capacity of animation to animate non-living things or to differently imagine what constitutes life. Thus, together the essays in this section suggest that animation may present a unique opportunity for revealing the politics imbued in the land itself, a salient concern suggested by many of the essays in the collection. However, a majority of the articles read the non-human solely through animals, which raises questions about how affect and emotion can be registered in the biotic (especially in light of recent research in biosemiotics). As an exception, by contrasting a land-ethic approach, based on Aldo Leopold's famous formulation, with an "organismic ecology," Robin L. Murray and Joseph K. Heumann critique both methods for their lack of emotional force. Instead, the authors find that the organismic mode, when combined with close attention to the animal, can help avoid the pitfalls of an animal rights perspective by emphasizing animal welfare's rhetorical and affective purchase. The authors' conclusion regards the animal as an effective (and affective) mediator between the human and the larger biotic community. Importantly, this volume delivers exactly what it promises—provocation to guide further scholarship. As an example, while most essays criticize western epistemologies as the structuring model for the global neo-liberal project, almost all the essays focus their readings on western films (broadly conceived to include avant-garde). However, Belinda Smaill takes pains to recognize the ethnocentrism of recent documentaries and Salma Monani discusses ecoactivist films screened at the 2011 "Mother Earth in Crisis" Native American Film and Video Festival. Some further questions raised by the thoughtful scholarship in this volume include, what can non-western film traditions offer to western scholars, activists, and politics? How do scholars in predominantly western academic settings discuss non-western films without appropriating, misinterpreting, or altering those voices? 
More broadly, what would it look like to read the biotic as characters with agency, animated in relational networks with the human and non-human animals?
https://muse.jhu.edu/article/632798
We are seeking a passionate and collaborative teacher to lead our expanding STEM program and to teach Middle School STEM for Fall 2022-23. Essential duties include: (1) oversee the STEM curriculum for K-8; (2) support science teachers in their planning; (3) teach science labs to K-5; (4) plan and prepare materials and deliver classroom instruction with an eye toward differentiated and progressive practices, with ongoing student assessment and feedback. Successful candidates will possess a growth mindset, self-motivation, curiosity and openness to new ideas, excellent oral and written communication skills, and comfort in a school setting where children’s learning and experiences are central to educational decisions. Key Responsibilities ● Oversee the STEM curriculum from K-8 ● Coach science teachers throughout the school ● Teach middle school science, Grades 6-8 ● Design and deliver curriculum according to Common Core and NGSS ● Collaborate with a team of teachers ● Positively manage classroom routines and behaviors ● Communicate effectively with families ● Develop students’ academic and social-emotional competencies Qualifications ● Master's degree ● Training or experience in Special Education preferred ● Minimum 5 years of teaching experience ● Strong pedagogical skills ● Strong interpersonal skills ● Innovative and dynamic educator At Luria Academy of Brooklyn (www.luria-academy.org), we are inspiring a new generation of leaders, creators, thinkers and engaged citizens. Our two campuses serve 300 students in preschool through 8th grade. Luria offers a sophisticated Jewish day school education in a progressive environment. Our students come from a wide range of religious and economic backgrounds. At Luria, students are encouraged to be curious, to embrace one another’s differences and to engage in respectful dialogue. At Luria, we focus on the whole child; as such, our educators develop learning experiences that support and challenge each student. We track student progress using standards-based assessment and grading practices.
https://www.lookstein.org/bulletin-boards/opportunities/science-instructional-leader-middle-school-science-teacher/
Food: Cocktail: “Ginger”. -house-made ginger ale with rice shochu and lime. Monte: The cocktail was light and refreshing, which was pleasant since it was hot outside at the time. Amuse: Chrysanthemum tea. Monte: Really light and floral, there might be something more poetic about it but it went over my head. 1. Chilled Summer Vegetables in Dashi Broth. -includes a zucchini blossom stuffed with a dumpling of scallop and crab. Monte: Again, it was light and freshly floral. Cooling and soothing. 2. Golden Crab Egg Custard “Chawan-mushi” with Truffle Ankake Sauce. -sauce of golden crab with black truffle pâté. -finished with a few Australian black winter truffle slices. Monte: After the first chrysanthemum tea and chilled vegetables, this truffled chawanmushi tasted like a heavy hitter. That’s just in contrast to the prior dishes though – after my taste buds adjusted, I found this dish to be very well-balanced and I liked the sweet crab meat. 3. Chef’s Sashimi Selection: -Back left: Seared toro. -Back right: Toro. -Middle: Ebi (shrimp). -Front left: Shima aji (striped jack). -Front right: Tai (snapper). -Bottom: Wasabi and a chili-and-daikon mix. Monte: The sashimi was very fresh, although nothing to go out of your way for. 4. Summer Vegetable Chirashi-zushi. Monte: aaand we go back to the gorgeous veggies. 5. (Victor) Yuzu Perfumed Akamutsu (Grilled Rosy Seabass). 5. (Monty) Maine Lobster Tail with Uni Sauce, Caviar, Yuzu Miso Cream (left), and Chrysanthemum Purée (right). Monte: This dish was pretty fun to eat and I liked the multiple sauce options. I ended up liking the yuzu-miso one more, it had this juxtaposition of deep earthiness and light-herbalness that was quite delightful. 6. Chilled Summer Corn Soup. -with hearts of palm, cape gooseberries, and a mint licorice leaf. Monte: All the flavors of a great corn chowder, without any of the heaviness. I would love to know the secret to make this at home. 7. Seared Duck Breast marinated in Green Tea. -Malanga Yam Purée. 8. (Victor) Chazuke Broth with Steamed Rice. -with crispy ume plum, hijiki seaweed, and dried daikon strips. 8. (Monty) Dungeness Crab Zousui Rice. -with fresh uni and sake-kasu broth. Monte: Crab + uni = winning 9. (Victor) Matcha green tea and wagashi. 9. (Monty) Soy sauce ice cream. Monte: Slightly salty/savory, but mostly sweet. It’s a really clever and well-balanced ice cream. I certainly would’ve never thought soy sauce would be a delicious ice cream flavor, so this was a great surprise! Post-dessert snack: -Right: Rice crackers topped with shiso powder. -Left: Rice crackers topped with green tea powder. Monte: The rice crackers were a great way to end the meal and just munch while reflecting on the great meal. They were almost gummy once you bit into it, which was a fun surprise. This was only my second kaiseki meal, so I don’t have as much of a reference as to how it compares to other experiences, but I personally thought it was really good.
https://happynoms.com/2012/12/01/nyc-kaiseki-brushstroke-aug-2012/
The Library Collections Curator advances the acquisition, documentation, care and access to the published collections held by VMHC. This position also works with the curatorial team to develop in-house and traveling exhibitions that engage diverse audiences in dialogue and the exploration of relevant issues and ideas, and that help VMHC realize its mission and business goals. Collection Responsibilities Build VMHC’s library collections through the identification, appraisal, and acquisition of appropriate items including books, rare books, serials, imprints, sheet music, broadsides and other published materials Promote and provide access to the library collections through technology and related services for the collections; work closely with staff librarians and archivists to research and document collections Support the physical management, care and preservation of the library collections Serve as an institutional authority and spokesperson for VMHC with the general public and media on specific aspects of Virginia’s history and material culture; engage in public outreach in the form of lectures, tours, and research assistance Exhibition Responsibilities Research and develop ideas for exhibitions and programs that meet VMHC’s mission, community engagement and business goals, and leverage the collections Provide content expertise as part of a core team that leads the collaborative development and implementation of exhibition projects Work within a cross-functional team to oversee the care, upkeep, and refreshment of core gallery experiences Collaborate with Guest Engagement Division staff to develop plans for exhibition-related programming, activities, and evaluation Project Management and Administration Lead and serve on cross-functional teams and committees Participate in VMHC institutional priority setting, and strategic and annual business planning Develop and steward collection donor relationships in partnership with VMHC staff and trustees Contribute to the development, writing, implementation and reporting of grants; participate in the development of corporate and other partnership proposals Contribute to the development and writing of articles, public relations and external communications materials as needed Keep abreast of developments in the fields of Virginia history, exhibitions, museums and library science; stay current with professional best practices Knowledge, Skills & Abilities Necessary: Knowledge of United States and Virginia history and material culture Knowledge of principles and best practices of library science Broad knowledge of current trends in curatorship, exhibition development and community engagement Understanding of museum ethics and standards for collections management Exceptional communication skills and experience with the general public, donor groups, press, external and internal stakeholders Education & Experience Requirements: A combination of experience and education that demonstrates possession of the necessary knowledge and abilities for this position is required as noted: Master’s degree in a relevant field such as history, library science, museum studies, or equivalent work experience required Demonstrated experience with collections research, development and documentation associated with published materials Three years of demonstrated experience developing exhibitions with broad community relevance and appeal Note & Special Requirements: Please include a two to three-page writing sample (e.g., exhibit script, grant application, article, blogpost) 
with your application. If interested in this job opportunity, please apply and upload your resume to https://www.virginiahistory.org/contact-us/jobs-and-volunteering. The Virginia Museum of History & Culture is owned and operated by the Virginia Historical Society — a private, non-profit organization established in 1831. The historical society is the oldest cultural organization in Virginia, and one of the oldest and most distinguished history organizations in the nation. For use in its state history museum and its renowned research library, the historical society cares for a collection of nearly nine million items representing the ever-evolving story of Virginia. The Virginia Historical Society is an Equal Opportunity Employer.
https://www.jobs.art/posts/library-collections-curator-virginia-museum-of-history-culture
If you are using a task chair or an executive chair in your office, then it is important that you adjust its seat depth properly. This will help you sit comfortably on it and avoid any discomfort or strain while working. A good rule of thumb is to ensure that your knees are at 90 degrees when you are seated. If they do not touch the front of your desk or table, then your office chair seat is too deep for you and needs to be adjusted. This article is a complete guide to office chair seat depth adjustment and explains in detail how to adjust your office chair's seat depth. What is the best seat depth for an office chair? Usually, 17-20 inches is the standard. However, the best office chair seat depth depends on your body type and posture. A shorter person will need a shorter seat depth, while a tall person will need a longer one. A good rule of thumb is to ensure that your knees are at 90 degrees when you are seated. If they do not touch the front of your desk or table, your office chair seat is too deep for you and needs to be adjusted. Many people have different preferences regarding the length of their office chair seats. Some like their legs fully extended, while others prefer their feet to rest flat on the floor, even when seated. Many factors contribute to this preference, including height and body type (e.g., short legs versus long legs). In addition, some people enjoy having their feet flat on the floor as it provides them with more stability than if they were resting their feet on top of an adjustable pedal base. Whatever your preference may be, you must find an office chair that fits your budget and your specific needs to remain comfortable throughout your workday. What To Check Before Adjusting Seat Depth? The seat depth adjustment is one of the most critical adjustments on your office chair. It affects your posture, comfort, and overall health. The ideal seat depth is based on your height and weight. If you're tall, a deeper seat will give you more support. If you're short, a shallower seat will provide better mobility. If you have back problems or trouble moving around in an office chair, you should consult a professional before adjusting your seat depth. Improper adjustment can cause more pain than it relieves. Before adjusting the seat depth on your office chair, check for these things: Seat Pan Height The first thing to check is how high or low the seat pan is on your chair. This will make a difference in how far you need to move the chair back to make room for your legs while at the same time keeping your shoulders well-supported. Distance From The Back Of The Chair The next thing to look at is how far away from you the back of the chair is. You may have trouble getting comfortable if it's too close. The ideal distance between your knees and torso is about an inch or two less than shoulder width. If you can't find a good position with this distance, consider adjusting it so that it's just slightly closer than shoulder width, allowing for better support of your upper body without restricting movement too much. Armrest Height And Angle Adjust or replace any armrests on your office chair so they're adequately positioned for you. Some people prefer resting their elbows on armrests, while others prefer to keep their arms off them altogether. Adjusting them will allow for more comfort and support during long working periods at a desk by giving your upper body a break from the keyboard and mouse. 
Seatback Angle If sitting in an office chair with built-in lumbar support, ensure it's adjusted correctly. Many chairs have adjustable lumbar support mechanisms that are easy to use and have little knobs that can be turned with your fingers. If you don't have one, look into getting a lumbar cushion that fits under your clothing and provides additional support where it's needed most. Check The Lever Under The Office Chair The lever to adjust the depth of a seat is located under the chair. To get access, you will need to flip up the seat cushion—this may be done either manually or automated by pressing the button on the side of the chair. Adjust Through Metal Arms Of Your Office Chair The mechanism comprises one or more metal arms and pulleys connected and mounted underneath your desk chair. A handle attached to this assembly allows you to pull it back and forth, which adjusts how far away from your body your seat will be when you sit in it. NOTE: It's important for good posture and to prevent discomfort or injury over time (especially if you spend long hours sitting at work). How to Adjust Office Chair Seat Depth? - Let's Find Out There are a few different ways to adjust the seat depth. Control Lever One method is using a control lever below the seat and reaching under your chair to pull it up or down. This lever will usually be on either side or the back of your office chair. Adjustment Knob Another way is through an adjustment knob on top of your chair, allowing you to slide it back and forth without reaching under it easily. You may find that some chairs have both types of controls for easier access, but if yours doesn't, no worries! Check where yours is located when setting up shop for work each day! Mechanism To adjust the depth of your seat, you will need to find the mechanism underneath it. This will be located on either side of the chair between the backrest and seat cushion. To raise or lower your chair's seat depth, push forward or backward on this control lever until you reach your desired position. You may need to readjust this lever several times as you get used to sitting at different levels to find a comfortable seating position. Why Is Seat Adjustment is Necessary? The seat is adjustable to find the most comfortable position for your body size and shape. The seat height is adjustable, so you can change it to suit your preference. Seat adjustment is also important if you have bad back or spinal problems. If You Are Tall Or Short If you are tall or short, you might need help getting into a good sitting posture when using a chair with an adjustable seat height. A low-back chair with an adjustable seat height allows you to sit in a reclined position and maintain good posture. Base Of The Chair Can Adjust Well In addition, an adjustable seat depth means that the base of the chair can be adjusted by sliding forward or backward. This allows a better fit for different users and between the chair and desk. Office Chair Seat Depth Adjustment for Minimal Pressure at the Knees Office workers spend a lot of time sitting in their office chairs. Researchers at a University found that the average office worker sits for 8-9 hours a day. Over time, this can result in back pain, neck pain, and even Carpal Tunnel Syndrome (Cts). CTS is caused when the median nerve is compressed or irritated as it travels through the wrist. How To Avoid Medical Issues Occur By Wrong Seat Postures? 
The major key to avoiding these problems is to adjust your office chair to fit your body size and shape so that you are not working with an uncomfortable posture while sitting at your desk. Best Way The best way to achieve this is by using an adjustable lumbar support pillow, which provides extra support in the lower back area and better pressure relief at the knees. This helps prevent unnecessary pressure on your lower back from prolonged sitting, which can lead to reduced blood circulation and nerve compression, which causes pain in the lower back region. Comfort And Long-Term Health Making changes to your chair is important for comfort and long-term health. Adjustments should be made based on your body type, so consult a medical professional if you have questions. If you can’t sit comfortably in the chair sometimes, it may be time to consider making some adjustments. When changing the seat position, try doing so gradually over time rather than all at once. How To Reduce Lower Back Muscle Tension? Here are some ways to avoid or reduce lower back muscle tension when using an office chair. Adjust Seat Depth One of these is adjusting the seat depth of your office chair. The seat depth refers to the distance between your thighs and knees while sitting on an office chair. A good office chair should be able to provide enough space for your body so that it can maintain good posture while sitting on it. Keep Your Feet Flat Another way to avoid lower back muscle tension is by keeping your feet flat on the floor when using an office chair. This means you should never cross your legs while using such furniture, since this will cause unnecessary strain on your lower back muscles and other body parts such as your arms and shoulders. It may lead to serious injuries if done repeatedly over time, especially if these movements are done forcefully without proper technique. How to Avoid or Reduce Low Back Muscle Tension? The best way to avoid or reduce low back muscle tension is to maintain a proper sitting posture. The first step in correcting the problem is determining if you are sitting too far back in your chair or leaning forward to compensate for a short seat depth. The best way to do this is to sit in a chair with a longer seat depth and see if your posture improves. If it does, the problem was caused by your current chair's seat depth being too short. If not, you're leaning forward to compensate for your current chair's seat depth being too short. If you have a chair with adjustable arms, move them out as far as possible towards the front of the chair. If they are not adjustable, these tips may help: - Spread your legs out as far as possible on the floor until they feel comfortable and stable, without leaning forward to maintain balance or supporting your weight on the balls of your feet (or heels). The seat depth on an office chair is the distance between where your butt rests and the back of the chair. You will need to locate the adjustment lever on the front of the chair to adjust it. It's usually located underneath where your knees rest, but in some models, it can be found at the very bottom of the seat. You can also adjust this lever with your fingers if you know how to do so. - Place one foot flat on the floor while keeping your other foot on its toe so that it's raised off of its heel slightly (about 2 inches). Then place that foot flat on the floor just behind where it was before (on its toe again). Repeat this process until you get used to it. FAQs Q. 
What is meant by a "pneumatic lift"? A pneumatic lift is a device used to adjust the height of an office chair. It's usually found underneath the chair's base, where you can find a lever attached to a cylinder filled with air. This cylinder is attached to another lever which helps raise or lower the chair when pulled up or pushed down. Q. Will my chair always have a backrest? Some chairs are designed with no backrest, while others have one that you can adjust up or down depending on your comfort level. Some chairs have a fully adjustable headrest positioned along the backrest's length. Q. How do I know which type of seat would be best for me? It depends on what kind of work you're doing and how long you sit at your desk each day. If you are typing for most of the day, then an ergonomic office chair with built-in lumbar support would be best for your health and posture. If you spend more time talking on the phone or visiting co-workers, consider getting an executive chair with armrests so that you don't have to support yourself while reclining in your seat. Q. Should I get an over-sized office chair if I'm tall and overweight? “If you're tall and overweight, an over-sized office chair may be a good choice for helping you avoid back pain,” says Dr. Robert Chen. Which tool is made for seat adjustment? A seat slider is a tool used to adjust the depth of the seat. The slider has two components, the knob, and the actual slider. The knob can be turned to raise or lower the seat's height. For example, if you are tall and need a higher chair, you would turn the knob clockwise until you reach your desired height. To lower your seat, turn the knob counterclockwise until you get to your desired position. This tool is straightforward but effective in adjusting your chair's height. Why do I need to adjust the depth of my office chair seat? The depth of a chair should be adjusted so that it supports your body weight evenly across both buttocks and thighs. If this isn’t possible with standard height adjustments, then it’s time to look at adjustable seat depth options. How do I measure office chair seat depth? To measure seat depth, sit in your office chair with your feet flat on the floor and knees bent at 90 degrees (or close). Then place a ruler between your legs and measure up to where it touches your backside. Which Factors Based On Office Chair Adjustment? Adjusting the depth of a chair seat should be done based on the following factors: - Height - Weight - Posture What is the standard depth for an office chair? The standard depth for an office chair is 18 inches (46 cm). If you are shorter than 5’6″, you may need a shorter seat depth. If you have a long torso or broad shoulders, you may need a deeper seat depth. Is Choosing Supportive Material For Your Seat May help? If you want more support for your lower back, choose a chair with a seat pan made of foam or gel-infused mesh fabric. Try mesh fabric or leather upholstery if you want something that breathes better but still has some cushioning. Material is one of the most important factors when looking at office chairs. You will find that many different types of material are available, and each has its own set of pros and cons. Cushioning Material: The cushions on your seat make the chair comfortable for you to sit on for long periods. There are a few different types of cushioning material that you can choose from: Foam: Foam is usually used in low-end office chairs because it is cheap and easy to manufacture. 
It also tends to be too soft and doesn't provide much support for your back or hips. Plastic: Plastic cushions are less common than foam but provide more support than foam. They work well for those who want something between foam and leather. Leather: These seats are usually found in higher-end chairs because they provide excellent lumbar support and help keep your bottom cool during hot days. Final Words The seat depth adjustment is one of the most important features of a chair. It allows you to customize your seating experience for comfort and health. The correct positioning of your body will help alleviate back pain, prevent injury and increase productivity at work. If you're unsure how to adjust your chair's seat depth, consult a medical professional. If you have questions about adjusting your office chair at home, consult an expert before making any changes.
https://beastoffice.com/office-chair-seat-depth-adjustment/
Published in the Bay magazine on November 27, 2018. African-inspired drumming educates and inspires in Newport County Connecting the Beats, an initiative from the Middletown-based concert destination Common Fence Music, has been working to educate local youth on African-inspired drumming for 10 years. Led by Tom Perrotti, the program brings teachers and performers of West African and Caribbean dance and drumming traditions to Newport County schools for educational workshops and performances. To give the public a taste of their mission, Connecting the Beats invited frequent collaborator the Lafia Ensemble to perform at the Broadway Street Fair. On one sunsoaked Saturday afternoon in October, the Lafia Ensemble – brainchild of Malian Master Drummer Issa Coulibaly, dancer and troupe leader Tara Murphy, Chris Keniley, and Matt Maloney – drew crowds and pulled them onto the stage.
https://isabelladeleo.com/2018/12/02/the-bay-magazine-connecting-the-beats/
UNIVERSITY PARK, Pa. — Because of a new narrative of stewardship, Pennsylvania farmers in the Chesapeake Bay watershed will be persuaded to look at conservation, not as something they have to do but rather something they want to do. That’s one of the key conclusions in a just-released report from a conference that brought together farmers, representatives of farm and environmental groups, and local, state and federal government officials to find new collaborative strategies for reducing excess nutrients from agriculture flowing into the Chesapeake Bay. Held in Hershey last March and called “Pennsylvania in the Balance,” the conference was organized by Penn State’s College of Agricultural Sciences. “Before this conference — which included 120 dedicated people representing the many perspectives needed to meet the challenge — there was a lot of concern about agriculture’s role in water quality initiatives in Pennsylvania,” said Matt Royer, director of the college’s Agriculture and Environment Center and coordinator of the conference. “But now it is clear that farmers have a real opportunity to play a key role moving forward. Seat at the table The conference has positioned agriculture to have an important seat at the table, to take a proactive role in finding a solution to the excess nutrient problem plaguing Pennsylvania’s rivers and streams and the Chesapeake Bay. That is a big change.” The new and exciting thing coming out of the conference, Royer added, is that there is universal support for having “champion farmers” lead other farmers to transform the entire community from perceiving conservation as “need to” to “want to.” Royer characterized conference attendees as leaders in agriculture and environmental protection working together to identify new, innovative solutions that can help ensure the state maintains a vibrant and productive agriculture industry while meeting water-quality goals for the commonwealth’s rivers and streams and the Chesapeake Bay. The conference report attempts to capture their many creative thoughts and innovative ideas on how Pennsylvania agriculture can help meet clean-water goals, Royer said. Other partners In addition to Penn State, the conference was sponsored by the National Fish and Wildlife Foundation, federal and state government agencies, nonprofit groups, agricultural organizations, and private-sector businesses. According to Royer, the four initiatives identified in the conference report describe areas in which progress can be made in the near future. They include the following: Increase technical capacity through training opportunities. These enhancements will complement existing USDA Natural Resources Conservation Service and state training programs to build the technical network of conservation professionals necessary to meet increased farmer demand for developing manure management plans and implementing their associated conservation practices. Partners will explore the development of training offerings to fill identified gaps and streamline training for interested professionals, as well as students within existing course offerings and degree and/or certificate programs. Farmer-to-farmer approaches and community, technical and vo-ag schooling opportunities will also be pursued. Develop and disseminate culture of stewardship through soil and stream health. The conference embraced agriculture and its ingrained culture of stewardship, which constitutes the overarching theme infusing the entire partnership’s work moving forward. 
This statewide education and outreach initiative will seek to involve producers, conservation technicians, Extension educators, nonprofits, and the ag industry. It will build off of successful farmer-led efforts and agency initiatives that promote water quality-based conservation practices in the broader context of maintaining soil health and economic profitability. Develop new and creative incentives to encourage conservation. An agricultural certification program will recognize and reward producers who have reached a high bar of conservation. Recognition-based, certainty-based and market-based incentives will all be explored to encourage producers to pursue certification. According to the report, farmers appreciate being recognized and rewarded for reaching high conservation standards within the industry. Recognition, perhaps paired with incentives, can also motivate peers to raise their conservation bar. Develop and deploy delivery mechanisms. Conference attendees emphasized the importance of focusing efforts in priority watersheds, where nutrient loads are high, local impairments exist, and local efforts are underway and can be built upon. To succeed in this prioritization effort, delivery mechanisms need to be developed and supported, including technical assistance in developing watershed plans that identify the right practices to be implemented in the right places, Royer noted. “Pennsylvania success stories are almost always locally led — this initiative seeks to transform local success stories from pilot programs to standard operating procedure for achieving water quality goals in the commonwealth.”
https://www.farmanddairy.com/news/pa-farmers-play-key-role-in-water-quality-goals/409897.html
Cefastar is a broad-spectrum antibiotic of the cephalosporin type, effective in Gram-positive and Gram-negative bacterial infections. It is a bactericidal antibiotic. Cefastar is a first-generation cephalosporin antibacterial drug that is the para-hydroxy derivative of cefalexin, and is used similarly in the treatment of mild to moderate susceptible infections such as those caused by the bacterium Streptococcus pyogenes (the disease popularly called strep throat or streptococcal tonsillitis), as well as urinary tract infections, reproductive tract infections, and skin infections. Other names for this medication: Cefadroxil, Duricef, Cedrox, Cefastar, Paxyl. Similar Products: Sarilen, Metropast, Anticol, Fladystin, Lorzaar, Salix, Emimycin, Anten, Atacor, Megapress, Orva, Nebicard, Quimoral, Carisoprodol, Efonidipine, Trisul, Albiz, Nodolex, Rulide, Zelitrex, Zumo, Lamez, Glita, Pegetron, Adovia, Finalo, Amoxycillin, Viroclear
https://lipitor2020.site/cefastar_tablets.html
The development of higher education and the advancement of the educational institution require laying the foundations of improvement and modernization and providing the elements of creativity and innovation. The profound and radical changes that have affected society in all fields, and the need to link university education to the daily needs of citizens, require reconsidering the functions of universities and how to provide graduates suited to the labor market who are capable of free thinking, constructive criticism, logical analysis, and creative imagination, in an era in which economic competition rests on the ability to turn human knowledge into production and to reach advanced fields of science. This in turn requires developing human skills and improving cadres and capabilities so that they can handle the output of this era and adapt to its consequences. Emphasis is therefore placed on the development of university performance and performance indicators, and on a system of university accreditation, in order to ensure the quality and continuous development of university systems. There is no doubt that planning requires realizing change in university education at the global level in light of the following topics: students, curriculum, the objectives of educational programs, faculty, education output, continuous development, facilities and services, and institutional support. Accordingly, divisions of quality and university performance have been established in Iraqi universities since 2008. The most important tasks of the Division of Quality and University Performance are: - Spreading a culture of quality concepts and performance evaluation in higher education and their role in serving the individual and society, and holding periodic seminars and meetings to raise awareness of the culture of performance evaluation and quality accreditation and to explain its importance with a view to reaching scientific quality standards. - Preparing evaluation files: - The Arab Universities Union file. - The university performance evaluation files. - Evaluation files for the performance of senior university leaders. - Evaluation files for the performance of faculty members. - Evaluation files for the performance of university staff (employees). - The self-assessment report (SSR) file according to ABET criteria. - The good laboratory practice (GLP) accreditation file. - Preparing the annual UNESCO self-assessment report for the department and identifying points of strength and weakness. - Preparing and analyzing questionnaire forms for evaluation of the educational process by students (performance of professors, courses, laboratories and academic facilities, academic advising), the questionnaire for employers' assessment of graduates, and the exam questions questionnaire. - Evaluation of the educational process by graduates through questionnaire forms. - Preparing the quarterly bulletin (brochure) of the scientific and cultural activities of the department and its students.
https://che.uotechnology.edu.iq/index.php/bonus-pages/121-quality/313-313-quality-division
Approximately 50% of all savants have autism, while only about five to ten percent of individuals with autism possess extraordinary savant skills. What is Autism? Autism is a developmental disorder. It is neurobiological in nature, affecting the brain in areas of language/communication, social skills, sensory systems and behaviour. Autism impairs a person’s ability to communicate and relate to others. It is also associated with rigid routines and repetitive behaviours. Autism is a spectrum disorder, which means that symptoms can range from very mild to quite severe. Every individual with autism is one of a kind, so while they will have commonalities with others with autism, the way they are impacted will be unique to each individual. Did you know? - One in 68 children have been identified with autism spectrum disorder (ASD) according to estimates from CDC’s Autism and Developmental Disabilities Monitoring (ADDM) Network - ASD is almost 5 times more common among boys (1 in 42) than among girls (1 in 189) - ASD is reported to occur in all racial, ethnic, and socioeconomic groups. - Studies in Asia, Europe, and North America have identified individuals with ASD with an average prevalence of about 1%. A study in South Korea reported a prevalence of 2.6%. (Statistics from Centers for Disease Control and Prevention (CDC): http://www.cdc.gov/ncbddd/autism/data.html, April 10, 2014 ) How is Autism diagnosed? Presently, there is no medical test for autism spectrum disorders. The diagnosis is based on observed behaviour, and educational and psychological testing. Therefore, there is no blood test, brain scan, or other high-tech test that can be used to diagnose it. The diagnosis relies on the judgment of an experienced doctor or team of specialists and is based on observation of the child’s behaviour, educational and psychological testing, and parent reporting. Usually the team members evaluate the child, assessing his or her strengths and weaknesses, and then explain the test results to parents. The diagnostic criteria for ASD was recently revised in May 2013 in the Fifth Edition of the Diagnostic and Statistical Manual (DSM-V) produced by the American Psychiatric Association (APA). The edition immediately preceding it, the DSM-IV, had ASD or Autistic Disorder as one of five disorders under the umbrella of Pervasive Developmental Disorders (PDD): Autistic Disorder, Asperger’s Disorder, Childhood Disintegrative Disorder (CDD), Rett’s Disorder, and Pervasive Development Disorder-Not Otherwise Specified (PDD-NOS). Each of these disorders had specific diagnostic criteria but shared the primary symptoms of deficits in social communication, social interaction and rigid, stereotypical behaviors. When is Autism usually diagnosed? Autism can be diagnosed at any age. If children are showing signs of autism, it usually becomes evident before age three. Some parents report that they noticed differences in their children from birth; others became concerned when their young child was not hitting their developmental milestones or walking and talking. In some other cases parents report that their child was developing normally and then began to lose skills they already had. The child’s symptoms can be quite mild and only begin to struggle when the social demands of school increase. 
Sometimes children may be identified as having a developmental delay, such as a speech delay, before obtaining a diagnosis of autism or may have received a diagnosis of something else such as attention deficit disorder or non-verbal learning disability. These children are usually diagnosed with Asperger’s Syndrome and many are not diagnosed until after the age of seven or later. What are the first signs of Autism? Autism Speaks (www.autismspeaks.org/whatisit/learnsigns.php), a US-based non-profit organization, has developed the ASD Video Glossary (http://www.autismspeaks.org/what-autism/video-glossary), an innovative web-based tool designed to help parents, primary caregivers and professionals learn more about the early red flags and diagnostic features of autism spectrum disorders (ASD). This glossary contains over a hundred video clips and is available free of charge to anyone to help them see the subtle differences between typical and delayed development in young children and spot the early red flags for ASD. All of the children featured in the ASD Video Glossary as having red flags for ASD are, in fact, diagnosed with ASD. In clinical terms, there are a few “absolute indicators,” often referred to as “red flags,” that indicate that a child should be evaluated. For a parent, these are the red flags that indicate their child should be screened to ensure that he/she is on the right developmental path. - No big smiles or other warm, joyful expressions by six months or thereafter. - No back-and-forth sharing of sounds, smiles, or other facial expressions by nine months or thereafter. - No babbling by 12 months. - No back-and-forth gestures, such as pointing, showing, reaching, or waving by 12 months. - No words by 16 months. - No two-word meaningful phrases (without imitating or repeating) by 24 months. - Any loss of speech or babbling or social skills at any age. (Information from First Signs, Inc. For more information about recognizing the early signs of developmental and behavioural disorders, please visit www.firstsigns.org) Who can diagnose Autism? In Saskatchewan, diagnoses of autism spectrum disorders can be made by professionals registered with either the Saskatchewan College of Psychologists or the College of Physicians and Surgeons. The diagnoses may be made by a single professional such as a paediatrician, psychiatrist, or psychologist, or by a multidisciplinary team involving other professionals such as speech language therapists, occupational therapists, physiotherapists, and social workers. What causes Autism? Exactly what causes ASD is still unknown. Current research suggests that a predisposition to autism might be inherited. Researchers have not found a specific “autism gene” but instead a nonspecific factor, which may increase the likelihood of having cognitive impairments. Over the last five years, scientists have identified a number of rare gene changes, or mutations, associated with autism. Researchers have also found neurobiological differences in the brains of individuals with autism. The current theory is that ASD is caused by a combination of “risk genes” and environmental factors in the early brain development period. A small number of cases can be linked to genetic disorders, such as Fragile X, Tuberous Sclerosis, and Angelman’s Syndrome, as well as exposure to environmental agents such as infectious ones (maternal rubella or cytomegalovirus) or chemical ones (thalidomide or valproate) during pregnancy. 
While we don’t yet know what causes autism, we do know that it is not caused by bad parenting; rather, it is a neurobiological disorder.
https://www.autismservices.ca/resources/resources-for-families/a-comprehensive-guide-to-autism-spectrum-disorder/
In practice, researchers often do not know in advance how helpful archival research is going to be. This means that one advantage is that you may discover something unexpected which will confirm a hypothesis, change the course of your research, or save months of work, and one disadvantage is that you may discover nothing you could not have found online. The archival method of research is a gamble. This is also true of conducting research online or at your local library, but the stakes are higher, particularly if you have obtained a grant and traveled around the world to conduct research at a particular specialist archive. One of the great advantages of the archival method is that you can obtain types of material that would not be available any other way. If you are conducting high-level research on a well-known writer, for instance, there will probably be at least one archive which has hand-corrected manuscripts, first drafts, and letters describing literary progress. Sometimes you might be the first person to read these. Occasionally, archives do not even know what they have in their possession, and you might discover hitherto unknown works. Archives offer a way to break new ground as a researcher. On the other hand, the old tin trunk covered with dust may not contain anything of interest. Conversely, the archive may be meticulously organized, to the point that everything they have has already been picked over in detail by twenty other scholars. Finally, the archival method is much more worthwhile for some types of research than others. In history and biography, it may be vital, whereas in philosophy and the sciences, it will mainly be useful in providing subsidiary information. There are many advantages to the archival method. For instance, this method is generally inexpensive because although there might be a fee to access relevant research, there are many free archives as well, and the process is overall typically cheaper than collecting data oneself. Not collecting the data themselves can also save researchers a lot of time. The archival method also provides a trustworthy look at the past. This can be quite useful in comparative research. If a researcher is investigating how a topic has changed over time, they can use archival sources for proof of what it was like in the past. There are also many disadvantages to the archival method. For instance, it can be quite difficult to locate relevant materials because not all archives adhere to professional descriptive standards. It could also be difficult and time-consuming to understand archival sources that are in their original language. Many archives are also not digitalized and have to be accessed in person to be seen in full. That also makes the process time-consuming and in some cases impossible, such as during the current public health crisis. There are also downsides to the fact that the researcher using archives does not conduct the research themselves in this method. Because the researcher did not have a say in data collection, the research might not directly or completely what they are researching. One advantage of using the archival research method is that the data have already been collected; therefore, researchers do not need to go through the institutional review process to gain participants' permission to collect data. In addition, the data can be relatively easy and inexpensive to review. 
Finally, the data in archives can be very useful to answer questions in longitudinal studies, such as looking at health or development over a life span. Without archival data, the time span that researchers can look at might be limited. One disadvantage of using archival research is that the data may not directly respond to the research question, so the data may have to be re-coded to answer a new question. Also, the data may not, at times, offer the richness of other forms of data collection, such as interviews. Archival research is a qualitative method of research in which you take data collected by someone else and analyze it in order to draw your own conclusions regarding your hypotheses. This is useful for the type of research where you can access large quantities of information that has already been compiled. This is useful because it can help to reduce the amount of time and money spent on research. This type of research is helpful for hypotheses in which you could not ethically assign participants to groups; it is also good for researching trends within a population. However, a drawback of this type of research is that as a researcher you have no control over how the data was collected and what type of controls for extraneous variables were put in place. Archival research analyzes fellow researchers' studies or utilizes historical patient records. The archival method has many advantages and disadvantages. With archival research, one advantage is that the experimenter does not have to worry about erroneously introducing changes in participant behavior that would affect the outcome of the study. Moreover, the archival method is more cost-effective than other methods, because researchers can use internet databases to locate free archives. Another advantage is that archival research can be inclusive of long periods of time, thus allowing for a broader view of trends or outcomes. Conversely, archival research also has some disadvantages. The primary disadvantage is that the previous research may be unreliable, or not collected to the researcher’s standard; the researcher has no control over how the data was collected when using archived information. The data may prove to be incomplete or possibly fail to address certain key issues.
https://www.enotes.com/homework-help/what-are-the-advantages-and-disadvantages-of-332037
The pandemic has forced health professionals and patients alike to dig deeper into telemedicine – something they may have opposed not so long ago. One area of resistance centers on patients who believe they are not getting the same amount of time or attention from their doctor when communicating over the phone or computer as they would in person. However, research presented at the American College of Surgeons’ 2020 Virtual Clinical Convention last week found that this is not the case with surgical follow-up exams. The researchers learned that surgical patients who had virtual follow-up exams spent as much time with their surgeon or other team members as those who went to the office or the clinic. Study results The study included more than 400 patients who had either their appendix or gallbladder removed with minimally invasive surgery. Upon discharge, patients were given either face-to-face or telemedicine follow-up appointments. The researchers wanted to know whether patients with telemedicine appointments spent as much time with the provider as patients who went to the office or the clinic. Not all patients went to their follow-up visits – only 64%. “Sometimes patients are so well after minimally invasive surgery that about 30% of these patients do not show up for a post-operative visit,” said Dr. Caroline Reinke in a press release. Dr. Reinke is the study’s lead author and Associate Professor of Surgery at Atrium Health in Charlotte, NC. The researchers evaluated the patients who kept their appointments. They found that patients who had face-to-face appointments spent more time on the appointment (58 minutes), but much of that time was spent checking in, waiting in the waiting room, waiting again in the office, talking to the member of the surgical team, and then being discharged. Patients with virtual appointments spent only 19 minutes, including waiting for the visit to start. Despite these large differences, there was no actual difference in personal experience between the groups. Face time with the surgical team averaged 8.3 minutes for the face-to-face visits and 8.2 minutes for the telemedicine visits. “I was pleasantly surprised that the time patients spent with the member of the surgical team was the same, since one of the main problems with virtual visits is that patients feel disconnected and there isn’t that much value in it,” said Dr. Reinke. Telehealth myths Patients may be reluctant to take the telemedicine route due to some myths associated with remote health care. Here are some common telemedicine myths: It hasn’t been a way of providing health care long enough, so it’s not proven. The state of Nebraska appears to have been at the forefront of telehealth when telepsychiatry was first used there in 1959. And doctors have long used the phone to speak to patients who call with concerns, and even call or fax prescriptions to pharmacies as needed. In the 1990s, doctors began to understand how useful the Internet can be for reaching patients. If I live near my doctor / hospital / clinic, I don’t need telehealth. It is true that telehealth is a boon to people who live too far from a doctor. However, this type of health care is not limited to them. You could live just around the corner from the office, but if you have a telemedicine appointment you can stay at home or at work so your day isn’t disrupted. You could even be traveling and answering the call when you are out of town. There are many reasons why you may not want or be able to leave your home. 
The weather might be too harsh, you might be caring for someone you can’t leave alone, or you might feel too lousy to venture out. Doctors can’t properly examine me when I’m not in the office. While it’s true that your doctor can’t physically touch you during a telemedicine appointment, a lot of learning about a patient comes from careful questions. If the call is video, the doctor can view you and make assessments along with the questions. Another option is to email photos of a rash or unusual spot on your skin, for example, and then speak to the doctor over the phone. If, after a telemedicine appointment, you still need to schedule a face-to-face appointment, you’ve already discussed a lot of the information your doctor needs, which may make your appointment shorter and more efficient. The takeaway Not every doctor’s appointment can be done virtually, but many appointments can. They can be used for triage (do I really have to go to the office, the clinic or the emergency room?), for diagnosis and, if necessary, for follow-up care. If you’re looking to dig into telemedicine, speak to your doctor to see what the office or clinic has to offer. Sometimes “going to a doctor” means simply going to your computer or picking up your phone.
http://resourcingstrategies.com/telehealth-sufferers-nonetheless-have-high-quality-physician-time/
ASTM International and its international membership of volunteer technical experts work to develop and integrate standards in an effort to improve public health and safety and consumer confidence. That’s why ASTM founded its Additive Manufacturing Center of Excellence (AM CoE) back in 2018: its academia, industry, and government partners lead R&D efforts to speed up the development and adoption of additive manufacturing by supporting standardization and certification, providing market intelligence and business strategy, developing training and certification programs, and offering advisory services via Wohlers Associates. Now, just before RAPID + TCT 2022 kicks off in Detroit, the AM CoE and its founding members have announced the formal launch of the Consortium for Materials Data and Standardization (CMDS). “Consortia for Materials Data & Standardization (CMDS) enables companies of all sizes from across the entire additive manufacturing ecosystem to collaborate on standardizing the best practices for materials data generation and creating, curating, and managing the data needed to accelerate the industrialization and full adoption of AM technologies,” the CMDS website states. The initiative aims to bring together leading organizations from many different industries that represent the full AM value stream, in order to work on standardizing the requirements for generating data on AM materials. Additionally, the members of this newly formed consortium will develop and manage reference datasets, such as one for powder bed fusion (PBF) 3D printing, that will help speed up the qualification, and ultimate adoption, of additive manufacturing. In addition to ASTM International, the other AM CoE partners that will support the execution and main actions of the member-driven consortium are NASA and Auburn University. There are 21 founding members of the CMDS initiative, including Auburn University, Desktop Metal, Raytheon Technologies, Sigma Labs, AddUp, GE Additive, and Boeing. These members are obviously industry experts in their own right, and will work together to ensure access to these important material datasets, as well as support the AM CoE’s main mission of research to standards (R2S), filling standardization gaps. “This initiative connects industry, academia, and government teams to accelerate the path for additive adoption, moving additive manufacturing from a process with significant financial barriers to a standard manufacturing process for widespread use. Raytheon Technologies is bringing their experience and diverse research and data to help create the standards and necessary qualification methods,” stated Jeff Shubrooks, Raytheon Missiles and Defense Additive Manufacturing Technology Area Lead. The main focus of the CMDS will be developing the important process-structure-property relationships needed to create new methods of generating what ASTM calls “machine agnostic materials data.” Consortium partners will organize a database of quality material data, which will be available to all members so they can develop the necessary data analytics to support fast qualification of new AM materials and applications; create tools like probabilistic and physics-based models; and ensure real-time quality assurance for scaling 3D printing. “GE Additive is pleased to continue its engagement with the AM CoE Industry Consortium. 
As a company that offers the entire additive ecosystem, from feedstock to finished parts, we recognize the importance of developing highly pedigreed materials property datasets,” explained Amber Andreaco, section manager – materials & powders, GE Additive. “By coming together to define a common approach to materials characterization, we hope to drive further understanding and expansion of the additive industry as a whole.” CMDS members, with necessary input from regulatory bodies and other government agencies, will also set up requirements and best practices for creating AM material datasets by determining sources of variability and quantifying the sensitivities, and by establishing guidelines that can be used to appraise the pedigree and quality of existing material datasets. The consortium will also select materials of interest, and the specific properties, such as thermal, corrosion, and static properties, that make those materials suitable for particular applications, and carry out projects that support the development of these datasets and standards. Members of the CMDS will maintain a secure, members-only Data Management System, which optimizes a material data generation workflow that incorporates Common Data Dictionary, Data Exchange Formats, and Pedigree standards. Members have exclusive use of the material datasets developed as part of the initiative, but in exchange, they will need to share lessons learned and research results with related ASTM committees, such as the F42 committee on additive manufacturing, in order to inform new AM specifications and standards. Sharing information like this can only help our industry, as it keeps everything consistent and up to date. To learn more about the new Consortium for Materials Data and Standardization, check out the website, or visit ASTM’s AM CoE at RAPID this week in Booth #2113.
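To make the idea of a “pedigreed,” machine-agnostic material data record a bit more concrete, here is a minimal sketch in Python. The field names and structure are assumptions for illustration only; they are not the actual CMDS Common Data Dictionary or Data Exchange Format, neither of which is spelled out in this article.

from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative only: these field names are assumptions, not the real CMDS data dictionary.

@dataclass
class TensileResult:
    specimen_id: str
    ultimate_strength_mpa: float
    yield_strength_mpa: float
    elongation_pct: float

@dataclass
class MaterialDataRecord:
    """One hypothetical 'pedigreed' record for a powder bed fusion (PBF) build."""
    alloy: str                       # e.g. "Ti-6Al-4V"
    process: str                     # e.g. "L-PBF"
    machine_vendor: str              # recorded so data stays comparable across machines
    powder_lot: str                  # pedigree: traceability back to the feedstock
    layer_thickness_um: float
    laser_power_w: float
    scan_speed_mm_s: float
    heat_treatment: Optional[str] = None
    tensile_results: List[TensileResult] = field(default_factory=list)

    def mean_uts(self) -> float:
        """Average ultimate tensile strength across the specimens in this record."""
        if not self.tensile_results:
            raise ValueError("no tensile results recorded")
        return sum(r.ultimate_strength_mpa for r in self.tensile_results) / len(self.tensile_results)

A shared dictionary of fields along these lines is what would let members pool results generated on different machines and still compare like with like.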
https://3dprint.com/291283/astm-launches-consortium-for-3d-printing-materials-data-standardization/
ON JUNE 27, 2018 my column was captioned “A View of the Public Service from Outside”. It was in response to the PM’s call to the public to report directly to him about complaints with the public service. I quoted from the editorial of the Searchlight newspaper that urged a closer look at the hiring, promotion and appointment procedures and performance of those who manage the public service. I indicated that those issues had long been festering and argued that “a happy and properly functioning public service is critical to any country’s development especially one where the state looms large”. The High Court’s ruling given by Justice Esco Henry in the case brought against the Public Service Commission (PSC) by the Public Service Union (PSU) about its failure to observe principles of fairness, transparency and objectivity in the promotion of public servants therefore caught my attention, and hopefully also the nation’s. The PSU used the cases of five public servants to challenge the system of promotion. It was bad enough when Justice Henry stated that the PSC “failed to observe principles of fairness, transparency and objectivity in exercising its function” according to the regulations that govern the process of promotion, but when at least in one Ministry she concluded that the breach of regulations “was deliberate and intentional” then we are into serious business. I supposed the government would have already responded to this although I am not aware of any. What we have here is a judgement about five public servants and one has to conclude that it also applies to others. It is of interest that this judgement became public at a time when 85 police officers received promotion. One wonders if the issues raised in the Court’s judgement also applied to police officers. Now that cases have been documented, there should be a call for a total revamp and closer examination of what happens with the Public Service Commission and in the public service. Persons were overlooked for promotion. One of the cases showed that one individual had not been promoted for 30 years. Not only were persons overlooked for promotion, but others were catapulted above senior persons with more experience and qualification. Along with all of this was that annual evaluation of workers either did not exist or was irregular. Documentation and confidential reports about those who were to be considered for or in line for promotion were apparently not a regular part of the process. The PSC we are told, acted “without due regard to the principle of fairness, transparency and objectivity”. Moreover, there were “unreasonable delays and inconsistencies in the process.” There might even have been cases of nepotism. This High Court judgement demands a quick and serious response. First by the public service unions which need to bring forward other cases and to ensure, too, that proper procedures for promotion are observed. The government also has to act quickly for not only is this a stain on its governance, but it would have had a negative impact on the proper functioning of the Public Service. I have long felt that there was a great deal of dissatisfaction within the public service and with that obviously goes apathy and unproductivity. How does a public servant feel or react when a junior without his experience or qualification is promoted above him and he is asked to teach him the job? 
The public service is central to the country’s development, so a satisfied and properly functioning public service is necessary not only for good governance but also for national development. With the Court’s ruling, heads should now roll! But will they?
https://searchlight.vc/searchlight/dr-fraser/2019/01/11/damning-high-court-judgement-re-public-officers/
Posted by Google News staff on April 11, 2018 07:31:14. Optimization is a term that describes the way a computer system is made to run more efficiently, using advanced algorithms. A computer runs a set of algorithms to do many different things, and these algorithms are optimized to work with the data it has. Optimizations can be applied to various types of data: a database, a file system, or even data in general. There are many ways to achieve optimization: in-house or third-party solutions can be used. The most common approach is called “in-house,” which means that the data is stored in the organization’s own database or file system rather than handled by an outside system. This article provides an overview of different types of optimization, including in-house optimization, and describes the most common types. The term “optimization” has become a buzzword and is now part of the industry’s everyday vocabulary. Optimization is not something that happens in a vacuum; rather, a system’s software and hardware are optimized together to handle the data and data processing that the system needs. Optical systems and software are used to improve the performance of a computer by increasing the quality of the data that the computer sees. Optical systems, such as optical drives, can also help computers read and write data faster, but there are also other ways to increase performance. Optics are used in a variety of ways, such as for reading and writing images, and for optical networking (an optical link that connects two computers). Optical networks can be a valuable way to connect multiple computers together. Optical networks are useful because they are cheap, reliable, and flexible. Optical networking is also used to enable data-intensive tasks, such as the creation of 3D models. It can also be used for remote data transfers and file transfers. Optical networking is typically used in large data centers that have lots of data, which makes it useful for storing and retrieving large amounts of data. Optical technologies also allow computers to perform calculations that are difficult or impossible with conventional computers. For example, optical networks allow a computer to transfer data faster than a conventional connection can read and transmit it. Optical technology can also improve the overall efficiency of a system by improving the ability of the computer to handle more data than normal. Networking technologies such as Ethernet are used primarily to improve data transmission and distribution, as well as data processing and storage. Ethernet is often used to move data between computers, and the ability to send data at much higher speeds is useful for the Internet of Things. Wireless technologies are used for data transfers between computers. Wireline connections, such as a wired link to a router, are often used for voice and data transmission, as opposed to wireless connections that connect directly to a device or a network. Wireless routers can also transfer data more efficiently than wired connections, and wireless networks can offer a faster data transfer than wired networks. Wirelessly connected devices can be found in most homes today, including computers, phones, and televisions. Wire-free connections are a more reliable method of communication than wired links. They provide the ability for users to talk to their devices, such that they can share their data with others. 
Wired connections are also available for use with the Internet. Wireless link technologies are often available for devices that do not use a wired network, such as telephones and mobile phones. Wireless networks, which are used by people with limited access to the Internet, can allow users to get online in a way that is not possible with a fixed wired connection. Wireless connections are often not as cheap as wired connections. However, wireless connections can be much faster and have lower latency, while wireline connections are used mainly in homes. Fiber networks are usually used to feed home routers and other devices that connect to the internet. Wireless connections are more reliable and can deliver speeds of up to 100 times faster than wired connections. Wireless nodes are devices that can be connected to an internet service provider, such as AT&T or Verizon, or to a wireless network, like a wireless hotspot, via a modem. These nodes are used mostly for wireless data transfers. Wireless network interfaces are usually available alongside wired connections, and they are sometimes also used in wireless devices such as laptops, smartphones, and tablets. Wireless nodes are sometimes used as antennas for wireless communications in devices such as wireless routers and cell phones. Wireless network technologies, such as Wi-Fi, are also used for wireless communication. Wireless networks can be useful for Internet connections because they allow users to connect to a different Wi-Fi network at the time of a data transfer.
https://prozokti.com/2021/08/10/what-does-optimized-mean/
Conference themes and approaches can be looked at as sort of a crazy quilt representing the state of the industry: with pieces of all different shapes and colors that come together to form a cohesive whole. They fit together in many ways, depending on the organizers (quilters?); many pieces are omitted and saved for the next quilt. But finding the most successful combinations requires technique as well as imagination. Here are some topics I’ve been enjoying this year so far. The annual WCBP conference and related CMC Strategy Forum early in the year are always good venues. We publish the forum’s resulting consensus papers, as you know. The particulates discussion this year was especially timely in light of new analytical techniques — and we’ll publish that paper this fall. I returned to DC for the Phacilitate Cell and Gene Therapy conference, during which my publisher and I worked to finalize our cell therapies supplement (May). Becoming more conversant with the vocabulary and science of regenerative medicine is a logical next step for BPI, especially now when many of those companies are debating commercialization models. In March, the assembly of conferences in IBC’s Biopharmaceutical Production and Manufacturing week highlighted for us the increasing sophistication of risk-based approaches, especially to process validation and technology transfer. And the BioProcess International Europe conference in Nice, organized by our UK-based IBC/IIR team, provided a comprehensive look at critical issues, offering additional tech transfer case studies and business models for scaling cell therapies up and/or out. A month later I was in Vienna for the biennial ESACT conference, where I found two sessions to be especially valuable: one on vaccines and the other on fusion proteins. Discussion of cell therapies continued. Midway through that week, I hopped up to Rotterdam for the ISCT conference — during which I was awed by the progress toward stem-cell therapies, organ replacement from a donor’s own cells, and other illustrations of progress in these areas that still sound like science fiction to me. A final presentation that week by Howard Levine of BPTC outlined a theme we’re tackling: What lessons from the MAb world can apply toward expediting commercialization of cell therapies? Then our BioProcess Theater at BIO was a huge success this year, with many presentations and both noon-time panels fully attended. Charles Squires of Pfenex led a panel discussion on vaccines, and Levine’s colleague Susan Dana Jones led one on expediting commercialization of cell therapies. Those have been the topics for this year! You will next see us at the annual BioProcess International Conference (in Long Beach, CA, this fall). I am indebted to all of you for helping me keep my brain cells tumbling around, happily loaded with the next big (and small) topics to piece together. And I never want to omit a critical piece of material, so please keep those emails coming my way. And I shall look forward to seeing a lot of you in Long Beach in November!
https://bioprocessintl.com/2011/from-the-editor-320924/
The dish of Knights! #food #home #Wellington #beef #London #Knights #thyme “Rumor has it that Beef Wellington got its name from Arthur Wellesley, the 1st Duke of Wellington, who counted the dish among his favorite recipes” — Paul Ebeling My Beef Wellington Wrapped in golden, buttery puff pastry and filled with deeply savory mushroom duxelles, beef Wellington is an unforgettable centerpiece to a super meal. I have added dried porcini mushrooms, as they deliver extra umami to the beef, while a touch of Dijon and chopped herbs adds a layer of freshness. If you skip the foie gras the dish is more approachable, and swapping out the traditional crepe lining for phyllo streamlines the process, but beef Wellington demands several hours of searing, stuffing, rolling, and chilling to ensure its magical result. I have been making this dish for 47 yrs. The addition of thyme makes it the dish of Knights; it is special! Yield: Serves 4 Ingredients Beef - 1 2.5-lb center-cut Wagyu or grass-fed prime beef tenderloin roast, trimmed - 2 teaspoons kosher salt - 1 teaspoon freshly ground black pepper - ¼ oz dried porcini mushrooms (5 to 6 pieces), ground to a powder in a spice grinder - 2 tablespoons Avocado oil - 1 ½ tablespoons Dijon mustard Duxelles - 1 ½ pounds fresh cremini mushrooms, stems trimmed, coarsely chopped (8 cups) - 3 large shallots, roughly chopped (about 1/2 cup) - 3 tablespoons unsalted butter - 3 medium garlic cloves, finely chopped - 1 tablespoon chopped fresh thyme leaves, plus thyme branches for serving - 2 ½ tablespoons dry sherry - ½ teaspoon freshly ground black pepper Additional Ingredients - 2 frozen phyllo pastry sheets, thawed - 8 thin prosciutto slices - ¼ cup finely chopped chives - ¼ cup finely chopped flat-leaf parsley - 1 (14-oz) package all-butter frozen puff pastry sheet, thawed according to package directions - All-purpose flour, for dusting - 1 large egg, beaten - Flaky sea salt Directions Prepare the Beef - Using kitchen twine, tie tenderloin crosswise at 2-inch intervals, starting from center and working out to ends. Sprinkle beef all over with salt and pepper. Place on a wire rack set inside a rimmed baking sheet. Let stand at room temperature 1 hr. Prepare the duxelles - Pulse half of the cremini mushrooms and half of the shallots in a food processor until very finely chopped, about 10 pulses, stopping to scrape down sides and stir as needed so you have evenly sized pieces. Transfer mixture to a medium bowl. Repeat process with remaining creminis and shallots. - Melt butter in a large skillet over medium-high until foamy. Add cremini-shallot mixture; cook, stirring occasionally, until creminis are dry and beginning to brown and stick to bottom of skillet in spots, 25 to 30 minutes. Add garlic and thyme; cook, stirring constantly, until fragrant, about 1 minute. Add sherry and pepper, stirring to scrape up any browned bits on bottom of skillet. Cook, stirring often, until mixture is dry and just starts to stick to bottom of skillet again, 2 to 4 mins. Remove from heat. Spread mixture out on a small baking sheet. Chill, uncovered, until cold, about 30 mins. Cold duxelles may be stored in an airtight container in refrigerator up to 2 days. - Heat Avocado oil in a large skillet or a small roasting pan over medium-high until shimmering. Add tenderloin; cook, turning occasionally, until browned on all sides, 10 to 12 mins. Transfer tenderloin to a wire rack set inside a baking sheet; let cool 15 mins. Remove and discard twine. 
Brush tenderloin all over with mustard, and sprinkle all over with porcini powder. Chill beef in refrigerator, uncovered, at least 1 hr. - Moisten a clean work surface with a damp kitchen towel, and overlap 3 pieces of plastic wrap on work surface to form a 22-in square. Overlap the 2 phyllo sheets in center of plastic wrap to form a 13 1/2-by-12-in rectangle, with long edge facing you. Overlap prosciutto on top of phyllo in 2 rows, leaving about a 1/2-in border on the left and right phyllo edges. Spread duxelles evenly over prosciutto, and gently press down to form an even layer. Sprinkle with chives and parsley. Lay chilled beef lengthwise over bottom 1/3rd of duxelles. Roll up beef and phyllo into a log, using plastic wrap as a guide and keeping it on the exterior of the log. Hold the outer ends of plastic wrap, and roll log on work surface back toward you to tighten. Refrigerate while you prepare the puff pastry. - Preheat oven to 425°F. Roll puff pastry out on a lightly floured work surface to a 15-by-12-in rectangle with long edge facing you. Lightly brush top 1/3rd of puff pastry with some of the beaten egg. Unwrap chilled beef log, and discard plastic wrap. Lay log lengthwise on bottom edge of puff pastry. Holding edge in place, roll up jelly-roll style until log is completely wrapped. Turn the log seam side up, and gently press the overlapping dough to seal. Fold ends of puff pastry down over beef, pinching seams to seal. - Transfer beef log, seam side down, to a baking sheet lined with parchment paper. Brush off excess flour using a pastry brush. Brush puff pastry all over with beaten egg. Using a paring knife, very lightly score a line lengthwise down center of puff pastry. Very lightly score 2 lines parallel to the 1st, 1 on either side of the center line, and each spaced 1 1/2 ins outward from the center. Very lightly score zig-zag lines across the beef Wellington, spacing rows 1/2 in apart and forming a herringbone-like pattern. Sprinkle with flaky sea salt. Using the tip of a paring knife, create 3 (1-in-long) steam vents along center line, spaced about 3 in apart. - Bake until puff pastry is puffed and browned and a thermometer inserted into center of beef registers 120°F, 40 to 45 mins. Using 2 large spatulas, carefully lift beef Wellington from baking sheet, and transfer to a cutting board. Let rest 15 mins. Using a serrated knife, cut into slices. - Serve beef Wellington on a platter, garnished with thyme branches, the herb of Knights! Pair beef Wellington with a dry, medium-bodied red wine such as a Bordeaux, Pinot Noir, Barolo, or Malbec to stand up to the beef flavors while complementing the puff pastry, mushroom, and foie gras flavors in this elegant dish.
https://www.livetradingnews.com/215569-215569.html
This committee was asked to address “the optimal approach to completing the NEO census called for in the George E. Brown, Jr. Near-Earth Object Survey section of the 2005 NASA Authorization Act.” The committee was also asked to address “the optimal approach to developing a deflection [i.e., orbit change] capability.” The committee concluded that there is no way to define “optimal” in this context in a universally acceptable manner: There are too many variables involved that can be both chosen and weighted in too many plausible ways. A key question nevertheless is: Given the low risk over a period of, say, a decade, how much should the United States invest now? This chapter discusses the cost implications of typical solutions that it considered for survey completion and mitigation. A summary of the background on these cost implications is presented first. Government funding, primarily through NASA, now supports a modest, ongoing program of sky surveys to discover and track NEOs. NASA also supports analysis and archiving activities. According to NASA, total expenditures are approximately $4 million annually, which does not include any funding for the Arecibo Observatory in Puerto Rico. As the committee concluded in its interim report and confirmed in this final report, current expenditures are insufficient to achieve the goals established by Congress in the George E. Brown, Jr., Near-Earth Object Survey Act of 2005. The committee was asked and did perform independent cost estimates of the solutions that it considered. However, most of the survey and detection and mitigation options that were cost estimated are technically immature, and cost estimates at this early stage of development are notoriously unreliable. At best, these estimates provide only crude approximations of final costs of pursuing any of these options. The committee therefore did not use these cost estimates in reaching its conclusions. The committee outlined three possible levels of funding and a possible program for each level. These three, somewhat arbitrary, levels are separated by factors of five: $10 million, $50 million, and $250 million annually. - $10-million level. The committee concluded that if only $10 million were appropriated annually, an approximately optimal allocation would be as follows: - $4 million for continuing ground-based optical surveys and for making follow-up observations on long-known and newly discovered NEOs, including determining their orbits and archiving these along with the observations; the archive would continue to be publicly accessible; - $2.5 million to support radar observations of NEOs at the Arecibo Observatory; - $1.5 million to support radar observations at the Goldstone Observatory; and - $2 million to support research on a range of issues related to NEO hazards, including but not necessarily limited to (see Chapter 6) the study of sky distribution of NEOs and the development of warning-time statistics; concept studies of mitigation missions; studies of bursts in the atmosphere of incoming objects greater than a few meters in diameter; laboratory studies of impacts at speeds up to the highest feasible to obtain; and leadership and organizational planning, both nationally and internationally. - The $10-million funding level would not allow on any time scale the completion of the mandated survey to discover 90 percent of near-Earth objects of 140 meters in diameter or greater. 
Also lost would be any possibility for mounting spacecraft missions—for example, to test active mitigation techniques in situ. (A caveat: The funds designated above to support radar observations are for these observations alone; were the maintenance and operations of the radar-telescope sites not supported as at present, there would be a very large shortfall for both sites: about $10 million annually for the Arecibo Observatory and likely a larger figure for the Goldstone Observatory.) - $50-million level. At a $50-million annual appropriations level, in addition to the tasks listed above, the committee notes that the remaining $40 million could be used for the following: - Support of a ground-based facility, as discussed in Chapter 3, to enable the completion of the congressionally mandated survey to detect 90 percent of near-Earth objects of 140 meters in diameter or greater by the delayed date of 2030. - The $50-million funding level would likely not be sufficient for the United States alone to conduct space telescope missions that might be able to carry through a more complete survey faster. In addition, this funding level is insufficient for the development and testing of mitigation techniques in situ. However, such missions might be feasible to undertake if conducted internationally, either in cooperation with traditional space partners or as part of an international entity created to work on the NEO hazards issue. Accommodating both the advanced survey and a mitigation mission at this funding level is very unlikely to be feasible, except on a time scale extended by decades. - $250-million level. At a $250-million annual budget level, a robust NEO program could be undertaken unilaterally by the United States. For this program, in addition to the research program a more robust survey program could be undertaken that would include redundancy by means of some combination of ground-and space-based approaches. This level of funding would also enable a space mission similar to the European Space Agency’s (ESA’s) proposed Don Quijote spacecraft, either alone, or preferably as part of an international collaboration. This space mission would test in situ instrumentation for detailed characterization, as well as impact technique(s) for changing the orbit of a threatening object, albeit on only one NEO. The target could be chosen from among those fairly well characterized by ground observations so as to check these results with those determined by means of the in situ instruments. The committee assumed constant annual funding at each of the three levels. For the highest level the annual funding would likely need to vary substantially as is common for spacecraft programs. Desirable variations of annual funding over time would likely be fractionally lower for the second level, and even lower for the first level. How long should funding continue? The committee deems it of the highest priority to monitor the skies continually for threatening NEOs; therefore, funding stability is important, particularly for the lowest level. The second level, if implemented, would likely be needed at its full level for about 4 years in order to contribute to the completion of the mandated survey. The operations and maintenance of such instruments beyond this survey has not been investigated by the committee. However, were the Large Synoptic Survey Telescope to continue operating at its projected costs, this second-level budget could be reduced. 
The additional funding provided in the third and highest level would probably be needed only through the completion of the major part of a Don Quijote-type mission, under a decade in total, and could be decreased gradually but substantially thereafter. Finding: A $10-million annual level of funding would be sufficient for continuing existing surveys, maintaining the radar capability at the Arecibo and Goldstone Observatories, and supporting a modest level of research on the hazards posed by near-Earth objects. This level would not allow the achievement of the goals established in the George E. Brown, Jr. Near-Earth Object Survey Act of 2005 on any time scale. A $50-million annual level of funding for several years would likely be sufficient to achieve the goals of the George E. Brown, Jr. Near-Earth Object Survey Act of 2005. A $250-million annual level of funding, if continued for somewhat under a decade, would be sufficient to accomplish the survey and research objectives, plus provide survey redundancy and support for a space mission to test in situ characterization and mitigation.
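For readers who want to check the arithmetic behind the levels described above, here is a small Python sketch. The dollar figures are taken from the text; everything else (names, structure) is illustrative only.

# Figures come from the text above; the dictionary keys are paraphrases, not official labels.

TEN_MILLION_ALLOCATION = {  # millions of USD per year
    "ground-based optical surveys, follow-up, orbit determination, archive": 4.0,
    "radar observations at the Arecibo Observatory": 2.5,
    "radar observations at the Goldstone Observatory": 1.5,
    "research on NEO hazards": 2.0,
}

FUNDING_LEVELS = [10, 50, 250]  # millions of USD per year, "separated by factors of five"

def check_levels():
    total = sum(TEN_MILLION_ALLOCATION.values())
    assert abs(total - FUNDING_LEVELS[0]) < 1e-9, f"allocation sums to {total}M, not 10M"
    for lower, higher in zip(FUNDING_LEVELS, FUNDING_LEVELS[1:]):
        assert higher == 5 * lower  # each level is five times the previous one
    print(f"The $10M level is fully allocated: {total:.1f}M across {len(TEN_MILLION_ALLOCATION)} items")

if __name__ == "__main__":
    check_levels()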
https://www.nap.edu/read/12842/chapter/10
Q: African Clicking Language During a class a friend of mine brought up an African Clicking language. I don't have a lot of information about this. Which language groups in Africa include clicking, and what is known about the cultural and ethnic origins of clicking languages in Africa. A: A little background here: there are generally considered to be 5 "races" of man historically native to Africa1: Afro-Asiatic, Niger-Congo, Nilo-Saharan, Pygmy, and Khoisan. Each would have originally had their own native language, and their own native turf: roughly North Africa, Sub-Saharan West Africa, Sub-Saharan Nile Valley, Southern Rainforest, and Southern non-Rainforest respectively. Back then, the Khoisan and most likely the Pygmy languages made generous use of click consonants. The others did not have them. Sometime around the year 1000BC, the Niger-Congo group acquired Iron age technology, and used it to slowly spread East across the whole continent. At this point, all the people to the south were still hunter-gatherers with no metallurgy. To an Iron age people, this is a huge power vacuum. History, like nature, abhors a vacuum, so what happened next should be no surprise: One group of the Niger-Congo peoples (who we call "Bantu") quickly moved south and conquered all of the territory that was of any use for their tropical-based agriculture. The Khoisan were left to the desert areas and the far temperate south, but at least kept their click languages. The Pygmy got to keep their jungle, but lost their languages and now all speak Bantu2. So we got left with the language distribution map you see here. However, the exchange wasn't all one-way. Many of the Bantu languages nearest the Pygmy and Khoisans ended up with some borrowed click-words (which is how we know the Pygmies probably had clicks in their languages). So today what have left are the Khoisan languages, which use clicks extensively, and some Bantu languages which borrowed a few clicks (and sometimes ran with them a bit). 1-Technically speaking all of mankind can ultimately be considered "native to Africa", but all other "races"/language groups spent the balance of their unique development and history outside of Africa. 2-Actually, this may not be completely true. Hadza traditionally has been thrown in with the Khoisan group because of its clicks, but quite recently linguists decided it is an unrelated isolate. Genetic studies seem to indicate that the speakers are related to...Pygmies! So this may actually be our one remaining Pygmy language.
Table Export Parameter missing for BAPI Extraction Type Error: System.Data.SqlClient.SqlException (0x80131904): Incorrect syntax near ')' Reason: Based on the BAPI signature (Export Table Parameter), T-SQL code is generated in Xtract Universal. If no export parameters are checked in the BAPI extraction, an incorrect insert statement is generated. Solution: Ensure that at least one export parameter is checked. The error messages are destination-dependent and can therefore differ.
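A minimal sketch of the failure mode, for illustration only: this is not Xtract Universal's actual code generator, and the table and parameter names below are invented. Building an INSERT statement from the checked export parameters shows why an empty selection produces T-SQL that SQL Server rejects with "Incorrect syntax near ')'".

# Illustration only: not the product's real generator; table/parameter names are invented.

def build_insert(table, checked_export_params):
    """Assemble a parameterized T-SQL INSERT from the checked export parameters."""
    cols = ", ".join(f"[{p}]" for p in checked_export_params)
    placeholders = ", ".join("?" for _ in checked_export_params)
    return f"INSERT INTO [{table}] ({cols}) VALUES ({placeholders})"

print(build_insert("BAPI_RESULT", ["EV_MATERIAL", "EV_PLANT"]))
# -> INSERT INTO [BAPI_RESULT] ([EV_MATERIAL], [EV_PLANT]) VALUES (?, ?)   (valid)

print(build_insert("BAPI_RESULT", []))
# -> INSERT INTO [BAPI_RESULT] () VALUES ()
#    SQL Server rejects the empty column list: "Incorrect syntax near ')'"

Checking at least one export parameter, as the solution above says, keeps the generated column list non-empty and the statement valid.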
https://support.theobald-software.com/helpdesk/KB/View/14786-table-export-parameter-missing-for-bapi-extraction-type
Dissection of cadavers has long been a rite of passage for first-year medical school students. In 1925, two students proved so adept at the task that their professor set them up with a particular challenge: to dissect a body’s entire nervous system, from the brain stem down to the base of the spinal cord, keeping the entire network intact. The project took the two brainiacs 1,500 hours, and their remarkable handiwork is still on display today. L.P. Ramsdell and M.A. Schalck were bespectacled members of the class of 1928 of what was then called Kirksville College of Osteopathy & Surgery, in Missouri. The institution was founded in 1892 by Dr. Andrew Taylor Still, a physician and surgeon known as the father of osteopathy, a branch of medical practice that emphasizes the treatment of medical disorders through the manipulation and massage of the bones, joints, and muscles. Today the institution is called A.T. Still University. The study of anatomy dates back thousands of years to the ancient Egyptians and Greeks, but the science of anatomical study developed later, and was notably promoted by Leonardo da Vinci for artistic purposes. In the late 1500s, anatomical theaters in Italy provided education and entertainment: hired hands would cut away the skin of cadavers, professors would lecture, and anyone was welcome to sit in the theater to observe. In the 18th and 19th centuries, medical schools in the U.S. and the U.K. formalized the practice of studying anatomy through cadaver dissection. Bodies were often obtained from poorhouses and prisons. (The demand for cadavers also gave rise to the notorious and nefarious practice of body snatching from graveyards to provide a profitable supply.) At Kirksville’s Osteopathic College, first-year students were required to dissect an arm to gain understanding of the interconnectedness of bones, joints, muscles, and nerves that underpins the practice of osteopathy. Ramsdell and Schalck’s work was so meticulous and detailed that they were asked to complete an entire nervous system. The two worked down from the brain to the spinal cord, carefully cutting through skin, muscle, and tissue to expose the nerve fibers without severing them. The project took them five months. The body’s identity has been lost to history, though it likely came from a prison or poorhouse. “After they cleared each nerve, they rolled them in cotton batting soaked in some kind of preservative,” Jason Haxton, director of the Museum of Osteopathic Medicine at A.T. Still, told Live Science. “So, as they worked their way down, there was just a mass of little rolls of cotton.” Ramsdell and Schalck mounted the nervous system on a board of shellacked wood and labeled the display, which was exhibited around the country at museums and medical conferences. “The two young men have dissected out a complete nervous system,” The Journal of Osteopathy reported in June 1926. “Brain, spinal cord, nerve trunks with their branches, sympathetic nerves, ganglia, vagus and branches are intact and free from all other tissue. It is the first time the task has ever been accomplished here.” Today, the fruits of their labor can be seen at the Museum of Osteopathic Medicine at Andrew Taylor Still University, in Kirksville, Missouri. The museum includes historical medical photographs, documents, and books dating from the early 1800s from the private collection of Andrew Taylor Still. 
Live Science reports that only three other such intact hand-removed dissections exist today: at the Smithsonian Institute in Washington, D.C., at Drexel University in Philadelphia, and at a medical museum in Thailand. (Body Worlds, a traveling exhibition, displays a nervous system extracted by chemicals.) Museum director, Jason Haxton, has said that its display has been valued at $1 million.
https://www.thevintagenews.com/2018/03/08/dissected-nervous-system/
Leonardo da Vinci - Anatomical Form with Function Leonardo da Vinci was not a physician, so why did he study human anatomy? He believed that good art was based on scientific understanding of everything depicted. He also subscribed to the microcosm hypothesis. Guy Rooker comes to EDFAS not from the arts world but from the realm of science in general and surgery in particular. In this lecture he will explore the delineation of our anatomical knowledge through a world of art and specifically the remarkable contribution made to our understanding of the subject by Leonardo da Vinci.
https://www.eveshamartscentre.co.uk/leonardo-da-vinci-anatomical-form-with-function
The composition distribution of an ethylene alpha-olefin copolymer refers to the distribution of comonomer (short chain branches) among the molecules that comprise the polyethylene polymer. When the amount of short chain branches varies among the polyethylene molecules, the resin is said to have a “broad” composition distribution. When the amount of comonomer per 1000 carbons is similar among the polyethylene molecules of different chain lengths, the composition distribution is said to be “narrow”. The composition distribution is known to influence the properties of copolymers, for example, extractables content, environmental stress crack resistance, heat sealing, and tear strength. The composition distribution of a polyolefin may be readily measured by methods known in the art, for example, Temperature Raising Elution Fractionation (TREF) or Crystallization Analysis Fractionation (CRYSTAF). Ethylene alpha-olefin copolymers are typically produced in a low pressure reactor, utilizing, for example, solution, slurry, or gas phase polymerization processes. Polymerization takes place in the presence of catalyst systems such as those employing, for example, a Ziegler-Natta catalyst, a chromium based catalyst, a metallocene catalyst, or combinations thereof. It is generally known in the art that a polyolefin's composition distribution is largely dictated by the type of catalyst used and typically invariable for a given catalyst system. Ziegler-Natta catalysts and chromium based catalysts produce resins with broad composition distributions (BCD), whereas metallocene catalysts normally produce resins with narrow composition distributions (NCD). Resins having a Broad Orthogonal Composition Distribution (BOCD) in which the comonomer is incorporated predominantly in the high molecular weight chains can lead to improved physical properties, for example toughness properties and Environmental Stress Crack Resistance (ESCR). Because of the improved physical properties of resins with orthogonal composition distributions needed for commercially desirable products, there exists a need for medium and high density polyethylenes having an orthogonal composition distribution.
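To make the narrow / broad / BOCD distinction concrete, here is a toy Python sketch. The data, the thresholds, and the classification rule are invented for illustration; they are not a standard TREF or CRYSTAF analysis.

# Toy illustration of the distinction described above; data and thresholds are invented.

def classify_composition_distribution(fractions):
    """fractions: list of (molecular_weight, short_chain_branches_per_1000_C) tuples."""
    scb = [s for _, s in fractions]
    if max(scb) - min(scb) < 2.0:          # comonomer similar across chain lengths
        return "narrow"
    ordered = sorted(fractions)            # sort fractions by molecular weight
    half = len(ordered) // 2
    low_mw_scb = sum(s for _, s in ordered[:half]) / half
    high_mw_scb = sum(s for _, s in ordered[half:]) / (len(ordered) - half)
    if high_mw_scb > low_mw_scb:
        return "broad orthogonal (BOCD): comonomer concentrated in the high-MW chains"
    return "broad"

ziegler_like = [(30_000, 12.0), (80_000, 6.0), (200_000, 2.5)]  # more branching at low MW
bocd_like = [(30_000, 2.5), (80_000, 6.0), (200_000, 12.0)]     # more branching at high MW
print(classify_composition_distribution(ziegler_like))  # -> broad
print(classify_composition_distribution(bocd_like))     # -> broad orthogonal (BOCD): ...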
A guide to preventing common security misconfigurations Security misconfigurations are still part of OWASP’s Top 10 Security Risk list. This indicates they have been a persistent issue over the years. Security misconfigurations happen when supposed safeguards still leave vulnerabilities in a website or application. This normally happens when a system or database administrator or developer does not properly configure the security framework of an application, website, desktop or server. These misconfigurations leave applications vulnerable to attack. Preventing security misconfigurations is not normally solely dependent on one source. For instance, even if the developer implements secure coding practices, it is still up to the integration team to properly integrate the application into production, and the responsibility of the system administrator to actively patch and update the system. It is also the responsibility of the system owners or the governance team to ensure there are proper rules in place to help avoid these types of issues. These vulnerabilities can be located anywhere within an infrastructure to include custom code, databases, application or web servers, user workstations, routers, switches or even firewalls. This means it is important for developers, admins and management to all collaborate. Avoiding security misconfigurations is a team effort, not a solo one. What are some common types of security misconfigurations? Some common security misconfigurations include: - Unpatched systems - Using default account credentials (i.e., usernames and passwords) - Unprotected files and directories - Unused web pages - Poorly configured network devices These security misconfigurations can happen for a myriad of reasons. Having underqualified, or poorly trained staff, could lead to the issue. If a system administrator does not understand the importance of reviewing available patches and has never been trained on how to implement properly, an organization could be at major risk. It is important to not only stay abreast of newly released patches, but to also implement them in a mirrored test environment first to ensure they don’t cause other issues within a system. If a patch was inadvertently downloaded from a malicious source, installing it in the test environment first ensures only the test environment is damaged, not production. Also, a larger enterprise environment can have custom code in use, and special configurations. At times, installing a patch in these situations could potentially cause more harm than good, even creating more vulnerabilities. Using the test environment ensures the system administrator has time to evaluate the effects of the patch. Poorly trained administrators and poorly written cybersecurity policies breed an environment where default accounts are used. Most hackers know, or are skilled enough to figure out, the default account credentials for networking devices, operating systems and many applications. Using these default accounts makes it easy for cybercriminals to access your system and escalate their privileges. This is an easy fix, but it is a vulnerability that happens quite often. How can I prevent security misconfigurations? One of the best ways to prevent security misconfigurations is education and training. Educating your staff on current security trends helps ensure they make better decisions, and follow best practices. You can’t correct something that you don’t know. 
Some other recommendations from various security experts to prevent security misconfigurations include: - Developing a repeatable patching schedule - Keeping software up to date - Disabling default accounts - Encrypting data - Enforcing strong access controls - Providing admins with a repeatable process to avoid overlooking items - Setting security settings in development frameworks to a secure value - Running security scanners and performing regular system audits Making use of data-at-rest encryption schemes can help protect files from data exfiltration. So can applying proper access controls to both files and directories. These steps help offset the vulnerability of unprotected files and directories. Data exfiltration is a big fear for most organizations. Proprietary or sensitive data in the wrong hands can create embarrassment or dramatic losses for a company, both financially and in terms of personnel. Data is often a company’s most important asset. Running security scans on systems is an automated way to help identify vulnerabilities. Running these scans on a consistent schedule, and especially after making architectural changes, is an important step in reducing the vulnerability landscape. If implementing custom-written code, using a static code security scanner is also an important step before integrating that code into the production environment. Only give users access to data they absolutely need to do their jobs. Implement strong access controls, including enforcing the use of a strong username and password, and implement two-factor authentication mechanisms. Compartmentalize data. Make sure admins have separate accounts for when they are using their administrative privileges versus just acting as a user of the system. Using outdated software is still one of the most common security vulnerabilities. Many companies do not feel the need to invest in the latest and greatest. It seems “cheaper” to continue using legacy software, but in actuality, using outdated software puts a company at risk of losing not only assets, but also the trust of its customers, or even investors. Creating a consistent patch schedule and keeping software updated is vital to reducing a company’s threat vectors. Conclusion Security misconfigurations are still on the OWASP Top Ten list, ranked as number six this year. To avoid this risk, it is important for organizations to educate their staff, keep software up to date and ensure they are configuring their network equipment to current industry best practices. Hackers continue to grow smarter year after year. Every effort should be made to secure networks, not just for the sake of the company, but for the sake of the public as well.
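As a small illustration of automating two of the checks recommended above, here is a minimal Python sketch. The credential list and host records are invented for the example; a real audit would pull this information from an asset inventory and a vulnerability scanner rather than hard-coding it.

# Minimal sketch only: invented hosts and credentials, not a real scanner.

from datetime import date, timedelta

DEFAULT_CREDENTIALS = {("admin", "admin"), ("root", "toor"), ("admin", "password")}
MAX_PATCH_AGE_DAYS = 30

hosts = [
    {"name": "edge-router-1", "login": ("admin", "admin"), "last_patched": date(2023, 1, 5)},
    {"name": "app-server-1", "login": ("svc_app", "S3cure!pass"), "last_patched": date.today()},
]

def audit(hosts, today=None):
    """Flag hosts that still use default credentials or have gone unpatched too long."""
    today = today or date.today()
    findings = []
    for h in hosts:
        if tuple(h["login"]) in DEFAULT_CREDENTIALS:
            findings.append(f"{h['name']}: default credentials still enabled")
        if today - h["last_patched"] > timedelta(days=MAX_PATCH_AGE_DAYS):
            findings.append(f"{h['name']}: not patched in over {MAX_PATCH_AGE_DAYS} days")
    return findings

for finding in audit(hosts):
    print("MISCONFIGURATION:", finding)

Even a simple check like this, run on a schedule, turns two of the list items above (disabling default accounts, repeatable patching) into something measurable rather than a policy statement.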
https://resources.infosecinstitute.com/topic/guide-preventing-common-security-misconfigurations/
Global EV Motor Controller market size will reach million US$ by 2025, from million US$ in 2018, at a CAGR of during the forecast period. In this study, 2018 has been considered as the base year and 2019-2025 as the forecast period to estimate the market size for EV Motor Controller. This industry study presents the global EV Motor Controller market size, historical breakdown data (2014-2019) and forecast (2019-2025). The EV Motor Controller production, revenue and market share by manufacturers, key regions and type; The consumption of EV Motor Controller in volume terms are also provided for major countries (or regions), and for each application and product at the global level. Market share, growth rate, and competitive factors are also evaluated for market leaders SME S.p.A., Sevcon, etc. The following manufacturers are covered in this report: SME S.p.A. Sevcon KellyController Dongfeng Electric Vehicle Zhongshan Broad-Ocean Motor Shenzhen V&T Technologies Inovance technology Shenzhen Espirit Technology DAJUN TECH Zhuzhou CRRC Times Electric Tianjin Santroll Electric Automobile Technology Fujian Fugong Engineering Technology EV Motor Controller Breakdown Data by Type Ac Permanent Magnet Synchronous Motor Controller Ac Asynchronous Motor Controller DC Motor Controller EV Motor Controller Breakdown Data by Application Car Bus Others EV Motor Controller Production by Region United States Europe China Japan South Korea India Other Regions EV Motor Controller Consumption by Region North America United States Canada Mexico Asia-Pacific China India Japan South Korea Australia Indonesia Malaysia Philippines Thailand Vietnam Europe Germany France UK Italy Russia Rest of Europe Central & South America Brazil Rest of South America Middle East & Africa GCC Countries Turkey Egypt South Africa Rest of Middle East & Africa The study objectives are: To analyze and research the global EV Motor Controller status and future forecastinvolving, production, revenue, consumption, historical and forecast. To present the key EV Motor Controller manufacturers, production, revenue, market share, SWOT analysis and development plans in next few years. To segment the breakdown data by regions, type, manufacturers and applications. To analyze the global and key regions market potential and advantage, opportunity and challenge, restraints and risks. To identify significant trends, drivers, influence factors in global and regions. To strategically analyze each submarket with respect to individual growth trend and their contribution to the market. To analyze competitive developments such as expansions, agreements, new product launches, and acquisitions in the market. In this study, the years considered to estimate the market size of EV Motor Controller : History Year: 2014 - 2018 Base Year: 2018 Estimated Year: 2019 Forecast Year: 2019 - 2025 This report includes the estimation of market size for value (million USD) and volume (K Units). Both top-down and bottom-up approaches have been used to estimate and validate the market size of EV Motor Controller market, to estimate the size of various other dependent submarkets in the overall market. Key players in the market have been identified through secondary research, and their market shares have been determined through primary and secondary research. All percentage shares, splits, and breakdowns have been determined using secondary sources and verified primary sources. For the data information by region, company, type and application, 2018 is considered as the base year. 
Whenever data information was unavailable for the base year, the prior year has been considered.
https://www.acquiremarketresearch.com/industry-reports/ev-motor-controller-market/34849/
Where Do Kangaroos Live In Australia Map? It lives in Northern Territory (the Top End), Western Australia (Kimberley region), and the tropical regions of Queensland (Cape York Peninsula). This kangaroo prefers a tropical environment like open tropical forests, lower hills, grassland and woodlands where eucalypts are common. Do kangaroos live all over Australia? Most kangaroos live on the continent of Australia, though each species has a different place it likes to call home. For example, the musky rat-kangaroo likes to nestle down in little nests on the floor of the rainforests in northeastern Queensland. What habitat do kangaroos live in? Habitat and Distribution Kangaroos live in Australia, Tasmania, and surrounding islands in a variety of habitats such as forests, woodlands, plains, and savannas. Where do kangaroo rats live? Kangaroo rats tend to live in desert flatlands, creosote flats, and the sandy soils of desert washes. The rats burrow into the soil to better survive the sometimes harsh desert environment. Do kangaroos fart? Kangaroos don’t fart. These beasts were once the mystery of the animal kingdom — thought to produce low-methane, environmentally friendly toots. Do kangaroos live in the rainforest? Tree kangaroos live in lowland and mountainous rainforests in Papua New Guinea, Indonesia and the far north of Queensland, Australia. They have adapted to life in the trees, with shorter legs and stronger forelimbs for climbing, giving them somewhat of the appearance of a cross between a kangaroo and a lemur. Do kangaroos live in New Zealand? There are no kangaroos native to New Zealand, and the only ones to be found are at zoos and animal enclosures. In fact, people are so often mistaken about the presence of kangaroos in New Zealand that it has created a phenomenon called the Phantom Kangaroo. Where do kangaroos live and eat? Their habitat is the southern and southwestern parts of Australia, where they like to live in grassland areas with a forest nearby. Like eastern grey kangaroos, the western grey kangaroo eats mostly grasses, but they also eat shrubs, herbs, leaves, and tree bark. Do kangaroos live in the desert? Kangaroos are found in many different regions of Australia, including the desert and semi-arid regions. Kangaroos from these areas have behavioural and structural adaptations that enable them to survive the harsh conditions. In what countries are kangaroos found? Kangaroos are indigenous to Australia and New Guinea. Are there kangaroos in Tasmania? Tasmania has two species of wallaby – the Tasmanian pademelon and Bennetts wallaby – and one species of kangaroo, the Forester kangaroo. Occasionally, these species come into conflict with landowners. What eats the kangaroo? Threats to kangaroos Kangaroos have few natural predators: Dingoes, humans, Wedge-tailed Eagles and, before their extermination, Tasmanian Tigers. Introduced carnivores, such as wild dogs and foxes, prey on the young, and introduced herbivores compete with kangaroos for food. What eats a kangaroo rat? Predators. Unfortunately for the kangaroo rat, it has many predators. There are many creatures out there who would like to make a tasty meal out of this small creature. 
Owls, snakes, bobcats, foxes, badgers, coyotes, ringtails, and your cat or dog are just a few. Where do rats have babies? Their nests are usually built in crevices, in rotting trees or in buildings. Rats, generally, are baby-making machines. Female rats can mate around 500 times in a six-hour period and brown rats can produce up to 2,000 offspring in a year, according to Discover Magazine. Do spiders fart? Since the stercoral sac contains bacteria, which help break down the spider's food, it seems likely that gas is produced during this process, and therefore there is certainly the possibility that spiders do fart. What animal cannot fart? Here's a mind-boggling fact: almost all mammals fart, yet the sloth does not. I learned this because I read Does It Fart? A Definitive Field Guide to Animal Flatulence, which was published in April. Do kangaroos have two legs or four? Watch a kangaroo in the Australian outback, and you'll notice something strange: when they walk, they have five "legs." As they graze on grasses and shrubs, they place their tails on the ground in time with their front legs, forming a tripod-like arrangement that supports their body while they bring their hind legs … Do kangaroos poop in the pouch? Baby kangaroos do poop in the pouch. They also pee in the pouch because they cannot go anywhere else in the first few months of their lives. When young kangaroos are a few months old, they begin to leave the pouch from time to time. What kind of kangaroos live in Australia? There are four species of kangaroos: the red kangaroo, eastern grey kangaroo, western grey kangaroo, and the antilopine kangaroo. These species are spread across Australia, and though they may be found living together in captivity, you're very unlikely to see them mixing in the wild. Are kangaroos in Australia? Kangaroos are indigenous to Australia (seriously, this should be higher on your list of things to see than a beach!). Kangaroos are strict herbivores; however, they hardly release any methane, unlike most cattle. Are kangaroos in Africa? No. Kangaroos aren't native to Africa. Kangaroos and wallabies are a type of marsupial called a macropod. Macropods only exist in Australia, New Guinea, and a few nearby islands. Are there kangaroos in America? As unlikely as it is, the simplest explanation would be that there is an unknown kangaroo population in America. All species of kangaroos are herbivores, and even in their native Australia, they are found living in habitats ranging from forests to grasslands. They can even weather colder temperatures.
https://www.dfoffer.com/where-do-kangaroos-live-in-australia-map.html
Professional Affiliations and Memberships The Umbra Institute of Arcadia University is committed to providing students with a safe and rewarding educational experience of exceptional quality both in and out of the classroom, supported by a wide variety of cultural immersion and exchange opportunities. Our partner institutions and affiliates share our mission and work closely with the Umbra Institute to foster and enhance intellectual and intercultural exchange through various activities, lectures, seminars, publications, demonstration projects, and public dissemination of educational achievements. Arcadia University – Glenside, PA The Umbra Institute was created by the Arcadia University College of Global Studies program and is accredited by the Middle States Association of Schools and Colleges, and is nationally recognized as a leader in international education. Arcadia University also maintains a National Advisory Board (The Guild) of experienced professionals in academia who provide expert oversight and guidance for the program. Arcadia University Perugia is a full AACUPI member and registered with the Italian Ministry of Education and recognized as an American institution of higher learning in Italy. Università degli Studi di Perugia (University of Perugia) – Perugia, Italy The Università degli Studi di Perugia is the third oldest Italian university, formally established in 1304. The Umbra Institute is an approved provider of courses for the Università degli Studi di Perugia, and select courses at the Umbra Institute are valid towards the degree Laurea or Laurea Breve at the Università degli Studi di Perugia. Università per Stranieri di Perugia (University for Foreigners) – Perugia, Italy The Università per Stranieri di Perugia is Italy's oldest and most prestigious institution for Italian language and culture. The Umbra Institute and the Università per Stranieri di Perugia collaborate to provide students with a unique and unparalleled Direct Enrollment program along with the Università degli Studi di Perugia. Accademia di Belle Arti Pietro Vannucci (Academy of Fine Arts Pietro Vannucci) – Perugia, Italy The Accademia di Belle Arti was founded in 1573 and offers students the chance to immerse themselves in a prestigious environment where they focus on practical and theoretical courses in the arts. Courses are offered through Umbra's Direct Enrollment program and held at the Accademia facility, housed in the beautiful former convent of San Francesco al Prato, where they are taught entirely in Italian. The Center for International Studies (CIS) – Northampton, MA CIS is a full-service program provider, offering more than 20 study abroad destinations worldwide including academic internships, volunteer, and degree-seeking programs. Coupled with years of experience in international education, CIS maintains an advisory board of experienced academics and professionals to ensure service quality and value in all CIS programs. Other U.S. Colleges and Universities The Umbra Institute also offers special academic programs in cooperation with a wide array of U.S. colleges and universities who collaborate through special cooperatives. MEMBERSHIPS The Umbra Institute is a proud member of the following organizations: COMMUNITY RELATIONS Comune di Perugia and Regione Umbria – Perugia, Italy Our commitment to community engagement has fostered strong links with the Comune di Perugia and the Regione Umbria.
The Umbra Institute and the local authorities co-sponsor service learning projects, volunteer programs, social and cultural activities, and other initiatives that bring the community, local Italian students, and the Umbra Institute students and faculty together. The Umbra Institute also maintains formal relations with the following organizations:
https://www.umbra.org/about/professional-affiliations/
Electronegativity, symbol χ, is a chemical property that describes the tendency of an atom to attract electrons (or electron density) towards itself. An atom's electronegativity is affected by both its atomic number and the distance at which its valence electrons reside from the charged nucleus. The higher the associated electronegativity number, the more an element or compound attracts electrons towards it. The term "electronegativity" was introduced by Jöns Jacob Berzelius in 1811, though the concept was known even before that and was studied by many chemists including Avogadro. In spite of its long history, an accurate scale of electronegativity was not developed until 1932, when Linus Pauling proposed an electronegativity scale, which depends on bond energies, as a development of valence bond theory. It has been shown to correlate with a number of other chemical properties. Electronegativity cannot be directly measured and must be calculated from other atomic or molecular properties. Several methods of calculation have been proposed, and although there may be small differences in the numerical values of the electronegativity, all methods show the same periodic trends between elements. The most commonly used method of calculation is that originally proposed by Linus Pauling. This gives a dimensionless quantity, commonly referred to as the Pauling scale (χr), on a relative scale running from around 0.7 to 3.98 (hydrogen = 2.20). When other methods of calculation are used, it is conventional (although not obligatory) to quote the results on a scale that covers the same range of numerical values: this is known as an electronegativity in Pauling units. As it is usually calculated, electronegativity is not a property of an atom alone, but rather a property of an atom in a molecule. Properties of a free atom include ionization energy and electron affinity. It is to be expected that the electronegativity of an element will vary with its chemical environment, but it is usually considered to be a transferable property, that is to say that similar values will be valid in a variety of situations. On the most basic level, electronegativity is determined by factors like the nuclear charge (the more protons an atom has, the more "pull" it will have on electrons) and the number/location of other electrons present in the atomic shells (the more electrons an atom has, the farther from the nucleus the valence electrons will be, and as a result the less positive charge they will experience—both because of their increased distance from the nucleus, and because the other electrons in the lower energy core orbitals will act to shield the valence electrons from the positively charged nucleus). The opposite of electronegativity is electropositivity: a measure of an element's ability to donate electrons. Caesium is the least electronegative element in the periodic table (χ = 0.79), while fluorine is the most electronegative (χ = 3.98). Francium and caesium were originally both assigned 0.7; caesium's value was later refined to 0.79, but no experimental data allows a similar refinement for francium. However, francium's ionization energy is known to be slightly higher than caesium's, in accordance with the relativistic stabilization of the 7s orbital, and this in turn implies that francium is in fact more electronegative than caesium.
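To make the Pauling construction concrete, here is a minimal sketch, assuming the arithmetic-mean form of Pauling's thermochemical relation and rounded textbook bond dissociation energies (values in kJ/mol are approximate illustrations, not authoritative reference data).

```python
import math

# Minimal sketch of Pauling's thermochemical electronegativity difference.
# Assumptions: arithmetic-mean form of the relation and approximate textbook
# bond dissociation energies in kJ/mol (H-H ~436, Cl-Cl ~242, H-Cl ~431).
EV_PER_KJ_MOL = 1.0 / 96.485  # conversion factor: 96.485 kJ/mol per eV

def pauling_difference(e_ab, e_aa, e_bb):
    """Return |chi_A - chi_B| from bond dissociation energies in kJ/mol."""
    extra_ionic_energy = e_ab - (e_aa + e_bb) / 2.0  # "excess" strength of the A-B bond
    return math.sqrt(extra_ionic_energy * EV_PER_KJ_MOL)

# Worked example for HCl:
print(round(pauling_difference(431, 436, 242), 2))
# ~0.98, close to the tabulated difference chi(Cl) - chi(H) = 3.16 - 2.20 = 0.96
```

The square root of the "extra ionic" bond energy, expressed in electronvolts, is what makes the resulting quantity dimensionless and anchors the familiar 0.7–3.98 range described above.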
http://en.termwiki.com/EN/Electronegativity
Surveillance of epidemic outbreaks and spread from social media is an important tool for governments and public health authorities. Machine learning techniques for nowcasting the Flu have made significant inroads into correlating social media trends to case counts and prevalence of epidemics in a population. There is a disconnect between data-driven methods for forecasting Flu incidence and epidemiological models that adopt a state-based understanding of transitions, which can lead to sub-optimal predictions. Furthermore, models of epidemiological activity and of social activity (such as on Twitter) predict different shapes and have important differences. In this paper, we propose two temporal topic models (one unsupervised model as well as one improved weakly-supervised model) to capture hidden states of a user from his tweets and aggregate states in a geographical region for better estimation of trends. We show that our approaches help fill the gap between phenomenological methods for disease surveillance and epidemiological models. We validate our approaches by modeling the Flu using Twitter in multiple countries of South America. We demonstrate that our models can consistently outperform plain vocabulary assessment in Flu case-count predictions, and at the same time get better Flu-peak predictions than competitors. We also show that our fine-grained modeling can reconcile some contrasting behaviors between epidemiological and social models. Publication Details - Date of publication: March 22, 2016 - Journal: Springer Data Mining and Knowledge Discovery - Page number(s): 681-710 - Volume: 30
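As a rough illustration of the aggregation step the abstract describes, the sketch below rolls per-user infection-state probabilities up into a regional, weekly case-count proxy. This is a minimal sketch under stated assumptions: the (region, week, probability) tuples stand in for the output of some upstream temporal topic model and are hypothetical; it is not the authors' implementation or data.

```python
from collections import defaultdict

def regional_trend(user_states):
    """Aggregate per-user P(infected) into an expected case count per (region, week).

    user_states: iterable of (region, week, p_infected) tuples, one per user,
    assumed here to come from an upstream state-inference step (hypothetical).
    """
    totals = defaultdict(float)
    for region, week, p_infected in user_states:
        totals[(region, week)] += p_infected  # expected number of infected users
    return dict(totals)

# Toy usage: three users observed in the same week across two regions.
print(regional_trend([("region-A", 12, 0.8), ("region-A", 12, 0.1), ("region-B", 12, 0.6)]))
# {('region-A', 12): 0.9..., ('region-B', 12): 0.6}
```

Summing probabilities rather than hard labels is one simple way to obtain a smooth regional trend that can then be compared against official case counts.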
https://sanghani.cs.vt.edu/research-publication/syndromic-surveillance-flu-twitter-using-weakly-supervised-temporal-topic-models-2/
Experiences of loneliness: a study protocol for a systematic review and thematic synthesis of qualitative literature. Systematic Reviews, volume 9, Article number: 284 (2020). Abstract Background Loneliness is a highly prevalent, harmful, and aversive experience which is fundamentally subjective: social isolation alone cannot account for loneliness, and people can experience loneliness even with ample social connections. A number of studies have qualitatively explored experiences of loneliness; however, the research lacks a comprehensive overview of these experiences. We present a protocol for a study that will, for the first time, systematically review and synthesise the qualitative literature on experiences of loneliness in people of all ages from the general, non-clinical population. The aim is to offer a fine-grained look at experiences of loneliness across the lifespan. Methods We will search multiple electronic databases from their inception onwards: PsycINFO, MEDLINE, Scopus, Child Development & Adolescent Studies, Sociological Abstracts, International Bibliography of the Social Sciences, CINAHL, and the Education Resource Information Center. Sources of grey literature will also be searched. We will include empirical studies published in English including any qualitative study design (e.g. interview, focus group). Studies should focus on individuals from non-clinical populations of any age who describe experiences of loneliness. All citations, abstracts, and full-text articles will be screened by one author with a second author ensuring consistency regarding inclusion. Potential conflicts will be resolved through discussion. Thematic synthesis will be used to synthesise this literature, and study quality will be assessed using the Joanna Briggs Institute Critical Appraisal Checklist for Qualitative Research. The planned review will be reported according to the Enhancing Transparency in Reporting the Synthesis of Qualitative Research (ENTREQ) statement. Discussion The growing body of research on loneliness predictors, outcomes, and interventions must be grounded in an understanding of the lived experience of loneliness. This systematic review and thematic synthesis will clarify how loneliness is subjectively experienced across the lifespan in the general population. This will allow for a more holistic understanding of the lived experience of loneliness which can inform clinicians, researchers, and policymakers working in this important area. Systematic review registration PROSPERO CRD42020178105. Background Loneliness has become the focus of a wealth of research in recent years. This attention is well placed given that loneliness has been designated as a significant public health issue in the UK and is associated with poor physical and mental health outcomes [2,3,4,5] and an increase in risk of death similar to that of smoking. In light of this, it is concerning that recent research has found that loneliness is highly prevalent across age groups, with young people (under 25 years) and older adults (over 65 years) indicating the highest levels [7, 8]. Whilst an ever-increasing body of research is situating loneliness at its centre, there is relatively little work which focuses on the lived experience of loneliness: how loneliness feels and what makes up experiences of loneliness. Phenomena that might appear to describe loneliness, such as social isolation, are distinct from the actual experience of it.
Whilst loneliness is generally characterised as the distress one experiences when they perceive their social connections to be lacking in number or quality, social isolation is the objective limitation or absence of connections. Social isolation does not necessarily beget loneliness, and indeed, Hawkley and Cacioppo remark on how humans can perceive meaningful social relationships where none objectively exist, such as with God, or where reciprocity is not possible, such as with fictional characters. Whilst associations between aloneness and loneliness have been richly demonstrated [10, 11], other research has found moderate and low correlations between social isolation and loneliness [12, 13]. These findings underline the need to better understand what makes up the subjective experience of loneliness, given that it is clearly not sufficiently captured by the objective experience of being alone. Given the subjective nature of the phenomenon, qualitative methods are particularly suited to research into experiences of loneliness, as they can aim to capture the idiosyncrasies of these experiences. A number of qualitative studies of loneliness experiences have been carried out. In perhaps the largest study of its type, Rokach analysed written accounts of 526 adults' loneliest experience, specifically asking about their thoughts, feelings, and coping strategies. This generated a model with four major elements (self-alienation, interpersonal isolation, distressed reactions, and agony) and twenty-three components such as emptiness, numbness, and missing a specific person or relationship. Although this study offered impressive scale, the vast majority of participants were between 19 and 45 years old, and as a result, the model may underestimate factors experienced across the lifespan. The findings might be usefully integrated with more recent research which qualitatively explores loneliness in other age groups (e.g. ). Harmonising this research by looking closely at how people describe their experiences of loneliness and working from the bottom-up to create a fine-grained view of what makes up these experiences will provide a more holistic understanding of loneliness and how it might best be defined and ameliorated. There are a number of available definitions of loneliness offered by researchers. The widely accepted description from Perlman and Peplau, for example, states that loneliness is an unpleasant and distressing subjective phenomenon arising when one's desired level of social relations differs from their actual level. However, research lacks an overarching subjective perspective, by which we mean a description of loneliness which is grounded in accounts of people's lived experiences. This is a significant gap in the field given that loneliness is, by its nature, a subjective experience. Unlike objective phenomena like blood pressure or age, loneliness can only be definitively measured by asking a person whether they feel lonely. Weiss argued that whilst available definitions of loneliness may be helpful, they do not sufficiently reflect the real phenomenon of loneliness because they define it in terms of its potential causes rather than the actual experience of being lonely. As such, studies which begin from definitions of loneliness like these may obscure the ways in which it is actually experienced and fail to capture the components and idiosyncrasies of these experiences.
A recent systematic review report has explored the conceptualisations of loneliness employed in qualitative research, finding that loneliness tended to be defined as social, emotional, or existential types. However, the review covered only studies of adults (16 years and up), including heterogeneous clinical populations (e.g. people receiving cancer treatment, people living with specific mental health conditions, and people on long-term sick leave), and placed central importance on the concepts, models, theories, and frameworks of loneliness utilised in research. Studies which did not employ an identified concept, model, framework, or theory of loneliness were excluded. Moreover, rather than synthesising how people describe their loneliness, the authors aimed to assess how research conceptualises loneliness across the adult life course. This leaves a gap with respect to how research participants specifically describe their lived experiences of loneliness, rather than how researchers might conceptualise it. Achterbergh and colleagues recently conducted a meta-synthesis of qualitative studies on experiences of loneliness in young people with depression. As the findings are specific to experiences in this population, they may not reflect those of wider age groups or individuals who do not have depression. Kitzmüller and colleagues used meta-ethnography to synthesise studies regarding experiences and ways of dealing with loneliness in older adults (60 years and older). However, they synthesised only articles from health care disciplines published in scientific journals from 2001 to 2016 and included studies on clinical populations, such as older women with multiple chronic conditions. Moreover, there has been an increase in research output regarding loneliness in recent years, and relevant studies may have been published since this review was conducted (e.g. ). To the authors' knowledge, the systematic review report on conceptual frameworks used in loneliness research, the meta-synthesis of loneliness in young people with depression, and the meta-ethnography of older adults' loneliness are the only such systematic reviews of qualitative literature regarding experiences of loneliness to date. The current systematic review will instead take a bottom-up approach which focuses on non-clinical populations of all ages to synthesise findings on participants' experiences of loneliness, rather than the conceptualisations that might be imposed by study authors. This will fill a gap in the literature by synthesising the qualitative evidence focusing on experiences of loneliness across the lifespan. This inductive synthesis of the available subjective descriptions of loneliness will offer a nuanced view of loneliness experiences. It is imperative for research and practice that we deepen the current understanding of these experiences to inform how we approach describing, researching, and attempting to ameliorate loneliness. Aims The proposed research aims to offer a holistic view of the experience of loneliness across the lifespan through a systematic review and thematic synthesis of the qualitative literature focusing on these experiences. To address this aim, there is one central research question: How do people describe their experiences of loneliness? This research question concerns aspects of loneliness which participants discuss when describing their lived experiences.
Whilst we expect that this would concern emotional, social, and cognitive components of the experience, we understand that these findings may also come to reflect perceived causes or effects of loneliness. This review will also consider the age groups that have been studied and how experiences of loneliness might vary across the different age groups examined in this literature. Loneliness research is often weighted towards investigations of older adults, despite the fact that the prevalence of loneliness is high across the lifespan; recent UK research found a prevalence of 40% in 16- to 24-year-olds and 27% in people over 75. This review will also shed light on the age groups that have been included in qualitative research on loneliness experiences. In doing so, this research may identify age groups which have been understudied and may be underrepresented in this field of research, potentially pointing to life stages where experiences of loneliness might be usefully explored in more detail in the future. Furthermore, given the relatively small number of qualitative studies into the experience of loneliness compared with quantitative research in this area, this review will also consider the reasons that study authors may offer for the relative shortage of qualitative work. This is an important point given that the review will inherently be constrained by the number of studies that exist and the focus that has primarily been given to quantitative loneliness research thus far. Methods Protocol registration and reporting The review protocol has been registered within the International Prospective Register of Systematic Reviews (PROSPERO) database from the University of York (registration number: CRD42020178105). This review protocol is being reported in accordance with the reporting guidance provided in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA-P) statement (see checklist in Additional file 1). The proposed systematic review will be reported in accordance with the reporting guidance provided in the Enhancing Transparency in Reporting the Synthesis of Qualitative Research (ENTREQ) statement. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement will inform the process of completing and reporting this planned review. Eligibility criteria Due to its suitability for qualitative evidence synthesis, the SPIDER tool was used to assist in defining the research question and eligibility criteria in line with the following criteria: Sample, Phenomenon of Interest, Design, Evaluation, Research type (see Table 1 for details of these criteria). The exclusion criteria are as follows: - 1 Studies not meeting the inclusion criteria described in Table 1 - 2 Studies not published in English - 3 Studies with no qualitative component - 4 Studies of clinical populations - 5 Studies which report solely on objective phenomena such as social isolation rather than the subjectively perceived experience of loneliness - 6 Studies in which the primary focus or one of the primary focuses is not experiences of loneliness Papers will be deemed to focus sufficiently on experiences of loneliness if studying these experiences is a key aspect of the work rather than simply a part of the output. Accordingly, studies will only be included if authors state a relevant aim, objective, or research question related to investigating experiences of loneliness (i.e.
to study experiences of loneliness) or if loneliness experiences are clearly explored and described (e.g. relevant questions are present in an appended interview guide). At the title and abstract screening stage, at least one relevant sentence or information that indicates likely relevance must be present for inclusion. The decision to exclude articles which do not primarily or equally focus on these experiences was made in order to gather meaningful data about loneliness experiences specifically and to capture experiences identified as loneliness by participants as much as possible, rather than related phenomena which may be grouped and labelled retrospectively as loneliness by researchers. Information sources and search strategy The primary source of literature will be a structured search of multiple electronic databases (from inception onwards): PsycINFO, MEDLINE, Scopus, Child Development & Adolescent Studies, Sociological Abstracts, International Bibliography of the Social Sciences (IBSS), CINAHL, and the Education Resource Information Center (ERIC). The secondary source of potentially relevant material will be a search of the grey or difficult-to-locate literature using Google Scholar. In line with the guidance from Haddaway and colleagues on using Google Scholar for systematic review, a title-only search using the same search terms will be conducted and the first 1000 results will be screened for eligibility. These searches will be supplemented with hand-searching in reference lists, such that the titles of all articles cited within eligible studies will be checked. When eligibility is unclear from the title, abstracts and full-texts will be checked until eligibility or ineligibility can be ascertained. This process will be repeated with any articles that are found to be eligible at this stage until no new eligible articles are found. Systematic reviews on similar topics will also be searched for potentially eligible studies. Grey literature will be located through searches of Google Scholar, opengrey.eu, ProQuest Dissertations and Theses, and websites of specific loneliness organisations such as the Campaign to End Loneliness, managed in collaboration with an information specialist. Efforts will be made to contact authors of completed, ongoing, and in-press studies for information regarding additional studies or relevant material. The search strategy for our primary database (MEDLINE) was developed in collaboration with an information specialist. In collaboration with a specialist, the strategy will be translated for all of the databases. The search strategy has been peer reviewed using the Peer Review of Electronic Search Strategies (PRESS) checklist. Strategies will utilise keywords for loneliness and qualitative studies. A draft search strategy for MEDLINE is provided in Additional file 2. Qualitative search terms were supplemented with relevant and useful subject headings and free-text terms from the Pearl Harvesting Search Framework synonym ring for qualitative research. The inclusion of search terms related to social isolation specifically and related terms (e.g. "social engagement") was considered and tested extensively through scoping searches and discussion with an information specialist. Adding these terms (and others such as "Patient Isolation" and "Quarantine") did not appear to add unique papers that would be included above and beyond subject heading and free-text searching for "Loneliness".
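To illustrate the general structure of such a strategy, the sketch below combines a block of loneliness keywords with a block of qualitative-study keywords into a single boolean query. The term lists and helper are hypothetical placeholders for explanation only; they are not the authors' MEDLINE strategy, which is provided in Additional file 2.

```python
# Hypothetical sketch: composing a loneliness block and a qualitative-research
# block into one boolean query string of the kind used in bibliographic databases.
# Term lists are illustrative placeholders, not the strategy from Additional file 2.
loneliness_terms = ['loneliness', 'lonely', '"perceived social isolation"']
qualitative_terms = ['qualitative', 'interview*', '"focus group*"', '"thematic analysis"']

def or_block(terms):
    # Parenthesised OR-block, the usual building unit of a database search line.
    return "(" + " OR ".join(terms) + ")"

query = or_block(loneliness_terms) + " AND " + or_block(qualitative_terms)
print(query)
# (loneliness OR lonely OR "perceived social isolation") AND (qualitative OR interview* OR ...)
```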
Given the aim to include studies focused on experiences of loneliness specifically, this search strategy was deemed most appropriate. A similar strategy has been employed in other recent systematic review work focusing on loneliness (e.g. [28, 29]). Moreover, test searches employing the search strategy retrieved all seven informally identified likely eligible articles indexed in Scopus, indicating good sensitivity of the strategy. A free-text search to capture "perceived social isolation" was included as this specific term is used by some authors as a direct synonym for loneliness. The completed PRESS checklist is provided in Additional file 3. Data collection and analysis Study selection Firstly, the main review author (PMP) will perform the database search and hand-searching and will screen all titles to remove studies which are clearly not relevant. PMP will also undertake abstract screening to exclude any which are found to be irrelevant or inapplicable to the inclusion criteria. A second author (JG) will independently screen 50% of the titles and abstracts. Finally, full-text versions of the remaining articles will be read by PMP to assess whether they are suitable for inclusion in the final review. JG will independently review 50% of these full texts. In cases of disagreement, the two reviewers will discuss the study to reach a decision about inclusion or exclusion. In case agreement cannot be reached after discussion between the two reviewers, a third reviewer will be invited to reconcile their disagreement and make a final decision. The reason for the exclusion at the full-text stage will be recorded. After this screening process, the remaining articles will be included in the review following data extraction, quality appraisal, and analysis. The PRISMA statement will be followed to create a flowchart of the number of studies included and excluded at each stage of this process. Data management The articles to be screened will be managed in EndNote X9, with subsequent EndNote databases used to manage each stage of the screening process. Data extraction Data will be extracted from the studies by PMP using a purpose-designed and piloted Microsoft Excel form. Information on author, publication year, geographic location of study, methodological approach, method, population, participant demographics, and main findings will be extracted to understand the basis of each study. JG will check this extracted data for accuracy. For the thematic synthesis, in line with Thomas and Harden, all text labelled as "results" or "findings" will be extracted and entered into the NVivo software for analysis. This will be done because many factors, including varied reporting styles and misrepresentation of data as findings, can make it difficult to identify the findings in qualitative research; accordingly, a wide-ranging approach will be used to capture as much relevant data as possible from each included article. The aim is to extract all data in which experiences of loneliness are described. Quality appraisal Quality of the included articles will be assessed using the Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Qualitative Research. This quality will be considered during the development of the data synthesis. Different authors hold different viewpoints about inclusion versus exclusion of low-quality studies.
However, given that they may still add important, authentic accounts of phenomena that have simply been reported inadequately, it is common to include lower-quality studies and consider quality during the synthesis process rather than excluding on the basis of it. Accordingly, this approach will be used in the present research. Data synthesis There are various accepted approaches to reviewing and synthesising qualitative research, including meta-ethnography, meta-synthesis, and narrative synthesis. The current systematic review will utilise thematic synthesis as a methodology to create an overarching understanding of the experiences of loneliness described across studies. In thematic synthesis, descriptive themes which remain close to the primary studies are developed. Next, a new stage of analytical theme development is undertaken wherein the reviewer "goes beyond" the interpretations of the primary studies and develops higher-order constructs or explanations based on these descriptive themes. The process of thematic synthesis for reviewing is similar to that of grounded theory for primary data, in that a translation and interpretative account of the phenomena of interest is produced. Thematic synthesis has been used to synthesise research on the experience of fatigue in neurological patients with multiple sclerosis, children's experiences of living with juvenile idiopathic arthritis, and parents' experiences of parenting a child with chronic illness. This use of thematic synthesis to consider subjective experiences (rather than, for example, attitudes or motivations) melds well with the present research, which also sets its focus on a subjective experience. As well as its successful application in similar systematic reviews, thematic synthesis was selected based on its appropriateness to the research question, time frame, resources, expertise, purpose, and potential type of data in line with the RETREAT framework for selecting an approach to qualitative evidence synthesis. The RETREAT framework considers thematic synthesis to be appropriate for relatively rapid approaches which can be sustained by researchers with primary qualitative experience, unlike approaches such as meta-ethnography in which a researcher with specific familiarity with the method is needed. This is appropriate to the project time frame and background of this research team. The Joanna Briggs Institute Reviewer's Manual also notes that thematic synthesis is useful when considering shared elements across studies which are otherwise heterogeneous, which is likely to be the case in this review given that the common factor (experiences of loneliness) may be present across studies with otherwise diverse populations and methodologies. Guidance from Thomas and Harden will be followed to synthesise the data. Firstly, the extracted text will be inductively coded line-by-line according to content and meaning. This inductive creation of codes should allow the content and meaning of each sentence to be captured. Multiple codes may be applied to the same sentence, and codes may be "free" or structured in a tree formation at this stage. Before moving forward, all text referred to by each code will be rechecked to ensure consistency in what is considered a single code or whether more levels of coding are required. After this stage, similarities and differences between the codes will be examined, and they will begin to be organised into a hierarchy of groups of codes.
New codes will be applied to these groups to describe their overall meaning. This will create a tree structure of descriptive themes which should not deviate largely from the original study findings; rather, findings will have been integrated into an organised whole. At this stage, the synthesis should remain close to the findings of the included studies. At the final stage of analysis, higher-order analytical themes may be inferred from the descriptive themes which will offer a theoretical structure for experiences of loneliness. This inferential process will be carried out through collaboration between the research team (primarily PMP and JG). Sensitivity analysis After the synthesis is complete, a sensitivity analysis will be undertaken in which any low-quality studies (as identified through the JBI checklist) are excluded from the analysis to assess whether the synthesis is altered when these studies are removed, in terms of any themes being lost entirely or becoming less rich or thick. Sensitivity analysis will also be used to assess whether any age group is entirely responsible for a given theme. In this way, the robustness of the synthesis can be appraised and the individual findings can remain grounded in their context whilst also extending into a broader understanding of the experiences of loneliness. Risk of bias in individual studies Risk of bias in individual studies will be taken into account through utilisation of the JBI checklist, which includes ten questions to assess whether a study is adequately conceptualised and reported. PMP will use the checklist to assess the quality of each study. Whilst all eligible studies will be included in the synthesis (as described in the "Quality appraisal" section), any lower-quality studies will be excluded during post-synthesis sensitivity analysis in order to assess whether their inclusion has affected the synthesis in any way as suggested by Carroll and Booth. Confidence in cumulative evidence The Grading of Recommendations, Assessment, Development and Evaluation – Confidence in the Evidence from Reviews of Qualitative Research (GRADE-CERQual) approach [44, 45] will be used to assess how much confidence can be placed in the findings of this qualitative evidence synthesis. This will allow a transparent, systematic appraisal of confidence in the findings for researchers, clinicians, and other decision-makers who may utilise the evidence from the planned systematic review. GRADE-CERQual involves assessment in four domains: (1) methodological limitations, (2) coherence, (3) adequacy of data, and (4) relevance. There is also an overall rating of confidence: high, moderate, low, or very low. These findings will be displayed in a Summary of Qualitative Findings table including a summary of each finding, confidence in that finding, and an explanation for the rating. Assessments for each finding will be made through discussion between PMP and JG. Discussion The proposed systematic review will contribute to our knowledge of loneliness by clarifying how it is subjectively experienced across the lifespan. Synthesising the qualitative literature focusing on experiences of loneliness in the general population will offer a fine-grained, subjectively derived understanding of the components of this phenomenon which closely reflects the original descriptions provided by those who have experienced it. By including non-clinical populations of all ages, this research will provide an essential view of loneliness experiences across different life stages.
This can be used to inform future research into correlates, consequences, and interventions for loneliness. The use of thematic synthesis will enable us to remain close to the data, offering an account which might also be useful for policy and practice in this area. There are a number of limitations to the planned research. Primarily, this review will be unable to capture aspects of loneliness experiences which have not been described in the qualitative literature, for example, due to the sensitivity of the topic, given that loneliness can be stigmatising, or aspects that are specific to a given unstudied population. Moreover, by focusing on lifespan non-clinical research, we aim to offer a general synthesis which can in future be informed by insights from clinical groups, rather than subsuming and potentially obscuring the aspects of loneliness which might be unique to them. Whilst primary empirical studies are not themselves extensive sources, with books in particular often offering rich descriptions of loneliness (see, e.g. [11, 46]), this research will focus on primary empirical studies of subjective descriptions to offer a manageable level of scope and rigour. As with any systematic review, some studies may also be missing information which would inform the synthesis. Quality appraisal and sensitivity analysis will aim to capture and potentially control for this issue, but it will ultimately be difficult to ascertain how missing information might affect the synthesis. By providing a thorough overview of how loneliness is experienced, we expect that the findings from the planned review will be informative and useful for researchers, policymakers, and clinicians who work with and for people experiencing loneliness, as well as for these individuals themselves, to better understand this important, prevalent, and often misunderstood phenomenon. Mansfield et al. have offered an illuminating systematic review covering the conceptual frameworks and models of loneliness included in the existing evidence base (i.e. social, emotional, and existential loneliness). This review will build upon this work by including research with children and adolescents and taking a bottom-up approach similar to grounded theory where the synthesis will remain close to the participants’ subjective descriptions of loneliness experiences within the included studies, rather than reflecting pre-existing themes in the evidence base. As such, this systematic review will offer specific insights into lifespan experiences of loneliness. This synthesis of lived experiences will shed light on the nuances of loneliness which existing definitions and typologies might overlook. It will offer an experience-focused overview of loneliness for people studying and developing measures of this phenomenon. In focusing on qualitative work, the planned review may also identify processes relevant to loneliness which are not expressed by statistical models. In this way, it may also provide a starting point for more nuanced qualitative work with specific populations and circumstances to ascertain components which may be characteristic of certain experiences. Availability of data and materials Not applicable. 
Abbreviations - ENTREQ: Enhancing Transparency in Reporting the Synthesis of Qualitative Research - ERIC: Education Resource Information Center - GRADE-CERQual: Grading of Recommendations, Assessment, Development and Evaluation – Confidence in the Evidence from Reviews of Qualitative Research - IBSS: International Bibliography of the Social Sciences - JBI: Joanna Briggs Institute - JG: Jenny Groarke - KY: Keming Yang - PMP: Phoebe McKenna-Plumley - PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses - PRISMA-P: Preferred Reporting Items for Systematic Reviews and Meta-Analyses – Protocol - PROSPERO: International Prospective Register of Systematic Reviews - RETREAT: Review question – Epistemology – Time/Timescale – Resources – Expertise – Audience and purpose – Type of data - RT: Rhiannon Turner - SPIDER: Sample, Phenomenon of Interest, Design, Evaluation, Research type References - 1. Department for Digital, Culture, Media and Sport. A connected society: a strategy for tackling loneliness – laying the foundations for change. London; 2018. - 2. Cacioppo JT, Hawkley LC, Thisted RA. Perceived social isolation makes me sad: 5-year cross-lagged analyses of loneliness and depressive symptomatology in the Chicago Health, Aging, and Social Relations Study. Psychol Aging. 2010;25(2):453–63. - 3. Hawkley LC, Cacioppo JT. Loneliness matters: a theoretical and empirical review of consequences and mechanisms. Ann Behav Med. 2010;40(2):218–27. - 4. Lim MH, Rodebaugh TL, Zyphur MJ, Gleeson JF. Loneliness over time: the crucial role of social anxiety. J Abnorm Psychol. 2016;125(5):620–30. - 5. Qualter P, Brown SL, Rotenberg KJ, Vanhalst J, Harris RA, Goossens L, et al. Trajectories of loneliness during childhood and adolescence: predictors and health outcomes. J Adolesc. 2013;36(6):1283–93. - 6. Holt-Lunstad J, Smith TB, Layton JB. Social relationships and mortality risk: a meta-analytic review. PLoS Med. 2010;7(7):e1000316. - 7. Hammond C. The surprising truth about loneliness: BBC Future; 2018. Available from: http://www.bbc.com/future/story/20180928-the-surprising-truth-about-loneliness. - 8. Victor CR, Yang K. The prevalence of loneliness among adults: a case study of the United Kingdom. J Psychol. 2012;146(1-2):85–104. - 9. De Jong GJ, van Tilburg TG, Dykstra PA. Loneliness and social isolation. In: Perlman AVD, editor. The Cambridge handbook of personal relationships. 2nd ed. Cambridge: Cambridge University Press; 2016. p. 485–500. - 10. Savikko N, Routasalo P, Tilvis RS, Strandberg TE, Pitkälä KH. Predictors and subjective causes of loneliness in an aged population. Arch Gerontol Geriatr. 2005;41(3):223–33. - 11. Yang K. Loneliness: a social problem. New York: Routledge; 2019. - 12. Coyle CE, Dugan E. Social isolation, loneliness and health among older adults. J Aging Health. 2012;24(8):1346–63. - 13. Matthews T, Danese A, Wertz J, Odgers CL, Ambler A, Moffitt TE, et al. Social isolation, loneliness and depression in young adulthood: a behavioural genetic analysis. Soc Psychiatry Psychiatr Epidemiol. 2016;51(3):339–48. - 14. Rokach A. The experience of loneliness: a tri-level model. J Psychol. 1988;122(6):531. - 15. Ojembe BU, Ebe KM. Describing reasons for loneliness among older people in Nigeria. J Gerontol Soc Work. 2018;61(6):640–58. - 16. Perlman D, Peplau LA. Toward a social psychology of loneliness. In: Duck S, Gilmour R, editors. Personal relationships in disorder. London: Academic; 1981. p. 31–56. - 17. Weiss RS. 
Reflections on the present state of loneliness research. J Soc Behav Pers. 1987;2(2):271–6. - 18. Mansfield L, Daykin N, Meads C, Tomlinson A, Gray K, Lane J, Victor C. A conceptual review of loneliness across the adult life course (16+ years): synthesis of qualitative studies: What Works Wellbeing, London; 2019. - 19. Achterbergh L, Pitman A, Birken M, Pearce E, Sno H, Johnson S. The experience of loneliness among young people with depression: a qualitative meta-synthesis of the literature. BMC Psychiatry. 2020;20(1):415. - 20. Kitzmüller G, Clancy A, Vaismoradi M, Wegener C, Bondas T. “Trapped in an empty waiting room”—the existential human core of loneliness in old age: a meta-synthesis. Qual Health Res. 2017;28(2):213–30. - 21. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4(1):1. - 22. Tong A, Flemming K, McInnes E, Oliver S, Craig J. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol. 2012;12(1):181. - 23. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–9 w64. - 24. Cooke A, Smith D, Booth A. Beyond PICO: the SPIDER tool for qualitative evidence synthesis. Qual Health Res. 2012;22(10):1435–43. - 25. Haddaway NR, Collins AM, Coughlin D, Kirk S. The role of Google Scholar in evidence reviews and its applicability to grey literature searching. PLoS One. 2015;10(9):e0138237. - 26. McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS peer review of electronic search strategies: 2015 guideline statement. J Clin Epidemiol. 2016;75:40–6. - 27. Stansfield C. Qualitative research 2014. Available from: https://sites.google.com/view/pearl-harvesting-search/ph-synonym-rings/structure-or-study-design/qualitative-research. - 28. Kent-Marvick J, Simonsen S, Pentecost R, McFarland MM. Loneliness in pregnant and postpartum people and parents of children aged 5 years or younger: a scoping review protocol. Syst Rev. 2020;9(1):213–9. - 29. Michalska da Rocha B. Is there a relationship between loneliness and psychotic experiences? An empirical investigation and a meta-analysis [D.Clin. Psychol.]. Edinburgh: University of Edinburgh; 2016. - 30. Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. 2008;8(1):45. - 31. Sandelowski M, Barroso J. Finding the findings in qualitative studies. J Nurs Scholarsh. 2002;34(3):213–9. - 32. Lockwood C, Munn Z, Porritt K. Qualitative research synthesis: methodological guidance for systematic reviewers utilizing meta-aggregation. Int J Evid Based Healthc. 2015;13(3):179–87. - 33. Soilemezi D, Linceviciute S. Synthesizing qualitative research: reflections and lessons learnt by two new reviewers. Int J Qual Methods. 2018;17(1):1609406918768014. - 34. Noblit GW, Hare RD. Meta-ethnography: synthesizing qualitative studies (vol. 11). California: Sage; 1988. - 35. Lachal J, Revah-Levy A, Orri M, Moro MR. Metasynthesis: an original method to synthesize qualitative literature in psychiatry. Front Psychiatry. 2017;8:269. - 36. Popay J, Roberts HM, Sowden A, Petticrew M, Arai L, Rodgers M, Britten N. Guidance on the conduct of narrative synthesis in systematic reviews: a product from the ESRC Methods Programme; 2006. - 37. Newton G, Griffith A, Soundy A. 
The experience of fatigue in neurological patients with multiple sclerosis: a thematic synthesis. Physiotherapy. 2020;107:306–16. - 38. Tong A, Jones J, Craig JC, Singh-Grewal D. Children’s experiences of living with juvenile idiopathic arthritis: a thematic synthesis of qualitative studies. Arthritis Care Res (Hoboken). 2012;64(9):1392–404. - 39. Heath G, Farre A, Shaw K. Parenting a child with chronic illness as they transition into adulthood: a systematic review and thematic synthesis of parents’ experiences. Patient Educ Couns. 2017;100(1):76–92. - 40. Booth A, Noyes J, Flemming K, Gerhardus A, Wahlster P, van der Wilt GJ, et al. Structured methodology review identified seven (RETREAT) criteria for selecting qualitative evidence synthesis approaches. J Clin Epidemiol. 2018;99:41–52. - 41. Institute JB. JBI reviewer’s manual – 2.4: the JBI approach to qualitative synthesis. 2019 Available from: https://wiki.joannabriggs.org/display/MANUAL/2.4+The+JBI+Approach+to+qualitative+synthesis. - 42. Carroll C, Booth A, Lloyd-Jones M. Should we exclude inadequately reported studies from qualitative systematic reviews? An evaluation of sensitivity analyses in two case study reviews. Qual Health Res. 2012;22(10):1425–34. - 43. Carroll C, Booth A. Quality assessment of qualitative evidence for systematic review and synthesis: is it meaningful, and if so, how should it be performed? Res Synth Methods. 2015;6(2):149–54. - 44. Lewin S, Booth A, Glenton C, Munthe-Kaas H, Rashidian A, Wainwright M, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings: introduction to the series. Implement Sci. 2018;13(1):2. - 45. Lewin S, Glenton C, Munthe-Kaas H, Carlsen B, Colvin CJ, Gülmezoglu M, et al. Using qualitative evidence in decision making for health and social interventions: an approach to assess confidence in findings from qualitative evidence syntheses (GRADE-CERQual). PLoS Med. 2015;12(10):e1001895. - 46. Bound AF. A biography of loneliness: the history of an emotion. Oxford: Oxford University Press; 2019. Acknowledgements The authors would like to acknowledge and thank Ms. Norma Menabney and Ms. Carol Dunlop, subject librarians at the McClay Library, Queen’s University Belfast, for their advice and assistance with designing a search strategy for this review. The authors would also like to acknowledge and thank Dr. Ciara Keenan, a research fellow at Queen’s University Belfast and associate director of Cochrane Ireland, for her completion of the PRESS checklist and guidance regarding the search strategy and systematic review methodology. Funding PMP wishes to acknowledge the funding received from the Northern Ireland and North East Doctoral Training Partnership, funded by the Economic and Social Research Council with support from the Department for the Economy Northern Ireland. The funder did not play a role in the development of this protocol. Ethics declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests. Additional information Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Supplementary Information Additional file 1. PRISMA-P 2015 Checklist. Additional file 2. Search executed in MEDLINE ALL (OVID) 16/11/2020. Additional file 3. PRESS Guideline — Search Submission & Peer Review Assessment. 
Rights and permissions Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. About this article Cite this article McKenna-Plumley, P.E., Groarke, J.M., Turner, R.N. et al. Experiences of loneliness: a study protocol for a systematic review and thematic synthesis of qualitative literature. Syst Rev 9, 284 (2020). https://doi.org/10.1186/s13643-020-01544-x
https://systematicreviewsjournal.biomedcentral.com/articles/10.1186/s13643-020-01544-x
At Midtown International School, the Arts are recognized as an essential part of our everyday life. MIS offers a wide variety of arts classes to develop and nurture the students' unique gifts and talents. The students can create their own arts experience and learn through theatre, sculpture, drawing, photography, music, and film studies. The classes are enriched through carefully selected field trips to arts institutions such as the High Museum of Art or the Atlanta Symphony, or visits to local artists. All of our arts classes encourage and develop a growth mindset through a process-oriented creative curriculum and an emphasis on intentional thinking through design. The Arts at MIS promote empathy and enable students to collaborate, communicate, and cultivate a spirit of community in which learning can be fun.
MIS Theatre is devoted to building student confidence, knowledge, and growth. MIS Theatre designs intentional projects and performance opportunities that challenge students to explore, expand, create, and collaborate both on and off stage. Working within a nurturing and safe environment, students focus on producing quality performances through solid theatrical principles.
Public Speaking and Presentation is about creating tomorrow's leaders. PSP students are engaged with the tools and principles that are currently being used in today's thriving business, social media, visual, and audio platforms. "Don't You Dare Say UM" is an annual public speaking competition where students train and compete for first place. Speakers are judged on voice, diction, body/gesturing, and content relevance. They are given a topic and have one minute to successfully deliver a speech without saying "UM" or "UGH." In addition, there is a 4-second delay rule in place.
In the MIS Visual Arts studio, students will recognize themselves as creative and capable artists, and understand how visual arts has a place in culture and society. The MIS Visual Arts program allows students to develop their artistic skills through a challenging and engaging art curriculum. The MIS Visual Arts Department strives to give students the opportunity to develop their creative-thinking and problem-solving skills as they explore a variety of mediums and techniques and apply them to their own unique works of art. The MIS Visual Arts department believes that all students have a natural desire to create and visually express themselves, their ideas, and their curiosities, and therefore each student receives instruction and support that best benefits his or her needs. Student achievement is based on the individual progress of a student and their ability to work through challenges, towards the completion of their own artwork.
Kindergarten through 4th grade students will use their imagination and creative-thinking skills to generate ideas and express themselves through their artwork. Students will be given the opportunity to explore a variety of mediums and art techniques to strengthen their artistic skills while learning about other cultures and making real life connections. Students will brainstorm, visualize, and sketch on their own and with a group to develop their ideas from start to finish, and understand that art is a process. Throughout the school year, students will participate in art exercises and work on various works of art that focus on one or more of the Elements of Art and Principles of Design.
In middle school, students will examine their own art progress and use critical-thinking skills to improve their art techniques in various mediums. They will make a greater effort in creating meaningful works of art that capture their own thoughts and ideas on selected themes and issues. They will brainstorm, visualize, and sketch on their own and with a group to continually develop their ideas before working on a final piece. Throughout the year, they will discover their own style of art, while gaining an appreciation for art history. They will make intentional choices with one or more of the Elements of Art and Principles of Design, and practice giving and receiving constructive feedback while they discuss their own work and that of their peers.
In high school, students are given the opportunity to explore and focus on the following art disciplines: Sculpture, Digital Photography, World Dance, Playwriting, Animation, and Film & Media Studies.
Media is everywhere. For many of us, going a single day without interacting with or consuming some form of media is a rarity. Because we're surrounded by so much media so much of the time, it's easy to forget that media products and messages are deliberately constructed within a specific place and time, that each media form carries its own specific language, that making sense of these products and messages can be a complex process, and that the things we see, read, play, and hear in our various media have a real effect on how we make sense of our social reality. That's where media literacy comes in. Media literacy will help students actively approach the media they love with a more critical and thoughtful eye and resist the urge to simply become complacent consumers.
In Film Studies, students will explore the history, development, and aesthetics of film as a technology, a business, and an art form, and be sensitive to how the films we see today owe huge debts to films of the past. Of particular importance throughout the course will be the practice of questioning the very assumptions of what has driven historiographers to document certain occurrences in film history, sometimes to the detriment of others. Naturally, questions of race and gender play a big role in these lines of inquiry, and students will learn to respectfully engage with such questions in a safe environment dedicated to the thirst for knowledge.
Our teachers are busy making amazing things happen in their creative spaces.
Students at MIS dive immediately into music and get a hands-on learning experience. Through the integration of music, movement, and speech, the students build a foundational understanding of music and musicianship and instill lifelong learning with and through music. Students experience a broad variety of musical impressions, and learn about ways to musically express themselves through singing, dancing, playing instruments, listening to music, and music theory.
Band
Band is offered as an elective in middle and high school. Band provides the opportunity to develop, continue, and expand basic skills on wind and percussion instruments. Students of all proficiency levels collaboratively engage in learning and performance opportunities. Some of the goals are: introducing students to the fundamental skills of playing a musical instrument; developing and reinforcing the fundamentals of music theory skills; and providing students with ensemble and performance experiences. The students perform at various school concerts.
Chorus
The MIS Chorus is offered as an elective starting in fourth grade. Chorus emphasizes vocal development, music comprehension, and the study of a variety of choral repertoire. The students develop basic vocal techniques and skills including: awareness, good posture, tone quality, diction, and vocal health. The students also focus on understanding and practice of basic elements of music, including music reading skills. The students perform at various school concerts.
https://www.midtowninternationalschool.com/arts.html
Is QE losing its effectiveness? What does companies' monetary behaviour tell us?
QE is widely said to 'have lost its effectiveness'. As I explained in my weekly e-mail note of 23rd November, QE is to be understood as 'deliberate action by the state to increase the quantity of money', while the claim that it is ineffective (or less effective) is equivalent to the claim that increases in the quantity of money have no effect (or a diminishing effect) on the equilibrium level of national income. In my 23rd November note I reviewed the relationship between changes in the aggregate quantity of money (as measured by M4x) and changes in the UK's national income over the last 15 years. A key fact emerged: the increases in both the quantity of money and national income had been lower in the last five years than for many decades. The vital word is 'both'. Money and national income have moved together, bang in line with basic theory. On this basis claims of QE's ineffectiveness are bunkum.
This week's exercise is somewhat different. All agents have a desired level of money holdings (relative to income and the attractiveness of money in comparison with non-money assets), and national income and wealth are at their equilibrium levels only when agents' actual money holdings are equal to this desired level (i.e., in jargon, 'when monetary equilibrium prevails'). But in the real world agents' actual money holdings often differ from the desired levels, and transactions (indeed many rounds of transactions) are undertaken in order to move closer to equilibrium. The value of transactions in bank settlement systems – which nowadays is typically 50 times gross domestic product – includes transactions in capital assets. The main kinds of agent involved in these transactions include households, companies and financial institutions. The focus here is on the monetary behaviour of companies. I show that the ratio of money holdings to bank borrowings of British companies is much the same today as it was 50 years ago, and that corporate monetary behaviour in the recent cycle has been much the same as in other cycles. To repeat, the notion of QE's ineffectiveness is bunkum.
The 23rd November note included a few sentences on the monetary theory of national income determination. As explained there, agents are understood to have a demand to hold money balances, just as they have a demand for the services of other assets (such as those provided by the housing stock or transport equipment). A 'money demand function' can therefore be described, with the key explanatory variables being income and the so-called 'own return on money' (i.e., the attractiveness of money relative to other assets). Basic theory then says that, if the arguments in the money demand function other than income are assumed to take constant values, the standard money demand function has the property that money and agents' desired level of income rise or fall equi-proportionally. It follows that – if, for example, the quantity of money is doubled – agents' 'equilibrium' level of income (i.e., national income and expenditure in the aggregate) ought also to double. In other words, basic theory implies that policy measures directed to changing the quantity of money – such as QE, which is the deliberate creation of money by the state – are always effective. Further, the heart of the so-called 'transmission mechanism' of monetary policy lies in agents' adjustments of expenditures and portfolios to monetary imbalances.
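The equi-proportionality argument can be stated compactly. In the notation below (generic textbook notation, not symbols drawn from the note itself), M is the quantity of money, Y is nominal national income and k(·) collects the non-income arguments of the money demand function, such as the own return on money:

$$ M^{d} = k(r_{\mathrm{own}})\,Y, \qquad M = M^{d} \;\Longrightarrow\; Y^{*} = \frac{M}{k(r_{\mathrm{own}})}. $$

With k held constant, the equilibrium level of nominal income Y* is proportional to M: doubling the quantity of money doubles Y*, which is the precise sense in which deliberate changes in the quantity of money are claimed here to be effective.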
A monetary imbalance (or 'a monetary disequilibrium') exists if actual ratios of money (to expenditures and portfolios) differ from the desired ratios; a recession or boom can then be read as the working-out of a period of monetary disequilibrium, and there is clear evidence that deviations from the 'normal' level of a ratio are associated with macroeconomic developments attributable to agents' endeavours to return to that normal level.
I will now present evidence that, in a key respect relevant to the so-called 'transmission mechanism' of monetary policy, the UK corporate sector has for almost 50 years behaved with remarkable stability. Further, the turmoil of the Great Recession has not disturbed that stability. My conclusions are that the basic theory is correct, that changes in the quantity of money do not alter agents' desired ratio of money to other variables, and that action by the state to alter the quantity of money (upwards or downwards) remains an extremely powerful means of influencing macroeconomic outcomes.
The UK's monetary data in its modern form began in 1963, following a recommendation in the 1959 Radcliffe Report. The data are not only for the aggregate quantity of money, but also for the money holdings of particular sectors of the economy. We are therefore fortunate to have data on the money holdings of companies (as well as those of households and non-bank financial institutions) going back almost 50 years.
As already noted, the standard money demand function includes in its arguments both income and the own return on money. In this context companies are rather unusual, since they have no income in their own right. (They belong to shareholders.) It makes no sense to view their demand to hold money as related to 'income' or 'wealth', as in the traditional formulations. However, just as most non-corporate agents' demand to hold money is finite, so companies cannot let their net money balances become too negative. If the balances become too negative, they may become unable to meet debts as they fall due and hence are deemed insolvent. (They can suffer this fate even if they are intrinsically good businesses, with strong future cash flows in prospect.) It follows that many companies operate with an understanding of the 'right' ratio between their money holdings and their bank borrowings. The borrowings are often on a medium- or long-term basis, secured against certain assets, while the money is held in an account that can be used immediately to make payments.
What do the data – which, to repeat, go back to 1963 – show about the ratio between UK companies' money holdings and their bank borrowings? At any rate, we observe an astonishing stability in the monetary behaviour of the corporate sector of the British economy. The ratio of companies' money to their bank borrowings has changed in this period of almost 50 years, but these changes have been trivial relative to the changes in the variables in which the ratio has been expressed. In a chart representing the levels of these variables, the y axis has to be in logarithmic terms, and yet the parallelism of the movements in money and bank debt is plain.
Has the corporate liquidity ratio affected companies' behaviour in the way expected by economic theory? When companies are 'short of cash', they tend to retrench. Each company individually can rebuild its money balances by selling an asset (a subsidiary, a piece of land, an office building), but – if the asset is sold to another company – the total amount of money held by companies in the aggregate is unchanged.
It follows that, if all companies are short of cash, transactions undertaken to rebuild every company's money holdings fail in that purpose. Instead asset values fall, and that makes people and companies feel poorer, and they cut back on their expenditure. Conversely, if companies are 'rolling in cash', the same story applies, but in reverse. Each company individually can lower its money balances by purchasing an asset, but – if the asset is bought from another company – the corporate sector's money holdings are unchanged. So transactions undertaken to disembarrass companies of their excess money get nowhere. Instead asset values rise, agents feel richer and they boost their expenditure.
The estimated equation had an r2 of 0.35, while the t statistic on the regression coefficient is over 10, indicating that companies' balance-sheet strength – as measured by the corporate liquidity ratio – has a statistically significant effect on expenditure. (The value of the r2 would undoubtedly rise if a more complex lag structure were estimated.) Inspection of the chart confirms a worthwhile relationship between these two variables. The liquidity ratio was particularly low in 1966, 1969, late 1974, early 1980, late 1990 and late 2008, and in every case the economy was already in a recession or about to enter one; it was notably high in 1972 and 1973, and again between 1986 and 1989, on these occasions coinciding with buoyant economic conditions. It needs to be recognised that many other variables affect the strength of demand over the short run, with – for example – the effects of world demand and the inventory cycle largely separate from the balance-sheet influence running from money holdings. Nevertheless, the robustness of the relationship over the medium term is impressive. Companies' money holdings are the numerator in the liquidity ratio and they affect demand in just the way that basic theory would predict.
The relationship between the corporate liquidity ratio and demand would be breaking down if the residuals from the estimated best-fitting equation were larger in the last few years than before and showing signs of particularly clear increases in the last few quarters. The chart at the top of the next page plots the residuals from the best-fitting equation. Assessment of this chart is to some extent in the eye of the beholder. It is true that in the Great Recession the best-fitting equation between the liquidity ratio and expenditure deteriorated in one sense. The equation produced values for demand that were appreciably higher than the outturn (i.e., implying a negative residual), and indeed did so to a degree not seen at any other point in the 48 years covered by the current exercise. (Quite large negative residuals were also recorded in the 1970s.) However, the residuals in the very latest quarters are not moving relentlessly in one direction, while the last few residuals have taken values lower than those which have been seen on many occasions in the past. My conclusion is that the behavioural relationship continues to work.
To recall, that relationship is between companies' liquidity ratio (which incorporates their money holdings as the numerator) and changes in total domestic expenditure in real terms. This is very much a relationship relevant to the testing of the effectiveness of QE, since QE is intended to boost the aggregate quantity of money, and company money holdings are part of aggregate money.
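As a concrete illustration of the regression just described, the sketch below (in Python) regresses the growth of real domestic expenditure on the lagged corporate liquidity ratio. The file name, column names and single-quarter lag are illustrative assumptions; the note does not publish its data or code.

```python
# Minimal sketch of the regression described in the text: real domestic
# expenditure growth regressed on the corporate liquidity ratio (money
# holdings / bank borrowings). The CSV file and column names are
# hypothetical; the note does not publish its data or code.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("uk_corporate_sector.csv", parse_dates=["quarter"],
                 index_col="quarter")

# Corporate liquidity ratio: money holdings relative to bank borrowings.
df["liquidity_ratio"] = df["money_holdings"] / df["bank_borrowings"]

# Dependent variable: four-quarter growth of real domestic expenditure.
y = df["real_domestic_expenditure"].pct_change(4).dropna()

# Regress growth on the one-quarter-lagged liquidity ratio, a simple
# stand-in for the more complex lag structure the note mentions.
X = sm.add_constant(df["liquidity_ratio"].shift(1).loc[y.index])
model = sm.OLS(y, X, missing="drop").fit()

print(model.rsquared)       # compare with the r2 of roughly 0.35 cited above
print(model.tvalues)        # t statistic on the liquidity-ratio coefficient
print(model.resid.tail(8))  # recent residuals, as inspected in the text
```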
The relationship between corporate money holdings and expenditure is one element in the larger relationship between money and nominal GDP. That relationship is robust, surviving without difficulty through a series of cyclical fluctuations in the UK economy over almost five decades. This reinforces the conclusion in the 23rd November note. As stated there, the facts of the relationship between money and nominal GDP since 1997 are consistent with the standard monetary theory of national income determination. The theory implies that a medium-term relationship holds between the rates of change of money and nominal GDP, and that the equilibrium level of nominal national income is a function of the quantity of money. A necessary consequence is that – if the state deliberately engineers large changes in the rate of money growth (as it undoubtedly can do) – it is taking powerful actions to affect macroeconomic outcomes. To repeat, claims that 'QE has become ineffective' are contrary to evidence and at variance with a large body of well-established theory.
http://mv-pt.co.uk/is-qe-losing-its-effectiveness/
Conversational AI has come a long way in raising the bar to create more human-like conversations with Virtual Agents (VAs), and users now expect their interactions with VAs to run in a smooth and efficient manner. We have explored some of the most commonly encountered challenges in dialogue management for conversational agents in previous articles on different #TeneoTuesday events, and described how to handle these challenges using the Teneo Platform. This article is a summarized collection of those previous articles, highlighting the different dialogue aspects and challenges that can be managed in Teneo.
Out-of-Scope and Irrelevant User Input
Virtual Agents may be exposed to all kinds of user requests. Some are within the knowledge of the VA, and some fall outside its scope. In a dedicated article, we learned how to Build an Out-of-Scope Flow in Teneo to capture input that the VA does not have the right response to yet. In the image below, we can see that the bot's answer to out-of-scope input provides an improved user experience compared to Safetynet answers to all input that the bot does not understand. In addition to that, using Teneo you can improve the user experience by setting expectations about the topics that the VA can handle. This strategy consists of providing suggestions in the form of quick messaging such as lists or buttons. The image below is just an example of displaying suggested topics to the user in the form of buttons in Teneo Web Chat. Using such methods ensures an improved user experience, since users are presented with the most common topics and can easily click on a button to receive the desired information or service.
Ambiguous User Requests
Ambiguous user input results in indeterminate user intent. This may be caused either by:
- Keyword messages, where the challenge lies in multiple intents being relevant to the user input, or
- Unclear messages, which make it difficult to define the relevant intent.
As a conversation designer, you want to make sure that you set up the necessary components to ensure smooth and efficient dialogues with the bot. Luckily, using the Teneo platform, you can disambiguate indefinite user requests with a set of strategies inside your solution. Make sure to check out our article about Handling ambiguous input for a full guide on this topic.
The most common type of ambiguous input is keyword requests. Such user requests consist of one or a couple of words that constitute short phrases; these phrases may be linked to multiple intents. As a conversation designer, you want to start by identifying the products, services, or topics that users may ask about using short phrases. Once you have identified that input, you can build a disambiguation flow inside Teneo that is triggered by keyword requests and that extracts more information from the user in order to complete a task or respond with the queried information. On Teneo.ai, you can find a practical example of building disambiguation flows. The images below display what the trigger of such a flow is matched against. This flow is built to capture user requests relating to any sort of coffee. The Match requirement is therefore a Language Object that consists of coffee types. The flow continues to collect more specific user input to be able to disambiguate the intent, by asking the user whether he/she wants to order coffee or see the menu. Depending on the user's answer, the flow then branches to multiple flows using Flow Links.
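For readers who want a concrete picture of the disambiguation logic described above, here is a minimal, framework-agnostic sketch in Python: a short keyword input ("latte") matches several candidate intents, so the agent asks a clarifying question before committing to a flow. The intent names, data structures and the crude length heuristic are illustrative assumptions, not Teneo's internal API.

```python
# Framework-agnostic sketch of keyword disambiguation: a short keyword
# request matches several candidate intents, so the agent asks a
# clarifying question instead of guessing. Names and structures are
# illustrative only.
COFFEE_TYPES = {"coffee", "espresso", "latte", "cappuccino"}

CANDIDATE_INTENTS = {
    "order_coffee": "order a coffee",
    "show_menu": "see the coffee menu",
}

def handle_input(user_input: str):
    tokens = set(user_input.lower().split())
    if tokens & COFFEE_TYPES and len(tokens) <= 3:
        # Keyword request: ambiguous, so disambiguate before branching.
        options = " or ".join(CANDIDATE_INTENTS.values())
        return f"Would you like to {options}?"
    return None  # fall through to normal intent matching

print(handle_input("latte"))  # -> clarifying question offering both options
```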
Interruptions in Conversational AI Dialogues
When a user converses with a bot, she/he may switch from one topic to another. This topic switch may be challenging. As long as a certain topic is discussed, the conversation follows the triggered flow path, but as soon as the path is interrupted with a topic switch, the conversation follows the path of the newly triggered flow. The challenge lies in not losing the information that was collected in the prior flow, and in the fact that the user will not be taken back to that flow unless she/he triggers it again. In Teneo, interruptions are allowed and handled with the revisit output-node feature and the flow stack logic. On a previous #TeneoTuesday, we published an article that provides details on How Interruptions are Handled with Teneo. The image below displays an example of handling interruptions, where the user switches from an intent to buy a cube to an intent to ask about the price. As soon as the second topic is addressed, the bot brings back the first topic again. This is made possible by setting a revisit option on the output node in the first flow (the buy a cube flow). The illustration below explains the flow switch process. This allows the flow to remain on the flow stack even if the next user input does not match the flow trigger. The new user input would match another trigger of a second flow (the how much does the cube cost flow). Once that flow is dropped – in other words, once the answer has been provided to the user – the first flow is back on top of the flow stack. Depending on the use-case, you can change the maximum number of times that a node should be revisited. Once that maximum number is reached, the flow will no longer be on the flow stack; it will only be triggered by new matching input.
User Intent Recognition: Trigger Ordering
In the previous sections we highlighted the most common strategies that can be used in Teneo to handle very common dialogue management challenges. From the first stages of designing a conversational AI project and throughout the way, the main concern of the developers is accurate intent recognition. Several components play a role in user intent classification and identification; most importantly: the Natural Language Understanding component, the classification algorithms involved, and the methods used to determine intent matching and flow triggering. In Teneo, the information needed for matching to relevant intents and activating the corresponding flows is stored in flow triggers in the form of Match requirements. Furthermore, Triggers in Teneo can be placed in different order groups that determine which trigger is evaluated first, second, etc., until a trigger match requirement is met and the corresponding flow is activated. Trigger Ordering is a powerful tool to configure the intent matching mechanism and flow triggering process in your solution to best suit your bot functionality and use-case. In a different article, we explain in detail the impact of trigger ordering on intent recognition in conversational AI solutions, and provide recommendations for a good ordering methodology. The illustration below clarifies how the rank of a trigger in relation to the other triggers in your solution controls when the corresponding flow is triggered.
Context Restrictions for Improved Conversational AI Interaction
Depending on your conversational AI project, you may want to adapt the dialogues depending on contextual information.
This information can be related to the user, the channel, the time of day, the previous topic in the conversation, the season of the year, etc. In Teneo you can create two types of context restrictions: variable-based and scripted restrictions. These restrictions may be placed on Triggers or Transitions of a Flow. The image below is an example of using variable-based restrictions; a Global Variable value is evaluated as a context restriction for a Trigger Match requirement. This is an example flow for ordering coffee. The user would say "can I order coffee" and this flow will be triggered. What if, during the same conversation, the user says, "I will have one of these"? Adding this context restriction means that if the Global Variable "coffeeTypeInFocus" has a value, then this trigger will match. In a previous article, we present a set of Commonly-used Global Contexts, and provide three examples of how scripted context restrictions can help you alter the bot's communication with end users depending on contextual information. For instance, in case your bot is live on different channels and you want to tailor the functionality depending on the channel that the user is on, you can add a scripted context restriction to handle that. The image below is an example of using a scripted context restriction to manage different bot functionality depending on different channels, such as Alexa, Microsoft Teams, or Facebook Messenger.
Conclusion
Setting up the necessary strategies to ensure efficient dialogues with conversational virtual agents can be a challenging task. In this article, we present a summary of the most common challenges along with methods to handle them using the Teneo conversational AI platform. Does your project face any of these challenges? Or different ones? Let us know in the comments.
https://community.teneo.ai/t/dialog-management-in-teneo-summary/691
Other questions? Contact us at astro[at]cs.lists.rpi[dot]edu!
Science Summary
Milkyway@home studies the history of our galaxy by analyzing the stars in the Milky Way galaxy's Galactic Halo. This includes searching for elusive dark matter. This research is done by mapping structures of stars orbiting the Milky Way - many of these structures are actually "tidal debris streams," or dwarf galaxies that are being pulled apart by our Galaxy's superior gravitational field. The orbits, shapes, and compositions of these dwarf galaxies provide vital clues to the history of our Galaxy, as well as to the distribution of dark matter. Additionally, Milkyway@home has recently started developing the "N-body" sub-project, which creates simulated dwarf galaxies and "shoots" them into the Milky Way's gravitational field. We allow the simulated dwarf galaxy's initial conditions to vary until the final simulated dwarf matches what we see in actual halo structures. In other words, we are trying to match dwarf galaxy models to real data, in order to learn more about what is (and what isn't) possible for our Galaxy. For both projects, we use data from the Sloan Digital Sky Survey (see below).
Here's a visualization by Shane Reilly, showing the Milky Way galaxy (center blue-to-red spiral), a model for the disrupted Sagittarius dwarf galaxy (blue), and an example wedge of SDSS data (yellow). Source: Shane Reilly, Milkyway@home
Until the late 1990's, the Galactic halo was thought to be smooth and uninteresting, and Heidi Newberg's 2002 paper ("The Ghost of Sagittarius and Lumps in the Halo of the Milky Way") proved that the halo is actually full of tidal debris (the halo is "lumpy"). Since then, astronomers have been actively searching for and characterizing these structures. So Milkyway@home is doing science in a field that's barely over ten years old - this is cutting-edge astronomy, and we want you to be a part of it!
Introduction: The Shape of the Milky Way
What is the Milky Way? The Milky Way is our home galaxy, one of BILLIONS of known galaxies in the Universe. In addition to our Sun, the Milky Way contains around 400 billion other stars - that's about 57 stars for every human being alive on Earth today! Even though that sounds big, the Milky Way is actually thought to be an average-sized galaxy. (For more information on galaxies, see the Wikipedia article: Galaxy.) The Milky Way is currently understood to be a barred spiral galaxy (Hubble type SBbc) that is 100,000 light-years across - that is, it takes 100,000 years for light (the fastest thing known to exist) to travel from one end of the Milky Way to the other. For comparison, light takes 8 minutes to get from the Sun to the Earth. While the light-year is a physically useful unit, astronomers tend to use "parsecs" when measuring distances. A parsec (short for "parallax-second") is 3.26 light-years, and is related to one of the most precise methods of determining distances to other stars ("parallax"). In Galactic astronomy, we work with truly astronomical distances, and so we use "kiloparsecs" (kpc), or thousands of parsecs, as our distance units. The radius of the Milky Way, then, is 15 kpc, with our Sun being 8 kpc from the center of the galaxy.
The modern view of the Milky Way galaxy contains four major components: the disk, the bulge, the stellar halo, and the dark matter halo. Source: Matthew Newby, Milkyway@home
The disk is the most obvious component of the galaxy, and is considered to consist of two parts: the thin disk and the thick disk.
The thin disk is about 0.3 kpc thick and contains almost all of the dust, gas, and young stars (including the Sun) in our Galaxy. The thick disk is about 1 kpc thick, and marks the thickness where star densities drop dramatically. The bulge lies at the center of the disk, has a radius of only a few kpc, and contains both old and young stars. Recently, it was determined that the bulge contains a prominent bar. Additionally, a supermassive black hole resides at the center of the galaxy - with a mass equal to that of 4 million Suns! The stellar halo is a nearly spherical spheroid of stars that surrounds the entire galaxy. The density of stars in the halo is very low compared to densities found in the disk, and the majority of halo stars are found within 30 kpc of the galactic center. The stellar halo is the focus of Milkyway@home. The dark matter halo is the most mysterious of all the galactic components. Information from galactic rotation curves, galaxy collisions, and dark matter simulations all strongly indicates that there is a large amount of invisible mass surrounding every galaxy. Modern astronomers hope to gain clues about the shape and composition of the dark matter halo from structures in the disk and stellar halo.
"Dark" Matter
Dark matter is the mass that is needed to account for the unseen mass in physical observations. Although other solutions to these discrepancies have been proposed, such as modifications to Newton's and/or Einstein's theories of gravitation, dark matter is the only solution that describes all of the observed scientific anomalies simultaneously. Therefore, understanding dark matter is currently one of the major goals of science. To understand what "dark" matter is, we need to understand "light" matter (the stuff we are used to). "Light" matter is made of baryons, which are particles that are made of quarks. The most important consequence of baryons being built of quarks is that they interact electromagnetically. This means that light, which is an electromagnetic wave, can interact with baryons. Light waves have a large variety of wavelengths that make up the electromagnetic spectrum (see Figure, from Wikipedia). Depending on how the baryons are arranged, baryonic matter will absorb, reflect, or emit certain wavelengths of light. In fact, all baryonic matter will emit some wavelengths of light based on its temperature - stars, for example, are very hot, and so they can emit visible light. The higher an object's temperature, the shorter the wavelengths that can be emitted. Therefore, all baryonic matter "glows" at certain wavelengths (including humans! We glow in the infrared). Source: Wikimedia Commons
Dark matter is different. Dark matter does not emit light at any wavelength. Dark matter does not absorb light, and it doesn't reflect it, either. Dark matter, then, does not interact electromagnetically at all. This is why it is "dark": light waves can never even know it's there. Since dark matter doesn't interact with light, the only way that we can currently study it is through gravity. By studying the distribution of baryonic matter (stars and gas) in the Milky Way, we will gain insight into the arrangement and composition of dark matter. Milkyway@home furthers this goal by studying stars in the stellar halo, using data from the Sloan Digital Sky Survey.
Part I: Sloan Digital Sky Survey (SDSS)
The Sloan Digital Sky Survey is a five-color, deep-field survey that covers a large part of the sky.
It started taking data from its 2.5-meter telescope at Apache Point Observatory in the year 2000, and will release its last dataset in 2014. All of the almost 500 million objects in the database are available to the public. For more information about SDSS, see the SDSS website. If you want to explore SDSS data yourself, check out the SDSS DR9 Navigate Tool. Source: SDSS-III
Part II: How do we Search for Dark Matter?
So, what can the Galactic halo tell us about dark matter and the structure of the Milky Way? Astronomers seek to understand the Galactic potential of the Milky Way, which is a measure of how the Milky Way's gravity affects other objects, and therefore a measure of the distribution of mass (matter) in the galaxy. If we can compare the Galactic potential to the potential of the known (baryonic) matter, we can then determine the potential of the dark matter - which will tell us how dark matter is distributed in the Milky Way. Astronomers use the physics of gravity to determine the potential of the Galaxy. In a simple analogy, let's look at how someone would go about investigating the potential of our Sun. The Sun is massive and spherical, and so its potential will be simple - 'spherically symmetric,' in physics lingo. The measured strength of this spherically symmetric potential depends only on the mass of the Sun and the distance that you are away from it. The spherically symmetric gravitational potential of the Sun leads to Kepler's Law. If we plot the velocity (or orbital speed) of planets orbiting the Sun versus their radius (or orbital distance) from the Sun, we get the rotation curve of the Solar System. For a system obeying Kepler's Law, such as the Solar System, a clearly "falling" (decreasing with distance) rotation curve is observed. Source: Matthew Newby, Milkyway@home
A Galaxy is a bit more complicated. Since there's not just one big mass at the center, the rotation curve should look different than that of the Solar System. When astronomers add up all of the light from stars in a Galaxy (even other Galaxies), we find that most of the light comes from near the center, with the amount of light decreasing with distance from the center. From this "light curve," we can calculate the distribution of light matter, which lets us calculate what the rotation curve of a Galaxy should look like. What we find is that the curve should fall with distance - but when astronomers actually measure the rotation curve of the Milky Way (and other Galaxies), we find that it is almost flat, and not falling much at all! Source: Matthew Newby, Milkyway@home
The rotation problem actually goes back to the 1930's, with an astronomer named Fritz Zwicky. Zwicky measured the velocities of galaxies moving around a galaxy cluster, and concluded that there was "missing mass" that wasn't being seen in the cluster. In the 1970's, astronomer Vera Rubin measured the rotation curves of other Galaxies, and showed definitively that there is, indeed, more mass in each galaxy than can be seen. So, how do we find this dark matter? Our best bet seems to be gravity. Using gravitational lensing, or the fact that dense pockets of matter can cause the path of light to warp around them, astronomers can actually map dark matter within very dense galaxy clusters, such as the Abell Cluster. Source: Hubblesite.org
But these clusters are very far away from us, and we can't see the details. So we really want to figure out where dark matter is in our Galaxy, and then figure out what it is from there.
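A small numerical illustration of the contrast described above: a Keplerian rotation curve around a central point mass falls off as one over the square root of radius, while observed galactic rotation curves stay roughly flat. The central mass and the 220 km/s value below are round illustrative numbers, not fitted Milky Way parameters.

```python
# Keplerian ("falling") rotation curve vs. a schematic flat curve.
# Values are round illustrative numbers, not fitted Milky Way parameters.
import numpy as np

G = 4.30091e-6          # gravitational constant in kpc * (km/s)^2 / Msun
M_CENTRAL = 1.0e11      # illustrative central mass in solar masses

radii_kpc = np.linspace(2.0, 30.0, 8)

# Keplerian prediction: all mass at the center -> v = sqrt(G * M / r)
v_kepler = np.sqrt(G * M_CENTRAL / radii_kpc)

# Schematic "observed" curve: roughly flat at ~220 km/s beyond a few kpc
v_flat = np.full_like(radii_kpc, 220.0)

for r, vk, vf in zip(radii_kpc, v_kepler, v_flat):
    print(f"r = {r:5.1f} kpc   Keplerian v = {vk:6.1f} km/s   flat v = {vf:5.1f} km/s")
```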
The stars in the Galactic halo orbit outside the disk of the Milky Way, and so their orbits will tell us what the Milky Way's gravitational potential looks like, and therefore, where the mass is. But these stars are just far enough away that they don't seem to move at all - if you don't know how something is moving, it's really difficult to figure out what its orbit is. This is where tidal streams save the day! These streams, formed from dwarf galaxies being torn apart by the Milky Way's gravity, trace continuous orbits around the Galaxy. So, even if we can't see the individual stars move, we can follow the line of the tidal stream to determine their direction of motion. From there we can determine the stars' orbits, and then we can determine the distribution of dark matter! Now the trick is figuring out exactly where these streams are. While this may seem easy, in reality the streams are mixed in with regular halo stars and even with other streams! Also, there are errors in the data, especially as you get further out in the halo, and these need to be accounted for. All of this means that we have to apply detailed mathematical analysis to the stars, which in turn leads to a very difficult computational problem... Part III: Milkyway@home Separation This is where Milkyway@home comes in. The goal of the "Separation" or "Stream Fit" part of Milkyway@home is to do this analysis - figuring out exactly where the big tidal streams are in the big jumble of stars that is the Galactic halo. To do this, we had to create a mathematical model of SDSS data stripes (see Nathan Cole's PhD Thesis [pdf]) and a method for finding the best way to fit this model to the actual SDSS data. Each separation work unit is a single evaluation of the model - that is, a single set of parameters for the model that are checked against the real data. Each of these work units then determines the likelihood that the given set of model parameters matches the data, and sends this to our server. Our server then uses this information (see below) to determine the next set of parameters to try, and generates a new work unit - this continues until we see very little improvement in the likelihoods, and we can then declare a parameter set that gives the best likelihood of matching the data (This type of problem is called a maximum likelihood problem). In other words, the separation project searches for the best way to describe the streams in the Galactic halo. Once we have that, we get a very accurate description of the tidal streams in a given SDSS wedge. Current Progress: We've managed to finish describing the North Galactic cap ("above" the Galactic disk) part of the Sagittarius dwarf galaxy's tidal stream, and the results will be published in the Astrophysical Journal soon (March-April 2013). Some figures from that paper: The Sagittarius stream separated from the background data (see this thread for more info): Source: Newby et al. (2013), Astrophysical Journal The path of the Sagittarius stream around the Galaxy, represented by an arrow for each SDSS stripe that we analyzed. See this thread for more info: Source: Newby et al. (2013), Astrophysical Journal Work In-Progress: Milkyway@home is currently studying the North Galactic cap streams that are not Sagittarius - we are re-analyzing the same data, but with the Sagittarius stream removed. This is necessary because, with Sagittarius, the other streams were too faint to accurately find. 
Additionally, we are gearing up to start working on the SDSS Data Release 8 data, which fills in some of the areas in the Southern Galactic Cap. Finally, once all of this is done, we will analyze the stellar spheroid - the stars in the Galactic halo that do not belong to dwarf galaxy tidal streams. Understanding the stellar spheroid is a current hot topic in astronomy, so a thorough analysis will make a big splash! If all goes well, the Separation project should be complete in late-2013 to mid-2014.
N-body
The N-body project on Milkyway@home simulates dwarf galaxies colliding with (or being disrupted by) the Milky Way. These disruptions often result in tidal streams, like Sagittarius. The goal of the N-body project is to match simulated dwarf galaxies to real dwarf galaxy data, and thereby constrain the properties of the Milky Way galaxy's gravitational potential (as well as the properties of the dwarf galaxies). Here's an example of a dwarf galaxy being disrupted by the Milky Way's gravity (the Milky Way is not shown, and would be at the center of the picture). Source: Shane Reilly, Milkyway@home
Work In-Progress: The N-body project is currently under development, and is almost stable. Soon, we'll be running test data through it to verify that the techniques work, then we'll start crunching simulations against real data! Eventually, we hope to make N-body the main project on Milkyway@home, and to add GPU support.
Previous work and publications.
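For readers curious about what an N-body simulation actually computes, here is a minimal, generic sketch of the core update step: advancing particle positions and velocities under their mutual gravity with a kick-drift-kick leapfrog integrator. This is textbook illustration code, not Milkyway@home's actual implementation; the particle count, units and softening length are arbitrary.

```python
# Generic leapfrog N-body step, the kind of update a dwarf-galaxy
# simulation performs repeatedly. Illustration only; units, particle
# count and softening are arbitrary, and G is set to 1.
import numpy as np

def accelerations(pos, mass, softening=0.05):
    """Pairwise Newtonian accelerations with Plummer softening (G = 1)."""
    diff = pos[None, :, :] - pos[:, None, :]          # r_j - r_i
    dist2 = (diff ** 2).sum(-1) + softening ** 2
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                     # no self-force
    return (diff * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)

def leapfrog_step(pos, vel, mass, dt):
    """Kick-drift-kick leapfrog update, second-order accurate."""
    vel_half = vel + 0.5 * dt * accelerations(pos, mass)
    pos_new = pos + dt * vel_half
    vel_new = vel_half + 0.5 * dt * accelerations(pos_new, mass)
    return pos_new, vel_new

# Tiny toy "dwarf galaxy" of 100 equal-mass particles
rng = np.random.default_rng(0)
pos = rng.normal(scale=1.0, size=(100, 3))
vel = rng.normal(scale=0.1, size=(100, 3))
mass = np.full(100, 1.0 / 100)

for _ in range(10):
    pos, vel = leapfrog_step(pos, vel, mass, dt=0.01)
print(pos.mean(axis=0))   # center of mass drifts only slightly
```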
https://milkyway.cs.rpi.edu/milkyway/science.php
About the project: We were approached by Awtad to create their website and reflect their visual identity. First, we studied the content and created a digital strategy to help strengthen the website's messaging. Since the website targeted a corporate-level audience, we made sure it was clear and easy to browse. The design reflected their brand identity, and we fell in love with their shade of blue; it was just the perfect colour for "consultancy".
https://www.inspiretbb.com/portfolio/awtad-consultancy-website/
Neuroscientists at the Neuronano Research Centre at Lund University in Sweden have developed and tested an ambitious new design for processing and storing the massive amounts of data expected from future implantable brain machine interfaces (BMIs) and brain-computer interfaces (BCIs). The system would simultaneously acquire data from more than 1 million neurons in real time. It would convert the spike data (using bit encoding) and send it via an effective communication format for processing and storage on conventional computer systems. It would also provide feedback to a subject in under 25 milliseconds — stimulating up to 100,000 neurons.
Monitoring large areas of the brain in real time
Applications of this new design include basic research, clinical diagnosis, and treatment. It would be especially useful for future implantable, bidirectional BMIs and BCIs, which are used to communicate complex data between neurons and computers. This would include monitoring large areas of the brain in paralyzed patients, revealing an imminent epileptic seizure, and providing real-time feedback control to robotic arms used by quadriplegics and others. "A considerable benefit of this architecture and data format is that it doesn't require further translation, as the brain's [spiking] signals are translated directly into bitcode," making it available for computer processing and dramatically increasing the processing speed and database storage capacity. "This means a considerable advantage in all communication between the brain and computers, not the least regarding clinical applications," says Bengt Ljungquist, lead author of the study and doctoral student at Lund University.
Future BMI/BCI systems
Current neural-data acquisition systems are typically limited to 512 or 1024 channels, and the data is not easily converted into a form that can be processed and stored on PCs and other computer systems. "The demands on hardware and software used in the context of BMI/BCI are already high, as recent studies have used recordings of up to 1792 channels for a single subject," the researchers note in an open-access paper published in the journal Neuroinformatics. That's expected to increase. In 2016, DARPA (the U.S. Defense Advanced Research Projects Agency) announced its Neural Engineering System Design (NESD) program*, intended "to develop an implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world. … "Neural interfaces currently approved for human use squeeze a tremendous amount of information through just 100 channels, with each channel aggregating signals from tens of thousands of neurons at a time. The result is noisy and imprecise. In contrast, the NESD program aims to develop systems that can communicate clearly and individually with any of up to one million neurons in a given region of the brain."
* DARPA has since announced that it has "awarded contracts to five research organizations and one company that will support the Neural Engineering System Design (NESD) program: Brown University; Columbia University; Fondation Voir et Entendre (The Seeing and Hearing Foundation); John B. Pierce Laboratory; Paradromics, Inc.; and the University of California, Berkeley.
These organizations have formed teams to develop the fundamental research and component technologies required to pursue the NESD vision of a high-resolution neural interface and integrate them to create and demonstrate working systems able to support potential future therapies for sensory restoration. Four of the teams will focus on vision and two will focus on aspects of hearing and speech.” Abstract of A Bit-Encoding Based New Data Structure for Time and Memory Efficient Handling of Spike Times in an Electrophysiological Setup. Recent neuroscientific and technical developments of brain machine interfaces have put increasing demands on neuroinformatic databases and data handling software, especially when managing data in real time from large numbers of neurons. Extrapolating these developments we here set out to construct a scalable software architecture that would enable near-future massive parallel recording, organization and analysis of neurophysiological data on a standard computer. To this end we combined, for the first time in the present context, bit-encoding of spike data with a specific communication format for real time transfer and storage of neuronal data, synchronized by a common time base across all unit sources. We demonstrate that our architecture can simultaneously handle data from more than one million neurons and provide, in real time (< 25 ms), feedback based on analysis of previously recorded data. In addition to managing recordings from very large numbers of neurons in real time, it also has the capacity to handle the extensive periods of recording time necessary in certain scientific and clinical applications. Furthermore, the bit-encoding proposed has the additional advantage of allowing an extremely fast analysis of spatiotemporal spike patterns in a large number of neurons. Thus, we conclude that this architecture is well suited to support current and near-future Brain Machine Interface requirements.
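As a rough illustration of the bit-encoding idea the abstract describes, the sketch below packs, for each time bin, the set of channels that spiked into a bit vector, so that spike patterns across very many channels can be stored compactly and compared with fast bitwise operations. The bin width, word layout and function names are illustrative assumptions, not the authors' actual data structure.

```python
# Sketch of bit-encoding spike data: for each time bin, record which
# channels spiked as a packed bit vector, enabling compact storage and
# fast bitwise comparison of spatiotemporal patterns. Bin width, layout
# and names are illustrative, not the paper's actual format.
import numpy as np

N_CHANNELS = 1_000_000
BIN_MS = 1.0   # illustrative bin width

def encode_bin(spiking_channels: np.ndarray) -> np.ndarray:
    """Pack the channel indices that spiked in one time bin into a bit array."""
    bits = np.zeros(N_CHANNELS, dtype=np.uint8)
    bits[spiking_channels] = 1
    return np.packbits(bits)          # 1,000,000 channels -> 125,000 bytes

def count_coincidences(bin_a: np.ndarray, bin_b: np.ndarray) -> int:
    """Number of channels that spiked in both bins (bitwise AND + popcount)."""
    return int(np.unpackbits(bin_a & bin_b).sum())

# Toy example: two consecutive 1 ms bins with random spiking channels
rng = np.random.default_rng(1)
bin1 = encode_bin(rng.choice(N_CHANNELS, size=5000, replace=False))
bin2 = encode_bin(rng.choice(N_CHANNELS, size=5000, replace=False))
print(count_coincidences(bin1, bin2))
```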
There are many potential ways to assess experiential activities, both external and internal. These methods are tied to reflection, helping learners to focus their learning while also producing a product for assessment purposes. Moon lists several examples:
- "Maintenance of a learning journal or a portfolio
- Reflection on critical incidents
- Presentation on what has been learnt
- Analysis of strengths and weaknesses and related action planning
- Essay or report on what has been learnt (preferably with references to excerpts from reflective writing)
- Self-awareness tools and exercises (e.g. questionnaires about learning patterns)
- A review of a book that relates the work experience to own discipline
- Short answer questions of a 'why' or 'explain' nature
- A project that develops ideas further (group or individual)
- Self-evaluation of a task performed
- An article (e.g. for a newspaper) explaining something in the workplace
- Recommendation for improvement of some practice (a sensitive matter)
- An interview of the learner as a potential worker in the workplace
- A story that involves thinking about learning in the placement
- A request that students take a given theory and observe its application in the workplace
- An oral exam
- Management of an informed discussion
- A report on an event in the work situation (ethical issues)
- Account of how discipline (i.e. subject) issues apply to the workplace
- An identification of and rationale for projects that could be done in the workplace" (2004, p. 166)
Of these methods, Qualters singles out the learning portfolio as one of the most comprehensive methods of assessing experiential learning. Learning portfolios are distinguished from standard professional portfolios through their inclusion of a reflection component. A learning portfolio therefore becomes more than just "a showcase of student materials," and instead becomes a "purposefully designed collection connected by carefully thought out structured student reflections." Beyond assessing student learning, well-constructed portfolios can be used for accreditation, university-wide outcome assessment, and to document and understand the learning process at both the course and program level (Qualters, 2010, p. 60).
John Zubizarreta proposes a simple model for a learning portfolio with three fundamental and interrelated components:
- Reflection
- Documentation
- Collaboration (2008, p. 1).
This conception of a learning portfolio mirrors that of a teaching portfolio, pairing a concise, reflective narrative with a series of appendices containing appropriate evidence for each area of reflection. Zubizarreta believes that "the value of portfolios in improving student learning resides in engaging students not just in collecting representative samples of their work for assessment, evaluation, or career preparation, but in addressing vital reflective questions that invite systematic inquiry" (2008, p. 2). Portfolios engage students in "intellectually challenging, creative, rigorous work," and serve as both a process and an end product. This recalls the above-stated definition of experiential learning as being as much about the means as about the ends, and the necessity of devising assessment methods to measure success in both the process and the product. Keeping Zubizarreta's three fundamental components in mind, it is important to remember that there is no right way of constructing a portfolio, and each portfolio will be different depending on the program of study or experiential learning activity.
Zubizarreta provides the following generic table of contents to give suggestions as to the potential contents of a portfolio and a logical order that can be used to drive learning:
- Philosophy of learning: What, how, when, and why did I learn? A reflective narrative on the learning process, learning style, and value of learning
- Achievements in Learning: What have I accomplished with my learning? Records—transcripts, course descriptions, resumes, honors, awards, internships, tutoring
- Evidence of Learning: What products, outcomes do I have to demonstrate learning? Outcomes—research papers, critical essays, field experience logs, creative displays/performances, data/spreadsheet analysis, lab results
- Assessment of Learning: What measures and accounting do I have of my learning? Instructor feedback, course test scores, exit/board exams, lab/data reviews, research project appraisals, practicum reports
- Relevance of Learning: What difference has learning made in my life? Practical applications, leadership, relation of learning to personal and professional domains, ethical/moral growth, affiliations, hobbies, volunteer work, affective value of learning
- Learning Goals: What plans do I have to continue learning? Response to feedback; plans to enhance, connect, and apply learning; career ambitions
- Appendices: How coherently have I integrated evidence with reflections and self-assessments in the portfolio? Selected documentation for areas 1 through 6 (Zubizarreta, 2008, p. 4).
To plan a learning portfolio project, Zubizarreta provides a short rubric that asks instructors to first identify the purpose of the portfolio, and then answer the following questions:
- What kind of reflective questions should students address?
- What kinds of evidence or learning outcomes would be most useful?
- How will students engage in collaboration and mentoring during the process? (Zubizarreta, 2008, p. 4)
The purpose of a learning portfolio "strongly determines the themes of the reflective narrative, as well as the types of documentation or evidence selected in the appendices." A planning rubric representing this can be a table with three columns—purpose, theme, and evidence—and the content of these columns can be quite broad. For example, if the purpose of the portfolio is "improvement," then the themes could be "development, reflective inquiry, focus on goals, philosophy of learning," and the evidence for that could be "drafts, journals, online threaded discussions, emails, statements of goals, classroom assessments, research notes." If the purpose of the portfolio is "problem solving," then the themes could be "critical thinking, creativity, application of knowledge, flexibility, curiosity," and the evidence for that could be "problem-solving log, lab reports, computer programs, spreadsheet data analyses" (Zubizarreta, 2008, p. 5). No matter what the contents of the learning portfolio, a well-designed project will keep students active, engaged, and reflective, helping them to "own their own learning as more independent, self-directed, and lifelong learners." To that end, Zubizarreta cites a recent trend amongst universities to supply alumni with perpetual server space, enabling students to maintain their learning portfolios electronically long after their time in university, "a nod toward a true conception of portfolio development as a lifelong commitment to learning" (Zubizarreta, 2008, p. 6). Extract from:
https://glenhavenpark.com.au/methods-for-assessing-experiential-activities/
Bring yourself up to speed with our introductory content.
Supply chain planning and execution
RFI (request for information): An RFI (request for information) is a formal process for gathering information from potential suppliers of a good or service.
Kaizen (continuous improvement): Kaizen is an approach to creating continuous improvement based on the idea that small, ongoing positive changes can reap significant improvements.
Essentials of EDI in supply chain management: Electronic data interchange (EDI) applies to many different business aspects. Learn three ways EDI has streamlined supply chain management.
material requirements planning (MRP): Material requirements planning (MRP) is a system for calculating the materials and components needed to manufacture a product. It consists of three primary steps: taking inventory of the materials and components on hand, identifying which additional... (a short worked sketch of the MRP calculation appears at the end of this listing)
Top 7 manufacturing challenges for 2020: Growing supply chain complexity, emerging next-gen technologies and other compelling forces will continue to challenge the manufacturing sector. Here's a look at what's ahead for 2020.
What is supply chain management (SCM) and why is it important? Supply chain management (SCM) is the broad range of activities required to plan, control and execute a product's flow from materials to production to distribution in the most economical way possible.
Supply Chain Planning (SCP): Supply chain planning (SCP) is the process of anticipating the demand for products and planning their materials and components, production, marketing, distribution and sale.
logistics: Logistics is the process of planning and executing the efficient transportation and storage of goods from the point of origin to the point of consumption.
strategic sourcing: Strategic sourcing is an approach to supply chain management that formalizes the way information is gathered and used so that an organization can leverage its consolidated purchasing power to find the best possible values in the marketplace.
4 ways to use blockchain in the supply chain: As blockchain technology enters its 11th year, its use in the supply chain is burgeoning. Here are some of the areas that are garnering the most interest.
Introduction to supply chain management software of 2019: Discover the functions of supply chain management software and learn how SCM software helps companies manage the complex demands of today's customers.
7 steps to implementing blockchain in the supply chain: If your organization believes blockchain will bring greater visibility to the supply chain, first understand the critical steps needed to enable a successful implementation.
order management: Order management is the administration of business processes related to orders for goods or services.
7 supply chain management key terms you need to know: Consumers are demanding greater visibility, but how do you give it to them? We're starting with a list of supply chain management basic terms that you should be aware of.
supply chain sustainability (SCS): Supply chain sustainability (SCS) is a holistic view of supply chain processes, logistics and technologies that affect the environmental, social, economic and legal aspects of a supply chain's components.
Typically, sustainability initiatives ...Continue Reading service supply chain The service supply chain is the part of the supply chain dedicated to providing service on products.Continue Reading Demand Planning Demand planning is the process of forecasting the demand for a product or service so it can be produced and delivered more efficiently and to the satisfaction of customers.Continue Reading Why is supplier segmentation key to supplier relationship management? Your customer relies on your supply chain to work well, so that means you need effective supplier relationship management. Here's information to help you create it.Continue Reading What is the difference between logistics and supply chain management? Logistics is part of supply chain management, but the two terms are often used synonymously. Here's a quick explanation of how these two are different.Continue Reading asset performance management (APM) Asset performance management (APM) is both a strategy and a set of software tools for tracking and managing the health of an organization's physical assets.Continue Reading What are some best practices for crafting a supply chain technology strategy? Supply chain managers have a tough job and face many challenges. Technology can help but must also be approached with return on investment top of mind. Here are three tips that can help.Continue Reading Supply chain blockchain is quickly taking hold The pieces needed to create supply chain blockchain networks are emerging. But blockchain won't fix all the problems that supply chains face, such as theft and data accuracy.Continue Reading 3PL (third-party logistics) A 3PL (third-party logistics) provider offers outsourced logistics services, which encompass anything that involves management of one or more facets of procurement and fulfillment activities.Continue Reading virtual commissioning Virtual commissioning is the practice of using 3D technology to create a simulation model of a manufacturing plant so that proposed changes and upgrades can be tested before they are implemented to the actual plant.Continue Reading Get to Industry 4.0 with a smart factory roadmap Creating a smart factory needn't require starting from scratch. 
Here's advice on how you can create connectivity with the industrial machines you do have.Continue Reading Planning for supply chain risk assessment and mitigation Natural disasters, geopolitical tensions and unacceptable social or environmental practices can endanger supply chains, but the right tools can mitigate the risks.Continue Reading GS1 GS1 is a global, not-for-profit association that maintains standards for barcodes and RFID tags and for supply chain messaging such as Electronic Data Interchange (EDI).Continue Reading Supply chain risk management tools vary from generic to niche Specialized supply chain monitoring and risk tools exist, but you're more likely to use an assortment of general-purpose analytics, financial and GRC applications.Continue Reading bill of materials (BOM) A bill of materials (BOM) is a comprehensive inventory of the raw materials, assemblies, subassemblies, parts and components, as well as the quantities of each, needed to manufacture a product.Continue Reading GDSN (Global Data Synchronization Network) GDSN (Global Data Synchronization Network) is an internet-based network that enables trading partners to exchange product-identification data in a standardized way in real time.Continue Reading smart factory A smart factory is a highly digitized and connected production facility that relies on smart manufacturing.Continue Reading sales and operations planning (S&OP) Sales and operations planning (S&OP) is a process for better matching a manufacturer's supply with demand by having the sales department collaborate with operations to create a single production plan.Continue Reading working capital Working capital is the difference between a business's current assets and current liabilities.Continue Reading What should companies know about sustainability software? Sustainability is composed of social, environmental and economic pillars. That's why technologies focused on this market address different needs. Here's a narrow glimpse.Continue Reading logistics management Logistics management is the governance of supply chain management functions that helps organizations plan, manage and implement processes to move and store goods.Continue Reading How can manufacturers begin an Industry 4.0 roadmap? Are you driving toward the next industrial revolution? Industry 4.0 will transform the manufacturing landscape, according to experts. Here's how to get ready.Continue Reading Industry 4.0 Industry 4.0, which refers to the fourth industrial revolution, is the cyber-physical transformation of manufacturing.Continue Reading How will self-driving vehicles affect the environment? Autonomous vehicles are on track to change logistics forever. Some believe that might be a good thing for the environment.Continue Reading supply chain analytics Supply chain analytics is the application of mathematics, statistics, predictive modeling and machine-learning techniques to find meaningful patterns and knowledge in order, shipment and transactional and sensor data.Continue Reading dual sourcing Dual sourcing is the supply chain management practice of using two suppliers for a given component, raw material, product or service. Companies use this approach to lower the risk of relying on a single supplier, a practice called single sourcing. 
...Continue Reading reverse logistics Reverse logistics is the set of activities that is conducted after the sale of a product, such as servicing, refurbishment and recycling, for the purpose of recapturing value or proper disposal.Continue Reading Boost bottom line with supply chain analytics The transformation of supply chain management is happening now. IoT is driving that change, but supply chain analytics is instrumental in taming the massive amounts of data generated by IoT sensors, devices and objects and turning it into insight --...Continue Reading What supply chain management fundamentals are most core? Overseeing materials, information and finances as they move through the supply chain requires learning -- and making peace with -- some critical truths.Continue Reading Eight keys to supply chain sustainability Figuring out how to create a comprehensive and compelling supply chain sustainability initiative can be overwhelming. Here's guidance on getting it right.Continue Reading Should companies make supply chain sustainability a priority? There are many compelling reasons to focus on supply chain sustainability. Here's a look at some of those and why sustainability is a key issue for every company.Continue Reading digital supply chain A digital supply chain is a supply chain whose foundation is built on Web-enabled capabilities to fully capitalize on connectivity, system integration and the information-producing capabilities of "smart" connected products.Continue Reading IoT supply chain disruption: Get ahead of it now The idea of untold devices talking to one another may still have an aura of science fiction, but here's what we recommend: If you want your company to live long and prosper, stake your IoT strategy now. The Internet of Things is indeed poised to ...Continue Reading What are the basics of supply chain finance? Old models of financing are giving way to supply chain finance, a technology-enabled approach that helps suppliers get paid early and buyers get more time to pay.Continue Reading distribution requirements planning (DRP) Distribution requirements planning (DRP) is a time-based systematic process to make the delivery of goods more efficient by determining which goods, in what quantities, and at what location are required to meet anticipated demand.Continue Reading How can manufacturers improve field service operations?
https://searcherp.techtarget.com/info/getstarted/Supply-chain-planning-and-execution
- To promote the importance of the use of sustainable techniques in the design, construction and operation of buildings and urban areas in order to achieve a sustainable living environment with a minimal impact on the surroundings. - To raise social awareness through education and encouragement of the creation of clean and healthy living spaces. - To support and encourage the development and implementation of such methods. - To promote its activities and achievements in various forums in order to solidify the development of a sustainable construction industry in Bulgaria, a valuable member of the European Union. - To catalyse the development and implementation of sustainable practices through direct education and marketing to industry professionals such as architects, investors, and all other participants in construction and regional development on both local and national scales. - To support and cooperate with all governmental and non-governmental organisations in the creation of adequate legislation to guarantee a sustainable built-up environment in the future. - To maintain an EU standard of quality in all of its programs. - To take active participation in the creation and implementation of a single certification system in Bulgaria for efficient buildings, based on the current German certification system. - To train professionals for all steps of the certification process. The Bulgarian green building council started work on the BIMEPD project, "Adapted senior training program on BIM methodologies for the integration of EPD in sustainable construction strategies". The implementation of BIM in Europe is already a reality!
http://bgbc.bg/en/pages/Aboutus/Goals/
ICDP is an independent, non-profit, non-political organisation that encourages and facilitates dialogue, discussions and better relationships between current and emerging Pacific and Australian leaders in government, civil society and the private sector on common challenges. A stable and prosperous Pacific is essential to Australia’s security and foreign policy. However, the complexity of regional challenges demands fresh and innovative approaches. ICDP aims to forge genuine and enduring strategic relationships between Pacific and Australian individuals and organisations to progress ‘Second Track’ dialogue, policy discussions and leadership development. It will support and strengthen the existing connections and help build new networks and coalitions for business, public sector, academia and community leaders in Australia and the Pacific. ICDP was established by the institute for active policy Global Access Partners (GAP) in July 2017. Through its ‘Second Track’ process, GAP generates links in all sectors to increase stakeholder participation in implementation of government policy and promotes innovative, cross-disciplinary approaches to problem-solving. Supported by GAP’s alumni network of over 3,500 people in a broad range of fields, ICDP draws on the very best of Australian expertise and utilises the GAP model to expand and improve strategic discussions, policy processes and networks between Australia and the Pacific. The Centre’s objectives are: - Facilitating the greater exchange of ideas and expertise, and fostering enhanced collaboration and business opportunities between Australia and the Pacific in applying digital technology to support sustainable development; - Shaping and influencing emerging leaders from the Pacific and Australia with a greater focus on promoting Pacific Island women as current and future leaders; - Increasing Australia’s regional standing and links by cultivating enduring relationships between current and emerging leaders from Australia and the Pacific; and - Enabling Pacific Islanders to deepen their understanding and appreciation of Australia through participation in the range of Pacific Connect programme activities, and as a member of the Pacific Connect Community (alumni). ICDP would like to thank Joshua Dean for his excellent work in designing the logos for our organisation and Pacific Connect. Joshua is a visual artist at The Greenhouse Studio in Fiji and has produced material for several NGOs in the Pacific.
https://www.icdp.com.au/about-us/
Introduction {#S1} ============ Patterns of relative timing between consonants and vowels appear to be conditioned in part by abstract phonological structure, such as syllables, but also modulated by the particular gestures being coordinated (e.g., [@B30]; [@B29]; [@B8]; [@B51]; [@B19]; [@B66]). The most rigorous attempts to formalize phonologically relevant temporal patterns have come within the Articulatory Phonology (AP) framework, which draws a distinction between the inter-gestural level of representation and the inter-articulator level ([@B5]; [@B47]). In AP, context-independent phonological representations are given at the inter-gestural level, in the form of dynamical systems that exert task-specific forces on articulators. The form of the dynamical system for a gesture remains constant across different phonological and lexical contexts. Contextual effects on articulatory behavior, due to the starting position of the articulators or to temporal co-activation of gestures, is resolved at the inter-articulator level. The same gesture can have different net effects on articulatory behavior in different contexts owing to the way that competing demands on an articulator are resolved at the inter-articulator level. Crucially, AP is a feedforward control system. Gestures (at the inter-gestural level) exert forces on articulators but do not receive feedback from the state of the articulators in space or time. Feedback of this sort is encapsulated within the inter-articulator level. The two-level feedforward control system of AP accounts for some language-specific phonetic patterns. It can account for target undershoot phenomenon and context effects on articulation without sacrificing phonological constancy ([@B6]). Moreover, higher level phonological structures have been linked to characteristic patterns of timing between gestures, results which receive a natural account within the inter-gestural level of AP. For example, languages that allow syllables with complex onsets, such as English, Polish and Georgian, pattern together in how word-initial consonant clusters are coordinated to the exclusion of languages that disallow complex onsets, such as Arabic and Berber ([@B17]; [@B51]; [@B19]). In addition to simplex vs. complex syllables onsets, segment complexity may also have a temporal basis ([@B50]). [@B50] show that in palatalized stops of Russian, e.g., /p^j^/, the labial and lingual gestures are timed synchronously whereas superficially similar sequences in English, e.g., /pj/in/pju/"pew", and unambiguous sequences in Russian, e.g.,  /br/, are timed sequentially. This difference between complex segments and segment sequences mirrors behavior found at the syllabic level. Language-specific temporal organization of phonology, as illustrated by cases such as these receives a natural account within the inter-gestural level of AP. In contrast to AP, neuro-anatomical models of speech production rely on auditory and somatosensory state feedback to control movement timing ([@B23]; [@B21]). In these models there are no context-independent dynamics comparable to the gestures of AP. Rather, articulation is controlled through the mechanism of feedback. Adjustments to articulation are made online in order to guide articulators to producing target sounds. 
While these models are silent on the phonological phenomena for which the inter-gestural level of AP provides a natural explanation, they provide an account for how some speakers adjust articulation online in response to perturbation of auditory feedback (e.g., [@B22]). In AP, articulator position information is available only to the inter-articulator level, which is governed by the Task Dynamics model ([@B47]). Within the inter-articulator level, Task Dynamics assumes perfect information about articulator positions, although more recent work has explored replacing this assumption with a more realistic model of feedback ([@B42]). Crucially for our purposes, there is no mechanism for state-based feedback at the inter-articulator level to influence inter-gestural coordination. This means that while auditory/somatosensory feedback could drive articulatory adjustments to how a particular task is achieved it cannot trigger earlier/later activation of a gesture. Experimental evidence indicating that information from the articulator level can feed back to the inter-gestural level is available from perturbation studies. In experimental contexts when there is a physical perturbation to articulation, gestures have been observed to "reset" ([@B45]; [@B46]). Phase-resetting in response to physical perturbation suggests that coordination at the inter-gestural level does not uni-directionally drive articulatory movement. [@B46] argue: "intergestural and interarticulatory dynamics must be coupled bidirectionally, so that feedback information can influence the intergestural clock in a manner that is sensitive to articulatory state (p. 422)." Some recent kinematic studies suggest possible links between the spatial position of articulators and relative timing observable outside of perturbation experiments ([@B8]; [@B37]). [@B8] list the spatial position of the articulator as one of a number of factors that influences measures of gesture coordination, leading to consonant-specific variation in timing patterns in German. [@B37] investigated whether coarticulatory resistance, a measure of the degree to which an articulator resists spatial perturbation ([@B4]; [@B43]; [@B9]) influences the relative timing of a consonant and following vowel. In line with their hypotheses, overlap between a consonant and vowel was affected by the coarticulatory resistance of the consonant. C-V overlap was greater for consonants less resistant to coarticulation. Pastätter and Pouplier also report a corresponding effect of consonant identity on the spatial position of the vowel. Vowels that showed less temporal overlap with the preceeding consonant were spatially closer to the preceeding consonant, converging evidence that consonants with high coarticulatory resistance delay vowel movements. In order to account for this pattern, Pastätter and Pouplier proposed to vary coupling strength at the intergestural level by articulator. In this way, different articulators could enter into the same basic coordination relation, e.g., in-phase or anti-phase timing, but exert differential forces on vowel timing. The theoretical account offered by Pastätter and Pouplier makes properties of articulators (but not their spatial positions) visible to inter-gestural timing. The account preserves language-specific timing at the inter-gestural level and feedforward control but does not reconcile the need for state-based feedback observed by [@B46]. 
Our aim in this paper is to provide a direct test of whether the spatial position of the tongue influences consonant-vowel (C-V) coordination. To do so, we conducted an Electromagnetic Articulography (EMA) study of Mandarin Chinese. Mandarin is a good language to investigate C-V coordination, both because of its phonological properties and because it is relatively well-studied otherwise. Mandarin allows fairly free combination of tones with consonants and vowels to make CV monosyllabic words. Varying lexical tone, while keeping the consonant and vowel sequence constant allowed us to generate a comparatively large number of phonologically distinct monosyllables to test our research question. We focused on non-low back vowels in Mandarin because past work has shown that variation in lexical tone for these vowels does not influence the spatial location of the vowel target; /i/ and /a/, in contrast, vary with tone ([@B49]). Our stimuli were CV monosyllables, consisting of a labial consonant and a back vowel. Single-syllable words in isolation allow for considerable variability in the starting position of the articulators. Across the observed variation in the spatial position of the tongue body, we investigated whether inter-gestural coordination between the lips, for the consonant, and the tongue body, for the vowel, remained constant, as is predicted by feedforward control. There are competing hypotheses about the feedforward control regime for Mandarin C-V syllables. [@B63] theorizes that consonants and vowels (as well as lexical tones) begin synchronously, at the start of the syllable. This assumption has been implemented in computational modeling of *f*~0~ for tone and intonation ([@B65]; [@B64]). A slightly different conclusion about Mandarin CV timing was reached by [@B14], [@B15]. In an EMA experiment tracking tongue and lip movements, [@B15] found that there is positive C-V lag, i.e., the vowel gesture does not begin movement until after the onset of movement of the consonant. Gao attributed the positive C-V lag to competitive coordination between consonant, vowel, and tone gestures. The account incorporates pressure to start the consonant and vowel at the same time, i.e., in-phase coordination, along with other competing demands on coordination. The tone and vowel are coordinated in-phase, but the consonant (C) and tone (T) are coordinated sequentially (anti-phase). The competing demands of anti-phase C-T timing, in-phase C-V, and in-phase C-T timing are resolved by starting the vowel at the midpoint between the onset of consonant and tone gestures. Notably, Gao's analysis of C-V lag in Mandarin mirrors the analysis of C-V timing in languages with syllable-initial consonant clusters ([@B7]; [@B12]; [@B17]; [@B30]; [@B20], [@B19]; [@B29]; [@B51]). The common thread is that the observed C-V lag in a CCV syllable is driven by competing forces on inter-gestural coordination -- anti-phase coordination for the consonants and in-phase coordination between each onset consonant and the vowel. [@B64] do not address Gao's data. However, both accounts of C-V lag in Mandarin described above, although they differ in assumptions, involve feed-forward control of articulation. As such, they predict that relative timing is blind to the spatial position of the articulator. In the experiment that follows, we test this hypothesis. Experiment {#S2} ========== Speakers {#S2.SS1} -------- Six native speakers of Mandarin Chinese (3 male) participated. 
They were aged between 21 and 25 years (*M* = 23.7; SD = 1.5) at the time of the study. All were born in Northern China (Beijing and surrounding areas) and lived there until at least 18 years of age. The speakers all lived in Sydney, Australia, where the experiment was conducted, at the time of their participation. All participants were screened by a native speaker of Mandarin Chinese to ensure that they spoke standard Mandarin. Procedures were explained to participants in Mandarin by the second author, a speaker of Taiwanese Mandarin. Participants were compensated for their time and local travel expenses. Materials {#S2.SS2} --------- Target items were a set of CV monosyllables that crossed all four lexical tones of Mandarin, tone 1 "high", tone 2 "rise", tone 3 "low", and tone 4 "fall" with two labial consonants {/m/, /p/} and three back rounded vowels {/ou/, /u/, /uo/} yielding 24 items, which were repeated 6--12 times by each speaker producing a corpus of 949 tokens for analysis. We chose labial consonants because of the relative independence between the consonant (lips) and the vowel (tongue dorsum) gestures. We chose back vowels in particular because of past work showing that /u/ in Mandarin resists the coarticulatory effects of tone, which influence /i/ and /a/ ([@B49]). We also report an analysis of unrounded /i/ and /a/, drawing on data from [@B49]. The purpose of this additional analysis is to assess whether the pattern for our target items generalizes to unrounded vowels. Target items were randomized with fillers and displayed one at a time on a monitor in Pinyin, a standard Romanization of Chinese. The three back vowels included in the materials have the following representation in Pinyin: "o" /uo/, "u" /u/, "ou" /ou/. Here and throughout, we use slashes to refer to IPA symbols. Orthographic representations of vowels not in slashes refer to Pinyin. Many of the items were real words and could have been displayed as Chinese characters. We chose to represent the items with Pinyin orthography because it allowed us to collect all combinations of the onset consonants, vowels and tones under study including those that do not correspond to real words. The Pinyin sequences that are not attested words were combinations of /p/ with /ou/. Equipment {#S2.SS3} --------- We used an NDI Wave Electromagnetic Articulograph system sampling at 100 Hz to capture articulatory movement. We attached sensors to the tongue tip (TT), body (TB), dorsum (TD), upper lip (UL), lower lip (LL), lower incisor (Jaw), nasion and left/right mastoids. Acoustic data were recorded simultaneously at 22 KHz with a Schoeps MK 41S supercardioid microphone (with Schoeps CMC 6 Ug power module). Stimulus Display {#S2.SS4} ---------------- Syllables were displayed in Pinyin on a monitor positioned outside of the NDI Wave magnetic field 45 cm from participants. Stimulus display was controlled manually using a visual basic script in Excel. This allowed for online monitoring of hesitations, mispronunciations and disfluencies. These were rare, but when they occurred, participants were asked to repeat syllables. Post-processing {#S2.SS5} --------------- Head movements were corrected computationally after data collection with reference to the left/right mastoid and nasion sensors. The post-processed data was rotated so that the origin of the spatial coordinates is aligned to the occlusal plane. 
The occlusal plane was determined by having each participant hold between their teeth a rigid object (plastic protractor) with three sensors configured in a triangle shape. Lip Aperture (LA), defined as the Euclidean distance between the upper and lower lip sensors, was also computed following rotation and translation to the occlusal plane. [Figure 1](#F1){ref-type="fig"} shows the range of movement for the entire experiment for one speaker following head correction. ![Spatial distribution of EMA sensors across the experiment for one subject.](fpsyg-10-02726-g001){#F1} Articulatory Analysis {#S2.SS6} --------------------- The articulatory data analysis focuses on the relative timing between consonant and vowel gestures, which we define in terms of temporal lag, and the position of EMA sensors at linguistically relevant spatio-temporal landmarks: the *onset* of articulatory movement and the achievement of the gestural *target*. Onset and target landmarks were determined according to thresholds of peak velocity in the movement trajectories. For the labial consonants, the Lip Aperture trajectory was used. For the back vowels, landmarks were determined with reference to the Tongue Dorsum sensor in the anterior-posterior dimension (i.e., TDx). Landmark labeling was done using the *findgest* algorithm in MVIEW, a program developed by Mark Tiede at Haskins Laboratories ([@B56]). [Figure 2](#F2){ref-type="fig"} shows an example of how the articulatory landmarks, labeled on the Lip Aperture signal (top panel) relate to the velocity peaks (lower panel). As the lips move together for the labial consonant, the lip aperture (top panel) gradually narrows. The peak velocity in this closing phase of −10 cm/s occurs just after 100 ms. The signal was thresholded at 20% of this velocity peak, resulting in the Onset and Target landmarks. We also explored the velocity minimum as a possible articulatory landmark for analysis but found that the threshold of peak velocity provided more reliable measurements across tokens. The cause seemed to be that some of the monophthongs in the experiment tended to have relatively long periods of low velocity around the point of maximum opening corresponding to the vowels. Although the NDI Wave system produced high spatial resolution recordings, even a small degree of measurement error (∼0.6 mm) makes picking out the true velocity minima from the wide basin of low velocity movement subject to sizeable temporal variation. Using the threshold of peak velocity mitigates the effect of measurement noise, providing a reliable vowel target landmark across tokens. ![Illustration of the onset and target landmarks for a labial consonant. The **top panel** shows lip aperture over time; the **bottom panel** shows the corresponding velocity signal.](fpsyg-10-02726-g002){#F2} The primary dependent variable of interest in this study was the temporal lag between consonants and vowels, henceforth C-V lag. A schematic diagram of C-V lag is provided in [Figure 3](#F3){ref-type="fig"}. 
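Landmark labeling in this study was done with the *findgest* algorithm in MVIEW; purely as an illustration of the thresholding logic just described (onset and target defined at 20% of the peak velocity of the movement), a minimal R sketch might look as follows. The function name, the argument defaults, and the assumption of a single movement in the input trajectory are hypothetical, not the actual implementation.

```r
# Illustrative sketch only (not the MVIEW/findgest implementation): find gesture
# onset and target in a 1D trajectory (e.g., lip aperture) sampled at 100 Hz by
# thresholding at 20% of the movement's peak velocity.
find_landmarks <- function(trajectory, fs = 100, threshold = 0.20) {
  t   <- seq_along(trajectory) / fs          # time in seconds
  vel <- c(NA, diff(trajectory) * fs)        # first-difference velocity
  i_peak <- which.max(abs(vel))              # peak speed of the (single) movement
  v_crit <- threshold * abs(vel[i_peak])     # 20%-of-peak-velocity criterion
  # onset: last sample before the velocity peak where speed is below the criterion
  below_before <- which(abs(vel[1:i_peak]) < v_crit)
  onset  <- if (length(below_before)) max(below_before) else 1
  # target: first sample after the velocity peak where speed drops below the criterion
  below_after <- which(abs(vel[i_peak:length(vel)]) < v_crit)
  target <- if (length(below_after)) i_peak + min(below_after) - 1 else length(trajectory)
  list(onset_time = t[onset], target_time = t[target])
}
```

Run separately over the closing phase of the lip aperture signal and over the TDx trajectory, the returned onset and target times would correspond to the consonant and vowel landmarks used below.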
C-V lag was determined by subtracting the timestamp of the gesture onset of the consonant, $\text{C}_{\text{ts}}^{\text{onset}}$, from the timestamp of the gesture onset of the vowel, $\text{V}_{\text{ts}}^{\text{onset}}$: ![A schematic depiction of the C-V lag measurement, the interval between the onset of the consonant gesture and the onset of the vowel gesture.](fpsyg-10-02726-g003){#F3} $$\text{CV}_{\text{lag}} = \text{V}_{\text{ts}}^{\text{onset}} - \text{C}_{\text{ts}}^{\text{onset}}$$ The primary independent variable of interest is the distance between the tongue at movement onset for the vowel and at the achievement of target. We quantified this in a few different ways. First, we measured the spatial position of the TD sensor at the onset of movement of the vowel. Since all of the target vowels in this study were back vowels, the primary movements for the vowels involved tongue retraction, i.e., movement from a more anterior position to a more posterior position. We refer to the position of the tongue dorsum in this dimension as TDx: $$\text{TDx} = \text{coordinate of the tongue dorsum sensor in the anterior-posterior dimension}$$ For the speaker shown in [Figure 1](#F1){ref-type="fig"}, the range of TDx values is about 18 mm, i.e., from −42 to −60 mm. The negative coordinates are relative to the occlusal plane, so −60 mm indicates 60 mm behind the occlusal plane clenched in the participants' teeth. The value of TDx at movement onset for the vowel served as the key independent measure in the study. The closer the value of TDx at vowel onset was to zero, the further the tongue would have to move to achieve its target. In addition to TDx at movement onset, we also measured more directly how far away the tongue was from its target at the onset of movement. We call this measure *Tdist*, for distance to target. We used inferior-superior (*y*) and anterior-posterior (*x*) dimensions for both TD and TB in the calculation. Hence, Tdist is the four-dimensional Euclidean distance between the position of lingual sensors (TB, TD) at the onset of vowel movement and at the vowel target. The vowel target for each subject was determined by averaging the position of these sensors at the *target* landmark across tokens of the vowel. The formula for Tdist is defined below: $$\text{Tdist} = \sqrt{\left(\text{TD}_{x}^{\text{Onset}} - \text{mean}\left(\text{TD}_{x}^{\text{Target}}\right)\right)^{2} + \left(\text{TD}_{y}^{\text{Onset}} - \text{mean}\left(\text{TD}_{y}^{\text{Target}}\right)\right)^{2} + \left(\text{TB}_{x}^{\text{Onset}} - \text{mean}\left(\text{TB}_{x}^{\text{Target}}\right)\right)^{2} + \left(\text{TB}_{y}^{\text{Onset}} - \text{mean}\left(\text{TB}_{y}^{\text{Target}}\right)\right)^{2}}$$ [Figure 4](#F4){ref-type="fig"} shows a visual representation of Tdist. The left panel shows the average position of the sensors for one speaker's "o" /uo/ vowel. The right panel shows the TB and TD components of Tdist as directional vectors in 2D (*x,y*) space. The start of the vector is the position of the sensors at the onset of movement, represented as red circles. The end of the vectors are the vowel targets for TB and TD. The length of the arrow from the vowel onset to the vowel target is the Euclidean distance for each sensor. Tdist is the combination of the two vectors. ![Vowel targets for /uo/ for one speaker, calculated as the average position of the TD and TB sensors across repetitions. Red circles show the spatial positions of the sensors at the onset of movement toward the vowel target. The black circles with the white "x" denote the vowel target. The arrows represent the Euclidean distance between the sensors at the onset of movement and the achievement of target.](fpsyg-10-02726-g004){#F4} Our main analysis assesses the effect of TDx and Tdist on C-V lag.
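For concreteness, the two measures just defined can be restated as a small R fragment; the variable names and the example values are hypothetical, not the study's actual scripts.

```r
# C-V lag: vowel gesture onset time minus consonant gesture onset time (per token).
cv_lag <- function(v_onset_ts, c_onset_ts) v_onset_ts - c_onset_ts

# Tdist: 4D Euclidean distance between the TD/TB positions at vowel onset and the
# speaker-mean TD/TB positions at the vowel target (all coordinates in mm).
tdist <- function(td_x0, td_y0, tb_x0, tb_y0,               # positions at vowel onset
                  td_x_tgt, td_y_tgt, tb_x_tgt, tb_y_tgt) { # mean positions at target
  sqrt((td_x0 - td_x_tgt)^2 + (td_y0 - td_y_tgt)^2 +
       (tb_x0 - tb_x_tgt)^2 + (tb_y0 - tb_y_tgt)^2)
}

cv_lag(1.26, 1.20)                                  # 0.06 s: vowel starts 60 ms after C
tdist(-45, -8, -38, -5, -50.4, -10.2, -42.1, -7.3)  # illustrative coordinates only
```

The next step is to model the effect of these per-token measures on C-V lag.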
To do this, we fit a series of nested linear mixed effects models to C-V lag. All models contained a random intercept for subject. We explored a baseline model with fixed effects for [VOWEL]{.smallcaps} (o, u, ou), [CONSONANT]{.smallcaps} (b, m), and [TONE]{.smallcaps} (1, 2, 3, 4). We ultimately dropped [TONE]{.smallcaps} from the baseline model because it did not improve over a model with just [VOWEL]{.smallcaps} and [CONSONANT]{.smallcaps} as fixed effects. This was somewhat expected since we deliberately selected vowels unlikely to be influenced by tone. Both remaining fixed factors in the baseline model were treatment coded -- "o" /uo/ was the reference category for [VOWEL]{.smallcaps} and "b" /p/ was the reference category for [CONSONANT]{.smallcaps}. To this baseline model, we added one of our main factors of interest: TDx or Tdist. We also investigated whether another kinematic variable, peak velocity of the vowel gesture, explained C-V lag above and beyond the variables related to TD position at the onset of movement, i.e., TDx and Tdist. The modeling results are given in the next section following some visualization and description of the main factors of interest. Results {#S3} ======= Effect of Spatial Position on C-V Lag {#S3.SS1} ------------------------------------- [Figure 5](#F5){ref-type="fig"} shows the probability density functions of C-V lag in raw milliseconds (i.e., not normalized) for the three vowels, fitted by kernel density estimations. We report the distribution in milliseconds to facilitate comparison across studies. The solid black vertical line at the 0 point indicates no lag -- the vowel and the consonant start at the same time. In tokens with negative lag (the left side of the figure) the vowel started movement before the consonant; in tokens with a positive lag (right side of the figure), the consonant starts movement before the vowel. The distribution of lag values is centered on a positive lag for all three vowels, indicating that, on average, vowel movement follows consonant movement. Moreover, the size of the lag is comparable to what has been reported in past studies of CV lag in Mandarin ([@B15]; [@B67]) and other lexical tone languages ([@B26]; [@B24]; [@B27]). There is also, however, substantial variation. The main aim of this paper is to evaluate whether the variability observed in CV lag is related to variability in the spatial position of the tongue dorsum at the onset of movement. ![Kernel density plot of lag values by vowel. The legend shows the Pinyin for the vowels, which correspond to: "o" /uo/, "ou" /ou/, "u" /u/.](fpsyg-10-02726-g005){#F5} The distribution of tongue backness values (as indicated by TDx at the onset of movement of the TD toward the vowel target) was multi-modal, due to inter-speaker variation in the size of the tongue and the placement of the TD sensor. To normalize for speaker-specific sensor location and lingual anatomy, we calculated *z*-scores of TDx within speaker. The normalized values are centered on 0. We also normalized the C-V lag measures by *z*-score. The normalized measures of C-V lag and TDx are shown in [Figure 6](#F6){ref-type="fig"}. The resulting distributions for both TDx and C-V lag are roughly normal. ![Kernal density plot of normalized C-V lag **(A)** and TDx **(B)**. The legend shows the Pinyin for the vowels, which correspond to: "o" /uo/, "ou" /ou/, "u" /u/.](fpsyg-10-02726-g006){#F6} The main result is shown in [Figure 7](#F7){ref-type="fig"}. 
The normalized measure of C-V lag is plotted against TDx, i.e., tongue dorsum backness at movement onset. The figure shows a significant negative correlation (*r* = −0.31; *p* \< 0.001). Variation in C-V lag is correlated with variation in the spatial position of the tongue dorsum at the onset of movement. C-V lag tends to be shorter when the tongue dorsum is in a more anterior position at movement onset. When the starting position of the TD is more posterior, i.e., closer to the vowel target, C-V lag is longer. Thus, [Figure 7](#F7){ref-type="fig"} shows that the vowel gesture starts earlier, relative to the consonant gesture, when it has farther to go to reach the target. To evaluate the statistical significance of the correlation in [Figure 7](#F7){ref-type="fig"}, we fit linear mixed effects models to C-V lag, using the lme4 package ([@B3]) in R. The baseline model included a random intercept for speaker and fixed effects for vowel quality and onset consonant. A second model added the main fixed factor to the baseline model. To index the position of the tongue dorsum relative to the vowel target, we considered both TDx and Tdist as fixed factors. For both of these factors as well as for C-V lag, we used the z-score-normalized values in all models. The normalized values of TDx and Tdist were highly collinear (*r* = 0.48^∗∗∗^), which prevents us from including both in the same model. ![Scatter plot of C-V lag (*y*-axis) and Tongue Dorsum backness (*x*-axis). The legend shows the Pinyin for the vowels, which correspond to: "o" /uo/, "ou" /ou/, "u" /u/.](fpsyg-10-02726-g007){#F7} As expected, the effects of these factors on C-V lag were quite similar. The correlation between Tdist and C-V lag was slightly weaker (*r* = −0.28^∗∗∗^) than the correlation between TDx and C-V lag. Adding TDx to the model led to a slightly better improvement over baseline than Tdist. We therefore proceed by using TDx as our primary index of the starting position of the tongue dorsum. We also considered whether the speed of the vowel movement impacts C-V lag. The peak velocity of articulator movements is known to be linearly related to gesture magnitude, i.e., the displacement of the articulator in space ([@B33]; [@B36]). For this reason, TDx, which, as shown above, is strongly correlated to Tdist, is also highly correlated with the peak velocity of the movement (*r* = 0.33, *p* \< 0.001). The natural correlation between peak velocity and displacement can be normalized by taking the ratio of peak velocity to displacement, a measure sometimes referred to as kinematic stiffness ([@B1]; [@B48]; [@B40]; [@B61]). This provides a kinematic measure of speed that can be assessed across variation in TDx. We evaluated the correlation between stiffness and C-V lag and found that there was no effect (*r* = −0.03). This indicates that gesture velocity, once gesture magnitude is factored in, has no effect of C-V lag. Adding TDx resulted in significant improvement to the baseline model (χ^2^ = 125.52; *p* \< 2.20E-16). Moreover, the increased complexity of the model is justified by the variance explained. The six degrees of freedom in the baseline model increased to seven degrees of freedom in the baseline + TDx model, but the AIC and BIC scores were lower in the baseline + TDx model (AIC~baseline~ = 2607.2, AIC~baseline+TDx~ = 2483.7; BIC~baseline~ = 2636.3, BIC~baseline+TDx~ = 2517.7). This indicates that the spatial position of the tongue dorsum has a significant effect on inter-gestural timing. 
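As a rough illustration of the nested comparison just described, using the lme4 package the text cites, the model pair might be set up as below. The data frame `d` and its column names are assumptions (with `cv_lag` and `tdx` z-scored within speaker), not the authors' actual code.

```r
library(lme4)

# `d`: one row per token (hypothetical), with z-scored cv_lag and tdx,
# plus factors for vowel, consonant and speaker.
m_base <- lmer(cv_lag ~ vowel + consonant + (1 | speaker), data = d, REML = FALSE)
m_tdx  <- lmer(cv_lag ~ vowel + consonant + tdx + (1 | speaker), data = d, REML = FALSE)

anova(m_base, m_tdx)   # likelihood-ratio (chi-square) test of the TDx term
AIC(m_base, m_tdx)     # lower AIC for m_tdx justifies the extra parameter
summary(m_tdx)         # fixed-effect estimates (beta) and t-values, including tdx
```

Replacing `tdx` with a `tdist` column would give the parallel comparison mentioned above; as noted, the two predictors are too collinear to be entered together.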
A summary of the fixed effects for our best model, baseline + TDx, is as follows. [VOWEL]{.smallcaps} had only a marginal effect on C-V lag. The effect of [CONSONANT]{.smallcaps} was negative (β = −0.276; *t* = −4.722^∗∗∗^), indicating that syllables that begin with \[m\] have shorter C-V lag than those that begin with \[p\], the intercept category for the consonant factor. The strongest fixed factor in the model was that of TDx (β = −0.559; *t* = −12.245^∗∗∗^). The strong negative effect indicates, as shown in [Figure 7](#F7){ref-type="fig"}, that C-V lag decreases with increases in TDx. Larger TDx values indicate a more anterior position of the tongue. Since the vowel targets in the stimuli were all posterior (back vowels), the negative effect of TDx can be interpreted as shorter C-V lag values in tokens with more front starting positions for the vowel. In other words, the farther the tongue dorsum is from the (back) vowel target, the earlier the movement starts (and, thus, the shorter the C-V lag). Exemplification of the Main Result {#S3.SS2} ---------------------------------- The general trend in the data is that C-V lag decreases with the anteriority of the tongue. To put this another way, movement toward the vowel target (relative to the consonant) is delayed when the tongue happens to be already near the target position. This pattern is exemplified with specific tokens in [Figure 8](#F8){ref-type="fig"}. The top left panel shows the mean position of the sensors at the target of /uo/ for one speaker. At the target, the average backness of the TD sensor is −50.4(3.2) mm (black circles). The panel on the upper right zooms in on the position of the TB and TD sensors for two tokens, token 168, shown as red circles is relatively close to the vowel target for /uo/. Token 280, in contrast, is further away (green circles). The bottom two panels compare the time course of movement for each of these tokens. The panel on the left shows token 168, which starts closer to the target. In line with the general trend in the data, movement toward the target in token 168 is somewhat late relative to the lip aperture gesture. TD movement toward the target does not start until about halfway through the closing phase of the labial gesture. The TD movement in token 280, shown on the right, starts earlier in the phase of the consonant. Consequently, the lag between the consonant gesture and the vowel gesture is shorter in token 280 (right) than in token 168 (left). ![Comparison of two tokens differing in both the backness of the tongue dorsum (TDx) at the onset of vowel movement and C-V lag. In line with the general trend in the data, C-V lag is greater when the tongue dorsum is closer to the target **(left)** than when it is further away **(right)**.](fpsyg-10-02726-g008){#F8} Extension to Unrounded Vowels {#S3.SS3} ----------------------------- The target items in this study involved labial consonants followed by rounded vowels. As described above, we selected high back vowels since they are known to resist tonal coarticulation. However, since high back vowels in Mandarin Chinese are rounded, there is a potential for interaction between gestural control of the lips by the labial consonant and gestural control by the rounded vowel. 
While the particular nature of this interaction for Mandarin is not known, some possibilities include gestural blending, whereby the movement of the lips results from a compromise between temporally overlapped task goals, or gesture suppression, whereby one of the overlapping gestures takes full control of the articulator. In the task dynamics model, these outcomes are dictated by the blending strength parameter ([@B47]), which is hypothesized to be language specific ([@B25]). In some languages, the labial and dorsal components of high back rounded vowels enter into a trading relation such that the degree of rounding, for, e.g., /u/, varies with the degree of tongue dorsum retraction ([@B38]). This raises the question -- to what extent is our main result related to the presence of rounding for the vowels? To address this question, we extended our analysis to unrounded vowels, /a/ and /i/, drawing on EMA data reported in [@B49]. The items in [@B49] included multiple repetitions of /pa/ and /pi/ produced with all four Mandarin tones by the same six speakers analyzed in this study. Following the procedure outlined in section "Experiment", we calculated C-V lag and TDx position for /pa/ and /pi/ syllables. A total of 470 tokens (233 /pa/ tokens; 237 /pi/ tokens) were analyzed. Both syllables show a correlation between C-V lag and TDx that is similar in strength to what we observed for high back vowels ([Figure 7](#F7){ref-type="fig"}). For /pa/, the direction of the correlation was negative (*r* = −0.36; *p* \< 0.001), the same direction as for the high back vowels. When the tongue dorsum is in a more front position (farther from the /a/ target), C-V lag tends to be shorter, indicating an earlier vowel movement relative to the consonant; when the tongue dorsum is in a more back position (closer to the /a/ target), C-V lag is longer. We observed the same pattern for the low back vowel, which is unrounded, as we observed for the high back vowels, which are rounded. The correlation between C-V lag and TDx is similarly strong for /pi/ syllables (*r* = 0.45; *p* \< 0.001), but the correlation is positive. The positive correlation for /pi/ makes sense given the anterior location of the vowel target. In contrast to the back vowels, a relatively front tongue dorsum position puts the tongue close to the /i/ target; in this case, C-V lag tends to be long, indicating a delayed vowel gesture onset (relative to the consonant). [Figure 9](#F9){ref-type="fig"} provides a scatterplot of C-V lag and TDx for /pi/ and /pa/. The positive correlation for /pi/ is essentially the same pattern as the negative correlation observed for /pa/ and for the high back vowels that served as the main target items for the study. From this we conclude that whatever the effect of vowel rounding is on the lip gestures in Mandarin, it does not seem to have any influence on the relation between TDx position at the onset of the vowel gesture and C-V lag. We observe the same pattern across rounded and unrounded vowels. ![Scatter plot of C-V lag (*y*-axis), as indexed by the onset of the gestures, and Tongue Dorsum backness (*x*-axis), as indexed by TDx at the onset of movement, for /pa/ and /pi/ syllables. Larger values of TDx indicate a more front tongue position. For /pa/, there is a negative correlation -- shorter C-V lag when TD is more front (farther from the /a/ target) and longer C-V lag when TD is more back (closer to the /i/ target). 
For /pi/, there is a positive correlation -- shorter C-V lag when the TD is more back (farther from the /i/ target) and longer C-V lag when TD is more front (closer to the /i/ target).](fpsyg-10-02726-g009){#F9} Discussion {#S4} ========== Analysis of C-V lag in Mandarin monosyllables confirmed patterns reported in the literature and also revealed new effects that have theoretical implications for models of speech timing control. First, we found that C-V lag in the Mandarin syllables in our corpus, which all have lexical tone, tends to be positive. The vowel typically starts well after the consonant. This pattern, positive C-V lag, has been reported for Mandarin before ([@B14], [@B15]) and for other lexical tone languages ([@B26]; [@B24]; [@B27]). C-V lag tends to be longer for languages with lexical tone than for languages that have intonational tones or pitch accents ([@B32]; [@B35]; [@B18]). In terms of millisecond duration, the C-V lag in tone languages reported in the studies above is in the range of ∼50 ms while the C-V lag for languages that lack lexical tone tends to be smaller, ∼10 ms. The C-V lag in our study was substantially longer (roughly twice as long) than in other reports of lexical tone languages ([Figure 5](#F5){ref-type="fig"}). This difference in absolute duration is probably due at least in part to the nature of our stimuli. Monosyllables read in isolation in Pinyin encourage hyperarticulation but served the specific purpose in our study of allowing variation in tongue position at the onset of movement while controlling for other factors that could influence C-V timing in longer speech samples. Another possible reason for the longer absolute C-V lag in our materials could be the onset consonants. Studies of tone and intonation tend to select sonorant consonants as stimuli to facilitate continuous tracking of *f*~0~ across consonants and vowels. Our stimuli included both a nasal onset consonant, /m/, and an oral onset consonant, /p/. Although this was not expected, there was a significant effect of onset consonant identity on C-V lag. C-V lag was significantly shorter in syllables beginning with the nasal stop than in syllables beginning with the oral stop. The longer C-V lag found in our materials overall is conditioned in part by our inclusion of oral plosive onsets. As to why oral plosives condition longer C-V lag (than nasals), we currently have no explanation. We found no effect of tone on C-V lag and only a negligible effect of vowel. Syllables with all four Mandarin tones and all three back vowels showed similarly positive C-V lag. The lack of a tone effect was expected from past work on Mandarin, including [@B14]. We avoided /i/ and /a/ vowels in our target items because past research had shown that the target tongue position for these vowels varies across tones whereas /u/ has a stable target ([@B49]). Conceivably, the effect of tone on C-V lag would be more complicated for other vowels, because a change in tone may also condition a change in the magnitude of tongue displacement toward the vowel target. The vowel written with Pinyin "o" after labial consonants is pronounced as a diphthong /uo/ in standard Mandarin; the first target of this diphthong is the same target as for the monophthong /u/. The third vowel in the study was /ou/, which is also in the high back space. From the standpoint of feed-forward models of timing, effects of vowel quality on C-V coordination are not expected in general.
This study does not offer a particularly stringent test of this assumption, since the vowel targets were similar. Rather, the materials in this study were optimized to evaluate effects of variation at the onset of the vowel. We found a significant effect of the main factor of interest in this study. The spatial position of the tongue dorsum at the onset of vowel movement had a significant effect on C-V lag. We also showed that this main pattern generalized to /a/ and /i/ by re-analyzing data from [@B49]. C-V lag values showed substantial token-by-token variation ([Figure 5](#F5){ref-type="fig"}); however, the variation was not random. Variation in when the vowel movement starts relative to the consonant was systematically related to the spatial position of the tongue dorsum. When the tongue dorsum was further forward -- farther from the vowel target -- movement started earlier than when the tongue dorsum was further back -- closer to the vowel target. This type of behavior is not expected from a strictly feedforward model of relative timing control, such as the coupled oscillator model of inter-gestural timing ([@B16]). However, the results are not inexplicable. There are a range of possible explanations. Before moving on to discuss possible theoretical explanations for the pattern, we first address a potential limitation of the study. Our strategy of eliciting words in isolation was successful in that we obtained variation in the starting position of the tongue dorsum. The structure of this variation played an important role in revealing the main result. Since the stimuli consisted of labial consonants followed by vowels, each trial ended with the mouth in an open position (for production of the vowel) and the next trial began with a labial gesture, requiring either narrowing of the lips (/f/ in some filler trials) or closure (/m/, /p/). This design allows for the possibility that participants take up a rest posture in between trials which involves lip closure. In labeling the gestures for further analysis, we noticed that the lips typically remained open until the onset of the labial gesture; however, a small number of tokens involved lip closures that were unusually early, possibly because the lips closed before active control associated with the target stimuli. These tokens show up as outliers to the statistical distribution for the lip aperture gesture, i.e., extra long closure duration. Since our analysis did not exclude statistical outliers, we consider here the possible impact that they could have on our main result. To assess the role of outliers resulting from early closure, we re-ran our analysis excluding outliers using each of two well-established methods: *a priori* trimming and outlier removal through model critique ([@B2]). The mean lip aperture duration in the data was 327 ms (SD = 117); the median was 300 ms (27 ms shorter than the mean), which, consistent with our token-by-token observations from labeling, suggests a skew toward longer duration outliers. Following the *a priori* trimming method, we excluded tokens from analysis that were three standard deviations from the mean lip aperture duration value and re-fit the nested lmer models reported above. Removing outliers in this way improved the model fit, as indicated by a lower AIC:2382 for trimmed data set, c.f., 2483 for full data set. 
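A minimal sketch of the *a priori* trimming step, continuing with the hypothetical data frame and model from the sketch above (the duration column name is an assumption):

```r
# Drop tokens whose lip aperture duration lies more than 3 SD from the mean,
# then refit the best model (baseline + TDx) on the trimmed data.
mu  <- mean(d$la_duration)
sdv <- sd(d$la_duration)
d_trim <- subset(d, abs(la_duration - mu) <= 3 * sdv)

m_tdx_trim <- lmer(cv_lag ~ vowel + consonant + tdx + (1 | speaker),
                   data = d_trim, REML = FALSE)
AIC(m_tdx_trim)   # the text reports a drop from 2483 (full data) to 2382 (trimmed)
```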
The effect of TDx on C-V lag was reduced slightly following *a priori* trimming, as indicated by the coefficient estimate for TDx: for the trimmed data set β = −0.53(SE = 0.043), c.f., for the full data set β = −0.56 (SE = 0.046). The slight change in the coefficient is reflected as well in the pearson's correlation between C-V lag and TDx: *r* = −0.30 for the trimmed data set vs. *r* = −0.31 for the full data set. We also removed outliers via model critique. Following the method suggested in [@B2], we removed outliers to our best fitting model. Residuals to model fit greater than three standard deviations were removed and the model was refit to the trimmed data set. The resulting model showed further improvement; AIC dropped to 2297. The coefficient for TDx decreased slightly β = −0.52 (SE = 0.043). The pearson's correlation between C-V lag and TDx was the same as for the a prior trimming: *r* = −0.30. Removing outliers based on model fit does not directly reference lip aperture duration. Nevertheless, this approach produced similar results to removing outliers with unusually long lip closure duration (*a priori* trimming). Removing outliers based on lip closure duration had the effect of improving model performance overall with only a negligible influence on the estimate for TDx. This suggests that the occasional long labial closure in the data introduced noise (unexplained variance) in the model but did not have a substantial influence on the observed relation between spatial position (TDx) and intergestural timing (C-V lag). We focus the remainder of this discussion on two possible explanations for the main result (section "Downstream Targets" and "Neutral Attractors") as well as some additional theoretical implications (section "Additional Theoretical Implications"). Downstream Targets {#S4.SS1} ------------------ One possible explanation is that gesture coordination makes use of a richer set of gestural landmarks than just gesture onsets. For example, [@B12] proposes a set of five articulatory landmarks which are referenced by a grammar of gestural coordination. These landmarks include the onset of movement, the achievement of target, the midpoint of the gesture plateau (or "c-center"), the release from target and the offset of controlled movement (p. 271). Variation in gesture onsets, as we observed for the vowel movements in this study could potentially subserve later production goals, such as the coordination of the target landmark or others landmarks that occur later in the unfolding of the gesture, i.e., after the gesture onset. To illustrate this concept, [Figure 10](#F10){ref-type="fig"} shows two coordination schemas. The left panel, [Figure 10A](#F10){ref-type="fig"} shows a pattern of synchronous consonant and vowel gestures. In this schema the vowel onset is aligned to the consonant onset -- the two gestures are in-phase. This can be contrasted with [Figure 10B](#F10){ref-type="fig"}, which shows a later vowel target. The target of the vowel in this case is timed to the offset of the consonant gesture. The coordination schema dictates that the vowel achieves its spatial target at the offset of controlled movement for the consonant. If the coordination relation controlling C-V timing references the vowel target (and not the vowel onset), the vowel onset would be constrained only by the requirement that the target is achieved at the end of the consonant gesture. This could dictate that the timing of the vowel onset varies as a function of its distance to the vowel target. 
This account suggests some degree of state-feedback from articulator position to inter-gestural timing control. If the onset of the vowel gesture is timed to achieve its target at the end of the consonant gesture, speech motor control must have access to the position of the tongue, i.e., state feedback, either through proprioception or through tactile information. ![Two schematic diagrams of timing relations. Panel **(A)** shows the onset of the vowel timed to the onset of the consonant; panel **(B)** shows the target of the vowel timed to the offset of the consonant.](fpsyg-10-02726-g010){#F10} To assess the downstream target hypothesis we calculated the lag between the vowel target and two other landmarks in the consonant gesture, the consonant *release* and consonant *offset*. These two landmarks were defined according to thresholds of peak velocity in the movement away from the consonant constriction, i.e., the positive velocity peak in [Figure 2](#F2){ref-type="fig"}. Accordingly, they are the release-phase equivalents of the onset and target landmarks. [Figure 11](#F11){ref-type="fig"} shows the distribution of lag values for C~release~ to V~target~ ([Figure 11B](#F11){ref-type="fig"}) and for C~offset~ to V~target~ ([Figure 11C](#F11){ref-type="fig"}). These are obtained by subtracting the consonant landmark from the vowel landmark, V~target~ - C~offset~. For comparison, the lag values for C~onset~ to V~onset~, first presented in [Figure 5](#F5){ref-type="fig"}, are repeated as [Figure 11A](#F11){ref-type="fig"}. The top panels show schemas of lag measurements and the bottom panels show kernel density plots. In each plot a vertical black line is drawn at the 0 point. For C~onset~ to V~onset~ ([Figure 11A](#F11){ref-type="fig"}) and C~release~ to V~target~ ([Figure 11B](#F11){ref-type="fig"}), the lag is positive (on average). For C~offset~ to V~target~ ([Figure 11C](#F11){ref-type="fig"}), the probability mass is centered on zero. Although there is substantial variability around the mean, the target of the vowel occurs, on average, at the offset of the consonant. This pattern is consistent with the downstream target hypothesis. The target of the vowel is aligned to the offset of the consonant. In order to achieve the vowel target at the offset of consonant movement, movement toward the vowel target must start during the consonant gesture. How much earlier in time the vowel gesture starts is free to vary with the spatial position of the relevant articulators. ![Temporal lag between three sets of C-V landmarks: **(A)** C~onset~ to V~onset~; **(B)** C~release~ to V~target~; **(C)** C~offset~ to V~target~. The top row shows a schema for the lag measurement. The schema represents the C-V timing pattern under which the lag measure is zero (perfect alignment). The bottom row shows the distribution of lag values. Lag measures were computed by subtracting the vowel landmark from the consonant landmark. The average lag between the C~offset~ and V~target~ **(C)** is zero; in contrast, the average lag for the schemas in **(A)** and **(B)** is positive.](fpsyg-10-02726-g011){#F11} The alignment between C~offset~ and V~target~ ([Figure 11C](#F11){ref-type="fig"}) has a possible alternative explanation. Since the vowels of our target items are rounded, it is possible that C~offset~ corresponds to an articulatory landmark associated with the labial component of the vowel instead of the consonant release phase. 
A hint of this possibility is apparent in the lip aperture (LA) signal in [Figure 8](#F8){ref-type="fig"} (left), token 168, which shows a multi-stage time function. There is an abrupt decrease in LA velocity at around 900 ms; after this change, LA widens more slowly until around 1200 ms, when the TD achieves its target. It is possible that control of LA passes smoothly from the consonant gesture to a vowel gesture in such a way that the threshold of peak velocity applied to LA picks up on the labial component of the vowel, instead of the actual C~offset~, which could occur earlier, i.e., around 900 ms in token 168. We therefore pursue another set of predictions that can differentiate the alignment schemas in [Figure 10](#F10){ref-type="fig"}. To further evaluate the alignment schemas in [Figure 10](#F10){ref-type="fig"}, we conducted an analysis that leverages the temporal variability in the data. Articulatory coordination, like biological systems more generally, exhibit variation, owing to a wide range of factors. In assessing the predictions of control structures, such as the coordination schema in [Figure 10B](#F10){ref-type="fig"}, we therefore look to the patterns of variability that are uniquely predicted. This approach follows past work exposing coordination relations by examining how they structure temporal variability in kinematic ([@B52], [@B53]; [@B13]; [@B51]). To exemplify, consider [Figure 12](#F12){ref-type="fig"}. The top panels repeat the schema in [Figure 10](#F10){ref-type="fig"}; the bottom panels show the same schema with longer consonant gestures. As the consonant gesture increases in length from the top panels to the bottom panels, we observe different effects on C-V lag. In the left panel, where the vowel onset is timed to the consonant onset, there is no effect of consonant duration on C-V lag. In the right panel, in contrast, C-V lag increases with consonant duration. Since the vowel is timed to the offset of the consonant, a longer consonant entails longer C-V lag (assuming that gesture duration for the vowel remains constant). This prediction can also be tested in our data. Moreover, testing this prediction does not require that we disentangle the release of the labial consonant from the labial component of the vowels. If the vowel target is timed to any landmark of the consonant following the consonant target, then an increase in consonant duration predicts an increase in C-V lag. ![Comparison of two C-V coordination schema under different consonant durations. The **top panels** show shorter consonants and the **bottom panels** show longer consonants. As consonant duration increases from the **top panel** to the **bottom panel**, C-V lag is increased only for the schema on the right, where the vowel target is timed to the release of the consonant.](fpsyg-10-02726-g012){#F12} To evaluate this prediction, we investigated the correlation between C-V lag and the closing phase of the consonant. The closing phase of the consonant was defined as the duration from the onset of consonant movement to the achievement of target in the lip aperture signal, defined by a threshold of peak velocity (see [Figure 2](#F2){ref-type="fig"}). A positive correlation between C-V lag and consonant duration is predicted by the downstream target hypothesis ([Figure 12](#F12){ref-type="fig"}: right) but not by the C-V in-phase hypothesis ([Figure 12](#F12){ref-type="fig"}: left). If the consonant and vowel gestures are in-phase, then C-V lag should be unaffected by consonant duration. 
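For concreteness, the following minimal sketch shows how the two measures could be derived from gestural landmark times and correlated. The column names (`c_onset`, `c_target`, `v_onset`) and the toy values are assumptions for illustration only; this is not the analysis code used in the study.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical table of landmark times (in ms), one row per token.
# Column names are assumptions, not those used in the original study.
tokens = pd.DataFrame({
    "c_onset":  [100, 120, 95, 130],   # onset of lip closing movement
    "c_target": [180, 230, 170, 250],  # achievement of consonant target
    "v_onset":  [150, 200, 140, 215],  # onset of tongue-dorsum (vowel) movement
})

# C-V lag: vowel onset relative to consonant onset.
tokens["cv_lag"] = tokens["v_onset"] - tokens["c_onset"]

# Closing phase of the consonant: movement onset to achievement of target.
tokens["c_closing"] = tokens["c_target"] - tokens["c_onset"]

# In-phase timing predicts no dependence of cv_lag on c_closing;
# a downstream (target-based) alignment predicts a positive correlation.
r, p = pearsonr(tokens["c_closing"], tokens["cv_lag"])
print(f"r = {r:.2f}, p = {p:.3f}")
```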
The correlation between C-V lag and consonant duration was quite high (*r* = 0.61, *p* \< 0.001), which is consistent with the downstream target prediction. A scatter plot is shown in [Figure 13](#F13){ref-type="fig"}. ![A scatter plot of C-V lag (*y*-axis) and the duration of the closing phase of the consonantal gesture.](fpsyg-10-02726-g013){#F13} [Figure 13](#F13){ref-type="fig"} shows that temporal variation in C-V lag is structured in a manner consistent with [Figure 12](#F12){ref-type="fig"}: right. Variation in consonant duration stems from numerous factors, including individual differences that may have a neuro-muscular basis ([@B10]; [@B58]; [@B59]). Nevertheless, this variability is useful in exposing the underlying control structure. As consonant duration varies, C-V lag also varies in a manner predicted by downstream targets, as in [Figure 10B](#F10){ref-type="fig"}, but not by in-phase timing, [Figure 10A](#F10){ref-type="fig"}. The significant correlation is predicted by any alignment pattern in which the vowel target is timed to a consonant landmark later than the consonant target. Despite variation in speech rate and the absolute duration of consonantal and vocalic intervals, we observe consistency in temporal covaration predicted by a specific pattern of gesture coordination. [@B51] report a similar result for English. The pattern of temporal variation found across 96 speakers followed the predictions of a common pattern of gestural coordination, even as the absolute duration of consonant and vowel intervals varied substantially. While our discussion has focused so far on intergestural timing, i.e., the timing of the vowel gesture relative to the consonant, the target-based timing account described above also suggests something about intra-gestural control that can be tested in the data. The vowel gesture may start earlier in time when it has farther to go to reach the target and starts later in time when there is less distance to travel. Stated this way, the timing of the vowel onset is relative not to the consonant (i.e., inter-gestural timing) but to the distance to the vowel target, i.e., gesture amplitude. Notably, this particular relation is one that is predicted by a non-linear dynamical system with an anharmonic potential and not by a linear dynamical system ([@B55]: 204). To provide a direct test of this hypothesis about intra-gestural timing, [Figure 14](#F14){ref-type="fig"} plots vowel gesture amplitude, as indexed by the displacement of TDx from vowel onset to vowel target, against the duration of the opening phase of the vowel, as indexed by the temporal interval from vowel onset to vowel target. There is a significant positive correlation between gesture amplitude and gesture duration (*r* = 0.45; *p* \< 0.001). This result helps to sharpen the interpretation of the C-V lag results as well. It appears that the vowel gesture starts earlier when it has farther to go to reach the target, an aspect of intra-gestural control consistent with a non-linear dynamical systems model of the gesture. ![Scatter plot of vowel gesture duration (*y*-axis), as measured from the onset of movement to the achievement of target based on the TDx trajectory, and gesture amplitude (*x*-axis) as measured from the degree of TD sensor displacement in the anterior-posterior dimension (i.e., TDx).](fpsyg-10-02726-g014){#F14} We were curious as well about whether the variation in vowel gesture onset has consequences for acoustic vowel duration. 
Since the onset of vowel gestures typically takes place sometime during the consonant closure, variation in the gesture onset is potentially masked in the acoustics by the overlapped consonant. To investigate this, we measured the interval from the acoustic onset of the vowel, as indicated by the onset of formant structure, to the articulatory vowel target (as per [Figure 2](#F2){ref-type="fig"}). This acoustic interval of the vowel was *not* positively correlated with the magnitude of the vowel gesture (TDx). There was a slight negative correlation (*r* = −0.15, n. s.). This indicates that the strong correlation between gesture magnitude and gesture duration is largely masked in the acoustic vowel interval from onset of voicing to the vowel target. The distance of the tongue to the vowel target (gesture amplitude), which is significantly correlated with vowel start times and is reflected in C-V lag, does not correlate with acoustic vowel duration. Neutral Attractors {#S4.SS2} ------------------ A second possible explanation for the main result is that there is a neutral attractor at work. Neutral attractors have been hypothesized to take control of articulators that are not otherwise under gesture control ([@B47]). When a gesture achieves its target, control of the model articulator falls to the neutral gesture, which will drive the articulator toward a neutral position. The explanation of the main result -- that TD position correlates with C-V lag -- in terms of a neutral attractor is as follows. Consider again two tokens that differ in the position of the TD during the pre-speech period of silence ([Figure 8](#F8){ref-type="fig"}). When the TD is at an extreme position, the neutral attractor drives it toward a neutral position before the vowel gesture takes control. The momentum of the articulator movement controlled by the neutral attractor carries over to gestural control by a vowel. On this account, vowels with more extreme tongue dorsum positions may appear to start earlier in time relative to the overlapped consonant because control of the TD passes smoothly from a neutral attractor to a vowel gesture. In contrast, when the TD is already in a neutral position, movement does not start until the vowel gesture is activated. On this account, the early onset of vowel gestures that begin far from targets is an epiphenomenon of neutral attractor control. The contrast between a token with early TD movement and one with later movement is shown in [Figure 15](#F15){ref-type="fig"}. The top panel shows the token with a non-extreme TD backness position. The green box shows the vowel gesture activation interval, terminating with the achievement of target. The bottom panel illustrates the neutral attractor proposal. The yellow box shows the neutral attractor which drives the TD away from an extreme front position. Since the vowel target is back, the neutral attractor happens to be driving the TD in the same direction as the vowel gesture, which kicks in at the same time across tokens. Typical heuristics for parsing gesture onsets from EMA trajectories based on the velocity signal, including those used in this paper, would likely be unable to differentiate between movement associated with the vowel gesture proper (top panel) and movement that is associated with a sequence of neutral attractor followed by a vowel gesture. ![Two vowel tokens from our EMA data are shown with hypothetical gesture control structures overlaid. 
The **top panel** illustrates a case in which only a vowel gesture controls movement. The **bottom panel** illustrates a case in which a neutral attractor first brings the TD from an extreme front position to a less extreme position before the vowel gesture takes control.](fpsyg-10-02726-g015){#F15} Notably, the neutral attractor analysis does not necessarily require the type of state-feedback discussed for the "downstream target" alternative. In this sense, the neutral attractor account of our data is parsimonious with the two level feedforward model of AP. However, the need for bidirectional interaction between inter-gestural and inter-articulator levels has been argued for elsewhere ([@B46]) and other more recent developments in the AP framework may render neutral attractors less necessary than in earlier work. For example, [@B34] pursues the hypothesis that the movement toward and away from constrictions are controlled by independent gestures. On this account, the "split-gesture" hypothesis, it is less clear that a neutral attractor is needed at all to return articulators to a neutral position, as this could be accomplished by the release gesture associated with consonants. Other empirical work has identified cases of anticipatory movements in speech which at times pre-empt the linguistically specified timing pattern and cannot easily be explained by a neutral attractor ([@B11]; [@B57]). Using real-time MRI, [@B57] observed a range of idiosyncratic (across speaker) patterns of anticipatory movement during silence. He suggested that neutral attractors, if they were to account for the data, would have to be sensitive to upcoming gestures. Other relevant anticipatory movement phenomena include [@B62], who found that, when reading aloud, speakers plan coarticulation based upon available information in the visual stimulus. Similarly, [@B11] observed anticipatory articulatory movements in response to subliminal presentation of words in a masked priming task. These findings suggest that orthographic stimuli, even when brief (\<50 ms) or absent until speech initiation, condition anticipatory speech movements. Phonetically sensitive neutral attractors have been suggested elsewhere in the literature ([@B41]) but this proposal would have to be developed significantly to encompass the broader range of articulatory phenomena. Thus, while, in the case of our data, a "standard" neutral attractor, i.e., per [@B47], may be sufficient to account for anticipatory movement, alternative mechanisms, e.g., release gestures, planning gestures or otherwise, "phonetically sensitive" attractors are theoretical developments that could potentially subsume the neutral attractor analysis. In closing this section, we would like to highlight that the two possible theoretical explanations that we've offered for the effect of spatial position on relative timing are not mutually exclusive. The neutral attractor could explain some of the early vowel movements, even if the downstream target hypothesis is also correct. The preceding discussion of neutral attractors notwithstanding, it's possible that both mechanisms are independently necessary. The relative variability of movement onsets in contrast to movement targets has been noted in past work ([@B39]) and discussed as evidence against a system of speech timing control driven by movement onsets ([@B60]). 
While the neutral attractor may explain some of the variability found generally for gesture onsets in this and other studies, we note that the neutral attractor hypothesis does not predict the correlation between consonant (closing phase) duration and C-V lag, which was found to be quite strong. This correlation (C-closing and C-V lag) could instead be attributable to yet another factor, such as a general slowdown (scaling) of the clock related to, e.g., speech rate, or to the interaction between general slowdown and an amplitude-gesture duration tradeoff predicted by a non-linear dynamical system. However, such a factor would also predict a positive correlation between C-V lag and vowel duration, which was not shown in our data (see section "Neutral Attractors"). Additional Theoretical Implications {#S4.SS3} ----------------------------------- On average, C-V lag (C~onset~ to V~onset~) is positive in our data, which may be driven by the interaction between competing forces on coordination, as per the coupled oscillator model of gesture coordination ([@B16]). Such positive C-V lag in tone languages has been explained by the hypothesis that the onset of the tone gesture is temporally aligned with the offset of the consonant gesture (anti-phase timing) while the vowel onset is competitively coupled to both the consonant and tone gestures ([@B14]). However, if the downstream target hypothesis generalizes to tone, then the positive C-V lag found generally for syllables with lexical tone may also have an alternative explanation in terms of downstream targets. Tones, just as vowels, may be timed with reference to a tonal target or to other downstream landmarks, as opposed to the tone onset. Cross-linguistically, it seems necessary for tones to have different modes of syllable-internal alignment. In Dzongkha, for example, tones appear to be left-aligned within the syllable, in that the high and low tones are most distinct near the onset of voicing ([@B28]). Tones in Mandarin, in contrast, are differentiated later in the syllable ([@B31]; [@B54]). In Dinka, the timing of tones within a syllable is minimally contrastive ([@B44]). These cross-linguistic patterns suggest a richer ontology of syllable-internal timing patterns than may be possible if coordination makes reference only to gesture onsets. Conclusion {#S5} ========== Consonant and vowel gestures in Mandarin were generally not synchronous in our data. The vowel movement typically began after the consonant, which is consistent with past work on Mandarin and other lexical tone languages ([@B15]; [@B24]; [@B27]; [@B67]). The spatial position of the tongue influenced when the vowel movement began relative to the consonant. This is, to our knowledge, the first direct evidence that the spatial position of the articulators conditions the relative timing of speech movements in unperturbed speech (c.f., [@B46]). On the face of it, this finding seems to challenge strictly feed-forward models of timing control, adding to past experimental evidence for bidirectional interaction between the inter-gestural level and the inter-articulator level of speech movement control. We discussed two possible explanations for the effect. The first proposal involves downstream targets. Movement onsets vary with spatial position to achieve coordination of later articulatory events. In this case, it would be necessary for state-based feedback to inform relative timing. 
Moreover, since the onset of vowel movement often occurred before phonation (during silence), the relevant state-based feedback must be somatosensory (likely proprioceptive) in nature. The "downstream targets" proposal made some additional testable predictions that are consistent with the data. As consonant duration varies, C-V lag covaries in the manner predicted by an alignment of the vowel target to some landmark in the release phase of the consonant. We also found a correlation between gesture amplitude and the duration of the opening movement of vowels, which is predicted by a non-linear dynamical model of gestures ([@B55]). The second proposal involves neutral attractors which drive articulators toward rest position when they are not under active control of a gesture. This is in many ways a simpler solution in that it treats the effect of spatial position on C-V timing as an epiphenomenon of natural speech preparation. While these are both possible accounts of our data, we note that they are not mutually exclusive and that future research is needed to fully evaluate the proposals. Regardless of the proper theoretical account of this finding, future empirical work investigating the relative timing of movement onsets should factor spatial position into the analysis. Data Availability Statement {#S6} =========================== All datasets generated for this study are included in the article/supplementary material. Ethics Statement {#S7} ================ This study was carried out in accordance with the recommendations of the Western Sydney University Interval Review Board with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the Western Sydney University Interval Review Board. Author Contributions {#S8} ==================== JS and W-RC designed the experiment, collected the data, and discussed each stage of the analysis. JS conducted the statistical analysis and wrote the first draft of the manuscript. W-RC made some of the figures. JS and W-RC contributed to the manuscript revision, read, and approved the submitted version. Conflict of Interest {#conf1} ==================== The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. **Funding.** This research was funded by a MARCS Institute grant to JS and US NIH grant DC-002717 to Haskins Laboratories. For assistance with subject recruitment, data acquisition and processing, we would like to thank Donald Derrick, Michael Proctor, Chong Han, Jia Ying, and Elita Dakhoul. We would also like to thank Doug Whalen for comments on an earlier version of this manuscript as well as the Yale Phonology group, audiences at Haskins Laboratories, Brown University, Cornell University, the University of Southern California, and LabPhon 16, where parts of this work were presented. [^1]: Edited by: Pascal van Lieshout, University of Toronto, Canada [^2]: Reviewed by: Louis Goldstein, University of Southern California, United States; Philip Hoole, Ludwig Maximilian University of Munich, Germany [^3]: This article was submitted to Language Sciences, a section of the journal Frontiers in Psychology
Dive Brief: - Organizations using some of the top cloud providers are leaving themselves vulnerable by improperly managing access privileges, according to a report from CloudKnox. When inactive or over-privileged accounts are allowed to remain active, malicious actors can gain access to organizations, where they can move laterally over time and reach sensitive information. - More than 90% of identities are using less than 5% of permissions granted, according to the report, which is based on 150 risk assessments. - Companies need to close this gap by implementing least privileged access policies and a zero trust security model, according to CloudKnox. Dive Insight: Companies are too lenient in allowing high-level access to their IT systems. Access gaps open the door for malicious threat actors to hide inside corporate systems using trusted identities to exfiltrate data. "There is an industrywide cloud permissions gap crisis that is leaving countless organizations at risk due to improper identity access management (IAM) controls," Raj Mallempati, chief operating officer at CloudKnox, said via email. Cloud infrastructure misconfigurations continue to be a problem at Global 5,000 companies, he said. "The lack of a holistic approach to cloud security across all industries along with a rapid, unplanned adoption of cloud infrastructure at scale leads to security being an afterthought," he said. Limiting access privileges has been a recurring issue during the past several months, as some of the nation's leading IT companies and government agencies recover from the nation-state attack that targeted SolarWinds as well as other sophisticated campaigns. By 2023, about 75% of security failures will result from inadequate management of identities, access and privileges, up from 50% during 2020, according to a 2020 report on privileged access and cloud infrastructure from Gartner. "The cloud poses many challenges and we're just getting a taste of this new area that needs to be accounted for," David Mahdi, senior research director at Gartner, said via email. "As more organizations leverage cloud services, they will need to ensure that they account for new risks, including identity and privilege access management in cloud environments." Gartner research has shown traditional identity tooling solutions — like identity governance & administration (IGA) or privileged access management tools (PAM) — are insufficient to manage the complexity of cloud-based user entitlements, Mahdi said. Security and IAM leaders can mitigate the risks of data breaches and outages caused by excessive entitlements by considering whether cloud infrastructure entitlements management (CIEM) capabilities are available in existing and emerging tools. "They typically use analytics, machine learning and other methods to detect anomalies in entitlements, like accumulation of privileges, dormant and unnecessary entitlements," Mahdi said.
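To make the "permissions gap" concrete, the sketch below shows one way an organization might compare granted permissions against those actually exercised, flagging identities that use only a small fraction of what they hold. The identities, permission names and 5% threshold are illustrative assumptions, not CloudKnox's methodology or any vendor's API.

```python
# Minimal sketch: compare granted permissions against permissions actually
# used (e.g., reconstructed from audit logs) and flag likely over-privilege.
granted = {
    "ci-runner":  {"s3:GetObject", "s3:PutObject", "ec2:*", "iam:PassRole"},
    "dev-alice":  {"s3:GetObject", "ec2:StartInstances", "ec2:StopInstances"},
    "svc-backup": {"s3:GetObject", "s3:PutObject"},
}

used_last_90_days = {
    "ci-runner":  {"s3:GetObject"},
    "dev-alice":  {"s3:GetObject", "ec2:StartInstances"},
    "svc-backup": set(),  # dormant identity
}

for identity, perms in granted.items():
    used = used_last_90_days.get(identity, set())
    usage_ratio = len(used & perms) / len(perms)
    unused = perms - used
    flag = "REVIEW" if usage_ratio < 0.05 or not used else "ok"
    print(f"{identity}: {usage_ratio:.0%} of granted permissions used "
          f"({len(unused)} unused) -> {flag}")
```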
https://www.cybersecuritydive.com/news/privileged-account-access-management/598465/
From 250 A.D. to the late 1500's A.D., three advanced civilizations, the Mayans, the Aztecs, and the Incas, controlled Central America and South America. Each of them was different but all shared some of the same qualities. They all were civilizations whose daily life revolved around religion. Their religions also required a lot of human sacrifices to please the gods. Also, they all invented calendars that were surprisingly accurate compared to the calendars today. One of the calendars was less than a minute from being exact. Finally, they all declined for many different reasons, but the Spanish conquistadors were one of the most common and deadliest causes, due to their advanced weapons and the diseases they brought. But even though they all had mysterious declines, these ancient civilizations of Latin America were very advanced and had many great achievements. The earliest Latin American civilization was the empire of the Mayans. They controlled most of the Yucatan peninsula and part of southern Mexico. The Mayans were known for their advanced math skills. They invented the number zero and developed a sophisticated counting system; the Incas also had an advanced counting system like the Mayans. Mayans communicated through the use of hieroglyphics. They had over 800 symbols that represented things like words, syllables, days, and numbers. The Mayans were also well known for their new type of farming. They called it slash and burn farming, in which they cut down all the trees in an area, then lit the area on fire, so that the ashes from the trees would enrich the soil and make it better to grow in. The decline of the Mayan empire is a mystery still to this day. The most popular theories are that the soil became infertile, so they ran out of food. Another theory was that they just got up and left and walked into the jungles. The only other possible theory is constant warfare between the Mayan city-states. After the fall of the Mayan empire, the great civilization of the Aztecs came into power. The Aztecs centered themselves on Lake Texcoco, in the city of Tenochtitlan. The capital city of the Aztecs was built in the center of Lake Texcoco, with large causeways, which were large sturdy bridges, connecting the city to the mainland. The Aztecs' advanced engineering allowed them to adapt to their environment just as the Mayans and Incas did. The written language of the Aztecs was a combination of hieroglyphics and pictorial symbols. Like the Mayans, the Aztecs' language was used for counting, which was more primitive in Aztec culture, days, records, and communication. The Aztecs expanded their empire through military strength. They conquered the nearby city-states, and forced them to pay tribute or face destruction. But the later need for sacrificial subjects led to less aggressive and less deadly military tactics, causing the strength of the Aztec army to diminish. Around the late 1400's into the 1500's, the power and vastness of the Aztecs began to fade. The once calm and under control city-states began to rebel, leading to less human sacrifice and fewer resources being collected. Also, the conquistadors came from Spain, bringing along with them deadly diseases like smallpox, and far superior steel weapons, along with a lust for gold. They soon conquered and destroyed the Aztec empire with almost no effort at all. The Incas, down in the Andes Mountains, never came into contact with the Central American civilizations. 
They never borrowed any customs or traditions, yet their civilization was fairly similar. They had very advanced engineering skills, which led them to the building of extensive road systems. They also began the idea of terrace farming, in which you carve out steps into a hill and farm on the man-made steps. Even though the Incas had no written language, they passed everything down through oral communication. But they did have a complicated counting system in which different sized beads and colored ropes were used to count things such as the military, crops, population, and many other things. The Incas, like the other Latin American empires, invented a calendar that had 365 days and was kept accurate by looking at the position of the stars, the moon, the planets, and the sun. Due to the complex road system the Incas built, they had to devise a simple messenger system to communicate throughout the empire. Runners would have to travel over twenty-five miles a day to carry a message to another runner; the first runner would pass off the message to the second runner and then rest. It works somewhat like a relay race, only it isn't a competition. In the late 1500's civil war began to break out between the sons of the emperor after he had died. The empire then split into two halves, but it never became whole again and just slowly began to crumble until it was gone. In conclusion, these three civilizations were the most sophisticated in all of the Americas at the time. The Mayans were excellent astronomers and mathematicians, the Aztecs were experienced warriors, and the Incas were skilled engineers. Even though all of the empires had different strengths, they all had some similar qualities. They all built stone buildings, with the Mayans and Aztecs and their well-built pyramids, and they all were polytheistic cultures that practiced sacrifices daily. All of these sacrifices led to a large loss in resources, both human and natural. The Aztecs alone would sacrifice a quarter of a million people a year. They all melted gold and made figurines to give to the gods. Soon their empires fell apart and the people of the empire just walked away. "The Mayans, The Aztecs, And The Incas" StudyMode.com. 05, 2004. Accessed 05, 2004. https://www.studymode.com/essays/Mayans-Aztecs-And-Incas-65002630.html.
https://www.studymode.com/essays/Mayans-Aztecs-And-Incas-65002630.html
Just count the wraps within one inch. Take that number and divide it in half for plain weave. If you are weaving twill, take two thirds of your wraps per inch instead. … You would use a sett of 15 epi for plain weave (30 x 0.5 = 15). How is Epi PPI calculated? How to Find Fabric EPI and PPI by Yourself? - Step 1: Collect fabric swatch. Collect fabric swatch for which you are going to find EPI and PPI. … - Step 2: Make a square of one sq. inch on the sample. … - Step 3: Count number of ends and Picks inside those squares. How do you calculate weaving production? Therefore, Cloth weight = Weight of warp + Weight of weft + Weight of size (All in-lbs.) … Fabric Production Calculation. |Type of Yarn||Moisture Regain %||Moisture Content %| |Jute||13.75||12.10| |Silk||11.00||9.91| |Rayon, Viscose||11.00||9.91| |Wool||17.00||14.50| What does EPI mean in weaving? The number of yarn ends per inch (or EPI) is the number of warp strands you must warp in one inch on the loom. It affects the design and appearance of the weaving. How is warp count calculated? Total length of warp yarn in metres, = Total number of ends × Tape length in metres. Or, = (Cloth length in metres + warp regain%) × Total number of ends. How is GSM calculated? How to Calculate Fabric GSM by GSM Cutter: - Cut the fabric with the GSM cutter (gram per square inch). - Weight the fabric with the electric balance. - The cut sample is 100 sq. cm. The weight of the cut sample is multiplied by 100. - The result is the GSM of that particular fabric. Why is EPI higher than PPI? In case of plain weave, finer yarns are used in weft, coarser yarns are used in warp and picks per inch (PPI) will be more than ends per inch (EPI). Weft crimp may be more than warp crimp therefore, it shrinks more in weft direction. How do you calculate denier? You can, however, calculate the deniers of a sample from its standard density, measured in grams per cubic centimeter. Divide the result by 4 x 10^-6, a constant conversion factor: 0.004972 / (4 x 10^-6) = 1,243. This is the yarn’s density in deniers. How do you calculate fabric count? The direct system is calculated with the formula N = (W/l) / (L/w). The indirect system uses the formula: N = (L/w) / (W/l). In these formulas, N is the yarn count, W is the weight of a sample of yarn, l is the unit of length, L is the length of the sample, and w is the unit of weight. How do you calculate ends per inch in weaving? A project with a sett of 20 epi, for example, has 20 warp ends in each inch of weaving while the project is on the loom. Determining what sett a piece has is pretty easy: count the number of warp ends and divide by the number of inches in the width. What is warp set or epi? EPI stands for Ends Per Inch and refers to how many individual warps you need for every inch of your weaving to achieve the desired type of weaving you want. It can also be called your warp sett or your warp spacing. What is EPI and PPI? Fabric, material typically produced by weaving, knitting or knotting textile fibers, yarns or threads, is measured in units such as the momme, thread count (a measure of the coarseness or fineness of fabric), ends per inch (e.p.i) and picks per inch (p.p.i). Is Epi the same as sett? The spacing of your warp is called your sett. It is usually indicated by “epi” which stands for “ends per inch” (also referred to as dpi which stands for “dents per inch”). This indicates how many warp threads (these are the verticle threads on a loom) you have in one horizontal inch. How do you calculate warp and weft? 
To calculate the amount of weft, you need to know warp width, the number of picks per inch, and the length of the weaving. I usually add ten percent to that number for weft take-up. (So for an 8″ wide warp woven at 20 picks per inch for 65″: 8″ x 20 x 65″ = 10,400″ divided by 36″/yd = 288 yd plus 10% = 317 yd. What is RPM in weaving machine? The world’s fastest jet loom The figure “2,105 rpm” means that 2,105 weft threads are inserted per one minute. How is crimp percentage calculated? The crimp percentage is the difference between straightened thread length and the distance between the ends of the thread in the fabric. - Crimp % = (l-s)/s x 100. … - Crimp % , C=(L-S)/S * 100% … - Warp Length (L), L=(1+C)* S. … - Cloth length (S), S=L/(1+C) … - Solution: - Crimp Percentage: … - Take up Percentage. … - Solution:
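Putting the formulas above together, here is a small illustrative sketch. The function names are invented for this example, but the arithmetic follows the formulas given on this page.

```python
# A small sketch of the calculations described above. Function names are my
# own; the formulas follow the ones given on this page.

def sett_epi(wraps_per_inch: float, weave: str = "plain") -> float:
    """Ends per inch from wraps per inch: half for plain weave, 2/3 for twill."""
    factor = 0.5 if weave == "plain" else 2 / 3
    return wraps_per_inch * factor

def weft_yards(width_in: float, ppi: float, length_in: float,
               takeup: float = 0.10) -> float:
    """Weft needed: width x picks/inch x length (inches) -> yards, plus take-up."""
    inches = width_in * ppi * length_in
    return inches / 36 * (1 + takeup)

def crimp_percent(straightened_len: float, cloth_len: float) -> float:
    """Crimp % = (L - S) / S * 100, where L is the straightened thread length."""
    return (straightened_len - cloth_len) / cloth_len * 100

print(sett_epi(30))                  # 15.0 epi, matching the example above
print(round(weft_yards(8, 20, 65)))  # 318 yd (the page rounds to 288 yd first and gets 317)
print(round(crimp_percent(10.8, 10.0), 1))  # 8.0 % crimp
```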
https://petiterepublic.com/mosaicism/how-is-epi-weaving-calculated.html
Botvinick, M., & Plaut, D. C. (2002). Representing task context: proposals based on a connectionist model of action. Psychological Research, 66, 298-311. DOI 10.1007/s00426-002-0103-8 Representations of task context play a crucial role in shaping human behavior. While the nature of these representations remains poorly understood, existing theories share a number of basic assumptions. One of these is that task representations are discrete, independent, and non-overlapping. We present here an alternative view, according to which task representations are instead viewed as graded, distributed patterns occupying a shared, continuous representational space. In recent work, we have implemented this view in a computational model of routine sequential action. In the present article, we focus specifically on this model’s implications for understanding task representation, considering the implications of the account for two influential concepts: (1) cognitive underspecification, the idea that task representations may be imprecise or vague, especially in contexts where errors occur, and (2) information-sharing, the idea that closely related operations rely on common sets of internal representations. Most models (e.g. Cooper and Shallice) posit discrete, independent internal representations; Botvinick and Plaut propose, instead, that representations grade into one another, and share multidimensional, conceptual space. The model is a typical connectionist model, with input nodes (sensation/perception), hidden nodes (interneurons), and output nodes (motor operations on the environment). Activation in the hidden nodes can cycle back onto those nodes, making theirs a “recurrent” connectionist model. When trained to make coffee (a la Cooper and Shallice), minor perturbations to the hidden layer produced behavioral slips (correct behavioral sequences in incorrect contexts); major perturbations appeared similar to action disorganization syndrome. First proposal: “task representations can be imprecise or underspecified.” Action slips can occur when random jitter (a simulation of degraded representation) deflects the current activity from the target activity into the space it shares with competitor actions. Frequency of experience moderates action slips. The less frequently trained the behavior, the smaller its stake in multidimensional space. The representation for items infrequently trained is therefore more fragile — even minor perturbations may result in a transition into more practiced (and therefore larger area-holding) routines. Traditional models of action representations might seem to incorporate underspecification, too, in that any of a number of hierarchies could be activated in a given context. However, these models would seem to hold that any hierarchy could be substituted in place of any other (originating at the same level of abstraction)–there’s no sense of the neighborhood of representations offered by the connectionist model. Second proposal: “task representations may be ‘shared’ by multiple, structurally interrelated activities.” A connectionist model allows similar tasks to share features, and aids generalization to novel, but related actions. The model applied what it knew about adding sugar to coffee, to add cocoa mix to water. The biggest advantage of such a model may be an account of learning. When faced with novelty (e.g. 
altering “a scoop of cocoa” to become “a BIG scoop of cocoa”), discrete symbolic models have to implement a categorical decision, some threshold below which the action is carried out according to the old representation and above which a new representation is composed anew. The connectionist model allows for continuous remodeling of representations, gradually differentiating new from old. Traditional models of action representations might seem to employ information-sharing, in that supraordinate nodes can share subordinate nodes. But the models seem to posit that these subnodes are identical, the actions they represent unchanged as we shift from one context to another. My questions - The simulation aligned with empirical evidence showing that well-learned representations occupy more multidimensional space. So, well-learned routines are more robust to violations, while more weakly represented routines are more likely to drift into the gravity well of stronger representations. Does this square with the talk last week about highly frequent words being at the forefront of language change? - The connectionist model of action here requires fairly specific feedback. The model receives information about the intended outcome, calculates some deviation score based on this and the product of the output nodes, and back-propagates some correction factor through the network. How explicit must this feedback be? How often do we receive informative-enough negative feedback about our language learning? - Can gradualist models like this account for “ah ha” moments of insight, like those documented by Kohler? Is violation of expectation possible when brand new situations present themselves? Is there even such a thing as a “brand new situation” in a connectionist network?
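For readers who want to poke at the idea, here is a minimal, untrained sketch of the kind of recurrent network described above (input → hidden with recurrent feedback → output), with noise injected into the hidden layer to stand in for a degraded task representation. The layer sizes and weights are arbitrary; this is an illustration of the architecture, not Botvinick and Plaut's actual simulation.

```python
import numpy as np

# Elman-style recurrent network: input -> hidden (with feedback from the
# previous hidden state) -> output. Weights are random and untrained.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 10, 16, 10

W_ih = rng.normal(scale=0.5, size=(n_hid, n_in))   # input -> hidden
W_hh = rng.normal(scale=0.5, size=(n_hid, n_hid))  # hidden -> hidden (recurrent)
W_ho = rng.normal(scale=0.5, size=(n_out, n_hid))  # hidden -> output

def step(x, h_prev, noise=0.0):
    """One time step; `noise` simulates a perturbation of the task context."""
    h = np.tanh(W_ih @ x + W_hh @ h_prev + noise * rng.normal(size=n_hid))
    y = W_ho @ h
    return h, y

h = np.zeros(n_hid)
x = rng.normal(size=n_in)  # stand-in for the current percept
for t in range(5):
    h_clean, y_clean = step(x, h)
    h_noisy, y_noisy = step(x, h, noise=0.5)
    # Larger divergence in the output corresponds to a greater chance that
    # the internal state drifts into a neighboring (more practiced) routine.
    print(t, np.linalg.norm(y_clean - y_noisy).round(3))
    h = h_clean
```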
http://jasonwallin.com/2014/11/10/connectionism-and-action-sequences/
Assessing the impact of socio-economic variables on breast cancer treatment outcome disparity. We studied Surveillance, Epidemiology and End Results (SEER) breast cancer data of Georgia USA to analyze the impact of socio-economic factors on the disparity of breast cancer treatment outcome. This study explored socio-economic, staging and treatment factors that were available in the SEER database for breast cancer from the Georgia registry diagnosed in 2004-2009. The area under the receiver operating characteristic curve (ROC) was computed for each predictor to measure its discriminatory power. The best biological predictors were selected to be analyzed with socio-economic factors. Survival analysis, Kolmogorov-Smirnov 2-sample tests and Cox proportional hazards modeling were used for univariate and multivariate analyses of time to breast cancer specific survival data. There were 34,671 patients included in this study, 99.3% being females with breast cancer. This study identified race and educational attainment of the county of residence as predictors of poor outcome. On multivariate analysis, these socio-economic factors remained independently prognostic. Overall, race and education status of the place of residence predicted up to a 10% decrease in cause specific survival at 5 years. Socio-economic factors are important determinants of breast cancer outcome and ensuring access to breast cancer treatment may eliminate disparities.
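As a rough illustration of the analysis steps named in the abstract (per-predictor ROC, Kolmogorov-Smirnov 2-sample tests, Cox proportional hazards modeling), a sketch along the following lines could be used. The data are synthetic and the column names are hypothetical, not actual SEER field names; the libraries shown (scikit-learn, SciPy, lifelines) are one possible toolset, not necessarily what the authors used.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from scipy.stats import ks_2samp
from sklearn.metrics import roc_auc_score

# Synthetic stand-in cohort with hypothetical column names.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "survival_months":      rng.exponential(60, n).round(1),
    "event":                rng.integers(0, 2, n),  # 1 = breast-cancer-specific death
    "race_black":           rng.integers(0, 2, n),
    "low_county_education": rng.integers(0, 2, n),
    "stage":                rng.integers(1, 5, n),
})

# Discriminatory power of a single predictor (area under the ROC curve).
death_5y = ((df["event"] == 1) & (df["survival_months"] <= 60)).astype(int)
print("AUC (stage):", roc_auc_score(death_5y, df["stage"]).round(3))

# Kolmogorov-Smirnov 2-sample test on survival times between two groups.
print(ks_2samp(df.loc[df["race_black"] == 1, "survival_months"],
               df.loc[df["race_black"] == 0, "survival_months"]))

# Multivariate Cox proportional hazards model.
cph = CoxPHFitter()
cph.fit(df, duration_col="survival_months", event_col="event")
cph.print_summary()
```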
CROSS REFERENCE TO RELATED APPLICATION This application claims the benefit of Korean Patent Application No. 10-2016-0061993 filed May 20, 2016, which is hereby incorporated by reference in its entirety into this application. BACKGROUND OF THE INVENTION 1. Technical Field 2. Description of the Related Art The present invention relates to technology for securing a quantum key generated in a Quantum Key Distribution (QKD) system. A QKD system is configured such that, when a transmission unit transmits a randomly selected quantum state using two nonorthogonal bases, a reception unit receives it and estimates the quantum state using a measurement basis randomly selected from the two bases. Such a QKD system may provide an environment in which secure key distribution is guaranteed because an eavesdropper may be detected in the process of estimating the quantum state. A QKD system has a limitation as to distance when the system is implemented. In order for two users, farther apart from each other than the allowable distance, to share encryption keys using a QKD system, a method in which encryption keys are relayed by a quantum repeater or a trustworthy key distribution center is used. Here, because it is not easy to implement a quantum repeater, a method for relaying encryption keys through a key distribution center is widely used. In this method, encryption keys, individually created by a key distribution center and users, are delivered to users. However, this method has a security weak point in that the key distribution center is aware of the encryption keys shared among the users. Meanwhile, Korean Patent Application Publication No. 10-2011-0057448, titled "A method of user-authenticated quantum key distribution", discloses a method for authenticating a quantum channel by sharing a position having the same basis without disclosing information about the basis using previously shared secret keys and checking whether there is the same measured outcome at that position in order to guarantee unconditional security of the BB84 QKD protocol, which is vulnerable to man-in-the-middle attacks. This invention was supported by the ICT R&D program of MSIP/IITP [1711028311, Reliable crypto-system standards and core technology development for secure quantum key distribution network] and the R&D Convergence program of NST (National Research Council of Science and Technology) of Republic of Korea (Grant No. CAP-18-08-KRISS). SUMMARY OF THE INVENTION An object of the present invention is to improve the security of quantum key distribution by preventing information about the encryption of quantum keys, which are finally distributed to quantum key distribution client devices, from being exposed to a quantum key-distribution center. Another object of the present invention is to improve the security of quantum key distribution through the process of a cryptographic operation on an authentication key, shared among client devices, and an output bit string, in which an error is corrected. A further object of the present invention is to distribute a quantum key encrypted with a hash function having improved security. 
In order to accomplish the above objects, a QKD center on a quantum network according to an embodiment of the present invention includes an authentication key sharing unit for sharing authentication keys with QKD client devices; a quantum key generation unit for generating sifted keys, corresponding to the QKD client devices, using quantum states; an error correction unit for generating distribution output bit strings by correcting errors of the sifted keys; and a bit string operation unit for calculating an encryption bit string by performing a cryptographic operation on the authentication keys and the distribution output bit strings corresponding to the QKD client devices. Here, the quantum key generation unit may randomly select bases for quantum states, corresponding to the QKD client devices, using quantum mechanics, compare them with the measurement bases for quantum states received from the QKD client devices, and generate sifted keys, corresponding to the bits that remain after checking the security of a channel using bits on the same basis, for the respective QKD client devices. Here, the error correction unit may generate distribution output bit strings by correcting the errors of the sifted keys. The error correction unit may correct the errors of the sifted keys using a Hamming code, the Winnow algorithm, LDPC, or the like. Here, the bit string operation unit is configured to prove the identity of the QKD center by transmitting a result of a cryptographic operation performed on the first authentication key and the first distribution output bit string to the QKD client device. Here, the bit string operation unit is configured to prove the identity of the QKD center by transmitting a result of a cryptographic operation performed on the second authentication key and the second distribution output bit string to the QKD client device. Here, the bit string operation unit may transmit the encryption bit string, calculated by performing a cryptographic operation on the second authentication key, the first distribution output bit string and the second distribution output bit string, to any one of the first QKD client device and the second QKD client device only when authentication of the QKD center succeeds. Also, in order to accomplish the above object, a QKD client device on a quantum network according to an embodiment of the present invention includes an authentication key sharing unit for sharing authentication keys with a QKD center and an additional QKD client device; a quantum key generation unit for generating a sifted key, corresponding to the QKD center, using a quantum state; an error correction unit for generating output bit strings by correcting an error of the sifted key in conjunction with the QKD center; a bit string calculation unit for calculating a shared key bit string by performing a cryptographic operation on one or more of a first output bit string, a second output bit string of the additional QKD client device, an inter-client authentication key, which is included in the authentication key and is shared with the additional QKD client device, and an encryption bit string received from the QKD center; and a privacy amplification unit for generating a final key bit string by applying a hash function to the shared key bit string. 
Here, the quantum key generation unit may select a measurement basis for a quantum state, corresponding to the QKD center, using quantum mechanics, compare it with a preparation basis for a quantum state, received from the QKD center, and generate a sifted key corresponding to bits that remain after checking security of a channel using bits on the same basis. Here, the error correction unit may generate output bit strings by correcting the errors of the sifted keys. The error correction unit may correct the errors of the sifted keys using a Hamming code, the Winnow algorithm, LDPC, or the like. Here, the bit string calculation unit may authenticate the QKD center to a first QKD client device by comparing a result of a cryptographic operation performed on a first authentication key, which is shared with the first QKD client device, and a first distribution output bit string with a result of a cryptographic operation performed on the first authentication key and a first output bit string, the first authentication key being included in the authentication keys, the first distribution output bit string being included in the distribution output bit strings, and the first output bit string being included in the output bit strings. Here, the bit string calculation unit may authenticate the QKD center to a second QKD client device by comparing a result of a cryptographic operation performed on a second authentication key, which is shared with the second QKD client device, and a second distribution output bit string with a result of a cryptographic operation performed on the second authentication key and a second output bit string, the second authentication key being included in the authentication keys, the second distribution output bit string being included in the distribution output bit strings, and the second output bit string being included in the output bit strings. Here, the privacy amplification unit may generate a final key bit string by applying a hash function. The privacy amplification unit may delete some of the information about the key that is leaked to an eavesdropper in the process of error correction. Here, only when the authentication of the QKD center succeeds and the QKD client device requests the QKD center to communicate, the bit string calculation unit may receive the encryption bit string, which is calculated by performing a cryptographic operation on the second authentication key, the first distribution output bit string and the second distribution output bit string, the distribution output bit strings being generated by correcting an error of the sifted key in the QKD center. Here, only when the authentication of the QKD center succeeds and the QKD client device requests the QKD center to communicate, the bit string calculation unit may calculate an encrypted shared key by performing a cryptographic operation on the encryption bit string, the second authentication key and the second output bit string. Here, only when the authentication of the QKD center succeeds and the QKD client device requests the QKD center to communicate, the bit string calculation unit may calculate the shared key bit string by performing a cryptographic operation on the encrypted shared key and the inter-client authentication key. 
Here, only when the authentication of the QKD center succeeds and the QKD client device is requested to communicate by the QKD center, the bit string calculation unit may calculate the shared key bit string by performing a cryptographic operation on the first output bit string and the inter-client authentication key. Also, in order to accomplish the above objects, a QKD method on a quantum network according to an embodiment of the present invention includes sharing authentication keys among the QKD center and the QKD client devices; generating sifted keys, corresponding to the QKD center and the QKD client devices, using quantum states; generating output bit strings by correcting errors of the sifted keys; calculating a shared key bit string by performing a cryptographic operation on the output bit strings and an inter-client authentication key; and generating a final key bit string by applying a hash function to the shared key bit string. Here, the generating the sifted keys may be configured to select a preparation basis and a measurement basis for a quantum state using quantum mechanics, to compare the preparation basis of the QKD center with the measurement bases of the QKD client devices, and to generate the sifted keys corresponding to bits that remain after checking security of a quantum channel using bits on the same basis. Here, the generating the output bit strings creates the output bit strings by correcting the errors of the sifted keys. The error correction methods use a Hamming code, the Winnow algorithm, LDPC, or the like. Here, the calculating the shared key bit string may include calculating an encryption bit string by the QKD center; calculating, by the QKD client device that requests communication, the shared key bit string; and calculating, by the QKD client device that is requested to communicate, the shared key bit string. Here, the calculating the encryption bit string may be configured to transmit the encryption bit string, calculated using a result of a cryptographic operation performed on the second authentication key and the distribution output bit strings, to any one of the QKD client devices only when authentication of the QKD center succeeds. Here, the calculating, by the QKD client device that requests communication, the shared key bit string may be configured such that only when authentication of the QKD center succeeds and any one of the QKD client devices requests the QKD center to communicate, the QKD client device that requests communication receives the encryption bit string and calculates an encrypted shared key by performing a cryptographic operation on the received encryption bit string, the second authentication key and the second output bit string, generated by the QKD client device that requests communication. Here, the calculating, by the QKD client device that requests communication, the shared key bit string may be configured such that only when authentication of the QKD center succeeds and any one of the QKD client devices requests the QKD center to communicate, the shared key bit string is calculated by performing a cryptographic operation on the encrypted shared key and the inter-client authentication key, which is included in the authentication keys and shared among the QKD client devices. 
Here, the calculating, by the QKD client device that is requested to communicate, the shared key bit string may be configured such that only when authentication of the QKD center succeeds and any one of the QKD client devices is requested to communicate by the QKD center, the shared key bit string is calculated by performing a cryptographic operation on the first output bit string, generated by the QKD client device that is requested to communicate, and the inter-client authentication key. Here, the generating a final key bit string may be calculated by performing a hash function on the shared key bit string. Here, the QKD center may not be aware of the shared key bit string, shared among the QKD client devices. BRIEF DESCRIPTION OF THE DRAWINGS The above and other objects, features and advantages of the present invention will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings, in which: FIG. 1 is a block diagram illustrating a simple quantum key distribution (QKD) system according to an embodiment of the present invention; FIG. 2 FIG. 1 is a block diagram illustrating an example of the QKD center illustrated in ; FIG. 3 FIG. 1 is a block diagram illustrating an example of the QKD client device illustrated in ; FIG. 4 FIGS. 1 to 3 is a block diagram illustrating an example of the QKD system illustrated in ; FIG. 5 FIG. 4 is a block diagram specifically illustrating an example of the quantum key generation unit of the QKD center and an example of the quantum key generation unit of the first QKD client device, illustrated in ; FIG. 6 FIG. 4 is a block diagram specifically illustrating an example of the bit string calculation unit and an example of the bit string operation unit, illustrated in ; FIG. 7 FIG. 4 is a block diagram specifically illustrating an example of the privacy amplification unit illustrated in ; FIG. 8 is a flowchart illustrating a QKD method according to an embodiment of the present invention; FIG. 9 FIG. 8 is a flowchart specifically illustrating an example of the step of generating a sifted key, illustrated in ; FIG. 10 FIG. 8 is a flowchart specifically illustrating an example of the step of generating an output bit string, illustrated in ; FIG. 11 FIG. 8 is a flowchart specifically illustrating an example of the step of calculating a shared key bit string, illustrated in ; and FIG. 12 is a block diagram illustrating a computer system according to an embodiment of the present indention. DESCRIPTION OF THE PREFERRED EMBODIMENTS The present invention will be described in detail below with reference to the accompanying drawings. Repeated descriptions and descriptions of known functions and configurations which have been deemed to make the gist of the present invention unnecessarily obscure will be omitted below. The embodiments of the present invention are intended to fully describe the present invention to a person having ordinary knowledge in the art to which the present invention pertains. Accordingly, the shapes, sizes, etc. of components in the drawings may be exaggerated in order to make the description clearer. Hereinafter, a preferred embodiment of the present invention will be described in detail with reference to the accompanying drawings. FIG. 1 FIG. 2 FIG. 1 FIG. 3 FIG. 1 FIG. 4 FIGS. 1 to 3 is a block diagram illustrating a QKD system according to an embodiment of the present invention. is a block diagram specifically illustrating an example of the QKD center illustrated in . 
FIG. 3 is a block diagram specifically illustrating an example of the QKD client device illustrated in FIG. 1. FIG. 4 is a block diagram specifically illustrating an example of the QKD system illustrated in FIGS. 1 to 3. Referring to FIGS. 1 to 4, a QKD system according to an embodiment of the present invention includes a QKD center 100, a first QKD client device 200, and a second QKD client device 300. The QKD center 100 may share authentication keys and keyed hash functions with the QKD client devices 200 and 300 in advance. Here, the QKD center 100 may share a first authentication key and keyed hash function with the first QKD client device 200, and may share a second authentication key and keyed hash function with the second QKD client device 300. Here, the QKD center 100 may generate sifted keys, corresponding to the QKD client devices, using quantum states. Here, the QKD client devices 200 and 300 may authenticate the QKD center 100, and the QKD center 100 may open a communication channel in response to a request by an authenticated user. Here, in order to authenticate the QKD center, the authentication keys shared with the QKD client devices, the output bit strings, the distribution output bit strings, and keyed hash functions may be used. The QKD center 100 may generate distribution output bit strings by correcting the errors of the sifted keys. The distribution output bit strings may include a first distribution output bit string and a second distribution output bit string, wherein the first distribution output bit string is acquired in such a way that the QKD center 100 corrects the error of the sifted key, which is generated corresponding to the first QKD client device 200, and the second distribution output bit string is acquired in such a way that the QKD center 100 corrects the error of the sifted key, which is generated corresponding to the second QKD client device 300. The QKD center 100 may calculate an encryption bit string by performing a cryptographic operation on the second authentication key, shared with the QKD client device 300, and the distribution output bit strings, acquired by correcting the errors. Here, the QKD center 100 may transmit the encryption bit string to the QKD client device that requests communication. The first QKD client device 200 may be the QKD client device that is requested to communicate by the second QKD client device 300. The first QKD client device 200 may share authentication keys and a keyed hash function with the QKD center 100 and the second QKD client device 300, and may generate a sifted key corresponding to the QKD center 100, using a quantum state. Here, the first QKD client device 200 may share a first authentication key and keyed hash function with the QKD center 100. Here, the first QKD client device 200 may share an inter-client authentication key and a hash function with the second QKD client device 300. Here, the shared authentication keys, the sifted key, and a keyed hash function may be used for user authentication. The first QKD client device 200 may generate a first output bit string by correcting the error of the sifted key corresponding to the QKD center 100. The first QKD client device 200 may calculate a shared key bit string by performing a cryptographic operation on the first output bit string and the inter-client authentication key, which is shared with the second QKD client device 300. The first QKD client device 200 may generate a final key bit string by applying a hash function to the shared key bit string. 
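For illustration only, the pipeline just described for the first QKD client device 200 (the device that is requested to communicate) can be pictured with the short Python sketch below. The sketch is not part of the disclosed embodiment: the helper name xor_bytes, the 16-byte example values, and the use of SHA-256 as the hash function applied during privacy amplification are assumptions made purely for illustration; the embodiment only requires some hash function shared among the QKD client devices in advance.

import hashlib

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Bitwise XOR of two equal-length bit strings represented as bytes.
    return bytes(x ^ y for x, y in zip(a, b))

def requested_client_final_key(rk_a: bytes, ak_ab: bytes) -> bytes:
    # Shared key bit string: cryptographic (XOR) operation on the first output
    # bit string Rk_A and the inter-client authentication key Ak_AB.
    shared_key_bit_string = xor_bytes(rk_a, ak_ab)
    # Final key bit string: a hash function applied to the shared key bit
    # string (privacy amplification); SHA-256 is an assumed instantiation.
    return hashlib.sha256(shared_key_bit_string).digest()

# Usage example with illustrative 16-byte bit strings.
rk_a = bytes.fromhex("00112233445566778899aabbccddeeff")   # first output bit string
ak_ab = bytes.fromhex("0f0e0d0c0b0a09080706050403020100")  # inter-client authentication key
print(requested_client_final_key(rk_a, ak_ab).hex())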
300 200 The second QKD client device may be the QKD client device that requests the first QKD client device to communicate therewith. 300 100 200 100 The second QKD client device may share authentication key with the QKD center and the first QKD client device , and may generate a sifted key, corresponding to the QKD center , using a quantum state. 300 100 Here, the second QKD client device may share a second authentication key and keyed hash function with the QKD center . 300 200 Here, the second QKD client device may share an inter-client authentication key and a hash function with the first QKD client device . Here, the shared authentication keys, the output bit strings, and a keyed hash function may be used for authentication of the QKD center. 300 100 The second QKD client device may generate a second output bit string by correcting the error of the sifted key corresponding to the QKD center . 300 100 The second QKD client device may calculate a shared key bit string by performing a cryptographic operation on the second output bit string, the inter-client authentication key and the encryption bit string, which is received from the QKD center . 300 The second QKD client device may generate a final key bit string by applying a hash function to the shared key bit string. FIG. 2 FIG. 4 101 110 120 130 Referring to and , a QKD center according to an embodiment of the present invention includes an authentication key sharing unit , a quantum key generation unit , an error correction unit , and a bit string operation unit . 101 101 200 300 A B The authentication key sharing unit may share authentication keys with QKD client devices. Here, the authentication key sharing unit may share a first authentication key Akwith the first QKD client device and may share a second authentication key Akwith the second QKD client device . 110 The quantum key generation unit may generate sifted keys, corresponding to the QKD client devices, using quantum states. 110 111 112 113 114 Here, the quantum key generation unit may include a quantum state transmission unit , a random number generator , a classical bit transceiver , and a logic control unit . 111 51 111 The quantum state transmission unit may prepare a quantum state and transmit it via a quantum channel . Here, the quantum state transmission unit may transmit the quantum state through a quantum key distribution protocol such as BB84, B92, or the like. 112 112 The random number generator may generate a random signal using quantum mechanics. Here, the random number generator may randomly select a quantum preparation basis (polarization basis) and quantum states. 113 212 200 112 The classical bit transceiver may receive the measurement basis selected by the random number generator of the first QKD client device , and may transmit the preparation basis, selected by the random number generator . 114 51 200 52 120 52 52 52 113 Here, the logic control unit checks the security of the quantum channel by sharing the information about polarizing plates (preparation bases and measurement bases) with the first QKD client device is a classical channel , and may then transmit the information that remains after checking the security of the quantum channel to the error correction unit ). Here, the classical channel may be a public channel and may be eavesdropped on by anybody. However, in the classical channel , falsification and the addition of additional information may not be allowed. 
For example, the classical channel may correspond to the concept of a public board such as a newspaper. Here, the classical bit transceiver may guarantee the integrity of information using Message Authentication Code (MAC). 114 112 212 200 Here, the logic control unit may store information in order to compare the preparation basis for a quantum state, which is randomly selected by the random number generator , with the measurement basis for a quantum state, which is randomly selected by the random, number generator of the first QKD client device . 114 113 213 200 Here, the logic control unit compares the preparation basis for the quantum state with the measurement basis for the quantum state through communication between the classical bit transceiver and the classical bit transceiver of the first QKD client device , and may check whether a channel is secure using some bits of the bit string on the same basis. 114 Here, the logic control unit may output a sifted key based on the bits remaining after checking the security of the channel. 110 200 300 The above-mentioned process of generating the sifted key, performed by the quantum key generation unit , may be applied not only to the first QKD client device but also to the second QKD client device . 120 120 120 220 320 53 63 220 320 120 220 320 220 320 120 220 200 120 320 300 The error correction unit may generate distribution output bit strings by correcting the errors of the sifted keys. Here, the error correction unit may correct the errors of the sifted keys using Hamming code, Winnow algorithm, LDPC or the like. Specifically, the error correction unit divides the bit string to be transmitted into multiple blocks, and may transmit the parity bit of each of the blocks to the error correction unit or of the QKD client device via the classical channel or . The error correction unit or of the QKD client device may detect a block containing a data, error by checking the parity bit of the block. Then, the error correction unit subdivides the block contain big the data, error, which is detected and announced by the error correction unit or of the QKD client device, and the parity of the subdivided block is repeatedly checked by the error correction unit or of the QKD client device. Through the repetition of this process, when the length of the block containing the parity error becomes the length to which Hamming code can be applied, the bit containing the error may be determined and corrected by applying Hamming code thereto. Here, the bit string generated by this error-correction process may correspond to a common output bit string between the error correction unit and the error correction unit of the first QKD client device or a common output bit string between the error correction unit and the error correction unit of the second QKD client device . 120 200 A Here, the error correction unit may output a first distribution output bit string Rk′ by correcting the error of the sifted key corresponding to the first QKD client device . 120 300 B Here, the error correction unit may output a second distribution output bit string Rk′ by correcting the error of the sifted key corresponding to the second QKD client device . 220 200 100 A Here, the error correction unit of the first QKD client device may output a first output bit string Rkby correcting the error of the sifted key corresponding to the QKD center . 
320 300 100 B Here, the error correction unit of the second QKD client device may output a second output bit string Rkby correcting the error of the sifted key corresponding to the QKD center . 130 The bit siring operation unit may prove the identity of the QKD center by transmitting a result of a cryptographic operation performed on the first authentication key and the first distribution output bit string to the QKD client device. 130 The bit string operation unit may prove the identity of the QKD center by transmitting a result of a cryptographic operation performed on the second authentication key and the second distribution output bit string to the QKD client device. 130 The bit siring operation unit may calculate an encryption bit string by performing a cryptographic operation on the second authentication key and the distribution output bit strings in which the error has been corrected. 130 The bit string operation unit may transmit the encryption bit string, calculated by performing a cryptographic operation on the second authentication key, the first distribution output bit string and the second distribution output bit string, to any one of the first QKD client device and the second QKD client device only when authentication of the QKD center succeeds. 130 Here, the bit string operation unit authenticates own identity to the QKD client devices, and may open a communication channel in response to the requests by users (QKD clients). Here, in order to prove own identity, the authentication keys, the distribution output bit strings, output bit strings, and a keyed hash function may be used. 130 131 132 135 The bit string operation unit may include memory units and and an operation unit . 131 120 231 100 54 100 A A A The memory unit may store the first distribution output bit string Rk′, which is received from the error correction unit . Here, the memory unit may receive a bit string, acquired by performing an operation on the first authentication key Akand the first distribution output bit string Rk′, from the QKD center via a channel , and may store the received bit string therein for a authentication of the QKD center . 232 100 232 100 A A A A A AkA A AkA A h Rk h Rk Here, the operation unit may authenticate the QKD center by comparing the result of a cryptographic operation performed on the first authentication key Akand the first distribution output bit string Rk′ with the result of a cryptographic operation performed on the first authentication key Akand the first output nit string Rkusing Equation (1). Here, the operation unit may authenticate the QKD center using a keyed hash function with the first authentication key Ak. ()=(′) (1) 132 120 331 100 64 100 B B B The memory unit may store the second distribution output bit string Rk′, which is received from the error correction unit . Here, the memory unit may receive a bit string, acquired by performing an operation on the second authentication key Akand the second distribution output bit string Rk′, from the QKD center via a classical channel , and may store the received bit string therein for a authentication of the QKD center . 332 100 332 100 B B B B B AkB B AkB B h Rk h Rk Here, the operation unit may authenticate the QKD center by comparing the result of a cryptographic operation performed on the second authentication key Akand the second distribution output bit string Rk′ with the result of a cryptographic operation performed on the second authentication key Akand the second output bit string Rkusing Equation (2). 
Here, the operation unit may authenticate the QKD center using a keyed hash function with the second authentication key Ak. ()=(′) (2) 135 100 B A B A B AkB A B Rk ′⊕Rk h Rk ′⊕Rk AkB B where ⊕ may correspond to an XOR operation, ∥ may correspond to a concatenation and hmay correspond to a keyed hash function using the second authentication key Ak. Here, the operation unit may generate an encryption bit string by performing a cryptographic operation on the second authentication key Ak, the first distribution output bit string Rk′ and the second distribution output bit string Rk′ using Equation (3) only when authentication of the QKD center succeeds. (′)∥(′) (3) 135 A B The operation unit may calculate the encryption bit string by performing an XOR operation and a keyed hash function on the first distribution output bit string Rk′ and the second distribution output bit string Rk′. 135 300 Here, the operation unit may transmit the encryption bit string to the QKD client device that requests communication. According to an embodiment of the present invention, the QKD client device that requests communication may be the second QKD client device . 135 332 300 65 Here, the operation unit may transmit the encryption bit string to the operation unit of the second QKD client device via a classical channel . FIGS. 3 and 4 200 201 210 220 230 240 Referring to , the first QKD client device according to an embodiment of the present invention includes an authentication key sharing unit , a quantum key generation unit , an error correction unit , a bit string calculation unit , and a privacy amplification unit . 200 300 Here, the components of the first QKD client device may correspond to the components of the second QKD client device . 200 300 Here, according to an embodiment of the present invention, the first QKD client device may be the QKD client device that is requested to communicate, and the second QKD client device may be the QKD client device that requests communication. 201 201 100 300 A AB The authentication key sharing unit may share authentication keys with the QKD center and other QKD client device. Here, the authentication key sharing unit may share the first authentication key Akwith the QKD center , and may share an inter-client authentication key Akwith the second QKD client device . 301 300 100 200 B AB Here, the authentication key sharing unit of the second QKD client device may share the second authentication key Akwith the QKD center , and may share the inter-client authentication key Akwith the first QKD client device . 210 100 The quantum key generation unit may generate a sifted key, corresponding to the QKD center , using a quantum state. 210 211 212 213 214 Here, the quantum key generation unit may include a quantum state reception unit , a random number generator , a classical bit transceiver , and a logic control unit . 210 310 300 Here the components of the quantum key generation unit may correspond to the components of the quantum key generation unit of the second QKD client device . 211 212 51 211 The quantum state reception unit prepares the measurement basis using the random number generator , and may receive a quantum state via the quantum channel and measure it. Here, the quantum state reception unit may receive the quantum state through a quantum key distribution protocol such as BB84, B92, or the like. 212 212 211 The random number generator may generate a random signal using quantum mechanics. 
Here, the random number generator may randomly select a measurement basis (polarization basis). This selections determine the measurement basis of the quantum state reception unit . 213 112 100 212 The classical bit transceiver receives the quantum state preparation basis, which is selected by the random number generator of the QKD center , and may transmit the measurement basis for the quantum state, prepared by the random number generator . 214 51 100 52 220 52 52 52 213 Here, the logic control unit checks the security of the quantum channel by sharing the information about polarizing plates (preparation bases and measurement bases) with the QKD center via the classical channel , and may then transmit the information that remains after checking the security of the quantum channel to the error correction unit . Here, the classical channel may be a public channel, and may be eavesdropped on by anybody. However, in the classical channel , it may be impossible to falsify information or add additional information. For example, the classical channel may correspond to the concept of a public board such as a newspaper. Here, the classical bit transceiver may guarantee the integrity of information using Message Authentication Code (MAC). 214 212 112 100 Here, the logic control unit may store information in order to compare the measurement basis for a quantum state, which is randomly selected by the random number generator , with the preparation basis for a quantum state, which is randomly selected by the random number generator unit of the QKD center . 214 213 113 Here, the logic control unit compares the preparation basis for the quantum state with the measurement basis for the quantum state through communication between the classical bit transceiver and the classical bit transceiver , and may check whether a channel is secure using some bits of the bit string on the same basis. 214 Here, the logic control unit may output a sifted key based on the bits remaining alter checking the security of the channel. 210 310 300 The above-mentioned process of generating the sifted key, performed by the quantum key generation unit , may correspond to the process of generating the sifted key, performed by the quantum key generation unit of the second QKD client device . 220 100 220 220 120 100 53 120 100 220 120 100 120 100 220 120 100 220 200 120 100 320 300 A The error correction unit may generate a first output bit string Rkby correcting the error of the sifted key corresponding to the QKD center . Here, the error correction unit may correct the error of the silted key using Hamming code, Winnow algorithm, LDPC or the like. Specifically, the error correction unit divides the bit string to be transmitted into multiple blocks, and may transmit the parity bit of each of the blocks to the error correction unit of the QKD center via the classical channel . The error correction unit of the QKD center may detect a block containing a data error by checking the parity bit of the block. Then, the error correction unit subdivides the block containing the data error, which is detected and announced by the error correction unit of the QKD center , and the parity of the subdivided block is repeatedly checked by the error correction unit of the QKD center . Through the repetition of this process, when the length of the block containing the parity error becomes the length to which Hamming code can be applied, the error correction unit may determine and correct the bit containing the error by applying Hamming code thereto. 
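For illustration only, the block-parity error correction described above may be sketched as follows. The sketch is a simplified model and not the disclosed embodiment: both sides are simulated in one process instead of exchanging parity bits over the classical channel 53, a single flipped bit per block is assumed, and the erroneous bit is located by repeated subdivision and parity comparison; the final Hamming-code step of the embodiment is replaced here by this simple binary parity search. All function names are assumptions made for illustration.

import random

def parity(bits):
    # Parity (XOR) of a list of 0/1 values, corresponding to the parity bit of a block.
    return sum(bits) % 2

def locate_error(reference, noisy, lo, hi):
    # Subdivide the block containing a parity mismatch and compare parities of
    # the halves until the single erroneous bit position is isolated.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(reference[lo:mid]) != parity(noisy[lo:mid]):
            hi = mid
        else:
            lo = mid
    return lo

def correct_errors(reference, noisy, block=8):
    corrected = list(noisy)
    for start in range(0, len(noisy), block):
        end = min(start + block, len(noisy))
        # A block whose parity bit disagrees is announced as containing a data error.
        if parity(reference[start:end]) != parity(corrected[start:end]):
            corrected[locate_error(reference, corrected, start, end)] ^= 1
    return corrected

# Usage: a sifted key with one flipped bit per faulty block is repaired.
random.seed(0)
sifted_center = [random.randint(0, 1) for _ in range(32)]
sifted_client = list(sifted_center)
sifted_client[5] ^= 1
sifted_client[20] ^= 1
assert correct_errors(sifted_center, sifted_client) == sifted_center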
Here, the bit string generated by this error-correction process may correspond to a common output bit string between the error correction unit of the QKD center and the error correction unit of the first QKD client device . Another bit string generated by this error-correction process may correspond to a common output bit string between the error correction unit of the QKD center and the error correction unit of the second QKD client device . 220 320 300 120 100 63 The above-mentioned process of collecting the error performed by the error correction unit , may correspond to the process in which the error correction unit of the second QKD client device corrects the error of the sifted key through communication with the error correction unit of the QKD center via the classical channel . 220 100 A Here, the error correction unit may output the first output bit string Rkby correcting the error of the sifted key corresponding to the QKD center . 320 300 100 B Here, the error correction unit of the second QKD client device may output the second output bit string Rkby correcting the error of the sifted key corresponding to the QKD center . 120 100 200 A Here, the error correction unit or the QKD center may output the first distribution output bit string Rk′ by correcting the error of the sifted key corresponding to the first QKD client device . 120 100 300 B Here, the error correction unit of the QKD center may output the second distribution output bit string Rk′ by correcting the error of the sifted key corresponding to the second QKD client device . 230 100 200 A A The bit string calculation unit may calculate a bit string by performing a cryptographic operation on the first output bit siring Rk, in which an error is corrected, and the first authentication key Ak, shared between the QKD center and the first QKD client device . 230 The bit string calculation unit may authenticate the QKD center to the first QKD client device by comparing a result of a cryptographic operation performed on a first authentication key, which is shared with the first QKD client device, and a first distribution output bit siring with a result of a cryptographic operation performed on the first authentication key and a first output bit string, the first authentication key being included in the authentication keys, the first distribution output bit string being included in the distribution output bit strings, and the first output bit string being included in the output bit strings. 230 The bit siring calculation unit may authenticate the QKD center to the second QKD client device by comparing a result of a cryptographic operation performed on a second authentication key, which is shared with the second QKD client device, and a second distribution output bit string with a result of a cryptographic operation performed on the second authentication key and a second output bit string, the second authentication key being included in the authentication keys, the second distribution output bit string being included in the distribution output bit strings, and the second output bit string being included in the output bit strings. 230 100 100 Here, the bit string calculation unit requests the QKD center to authenticate own identity, and the QKD center may open a communication channel in response to the request by the user. Here, authentication of QKD center may be performed using the authentication key, the output bit strings, the distribution output bit strings, and a keyed hash function. 
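For illustration only, the comparison by which a QKD client device authenticates the QKD center 100 (Equations (1) and (2)) may be sketched as follows. The embodiment only requires some keyed hash function shared in advance; HMAC-SHA256 and the identifier names used below are assumptions made for illustration and are not part of the disclosure.

import hashlib
import hmac

def keyed_hash(key: bytes, message: bytes) -> bytes:
    # h_Ak(m): a keyed hash under the authentication key Ak; HMAC-SHA256 is an
    # assumed instantiation of the keyed hash function shared in advance.
    return hmac.new(key, message, hashlib.sha256).digest()

def authenticate_center(ak: bytes, rk_client: bytes, value_from_center: bytes) -> bool:
    # The center sends h_Ak(Rk') computed over its distribution output bit string;
    # the client recomputes h_Ak(Rk) over its own output bit string and compares
    # the two values, as in Equations (1) and (2).
    return hmac.compare_digest(keyed_hash(ak, rk_client), value_from_center)

# Usage: when error correction yields identical bit strings (Rk_A == Rk_A'),
# the comparison succeeds and the center is authenticated.
ak_a = b"first-authentication-key"
rk_a = bytes.fromhex("aabbccddeeff00112233445566778899")
value_from_center = keyed_hash(ak_a, rk_a)                 # computed by the center over Rk_A'
print(authenticate_center(ak_a, rk_a, value_from_center))  # True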
230 231 232 The bit string calculation unit may include a memory unit and an operation unit . 130 131 132 135 The bit string operation unit may include a memory unit and , and a operation unit . 131 120 131 231 200 54 A A A The memory unit may store the first distribution output bit string Rk′, which is received from the error correction unit . Here, the memory unit may transmit a bit string, acquired by performing an operation on the first authentication key Akand the first distribution output bit string Rk′, to the memory unit of the QKD client via the classical channel . 232 100 A A A A AkA A AkA A h Rk h Rk Here, the operation unit may authenticate the QKD center by comparing the result of a cryptographic operation performed on the first authentication key Akand the first output bit string Rkwith the result of a cryptographic operation performed on the first authentication key Akand the first distribution output bit string Rk′ using Equation (1): ()=(′) (1) 330 300 331 332 Meanwhile, the bit string calculation unit of the second QKD client device , which requests communication, may include a memory unit and an operation unit . 132 120 132 331 300 64 B B B The memory unit may store the second distribution output bit string Rk′, which is received from the error correction unit . Here, the memory unit may transmit a bit string, acquired by performing an operation on the second authentication key Akand the second distribution output bit string Rk′, to the memory unit of the QKD client via the classical channel . 332 100 B B B B AkB B AkB B h Rk h Rk Here, the operation unit may authenticate the QKD center by comparing the result of a cryptographic operation performed on the second authentication key Akand the second output bit string Rkwith the result of a cryptographic operation, performed on the second authentication key Akand the second distribution output bit string Rk′ using Equation (2): ()=(′) (2) 130 100 135 100 B A B A B AkB A B Rk ′⊕Rk h Rk ′⊕Rk Here, the bit string operation unit of the QKD center may generate an encryption bit string in such a way that the operation unit performs a cryptographic operation on the second authentication key Ak, the first distribution output bit string Rk′ and the second distribution output bit string Rk′ using Equation (3) only when authentication of the QKD center succeeds. (′)∥(′) (3) AkB B where ⊕ may correspond to an XOR operation, ∥ may correspond to a concatenation and hmay correspond to a keyed hash function using the second authentication key Ak. 135 130 A B B The operation unit of the bit string operation unit may calculate the encryption bit siring by performing an XOR operation on the first distribution output bit string Rk′ and the second distribution output bit string Rk′, and keyed hash function using the second authentication key Ak. 135 Here, the operation unit may transmit the encryption bit string to the QKD client device that requests communication. 135 332 300 65 Here, the operation unit may transmit the encryption bit string to the operation unit of the second QKD client device via the classical channel . 332 300 332 135 331 A B AkB A B B A B B A Rk ′⊕Rk Rk ′⊕Rk Here, the operation unit of the second QKD client device may perform a keyed hash the front of the encryption bit string, (Rk′⊕Rk′). 
If the outcome is same with the back end of the encryption bit string, h(Rk′⊕Rk′), the operation unit may perform an XOR operation on the front of the encryption bit string, received from the operation unit , and the second output bit string Rk, received from the memory unit , using Equation (4): (′)⊕′) (4) 300 100 332 B B A Here, if authentication between the second QKD client device and the QKD center succeeds, because the second distribution output bit siring Rk′ corresponds to the second output bit string Rk, the operation unit may calculate the first distribution output bit string Rk′ using the result of the XOR operation of Equation (4). 332 200 A AB A AB Rk ′⊕Ak Here, the operation unit may calculate a shared key bit string by performing an XOR operation on the calculated first distribution output bit string Rk′ and the inter-client authentication key Ak, shared with the first QKD client device , which is requested to communicate, using Equation (5): (5) 65 B AB A AB Rk ⊕Ak =Rk ′⊕Ak That is, the combination of Equations (3) to (5) may correspond to Equation (6): (INPUT)⊕ (6) 65 135 100 65 Here, INPUTmay correspond to the front of the encryption bit string, which is received from the operation unit of the QKD center via the classical channel . 200 100 232 300 A AB A AB Rk ⊕Ak Here, if authentication between the first QKD client deice , which was requested to communicate, and the QKD center succeeds, the operation unit may calculate a shared key bit string by performing a cryptographic operation on the first output bit string Rkand the inter-client authentication key Ak, shared with the second QKD client device , which requested the communication, using Equation (7): (7) 200 100 A A If authentication between the first QKD client device , which was requested to communicate, and the QKD center succeeds, the first distribution output bit string Rk′ of Equation (4) may correspond to the first output bit string Rkaccording to Equation (1). A AB A AB 232 200 332 300 Accordingly, because the shared key bit string (Rk⊕Ak), calculated using Equation (7) by the operation unit of the first QKD client device , corresponds to the shared key bit siring (Rk′⊕Ak), calculated using Equation (5) by the operation unit of the second QKD client device , the same encryption key may be shared therebetween. 240 300 240 The privacy amplification unit may generate a final key bit string by applying a hash function, which is shared with the second QKD client , to the shared key bit string. Here, the privacy amplification unit may delete some of the information about the key, leaked to an eavesdropper in the process of error correction and modification. 240 K =h Rk ⊕Ak AB A AB In other words, the privacy amplification unit may calculate the final key-bit string using Equation (8): () (8) 240 Here, h may be a hash function that the privacy amplification unit uses in order to delete the information exposed to the eavesdropper. The hash function may be shared among the QKD client devices in advance. 340 300 K =h Rk ′⊕Ak AB A AB The privacy amplification unit of the second QKD client device , which requested the communication, may calculate the final key hit string using Equation (9): () (9) 340 Here, h may be the hash function that the privacy amplification unit uses in order to delete the information exposed to the eavesdropper. The hash function may be shared among the QKD client devices in advance. 
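For illustration only, Equations (3) to (9) may be traced end to end with the sketch below: the QKD center 100 forms the encryption bit string, the second QKD client device 300 recovers Rk_A' from it, and both client devices arrive at the same shared key bit string and final key bit string without the center learning them. HMAC-SHA256 standing in for the keyed hash h_AkB, SHA-256 standing in for the privacy-amplification hash h, and the 16-byte example values are all assumptions made for illustration.

import hashlib
import hmac

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def keyed_hash(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

# Illustrative 16-byte bit strings; after successful error correction and
# authentication, Rk_A equals Rk_A' and Rk_B equals Rk_B'.
rk_a = bytes.fromhex("00112233445566778899aabbccddeeff")   # Rk_A = Rk_A'
rk_b = bytes.fromhex("ffeeddccbbaa99887766554433221100")   # Rk_B = Rk_B'
ak_b = b"second-authentication-key"                        # Ak_B
ak_ab = bytes.fromhex("0f1e2d3c4b5a69788796a5b4c3d2e1f0")  # Ak_AB

# Equation (3): encryption bit string (Rk_A' XOR Rk_B') || h_AkB(Rk_A' XOR Rk_B').
front = xor_bytes(rk_a, rk_b)
encryption_bit_string = front + keyed_hash(ak_b, front)

# The requesting client checks the keyed hash on the front of the encryption bit string.
received_front = encryption_bit_string[:16]
received_back = encryption_bit_string[16:]
assert hmac.compare_digest(keyed_hash(ak_b, received_front), received_back)

# Equation (4): XOR with the second output bit string Rk_B recovers Rk_A'.
recovered_rk_a = xor_bytes(received_front, rk_b)
# Equations (5) and (6): shared key bit string of the requesting client.
shared_key_requesting = xor_bytes(recovered_rk_a, ak_ab)

# Equation (7): shared key bit string of the requested client.
shared_key_requested = xor_bytes(rk_a, ak_ab)
assert shared_key_requesting == shared_key_requested

# Equations (8) and (9): privacy amplification yields the final key bit string K_AB.
final_key = hashlib.sha256(shared_key_requested).digest()
print(final_key.hex())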
240 340 AB Here, the length of the bit siring may be reduced through the process in which the privacy amplification unit or deletes the information exposed to the eavesdropper by applying the hash function to the shared key bit string. Accordingly, if the hash function h is suitably selected, information about the final key bit string Kmay be protected from eavesdroppers. FIG. 5 FIG. 4 is a block diagram specifically illustrating an example of the quantum key generation unit of the QKD center and an example of the quantum key generation unit of the first QKD client device, illustrated in . FIG. 5 210 200 211 212 213 214 Referring to , the quantum key generation unit of the first QKD client device according to an embodiment of the present invention includes a quantum state reception unit , a random number generator , a classical bit transceiver , and a logic control unit . 210 310 300 Here, the components of the quantum key generation unit may correspond to the components of the quantum key generation unit of the second QKD client device . 210 100 The quantum key generation unit may generate a sifted key, corresponding to the QKD center , using a quantum state. 211 212 51 211 The quantum state reception unit prepares the measurement basis using the random number generator , and may receive a quantum state via the quantum channel and measure it. Here, the quantum state reception unit may receive the quantum state through a quantum key distribution protocol such as BB84, B92, or the like. 212 212 The random number generator may generate a random signal using quantum mechanics. Here, the random number generator may randomly select a measurement basis (polarization basis). 213 112 100 212 The classical bit transceiver receives the quantum state preparation basis, which is selected by the random number generator of the QKD center , and may transmit the measurement basis for the quantum state, prepared by the random number generator . 214 51 100 52 220 52 52 52 213 Here, the logic control unit checks the security of the quantum channel by sharing the information about polarizing plates (preparation bases and measurement bases) with the QKD center via the classical channel , and may then transmit the information that remains after checking the security of the quantum channel to the error correction unit . Here, the classical channel may be a public channel, and may be eavesdropped on by anybody. However, in the classical channel , falsification, and the addition of additional information may not be allowed. For example, the classical channel may correspond to the concept of a public board such as a newspaper. Here, the classical bit transceiver may guarantee the integrity of information using Message Authentication Code (MAC). 214 212 112 100 Here, the logic control unit may store information in order to compare the measurement basis for a quantum state, which is randomly selected by the random number generator , with the preparation basis for a quantum state, which is randomly selected by the random number generator unit of the QKD center . 214 213 113 Here, the logic control unit compares the preparation basis for the quantum state with the measurement basis for the quantum state through communication between the classical bit transceiver and the classical bit transceiver , and may check whether a channel is secure using some bits of the bit string on the same basis. 214 Here, the logic control unit may output a sifted key based on the bits remaining after checking the security of the channel. 
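For illustration only, the basis selection and sifting performed above can be modelled with the toy simulation below. It is not the disclosed embodiment: bases and bit values are drawn with Python's pseudo-random generator rather than a quantum random number generator, the quantum channel is assumed ideal, and the bits sacrificed to check the security of the channel are omitted.

import random

def simulate_sifting(n: int, seed: int = 1):
    rng = random.Random(seed)
    # QKD center: random preparation bases and bit values (two bases, BB84-style).
    preparation_bases = [rng.choice("ZX") for _ in range(n)]
    bits = [rng.randint(0, 1) for _ in range(n)]
    # QKD client device: random measurement bases; on an ideal channel the measured
    # bit equals the prepared bit whenever the two bases coincide.
    measurement_bases = [rng.choice("ZX") for _ in range(n)]
    measured = [b if p == m else rng.randint(0, 1)
                for b, p, m in zip(bits, preparation_bases, measurement_bases)]
    # Sifting: the bases are compared over the classical channel and only positions
    # with matching bases are kept, producing the sifted keys of both sides.
    kept = [i for i in range(n) if preparation_bases[i] == measurement_bases[i]]
    return [bits[i] for i in kept], [measured[i] for i in kept]

sifted_center, sifted_client = simulate_sifting(64)
print(len(sifted_center), sifted_center == sifted_client)   # about n/2 bits, identical here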
210 310 300 The above-mentioned process of generating the sifted key, performed by the quantum key generation unit , may correspond to the process of generating the sifted key, performed by the quantum key generation unit of the second QKD client device . 110 100 Meanwhile, the quantum key generation unit of the QKD center may generate sifted keys, corresponding to the QKD client devices, using quantum states. 110 111 112 113 114 Here, the quantum key generation unit may include a quantum state transmission unit , a random number generator , a classical bit transceiver , and a logic control unit . 111 51 111 The quantum state transmission unit may prepare a quantum state and transmit it via the quantum channel . Here, the quantum state transmission unit may transmit the quantum state through a quantum key distribution protocol such as BB84, B92, or the like. 112 112 The random number generator may generate a random signal using quantum mechanics. Here, the random number generator may randomly select a preparation basis (polarization basis) and quantum states. 113 212 200 112 The classical bit transceiver may receive the quantum state selected by the random number generator of the first QKD client device , and may transmit the preparation basis for the quantum state, selected by the random number generator . 114 51 200 52 120 52 52 52 113 Here, the logic control unit checks the security of the quantum channel by sharing the information about polarizing plates (preparation bases and measurement bases) with the first QKD client device via the classical channel , and may then transmit the information that remains after checking the security of the quantum channel to the error correction unit . Here, the classical channel may be a public channel, and may be eavesdropped on by anybody. However, in the classical channel , falsification and the addition of additional information may not be allowed. For example, the classical channel may correspond to the concept of a public board such as a newspaper. Here, the classical bit transceiver may guarantee the integrity of information using Message Authentication Code (MAC). 114 112 212 200 Here, the logic control unit may store information in order to compare the preparation basis for a quantum state, which is randomly selected by the random number generator , with the measurement basis for a quantum state, which is randomly selected by the random number generator of the first QKD client device . 114 113 213 200 Here, the logic control unit compares the preparation basis for the quantum state with the measurement basis for the quantum state through communication between the classical bit transceiver and the classical bit transceiver of the first QKD client device , and may check whether a channel is secure using some bits of the bit string on the same basis. 114 Here, the logic control unit may output a sifted key based on the bits remaining after checking the security of the channel. 110 200 300 The above-mentioned process of generating the sifted key, performed by the quantum key generation unit , may correspond not only to the process of generating the sifted key in the first QKD client device , but also to the process of generating the sifted key in the second QKD client device . FIG. 6 FIG. 4 is a block diagram specifically illustrating an example of the bit string calculation unit and an example of the bit string operation unit, illustrated in . FIG. 
6 230 200 231 232 Referring to , the bit string calculation unit of the first QKD client device may include a memory unit and an operation unit . 131 120 131 231 200 54 A A A The memory unit may store the first distribution output bit string Rk′, which is received from the error correction unit . Here, the memory unit may transmit a bit siring, acquired by performing an operation on the first authentication key Akand the first distribution output bit string Rk′, to the memory unit of the QKD client via the classical channel . 232 100 A A A A AkA A AkA A h Rk h Rk Here, the operation unit may authenticate the QKD center by comparing the result of a cryptographic operation performed on the first authentication key Akand the first output bit string Rkwith the result of a cryptographic operation performed on the first authentication key Akand the first distribution output bit string Rk′ using Equation (1): ()=(′) (1) 330 300 331 332 Meanwhile, the bit string calculation unit of the second QKD client device , which requested communication, may include a memory unit and an operation unit . 132 120 132 331 300 64 B B B The memory unit may store the second distribution output bit string Rk′, which is received from the error correction unit . Here, the memory unit may transmit a bit siring, acquired by performing an operation on the second authentication key Akand the second distribution output bit string Rk′, to the memory unit of the QKD client via the classical channel . 332 100 B B B B AkB B AkB B h Rk h Rk Here, the operation unit may authenticate the QKD center by comparing the result of a cryptographic operation performed on the second authentication key Akand the second output bit string Rkwith the result of a cryptographic operation performed on the second authentication key Akand the second distribution output bit string Rk′ using Equation (2): ()=(′) (2) 130 100 135 100 B A B A B AkB A B Rk ′⊕Rk h Rk ′⊕Rk Here, the bit siring operation unit of the QKD center may generate an encryption bit string in a such a way that the operation unit performs a cryptographic operation on the second authentication key Ak, the first distribution output bit string Rk′ and the second distribution output bit string Rk′ using Equation (3) only when authentication of the QKD center succeeds. (′)∥(′) (3) AkB B where ⊕ may correspond to an XOR operation, ∥ may correspond to a concatenation and hmay correspond to a keyed hash function using the second authentication key Ak. 135 130 A B B The operation unit of the bit string operation unit may calculate the encryption bit string by performing an XOR operation on the first distribution, output bit string Rk′ and the second distribution output bit string Rk′ and performing keyed hash function using the second authentication key Ak. 135 Here, the operation unit may transmit the encryption bit string to the QKD client device that requested communication. 135 332 300 65 Here, the operation unit may transmit the encryption bit string to the operation unit of the second QKD client device via the classical channel . 332 300 332 135 331 A B AkB A B B A B B A Rk ′⊕Rk Rk =Rk Here, the operation unit of the second QKD client device may perform a keyed hash on the front of the encryption bit string, (Rk′⊕Rk′). 
If the outcome is same with the back end of the encryption bit string, h(Rk′⊕Rk′), the operation unit may perform an XOR operation on the front of the encryption bit string, received from the operation unit , and the second output bit string Rk, received from the memory unit using Equation (4): (′)⊕′ (4) 300 100 332 B B A Here, if authentication between the second QKD client device and the QKD center succeeds, because the second distribution output bit string Rk′ corresponds to the second output bit string Rk, the operation unit may calculate the first distribution output bit string Rk′ using the result of the XOR operation of Equation (4). 332 200 A AB A AB Rk ′⊕Ak Here, the operation unit may calculate a shared key bit string by performing an XOR operation on the calculated first distribution output bit string Rk′ and the inter-client authentication key Ak, shared with the first QKD client device , which was requested to communicate, using Equation (5): (5) 65 B AB A AB Rk ⊕Ak =Rk ′⊕Ak That is, the combination of Equations (3) to (5) may correspond to Equation (6): (INPUT)⊕ (6) 65 135 100 65 Here, INPUTmay correspond to the front of the encryption bit string, which is received from the operation unit of the QKD center via the classical channel . 200 100 232 300 A AB A AB Rk ⊕Ak Here, if authentication between the first QKD client device , which was requested to communicate, and the QKD center succeeds, the operation unit may calculate a shared key bit string by performing a cryptographic operation on the first output bit string Rkand the inter-client authentication key Ak, shared with the second QKD client device , which requested the communication, using Equation (7): (7) 200 100 A A If authentication between the first QKD client device , which was requested to communicate, and the QKD center succeeds, the first distribution output bit string Rk′ of Equation (4) may correspond to the first output bit string Rkaccording to Equation (1). A AB A AB 232 200 332 300 Accordingly, because the shared key bit string (Rk⊕Ak), calculated using Equation (7) by the operation unit of the first QKD client device , corresponds to the shared key bit string (Rk′⊕Ak), calculated using Equation (5) by the operation unit of the second QKD client device , the same encryption key may be shared therebetween. FIG. 7 FIG. 4 is a block diagram specifically illustrating an example of the privacy amplification unit illustrated in . FIG. 7 240 340 240 340 Referring to , the privacy amplification unit or may generate a final key bit string by applying a hash function to the shared key bit string. Here, the privacy amplification unit or may delete some of the information about the key, which was leaked to an eavesdropper in the process of error correction and modification. 240 200 K =h Rk ⊕Ak AB A AB In other words, the privacy amplification unit of the first QKD client device , which was requested to communicate, may calculate the final key bit string using Equation (8): () (8) 240 Here, h may be a hash function that the privacy amplification unit uses in order to delete the information exposed to the eavesdropper. The hash function may be shared among the QKD client devices in advance. 340 300 K =h Rk ⊕Ak AB A AB The privacy amplification unit of the second QKD client device , which requested the communication, may calculate the final key bit string using Equation (9): () (9) 340 Here, h may be the hash function that the privacy amplification unit uses in order to delete the information exposed to the eavesdropper. 
The hash function may be shared among the QKD client devices in advance. 240 340 AB Here, the length of the bit string may be reduced through the process in which the privacy amplification unit or deletes the information exposed the eavesdropper by applying the hash function to the shared key bit string. Accordingly, if the hash function h is suitably selected, information about the final key bit string Kmay be protected from eavesdroppers. FIG. 8 is a flowchart illustrating a quantum key distribution method according to an embodiment of the present invention. FIG. 8 410 Referring to , in the quantum key distribution method according to an embodiment of the present invention, authentication keys and a hash function are shared at step S. 410 100 At step S, the QKD center and the QKD client devices may share authentication keys and a hash function therebetween. 420 100 200 100 300 200 300 A B AB Specifically, at step S, the QKD center and the first QKD client device may share a first authentication key Aktherebetween, the QKD center and the second QKD client device may share a second authentication key Aktherebetween, and the first QKD client deuce and the second QKD client device may share an inter-client authentication key Akand a hash function h therebetween. 420 Also, in the quantum key distribution method according to an embodiment of the present invention, sifted keys may be generated at step S. 420 421 Specifically describing step S, first, a quantum state may be transmitted and received at step S. 421 51 61 421 Here, the quantum state may be transmitted and received through a quantum key distribution protocol such as BB84, B92, or the like at step S. Here, the quantum state is prepared, and may be transmitted and received via a quantum channel or at step S. 422 Also, the selected preparation basis may be compared with the measurement basis at step S. 422 Here, at step S, a random signal may be generated using quantum mechanics. 422 52 62 Here, at step S, the selected measurement basis and preparation basis for a quantum state may be transmitted and received via a classical channel or . 423 Also, the security of a channel may be checked on the same basis at step S. 423 51 61 52 62 51 61 430 52 62 52 62 52 62 423 In other words, at step S, the security of a quantum channel or is checked by comparing information about polarizing plates (preparation bases and measurement bases) through the classical channel or , and the information remaining after checking the security of the quantum channel or may be used at step S. Here, the classical channel or may correspond to a public channel, and may be eavesdropped on by anybody. However, in the classical channel or , it may be impossible to falsify information or to add additional information. For example, the classical channel or may correspond to the concept of a public board such as a newspaper. Here, at step S, the integrity of information may be guaranteed using Message Authentication Code (MAC). 423 At step S, information may be stored in order to compare the randomly selected preparation basis for a quantum state with the randomly selected measurement basis for the quantum state. 423 52 62 Here, at step S, the preparation basis for a quantum state is compared with the measurement basis for the quantum state through the classical channel or , and the security of a channel may be checked using some bits of the bit string, acquired on the same basis. 424 Also, sifted keys may be generated at step S. 
424 Here, at step S, the sifted keys may be output based on the bits remaining after checking the security of the channel. 424 100 Here, at step S, sifted keys corresponding to the QKD center and QKD client devices may be generated. 424 100 200 100 300 Here, at step S, sifted keys corresponding to the QKD center and the first QKD client device may be generated, and sifted keys corresponding to the QKD center and the second QKD client device may be generated. 430 Also, in the quantum key distribution method according to an embodiment of the present invention, output bit strings may be generated at step S. 430 431 Specifically describing step S, first, the error of a sifted key may be corrected at step S. 431 53 63 431 431 Here, output bit strings may be generated by correcting the errors of the sifted keys at step S. Here, the errors may be corrected using Hamming code, Winnow algorithm, LDPC or the like. Here, the bit string to be transmitted is divided into multiple blocks, and the parity bit of each of the blocks may be transmitted and received via the classical channel or . At step S, a block containing a data error may be detected by checking the parity bit of the block. Then, the block containing the data error is subdivided, and the parity bit thereof is repeatedly checked. Through such a repetition, when the length of the block containing the parity error becomes a length to which Hamming code can be applied, Hamming code is applied thereto, whereby the bit containing the error may be determined and corrected at step S. 430 100 432 Also, at step S, distribution output bit strings of the QKD center may be generated at step S. 432 100 200 A Here, at step S, the first distribution output bit string Rk′ may be generated in such a way that the QKD center corrects the error of the sifted key corresponding to the first QKD client device . 432 100 300 B Here, at step S, the second distribution output bit string Rk′ may be generated in such a way that the QKD center corrects the error of the sifted key corresponding to the second QKD client device . 430 433 Also, at step S, the output bit strings of the QKD client devices may be generated at step S. 433 200 100 A Here, at step S, the first output bit string Rk may be generated in such a way that the first QKD client device corrects the error of the sifted key corresponding to the QKD center . 433 300 100 B Here, at step S, the second output bit string Rk may be generated in such a way that the second QKD client device corrects the error of the sifted key corresponding to the QKD center . 432 433 Here, step S and step S may be performed at the same time. 100 The bit string, generated through the error-correction process, may correspond to a common output bit string between the QKD center and the QKD client devices. 430 100 434 Also, at step S, the QKD center may be authenticated at step S. 434 100 Here, at step S, the QKD center may authenticate its own identity according to a request of one QKD client and open a communication channel in response to the requests by the QKD client devices. Here, authentication may be performed using the authentication key shared with the QKD client devices, the output bit strings, the distribution output bit strings, and a keyed hash function. 
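For illustration only, the integrity protection of the public classical channel used in the steps above (information may be read by anybody, but falsification must be detectable, which the description attributes to a Message Authentication Code) may be sketched as follows. HMAC-SHA256 is used merely as one example MAC construction, and the pre-shared integrity key and function names are assumptions made for illustration.

import hashlib
import hmac

def mac_tag(key: bytes, message: bytes) -> bytes:
    # Message Authentication Code over a classical-channel message;
    # HMAC-SHA256 is used here only as an example MAC construction.
    return hmac.new(key, message, hashlib.sha256).digest()

def publish(key: bytes, message: bytes) -> bytes:
    # Publish the message together with its tag on the public classical channel.
    return message + mac_tag(key, message)

def read_and_verify(key: bytes, frame: bytes) -> bytes:
    # Anyone may read the frame, but a modified or fabricated message is rejected
    # because its tag no longer verifies.
    message, tag = frame[:-32], frame[-32:]
    if not hmac.compare_digest(mac_tag(key, message), tag):
        raise ValueError("classical channel message failed the integrity check")
    return message

# Usage: an eavesdropper may read, but cannot alter, the published basis information.
integrity_key = b"pre-shared integrity key"     # assumed, for MAC purposes only
frame = publish(integrity_key, b"preparation bases: ZXXZ")
print(read_and_verify(integrity_key, frame))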
434 100 200 54 A A Here, at step S, a bit string, acquired by performing an operation on the first authentication key Akand the first distribution output bit string Rk′ may be transmitted from the QKD center to the first QKD client device via the classical channel . 434 200 100 A A A A AkA A AkA A h Rk h Rk Here, at step S, the QKD client device may authenticate the QKD center by comparing the result of a cryptographic operation performed on the first authentication key Akand the first distribution output bit string Rk′ with the result of a cryptographic operation performed on the first authentication key Akand the first output bit string Rkusing Equation (1): ()=(′) (1) 134 100 300 64 B B Here, at step S, the result of the operation performed on the second authentication key Akand the second distribution output bit string Rk′ may be transmitted from the QKD center to the second QKD client device via the classical channel . 434 300 100 B B B B AkB B AkB B h Rk h Rk Here, at step S, the QKD client device may authenticate the QKD center by comparing the result of a cryptographic operation performed on the second authentication key Akand the second distribution output bit string Rk′ with the result of a cryptographic operation performed on the second authentication key Akand the second output bit string Rkusing Equation (2): ()=(′) (2) 434 100 Here, at step S, the QKD center may authenticate own identity according to Equation (1) and Equation (2). 440 Also, in the quantum key distribution method according to an embodiment of the present invention, a shared key bit string may be calculated at step S. 440 441 Specifically describing step S, first, an encryption bit string may be calculated at step S. 441 100 B A B A B AkB A B Rk ′⊕Rk h Rk ′⊕Rk Here, at step S, only when authentication of the QKD center has succeeded, the encryption bit string may be generated by performing a cryptographic operation on the second authentication key Ak, the first distribution output bit string Rk′ and the second distribution output bit string Rk′ using Equation (3): (′)∥(′) (3) AkB B where ⊕ may correspond to an XOR operation, ∥ may correspond to a concatenation and hmay correspond to a keyed hash function using the second authentication key Ak. 441 A B B Here, at step S, the encryption bit string may be calculated by performing an XOR operation on the first distribution output bit siring Rk′ and the second distribution output bit string Rk′, and performing keyed hash function using the second authentication key Ak. 442 Also, a shared key bit string of the QKD client device that requested communication may be calculated at step S. 442 Here, at step S, the encryption bit string may be transmitted to the QKD client device that requested communication. 442 300 65 Here, at step S, the encryption bit string may be transmitted to the second QKD client device , which requested the communication, via the classical channel . 442 300 B A B B A Rk ′⊕Rk Rk ′⊕Rk Here, at step S, an XOR operation may be performed on the front of the encryption bit string and the second output bit string Rkin the second QKD client device using Equation (4): (′)⊕′) (4) 300 100 442 B B A Here, if authentication between the second QKD client device and the QKD center succeeds, because the second distribution output bit siring Rk′ corresponds to the second output bit string Rk, the first distribution output bit string Rk′ may be calculated from the XOR operation of Equation (4) at step S. 
442 200 300 A AB A AB Rk ′⊕Ak Here, at step S, a shared key bit string may be calculated by performing an XOR operation on the calculated first distribution output bit string Rk′ and the inter-client authentication key Ak, shared between the first QKD client device which is requested to communicate, and the second QKD client deice , which requests communication, using Equation (5): (5) 65 B AB A AB Rk ⊕Ak =Rk ′⊕Ak That is, the combination of Equations (3) to (5) may correspond to Equation (6): (INPUT)⊕ (6) 65 135 100 65 Here, INPUTmay correspond to the front of the encryption bit string, which is received from the operation unit of the QKD center via the classical channel . 443 Also, a shared key bit string of the QKD client device, which was requested to communicate, may be calculated at step S. 443 200 100 300 A AB A AB Rk ⊕Ak Here, at step S, if authentication between the first QKD client device , which was requested to communicate, and the QKD center succeeds, a shared key bit string may be calculated by performing an XOR operation on the first output bit string Rkand the inter-client authentication key Ak, shared with the second QKD client device , which requested the communication, using Equation (7): (7) 200 100 A A If authentication between the first QKD client device , which was requested to communicate, and the QKD center succeeds, the first distribution output bit string Rk′ of Equation (4) may correspond to the first output bit string Rkaccording to Equation (1). A AB A AB 232 200 332 300 Accordingly, because the shared key bit string (Rk⊕Ak), calculated by the operation unit of the first QKD client device according to Equation (7), corresponds to the shared key bit string (Rk′⊕Ak), calculated by the operation unit of the second QKD client device according to Equation (5), the same encryption key may be shared therebetween. 450 Also, in the quantum key distribution method according to an embodiment of the present invention, a final key bit string may be generated at step S. 450 450 Here, at step S, the final key bit string may be generated by applying a hash function to the shared key bit string. Here, at step S, some information about the key, leaked to an eavesdropper in the process of error correction and modification, may be deleted. 450 200 K =h Rk ⊕Ak AB A AB Here, at step S, the final key bit string may be calculated using Equation (8) in the first QKD client device , which is requested to communicate. () (8) where h may be a hash function that is used to delete the information exposed to the eavesdropper. 410 The hash function may be shared among the QKD client devices in advance at step S. 450 300 K =h Rk ′⊕Ak AB A AB Here, at step S, the final key bit string may be calculated using Equation (9) in the second QKD client device , which requested the communication. () (9) where h may be the hash function that is used to delete the information exposed to the eavesdropper. 410 The hash function may be shared among the QKD client devices in advance at step S. 450 AB Here, at step S, the length of the bit string may be reduced by deleting the information exposed to the eavesdropper by applying the hash function to the shared key bit string. Therefore, if the hash function h is suitably selected, information about the final key bit string Kmay be protected from eavesdroppers. FIG. 9 FIG. 8 is a flowchart specifically illustrating an example of the step of generating a sifted key, illustrated in . FIG. 
FIG. 9 is a flowchart specifically illustrating an example of the step of generating a sifted key, illustrated in FIG. 8.

Referring to FIG. 9, at step S420, first, a quantum state may be transmitted and received at step S421.

Here, the quantum state may be transmitted and received through a quantum key distribution protocol, such as BB84, B92, or the like, at step S421. Here, the quantum state is prepared, and may be transmitted and received via a quantum channel 51 or 61 at step S421.

Also, the selected preparation basis may be compared with the measurement basis at step S422.

Here, at step S422, a random signal may be generated using quantum mechanics.

Here, at step S422, the selected measurement basis and preparation basis for a quantum state may be transmitted and received via a classical channel 52 or 62.

Also, the security of a channel may be checked based on the same basis at step S423.

Here, at step S423, the security of a quantum channel 51 or 61 is checked by comparing information about polarizing plates (preparation bases and measurement bases) through the classical channel 52 or 62, and the information remaining after checking the security of the quantum channel 51 or 61 may be used at step S430. Here, the classical channel 52 or 62 may correspond to a public channel, and may be eavesdropped on by anybody. However, in the classical channel 52 or 62, it may be impossible to falsify information or to add additional information. For example, the classical channel 52 or 62 may correspond to the concept of a public board such as a newspaper. Here, at step S423, the integrity of information may be guaranteed using a Message Authentication Code (MAC).

At step S423, information may be stored in order to compare the randomly selected preparation basis for a quantum state with the randomly selected measurement basis for the quantum state.

Here, at step S423, the preparation basis for a quantum state is compared with the measurement basis for the quantum state through the classical channel 52 or 62, and the security of a channel may be checked using some bits of the bit string acquired on the same basis.

Also, sifted keys may be generated at step S424.

Here, at step S424, the sifted keys may be output based on the bits remaining after checking the security of the channel.

Here, at step S424, sifted keys corresponding to the QKD center 100 and the QKD client devices may be generated.

Here, at step S424, sifted keys corresponding to the QKD center 100 and the first QKD client device 200 may be generated, and sifted keys corresponding to the QKD center 100 and the second QKD client device 300 may be generated.

FIG. 10 is a flowchart specifically illustrating an example of the step of generating an output bit string, illustrated in FIG. 8.

Referring to FIG. 10, first, the error of a sifted key may be corrected at step S431.

Here, output bit strings may be generated by correcting the error of the sifted keys at step S431. Here, the error may be corrected using a Hamming code, the Winnow algorithm, LDPC, or the like. Here, the bit string to be transmitted is divided into multiple blocks, and the parity bit of each of the blocks may be transmitted and received via the classical channel 53 or 63. At step S431, a block containing a data error may be detected by checking the parity bit of the block. Then, the block containing the data error is subdivided, and the parity bit thereof is repeatedly checked. Through such a repetition, when the length of the block containing the parity error becomes a length to which a Hamming code can be applied, the Hamming code is applied thereto, whereby the bit containing the error may be determined and corrected at step S431.
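The block-parity exchange just described (compare block parities over the classical channel, then repeatedly subdivide any block whose parities disagree) is essentially a binary search for the erroneous bit. The sketch below illustrates that search for a single flipped bit; the sifted keys are hypothetical placeholders, the "remote" parities are simply computed locally instead of being exchanged over channel 53 or 63, and the final Hamming-code stage and the discarding of disclosed parity bits are omitted.

```python
def parity(bits):
    """Parity (0 or 1) of a list of 0/1 bits."""
    return sum(bits) % 2

def locate_error(local, remote, lo=0, hi=None):
    """Binary-search the index of a single flipped bit by comparing block parities.

    'remote' stands in for the parity answers that would arrive over the
    classical channel; here both bit strings are simply held locally.
    """
    if hi is None:
        hi = len(local)
    if hi - lo == 1:
        return lo                      # block of length 1: this bit is the error
    mid = (lo + hi) // 2
    if parity(local[lo:mid]) != parity(remote[lo:mid]):
        return locate_error(local, remote, lo, mid)
    return locate_error(local, remote, mid, hi)

# Hypothetical sifted keys that differ in exactly one position.
center_key = [0, 1, 1, 0, 1, 0, 0, 1]
client_key = [0, 1, 1, 0, 0, 0, 0, 1]   # bit 4 flipped by channel noise
i = locate_error(client_key, center_key)
client_key[i] ^= 1                       # correct the located bit
assert client_key == center_key
```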
Also, distribution output bit strings of the QKD center 100 may be generated at step S432.

Here, at step S432, the first distribution output bit string Rk_A′, included in the distribution output bit strings, may be generated in such a way that the QKD center 100 corrects the error of the sifted key corresponding to the first QKD client device 200.

Here, at step S432, the second distribution output bit string Rk_B′, included in the distribution output bit strings, may be generated in such a way that the QKD center 100 corrects the error of the sifted key corresponding to the second QKD client device 300.

Also, the output bit strings of the QKD client devices may be generated at step S433.

Here, at step S433, the first output bit string Rk_A may be generated in such a way that the first QKD client device 200 corrects the error of the sifted key corresponding to the QKD center 100.

Here, at step S433, the second output bit string Rk_B may be generated in such a way that the second QKD client device 300 corrects the error of the sifted key corresponding to the QKD center 100.

Here, the order of step S432 and step S433 is not limited, and step S432 and step S433 may be performed at the same time.

The bit string, generated through the error-correction process, may correspond to a common output bit string between the QKD center 100 and the QKD client devices.

Also, at step S430, the QKD center 100 may be authenticated at step S434.

Here, at step S434, the QKD client devices request authentication of the QKD center 100 from the QKD center 100, and the QKD center 100 may authenticate its own identity and open a communication channel in response to the request by the QKD client devices. Here, authentication may be performed using the authentication key shared with the QKD client devices, the output bit strings, the distribution output bit strings, and a keyed hash function.

Here, at step S434, a bit string, acquired by performing an operation on the first authentication key Ak_A and the first output bit string Rk_A, may be transmitted from the QKD center 100 to the first QKD client device 200 via the classical channel 54.

Here, at step S434, the QKD center 100 may authenticate its own identity by comparing the result of a cryptographic operation performed on the first authentication key Ak_A and the first distribution output bit string Rk_A′ with the result of a cryptographic operation performed on the first authentication key Ak_A and the first output bit string Rk_A using Equation (1):

h_{Ak_A}(Rk_A) = h_{Ak_A}(Rk_A′)   (1)

Here, at step S434, the result of the operation performed on the second authentication key Ak_B and the second output bit string Rk_B may be transmitted from the QKD center 100 to the second QKD client device 300 via the classical channel 64.

Here, at step S434, the QKD center 100 may authenticate its own identity by comparing the result of a cryptographic operation performed on the second authentication key Ak_B and the second distribution output bit string Rk_B′ with the result of a cryptographic operation performed on the second authentication key Ak_B and the second output bit string Rk_B using Equation (2):

h_{Ak_B}(Rk_B) = h_{Ak_B}(Rk_B′)   (2)

Here, at step S434, the QKD center 100 may authenticate its own identity according to Equation (1) and Equation (2).
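As a concrete illustration of the comparison in Equations (1) and (2), the short sketch below models the keyed hash h_Ak with HMAC-SHA-256; the choice of HMAC and the key and bit-string values are assumptions made for the example, since the embodiment only requires some keyed hash function shared in advance.

```python
import hmac
import hashlib

def keyed_hash(auth_key: bytes, bit_string: bytes) -> bytes:
    """Stand-in for the keyed hash h_Ak(.): HMAC-SHA-256 is assumed here."""
    return hmac.new(auth_key, bit_string, hashlib.sha256).digest()

def client_checks_center(auth_key: bytes, own_output_bit_string: bytes, tag_from_center: bytes) -> bool:
    """Equation (1)/(2): compare h_Ak(Rk), computed locally, with h_Ak(Rk') received from the center."""
    expected = keyed_hash(auth_key, own_output_bit_string)
    return hmac.compare_digest(expected, tag_from_center)

# Hypothetical placeholder values: after successful error correction Rk_A equals Rk_A',
# so the tag the QKD center sends over the classical channel verifies on the client side.
Ak_A = b"pre-shared-authentication-key-A"
Rk_A = bytes.fromhex("5ac3")          # client-side output bit string (placeholder)
Rk_A_prime = Rk_A                      # center-side distribution output bit string
tag = keyed_hash(Ak_A, Rk_A_prime)     # transmitted from the QKD center to the client
assert client_checks_center(Ak_A, Rk_A, tag)
```

Because Rk_A and Rk_A′ agree once error correction has succeeded, the tag computed by the center over Rk_A′ matches the tag the client recomputes over Rk_A, which is exactly the equality that Equation (1) checks.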
FIG. 11 is a flowchart specifically illustrating an example of the process of calculating a shared key bit string, which is illustrated in FIG. 8.

Referring to FIG. 11, at step S440, first, an encryption bit string may be calculated at step S441.

Here, at step S441, only when authentication of the QKD center 100 has succeeded, the encryption bit string may be generated by performing a cryptographic operation on the second authentication key Ak_B, the first distribution output bit string Rk_A′ and the second distribution output bit string Rk_B′ using Equation (3):

(Rk_A′ ⊕ Rk_B′) ∥ h_{Ak_B}(Rk_A′ ⊕ Rk_B′)   (3)

where ⊕ may correspond to an XOR operation, ∥ may correspond to concatenation, and h_{Ak_B} may correspond to a keyed hash function using the second authentication key Ak_B.

Here, at step S441, the encryption bit string may be calculated by performing an XOR operation on the first distribution output bit string Rk_A′ and the second distribution output bit string Rk_B′, and performing a keyed hash function using the second authentication key Ak_B.

Also, a shared key bit string of the QKD client device, which requested communication, may be calculated at step S442.

Here, at step S442, the encryption bit string may be transmitted to the QKD client device that requested communication.

Here, at step S442, the encryption bit string may be transmitted to the second QKD client device 300, which requested the communication, via the classical channel 65.

Here, at step S442, an XOR operation may be performed on the front of the encryption bit string and the second output bit string Rk_B in the second QKD client device 300 using Equation (4):

(Rk_A′ ⊕ Rk_B′) ⊕ Rk_B = Rk_A′   (4)

Here, if authentication between the second QKD client device 300 and the QKD center 100 succeeds, because the second distribution output bit string Rk_B′ corresponds to the second output bit string Rk_B, the first distribution output bit string Rk_A′ may be calculated from the XOR operation of Equation (4) at step S442.

Here, at step S442, a shared key bit string may be calculated by performing an XOR operation on the calculated first distribution output bit string Rk_A′ and the inter-client authentication key Ak_AB, shared between the first QKD client device 200, which is requested to communicate, and the second QKD client device 300, which requests communication, using Equation (5):

Rk_A′ ⊕ Ak_AB   (5)

That is, the combination of Equations (3) to (5) may correspond to Equation (6):

(INPUT_65) ⊕ Rk_B ⊕ Ak_AB = Rk_A′ ⊕ Ak_AB   (6)

Here, INPUT_65 may correspond to the front of the encryption bit string, which is received from the operation unit 135 of the QKD center 100 via the classical channel 65.

Also, a shared key bit string of the QKD client device, which is requested to communicate, may be calculated at step S443.

Here, at step S443, if authentication between the first QKD client device 200, which was requested to communicate, and the QKD center 100 succeeds, a shared key bit string may be calculated by performing an XOR operation on the first output bit string Rk_A and the inter-client authentication key Ak_AB, shared with the second QKD client device 300, which requested the communication, using Equation (7):

Rk_A ⊕ Ak_AB   (7)

If authentication between the first QKD client device 200, which was requested to communicate, and the QKD center 100 succeeds, the first distribution output bit string Rk_A′ of Equation (4) may correspond to the first output bit string Rk_A according to Equation (1).
Accordingly, because the shared key bit string (Rk_A ⊕ Ak_AB), calculated by the operation unit 232 of the first QKD client device 200 according to Equation (7), corresponds to the shared key bit string (Rk_A′ ⊕ Ak_AB), calculated by the operation unit 332 of the second QKD client device 300 according to Equation (5), the same encryption key may be shared therebetween.

FIG. 12 is a block diagram illustrating a computer system according to an embodiment of the present invention.

Referring to FIG. 12, an embodiment of the present invention may be implemented in a computer system 1100, such as a computer-readable recording medium. As illustrated in FIG. 12, the computer system 1100 may include one or more processors 1110, memory 1130, a user interface input device 1140, a user interface output device 1150, and storage 1160, which communicate with each other via a bus 1120. Also, the computer system 1100 may further include a network interface 1170 connected to a network 1180. The processor 1110 may be a central processing unit (CPU) or a semiconductor device for executing processing instructions stored in the memory 1130 or the storage 1160. The memory 1130 and the storage 1160 may be volatile or nonvolatile storage media. For example, the memory 1130 may include ROM 1131 or RAM 1132.

The present invention may improve the security of quantum key distribution by preventing information on the encryption of a quantum key, which is finally distributed among QKD client devices, from being exposed to a QKD center.

Also, the present invention may improve the security of quantum key distribution through a cryptographic operation on an authentication key, shared among client devices, and an output bit string in which an error is corrected.

Also, the present invention may distribute a quantum key, encrypted with a hash function having improved security, to users.

As described above, the QKD center and method according to the present invention are not limited to the configurations and operations of the above-described embodiments; rather, all or some of the embodiments may be selectively combined and configured so that the embodiments may be modified in various ways.
Musical road signs: dynamics, tempo, fermatas, repeats, etc.

In order for a piece to be played accurately and with expression and dynamics, written music includes a number of signs and symbols to guide the musician. Some of these include words that tell the musician how loudly or softly to play a note or passage. The following is a list of dynamics often used:

Pianissimo: very soft.
Piano: soft.
Mezzo piano: half as soft as piano.
Mezzo forte: half as loud as forte.
Forte: loud.
Fortissimo: very loud.
Sforzando: forced, abrupt, fierce.
Crescendo: a gradual increase in volume.
Diminuendo (or decrescendo): a gradual decrease in volume.

One piece of music can contain many symbols for dynamics, everything from very soft passages (pianissimo) to loud passages (forte) to passages that increase or decrease in volume (crescendo or decrescendo). In some cases, the conductor (or leader) of a group will request changes in dynamics that do not appear in the music (leaving the interpretation of the music to their discretion).

Tempo is measured in beats per minute (bpm). A tempo of 60 bpm would match the ticking of a clock, with a beat every second. Quite often, you’ll see the tempo (in bpm) displayed at the beginning of the piece. For a piano or other music student, a metronome is sometimes used as a training device. The metronome can be set for a wide variety of beats per minute and helps the student develop consistency of tempo in their playing. When you see a drummer in a rock band click his drum sticks four times, or call out the numbers 1, 2, 3, 4!, he is setting the tempo for the rest of the band. Tempo has a great effect on the feel and effectiveness of the music played, and it’s critical when musicians are playing for dancers. Dances such as the waltz and two-step require a particular tempo.

Bar lines (vertical lines on the staff) are used to separate a song into measures. Measures divide the music into regular groupings of beats, be it three, four, or six beats per measure. Except in rare cases, each measure contains the same number of beats throughout a song. Measures are often numbered so that there is a “road map” for the musician when playing as part of a group. For example, a conductor may ask the orchestra to “begin with measure 31.”

A repeat sign is used quite often in music. If a particular music passage is to be repeated, a double bar line, preceded by two dots, is used. This tells the musician to return to the beginning of the passage and play it again. Other markings such as the coda and da capo (D.C.) are used to guide the musician to the proper place in the music, such as playing the passage again from the beginning (passages are repeated quite often) or jumping ahead to a particular measure or point in the music.

A fermata (sometimes called a “bird’s eye” because of its appearance) tells the musician that a particular note is to be played longer than its normal duration. How long the note is to be held is usually up to the musician or conductor. A fermata is usually displayed above the note it affects. Some music contains breath marks that show the musician when to take a breath (if singing or playing a wind instrument) or when to lift the bow for string players.
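Since tempo is simply a count of beats per minute, the length of one beat follows from a single division; the tiny sketch below (using the 60 bpm clock comparison above, plus a 120 bpm value chosen only as an illustration) shows the arithmetic a metronome performs.

```python
def seconds_per_beat(bpm: float) -> float:
    """Duration of one beat, in seconds, for a tempo given in beats per minute."""
    return 60.0 / bpm

print(seconds_per_beat(60))    # 1.0 second per beat, like the ticking of a clock
print(seconds_per_beat(120))   # 0.5 seconds per beat, twice as fast
```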
http://www.playpiano.com/wordpress/music/musical-road-signs-dynamics-tempo-fermatas-repeats-etc
Jakarta (Indonesia), 24 January 2019 - The United Nations Office on Drugs and Crime (UNODC) Indonesia, the Directorate-General of Corrections (DGC) of Indonesia, and Second Chance Foundation signed a new agreement to enhance vocational training and work opportunities in traditional Indonesian batik production for prisoners in Class IIA Semarang Women's Prison in Central Java, Indonesia.

The Director General of Corrections, Ms. Sri Puguh Budi Utami, noted that this agreement is a form of the Government of Indonesia's commitment to "collaborate with other parties in order to enhance the competency of all its citizens so that they could achieve a better life."

Prison-based rehabilitation and social reintegration programmes, such as vocational training and work training, are essential to reducing recidivism and preparing inmates to be contributing members of society. However, with limited infrastructure, tools, and equipment, Semarang Women's Prison provides limited opportunities for prisoner rehabilitation and social reintegration. The newly signed agreement, which is part of UNODC's Global Programme for the Implementation of the Doha Declaration, therefore seeks to provide prisoners with professional skills and official certificates to increase their employment prospects upon release and prevent recidivism.

The selection of Semarang Women's Prison is also "a noted achievement. Although women make up a small percentage of the prison population, they face unique challenges as prisons are generally not designed for women. We see this agreement as an opportunity to enhance services for women in the corrections system," noted UNODC Indonesia Country Manager, Mr. Collie F. Brown.

This initiative aligns with the revised United Nations Standard Minimum Rules for the Treatment of Prisoners (the Nelson Mandela Rules) and the United Nations Rules for the Treatment of Women Prisoners and Non-custodial Measures for Women Offenders (Bangkok Rules), which call on states to develop and implement prison-based rehabilitation programmes in order to equip inmates with the skills and qualifications to rebuild their lives upon release.

The batik vocational training programme in Semarang Women's Prison was first established in 2012 with the goal of training the inmates in the use of natural colorants in batik production, which is still a unique procedure in Indonesia. Prisoners who have passed the evaluation and received a recommendation from the rehabilitation staff get the opportunity to participate in a three-month vocational training cycle, provided by Second Chance Foundation, through which they gain theoretical background knowledge as well as practical skills in the techniques of traditional Indonesian batik production. After completion of the training programme, prisoners have the possibility to continue working at the production facility, with remuneration based on the profit resulting from the sale of the fabrics.

Indonesian Batik, which was inscribed on the UNESCO Representative List of the Intangible Cultural Heritage of Humanity in 2009, has a long-standing tradition in Indonesia. The technique involves hand-drawing or stamping patterns onto fabric with hot wax to create a surface that resists dyeing, thus allowing individual parts of the fabric to be coloured selectively. Batik clothes enjoy high popularity in the country, and naturally-dyed batik carries a high economic value, making the fabrics of batik from Semarang Women's Prison well-suited for sale on the local market.
Through UNODC's support, the work training premises of Semarang Women's Prison will be modernized and enhanced to increase the number of prisoners benefiting from the vocational training programme and subsequent employment. In order to ensure that the production cycle of the fabrics is environmentally friendly and sustainable, the workshop will also be equipped with a wastewater management system and a hot wax recycling tool.
https://www.unodc.org/roseap/en/indonesia/2019/01/female-fffenders/story.html
This past year scientists and conservation organizations declared that a long list of species may have gone extinct, including dozens of frogs, orchids and fish. Most of these species haven’t been seen in decades, despite frequent and regular expeditions to find out if they still exist. The causes of these extinctions range from diseases to invasive species to habitat loss, but most boil down to human behavior.

Of course, proving a negative is always hard, and scientists are often cautious about declaring species truly lost. Do it too soon, they warn, and the last conservation efforts necessary to save a species could evaporate, a problem known as the Romeo and Juliet Effect. Because of that, and because many of these species live in hard-to-survey regions, many of the announcements this past year declared species possibly or probably lost, a sign that hope springs eternal.

And there’s reason for that hope: When we devote energy and resources to saving species, it often works. A study published in 2019 found that conservation efforts have reduced bird extinction rates by 40 percent. Another recent paper found that conservation actions have prevented dozens of bird and mammal extinctions over just the past few decades. The new paper warns that many of the species remain critically endangered, or could still go extinct, but we can at least stop the bleeding. And sometimes we can do better than that. This year the IUCN—the organization that tracks the extinction risk of species around the world—announced several conservation victories, including the previously critically endangered Oaxaca treefrog (Sarcohyla celata), which is now considered “near threatened” due to protective actions taken by the people who live near it.

[. . .]

[. . .] Nazareno (Monteverdia lineata)—Scientific papers declared this Cuban flowering plant species extinct in 2010 and 2015, although it wasn’t catalogued in the IUCN Red List until this year. It grew in a habitat now severely degraded by agriculture and livestock farming.

[. . .] Craugastor myllomyllon—A Guatemalan frog that never had a common name and hasn’t been seen since 1978 (although it wasn’t declared a species until 2000). Unlike the other frogs on this year’s list, this one disappeared before the chytrid fungus arrived; it was likely wiped out when agriculture destroyed its only habitat.

[. . .] Roystonea stellata—Scientists only collected this Cuban palm tree a single time, back in 1939. Several searches have failed to uncover evidence of its continued existence, probably due to conversion of its only habitats to coffee plantations.

[. . .] Jalpa false brook salamander (Pseudoeurycea exspectata)—Small farms, cattle grazing and logging appear to have wiped out this once-common Guatemalan amphibian, last seen in 1976. At least 16 surveys since 1985 did not find any evidence of the species’ continued existence.

[. . .] Euchorium cubense—Last seen in 1924, this Cuban flowering plant—the only member of its genus—has long been assumed lost. The IUCN characterized it as extinct in 2020 along with Banara wilsonii, another Cuban plant last seen in 1938 before its habitat was cleared for a sugarcane plantation.

[. . .] Cora timucua—This lichen from Florida was just identified from historical collections through DNA barcoding. Unfortunately, no new samples have been collected since the turn of the 19th century.
The scientists who named the species this past December call it “potentially extinct” but suggest it be listed as critically endangered in case it still hangs on in remote parts of the highly developed state. They caution, however, that it hasn’t turned up in any recent surveys. [. . .]
Description: “HEALTHY MIND LIVES IN A HEALTHY BODY”

To promote the importance and role of right nutrition for the human body and to cultivate good eating habits amongst our children, St. Anns Senior Secondary School, Roorkee organized an online activity on the Importance of Nutritious Food on 27.09.2021. More than 350 students from classes 1 to 5 actively participated in this activity, wherein they showed that a balanced and healthy diet is key to good health. They displayed many nutritious food items, such as pulses, fruits, green leafy vegetables, milk, etc., which form a healthy and balanced diet. The students were educated about the ill-effects of malnutrition and junk food.
https://www.stannsroorkee.org/gallery/importance-of-nutritious-food-for-class-v
1 The components of a balanced diet A balanced diet contains six key nutrient groups that are required in appropriate amounts for health. These groups are outlined below. Proteins are involved in growth, repair and general maintenance of the body. Carbohydrates are usually the main energy source for the body. Lipids or fats are a rich source of energy, key components of cell membranes and signalling molecules, and as myelin they insulate neurons (nerve cells). Vitamins are important in a range of biochemical reactions. Minerals are important in maintaining ionic balances and many biochemical reactions. Water is crucial to life. Metabolic reactions occur in an aqueous environment and water acts as a solvent for other molecules to dissolve in. A deficiency of any one type of nutrient can lead to disease, starvation (or dehydration in the case of water) and subsequent death. Fibre is a component of food that is not nutritious but is important to include in our diet. Fibre or roughage is non-digestible carbohydrate and it has an important role in aiding the movement of food through the gut. There is also an absolute requirement for some specific molecules in the diet. This is because, although the body can manufacture most of the molecules it needs, some essential molecules cannot be made by the body. These molecules are called essential nutrients, and must be supplied in the diet, for example lysine and methionine, which are essential amino acids. Other components of the human diet are not nutrients at all, as they do not perform the functions of producing energy or promoting growth and repair, but are eaten for other purposes. For example, spices and other flavourings help make food more palatable; tea and coffee drinks provide a good source of water and may also contain other valuable substances such as antioxidants (see below). An adequate diet is essential for health and education plays a key role in providing people with the knowledge of what constitutes a healthy diet, but as is so often the way with science, the information keeps changing. The information about what we should be eating comes from various sources: in the UK a large amount of data was collected and published by COMA, the Committee on Medical Aspects of Food Policy (1991). This committee has now been disbanded, but its publications still represent a valid source of information about diet. Currently (2004), the Scientific Advisory Committee on Nutrition (SACN) advises the Department of Health and the Food Standards Agency (FSA). The Food Standards Agency produces a guide to choosing a healthy diet, ‘The balance of good health’, a version of which is illustrated in Figure 1. A lack of an adequate supply of any nutrient is known as malnutrition and leads to poor health. Activity 1 Does Figure 1 enable you to identify any nutrient that might be inadequately represented in your diet? Answer Figure 1 is a representation known as a pie chart. It enables you to see the relative proportions of each of the food categories that are likely to make up your diet. It is a fairly crude instrument and does not allow you to identify any nutrient deficiency. Figure 1 can be useful tool for teaching children about a healthy diet. Activity 2 How much fruit and vegetables should you eat? Answer Fruit and vegetables should make up over a quarter of your daily intake. The message in Figure 1 is simple and it emphasizes balance rather than focusing on specific nutrients. 
However, SACN does recommend a range of intake levels for all nutrients and energy for males and females throughout life, known as the dietary reference values (DRVs). Because individuals vary in their exact energy requirements, depending on sex, age, occupation and many other factors, often the estimated average requirement (EAR) is given, with the understanding that some individuals need more than this value and others less. EAR values for energy are shown in Table 1. Table 1 Estimated average requirements, EAR, for energy throughout the lifespan. Also shown are the extra (+) amounts of energy required during pregnancy and lactation. Values based on COMA data published by the Department of Health in 1991. At the time of writing these are the most recent data available. |Age||EAR/kcal per day| |Males||Females| |0–3 months||545||515| |4–6 months||690||645| |7–9 months||825||765| |10–12 months||920||865| |1–3 years||1230||1165| |4–6 years||1715||1545| |7–10 years||1970||1740| |11–14 years||2220||1845| |15–18 years||2755||2110| |19–50 years||2550||1940| |51–59 years||2550||1900| |60–64 years||2380||1900| |65–74 years||2330||1900| |over 75 years||2100||1810| |pregnancy||+ 200*| |lactation||+ 450–80| *During the last three months of pregnancy. Activity 3 From Table 1, identify the age at which males require the most dietary energy. Suggest why this may be so. Answer The highest energy intake, 2755 kcal per day, is required between the ages of 15 and 18. This is the age range in which boys grow and increase their muscle mass to achieve their adult size and shape. They are also maturing sexually at this stage. Dietary reference values in fact comprise three numbers: the EAR, just discussed, the reference nutrient intake, RNI, and the lower reference nutrient intake, LRNI. These figures replace the old recommended daily amount (RDA), which was felt not to offer sufficient flexibility for individuals’ differing needs. The RNI is set at a level that satisfies the requirements of 97.5% of the population, and the LRNI is set at a level that satisfies the needs of only 2.5% of the population. Thus almost everyone has requirements falling between these two figures. You may have noticed that over the last few years the information on food packaging has shifted emphasis from a categorical ‘satisfies 20% of RDA’, for example, and more towards a list of ingredients, perhaps with an exhortation to ‘eat five portions of fruit each day’. This reflects the move away from a ‘one-size-fits-all’ RDA to a situation in which individuals’ needs can be accommodated. Individual requirements for nutrients vary considerably depending on factors such as age and sex as you saw above. Other relevant factors are size, metabolic rate (see below) and occupation. The situation is further complicated as interactions between components of the diet may alter the efficiency of absorption or utilization of a particular nutrient. The body also has stores of certain nutrients (fat-soluble vitamins, for example) so that variations in daily intake of such nutrients can be accommodated. Thus it could be misleading to recommend a particular daily intake level. In summary: A balanced diet consists of six main nutrient groups; proteins, carbohydrates, lipids, vitamins, minerals and water. Dietary reference values (DRVs) comprise a range and an estimated average of recommended daily intake levels for nutrients and energy for males and females at different stages of their life.
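As a small worked example of reading Table 1, the sketch below stores a handful of the EAR values listed above and looks one up by age band and sex; the dictionary covers only the rows shown here and is purely illustrative of how the estimated average requirement varies across the lifespan.

```python
# A subset of Table 1: estimated average requirement (EAR) for energy, in kcal per day.
EAR_KCAL_PER_DAY = {
    ("7-10 years", "male"): 1970,
    ("7-10 years", "female"): 1740,
    ("15-18 years", "male"): 2755,
    ("15-18 years", "female"): 2110,
    ("19-50 years", "male"): 2550,
    ("19-50 years", "female"): 1940,
}

def estimated_average_requirement(age_band: str, sex: str) -> int:
    """Return the EAR in kcal/day for the given age band and sex (illustrative subset only)."""
    return EAR_KCAL_PER_DAY[(age_band, sex)]

# Example: a 16-year-old boy averages 2755 kcal per day, roughly 800 kcal more than
# a 30-year-old woman (1940 kcal per day), reflecting growth and increasing muscle mass.
print(estimated_average_requirement("15-18 years", "male"))
print(estimated_average_requirement("19-50 years", "female"))
```

Keep in mind that the EAR is only an average: the RNI and LRNI bracket it, and an individual's actual requirement also depends on size, metabolic rate and occupation.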
https://www.open.edu/openlearn/science-maths-technology/biology/obesity-balanced-diets-and-treatment/content-section-1
Please select another system to include it in the comparison. Our visitors often compare JanusGraph and Sphinx with PostgreSQL, MongoDB and Elasticsearch. |Editorial information provided by DB-Engines| |Name||JanusGraph successor of Titan Xexclude from comparison||Sphinx Xexclude from comparison| |Description||A Graph DBMS optimized for distributed clusters It was forked from the latest code base of Titan in January 2017||Open source search engine for searching in data from different sources, e.g. relational databases| |Primary database model||Graph DBMS||Search engine| |Website||janusgraph.org||sphinxsearch.com| |Technical documentation||docs.janusgraph.org||sphinxsearch.com/docs| |Developer||Linux Foundation; originally developed as Titan by Aurelius||Sphinx Technologies Inc.| |Initial release||2017||2001| |Current release||0.5.0, March 2020||3.2.1, January 2020| |License Commercial or Open Source||Open Source Apache 2.0||Open Source GPL version 2, commercial licence available| |Cloud-based only Only available as a cloud service||no||no| |DBaaS offerings (sponsored links) Database as a Service| Providers of DBaaS offerings, please contact us to be listed. |Implementation language||Java||C++| |Server operating systems||Linux| OS X Unix Windows |FreeBSD| Linux NetBSD OS X Solaris Windows |Data scheme||yes||yes| |Typing predefined data types such as float or date||yes||no| |XML support Some form of processing data in XML format, e.g. support for XML data structures, and/or support for XPath, XQuery or XSLT.||no| |Secondary indexes||yes||yes full-text index on all search fields| |SQL Support of SQL||no||SQL-like query language (SphinxQL)| |APIs and other access methods||Java API| TinkerPop Blueprints TinkerPop Frames TinkerPop Gremlin TinkerPop Rexster |Proprietary protocol| |Supported programming languages||Clojure| Java Python |C++ unofficial client library| Java Perl unofficial client library PHP Python Ruby unofficial client library |Server-side scripts Stored procedures||yes||no| |Triggers||yes||no| |Partitioning methods Methods for storing different data on different nodes||yes depending on the used storage backend (e.g. Cassandra, HBase, BerkeleyDB)||Sharding Partitioning is done manually, search queries against distributed index is supported| |Replication methods Methods for redundantly storing data on multiple nodes||yes||none| |MapReduce Offers an API for user-defined Map/Reduce methods||yes via Faunus, a graph analytics engine||no| |Consistency concepts Methods to ensure consistency in a distributed system||Eventual Consistency| Immediate Consistency |Foreign keys Referential integrity||yes Relationships in graphs||no| |Transaction concepts Support to ensure data integrity after non-atomic manipulations of data||ACID||no| |Concurrency Support for concurrent manipulation of data||yes||yes| |Durability Support for making data persistent||yes Supports various storage backends: Cassandra, HBase, Berkeley DB, Akiban, Hazelcast||yes The original contents of fields are not stored in the Sphinx index.| |User concepts Access control||User authentification and security via Rexster Graph Server||no| More information provided by the system vendor We invite representatives of system vendors to contact us for updating and extending the system information, Related products and services We invite representatives of vendors of related products to contact us for presenting information about their offerings here.
https://db-engines.com/en/system/JanusGraph%3BSphinx
Repair, reuse, recycle… That is the concept behind the circular economy, an economic model that is expected to become increasingly important in years to come. In March 2020, the European Commission presented an action plan for a new circular economy conceived to tackle a number of issues. The document covers waste reduction, designing more sustainable products, the right to repair, and other areas. All industries have their place in the circular economy, but undoubtedly some warrant a closer focus than others. In particular, resource-intensive industries such as textiles, manufacturing and construction. In the building industry, the circular economy applied to the construction industry is often referred to as “circular construction”. The Circular Economy: definition What is the circular economy? As we have seen, the term indicates a specific economic system characterized by in-built eco-sustainability. A circular economic system is defined by its ability to regenerate, thanks to processes for sharing, reusing, repairing and recycling materials in order to fully leverage existing resources. The Ellen MacArthur Foundation provided a much-cited definition of the circular economy: it is “a generic term to define an economy that is regenerative by design. In a circular economy, materials follow one of two types of flow: biological, capable of being reintegrated into the biosphere, or technical, destined to be re-used without ever entering the biosphere.” The end-goal is to extend the product lifecycle as far as possible and minimize waste production. Within such a system, when a product has completed its function or breaks down, where possible it is recycled, the materials from which it was created reintroduced back into the economic cycle to generate new value from the same resources, without consuming any new raw materials. The Circular Economy: Benefits As we have seen, the European Union is specifically investigating the circular economy as a model for the future, not least because the alternatives are far less palatable: globally, there is a continuous upswing in demand for raw materials, at a time when resources are becoming ever-scarcer. In a world where raw materials are indubitably finite and the human population is growing vertiginously, the circular economy is our best – perhaps only – possible solution. Adding to this now-general belief is the urgency of the environment, considering that, among other things, many extraction processes generate very grave impacts: the more circular the economy, the lower our carbon dioxide emissions and energy consumption. As well as rendering more materials available and reducing environmental impact, the benefits of the circular economy include reducing dependency on other countries for sourcing raw materials. Indeed, many EU countries depend totally or almost totally on other countries for resources that they are unable to obtain domestically. The circular economy’s potential employment benefits are also well worth considering: according to the European Union, by following this new approach, 700,000 new jobs will be created by 2030. Circular economy infrastructure Not surprisingly, as the principles of the circular economy have become more widely publicized, circular construction in particular has attracted greater attention. Italy’s environmental research and protection agency ISPRA has repeatedly pointed out that the construction industry generates more special waste than any other. 
For example, in 2017, the construction industry produced 57 million tons of waste material from construction and demolition. Given this situation, the European Union set fairly stringent targets some time ago: by 2020, all member countries should have achieved 70% recovery of demolition and construction waste, a figure that is very much higher than Italy’s current national average of between 20% and 30%. Italy needs to achieve a veritable step change by following tangible construction industry circular economy best practice. Consider, for example, what was achieved with the Circl building, the headquarters of Dutch bank ABN AMRO: not only were materials from other demolished buildings used to construct it, old garments were used too. Indeed, ABN AMRO collected old jeans from its employees, using sixteen pairs of them to make insulation layers for the building. This somewhat unconventional approach is nevertheless fully in line with Dutch government guidance, which has set a target for the entire national economy to run on recycled raw materials by 2050. Circular economy principles in construction: The Italia 2030 Study Among the latest and most interesting developments in circular construction is the position paper drafted under Italia 2030, a Ministry of Economic Development and LUISS Business School project to ensure Italy’s sustainable future. Italia 2030’s many working groups, including one dedicated to the construction industry, resulted in a paper entitled, “The Circular Economy: An Opportunity to Rethink Construction”, in which, among other things, three priority areas of intervention were identified to rejig the construction industry. First, focusing on the issue of waste from demolition and construction, particularly the legal and technical barriers to recovering, recycling and using such materials. A second area of interest is alternative materials, specifically the need to come up with indicators of sustainability and durability for selected materials. Last but not least, it will be necessary to work on criteria regarding construction design and management, with a special focus on digitization and reducing climate vulnerability.
https://www.webuildvalue.com/en/infrastructure-news/circular-economy.html
Job Overview: The Supply Chain Manager coordinates and/or manages all aspects of the supply chain: Strategy, Planning, Sourcing/Purchasing, Manufacturing, and Delivery/Logistics and Returns (for defective or unwanted products). - Lead strategic Supply Chain Management initiatives or projects and be responsible for ensuring security and flexibility of supply critical to the product portfolio. - Develop and implement these strategies to ensure Quality, Delivery, Cost, are achieved for the given products and that organizational objectives are met. Responsible for managing relationships with key suppliers. Supplier management activities may include, but not be limited to, the following: supplier visits, running weekly/bi-weekly meetings, business reviews, score cards, ensuring supply continuity, addressing quality issues, etc. - Optimize the balance between supply availability, cost, inventory investment, and risk of obsolescence/backorder while working within Company Policies. - Be the voice of Supply Chain for the assigned business units. - Act as a business liaison for Demand and Supply for all assigned regions globally. Maintain a strong working relationship with all colleagues to ensure that changes and activities are managed on a timely and cost effective basis. • Maintain minimal to no backorders on all products released to market. • When supply issues arise, work with manufacturing, quality, regulatory, product managers, customer solutions, and others, as needed, to resolve the issues. Supplier Management • Ensure that suppliers are aligned with The Company’s business strategy and objectives. • Manage supplier relationships and understand their capabilities and capacities. • Drive continuous improvement and supplier development • Understand suppliers’ strategic initiatives/direction • Maintain continuity of supply, make recommendations to improve quality, productivity and overall efficiency of operations Project Management • Ensure that Project/Department milestones/goals are met and that they adhere to approved budgets. New Product introductions • Work with Product Management, Engineering and New Product Introduction process where applicable Requirements: • Experience in purchasing, supplier management, operations, sourcing, demand planning • 5+ years of work experience, or an equivalent combination of training, education, and experience which demonstrates the ability to perform the duties of the position • Proven ability to effectively interact with multi-functional areas (Marketing, Sales, Manufacturing, Engineering, Purchasing, Finance, Quality, etc.).
https://pinpoint-pharma.com/scientific-clinical-careers/?job_details=828
The Senior Program Manager is responsible for leading and ensuring quality design, monitoring, evaluation, proposal development and reporting efforts. English: Fluent Arabic: Fluent Kurdish: Strong Education and experience University degree. Minimum 4-6 years experience in designing, monitoring and evaluating programs and grant development and management. Minimum 4 years experience working with major donors such as: USAID, EU/ECHO, DFID, UN Agencies, and others in KRI, Iraq, and Northeast Syria Responsibilities Design, monitoring and evaluation · Lead assessment planning. Determine what data is required, review existing data, determine what new data is needed and identify best methodology to secure the data. Analyze data and apply it to design. · Lead project design efforts. Design should be based on evidence of best practice, context assessment and SAMS experience. Ensure SMART design. · Guide monitoring efforts. Ensure data is collected, analyzed and fed back to improve implementation. · Support efforts to evaluate if projects meet the planned outcomes. · Ensure compliance on planning and reporting requirements put forward by SAMS donors · Develop new partnerships and joint programming initiatives by working with the relevant local stakeholders · Support strategic partnerships with United Nations agencies and non state actors (NSAs) active in the philanthropic sectors Grant management · Build strong relationships with government, NGOs, international organizations, donors and relevant private sector organizations in order to secure their support for increasing and diversifying the funding base of SAMS programs stakeholders. · Analyze and advise SAMS HQ and field office staff on health priorities according to international donors such as: OCHA, IHF, USAID, GIZ, and European governments · Act as the main point of contact regarding grants with donor field offices. · Conduct regular coordination meetings with donor field offices and report information to HQ. · Lead coordination of business development efforts (positioning, proposal development, etc.) for KRI, Federal Iraq, and Northeast Syria · Lead the development of all grant reports, narrative and financial and share with headquarters Grant Department · Assist headquarters with anti-terrorist checks – vendor information – before payments are made. · Ensure that SAMS’s staff inside Syria and warehouse staff are adhering to compliance regulations. Bring any concerns regarding violations of compliance to the program director and to the director of Turkey office. · Ensure and track grants spending on monthly basis from all donors · Hold a weekly meeting to review grants status with headquarters program team. Representation · Represent SAMS in all relevant fora in relation to the position duties, e.g. Government of KRI and Federal Iraq, NGOs, donors. · Liaise with partners on local authorities and representatives in Northeast Syria · Attend UN agency working group meetings and conferences · Contacts in the Iraq government (Federal and KRI), WHO, and local Directorates of Health preferred Staff management · Develop performance objectives for each staff. Evaluate staff on a semi-annual basis. · Develop professional development plans for direct reports. · Build an effective team approach ensuring direct reports are working together effectively. · Hold weekly team meetings with direct reports for planning purposes, information sharing and problem solving. Office culture · Promote and model a positive, professional and respectful office culture. 
How to apply:
https://www.ungojobs.org/2020/09/senior-manager-programs-iraq-nationals.html
Ataxia may be classified as episodic, acute, intermittent, or chronic. Acute ataxia in children is caused by central nervous system (CNS) tumors, trauma, CNS infection, toxins, metabolic dysfunction, or stroke. Recurrent ataxia may be due to metabolic dysfunction, seizures, basilar artery migraine, or toxins. Chronic ataxia is usually the result of hereditary ataxia, CNS tumors, congenital anomalies, hydrocephalus, or metabolic disorders. Acute Ataxia Acute cerebellar ataxia Acute cerebellar ataxia is the most common cause of acute ataxia in children. It is more common in children under the age of 5 years following an infection. It is usually benign and self-limited. Careful examination and work-up are needed to exclude more serious, life-threatening causes of ataxia. Acute intoxication Acute intoxication is a common cause of childhood ataxia due to the ingestion of medications affecting the CNS. Children present with altered mental status, ataxia, and behavior changes. Anticonvulsants, alcohol, benzodiazepines, illicit drugs, and lead are common medications causing ataxia in children. Tumors Tumors are a common cause of cerebellar ataxia. Cerebellar tumors or tumors of the posterior fossa (e.g., hemangioblastoma or neuroblastoma) can interfere with cerebrospinal fluid drainage, leading to early elevated intracranial tension. Patients complain of headache, vomiting, blurring of vision, and focal neurological lesions. Acute edema and hemorrhage lead to disturbed consciousness. Ataxia develops from cerebellar compression. Paraneoplastic syndromes such as Opsoclonus-myoclonus syndrome (OMS) Paraneoplastic syndromes are degenerative disorders that are triggered by the immune system in response to a tumor. Opsoclonus-myoclonus syndrome is a rare condition that can be associated with neuroblastoma. Its presentation includes opsoclonus, body myoclonus, and truncal titubation, as well as ataxia. Encephalopathy and sleep problems may also arise. Acute stroke Acute stroke in children with sickle cell disease, systemic lupus erythematosus, nephrotic syndrome, or homocystinuria is a well-known cause of acute ataxia. Hemorrhage due to vertebral artery dissection is the most common cause of stroke in children. The occurrence of cerebrovascular events leads to the deprivation of oxygen and nutrients to some parts of the brain, leading to brain damage that causes ataxia. Intracranial hemorrhage Intracranial hemorrhage in children can be the result of trauma or vascular malformation. It can lead to life-threatening elevation of intracranial tension and, subsequently, ataxia. CNS infections Both current and resolving CNS infections can lead to ataxia. Viral or bacterial agents with autoimmune-mediated cerebellitis can cause temporary ataxia in children; these usually resolve with no permanent sequelae. Common viruses include Epstein-Barr, herpes simplex, varicella, parvovirus B19, mumps, and influenza. Cerebellar abscess, encephalitis, meningitis, and acute post-infectious demyelinating encephalomyelitis are all causes of acute infectious ataxia. Patients present with fever, elevated intracranial tension, focal neurological symptoms, seizures, altered consciousness level, and ataxia. Acute postinfectious demyelinating encephalomyelitis sometimes presents with ataxia, as well as seizures and sensory and motor abnormalities. Labyrinthitis Labyrinthitis describes inflammation of the labyrinth due to viral or bacterial infection. 
It may complicate otitis media, leading to hearing loss, vertigo, and loss of balance. Guillain-Barré syndrome (GBS) Guillain-Barré syndrome usually causes motor paralysis, but ataxia may develop due to a loss of cerebellar sensory input. Other Causes Other causes of acute ataxia include inborn errors of metabolism, hypoglycemia, tick paralysis, and conversion disorder. Hereditary Ataxia There are many other types of autosomal dominant ataxia, most of which result from a gain of function mutation due to trinucleotide repeat expansion. They commonly present during adulthood, though are sometimes present during childhood. A family history of autosomal dominant ataxia is the key determining factor. Autosomal recessive ataxia is common in children and may also manifest in adults. Sensory polyneuropathy, as well as other forms of organ dysfunction, is a common presentation. The most common hereditary ataxia is Friedreich ataxia. Friedreich’s Ataxia Friedreich’s ataxia is due to the expansion of the GAA repeat in the FXN gene responsible for iron clusters, with resultant iron overload in the mitochondria. Patients present with ataxia, weakness, absent reflexes, and dorsiflexion of the toes. Other associated presentations may include scoliosis, visual and auditory dysfunctions, cardiomyopathy with conduction abnormalities, and diabetes mellitus. Magnetic resonance imaging (MRI) scan shows late atrophy of the cerebellum and a thin cervical spinal cord. Ataxia Telangiectasia Ataxia telangiectasia, also known as Louis–Bar syndrome, is considered the second-most common cause of autosomal recessive ataxia. It is caused by a mutation in the ATM gene responsible for managing DNA repair. The mutation leads to cerebellar ataxia, telangiectasias, oculomotor apraxia, delayed puberty, dysarthria, and diabetes. The immune system is also impaired, increasing the risk of infections and cancer. An MRI scan shows cerebellar atrophy. Subacute and Chronic Ataxia Most children presenting with subacute or chronic ataxia have a long-standing disorder such as nutritional issues or inflammatory, autoimmune, infectious, and endocrine diseases. Vitamin B12 and folate deficiencies are common causes of sensorineural ataxia in adults and children. Clinical Evaluation of Ataxia Any child presenting with ataxia should be evaluated for the following: - History of infection, trauma, or medications - Family history of ataxia or neurological disease, metabolic disease, headache, behavioral changes, prior motor development, and onset of ataxia Vital signs correlating with increased intracranial tension include bradycardia, hypertension, and abnormal respiration, while fever correlates with infections. An examination should exclude life-threatening danger signs including disturbed consciousness, meningism, and bulging fontanelles. The examination should evaluate for: - Acute cerebellar ataxia: abnormal gait, normal position and vibration sense, and negative Rhomberg sign. Cerebellar lesions affect gait, speech, and voluntary movements. Patients will present with wide-stance or staggering gait, speech dysarthria, truncal position abnormalities, and over- or under-shooting of voluntary movements during a finger-to-nose test or rapid alternating movements. - Pseudo-ataxia: patients present with weakness, areflexia, acute disseminated encephalomyelitis, and multiple sclerosis. - Infection: fever, neck rigidity, positive Kernig’s sign and Brudzinski’s sign. 
- Labyrinthitis and ear vestibular neuritis: confirmed by performing the Dix-Hallpike maneuver. - Cranial nerve abnormalities: suggest posterior fossa tumor compressing the cranial nerves, as they originate from the brain stem or Miller Fisher syndrome. - Weakness, abnormal gait, and ataxia or pseudo-ataxia: as in Guillain-Barré syndrome, myasthenia gravis, and tick paralysis. - Impaired proprioception: seen during sensory examination in patients with sensorineural ataxia. Investigations of Ataxia A urine toxicology screen should be the first step in the pediatric evaluation of ataxia. Accidental ingestion of drugs or medications may be responsible for acute ataxia in many children. Basal metabolic profile and blood gases are important to exclude life-threatening metabolic abnormalities. Amino acid determinations of blood and urine, pyruvate level, ammonia levels, and urine organic acid are also useful for determining inborn errors of metabolism. Diabetes can be screened using a random blood glucose level. Imaging of Ataxia A computed tomography scan is the preferred study in the emergency room setting. It is useful for detecting posterior fossa lesions including hemorrhage, tumors, and traumatic lesions. An MRI allows better visualization of the brain and brain stem but is less useful in emergency cases. Imaging is indicated in cases with acute loss of consciousness, focal neurological signs, or evidence of increased intracranial pressure. Management of Ataxia Management of children with acute ataxia is supportive. Most of the cases resolve within a few weeks. Prednisolone and IV immunoglobulin have been tried for persistent cases. Intermittent and chronic cases are difficult to manage. Transcranial magnetic stimulation and deep brain stimulation have been attempted but with a lack of beneficial evidence.
https://www.lecturio.com/magazine/ataxia-in-childhood/?appview=1
Does Traditional Team Building Work?

For most employees, team building exercises and group problem-solving activities lift spirits and foster closer teams overall. However, some employees view them cynically, or even find them nerve-racking or embarrassing, something employers may miss in their hopes of bringing a team closer. The question is: Do they actually work? Here are four reasons why traditional team building activities need to change if they are to really work.

- Learning

It’s hard to bring out useful learning points if all of your employees are to grow together. It is hard to make sure that people take the learning back to the workplace, so understanding the activity and how it can help them is integral. By removing any negativity surrounding the activity, being creative, and ensuring everyone knows and understands its motives, the group is less likely to feel it is a mandatory thing to get through and more likely to see it as something to enjoy.

- Balanced attention for each employee

Some employees may not enjoy having attention drawn to themselves, which may make them anxious about team building activities that single everyone out. This may lead to a wedge being driven between them, their colleagues and the employer, rather than bringing them together, which is the goal of team building overall. A better way is to involve the group using skills that each employee shines at, drawing on each of them collectively to positively encourage participation and reaffirm their contribution as a whole.

- Socializing Versus Team Building

The main reason traditional team building doesn’t motivate teams is that it is hard to determine whether we are simply socializing or participating in a team building activity. While many employees like to socialize together even outside of work, others might not, and if the line between a social event and a team building activity is blurred, we are back to item 1 – learning. What is it that everyone is supposed to take away from it? Have an answer for this before planning the next event and ensure the guidelines and information are clear on what the objective is… to build stronger teams.

Solving the traditional team building conundrum

Choose team-building activities that are specific to the group and use its skills, but aren’t just a push for an existing work project that they’re doing anyway. For example, create a game that uses strategy for the group’s short- and long-term goals, but take away the direct “so how will you now do your task?” element. This brainstorming process before creating the activity will clarify what kinds of things will best motivate your team.

How Adventure Games Team Building Can Help Your Team

Traditional team building is out, and modern team building games and activities that allow your team to thrive are in! Fun means different things to different people, but if you allow your employees to operate at their best, not only will you have a positive work environment, they will also feel motivated and invested in positive results. Have fun at work with regular team building games that boost morale, increase productivity and foster relationships between co-workers naturally. No matter your industry, our large selection of creative team building activities and games will help your business. We can customize a team building experience for you and your staff that is a helpful business tool. A fun work environment is something to aspire to as a business owner in your industry.
A positive place will improve efficiency and increase your bottom line. What more could you want? We at AdVenture Games Inc are experts at helping you to find your team’s strengths and weaknesses and devising a plan that helps work on them exclusively. How closely a team works is one of the positive parts of employee engagement as a whole. We can’t wait to help you to improve upon your staff’s skills and to improve your business overall. Let’s get started today!
https://www.adventuregamesinc.com/why-traditional-team-building-no-longer-motivates-teams/
The most important reason researchers erred so badly on risk measurement is the manner in which EMH and most other economic investigators conduct their research. Since the Second World War the social sciences have attempted to become as rigorous as the physical sciences. No discipline has put more effort into this goal than economics. Starting about fifty years ago, economists held out high hopes that through mathematics they could make the dismal science as predictable as Einstein's theory of relativity or Kepler's laws of planetary motion. Nobel laureate Paul Samuelson, then a young professor of economics at MIT, was the first to integrate the techniques of differential equations, which had met with such success in physics, into a structured approach which could be used to study virtually any economic problem. The key assumption was rationality: for a firm it meant maximizing profits, for an individual maximizing his or her economic desires. Rational behavior is the bedrock of Samuelson's work. This dubious platform allowed the economist to merrily build the most complex mathematical models. Economics could now be converted into a precise physical science. The great majority of economic research gravitated in this direction, despite the warnings of some of the important economic thinkers of the past. John Maynard Keynes, for example, was trained as a mathematician but refused to build his classic theory on unrealistic assumptions. Like his teacher, the great Victorian economist Alfred Marshall, Keynes believed economics was a branch of logic, not a pseudo-natural science. Marshall himself wrote that most economic phenomena do not lend themselves to mathematical equations, and warned against the danger of falling into the trap of overemphasizing the economic elements that could be most easily quantified. The Samuelson revolution, however, with its emphasis on complex quantification parroting the physical sciences, came to totally dominate economics in the postwar period. Mathematics, which pre-Samuelson was a valuable but subordinate aid to reality-based assumptions, now rules economics. Good ideas are often ignored by economists simply because they are not written down in pages of highly complex statistical formulas, or don't employ equations using most of the letters of the Greek alphabet. The vast amount of research published in the academic journals contains minuscule additions to economic thinking, but is dressed in sophisticated mathematical models. Bad ideas planted in deep math tend to endure, even when the assumptions are questionable and evidence strongly contradicts the conclusions. Economic ideas and principles once understood by educated readers are now unfathomable to all but the most highly trained mathematical researchers. This would be well and good if economics had achieved the predictability of a physical science. But without realistic assumptions the dismal science has been broken down rather than rejuvenated by mathematics. As John Cassidy points out in an excellent article in The New Yorker, complex new mathematical theories such as those of Robert Lucas, a Nobel Prize winner from the University of Chicago, while causing a generation of novice economists to build ever more complex models, are discredited in the end, with no agreement on what should replace them. 
Lucas's work concluded that the Federal Reserve should not actively guide the economy, but only increase the money supply at a constant rate.5 The research came under sharp theoretical attack, again because at the core of Lucas's complex mathematical formulas were untenably simple assumptions, such as that supply always equals demand in all markets. (If this were true we could not have unemployment—the supply of workers would never exceed the demand for them.) Once the supply-demand assumption is dropped, few of Lucas's conclusions hold up. Commenting on the impracticality of Lucas's work, Joseph Stiglitz, then chairman of the President's Council of Economic Advisers, said, "You can't begin with the assumption of full employment when the President is worried about jobs—not only this President, but any President."6 Economics, traditionally one of the most important of the social sciences, has suffered a self-inflicted decline. Not all are unaware of this. In 1996 the Nobel Prize for economics was awarded to two men, William Vickrey, an emeritus professor at Columbia (for a research paper in 1961), and James Mirrlees, a professor at Cambridge. Although the popular press extolled Vickrey's contribution as breaking fresh intellectual ground in fields as diverse as tax policy and government bond auctions, the professor denied the hyperbole. He said, "[it's] one of my digressions into abstract economics. At best it's of minor significance in terms of human welfare."7 When interviewed, he talked instead about other unrelated work he had done, which he considered far more important. The failure of the complex statistical models to provide much insight into current economic problems has resulted in a cutback in hiring of economists by Wall Street and major corporations. Laurence Meyer, who, before becoming a governor of the Federal Reserve, ran one of the nation's most successful economic forecasting firms, St. Louis-based Macroeconomic Advisers, said, "In our firm we always thanked Robert Lucas for giving us a virtual monopoly. Because of Lucas and others, for two decades no graduate students were trained who were capable of competing with us by building econometric models that had a hope of explaining short-term output and price dynamics. We educated a lot of macroeconomists who were trained to do only two things—teach macroeconomics to graduate students and publish in the journals. . . . [These economists] don't care what happens out there. [They] don't try to build models which are consistent with the real world."8 Complicated statistical analysis is no different in the investment arena, nor should it be, since it's another branch of economics. Simple assumptions are usually necessary as a platform for abstruse statistical methods. More complex assumptions, though far more descriptive of the real world, do not allow the development of the statistical analysis the researchers desire, or that the academic journals will publish. The assumption of total rationality is the mother lode of complex statistical analysis. It eliminates the need for any other psychological assumptions, which, though likely to provide better guidelines to investor behavior in the real world, would vastly complicate the analysis, and probably send it in directions completely away from the researchers' paradigm. Given the simple assumption of rationality, researchers in the best tradition of the Samuelson Revolution can merrily take off to examine how the totally rational investor will approach markets.
They can then use the most complex differential equations or other statistical methodology to discover new results. Whether these assumptions have the remotest connection to reality is irrelevant. Who cares?
https://www.dothefinancial.info/contrarian-strategies/the-crisis-of-modern-economics.html
The improvement of experimental methods, generally driven by technological development, has led to significant changes in the experimental sciences. Biology, as an experimental science, has become much more quantitative and dependent on mathematical and statistical models. The emergence of new laboratory technologies poses many challenging problems in mathematics. Indeed, these high-throughput data in biology have been, are, and will continue to be an important driver of new statistical, computational, and mathematical research. Reciprocally, mathematical models play a key role in the prediction of pandemics and constitute a very valuable tool to guide the design of public health policies, which are used to mitigate the spread of diseases. The objective of this research program will also include the development of optimization tools for everyday activities such as transportation of people, logistics, energy supply, food production, etc., with the aim of improving the efficiency of processes and leading to a more sustainable society. These objectives will be worked on through the following PIs.

PI Biostatistics and Biomathematics: At the population scale, we encounter challenges in ecology or population dynamics, and even in epidemiology, where the use of models of systems of differential equations has a long tradition. At the organism level, mathematical and statistical contributions to the field of medicine include both deterministic and stochastic models. In addition, knowledge of genes and their sequences allows the design of experiments to simultaneously measure their expression under various laboratory conditions. From the data, it is possible to build interaction models of the millions of molecules that make up each cell.

PI Understanding the environment and climate change: Describing and modeling environmental and ecological processes, for short and long time scales, requires the use of complex mathematical and statistical models. Numerical simulation, deterministic and stochastic modeling, and data analysis can be designed to forecast and locate extreme event risks. Risk forecasting and localization can be combined with resource management strategies in environmental emergencies with significant ecological and social impacts.

PI Towards sustainability: The exploitation of natural resources has led certain species to critical situations. The mathematical modeling of biological communities and their dynamics, both statistical and deterministic, will provide a deeper vision and knowledge about these areas, allowing the design of adequate preservation policies. On the other hand, the estimates of limited reserves of fossil fuels and, to a greater extent, the deterioration of the planet due to the emissions derived from their use, make the development of renewable energy extraction technology (solar, wind, maritime...) a priority. In this sense, mathematical modeling, numerical simulation, and optimization techniques play an important role in the development of prototypes, as well as in the creation of devices that control and manage them.
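To make the kind of ODE-based epidemic modeling mentioned above concrete, here is a minimal, illustrative Python sketch of an SIR (Susceptible-Infected-Recovered) model integrated with a simple Euler scheme. It is not taken from the research program described here; the parameter values (beta, gamma, population size) and function names are assumptions chosen only for demonstration.

```python
# Minimal SIR sketch of the ODE-based epidemic models mentioned above.
# All parameters (beta, gamma, population, initial infections) are
# illustrative assumptions, not values from the research program.

def sir_step(s, i, r, beta, gamma, dt):
    """Advance the SIR system by one explicit Euler step of size dt."""
    n = s + i + r
    new_infections = beta * s * i / n * dt   # flow S -> I
    new_recoveries = gamma * i * dt          # flow I -> R
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def simulate(days=160, dt=0.1, beta=0.3, gamma=0.1, population=1_000_000, i0=10):
    """Return a list of (time, S, I, R) tuples over the simulation horizon."""
    s, i, r = population - i0, float(i0), 0.0
    trajectory = [(0.0, s, i, r)]
    for k in range(1, int(days / dt) + 1):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
        trajectory.append((k * dt, s, i, r))
    return trajectory

if __name__ == "__main__":
    traj = simulate()
    peak_time, _, peak_infected, _ = max(traj, key=lambda row: row[2])
    print(f"Peak of ~{peak_infected:,.0f} simultaneous infections around day {peak_time:.0f}")
```

Models actually used to guide public health policy would add age structure, stochasticity, and data-driven parameter estimation, but the same compartmental idea underlies them.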
https://citmaga.gal/en/m4-vida-y-sostenibilidad
1 mole of magnesium chloride is decomposed to form 1 mole of magnesium and 1 mole of chlorine. Chemical reactions involve moles of substance. Masses (grams) are not used because the relationships are too complex. If the above reaction were discussed using grams, it would have to say: 95 grams of magnesium chloride are decomposed to form 24 g of Mg and 71 grams of chlorine. As you can see, the ratios would be 95:24:71, and these values bear no resemblance to the coefficients of the chemical equation. There is no way to simplify these numbers. By dealing with moles the ratios are 1:1:1 and the calculations are basic.

When the products of a reaction like this are measured, it is easiest to measure the weight (mass) of the products. For example, a chemist decomposes some MgCl2. After the reaction is over, the chemist weighs the Mg and finds that 24 g were produced. The question is how much MgCl2 was needed to produce the 24 g of Mg. Since chemical reactions are about moles, we first need to convert the 24 g of Mg into an equivalent number of moles of Mg. The following calculation does that:

24 g Mg x (1 mole Mg / 24 g Mg) = 1 mole Mg

Looking at the reaction, it can be seen that for 1 mole of Mg to be produced, it takes 1 mole of MgCl2 to be decomposed. A close look at the reaction above shows the ratios of MgCl2:Mg:Cl2 to be 1:1:1. This means that 1 mole MgCl2 makes 1 mole Mg and 1 mole Cl2. Once we know how many moles of MgCl2 were needed, we can then convert the number of moles into an equivalent mass through the molar mass of MgCl2. First, the molar mass of MgCl2 is 24 g + 35.5 g + 35.5 g = 95 g.

1 mole MgCl2 x (95 g MgCl2 / 1 mole MgCl2) = 95 g MgCl2

Thus it requires 95 g of MgCl2 to produce 24 g of Mg.

Some other reactions: A chemist decomposes sucrose (C12H22O11) into carbon and water: C12H22O11 → 12 C + 11 H2O. Suppose that the chemist measured the mass of C produced and found the value to be 144 g. How much sucrose was needed to produce 144 g of C? First, determine how many moles of C are equivalent to 144 g, like so:

144 g C x (1 mole C / 12 g C) = 12 moles C

The ratio of sucrose to C is 1:12. Since 12 moles of C were produced, the reaction and the ratios show that we must have had 1 mole of sucrose. Last, we calculate the mass of 1 mole of sucrose, whose molar mass is 342 g.

1 mole C12H22O11 x (342 g C12H22O11 / 1 mole C12H22O11) = 342 g C12H22O11

Thus 342 g of sucrose were needed to produce 144 g of C.

Another example is the decomposition of water: 2 H2O → 2 H2 + O2. A chemist measures 64 g of O2 being produced. How much water was decomposed to produce the 64 g of O2? First calculate how many moles of O2 are equivalent to 64 g of O2:

64 g O2 x (1 mole O2 / 32 g O2) = 2 moles O2

The ratio is 2:1 for H2O:O2. That is, 2 moles of H2O are needed to produce 1 mole of O2. We calculated above that 2 moles of O2 were produced. So if 2 moles H2O produce 1 mole O2, then it must take 4 moles H2O to produce 2 moles O2. Last, convert the moles of H2O to grams of H2O using the molar mass. The molar mass of H2O is 1 g + 1 g + 16 g = 18 g. So:

4 moles H2O x (18 g H2O / 1 mole H2O) = 72 g H2O

Thus 72 g of water had to be decomposed to produce 64 g of O2.
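For readers who want to check these mole-ratio conversions programmatically, here is a small, illustrative Python sketch that mirrors the grams → moles → mole ratio → grams reasoning above. The function names are invented for this example, and the molar masses use the same rounded atomic masses as the text (Mg = 24, Cl = 35.5, C = 12, H = 1, O = 16).

```python
# Illustrative helper mirroring the grams -> moles -> ratio -> grams steps above.
# Function names are invented for this sketch; molar masses use the text's
# rounded atomic masses (Mg = 24, Cl = 35.5, C = 12, H = 1, O = 16).

def grams_to_moles(grams, molar_mass):
    return grams / molar_mass

def moles_to_grams(moles, molar_mass):
    return moles * molar_mass

def reactant_grams_needed(product_grams, product_molar_mass,
                          reactant_molar_mass, reactant_per_product):
    """Mass of reactant required for a given mass of product, where
    reactant_per_product is the mole ratio from the balanced equation."""
    product_moles = grams_to_moles(product_grams, product_molar_mass)
    reactant_moles = product_moles * reactant_per_product
    return moles_to_grams(reactant_moles, reactant_molar_mass)

# MgCl2 -> Mg + Cl2 (1:1:1): 24 g of Mg requires 95 g of MgCl2
print(reactant_grams_needed(24, 24, 95, 1))          # 95.0

# C12H22O11 -> 12 C + 11 H2O (1 sucrose : 12 C): 144 g of C requires 342 g of sucrose
print(reactant_grams_needed(144, 12, 342, 1 / 12))   # 342.0

# 2 H2O -> 2 H2 + O2 (2 H2O : 1 O2): 64 g of O2 requires 72 g of H2O
print(reactant_grams_needed(64, 32, 18, 2))          # 72.0
```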
http://drbillboyle.com/chem/reaction.htm
History has moments in time that make you pause and think, how could this happen? June 19, 1865 is one of those moments. Although the Emancipation Proclamation was issued on January 1, 1863, the news of the Proclamation did not reach enslaved African-American people in Galveston, Texas, until June 19, 1865. As you can imagine, this was surely met with jubilation and surprise, given the 2+ years it took for this news to reach them. As the oldest nationally celebrated commemoration of the end of legal slavery in the US, this was initially celebrated predominantly by the newly-freed people in Galveston. The following year, freedmen and freedwomen organized the first of what became the annual celebration of “Jubilee Day” on June 19 in Texas. Over time, the annual celebration spread across the rest of the US, focusing on education, history, culture, pride and more. So, I am sure you are wondering what this has to do with the employee experience. Over the last year, we have seen many companies and organizations look at the different ways to observe the holiday. So, how can you honor Juneteenth in the workplace? Closing for Juneteenth is certainly an option. However, this is such an important historical and cultural holiday that decisions to close in observance of Juneteenth should reflect an authentic drive toward diversity, equity and inclusion. Training! Learn how to be a contributing ally, gain an understanding of nuances between cultural appropriation and celebration, bring in guest speakers to guide these important conversations. Have your ERG sponsor a workplace activation, and continue this for other commemorations for people of color. Foster reflection and giving: Have a volunteer day with your team to give back to local communities and find a meaningful way to observe the day. Hold a town hall meeting to discuss your DEI plans. Juneteenth symbolizes more than just freedom, it’s important to ensure employees that race, religion, sexual identity and gender will never be barriers to their growth and success. Foster the sense of belonging! Encourage courageous conversations in the workplace, which will help continue your DEI efforts for a safe space for all. Celebrate and recognize the contributions of your African American and Black employees, leaders and stakeholders. Have a lunch and learn with your team and watch the virtual tour from the Smithsonian National Museum of African American History and Culture’s Slavery and Freedom exhibit. “Watch the video as the Museum's Founding Director Lonnie Bunch III leads a tour through the Slavery and Freedom exhibition to celebrate Juneteenth highlighting stories behind some of their most popular objects, including Nat Turner's bible, freedom papers of free African Americans and a Sibley tent that housed African Americans who ran from Southern plantations in search of freedom with the Union army.” Create a place of belonging for all employees. Not only will you show the importance of their experience, you show that they matter and you show that their history matters. Having an ERG is not enough, you must continue to build out and evolve your diversity, equity and inclusion programs and activations. Observing and recognizing Juneteenth in the workplace solidifies the organization’s commitments to its mission, vision and values and continues to promote a diverse, inclusive place of belonging for their employees.
https://www.cheerpartners.com/post/juneteenth-the-employee-experience
The information presented on this website is for informational purposes only. Medical information or statements made within this web site are not intended for use in, or as a substitute for the medical advice, diagnosis and/or treatment provided by a healthcare provider which has been established by an in person consultation of a patient. This web site does not suggest nor provide medical advice, and does not endorse any professional service obtained through information provided within this web site. Any information made available or published through this website does not replace the medical services provided by our physician or the medical staff under our physician’s supervision. The Cosmetic Center of RI or its affiliates shall not be responsible or liable to anyone for any loss or injury caused in whole or in part by your use of this information, or for any decision you make or action you take in reliance on information you received from this web site. The information contained on our web site is compiled from a variety of sources. It may not be complete or timely. It does not cover all diseases, physical conditions, ailments or treatments. Each individual’s treatment and results may vary based upon circumstances, the individual’s situation, the physician’s medical judgment and other relevant evaluations. You should not rely on the information in this web site to determine any course of treatment, diagnosis or outcome. The information should not be used in place of an individual consultation/evaluation with a physician or other healthcare provider. Any statements, before and after photos, and testimonials within this web site are in no way a guarantee of outcome or substitute for the physician’s discussion and evaluation. There is no implied guarantee of outcome or benefit with any treatment or injection. You should never disregard the advice of your physician or other qualified healthcare provider because of information you read on this web site. If you have any healthcare questions, please consult with your physician or other qualified healthcare provider. Always consult with your physician or other qualified healthcare provider before you begin any new treatment, diet or fitness program.
https://www.cosmeticcenterri.com/disclaimer/
ISBN-13: 9780198422327
Author: Kevin Morley, Kawther Saa'd AlDin
Curriculum: IB Diploma Programme International Baccalaureate
Dimensions: 8.6 x 0.8 x 10.9 inches
Edition: 2nd Revised Edition
Format: Paperback
ISBN-10: 0198422326
Language: English
Pages: 456 pages
Release date: 05/17/2018
Series: IB Diploma Programme English B SL and HL students
Year: 2018

Features
- Each Course Book pack is made up of 1 full-colour, print textbook and 1 interactive, online textbook
- Address crucial changes to the syllabus structure via in-depth coverage of all 5 prescribed themes and all 5 concepts
- Build sophisticated reading, writing, speaking and listening skills via contemporary, international texts and accompanying activities
- Tackle the new listening component with a wide range of interactive audio exercises, embedded in your digital Course Book
- Develop the transdisciplinary skills central to long-term success with clear links to TOK, CAS and the Extended Essay, and thorough coverage of ATL and language concepts
- Your digital Course Book may be accessed by a single student or teacher, using a printed access card that is sent in the mail. If you are unable to receive a printed access card, please contact Oxford.

In the new English Language Acquisition courses (first examinations 2020), learners will develop the cross-cultural skills and global perspectives they will need for studying and working in a complex, multi-cultural world. English Language Acquisition promotes international mindedness through five overarching themes that place the course content in a global context:
- identities
- experiences
- human ingenuity
- social organisation
- sharing the planet

Structure
When developing our new English B Course Book pack, we based the 10 inquiry-led content chapters on these five major themes in a way that will fully engage both SL and HL learners. Each chapter poses an overarching research question that provides a starting point for an initial interactive exploration of one specific aspect of Anglophone cultures and peoples.

| Theme | Chapter title | Research question |
| --- | --- | --- |
| Identity | Fit for life | What ideas and images do we associate with a healthy lifestyle? |
| Experiences | Sketching our lives | How would travelling to or living in another culture affect my worldview? |
| Human ingenuity | Celebrity | What can we learn about a culture through its artistic expression? |
| Social organisation | Volunteers | What is the individual's role in the community? |
| Sharing the planet | Protecting the planet | What environmental and social issues present challenges to the world and how can these challenges be overcome? |
| Identity | Human rights | What ethical issues arise from living in the modern world and how do we resolve them? |
| Experiences | Life's challenges | Does our past shape our present and our future? |
| Human ingenuity | The impact of scientific innovation | How do developments in science and technology influence our lives? |
| Social organisation | Education for the 21st century | What opportunities and challenges does the 21st century bring to education and work? |
| Sharing the planet | Our rights | How do language and culture contribute to form our identity? |

These specific inquiries give students of English B multiple opportunities to examine aspects of English-speaking cultures, and then allow them to reach insights into global contexts that transcend one specific language or culture.
The global contexts that we have emphasized in the new English B Course Book include: - Beliefs and values - Concepts of culture and diversity - Global engagement - Identity and cultural diversity - Intercultural understanding - Interdependence and globalization - International-mindedness and global citizenship - Peace building and conflict resolution - Social justice and human rights - Sustainable futures Each chapter is further subdivided into three inquiry-based sections, each exploring a specific topic and research question for the chapter. For example, Chapter 3 - Celebrity - explores the syllabus theme of Human Ingenuity. The chapter asks students to inquire into three related topics and research questions: Artistic expression (What qualities do you need to become a successful musician?); Media and communication (How are celebrities affected by fame?) and Entertainment (Why do some fans hero worship some celebrities?). In this way, the above chapter on Celebrity examines these global issues: - Beliefs and values - Concepts of culture and diversity - Identity and cultural diversity Moreover, all the global contexts listed above are visited several times over the course of the 10 chapters. Communicating concepts Each section of each chapter contains skills-based activities that help students to explore these global contexts while simultaneously developing language competence: - Input: Reading/Listening - “Before reading/listening” activities that help students to contextualize and articulate prior experiences or knowledge. - A listening or reading text that relates directly to the specific topic and research question of each section. - Listening or reading comprehension exercises with specific references to the concepts and global contexts within the text. - Processing: Interaction Follow-up interactive tasks such as discussion, debate and role-play, that reinforce students’ understanding of the global concept and subtopic under investigation and allow students to practice language related to them. - Output: Speaking - Internal assessment (Standard level) – guided analysis of a visual stimulus related to the chapter theme and global context. - Internal assessment (Higher level) – guided exploration of a literary extract related to the chapter theme and global context. - Output: Writing Writing tasks with guidelines and scaffolds to help students write about the chapter theme and global context while also analysing and practicing specific text types. - Additional personal explorations Students are also encouraged to make their own further explorations of global contexts while responding to these additional stimuli: - ‘TOK moments’ to investigate ideas from a TOK perspective. - ATL sidebars to encourage the development of cross-curricular skills. - Conceptual understanding sidebars to encourage students to think about the global concepts under review from the point of view of audience, context, variation, meaning, purpose. Coming to enduring understandings At the end of each inquiry, students will revisit the research question and in groups, or as a class, reflect on what they have learned, draw conclusions and come to enduring understandings about the world in which they live. For instance, Chapter 5 concludes by asking: In this chapter what have you learned about today’s environmental and social challenges and the methods of overcoming them? 
By encouraging English B students to articulate the life lessons they have learned, students can develop language and communication skills, and also become wiser and more curious about the world in which they live. Students are then challenged to take their learning to a new, independent phase beyond the chapter by reflecting on the extent and quality of their learning: - What questions do you still have about the topic? - What more would you like to know about the subject? - How can you find answers to these questions? Putting ideas into practice Finally, in order to ensure that the lessons learned have practical applications, the concluding section of each chapter, 'Beyond the classroom', invites students to apply the ideas explored in the chapter to students’ own communities and beyond: - Creativity – using the communication skills developed in the chapter to assist local community causes. - Service – applying lessons learned to schools CAS projects and Service learning programmes. - Research – conducting further reading and personal reflection on the issues raised in the chapter. Using these combined strategies students can engage meaningfully in first Anglophone and then global contexts while simultaneously developing communicative competence in English. In this way, the English B Course Book pack exploits the use of global contexts to the full and, thereby, supports the IB’s mission “to develop inquiring, knowledgeable and caring young people who help to create a better and more peaceful world through intercultural understanding and respect”.
https://myibsource.com/products/9780198422327-english-course-book-pack-programme-dp-0198422326
Email: Degrees and Certifications: Ms. Jessica Grier- School Counselor Jessica Grier is a native of Decatur, GA where she currently resides with her husband. She is a graduate of Spelman College (2014) and Richmont Graduate University (2019). She has worked in many capacities as a Teacher and Mental Health Counselor. She is proud to be faculty member at Woodland Middle School where she serves as a School Counselor. Mrs. Grier is looking forward to addressing the mental, emotional, and social health needs of students so they can lead a purposeful life. In her free time, Mrs. Grier enjoys working with her K-12 girl scout troop and serving in ministry. Phone: 470-254-7990 Email: Degrees and Certifications: Denise McDonald-Taylor- Graduation Coach Denise McDonald-Taylor currently serves as the Graduation Coach/SST Chair for the 2020-2021 school year here at Woodland MS. Her main goal is to help scholars remain on track for their targeted graduation date by supporting scholars and their families with social, emotional and academic resources thereby facilitating a successful middle school experience. She has been an educator for twenty-three years, previously serving as a Reading Specialist, Language Arts and Social Studies teacher, and Administrative Assistant. Over the past six years, she has enjoyed taking students on in-state and out-of-state college tours, coordinated the Junior Achievement Experience for the 6th and 7th grade scholars, as well as the annual Summer Bridge program. Additionally, she has been the point of contact for afterschool programs such as Future Foundation, ARROWS, and Land of Promise, which is a weekly food program that supplies students with food backpacks for the weekend. “You didn’t come this far just to come this far” is a saying I firmly believe and together with support, encouragement, and hard work, all students will experience success! Phone: (470) 254-4798 Email: Degrees and Certifications: Shari Simpson- Student Services Clerk Ms. Simpson is a graduate of Tuskegee University and has over 30 years of customer service experience. She has been employed with Fulton County Schools for 15 years in the position of Student Records Management and Support Services which include activity bus driver. Ms. Simpsons is the parent of two former Woodland Middle Schools students. She takes pride in providing professional customer service that includes identifying and addressing the needs of family members with regard to school enrollment (or transfers) by utilizing resources and collaborating with our administrative staff members, outside agencies, other school systems and Fulton County constituents. Phone: Email: Degrees and Certifications: Phone: (470) 254-9804 Email: Degrees and Certifications: Alicia Stokes- School Psychologist Mrs. Alicia Stokes has served as the School Psychologist at Woodland Middle School since 2019. She graduated with a Bachelor of Science degree in Psychology from Kennesaw State University in 2014. Mrs. Stokes then worked as a neurodevelopmental teacher at a private school in Roswell, Georgia where she worked with students with special needs. Afterwards, she attended graduate school at Georgia State University where she earned her Master of Education degree and an Education Specialist’s degree in School Psychology. While in graduate school, she completed her practicum and internship years within Fulton County Schools. She completed graduate school in May of 2019 and was then hired to Fulton County for the upcoming school year. 
As the School Psychologist, Mrs. Stokes evaluates students to assist with eligibility in special education and consults with parents, teachers, staff, and students regularly to assist with academic success. In her free time, Mrs. Stokes enjoys spending time with family and friends and playing with her dogs. Phone: (470) 254-7996 Email: Degrees and Certifications: Nakia Jeter- RTI/ Disproportionality Specialist
https://www.fultonschools.org/Page/20167
We have identified in Lisp some of the elements that must appear in any powerful programming language. Now we will learn about procedure definitions, a much more powerful abstraction technique by which a compound operation can be given a name and then referred to as a unit.

We begin by examining how to express the idea of ``squaring.'' We might say, ``To square something, multiply it by itself.'' This is expressed in our language as

(define (square x) (* x x))

We can understand this in the following way: the definition

(define (square x) (* x x))

reads, piece by piece, as ``To square something, multiply it by itself.'' We have here a compound procedure, which has been given the name square. The procedure represents the operation of multiplying something by itself. The thing to be multiplied is given a local name, x, which plays the same role that a pronoun plays in natural language. Evaluating the definition creates this compound procedure and associates it with the name square.

The general form of a procedure definition is

(define (name formal parameters) body)

The name is a symbol to be associated with the procedure definition in the environment. The formal parameters are the names used within the body of the procedure to refer to the corresponding arguments of the procedure. The body is an expression that will yield the value of the procedure application when the formal parameters are replaced by the actual arguments to which the procedure is applied. The name and the formal parameters are grouped within parentheses, just as they would be in an actual call to the procedure being defined.

Having defined square, we can now use it:

(square 21)
441

(square (+ 2 5))
49

(square (square 3))
81

We can also use square as a building block in defining other procedures. For example, x^2 + y^2 can be expressed as

(+ (square x) (square y))

We can easily define a procedure sum-of-squares that, given any two numbers as arguments, produces the sum of their squares:

(define (sum-of-squares x y) (+ (square x) (square y)))

Now we can use sum-of-squares as a building block in constructing further procedures:

(sum-of-squares 3 4)
25

(define (f a) (sum-of-squares (+ a 1) (* a 2)))

Compound procedures are used in exactly the same way as primitive procedures. Indeed, one could not tell by looking at the definition of sum-of-squares given above whether square was built into the interpreter, like + and *, or defined as a compound procedure.
https://mitpress.mit.edu/sites/default/files/sicp/full-text/sicp/book/node9.html
Cite this as: Leesri T, Srisuphan W, Senaratana W, Vannarit T, Rerkasem K (2016) CIPP Model Evaluation of a Collaborative Diabetic Management in Community Setting. Glob J Medical Clin Case Rep 3(1): 029-034. DOI: 10.17352/2455-5282.000030

Diabetes management requires a continuous assessment of performance, looking at successes and failures. In the community setting especially, the development of a Collaborative Diabetes Management [CDM] relied on the Low Sugar Volunteers [LSVs], a core community working group who vigorously participated in all community activities to manage diabetes and its related problems. The CIPP model was used to guide an evaluation of all activities that took place during CDM development. Context, input, process, product, and an appraisal of sustainability are presented through the community-driven work of the LSVs. The CDM, via community participation strategies, influenced community health impacts, community health policy, and community sustainability. Moreover, various findings from the CDM development influence actions and community direction either immediately or in the future.

CIPP model: Context, Input, Process, and Product Model; CDM: Collaborative Diabetes Management; LSVs: Low Sugar Volunteers

Globally, the prevalence of diabetes is continuously increasing; the number of persons with diabetes in 2010 was 285 million, with a prevalence of 6.4%. By the year 2030, the estimated number is expected to be 439 million, with a prevalence of 7.7%. In Thailand, the national prevalence of diabetes has also been increasing and is among the ten highest in Asia. Of these, 35.4% were not previously diagnosed. Diabetic complications have also increased continuously; the number of deaths in adults due to diabetes is estimated at 3.96 million per year, and the mortality rate of diabetes across all ages is 6.8% at the global level [4,5]. Effective development strategies require active engagement with, and participation from, all stakeholders at the individual, family, community, societal and national levels. Successful implementation of such a plan will ultimately lead to a sufficiency health system and a happy and peaceful society. Thus, diabetes management should be developed from the participation and competencies of each group. For sustainable diabetes management, continuous interventions and shared decision-making strategies can build a collaborative management approach that is effective for diabetes management in the community setting [7,8]. Collaboration between community and academic resources should develop and maintain diabetes management and an understanding of the local diabetes situation. Similarly, community workers help the community to preserve the health of its members and promote self-care among individuals and families, besides identifying high-risk aggregates in the community and developing appropriate interventions to ensure accessible services for the whole population. As to the academic role, the researcher offers and justifies the strategies required to develop effective interventions, enhancing skills and building community capacity by promoting community participation, in a process shaped by the method of action research. In the community setting, both at-risk persons and persons living with diabetes participate in the daily life and activities of the community.
In particular, local tradition emphasized group and community activities involving all residents. It is important to develop effective interventions appropriate to residents' lifestyles for sustainable development. Therefore, a comprehensive approach to diabetes management, with both active and passive interventions, is urgently required. In keeping with the increasing rate of diabetic complications among persons with diabetes, the readiness of the community to participate in diabetes management, and the good participation and organization of the local setting, all community partners together developed a Collaborative Diabetes Management [CDM]; this project encouraged individuals and local organizations to participate in all diabetes activities in the community. Collaboration between community and academic resources was then expected to develop and maintain self-management and an understanding of the diabetes situation. As to the academic role, the researcher offers and justifies the strategies required to develop effective interventions by enhancing skills and building community capacity through community participation [7,8]. This Collaborative Diabetes Management was developed to encourage all community residents and local organizations to participate in every activity related to diabetes management. During implementation of the CDM, various methodologies were utilized to develop effective diabetes management in the community, including participatory workshops, community brainstorming and discussion, community group meetings, community forums or public hearings, and community welfare activities. Six components were created for managing diabetes based on community participation, as presented in Table 1. Therefore, this study aimed to evaluate the collaborative diabetes management provided in the community setting by local health volunteers, the Low-Sugar Volunteers. They were representatives of persons with diabetes, family caregivers of persons with diabetes, village health volunteers, community leaders, and other community stakeholders. The CIPP [Context, Input, Process, and Product] evaluation model is a comprehensive framework for conducting and reporting evaluation. Context evaluation assesses needs, problems, and opportunities as bases for defining goals and priorities and judging the significance of outcomes. Input evaluation assesses alternative approaches to meeting needs as a means of planning programs and allocating resources. Process evaluation assesses the implementation of plans to guide activities and later to help explain outcomes. Product evaluation identifies intended and unintended outcomes, both to help keep the process on track and to determine effectiveness [11,12]. Therefore, the continuous improvement of community activities, as an impact of the LSVs, should be evaluated with appropriate methods. This After-Action Review study was guided by the CIPP model to evaluate the collaborative management of diabetes problems in the community setting. This study was reviewed and approved by the Research Ethics Review Committee, Faculty of Nursing, Chiang Mai University (Approval Code No. 116/2010). Participants were selected by purposive sampling from among people living in the community. There were three groups of participants: community stakeholders, the core working group, and community residents. First, community stakeholders consisted of formal and informal community leaders.
Second, the core working group, called the Low Sugar Volunteers, was composed of 15 village health volunteers, 13 persons with diabetes, 12 family caregivers, 1 municipal member, 2 community group leaders, 2 healthcare providers, and 1 community member who was willing to participate in all processes of the study. Finally, community residents were other people in the community who were affected by diabetes management activities. The researchers, as one of the instruments, were prepared with knowledge and skills in qualitative methods in nursing research and took on the roles of research facilitator, coordinator, and leader. An interview guide for focus groups and semi-structured interviews was developed by the researcher, and tape recordings were used as research tools during the evaluation of diabetes management in the community setting. Also, an observational guide was used when the researcher participated in community events and during project activities. Credibility and confirmability guided the trustworthiness of this study through triangulation: information was gathered from multiple data sources through focus group discussions, semi-structured interviews, and participant observation, with member checking that allowed participants to check and verify the accuracy of the information recorded. To obtain adequate data, the researcher conducted focus groups and in-depth interviews with representative LSVs. These included persons with diabetes, family caregivers, village health volunteers, community leaders and other community residents. Moreover, the researcher conducted participatory observation during all diabetes management activities in the community setting. Content analysis was then employed to analyze and classify words and statements from the target groups' opinions during focus groups and semi-structured interviews. The findings of the evaluation of the Collaborative Diabetes Management [CDM], after the development of its six components, are presented as follows. This community was ready to participate in health activities, given the residents' active health behaviors, the local community leaders, local budget allocations, and community resources. Community readiness for diabetes management included human resources, supportive budgets and local concern. These were endorsed by good co-operation from local organizations, including healthcare personnel and local government officers willing to participate in all processes of this development. The opinion of community leaders was presented as follows: "There are many community resources for running any project: supporting budgets from the local municipality, local personnel from our government office, and residents' concern." (Community Leader) Participatory learning was used to create active activities in all processes of development. As an important leading group, the LSVs performed primary roles in health education, based on their responsibility for activities involving the five modules of the Low-Sugar curriculum for the Low-Sugar School. The LSVs asked participants to share and discuss the problems and collaboratively find solutions by adjusting some procedures so as to enable activities to be carried out smoothly. They played significant roles in facilitating activities run as integrated comprehensive care within local activities, including home visiting, health education and experience sharing, as well as advising the community board committee.
Opinions of community members were presented as follows: "…During the project implementation, I realized that I could consult you (the researcher) at any time by phone or by mail. This made me more confident and secure in doing many activities with others related to diabetes management. Also, I provided information and other resources for my community residents to help them with on-going diabetes control efforts." (Low-Sugar Volunteer) "…When I take part in Low-Sugar projects, I apply and manage diabetes control by myself at home. When I have a problem I can consult the LSVs responsible in my community. They provide me with more information and suggest appropriate ways for diabetes management. I believe their recommendations because they have already received effective training from the specific providers. Especially, they have been trained to be community leaders, Low-Sugar Volunteers, for diabetes control." (Person with diabetes) Commissioning continuous activities required continual stimulation and support of the mutual-collaborative action research process, which was achieved by developing strategic plans that promoted collaborative diabetes management in the community and ensured sustainability. These facilitating and consulting activities were provided at both levels: by the LSV groups for community members, and by the researcher for the core working group. As the important leading group, the LSVs performed primary roles in health education based on all the activities of the five modules of the LS curriculum for the LS School. All LSVs asked participants to share and discuss their problems and collaboratively find solutions, adjusting some procedures to enable activities to be carried out smoothly. They played significant roles in facilitating activities and running integrated comprehensive care within local activities that included home visiting, health education, experience sharing and community board committee meetings. During implementation of the CDM, various methodologies were utilized for effective diabetes management, comprising participatory workshops, community brainstorming and discussion, community group meetings, community forums or public hearings, as well as community welfare activities, as displayed in Table 2. After the six CDM components were implemented, an effective community group emerged to lead community health interventions, especially in diabetes management. They were named "Low-Sugar Volunteers for a Low-Sugar Community" and played important roles in health promotion, education, prevention, and screening, as well as serving as good models of healthy people in their community. Various impacts from this development included the following. During and after the development of the CDM, numerous changes were observed among persons with diabetes (individual consequences) and LSVs (community leader consequences), as well as in community health policy and sustainability. These consequences are elucidated as follows: "Before this project's implementation, I thought that diabetes was a very dangerous disease. Although it was an urgent situation to be in, I did not prepare myself. I resisted in any way I could and did not accept anything, because nobody in my family was a diabetes patient. After and during joining this project, I changed my perceptions of diabetes and was willing to admit I was a diabetes patient. Diabetes is not scary and we can live happily with others.
Moreover, I have been able to share my feelings with other persons who have been diabetes patients for a long time and have received much valuable information about how to monitor myself." (Person with diabetes) Moreover, persons with diabetes received an increased flow of useful information and practical approaches to diabetes lifestyle management. They claimed that the activities learned from this project improved their lifestyle behaviors relating to eating habits, increased physical activity, stress management, and treatment adherence. They started with minor behavioral changes and became totally committed. These project activities became immersed in their everyday lives once they tried to practice them. The practical changes of the program's participants are commented on as follows: "I obtained useful strategies to control my blood sugar, such as various ways to exercise, alternative eating behaviors for diabetes, the local community herbs that I could apply to my diet, and appropriate ways to help myself. During group workshops, I received many strategies for preventing diabetes complications: self-screening methods for eye and foot examination, and the self-monitoring record book for tracking blood sugar levels and controlling blood pressure. Interestingly, these activities helped me and my family monitor diabetes conditions together." (Person with diabetes) For the LSVs (community leader consequences), the project's activities built essential capacity, including an overview of diabetes and its management, self-confidence, knowledge-transfer skills, and being a healthy role model. They improved their teamworking skills, comprising leader and follower roles, cooperative action, active monitoring, facilitating and counseling, as well as devotion to the community. Moreover, they increased their individual capacity related to diabetes, encompassing an overview of diabetes (types, signs and symptoms, complications and prevention strategies) as well as meaningful self-management support for persons with diabetes. Importantly, the skills for transferring this knowledge and these techniques encouraged them and increased their self-confidence. Likewise, these activities supported them in becoming healthy role models and accepted figures in the community's response to diabetes. Moreover, a network for diabetes management in the community context was also created by sharing the lessons learned via verbal communication, community welfare activities, social media and newspapers. After the community activities finished, the LSVs presented their ideas related to project participation as follows: "This project made me more self-confident; I could present all my diabetes knowledge to my community members. It provided a good opportunity for me and our community, which encouraged all community members to work together and supported everything needed for activity implementation. Importantly, you (the researcher) were a good role model for community health workers: easy to contact, good at relationships, and a stimulator of creative thinking." (Low-Sugar Volunteer) "I gained not only useful diabetes information for managing diabetes in my community, but also effective strategies for managing my own diabetes. Through my community roles, I developed into a community LSV. My duties were health education, passing on diabetes information, and providing effective and appropriate lifestyle activities for my community's residents.
In another important role, I had to care for my mother, who is a person with diabetes, at home. This project supported my responsiveness to my family member and also helped me prevent diabetes in myself." (Low-Sugar Volunteer) "This project increased our community's unity. We learned more about group process, teamworking, and creative thinking. We are happy to participate in all activities regarding effective designs for diabetes management in the community. There was a significant change in my community between before and after the project's implementation. We are proud to say "We are LSVs" and to develop our community into a Low-Sugar Community. Especially, we remember the whole development process so that we can apply it to other community problems." (Low-Sugar Volunteer) The important community policy commitment for sustainable development was policy creation and expansion. The community diabetes policy was separated into two main sections: community action plans and policy implementation. These results were gained from in-depth interviews with local community key persons and focus group discussions with community participants, as follows: the researcher, together with the LSVs and local community leaders, created the community vision and mission for guiding diabetes management. They then continued developing the community action plans through collaborative thinking via the LS curriculum, covering the whole community diabetes population. This curriculum provided diabetes management guidelines for every community member. Community advisory board committees for diabetes management were established to provide useful suggestions and support community resources. Moreover, suggestions related to the validity of the planned model were approved by all community stakeholders before the action plans were implemented. All community participants mentioned that further collaboration between local organizations and community residents, together with the key persons of each village, should be supported by the local municipality. This had a significant influence on the sustainability of community activities, and other community members, especially other persons with diabetes, their family caregivers, and local health volunteers, continued to develop and maintain them. Importantly, the development of effective group leaders, the LSVs for the Low-Sugar Curriculum [LSC], was identified as the key factor in the continued success of community diabetes management. In addition, the local community leaders who formed the community advisory board committees for community diabetes management were identified as important factors for the continuity and sustainability of this project. Also, to increase the sense of community and community power, most community activities encouraged participation from the whole community, and the LSVs and local organizations in particular increased their concern for community health issues. These diabetes activities fulfilled needs through community membership by providing community-wide group workshops that empowered community responsibility. After the community activities for diabetes management were implemented, the community participants together generated a community declaration for diabetes management. This declaration provided the commitment of three influential organizations related to community diabetes management: the local municipality, the Public Health Center, and the LSV group (representatives of all significant persons relevant to diabetes management in the community).
A further community outcome of this implementation, called 'The Innovative Health Promoting School', was developed to generate appropriate and creative community activities for diabetes and other issues. Other significant factors for the future sustainability of the CDM would be continually increased resource support from the local municipality and continued stimulation of innovative thinking among community participants. Also, integrating diabetes management into local traditions was shown to be an effective diabetes intervention for community citizens. The group of persons with diabetes thought it would be preferable to add more persons with diabetes, and persons at risk of diabetes, to this project. Effective community resources based on the sufficiency economy should be promoted among community members, together with all local government organizations, to continue diabetes management activities. Finally, community participants also thought that promoting the project more widely to the whole community and local organizations, as well as extending it to other health issues, would help improve the project and make it sustainable. After the cooperative evaluation for sustainable development, the researcher and LSVs presented the successful project to the local government organization, proposing the outcomes and impacts of this project and referring activities onward so that sustainable local community activities for diabetes management could be developed. All activities, from the beginning of the project's development to the final activities established by all community members together, were presented by the researcher and LSVs. Before sending the referral information, the researcher and LSVs created a yearly calendar of diabetes management activities for monitoring all diabetes-related performance in this community. As a result of this study, the community stakeholders gained more knowledge, information, perceptions, attitudes, beliefs, skills, experiences, practices, confidence, and resources. Also, the CDM could encourage local organizations and community residents to collaborate in conducting activities for managing diabetes in the community setting. This community was open to all community members affected by diabetes management, including persons with diabetes, their family caregivers, local healthcare providers, and other interested people who needed to undertake diabetes management activities through community collaboration. The collaboration between local organizations and community citizens provided a means for all community stakeholders to participate fully in the research process. This collaboration was an important foundation for a successful diabetes management intervention, which included practical activities for effective diabetes management and achieving a healthy community. Additionally, sustainability of the activities is the ultimate goal of any community health project. Dean and Doherty & Mendenhall [13,14] indicated that social action focused on policy development can support collaboration and help sustain it. Moreover, short-term success contributes to the long-term effectiveness of local activities. The long-term sustainability of this CDM should be pursued in the future through re-implementation of the whole development process, contributing to more effective collaborative management.
Participatory monitoring between the researcher and the LSVs, and between the LSVs and their community members, was set up for all ongoing and completed community activities. This gave all community participants and other stakeholders continuous feedback on implementation and identified actual or potential successes and problems as early as possible, facilitating timely adjustments to community diabetes management operations. Continuous evaluation was a periodic assessment of the performance, efficiency, effectiveness, impact, and sustainability of CDM development, covering both the implementation period and the terminal evaluation. Continuous monitoring and evaluation together also provided a basis for accountability in the use of community resources, given the greater transparency expected of local community members and organizations, which is needed for success at the grassroots level. All activities of CDM development, monitoring, and evaluation can help strengthen development design and implementation and stimulate partnership with community stakeholders and members. Similarly, the study of Laverack & Wallerstein found that participatory monitoring and evaluation in primary health care empower communities and health workers to make informed decisions on interventions and performance, and promote collaboration, transparency, accountability, and sustainability in community development. Therefore, the development of collaborative diabetes management in the community was carried out cooperatively by local government organizations, community participants, and the researcher. The various community activities covered community diabetes awareness and understanding, community diabetes support, and community diabetes involvement, leading ultimately to the community commitment for sustainable diabetes management. The collaboration in community group activities empowered all community members and the local organization, and this supported the management of diabetes through the development of local diabetes-related policies between community members and the local municipality. Sustainable achievement in diabetes management came through the creation of continuous monitoring and active evaluation, the construction of local community policy, and the building of a Memorandum of Understanding [MOU] to generate the community declaration of diabetes management in the community. The significant factor driving the success of collaborative diabetes management in the community setting was the effective LSVs. The encouraged LSVs, community leaders, and all community participants produced more creative ideas to keep the activities from becoming boring. Some people felt unable, had low self-confidence, or lacked the essential skills and knowledge to deal with or run the project. After several rounds of training, joining together, and learning by doing through repeated community collaborative workshops, they came to feel fully empowered. Importantly, attracting more interested community residents and other participants should be kept in mind at every step; disseminating the findings to everyone who participated, and reflecting the findings back to the whole community, encouraged more motivation among other interested residents to join. For community collaboration, the researcher should always be conscious that the community participants are the project mobilizers and that the researcher takes the role of facilitator and trigger in the project.
The researcher should reassure and affirm the community participants’ competence and worth. The researcher needed to be a good listener, supportive and encouraging, trustworthy, flexible, and respectful; to be punctual, keep promises, and be patient; to stay committed; and, importantly, to let contributions and activities be run by the community committees, to honor members’ pride in their contribution to the project, and to reward and motivate them to keep the project running. Importantly, the researchers should immerse themselves as ordinary community members, joining all local community activities, both formal and informal, to gain a deep understanding of the community context. This study was financially supported by the Office of the Higher Education Commission [OHEC], Ministry of Education of Thailand, without which the present study could not have been completed.
https://www.peertechzpublications.com/articles/GJMCCR-3-130.php
Barely three weeks into office, Nairobi Metropolitan Services (NMS) Director-General Mohamed Badi is facing a battle that could make or break his nascent administration. A titanic battle is brewing between the military man and Governor Mike Sonko and the workers’ union. The standoff is about the deployment of county staff to NMS. On Friday, the Public Service Commission ordered the deployment of 6,052 county workers, who are to report to the new office this week. Drawn from the transferred functions, the staff will now be under PSC. TOP OFFICIALS Moreover, the Head of Public Service, Joseph Kinyua, last month seconded 32 senior officers (chief officers, directors and secretaries) to NMS, a move that left county ministers and other top officials in limbo. These decisions seem to have angered Mr Sonko and union officials, who have vowed to take the war to the doorsteps of the Air Force major-general. The beleaguered City Hall boss has fired the first salvo by calling on the county staff to ignore the letter and threats by the PSC. He said the county’s board will advise on the procedure of the secondment, “in a manner that shall address all their concerns as raised by the Nairobi City County Government Workers Union (NCGWU)”. “All Nairobi County government employees are hereby directed to ignore any further communication regarding the transfer of functions from any quarters whatsoever, whenever that communication is not in writing and copied to the Governor and the County Secretary,” said Mr Sonko. COUNTY EMPLOYEES He said NMS and PSC have no jurisdiction over county employees. The governor said the agreement did not amount to a takeover or dissolution of the county government. “A few overzealous individuals in the national government have chosen to break all known laws and the provisions of the Deed of Transfer, to pursue their narrow and myopic interests at the expense of the goodwill and good intentions of the agreement,” said Mr Sonko. “Evidently, there are a few individuals who have chosen to hijack the noble mission that the President and I embarked on to move Nairobi forward, but I wish to assure them that their sinister motives shall not succeed and will never happen under my watch,” said the Governor. Mr Sonko said the deal was clear on the channel of communication between the levels of government, which he claimed had been disregarded by individuals from the national government. MEETINGS They have continued to summon, call for meetings with and issue directives to county employees through phone calls, SMS, WhatsApp and other unorthodox channels, while deliberately excluding him, he added. Union officials echoed Mr Sonko’s concerns, saying the secondment of employees is the responsibility of the employer, not PSC. They also raised concerns about the speed at which the transfer of functions is being implemented without compliance with the law. “The purpose of this letter is to demand, which we hereby do, that the said exercise be forthwith held in abeyance/suspended until the issues we raised in our previous letter are fully addressed and proper precautionary measures are taken to avert them from contracting coronavirus,” stated the union. However, the county assembly seems to be fully behind Maj-Gen Badi. On Thursday, it approved the transfer of more than Sh15 billion to NMS towards the delivery of transferred functions falling under the new office under the Supplementary Appropriations Bill, 2020.
The MCAs agreed that going forward, payment of salaries for county staff will be undertaken by PSC for the next three months to enable the “clean up and reorganisation” of the payroll in line with the transfer of functions.
Data aren't always entered in the sequence you need. Find out how to reorder the results of your query.
- [Instructor] When we get information back from a query, it's usually in an order that isn't always helpful. So to sort data, we can use the "order by" keyword. Let's set up a statement that will give us a lot of information back here. I'll write "select first name, last name, from people." There's my thousand rows and I can see that it's all over the place in terms of sorting. There's no pattern to the first name or to the last name. This is the order that the records are in the database. Sure, but that's not how I want to see them. I'll add an "order by" clause and I'll need to give it a field to sort on. I'll add "first name" for now. There we go. The first names are alphabetical. These are sorted in an order called ascending, which means that the values start small and get larger. And that's the default. I can also tell the database to give me ascending order by adding the keyword "ASC" in the "order by" clause. Or, I can switch up the order to descending, with larger values first, with the "DESC" keyword. These are a little bit confusing to me sometimes because when I think of something ascending, I think of it going up. But the larger values, what I would associate with letters later in the alphabet, end up below smaller values in the list. So, a helpful trick for me is to remember that in ascending, "A" will come closer to the top. And in descending, the letter "D" would come closer to the top than "A" would. We can add more fields after the first one as well if we want to add a secondary or tertiary sort order. For example, we could sort first by state and then within the state, sort by last name. I'll write a new query with "select state, last name, first name, from people, order by state, ascending, last name, ascending." I'll run that. And here I have the data listed alphabetically by state, and then, within that, for each state the data are sorted by the last name in alphabetical order. I'll change the last name sort order to descending here. And I'll run it again. And now, even though the states are in alphabetical order, last names are in reverse alphabetical order. Take some time to explore the data now that you're able to change the sort order of results.
Released 8/15/2017
- Name the predicate of the following statement: SELECT EyeColor, Age FROM Student WHERE FirstName = 'Tim' ORDER BY LastName ASC;
- Explain what to use to enforce the order in which an expression must be evaluated if the WHERE clause contains multiple expressions to evaluate.
- Identify the best option to join two tables in a database to be able to display data from both.
- List a data type that is not numeric.
- Determine the result of running the following statement on a table containing columns col_1 and col_2:
  INSERT INTO Box (col_1, col_2) VALUES ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B');
- Determine the best approach of deleting Jon Ramirez (ID 3452) from a Student table.
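The following is a minimal runnable sketch of the sorting behavior walked through in the transcript, using Python's built-in sqlite3 module. The table contents and column names are made up for illustration; they are not the course's actual exercise files.

```python
import sqlite3

# In-memory database with a tiny, made-up "people" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (first_name TEXT, last_name TEXT, state TEXT)")
conn.executemany(
    "INSERT INTO people VALUES (?, ?, ?)",
    [("Ada", "Wong", "CA"), ("Zoe", "Adams", "CA"), ("Mia", "Brown", "AK")],
)

# Default sort is ascending (ASC): smaller values, i.e. letters earlier
# in the alphabet, come first.
for row in conn.execute("SELECT first_name FROM people ORDER BY first_name"):
    print(row)

# Secondary sort: states ascending, last names descending within each state.
for row in conn.execute(
    "SELECT state, last_name, first_name FROM people "
    "ORDER BY state ASC, last_name DESC"
):
    print(row)
```

Running it prints the first names in ascending order, then the rows grouped by state with last names reversed within each state, mirroring the two queries described above.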
https://www.lynda.com/SQL-tutorials/Organize-responses-ORDER/548044/635462-4.html
The Foundation will build living villages for the benefit of the Indigenous community. The villages will archive, preserve and promote our Indigenous story (as opposed to being trapped in mischaracterized colonial history), language, traditions, and culture, and the education, health, and social and economic well-being of individuals and of the tribal community. The village will utilize the medicine wheel philosophy, which teaches that the four parts of each human being (physical, spiritual, emotional, and intellectual) are equally important and, when balanced, can create internal peace, harmony, and interconnectedness with the Creator. The village, based on cultural values and traditions, will seek to bring community and family members together, to empower them to find solutions for common problems and to implement those solutions, and to serve as a link between local solutions and mainstream services appropriate to the Indigenous communities. Visitors who seek to stay at the village will be offered this holistic approach to find their balance and learn trust and integrity: a path to achieving self-sufficiency and autonomy. The living / healing village staff will provide culturally responsive programs and education opportunities that benefit families and the community alike, including such activities as basket weaving, flint knapping, cooking, dancing, gardening, and learning Indigenous languages. With a firm foundation of community-based strategies, the villages will also promote balance through naturopathic remedies, a strategy that emphasizes the notion that leadership, self-governance, programs, and participants are all vital and necessary to balance one another. This organizational pattern will be used to develop and implement the programs that will serve the Indigenous communities, and to maintain a holistic perspective that is true to the nature of the Medicine Wheel. As a Foundation of community-based strategies firmly rooted in tradition and culture, we promote balance, naturopathic remedies, leadership and self-governance through our programs. We develop and implement programs that will serve Indigenous communities, and maintain a holistic perspective that is true to the nature of the Medicine Wheel.
https://lenapepath.org/living-villages-healing%E2%80%8B/?v=58e69a293e3d
In the early 1800s, Dr. Constantine Hering first used diseased animal tissue to make homeopathic remedies. He called them “nosodes” to distinguish them from traditional remedies made from herbs or minerals. Dr. Hering and his colleagues created a nosode remedy with infected sheep’s spleen. They used this homeopathic preparation to treat anthrax. Last reviewed September 2014 by EBSCO CAM Review Board. Last Updated: 9/18/2014.
http://healthlibrary.epnet.com/GetContent.aspx?token=83ee77b6-5d7c-451c-b269-7f0bab6eb1f5&chunkiid=38349
Constructs robust quality control charts based on the median or Hodges-Lehmann estimator (location) and the median absolute deviation (MAD) or Shamos estimator (scale). These estimators are all unbiased with a sample of finite size. For more details, see Park, Kim and Wang (2020) <doi:10.1080/03610918.2019.1699114>. In addition, using this package, conventional quality control charts such as the X-bar, S and R charts are also easily constructed. This work was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (NRF-2017R1A2B4004169).
Version: 1.20.7
Depends: R (≥ 3.2.3)
Published: 2020-07-05
Author: Chanseok Park [aut, cre], Min Wang [ctb]
Maintainer: Chanseok Park <statpnu at gmail.com>
BugReports: https://github.com/AppliedStat/R/issues
License: GPL-2 | GPL-3
URL: https://AppliedStat.GitHub.io/R/
NeedsCompilation: no
Citation: rQCC citation info
Materials: NEWS
CRAN checks: rQCC results
Reference manual: rQCC.pdf
Vignettes: Factors for constructing control limits; A note on the "rcc" function in the "rQCC" package
Package source: rQCC_1.20.7.tar.gz
Windows binaries: r-devel: rQCC_1.20.7.zip, r-release: rQCC_1.20.7.zip, r-oldrel: rQCC_1.20.7.zip
macOS binaries: r-release: rQCC_1.20.7.tgz, r-oldrel: rQCC_1.20.7.tgz
Old sources: rQCC archive
Please use the canonical form https://CRAN.R-project.org/package=rQCC to link to this page.
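As a rough illustration of the idea behind a robust chart, and only that (this is not the rQCC API, and it omits the finite-sample unbiasing factors that are the package's main contribution; the 1.4826 constant makes the MAD consistent for normal data only asymptotically), a median/MAD analogue of an X-bar chart can be sketched in Python:

```python
import numpy as np

def robust_xbar_limits(subgroups, L=3.0):
    """Median/MAD-based analogue of an X-bar chart (illustrative sketch only).

    subgroups: 2-D array, one row per rational subgroup of size n.
    Returns (center, lcl, ucl).
    """
    x = np.asarray(subgroups, dtype=float)
    n = x.shape[1]

    # Per-subgroup robust location: the median of each subgroup.
    med = np.median(x, axis=1)

    # Per-subgroup robust scale: MAD rescaled by 1.4826 so it estimates the
    # standard deviation under normality (asymptotically; the real package
    # applies additional finite-sample corrections, which are skipped here).
    mad = 1.4826 * np.median(np.abs(x - med[:, None]), axis=1)

    center = np.median(med)        # center line from the pooled medians
    sigma_hat = np.mean(mad)       # pooled scale estimate
    half_width = L * sigma_hat / np.sqrt(n)
    return center, center - half_width, center + half_width

# Example with simulated in-control data: 20 subgroups of size 5.
rng = np.random.default_rng(1)
data = rng.normal(loc=10.0, scale=2.0, size=(20, 5))
cl, lcl, ucl = robust_xbar_limits(data)
print(f"CL={cl:.2f}  LCL={lcl:.2f}  UCL={ucl:.2f}")
```

With the R package itself the chart would be built from its documented functions; the sketch above only shows why median- and MAD-based limits are far less sensitive to outlying observations than limits built from the mean and standard deviation.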
https://cran.r-project.org/web/packages/rQCC/index.html
Meet the 2022 CFAES Alumni Award winners! https://advancement.cfaes.ohio-state.edu/newsletter/cfaes-connect/december-2021/meet-2022-cfaes-alumni-award-winners considered for a future Alumni Award, nominations for 2023 are now open. Click here to nominate a deserving ...
- Winter Solstice Labyrinth Walk and Conifer Tour https://chadwickarboretum.osu.edu/events/winter-solstice-labyrinth-walk-and-conifer-tour labyrinth, hot cocoa, and roasted chestnuts! Note: While this event is free, parking in a nearby parking ...
- December 2021 CFAES Connect https://advancement.cfaes.ohio-state.edu/email/december-2021-cfaes-connect Copy: The Buckeyes are headed to Pasadena. Learn everything you need to know about in-person events ... Update your information ... Calendar of events ...
- COVID-19 Updates Order ends or December 1, 2020, whichever comes first. All in-person OSUE events are cancelled or ... triplicate from the recertification class or conference that you attended) is your temporary certification ... I considered an essential employee? Structural Pest Control is considered essential Food related pest control ...
- Cuyahoga County Extension Survey https://cuyahoga.osu.edu/news/cuyahoga-county-extension-survey Have you participated in one of our programs or events? Have you thought about it? We want your ... help deciding what trainings and events we offer in the future! Please complete this short survey by ...
- November 2021 CFAES Connect https://advancement.cfaes.ohio-state.edu/email/november-2021-cfaes-connect 1:30 p.m. on Friday, Nov. 19, 2021. The event will be held at Mershon Auditorium on the ... university's Columbus campus. A link to watch via livestream will be available closer to the event. ...
- Webinars https://woodlandstewards.osu.edu/resources/webinars events, climate change poses a risk to the maple syrup production community. These changes alter ... Marne's presentation slides. Timber Rattlesnake Ecology and Conservation in Ohio- November 2020 In this ... Natural Resources shares ongoing research on the endangered timber rattlesnake in Ohio. Several videos on ...
- Pop-Up Tree Sale: Arboretum Grown Tree Seedlings https://chadwickarboretum.osu.edu/events/pop-tree-sale-arboretum-grown-tree-seedlings Quercus, Oaks Thuja plicata, Arborvitae Trees will sell for $10.00, cash only! Find a printable version ...
- ASABE Student Branch https://fabe.osu.edu/undergraduate/clubs-and-organizations/asabe-student-branch and environmentally sensitive methods of producing food, fiber, timber, and renewable energy sources ... club events. President: Sharolyn Balbaugh ([email protected]) Vice President: Ethan Grum ...
- Youth Advocacy and Leadership Coalition https://cuyahoga.osu.edu/program-areas/4-h-youth-development/cuyahoga-county-4-h/youth-advocacy-and-leadership-coaltion roles RTA bus passes and gas cards to get to all events Food at every meeting YALC is now offering in ... member, and teens will get access to workshops on "adulting," STEAM topics, cooking classes and ...
https://woodlandstewards.osu.edu/search/site/classes%20events%20selling%20timber%20consider?page=4&f%5B0%5D=hash%3Afyk9kw&f%5B1%5D=hash%3Apo682h&f%5B2%5D=hash%3A2spxwe&f%5B3%5D=hash%3A9p8lli&f%5B4%5D=hash%3Aixlnj0&f%5B5%5D=hash%3A7c2frn&f%5B6%5D=hash%3A2lwttd&f%5B7%5D=hash%3A2rnydt&f%5B8%5D=hash%3Ad6o9bp
Europe and its protected species – Austria allows killing of the fish otter. Europe is proud of its tremendous efforts to protect its flora and fauna and to restore its habitats. It has defined in the FFH directive a series of species which are supposedly highly protected in the EU due to their low surviving numbers. This list includes, besides the wolf, the fish otter. The amazing fact is that while one country is using EU funds to reintroduce the fish otter, another EU country is killing them. Please also read: Study on impact of fish otters on fish population in Upper Austrian rivers. The UK Biodiversity Action Plan envisages the re-establishment of otters in all the UK rivers and coastal areas they inhabited in 1960. The Netherlands are reintroducing the otter along their rivers, and what does Austria do? It has just sanctioned the killing of 40 fish otters in Lower Austria. The argument for the killing of the otters is the decrease in fish population in local streams and rivers! In Carinthia, a similar step is being prepared, where otters are to be removed from a whole valley and relocated to the Netherlands for the purpose of a scientific monitoring programme. The real scandal in all of this is that the EU law protecting protected species is being used against protected species. One of the main arguments for the killing of the fish otter in Lower Austria is its threat to other protected species, while we all know that the real threat to the fish otter and any other wildlife population is the continued impact of humans on our nature. The case in Carinthia is even more disturbing. The authorities are removing the only otter population in order to monitor, in a scientific (and most likely financially supported) project, whether the removal of the otters will positively influence the fish population. Everyone already knows what the outcome will be: otters eat fish, so the fish population in the absence of the otter will increase. The result of this monitoring programme will therefore provide scientific arguments for further killing or relocation of otters because of their tendency to eat fish. The main argument against the fish otter here is that it eats too many fish, so the fishermen's association is threatening the province. So we kill a protected animal so that humans can continue to fish for fun. WWF is considering legal action to stop this, but as is so often the case, the local authorities do not release the documents justifying the killing – shame on anyone who believes this is an indicator of a lack of scientific reasoning behind this drastic step. PS: By the way, the so-called highly protected fish otters are to be caught and then killed in traps, and in the winter they can be shot by – TATATATA – the fishery association and fish farmer association.
https://wilderness-society.org/europe-protected-species-austria-allows-killing-protected-fish-otter/
Update October 2013
Our Mission: The Iowa Bicycle Coalition promotes safe and enjoyable bicycling in Iowa through education, events, better policy, and growing a community of supporters.
Our Vision: Bicycling in Iowa is safe, enjoyable and accessible for all.
Our Values: The following lists make up our values in four categories.
Cyclists have legal rights and responsibilities
- Bicyclists have a legal right to use the public roads of the state of Iowa.
- Motorists should pass bicycles in the same manner as they would pass other vehicles, by using the opposite lane when it is clear of oncoming traffic and not returning to the right side of the roadway until safely past the bicyclist.
- Increasing and supporting public awareness of the legal rights of bicyclists is important for safe and enjoyable cycling.
- Educational material on the rights and responsibilities of bicyclists should be included in driver education programs, the driver’s manual, and the driver’s license exam.
- Cyclists using the roadway are to follow the rules of the road.
- Experienced, knowledgeable cyclists should take the responsibility to educate other cyclists through formal instruction, personal interaction, and by setting a good example in the safe and efficient operation of their bicycles.
- Age-appropriate education for young cyclists in both basic skills and in learning the rules of the road should be encouraged.
- Cyclists of all ages should be encouraged, but not mandated, to wear a well-fitted bicycle helmet every time they ride.
- Trail users should be encouraged to adopt and follow the rules of the road, for example, keeping to the right except when passing.
- Cyclists should receive equitable legal treatment in cases of cycle/vehicle collision and in cases of the lawful use of the road.
- Motorists who threaten or attack cyclists are committing a prosecutable offense.
- Bicycling on unpaved trails where bicycle use is permitted is done responsibly.
- Sidewalk bicycling is dangerous for pedestrians and can lead to increased bicycle-motor vehicle collisions. People with the ability to understand and follow traffic principles should bicycle on the streets or appropriate bicycle facilities (and not on the sidewalks) for both pedestrian safety and their own safety.
- Bicyclists should have the right to choose their route based upon their own personal safety.
- The use of alcohol and drugs while bicycling could have severe, if not fatal, consequences for the bicyclist as well as others around them. Bicyclists should not ride while their abilities are impaired. Please ride responsibly.
- Bicycling on trails where bicycle use is permitted is done responsibly, and care is given when around pedestrians and non-cyclists (cyclists hate being ‘buzzed’ by cars, so pedestrians should not be ‘buzzed’ by bicyclists).
- Bicyclists should use headlights and taillights or rear reflectors at night as required by the Code of Iowa. Penalties for bicycling without lights should include repair or purchase of a headlight and taillight in lieu of a financial penalty.
Community Design
City and county governance need to incorporate services, planning, and regulations that encourage and facilitate the development of equitable, accessible, quality multi-modal transportation services and facilities.
- Communities should adopt complete streets policies that require the construction of safe bicycle facilities in public transportation roadways.
- Bicycle transportation networks should connect as a system just as automobile-focused transport systems connect.
- ‘Complete street design’ education and planning training for city officials and planners is conducted regularly to achieve high-quality transportation services and facilities.
- Road surfaces should be free of obvious hazards.
- Cities and towns incorporate the interests of cyclists in the design, maintenance, and policing of roads.
- Multi-user paths are well designed for safety and access.
- Zoning ordinances specify the equitable quantity, acceptable design, and strategic location of bicycle parking.
- Zoning ordinances require bicycle parking whenever automobile parking is required.
- Covered bicycle parking is provided where automobile parking is covered.
- Cities provide bicycle lock-up racks on public property near the entrances of businesses and in other locations likely to be frequented by bicyclists.
- Public garages provide bicycle racks, preferably within view of the lot attendant.
- Municipalities work proactively with area businesses to provide appropriate bicycle storage.
- Employers provide safe and convenient bicycle storage and showers and lockers for their employees who cycle to work.
- Bicycles belonging to employees or residents are allowed to be brought inside buildings and into elevators.
- Mass transit vehicles, including planes, buses, and trains, are modified to carry bicycles.
- Covered and secure bike parking is provided at public transit stations.
- The public is informed about the availability of multimodal transportation.
- Bike Share Systems are a good investment as public transportation.
Facility Design and Maintenance
Public roadways are transportation corridors that need to safely and efficiently accommodate multiple motorized and non-motorized modes of transportation.
- Generally accepted national standards and practices for cycling, for example the AASHTO and NACTO guidelines, should be used to design bicycle facilities.
- Pavement markings, such as sharrows and bike lane stripes, are commonly utilized.
- Bicycle warning signs are posted when there is insufficient room for side-by-side lane sharing or when there are narrow roads with significant traffic.
- Full user assessments of the benefits and tradeoffs of incremental increases in the paved width of heavy-traffic roads are conducted before design decisions are made.
- Roadways should be maintained free of sand, gravel, ridges, and holes.
- Annual street maintenance plans include resources for winter sand removal from streets, filling of longitudinal cracks, and addressing other unsafe road designs such as drain grates.
- Loop detectors at signalized intersections are adjusted to detect bicycles.
- Multi-use trails and greenway corridors incorporate the needs of non-bicyclists, complement the road network and:
 - accommodate all users, including pedestrians and in-line skaters
 - function as a low-speed roadway, not as sidewalks
 - have sufficient sight lines where paths cross roadways
 - ensure that barriers, if any, where paths cross roadways are sufficient to accommodate two directions of travel for tandems, recumbents, and bicycles with trailers
Public Policy can and should support safe and enjoyable cycling.
- Iowa should have a Vision Zero policy encouraging the reduction of roadway deaths, including bicyclists, to zero fatalities per year.
- Iowa’s transportation system should be designed to reduce bicycle crashes. HSIP funding should include funds to reduce bike crashes.
- Bike lanes, separated bike lanes, protected bike lanes, and/or buffered bike lanes are part of the roadway. Construction of these facilities should be included in roadway construction.
- Drivers of vehicles who open their doors into the paths of legally operating bicyclists without checking their mirrors or looking should be liable for damage or injuries from a crash.
- Hit-and-run penalties should be equal to drunk-driving penalties.
- It is important that local governments and the Iowa DOT have a viable reporting system so that bicyclists can notify them of problems.
- Recreational trails are a form of public, not private, development.
- The State Recreational Trails program deserves sufficient funding to continue current projects, develop future bicycle trails and assist with trail maintenance.
- Citizens, municipalities, and law enforcement should cooperate to increase public safety for all road users through improved enforcement and compliance with the legal rules of the road.
- Police officer training should include knowledge of the rules of the road with respect to bicycling.
- Bicyclists need to register their bicycles, place identifying information inside the bicycle, and make it a practice to lock their bicycles securely.
- Law enforcement officers have a role in assisting in recovering stolen bicycles and apprehending bicycle thieves.
- Mandatory sidepath laws are clearly against the Code of Iowa.
- Bicycle events on Iowa roads should not require permits.
https://iowabicyclecoalition.org/mission-vision-and-values/
Almost all U.S. mayors have some level of concern about climate change in their communities, but local leaders face some challenges in taking action. A new report from the Boston University Initiative on Cities indicates that most U.S. mayors are concerned about the impacts of climate change on their communities. As Michael Brady writes in Smart Cities Dive, 97 percent of mayors surveyed said climate change was a concern, while over half worry about drought, extreme heat, flooding, and air pollution. Notably, “There was no partisan gap among mayors.” According to the report, “Mayors said their regulatory powers, especially building codes and zoning, are their most effective tools to address climate change.” Cities are also focusing on replacing municipal fleets with electric vehicles in an effort to reduce emissions in the transportation sector. Per the survey, 74 percent “of mayors support replacing their city’s municipal vehicles before their natural lifecycle ends, which suggests a major opportunity to capitalize on new federal funds for things like electric school buses, fire trucks, and public works vehicles.” However, “Local climate action can be costly and complicated, and it has to compete with all of the other challenges mayors are facing.” Brady explains, “Major concerns for mayors include the current costs and environmental effects of energy supplies.” Some are also concerned about the political fallout of unpopular decisions, seeking solutions with the fewest hard tradeoffs for their constituents. FULL STORY: Nearly all US mayors worry about climate change’s local effects: report
https://www.planetizen.com/news/2023/01/121110-survey-mayors-concerned-about-direct-impacts-climate-change
Case law also states that when a judge acts as a trespasser of the law, when a judge does not follow the law, he then loses subject matter jurisdiction and the judge's orders are void, of no legal force or effect. Although judges should be independent, they must comply with the law and should comply with this Code. … They should be applied consistently with constitutional requirements, statutes, other court rules and decisional law, and in the context of all relevant circumstances. You may file a complaint about a federal judge who you have reason to believe has committed misconduct or has a disability that interferes with the performance of their judicial duties. Supreme Court: The Supreme Court holds the power to overturn laws and executive actions it deems unlawful or unconstitutional. The Supreme Court cannot directly enforce its rulings, but it relies on respect for the Constitution and for the law for adherence to its judgments. Judges may be impeached by majority vote of the legislature and removed with the concurrence of two thirds of the members of the court of impeachment. The supreme court sits as the court of impeachment, unless a supreme court justice has been impeached. Perjury. Perjury is the criminal act of lying or making statements to misrepresent something while under oath. Lying under oath disrupts the judicial process and is taken very seriously. Being convicted of perjury can result in serious consequences, including probation and fines. The answer is yes, he could. It doesn’t mean it’s the right decision, but since the judge controls everything that happens in the courtroom, he controls what comes into evidence. If the judge makes the wrong decision and I ultimately lose the case, I can appeal on that precise issue. Judicial corruption includes the misuse of judicial funds and power, such as when a judge hires family members to staff the court or manipulates contracts for court construction and equipment. You can’t write to the judge. You can hire your own attorney to make your case to the court. The courts apply the law, settle disputes and punish law-breakers according to the law. Our judicial system is a key aspect of our democratic way of life. … A court’s ability to deliver justice depends on its power to enforce its rulings. Only a court of appeal can overturn the ruling of a lower court. “The rule of per incuriam can be applied where a court omits to consider a binding precedent of the same court or the superior court rendered on the same issue or where a court omits to consider any statute while deciding that issue.” 142. In a Constitution Bench judgment of this Court in Union of India v. Internal accountability to “the judiciary” In the sense that their decisions are subject to appeal and other judges are responsible for the allocation of cases to them, individual judges are accountable to senior judges or judges holding positions of responsibility. In the United States the constitution provides that federal judges hold office during good behaviour and may be removed by means of impeachment by the House of Representatives and trial and conviction by the Senate, the stated grounds of removal being “Treason, Bribery or other high Crimes and Misdemeanours”. Yes. Particularly, concludes Jack Fernandez, the author of “An Essay Concerning the Indictment of Lawyers for their Legal Advice,” when the legal advice is not only specious but involves a strong element of self-dealing. A narcissist is arrogant.
They look down on other people and require constant or excessive admiration. They are jealous of people they perceive to have more authority, wealth, or talent than they possess. … A judge can see firsthand the combative, abusive, and controlling nature of the narcissistic parent. Anything the witness said or wrote themselves, including text messages, social media posts, and voicemails, is generally admissible in family court. If they said something in such a message that directly contradicts what they said on the stand, you can use that evidence to prove that they’re lying. “You’re wrong (or words to that effect)” Never, ever tell a judge that he or she is wrong or mistaken. Instead, respectfully tell the judge WHY he or she may be wrong or mistaken. The court decisions they make can have a lasting impact on the direction of our country. As a co-equal branch of government, the judiciary must remain impartial and non-political in order to do its job. The judges that President Trump has nominated, and the Senate has confirmed, understand this. Judges in the United States are immune from suit for any “judicial act” that they perform. This immunity applies even when the judge acts maliciously or corruptly. There is no set schedule. Some hearing offices say it will take approximately six weeks to receive a decision; some judges tell claimants they try to have the decision out in 30 days. A judge is elevated to the bench either by election or by appointment by the Governor. A judge must also be a licensed attorney to be eligible to serve on the bench. A commissioner, on the other hand, is an individual who is hired by the court to help out with a judge’s case load. Right to appeal or request a new trial. When your constitutional rights are breached during the criminal justice process, and the breach contributes to a guilty conviction, you can pursue an appeal based on an error in the criminal procedure or jury misconduct, or file a motion for a new trial. According to California Penal Code 92, the bribery of any judicial officer is illegal. To be sure, there are times that letters (written in consultation with an attorney) can be useful, such as at the time of sentencing. However, when a person is awaiting trial, writing a letter to the judge will not help. At best, the letter will go unread by the judge, and will be of no help. Most courts will accept copies of electronically delivered letters, but be sure to check with the attorney first. Remember that judges read hundreds of letters. The easier you make it for the judge to read, the more likely the judge will be able to focus on the message you are trying to convey. Normally in very hard cases the judges mention that the law has been created or changed, but the law cannot be reformulated according to the wish of the court. … So the judges do make laws, but it is almost heresy to say so. Hence, judges have been upholding, declaring and making law. Judges do not make law because the existing law provides all the resources for their decisions. A judge does not decide a case in a legal vacuum but on the basis of existing rules, which express, and, at the same time, are informed by, underlying legal principles. In this context, it is relevant to note that Article 226A, inserted in the Constitution by the 42nd amendment, provided that a High Court cannot consider the constitutional validity of Central legislation. If no past cases with similar circumstances exist, a new decision is made, which would then become a precedent for a future similar case.
If no statute law applies to cover a particular situation, common law will apply; however, statute law always overrides common law.
https://daitips.com/what-if-a-judge-ignores-the-law/