An alphabet is a standardized set of letters, each of which roughly represents a phoneme of a spoken language, in contrast to other writing systems such as syllabaries and logographies. A true alphabet has letters for the vowels of a language as well as the consonants. The first "true alphabet" in this sense is believed to be the Greek alphabet, which is a modified form of the Phoenician alphabet. In other types of alphabet either the vowels are not indicated at all, as was the case in the Phoenician alphabet (such systems are known as abjads), or else the vowels are shown by diacritics or modification of consonants, as in the Devanagari used in India and Nepal (these systems are known as abugidas or alphasyllabaries).

There are dozens of alphabets in use today, the most popular being the Latin alphabet (which was derived from the Greek). Many languages use modified forms of the Latin alphabet, with additional letters formed using diacritical marks. While most alphabets have letters composed of lines (linear writing), there are also exceptions such as the alphabets used in Braille and Morse code.

Alphabets are usually associated with a standard ordering of their letters. This makes them useful for purposes of collation, specifically by allowing words to be sorted in alphabetical order. It also means that their letters can be used as an alternative method of "numbering" ordered items, in such contexts as numbered lists.

Etymology

The English word alphabet came into Middle English from the Late Latin word alphabetum, which in turn originated in the Greek ἀλφάβητος (alphabētos), from alpha and beta, the first two letters of the Greek alphabet. Alpha and beta in turn came from the first two letters of the Phoenician alphabet, and originally meant ox and house respectively.

History

The history of alphabetic writing goes back to the consonantal writing system used for Semitic languages in the Levant in the second millennium B.C.E. Most or nearly all alphabetic scripts used throughout the world today ultimately go back to this Semitic proto-alphabet. Its first origins can be traced back to a Proto-Sinaitic script developed in Ancient Egypt to represent the language of Semitic-speaking workers in Egypt. This script was partly influenced by the older Egyptian hieratic, a cursive script related to Egyptian hieroglyphs.

Although the following description presents the evolution of scripts in a linear fashion, this is a simplification. For example, the Manchu alphabet, descended from the abjads of West Asia, was also influenced by Korean hangul, which was either independent (the traditional view) or derived from the abugidas of South Asia. Georgian apparently derives from the Aramaic family, but was strongly influenced in its conception by Greek. The Greek alphabet, itself ultimately a derivative of hieroglyphs through that first Semitic alphabet, later adopted an additional half dozen demotic hieroglyphs when it was used to write Coptic Egyptian.

The Beginnings in Egypt

By 2700 B.C.E. the ancient Egyptians had developed a set of some 22 hieroglyphs to represent the individual consonants of their language, plus a 23rd that seems to have represented word-initial or word-final vowels. These glyphs were used as pronunciation guides for logograms, to write grammatical inflections, and, later, to transcribe loan words and foreign names. However, although alphabetic in nature, the system was not used for purely alphabetic writing. That is, while capable of being used as an alphabet, it was in fact always used with a strong logographic component, presumably due to strong cultural attachment to the complex Egyptian script.
The Middle Bronze Age scripts of Egypt have yet to be deciphered. However, they appear to be at least partially, and perhaps completely, alphabetic. The oldest examples are found as graffiti from central Egypt and date to around 1800 B.C.E. These inscriptions, according to Gordon J. Hamilton, help to show that the most likely place for the alphabet's invention was in Egypt proper.

The first purely alphabetic script is thought to have been developed by 2000 B.C.E. for Semitic workers in central Egypt. Over the next five centuries it spread north, and all subsequent alphabets around the world have either descended from it, or been inspired by one of its descendants, with the possible exception of the Meroitic alphabet, a third century B.C.E. adaptation of hieroglyphs in Nubia to the south of Egypt.

Middle Eastern scripts

The apparently "alphabetic" system known as the Proto-Sinaitic script appears in Egyptian turquoise mines in the Sinai peninsula dated to the fifteenth century B.C.E., apparently left by Canaanite workers. An even earlier version of this first alphabet was discovered at Wadi el-Hol and dated to circa 1800 B.C.E. This alphabet showed evidence of having been adapted from specific forms of Egyptian hieroglyphs dated to circa 2000 B.C.E., suggesting that the first alphabet had been developed around that time. Based on letter appearances and names, it is believed to be based on Egyptian hieroglyphs. This script had no characters representing vowels.

An alphabetic cuneiform script with 30 signs, including three which indicate the following vowel, was invented in Ugarit before the fifteenth century B.C.E. This script was not used after the destruction of Ugarit.

The Proto-Sinaitic script did not restrict itself to the existing Egyptian consonantal signs, but incorporated a number of other Egyptian hieroglyphs, for a total of perhaps thirty, and used Semitic names for them. However, by the time the script was inherited by the Canaanites, it was purely alphabetic. For example, the hieroglyph originally representing "house" stood only for b.

The Proto-Sinaitic script eventually developed into the Phoenician alphabet, which is conventionally called "Proto-Canaanite" before 1050 B.C.E. The oldest text in Phoenician script is an inscription on the sarcophagus of King Ahiram. This script is the parent script of all western alphabets. By the tenth century two other forms can be distinguished, namely Canaanite and Aramaic, which then gave rise to Hebrew. The South Arabian alphabet, a sister script to the Phoenician alphabet, is the script from which the Ge'ez alphabet (an abugida) is descended.

The Proto-Sinaitic or Proto-Canaanite script and the Ugaritic script were the first scripts with a limited number of signs, in contrast to the other widely used writing systems at the time: cuneiform, Egyptian hieroglyphs, and Linear B. The Phoenician script was probably the first phonemic script and it contained only about two dozen distinct letters, making it a script simple enough for common traders to learn. Another advantage of Phoenician was that it could be used to write down many different languages, since it recorded words phonemically.

The script was spread by the Phoenicians across the Mediterranean. In Greece, it was modified to add the vowels, giving rise to the ancestor of all alphabets in the West. The Greeks took letters which did not represent sounds that existed in Greek, and changed them to represent the vowels.
The syllabic Linear B script, which was used by the Mycenaean Greeks from the sixteenth century B.C.E., had 87 symbols including 5 vowels. In its early years, there were many variants of the Greek alphabet, a situation which caused many different alphabets to evolve from it.

Descendants of the Aramaic abjad

The Phoenician and Aramaic alphabets, like their Egyptian prototype, represented only consonants, a system called an abjad. The Aramaic alphabet, which evolved from the Phoenician in the seventh century B.C.E. as the official script of the Persian Empire, appears to be the ancestor of nearly all the modern alphabets of Asia:

- The modern Hebrew alphabet started out as a local variant of Imperial Aramaic. (The original Hebrew alphabet has been retained by the Samaritans.)
- The Arabic alphabet descended from Aramaic via the Nabatean alphabet of what is now southern Jordan.
- The Syriac alphabet used after the third century C.E. evolved, through Pahlavi and Sogdian, into the alphabets of northern Asia, such as Orkhon (probably), Uyghur, Mongolian, and Manchu.
- The Georgian alphabet is of uncertain provenance, but appears to be part of the Persian-Aramaic (or perhaps the Greek) family.
- The Aramaic alphabet is also the most likely ancestor of the Brahmic alphabets of the Indian subcontinent, which spread to Tibet, Mongolia, Indochina, and the Malay archipelago along with the Hindu and Buddhist religions. (China and Japan, while absorbing Buddhism, were already literate and retained their logographic and syllabic scripts.)

A true alphabet has letters for the vowels of a language as well as the consonants. The first "true alphabet" in this sense is believed to be the Greek alphabet, which was modified from the Phoenician alphabet to include vowels.

The Greek alphabet was then carried over by Greek colonists to the Italian peninsula, where it gave rise to a variety of alphabets used to write the Italic languages. One of these became the Latin alphabet, which was spread across Europe as the Romans expanded their empire. Even after the fall of the Roman state, the alphabet survived in intellectual and religious works. It eventually became used for the descendant languages of Latin (the Romance languages) and then for most of the other languages of Europe.

Greek Alphabet

By at least the eighth century B.C.E. the Greeks had borrowed the Phoenician alphabet and adapted it to their own language. The letters of the Greek alphabet are the same as those of the Phoenician alphabet, and both alphabets are arranged in the same order. However, whereas separate letters for vowels would have actually hindered the legibility of Egyptian, Phoenician, or Hebrew, their absence was problematic for Greek, where vowels played a much more important role. The Greeks chose Phoenician letters representing sounds that did not exist in Greek to represent their vowels. For example, the Greeks had no glottal stop or h, so the Phoenician letters ’alep and he became Greek alpha and e (later renamed epsilon), and stood for the vowels /a/ and /e/ rather than the Phoenician consonants. This provided for five or six (depending on dialect) of the twelve Greek vowels, and so the Greeks eventually created digraphs and other modifications, such as ei, ou, and o (which became omega), or in some cases simply ignored the deficiency, as in long a, i, u.

Several varieties of the Greek alphabet developed. One, known as Western Greek or Chalcidian, was used west of Athens and in southern Italy.
The other variation, known as Eastern Greek, was used in present-day Turkey, and the Athenians, and eventually the rest of the world that spoke Greek, adopted this variation. After first writing right to left, the Greeks eventually chose to write from left to right, unlike the Phoenicians who wrote from right to left.

Latin Alphabet

A tribe known as the Latins, who became known as the Romans, also lived in the Italian peninsula like the Western Greeks. From the Etruscans, a tribe living in the first millennium B.C.E. in central Italy, and the Western Greeks, the Latins adopted writing in about the fifth century B.C.E. In adopting writing from these two groups, the Latins dropped four characters from the Western Greek alphabet. They also adapted the Etruscan letter F, pronounced 'w,' giving it the 'f' sound, and the Etruscan S, which had three zigzag lines, was curved to make the modern S. To represent the G sound in Greek and the K sound in Etruscan, the letter gamma was used. These changes produced the modern alphabet without the letters G, J, U, W, Y, and Z, as well as some other differences.

Over the few centuries after Alexander the Great conquered the Eastern Mediterranean and other areas in the fourth century B.C.E., the Romans began to borrow Greek words, so they had to adapt their alphabet again in order to write these words. From the Eastern Greek alphabet, they borrowed Y and Z, which were added to the end of the alphabet because the only time they were used was to write Greek words.

When the Anglo-Saxon language began to be written using Roman letters after Britain was invaded by the Normans in the eleventh century, further modifications were made: W was placed in the alphabet next to V. U developed when people began to use the rounded U when they meant the vowel u and the pointed V when they meant the consonant V. J began as a variation of I, in which a long tail was added to the final I when there were several in a row. People began to use the J for the consonant and the I for the vowel by the fifteenth century, and it was fully accepted in the mid-seventeenth century.

Some adaptations of the Latin alphabet are augmented with ligatures, such as æ in Old English and Icelandic and Ȣ in Algonquian; by borrowings from other alphabets, such as the thorn þ in Old English and Icelandic, which came from the Futhark runes; and by modifying existing letters, such as the eth ð of Old English and Icelandic, which is a modified d. Other alphabets use only a subset of the Latin alphabet, such as Hawaiian, and Italian, which uses the letters j, k, x, y, and w only in foreign words.

Other

Another notable script is Elder Futhark, which is believed to have evolved out of one of the Old Italic alphabets. Elder Futhark gave rise to a variety of alphabets known collectively as the Runic alphabets. The Runic alphabets were used for Germanic languages from 100 C.E. to the late Middle Ages. Their usage was mostly restricted to engravings on stone and jewelry, although inscriptions have also been found on bone and wood. These alphabets have since been replaced with the Latin alphabet, except for decorative usage for which the runes remained in use until the twentieth century.

The Old Hungarian script is a contemporary writing system of the Hungarians. It was in use during the entire history of Hungary, albeit not as an official writing system. From the nineteenth century it once again became more popular.
The Glagolitic alphabet was the initial script of the liturgical language Old Church Slavonic and became, together with the Greek uncial script, the basis of the Cyrillic script. Cyrillic is one of the most widely used modern alphabetic scripts, and is notable for its use in Slavic languages and also for other languages within the former Soviet Union. Cyrillic alphabets include the Serbian, Macedonian, Bulgarian, and Russian alphabets. The Glagolitic alphabet is believed to have been created by Saints Cyril and Methodius, while the Cyrillic alphabet was invented by the Bulgarian scholar Clement of Ohrid, who was their disciple. They feature many letters that appear to have been borrowed from or influenced by the Greek alphabet and the Hebrew alphabet.

Asian alphabets

Beyond the logographic Chinese writing, many phonetic scripts exist in Asia. The Arabic alphabet, Hebrew alphabet, Syriac alphabet, and other abjads of the Middle East are developments of the Aramaic alphabet, but because these writing systems are largely consonant-based they are often not considered true alphabets. Most alphabetic scripts of India and Eastern Asia are descended from the Brahmi script, which is often believed to be a descendant of Aramaic.

Zhuyin (sometimes called Bopomofo) is a semi-syllabary used to phonetically transcribe Mandarin Chinese in the Republic of China. After the later establishment of the People's Republic of China and its adoption of Hanyu Pinyin, the use of Zhuyin today is limited, but it is still widely used in Taiwan, where the Republic of China still governs. Zhuyin developed out of a form of Chinese shorthand based on Chinese characters in the early 1900s and has elements of both an alphabet and a syllabary. Like an alphabet, the phonemes of syllable initials are represented by individual symbols, but like a syllabary, the phonemes of the syllable finals are not; rather, each possible final (excluding the medial glide) is represented by its own symbol. For example, luan is represented as ㄌㄨㄢ (l-u-an), where the last symbol ㄢ represents the entire final -an. While Zhuyin is not used as a mainstream writing system, it is still often used in ways similar to a romanization system—that is, for aiding in pronunciation and as an input method for Chinese characters on computers and cellphones.

In Korea, the Hangul alphabet was created by Sejong the Great. Hangul is a unique alphabet: it is a featural alphabet, where many of the letters are designed from a sound's place of articulation (for example P to look like the widened mouth, L to look like the tongue pulled in); its design was planned by the government of the day; and it places individual letters in syllable clusters with equal dimensions (one syllable always takes up one type-space no matter how many letters get stacked into building that one sound-block).

European alphabets, especially Latin and Cyrillic, have been adapted for many languages of Asia. Arabic is also widely used, sometimes as an abjad (as with Urdu and Persian) and sometimes as a complete alphabet (as with Kurdish and Uyghur).

Types

The term "alphabet" is used by linguists and paleographers in both a wide and a narrow sense. In the wider sense, an alphabet is a script that is segmental at the phoneme level—that is, it has separate glyphs for individual sounds and not for larger units such as syllables or words. In the narrower sense, some scholars distinguish "true" alphabets from two other types of segmental script, abjads and abugidas.
These three differ from each other in the way they treat vowels: abjads have letters for consonants and leave most vowels unexpressed; abugidas are also consonant-based, but indicate vowels with diacritics attached to, or a systematic graphic modification of, the consonants. In alphabets in the narrow sense, on the other hand, consonants and vowels are written as independent letters.

The earliest known alphabet in the wider sense is the Wadi el-Hol script, believed to be an abjad, which through its successor Phoenician is the ancestor of modern alphabets, including Arabic, Greek, Latin (via the Old Italic alphabet), Cyrillic (via the Greek alphabet) and Hebrew (via Aramaic). Examples of present-day abjads are the Arabic and Hebrew scripts; true alphabets include Latin, Cyrillic, and Korean hangul; and abugidas are used to write Tigrinya, Amharic, Hindi, and Thai. The Canadian Aboriginal syllabics are also an abugida rather than a syllabary as their name would imply, since each glyph stands for a consonant which is modified by rotation to represent the following vowel. (In a true syllabary, each consonant-vowel combination would be represented by a separate glyph.)

All three types may be augmented with syllabic glyphs. Ugaritic, for example, is basically an abjad, but has syllabic letters for /ʔa, ʔi, ʔu/. (These are the only cases in which vowels are indicated.) Cyrillic is basically a true alphabet, but has syllabic letters for /ja, je, ju/ (я, е, ю); Coptic has a letter for /ti/. Devanagari is typically an abugida augmented with dedicated letters for initial vowels, though some traditions use अ as a zero consonant as the graphic base for such vowels.

The boundaries between the three types of segmental scripts are not always clear-cut. For example, Sorani Kurdish is written in the Arabic script, which is normally an abjad. However, in Kurdish, writing the vowels is mandatory, and full letters are used, so the script is a true alphabet. Other languages may use a Semitic abjad with mandatory vowel diacritics, effectively making them abugidas. On the other hand, the Phagspa script of the Mongol Empire was based closely on the Tibetan abugida, but all vowel marks were written after the preceding consonant rather than as diacritic marks. Although short a was not written, as in the Indic abugidas, one could argue that the linear arrangement made this a true alphabet. Conversely, the vowel marks of the Tigrinya abugida and the Amharic abugida (ironically, the original source of the term "abugida") have been so completely assimilated into their consonants that the modifications are no longer systematic and have to be learned as a syllabary rather than as a segmental script. Even more extreme, the Pahlavi abjad eventually became logographic. (See below.)

Thus the primary classification of alphabets reflects how they treat vowels. For tonal languages, further classification can be based on their treatment of tone, though names do not yet exist to distinguish the various types. Some alphabets disregard tone entirely, especially when it does not carry a heavy functional load, as in Somali and many other languages of Africa and the Americas. Such scripts are to tone what abjads are to vowels. Most commonly, tones are indicated with diacritics, the way vowels are treated in abugidas. This is the case for Vietnamese (a true alphabet) and Thai (an abugida). In Thai, tone is determined primarily by the choice of consonant, with diacritics for disambiguation.
In the Pollard script, an abugida, vowels are indicated by diacritics, but the placement of the diacritic relative to the consonant is modified to indicate the tone. More rarely, a script may have separate letters for tones, as is the case for Hmong and Zhuang. For most of these scripts, regardless of whether letters or diacritics are used, the most common tone is not marked, just as the most common vowel is not marked in Indic abugidas; in Zhuyin not only is one of the tones unmarked, but there is a diacritic to indicate lack of tone, like the virama of Indic.

The number of letters in an alphabet can be quite small. The Book Pahlavi script, an abjad, had only twelve letters at one point, and may have had even fewer later on. Today the Rotokas alphabet has only twelve letters. (The Hawaiian alphabet is sometimes claimed to be as small, but it actually consists of 18 letters, including the ʻokina and five long vowels.) While Rotokas has a small alphabet because it has few phonemes to represent (just eleven), Book Pahlavi was small because many letters had been conflated—that is, the graphic distinctions had been lost over time, and diacritics were not developed to compensate for this as they were in Arabic, another script that lost many of its distinct letter shapes. For example, a comma-shaped letter represented g, d, y, k, or j. However, such apparent simplifications can perversely make a script more complicated. In later Pahlavi papyri, up to half of the remaining graphic distinctions of these twelve letters were lost, and the script could no longer be read as a sequence of letters at all, but instead each word had to be learned as a whole—that is, they had become logograms as in Egyptian Demotic. The alphabet in the Polish language contains 32 letters.

The largest segmental script is probably an abugida, Devanagari. When written in Devanagari, Vedic Sanskrit has an alphabet of 53 letters, including the visarga mark for final aspiration and special letters for kš and jñ, though one of the letters is theoretical and not actually used. The Hindi alphabet must represent both Sanskrit and modern vocabulary, and so has been expanded to 58 with the khutma letters (letters with a dot added) to represent sounds from Persian and English. The largest known abjad is Sindhi, with 51 letters. The largest alphabets in the narrow sense include Kabardian and Abkhaz (for Cyrillic), with 58 and 56 letters, respectively, and Slovak (for the Latin script), with 46. However, these scripts either count di- and tri-graphs as separate letters, as Spanish did with ch and ll until recently, or use diacritics like Slovak č. The largest true alphabet where each letter is graphically independent is probably Georgian, with 41 letters.

Syllabaries typically contain 50 to 400 glyphs, and the glyphs of logographic systems typically number from the many hundreds into the thousands. Thus a simple count of the number of distinct symbols is an important clue to the nature of an unknown script.
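To make the counting heuristic concrete, here is a small, hypothetical Python sketch (not part of the original article). The thresholds are loose approximations of the figures quoted above, and real sign inventories overlap these ranges, so this is an illustration rather than a rule.

```python
# Hypothetical sketch of the symbol-count heuristic described above.
# Thresholds are rough approximations of the figures quoted in the text.
def guess_script_type(distinct_signs: int) -> str:
    """Guess what kind of script an unknown writing system might be from its sign count."""
    if distinct_signs <= 60:
        return "segmental (alphabet, abjad, or abugida)"
    if distinct_signs <= 400:
        return "probably a syllabary"
    return "probably logographic"

examples = [
    ("Rotokas", 12),           # small alphabet mentioned above
    ("Phoenician", 22),        # abjad of about two dozen letters
    ("Linear B", 87),          # syllabary mentioned earlier in the article
    ("(a logography)", 3000),  # hypothetical count in the "hundreds to thousands" range
]
for name, signs in examples:
    print(f"{name:15s} {signs:5d} signs -> {guess_script_type(signs)}")
```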
Names of letters

The Phoenician letter names, in which each letter was associated with a word that begins with that sound, continue to be used to varying degrees in Samaritan, Aramaic, Syriac, Hebrew, Greek and Arabic. The names were abandoned in Latin, which instead referred to the letters by adding a vowel (usually e) before or after the consonant (the exception is zeta, which was retained from Greek). In Cyrillic originally the letters were given names based on Slavic words; this was later abandoned as well in favor of a system similar to that used in Latin.

Orthography and pronunciation

When an alphabet is adopted or developed for use in representing a given language, an orthography generally comes into being, providing rules for the spelling of words in that language. In accordance with the principle on which alphabets are based, these rules will generally map letters of the alphabet to the phonemes (significant sounds) of the spoken language. In a perfectly phonemic orthography there would be a consistent one-to-one correspondence between the letters and the phonemes, so that a writer could predict the spelling of a word given its pronunciation, and a speaker could predict the pronunciation of a word given its spelling. However this ideal is not normally achieved in practice; some languages (such as Spanish and Finnish) come close to it, while others (such as English) deviate from it to a much larger degree.

Languages may fail to achieve a one-to-one correspondence between letters and sounds in any of several ways:

- A language may represent a given phoneme with a combination of letters rather than just a single letter. Two-letter combinations are called digraphs and three-letter groups are called trigraphs. German uses the tesseragraphs (four letters) "tsch" for the phoneme [tʃ] and "dsch" for [dʒ], although the latter is rare. Kabardian also uses a tesseragraph for one of its phonemes, namely "кхъу". Two-letter combinations representing one sound are widely used in Hungarian as well (where, for instance, cs stands for [č], sz for [s], zs for [ž], dzs for [ǰ], etc.); a sketch of this longest-match principle follows this list.
- A language may represent the same phoneme with two different letters or combinations of letters. An example is modern Greek, which may write the phoneme /i/ in six different ways: ⟨ι⟩, ⟨η⟩, ⟨υ⟩, ⟨ει⟩, ⟨οι⟩, and ⟨υι⟩ (although the last is rare).
- A language may spell some words with unpronounced letters that exist for historical or other reasons. For example, the spelling of the Thai word for "beer" [เบียร์] retains a letter for the final consonant "r" present in the English word it was borrowed from, but silences it.
- Pronunciation of individual words may change according to the presence of surrounding words in a sentence (sandhi).
- Different dialects of a language may use different phonemes for the same word.
- A language may use different sets of symbols or different rules for distinct sets of vocabulary items, such as the Japanese hiragana and katakana syllabaries, or the various rules in English for spelling words from Latin and Greek, or the original Germanic vocabulary.
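The longest-match principle behind such multigraphs can be illustrated with a short, hypothetical Python sketch (not part of the original article). The grapheme inventory below covers only the Hungarian combinations named above, and the example words are merely illustrative.

```python
# Minimal sketch of longest-match grapheme segmentation: multi-letter
# combinations such as Hungarian cs, sz, zs, dzs each count as one grapheme
# for one phoneme. The inventory is illustrative, not exhaustive.
MULTIGRAPHS = ["dzs", "cs", "sz", "zs"]  # longest first so "dzs" wins over "zs"

def segment(word: str) -> list[str]:
    """Split a word into graphemes, preferring the longest match at each position."""
    graphemes, i = [], 0
    while i < len(word):
        for m in MULTIGRAPHS:
            if word.startswith(m, i):
                graphemes.append(m)
                i += len(m)
                break
        else:
            graphemes.append(word[i])
            i += 1
    return graphemes

print(segment("dzsungel"))   # ['dzs', 'u', 'n', 'g', 'e', 'l']
print(segment("szabadság"))  # ['sz', 'a', 'b', 'a', 'd', 's', 'á', 'g']
```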
Some national languages like Finnish, Turkish, Serbo-Croatian (Serbian, Croatian and Bosnian), and Bulgarian have a very regular spelling system with a nearly one-to-one correspondence between letters and phonemes. Strictly speaking, these national languages lack a word corresponding to the verb "to spell" (meaning to split a word into its letters), the closest match being a verb meaning to split a word into its syllables. Similarly, the Italian verb corresponding to 'spell (out)', compitare, is unknown to many Italians because the act of spelling itself is rarely needed since Italian spelling is highly phonemic.

In standard Spanish, it is possible to tell the pronunciation of a word from its spelling, but not vice versa; this is because certain phonemes can be represented in more than one way, but a given letter is consistently pronounced. French, with its silent letters and its heavy use of nasal vowels and elision, may seem to lack much correspondence between spelling and pronunciation, but its rules on pronunciation, though complex, are consistent and predictable with a fair degree of accuracy.

At the other extreme are languages such as English, where the spelling of many words simply has to be memorized as they do not correspond to sounds in a consistent way. For English, this is partly because the Great Vowel Shift occurred after the orthography was established, and because English has acquired a large number of loanwords at different times, retaining their original spelling at varying levels. Even English has general, albeit complex, rules that predict pronunciation from spelling, and these rules are successful most of the time; rules to predict spelling from the pronunciation have a higher failure rate.

Sometimes countries have their written language undergo a spelling reform to realign the writing with the contemporary spoken language. These can range from simple spelling changes and word forms to switching the entire writing system itself, as when Turkey switched from the Arabic alphabet to a Turkish alphabet of Latin origin.

The sounds of speech of all languages of the world can be written by a rather small universal phonetic alphabet. A standard for this is the International Phonetic Alphabet.

Alphabetical order

Alphabets often come to be associated with a standard ordering of their letters, which can then be used for purposes of collation – namely for the listing of words and other items in what is called alphabetical order. Thus, the basic ordering of the Latin alphabet (ABCDEFGHIJKLMNOPQRSTUVWXYZ), for example, is well established, although languages using this alphabet have different conventions for their treatment of modified letters (such as the French é, à, and ô) and of certain combinations of letters (multigraphs). Some alphabets, such as Hanunoo, are learned one letter at a time, in no particular order, and are not used for collation where a definite order is required.

It is unknown if the earliest alphabets had a defined sequence. However, the order of the letters of the alphabet is attested from the fourteenth century B.C.E. Tablets discovered in Ugarit, located on Syria's northern coast, preserve the alphabet in two sequences. One, the ABGDE order later used in Phoenician, has continued with minor changes in Hebrew, Greek, Armenian, Gothic, Cyrillic, and Latin; the other, HMĦLQ, was used in southern Arabia and is preserved today in Ethiopic. Both orders have therefore been stable for at least 3000 years.

The Brahmic family of alphabets used in India abandoned the inherited order for one based on phonology: The letters are arranged according to how and where they are produced in the mouth. This organization is used in Southeast Asia, Tibet, Korean hangul, and even Japanese kana, which is not an alphabet. The historical order was also abandoned in Runic and Arabic, although Arabic retains the traditional "abjadi order" for numbering.
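Returning to the treatment of modified letters mentioned above, here is a minimal, hypothetical Python sketch (not drawn from the article) of one common collation convention: modified letters such as the French é, à, and ô are sorted as their base letters. Other languages using the Latin alphabet treat modified letters as separate letters with their own place in the order, so this shows only one of several conventions.

```python
# Minimal sketch of one collation convention for the Latin alphabet:
# strip diacritics so that é, à, ô sort alongside e, a, o.
import unicodedata

def collation_key(word: str) -> str:
    """Return a sort key with combining diacritical marks removed."""
    decomposed = unicodedata.normalize("NFD", word)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch)).lower()

words = ["école", "eau", "étage", "avant", "à", "ombre", "ôter"]
print(sorted(words, key=collation_key))
# ['à', 'avant', 'eau', 'école', 'étage', 'ombre', 'ôter']
```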
https://www.vyiru.com/2021/04/alphabet-everything-you-need-to-know.html
# Phoenician alphabet

The Phoenician alphabet is an alphabet (more specifically, an abjad) known in modern times from the Canaanite and Aramaic inscriptions found across the Mediterranean region. The name comes from the Phoenician civilization.

The Phoenician alphabet is also called the Early Linear script (in a Semitic context, not connected to Minoan writing systems), because it is an early development of the Proto- or Old Canaanite or Proto-Sinaitic script into a linear, purely alphabetic script, also marking the transfer from a multi-directional writing system, where a variety of writing directions occurred, to a regulated horizontal, right-to-left script. Its immediate predecessor, the Proto-Canaanite, Old Canaanite or Proto-Sinaitic script, used in the final stages of the Late Bronze Age, first in either Egypt or Canaan and then in the Syro-Hittite kingdoms, is the oldest fully matured alphabet, and it was derived from Egyptian hieroglyphs.

The Phoenician alphabet was used to write the Early Iron Age Canaanite languages, subcategorized by historians as Phoenician, Hebrew, Moabite, Ammonite and Edomite, as well as Old Aramaic. Its use in Phoenicia (coastal Levant) led to its wide dissemination outside of the Canaanite sphere, spread by Phoenician merchants across the Mediterranean world, where it was adopted and modified by many other cultures. It became one of the most widely used writing systems. The Phoenician alphabet proper remained in use in Ancient Carthage until the 2nd century BC (known as the Punic alphabet), while elsewhere it diversified into numerous national alphabets, including the Aramaic and Samaritan, several Anatolian scripts, and the early Greek alphabets. In the Near East, the Aramaic alphabet became especially successful, giving rise to the Jewish square script and Perso-Arabic scripts, among others.

"Phoenician proper" consists of 22 consonant letters (leaving vowel sounds implicit) – in other words, it is an abjad – although certain late varieties use matres lectionis for some vowels. As the letters were originally incised with a stylus, they are mostly angular and straight, although cursive versions steadily gained popularity, culminating in the Neo-Punic alphabet of Roman-era North Africa. Phoenician was usually written right to left, though some texts alternate directions (boustrophedon).

## History

### Origin

The earliest known alphabetic (or "proto-alphabetic") inscriptions are the so-called Proto-Sinaitic (or Proto-Canaanite) script sporadically attested in the Sinai and in Canaan in the late Middle and Late Bronze Age. The script was not widely used until the rise of Syro-Hittite states in the 13th and 12th centuries BC.

The Phoenician alphabet is a direct continuation of the "Proto-Canaanite" script of the Bronze Age collapse period. The inscriptions found on the Phoenician arrowheads at al-Khader near Bethlehem and dated to c. 1100 BCE offered the epigraphists the "missing link" between the two. The so-called Ahiram epitaph, whose dating is controversial, engraved on the sarcophagus of king Ahiram in Byblos, Lebanon, one of five known Byblian royal inscriptions, shows essentially the fully developed Phoenician script, although the name "Phoenician" is by convention given to inscriptions beginning in the mid-11th century BC. The German philologist Max Müller (1823-1900) believed that the Phoenician alphabet was derived from the Ancient South Arabian script during the 9th-century BC rule of the Minaeans over parts of the Eastern Mediterranean.
### Spread and adaptations

Beginning in the 9th century BC, adaptations of the Phoenician alphabet thrived, including Greek, Old Italic and Anatolian scripts. The alphabet's attractive innovation was its phonetic nature, in which one sound was represented by one symbol, which meant only a few dozen symbols to learn. The other scripts of the time, cuneiform and Egyptian hieroglyphs, employed many complex characters and required long professional training to achieve proficiency, which had restricted literacy to a small elite. Another reason for its success was the maritime trading culture of Phoenician merchants, which spread the alphabet into parts of North Africa and Southern Europe. Phoenician inscriptions have been found in archaeological sites at a number of former Phoenician cities and colonies around the Mediterranean, such as Byblos (in present-day Lebanon) and Carthage in North Africa. Later finds indicate earlier use in Egypt.

The alphabet had long-term effects on the social structures of the civilizations that came in contact with it. Its simplicity not only allowed its easy adaptation to multiple languages, but it also allowed the common people to learn how to write. This upset the long-standing status of literacy as an exclusive achievement of royal and religious elites, scribes who used their monopoly on information to control the common population. The appearance of Phoenician disintegrated many of these class divisions, although many Middle Eastern kingdoms, such as Assyria, Babylonia and Adiabene, would continue to use cuneiform for legal and liturgical matters well into the Common Era.

According to Herodotus, the Phoenician prince Cadmus was credited with the introduction of the Phoenician alphabet—phoinikeia grammata, "Phoenician letters"—to the Greeks, who adapted it to form their Greek alphabet. Herodotus claims that the Greeks did not know of the Phoenician alphabet before Cadmus. He estimates that Cadmus lived sixteen hundred years before his time (while the historical adoption of the alphabet by the Greeks was barely 350 years before Herodotus). The Phoenician alphabet was known to the Jewish sages of the Second Temple era, who called it the "Old Hebrew" (Paleo-Hebrew) script.

### Notable inscriptions

The conventional date of 1050 BC for the emergence of the Phoenician script was chosen because there is a gap in the epigraphic record; there are not actually any Phoenician inscriptions securely dated to the 11th century. The oldest inscriptions are dated to the 10th century.

- KAI 1: Ahiram sarcophagus, Byblos, c. 850 BC.
- KAI 14: Eshmunazar II sarcophagus, 5th century BC.
- KAI 15-16: Bodashtart inscriptions, 4th century BC.
- KAI 24: Kilamuwa Stela, 9th century BC.
- KAI 46: Nora Stone, c. 800 BC.
- KAI 47: Cippi of Melqart inscription, 2nd century BC.
- KAI 26: Karatepe bilingual, 8th century BC.
- KAI 277: Pyrgi Tablets, Phoenician-Etruscan bilingual, c. 500 BC.
- Çineköy inscription, Phoenician-Luwian bilingual, 8th century BC.

(Note: KAI = Kanaanäische und Aramäische Inschriften)

### Modern rediscovery

The Phoenician alphabet was deciphered in 1758 by Jean-Jacques Barthélemy, but its relation to the Phoenicians remained unknown until the 19th century. It was at first believed that the script was a direct variation of Egyptian hieroglyphs, which were deciphered by Champollion in the early 19th century. However, scholars could not find any link between the two writing systems, nor to hieratic or cuneiform.
The theories of independent creation ranged from the idea of a single individual conceiving it, to the Hyksos people forming it from corrupt Egyptian. It was eventually discovered that the Proto-Sinaitic alphabet was inspired by the model of hieroglyphs.

## Table of letters

The chart shows the graphical evolution of Phoenician letter forms into other alphabets. The sound values also changed significantly, both at the initial creation of new alphabets and from gradual pronunciation changes which did not immediately lead to spelling changes. The Phoenician letter forms shown are idealized: actual Phoenician writing is less uniform, with significant variations by era and region.

When alphabetic writing began, with the early Greek alphabet, the letter forms were similar but not identical to Phoenician, and vowels were added to the consonant-only Phoenician letters. There were also distinct variants of the writing system in different parts of Greece, primarily in how those Phoenician characters that did not have an exact match to Greek sounds were used. The Ionic variant evolved into the standard Greek alphabet, and the Cumae variant into the Italic alphabets (including the Latin alphabet). The Runic alphabet is derived from Italic, the Cyrillic alphabet from medieval Greek. The Hebrew, Syriac and Arabic scripts are derived from Aramaic (the latter as a medieval cursive variant of Nabataean). Ge'ez is from South Arabian.

## Letter names

Phoenician used a system of acrophony to name letters: a word was chosen for each initial consonant sound, and became the name of the letter for that sound. These names were not arbitrary: each Phoenician letter was based on an Egyptian hieroglyph representing an Egyptian word; this word was translated into Phoenician (or a closely related Semitic language), then the initial sound of the translated word became the letter's Phoenician value. For example, the second letter of the Phoenician alphabet was based on the Egyptian hieroglyph for "house" (a sketch of a house); the Semitic word for "house" was bet; hence the Phoenician letter was called bet and had the sound value b.

According to a 1904 theory by Theodor Nöldeke, some of the letter names were changed in Phoenician from the Proto-Canaanite script. This includes:

- gaml "throwing stick" to gimel "camel"
- digg "fish" to dalet "door"
- hll "jubilation" to he "window"
- ziqq "manacle" to zayin "weapon"
- naḥš "snake" to nun "fish"
- piʾt "corner" to pe "mouth"
- šimš "sun" to šin "tooth"

Yigael Yadin (1963) went to great lengths to prove that there was actual battle equipment similar to some of the original letter forms named for weapons (samek, zayin).

Later, the Greeks kept (approximately) the Phoenician names, albeit they didn't mean anything to them other than the letters themselves; on the other hand, the Latins (and presumably the Etruscans from whom they borrowed a variant of the Western Greek alphabet) and the Orthodox Slavs (at least when naming the Cyrillic letters, which came to them from the Greek by way of the Glagolitic) based their names purely on the letters' sounds.

## Numerals

The Phoenician numeral system consisted of separate symbols for 1, 10, 20, and 100. The sign for 1 was a simple vertical stroke (𐤖). Other numerals up to 9 were formed by adding the appropriate number of such strokes, arranged in groups of three. The symbol for 10 was a horizontal line or tack (𐤗). The sign for 20 (𐤘) could come in different glyph variants, one of them being a combination of two 10-tacks, approximately Z-shaped. Larger multiples of ten were formed by grouping the appropriate number of 20s and 10s. There existed several glyph variants for 100 (𐤙). The 100 symbol could be multiplied by a preceding numeral, e.g. the combination of "4" and "100" yielded 400. The system did not contain a numeral zero.
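The additive scheme just described can be made concrete with a small, hypothetical Python sketch (not part of the source article). It uses the Unicode signs shown above, emits the tokens most-significant first for readability even though genuine inscriptions run right to left, and ignores the attested glyph variants.

```python
# Toy sketch of the additive Phoenician numeral system described above.
# Ordering is most-significant-first for readability; real inscriptions
# run right to left, and attested glyph variants are ignored.
ONE, TEN, TWENTY, HUNDRED = "𐤖", "𐤗", "𐤘", "𐤙"

def phoenician_numeral(n: int) -> str:
    """Compose n (1-999) additively from hundreds, twenties, tens, and unit strokes."""
    assert 0 < n < 1000, "the system had no zero, and this sketch stops below 1000"
    parts = []
    hundreds, rest = divmod(n, 100)
    if hundreds:
        # the 100 sign is multiplied by a preceding numeral, e.g. "4" + "100" = 400
        parts.append((ONE * hundreds if hundreds > 1 else "") + HUNDRED)
    twenties, rest = divmod(rest, 20)
    parts.extend([TWENTY] * twenties)
    tens, units = divmod(rest, 10)
    parts.extend([TEN] * tens)
    if units:
        # unit strokes were arranged in groups of three; a space separates groups here
        strokes = ONE * units
        parts.append(" ".join(strokes[i:i + 3] for i in range(0, units, 3)))
    return " ".join(parts)

print(phoenician_numeral(143))  # one 100 sign, two 20 signs, three unit strokes
print(phoenician_numeral(400))  # four unit strokes preceding the 100 sign
```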
## Derived alphabets

The Phoenician alphabet has been remarkably prolific in terms of writing systems derived from it, as many of the writing systems in use today can ultimately trace their descent to it, and consequently to Egyptian hieroglyphs. The Latin, Cyrillic, Armenian and Georgian scripts are derived from the Greek alphabet, which evolved from Phoenician; the Aramaic alphabet, also descended from Phoenician, evolved into the Arabic and Hebrew scripts. It has also been theorised that the Brahmi and subsequent Brahmic scripts of the Indian cultural sphere also descended from Aramaic, effectively uniting most of the world's writing systems under one family, although the theory is disputed.

### Early Semitic scripts

The Paleo-Hebrew alphabet is a regional variant of the Phoenician alphabet, so called when used to write early Hebrew. The Samaritan alphabet is a development of Paleo-Hebrew, emerging in the 6th century BC. The South Arabian script may be derived from a stage of the Proto-Sinaitic script predating the mature development of the Phoenician alphabet proper. The Geʽez script developed from South Arabian.

### Samaritan alphabet

The Phoenician alphabet continued to be used by the Samaritans and developed into the Samaritan alphabet, which is an immediate continuation of the Phoenician script without intermediate non-Israelite evolutionary stages. The Samaritans have continued to use the script for writing both Hebrew and Aramaic texts until the present day. A comparison of the earliest Samaritan inscriptions and the medieval and modern Samaritan manuscripts clearly indicates that the Samaritan script is a static script which was used mainly as a book hand.

### Aramaic-derived

The Aramaic alphabet, used to write Aramaic, is an early descendant of Phoenician. Aramaic, being the lingua franca of the Middle East, was widely adopted. It later split off (due to political divisions) into a number of related alphabets, including Hebrew, Syriac, and Nabataean, the latter of which, in its cursive form, became an ancestor of the Arabic alphabet.

The Hebrew alphabet emerges in the Second Temple period, from around 300 BC, out of the Aramaic alphabet used in the Persian empire. There was, however, a revival of the Phoenician mode of writing later in the Second Temple period, with some instances from the Qumran Caves, such as the "Paleo-Hebrew Leviticus scroll" dated to the 2nd or 1st century BC. By the 5th century BCE, among Jews the Phoenician alphabet had been mostly replaced by the Aramaic alphabet as officially used in the Persian empire (which, like all alphabetical writing systems, was itself ultimately a descendant of the Proto-Canaanite script, though through intermediary non-Israelite stages of evolution). The "Jewish square-script" variant now known simply as the Hebrew alphabet evolved directly out of the Aramaic script by about the 3rd century BCE (although some letter shapes did not become standard until the 1st century CE).

The Kharosthi script is an Aramaic-derived alphasyllabary used in the Indo-Greek Kingdom in the 3rd century BC. The Syriac alphabet is the derived form of Aramaic used in the early Christian period. The Sogdian alphabet is derived from Syriac. It is in turn an ancestor of the Old Uyghur alphabet.
The Manichaean alphabet is a further derivation from Sogdian. The Arabic script is a medieval cursive variant of Nabataean, itself an offshoot of Aramaic.

### Brahmic scripts

It has been proposed, notably by Georg Bühler (1898), that the Brahmi script of India (and by extension the derived Indic alphabets) was ultimately derived from the Aramaic script, which would make Phoenician the ancestor of virtually every alphabetic writing system in use today, with the notable exception of written Korean (whose influence from the Brahmi-derived 'Phags-pa script has been theorized but acknowledged to be limited at best, and which cannot be said to have derived from 'Phags-pa in the way that 'Phags-pa derived from Tibetan and Tibetan from Brahmi). It is certain that the Aramaic-derived Kharosthi script was present in northern India by the 4th century BC, so that the Aramaic model of alphabetic writing would have been known in the region, but the link from Kharosthi to the slightly younger Brahmi is tenuous. Bühler's suggestion is still entertained in mainstream scholarship, but it has never been proven conclusively, and no definitive scholarly consensus exists.

### Greek-derived

The Greek alphabet is derived from the Phoenician. With a different phonology, the Greeks adapted the Phoenician script to represent their own sounds, including the vowels absent in Phoenician. It was possibly more important in Greek to write out vowel sounds: Phoenician being a Semitic language, words were based on consonantal roots that permitted extensive removal of vowels without loss of meaning, a feature absent in the Indo-European Greek. However, Akkadian cuneiform, which wrote a related Semitic language, did indicate vowels, which suggests the Phoenicians simply accepted the model of the Egyptians, who never wrote vowels.

In any case, the Greeks repurposed the Phoenician letters of consonant sounds not present in Greek; each such letter had its name shorn of its leading consonant, and the letter took the value of the now-leading vowel. For example, ʾāleph, which designated a glottal stop in Phoenician, was repurposed to represent the vowel /a/; he became /e/, ḥet became /eː/ (a long vowel), ʿayin became /o/ (because the pharyngeality altered the following vowel), while the two semi-consonants wau and yod became the corresponding high vowels, /u/ and /i/. (Some dialects of Greek, which did possess /h/ and /w/, continued to use the Phoenician letters for those consonants as well.)

The Alphabets of Asia Minor are generally assumed to be offshoots of archaic versions of the Greek alphabet. The Latin alphabet was derived from Old Italic (originally derived from a form of the Greek alphabet), used for Etruscan and other languages. The origin of the Runic alphabet is disputed: the main theories are that it evolved either from the Latin alphabet itself, some early Old Italic alphabet via the Alpine scripts, or the Greek alphabet. Despite this debate, the Runic alphabet is clearly derived from one or more scripts that ultimately trace their roots back to the Phoenician alphabet.

The Coptic alphabet is mostly based on the mature Greek alphabet of the Hellenistic period, with a few additional letters for sounds not in Greek at the time. Those additional letters are based on the Demotic script. The Cyrillic script was derived from the late (medieval) Greek alphabet. Some Cyrillic letters (generally for sounds not in medieval Greek) are based on Glagolitic forms.
### Paleohispanic scripts

These were an indigenous set of genetically related semisyllabaries, which suited the phonological characteristics of the Tartessian, Iberian and Celtiberian languages. They were deciphered in 1922 by Manuel Gómez-Moreno, but their content is almost impossible to understand because they are not related to any living languages. While Gómez-Moreno first pointed to a joined Phoenician-Greek origin, later authors consider that their genesis has no relation to Greek.

The most remote script of the group is the Tartessian or Southwest script, which could be one or several different scripts. The main bulk of Paleohispanic inscriptions use, by far, the Northeastern Iberian script, which serves to write Iberian on the Levantine coast north of Contestania and in the valley of the river Ebro (Hiber). The Iberian language is also recorded using two other scripts: the Southeastern Iberian script, which is more similar to the Southwest script than to Northeastern Iberian; and a variant of the Ionic Greek alphabet called the Greco-Iberian alphabet. Finally, the Celtiberian script registers the language of the Celtiberians with a script derived from Northeastern Iberian; an interesting feature is that it was used and developed in the times of the Roman conquest, in opposition to the Latin alphabet.

Among the distinctive features of Paleohispanic scripts are:

- Semi-syllabism: half of the signs represent syllables made of occlusive consonants (k, g, b, d, t) and the other half represent simple phonemes such as vowels (a, e, i, o, u) and continuous consonants (l, n, r, ŕ, s, ś).
- Duality: this appears on the earliest Iberian and Celtiberian inscriptions and refers to how the signs can serve a double use by being modified with an extra stroke; for example, ge with a stroke becomes ke. In later stages the scripts were simplified and duality vanishes from inscriptions.
- Redundancy: a feature that appears only in the script of the Southwest, in which vowels are repeated after each syllabic sign.
https://en.wikipedia.org/wiki/Phoenician_writing
It's pervasive. You see it everywhere—you're looking at it right now, in fact—which makes it hard to remember sometimes that someone somewhere invented the alphabet, that it's not a natural part of our being, not even as organic as counting to ten on your fingers. It's also wise to remember that transcribing spoken words is only one way to record thoughts. That is, writing does not have to be alphabetic—it can be pictographic, with symbols representing images, the way men's and women's bathrooms are identified with male and female symbols all across the globe. Symbols are just as valid a way of expressing a thought as any word written using an alphabet. After all, who said what you hear or speak is closer to what you think than what you see or draw? Certainly not Rembrandt!

And so there was a time before the invention of the alphabet, when other systems of writing prevailed in the West. This has always been true in China, for instance, where the writing system has never been tied explicitly to oral communication. In particular, ancient Mesopotamian peoples like the Sumerians and Babylonians wrote using ideograms, graphic symbols representing ideas or objects. Because of that, wherever cuneiform went, so did the supposition that ideograms were the way to write. But what happened to that mode of inscription? If early Western civilizations used a pictorial system to write, why don't we still today?

There's an easy answer to that question. Ideograms require an enormous investment of time, especially in the early stages of acquisition. It takes Chinese children, for instance, considerably more effort and usually much longer to learn how to write than their occidental counterparts trained in an alphabetic system. That's because Asian students must essentially start from scratch and master a whole new way of communicating, whereas Western students with their ABC's can depend somewhat on the spoken language they've already absorbed to help them read and write. "Somewhat" is key here, however, because alphabets are notoriously imprecise in recording the sounds actually articulated in speech. We'll return to that point at the end of this section.

For now, let's begin by surveying in brief what's known about how the alphabet evolved. Its original stimulus seems to have come from Egyptian hieroglyphics, as they spread through Semitic communities in the Sinai and the deserts south of Palestine. From there it moved north across the ancient Near East. The Phoenicians, a sea-faring empire based on trade, carried alphabetic writing west, especially into Greece where it's first evidenced around 850 BCE. In Greek hands, the alphabet underwent important transformations, particularly the inclusion of vowels for the first time. Because they had strong ties to Italy, the Greeks handed their version of the alphabet to the Etruscans, an early Italic people, through whom it later passed to the Romans. Both made notable changes to accommodate alphabetic writing to their particular tongues. The Roman ABC's subsequently formed the basis of both the Medieval and modern alphabets used in the West. Perhaps what's more important to note is that, for all it's seen, this type of writing has remained remarkably stable because, once set, an alphabet is hard to change.

II. Egyptian Hieroglyphics

The earliest predecessor of today's Western alphabet is evidenced only long after its invention, leaving its origin deep in the mists of historical speculation.
But since certain symbols found in ancient Egyptian scripts bear striking resemblance to some later alphabetic forms, scholars have hypothesized that the alphabet evolved out of hieroglyphics, at least in part. This insight stems from our understanding of the nature and evolution of ancient Egyptian writing, and for that we are in debt to the brilliant French linguist, Jean-François Champollion, who in 1822 took the first crucial steps toward deciphering hieroglyphics. His principal assumption, that they incorporated at least some phonetic symbols, signs based on sounds—that is, that Egyptian writing was not composed entirely of ideograms—broke important, new ground, allowing us not only to hear the Egyptians' stories and histories in their own terms but also to grasp the contribution they made to modern writing.

Even though Mesopotamian cuneiform predates any known Egyptian script by at least a century or so, the Egyptians, it seems, invented hieroglyphics independently. If, instead, they learned from Mesopotamians how to write and didn't come up with it all on their own, it can only have been written communication in its most rudimentary form, little more than the inspiration itself to write. That's because there's all but no apparent similarity between the cuneiform and hieroglyphic scripts. Far more important than any civilization's claim to originality, however, are the advancements the Egyptians engineered in the technology of writing. To understand this, it's necessary to delve briefly into the nature of hieroglyphics itself.

In describing a world as complex as theirs, the scribes of ancient Egypt sought ways to expand the possibilities their writing system afforded. So, instead of relying strictly or even primarily on ideographic signs, they explored ways of representing spoken words in a written form. To wit, they began writing down what they heard, not just what they saw. From this evolved a syllabic script which could be used to write virtually any word in their language based on its pronunciation. In other words, the Egyptians developed a series of signs representing the syllables they used in speech, symbols, for instance, which represented the letter b in combination with any vowel: ba, be, bi, bo, bu and so on. With these, they could approximate the sound of any word—or wo-ra-de as they might have written our word "word" back then—and to help people remember the values these sounds portrayed, many of them were invested with mnemonic qualities, meaning their shapes served as aids to the reader's memory of the consonants they signified. So, for instance, the sign for "r" in hieroglyphics looked like a mouth since r or r't is the Egyptian word for "mouth."

Still, having to phrase every word as some sort of wo-ra-de left things open to more than a little confusion. In other words, if you use syllabic signs, how can you tell that wo-ra-de means "word," not "ward" or "weird"? But instead of doing what seems so obvious to us now, that is, use vowels to distinguish "word" from "weird"—the assignment of vowel qualities to letters like a, e, i, o, and u came only much later, and from a source far outside Egypt—Egyptian scribes devised a different and remarkably ingenious solution to the problem. They came up with a complex system of determinatives, ideographic signs used in tandem with syllabic figures to represent a word. It's as if you wanted to write "pen" but had no vowels and could only put down symbols which represented the sounds "pa" and "ne."
Your reader might, then, interpret your pa-ne as not "pen," but "pin" or "pan" or "pane" or "pine" or "pun." So, to clarify which pa-ne you meant, you drew an ideograph which looked like a pen after the word to show that the pa-ne you meant was "pen." Such ideographic determinatives are found throughout hieroglyphics, and they are part of what made and still makes Egyptian writing a formidable challenge to read.

But that's also clearly part of the point of hieroglyphics. The scribal profession in Egypt was a highly selective and lucrative vocation, a monopoly of sorts in which scribes had a vested interest in maintaining a complex system which only they and their trained colleagues could decipher. Thus, it wasn't in the general interest of the literate community in ancient Egypt to simplify or popularize writing, and so, while the Egyptian scriptural tradition had in it all the elements necessary for the creation of an alphabet, no such revolution ever took place in all of ancient Egypt's long history. Those who could write didn't want anyone else to have an alphabet because it would have put them out of a job. So, because of their inherently cryptic nature, it took a linguistic genius on the order of Champollion to unravel the secrets of hieroglyphics for the modern age. And it took an even greater genius to see that using only the alphabetic symbols inherent in a scribal system like hieroglyphics could make writing a feature of daily life for everyone. That stroke of brilliance belongs to some person or persons whose identity has been lost amidst the ravaged historical records of the second millennium BCE.

"A is for Apple, B is for Boy, C is for Cat, . . ."

The alphabet, then, was not so much invented as isolated. That much is clear, even if the questions of when and where and by whom are not. Hints of an alphabetic script are found as far back as 1700 BCE in evidence left behind by miners in the turquoise quarries of the Sinai (the triangular peninsula between Egypt and the Holy Lands). Soon thereafter, other early alphabetic scripts begin to emerge from texts written in Palestine. So it seems the alphabet escaped Egypt, much like the Hebrews of the Exodus, fleeing east and north across the desert, and wandered like Moses for many years in the wilderness.

Of this alphabet's inventors we know nothing certain other than that they spoke a Semitic language, one related to Arabic and Hebrew, because the letters of this early alphabet conform well with the consonants prevalent in Semitic tongues (see Section 14). As such, it includes a number of gutturals, not the same sounds made in the back of the mouth as we saw in Indo-European languages (see Section 7) but rasping sounds made deep in the throat and found frequently in Hebrew, Arabic and their linguistic kin. In other words, the early alphabet was designed to suit a Semitic speaker's natural mode of talking.

It would be more accurate, however, to say alphabets—plural!—since the letters we ultimately ended up with don't represent the only attempt to craft alphabetic writing in the second millennium BCE. Clearly, the idea of finding a way to simplify and popularize writing was in the air at this time. At Ugarit, for example, a city in northern Syria and a rich cosmopolitan trade center, there evolved an alphabet based not on the letter shapes with which we are familiar but on cuneiform symbols, the type of writing popular in Mesopotamia at the time. Thus, this cuneiform alphabet is not a forerunner but an analog of the lettering system we use today.
That is, someone faced east and tried doing the same thing with Mesopotamian cuneiform that the inventors of our alphabet did looking southwest to Egypt and hieroglyphics. That this cuneiform alphabet didn't catch on in the long run is probably little more than a fluke of fate. All evidence, however, seems to indicate that the letters we use to write derive not from this cuneiform-based script but from the syllabic signs employed in Egyptian hieroglyphics.

Somehow these letter forms made their way north to Phoenicia (on the eastern shore of the Mediterranean Sea) where they flourished and began to spread widely, as evidenced by an explosion in alphabetic writing toward the end of the second millennium BCE in the lands around Palestine. There for the first time we see names given to the letters themselves: ‘aleph, beth, gimel, daleth, etc. These would later turn into the well-known register of Greek letters: alpha, beta, gamma, delta, ktl. (the Greek "etc."), from which comes our word alphabet, an abridged form of this list: "alpha-bet(a-gamma, . . .)."

Though nonsense to us, it's easy to see why these particular names were chosen in Phoenician. They signify the letters' values. ‘Aleph is the Phoenician word for "ox," beth means "house," gimel "camel," daleth "door," and so on. In other words, the Phoenician alphabet incorporates the same mnemonic device the Egyptians used, that each letter's shape depicts a common thing, the word for which begins with the sound that letter represents. But the Phoenicians went further than the Egyptians and named the very letter itself after that thing. When they then used only these letters in writing, that is, no ideographs or determinatives, a fully alphabetic script had at last been born.

And in much the same way we teach children the alphabet today by having them recite "A is for apple, B is for boy, C is for cat, . . .," the Phoenicians memorized their alphabet with similar mnemonics, except that their world was one of oxen and camels. In the shapes of the letters themselves it's also possible to see their figurative origin as illustrations designed to aid the memory. A is formed the way it is because it looks like an "ox"—turn it upside down and it has horns—while B looks like a "house," originally a rectangle divided in half as if it were an aerial drawing of a two-room home. The curve of C was originally a crude rendition of a camel's hump, and so on. All this was designed to help Phoenicians recall each letter's value, a pictographic reminder of the alphabet's sounds, making it much easier to deploy than the daunting variety of signs required in either cuneiform or hieroglyphic writing.

That clear advantage was, however, offset by the complexity entailed in alphabetic writing as it moved between languages. Many of those problems encountered in transmission stemmed from the wide variety of consonant sounds found in different tongues. For instance, few languages other than English utilize the interdental /th/. Putting your tongue between your teeth as they're closing is a thing most people instinctively avoid. So, exporting the alphabet from Phoenicia wasn't as easy an affair as simply handing it to foreigners and saying, "Here, use this to write with!" The ‘aleph-beth-gimel alphabet, so clearly tailored to Semitic linguistic structures and especially the Phoenician language, makes an excellent case in point. To wit, early alphabetic writing evidences well its innate regionalism in one of its more unusual qualities—unusual to Westerners, at least—its lack of vowels.
Phoenician, and Semitic languages in general—which include modern Hebrew and Arabic—freely alter the internal vowels in a word according to an established schema, thereby changing its function. That is, by inserting different vowels it's possible to change the way a word works in a sentence, in the same way we turn "write" into "wrote" to create a past-tense form in English. In Semitic languages, however, this system is far more complex and comprehensive, allowing vowel substitutions to make a verb into a noun. KTB, for instance, is the Semitic root for "write," rendering many words in Arabic: katib "writer," kitab "book," katab "wrote," and so on. To put it simply, consonants in Semitic languages tend to reflect root vocabulary, whereas vowels supply grammatical structures or clarify a word's function in a sentence. Because of this, the early Semitic inventors of the alphabet wrote only consonants, those being the principal agents of vocabulary in their language. This is why the name of God given to Moses, JHWH, is a string of consonants only, later rendered variously as Jehovah or Jahweh (see Section 11). Early Hebrews had no way to write vowels with their alphabet and, in fact, saw little need for them, because through their native understanding of the Hebrew language they could supply the vowels in words as they read.

And that's where the Greeks come in. That the Greeks inherited the alphabet from the Phoenicians is clear in several ways. First, the order of the letters in the Greek alphabet is basically the same as that in the Phoenician. Second, the Phoenician letter names were carried over into Greek with only minor change—alpha, beta, gamma, delta—even though to the Greeks these names were meaningless terms. Third, the ancient Greeks themselves attested to their alphabet's Phoenician heritage by calling it Phoinikeia grammata, "Phoenician letters," and claiming it was brought to Greece by the Phoenician-born hero Cadmus, a figure in Greek mythology.

The remarkable consistency between the Greek and Phoenician alphabets extends to much more than the names for letters, however. With a stability maintained for millennia, the alphabet underwent very few changes during its passage into Greece, such that even if a Phoenician letter represented a sound the Greeks didn't use, they retained the letter. That, however, opened the door to other developments.

One can see the problems—or opportunities—which the early alphabet presented the Greeks nowhere better than with ‘aleph, the first Phoenician letter. Before the Greeks recast it as their alpha, it represented a guttural consonant, something that sounds like gargling to us and has no counterpart in either English or Greek. Yet the Greeks not only kept ‘aleph in their alphabet but retained it in the first position, a remarkably conservative posture. But this conservatism also presented important opportunities for significant change, two in particular. First, when the Greeks felt they needed to add new letters, they put them at the end of the alphabet, even where it made more sense to put them next to related letters. That's because it's very difficult to take ABC and turn it into ABWXYZC. Too many parents and teachers have nursed too many young readers on ABC, those letters with those values in that order, to make such a change work.
Thus, the new letters the Greeks needed to add—and they had little choice but to put them in their alphabet, since without them they couldn't transcribe all the words of their language alphabetically—more or less had to go at the end of the alphabet. These were phi, chi, psi and omega, the last four letters of the Greek alphabet. This set a trend in alphabetic evolution that new comes last, explaining why our alphabet ends W, X, Y, Z. Every one of this final quartet is a later addition appended onto the alphabet.

Besides that, the Greeks introduced a second major innovation in alphabetic writing, the vowel. Because Indo-European languages didn't employ vowels as grammatical markers the way Semitic languages did, it wasn't possible to write Greek or any Indo-European language using only consonants. Wtht vwls ts hrd t knw wht wrds yr rdng. And basic words like English a or I or French eau ("water") would have been completely impossible to write. To make any use at all of the alphabet, the Greeks had to find some way of representing vowels.

Fortuitously, the solution to this problem worked in concert with the remedy for another. The Greeks needed vowels in order to write their language, and at the same time several of the letters they'd inherited from the Phoenicians represented sounds useless to them. So, with typical Greek confidence-in-rationalism, they reassigned the phonetic values of these letters and turned them into vowels, without changing the traditional order of the letters. And so ‘aleph became alpha, the forerunner of our letter a, and in the same way other unneeded letters became epsilon, the ancestor of e; iota, i; omicron and omega, o; and upsilon, u. This explains why our vowels are all over the alphabet instead of being neatly collected in one place, as logic would dictate. They are, at heart, phonetic substitutions for the wide array of Phoenician gutturals found all across the original lettering system, a system inherited by, but useless to, Greeks who were bold enough to give these letters new values but not so philistine as to give them new positions in the alphabet.

The addition of vowels entailed monumental consequences in the history of writing in the West, showing that, like politics, writing encompasses the art of the possible. By endowing alphabetic writing with the possibility of much broader cultural applicability, the Greeks' invention of vowels proved a turning point in Western Civilization. John Healey has summed up the significance of vowels neatly. So enticing, in fact, were these Greek-devised vowels that the cultures which had inspired the alphabet but had at first written only in consonants ultimately adopted them, too. Hebrew and Arabic writing today can mark vowels, though not with letters but with small marks added above or below the consonants.

The Greeks fostered one other significant development in alphabetic writing, the regular predisposition to write left-to-right. While early Greek lettering could go either left-to-right or right-to-left, and sometimes even both—a script that alternates between left-to-right and right-to-left on every other line is called boustrophedon, literally in Greek "as the ox turns (in plowing a field)"—eventually the Greeks settled on left-to-right as the standard direction for writing, part and parcel of the general privileging of right-handedness in Western Civilization.
That is, when right-handers put ink on paper, they're less inclined to smear the letters if they pull their hands away from what they're writing, and thus Greek scripts eventually settled into a left-to-right disposition, leaving lefties, on the other inky hand, to their own sinister deviances.

In the East, the Hebrews and other Semitic groups, including the ancestors of the modern Arabs, developed their own alphabets and direction of writing (right to left). These, too, evolved into different types of scripts, especially as time passed and Semitic languages multiplied. In particular, Aramaic, the most widespread of those daughter languages, ultimately replaced Hebrew as the common tongue used by the ancient Israelites, bringing with it its own species of alphabet (see Section 13).

Meanwhile, letters were spreading westward, too. The first non-Greek people we know of who used the Western alphabet in Italy were the Etruscans. This civilization was based in the area north of Rome, around modern Florence and Tuscany, and during the sixth and fifth centuries BCE dominated the inhabitants of central Italy, including the early Romans. Among the many cultural artifacts which Etruscan control left behind in Roman life was the Greek alphabet, though in an adapted form. For instance, the Greek alphabet which began alpha, beta, gamma, the equivalent of our ABG, evolved under Etruscan management into ABC, because the letters C and G are closely related and thus easily confused (see Section 7). In the process of this shift, not only did G end up being removed and replaced by C, but later it had to be re-inserted into the alphabet to restore the g-sound. This also left the alphabet with two hard c-sounds represented by C and K, the way it still is today. It would have made sense to eliminate either C or K, if that didn't entail effecting a fundamental change in the presentation of the alphabet, a structure rarely so liberal as to admit that sort of editing.

Several other changes occurred as a result of the importation of the Greek alphabet into Italy. One entailed the letter Z, which represents a sound the Romans didn't use until they came under the influence of Greek civilization and began borrowing words with zeta in them, the letter that represented that sound in Greek and is seen in English words of Greek derivation like zeal, zone and Zeus. In the Greek alphabet zeta comes rather early, immediately after epsilon (the Greek equivalent of E). While early Italians had inherited zeta along with the rest of the Greek alphabet, they had no words with the z-sound in them and, having no immediate reason to keep the letter, had omitted it from their earlier version of the alphabet. When the later Romans found that they did, in fact, need it, they re-introduced Z into their alphabet, putting it at the end where it wouldn't disrupt the order of letters which was by then well established. And that's why Z comes last in the Roman alphabet and all its descendants, including ours.

Another such change involved the letter which has come down to us as F. Called digamma in Greek, it originally signified not the f-sound but the equivalent of our /w/. Before the Classical Age, however, it had fallen into disuse because all w-sounds disappeared from the Greek language. Even though no longer in use, digamma remained for a long time in the Greek alphabet and, as such, was exported wherever the Greek alphabet traveled, to early Rome for instance.
And because the Romans needed a letter to represent the f-sound, which they had but Greek didn't, they simply re-assigned digamma the value of /f/, the sound it has signified in Western writing ever since.

After the disintegration of the Roman synthesis in the fifth century, which ended classical antiquity (see Section 8), literacy in the West relapsed into near extinction. This again opened up the possibility for substantive changes to be made in the alphabet—the fewer people who know something, the easier it is to revise it. Despite that, however, not many modifications of any real significance actually took place in alphabetic writing during the Middle Ages, and those few that did remained generally true to the inherited letter forms in order, sound and shape. So, even amidst several changes in scripts, ABC and its literal successors still held sway.

One of the few notable changes which took place was the separation of I and J, letters which come from the same original character in the Roman alphabet. Originally, the Phoenician letter yod ("hand")—a hand held up with fingers closed still resembles the upright form of the letter I—had developed into the Greek iota, one of the vowel-sounds the Greeks introduced into the alphabet. This subsequently passed to the Romans as the letter I, used in Latin to represent both the vowel sound /i/ and the consonant sound /y/. In the Middle Ages this caused confusion, since the i-sound and the y-sound are different, even though closely related. To distinguish them, medieval writers added a curved tail onto I when it was being used as a consonant, producing the modern form J. Even though this letter later took on a different value, the sound which begins modern English words like "jar" and "joint," many modern languages still retain the letter's original vocal quality, the y-sound. So, for instance, a German word like jung is pronounced "yung." Thus, the creation of J out of I explains not only why these letters look alike but also why they sit next to each other in the alphabet.

In similar fashion, Roman U was replicated during the Middle Ages into three different letters: U, V and W. Just like I and J, this trio evolved to reflect separate sounds, the vowel (U) and the consonants (V, W), all of them sounds formed with the vocal cords engaged and the lips rounded or pursed. The similarity of shape U, V and W share shows their common origin, too.

The complexity we've just reviewed—though it's not so complex if you take into account the many centuries the alphabet has been around and all the evolution it might have undergone—the variety of changes in form and value which alphabetic signs have embraced, raises the difficult issue of the alphabet's general usefulness in modern society. That is, it's supposed to be a simple way of writing, but it's not. So is the alphabet really a good idea? After all, if the spelling of words today has become so opaque that English speakers can hold spelling bees and people need dictionaries just to figure out how to spell a word—and what would the inventors of the alphabet have to say about that?—we've definitely lost sight of the original purpose of the alphabet: to simplify writing and make it easier to learn and do. But it's not the alphabet's fault, really. At the heart of modern people's problems with writing in English is the strange misfortune that our spelling has not been comprehensively revised for centuries.
So, we can't blame the alphabet itself but our own tendency not to reform the way we deploy it, not only the shape and order of its letters but their application in writing as well. Our reluctance to renovate this long-standing tradition in our society is what leaves us in such dyer straights—I mean "dire straits"? Yet conservatism is a hallmark of the alphabet's nature. History certainly documents that much. If that weren't true and the alphabet didn't constitute so basic an element of our culture, we could easily eliminate much of the confusion in spelling, for example by taking out either C or K and having only one way of writing the hard c-sound. But it doesn't seem very likely we'll ever be able to do that—in fakt, one kould kall it klose to inkonkeivable to akkomplish!—because both the letters and the ways we use them are too deeply entrenched in our civilization today.

The result is a cacophony of sound symbols, a confused writing system chock-full of archaic spellings like "knight," originally pronounced something like "kuh-nee-guh-tuh," which might have been fine for Chaucer but not for anyone alive now. To that can be added a long litany of lost consonants—gnat, gnaw, folk, would, aisle, eight—all pronounced at one time but now the fossil imprints of defunct phonemes. Multiply that by foreign borrowings like buffet and chutzpah, which bring with them exotic letter clusters (-et = -ay) or foreign sounds (ch- = guttural), and the situation comes close to untenable. All in all, nothing says absurdity quite like garbage: one word, two g's, and each pronounced differently.

In fact, spelling in English has reached such a pitch of insanity that certain sounds are expressed with a ludicrous array of letter configurations. For instance, the /sh/ sound can be represented at least eight different ways in English: shoe, sugar, passion, ambition, ocean, champagne, Confucius, and Sean. The long-o sound shows up in as many manifestations, too: go, beau, boat, stow, sew, doe, though, escargot. Worse yet, even the simplest words aren't consistent in their spelling. Consider four, fourth, fourteen, twenty-four, but forty. The "four" in each sounds the same, so what happened to the u in forty?

This astounding and needless confusion has inspired many an attempt at reform. Among those who have attempted to revise English spelling are some of the most notable exponents of our language ever: Noah Webster, Arthur Conan Doyle, Charles Darwin, Mark Twain, George Bernard Shaw, Andrew Carnegie and Brigham Young. But all these influential voices have run up against one impassable obstacle: which pronunciation is one to use in revising the spelling of words? Take girl, for instance. To which spelling do we "correct" it: gal (American dialect), goil (New York), gull (Irish), gel (London), gill (South African) or gairull (Scottish)? Because alphabets are tied to pronunciation, spelling accordingly fragments as languages break up into dialects. And even if we could come up with a quick and ready solution for our pressing literal woes, changing times would demand revisions in spelling almost as soon as repairs had been effected. That's the disadvantage of using a writing system based on spoken language, the dark counterpart to its great advantage, how easy it is to learn.

Except it's not easy to learn, not any more at least. If we go without revising English spelling much longer, the letter forms will have so little affinity with the sounds of the words we speak that the alphabet might as well be an ideographic system.
We claim, for instance, that it's easier to learn to write alphabetically than to memorize all the characters a Chinese student has to, but with a century or two more of disjunction between spelling and sound, it won't be. And even as it is, most English-speaking adults have yet to master our incomprehensable spelling completely. Or is that incomprehensible?

And how complex is the Chinese writing system really? There, every word is a separate symbol, each based on about 212 fundamental radicals (basic forms). More complicated ideas employ a combination of symbols, such as "eye" + "water" = "teardrop"; a sign with two symbols for "woman" means "quarrel," and with three it means "gossip." Though there are around fifty thousand symbols total, only four thousand are in common use, because the combination of symbols allows the system to reach out broadly across the continuum of thought.

Typing Chinese is, granted, a nightmare. The best typists manage about ten words a minute, and the old mechanical typewriters were comical to observe in use, so large that typists literally had to run up and down the keyboard. And Chinese dictionaries are hard to organize, too: how do you alphabetize words when there's no alphabet? Needless to say, there are no Chinese crossword puzzles, Scrabble® or Morse code. But in spite of all that, the Chinese system offers some enormous advantages, such as not having to be modified according to changes in dialect or as the spoken language evolves. Actually, in some respects Chinese writing hasn't had to evolve at all, no more, at least, than the sign for "star," which is represented today by an asterisk (*, literally "little star" in Greek), a symbol which has remained essentially the same since the time of ancient Babylon. Not only that, but the ideographic system used in China can be understood any place the system is known, even where the spoken language isn't. Thus, the ancient Chinese philosopher Confucius, who would hardly understand a single spoken word today, would be able to read many parts of a modern newspaper. His Western counterpart, Socrates, who lived about a century after Confucius, would be totally at sea in print or conversation.

All this raises the difficult question of whether we should perhaps entertain the idea of adopting an ideographic scheme of writing like the one the Chinese employ, and give up on seeking ways to revise the alphabetic system we currently use. Alphabets inherently bring with them such profound problems—archaisms like "knight," confrontations between what's said and what's written like "girl/gal/goil," letters with multiple values like the g's in garbage, various ways of construing the same sound like /sh/ (ocean, notion, passion, fashion, etc.) and, worst of all, a tendency toward traditionalism which obstructs even the most fundamental and necessary revisions—that it seems impossible to come up with a solution that will have any general applicability or appeal. And given the "great men" who have tried and failed, I doubt we ever will. So, in the end, we have to ask: ABC = :-( ?
http://www.usu.edu/markdamen/1320Hist&Civ/chapters/17ABGS.htm
In the Middle Bronze Age, an apparently "alphabetic" system known as the Proto-Sinaitic script appears in Egyptian turquoise mines in the Sinai peninsula, dated to circa the 15th century BC and apparently left by Canaanite workers. In 1999, John and Deborah Darnell discovered an even earlier version of this first alphabet at Wadi el-Hol, dated to circa 1800 BC and showing evidence of having been adapted from specific forms of Egyptian hieroglyphs that could be dated to circa 2000 BC, strongly suggesting that the first alphabet had been developed about that time. Based on letter appearances and names, it is believed to be based on Egyptian hieroglyphs. This script had no characters representing vowels, although originally it probably was a syllabary whose unneeded symbols were later discarded. An alphabetic cuneiform script with 30 signs, including three that indicate the following vowel, was invented in Ugarit before the 15th century BC. This script was not used after the destruction of Ugarit.

Despite the conflict in theories, scholars are generally agreed that, for about 200 years before the middle of the 2nd millennium BCE, alphabet making was in the air in the Syro-Palestinian region. It is idle to speculate on the meaning of the various discoveries referred to. That they manifest closely related efforts is certain; what the exact relationship among these efforts was, and what their relationship with the North Semitic alphabet was, cannot be said with certainty.

The boundaries between the three types of segmental scripts are not always clear-cut. For example, Sorani Kurdish is written in the Arabic script, which is normally an abjad. However, in Kurdish, writing the vowels is mandatory and full letters are used, so the script is a true alphabet. Other languages may use a Semitic abjad with mandatory vowel diacritics, effectively making them abugidas. On the other hand, the Phagspa script of the Mongol Empire was based closely on the Tibetan abugida, but all vowel marks were written after the preceding consonant rather than as diacritic marks. Although short a was not written, as in the Indic abugidas, one could argue that the linear arrangement made this a true alphabet. Conversely, the vowel marks of the Tigrinya abugida and the Amharic abugida (ironically, the original source of the term "abugida") have been so completely assimilated into their consonants that the modifications are no longer systematic and have to be learned as a syllabary rather than as a segmental script. Even more extreme, the Pahlavi abjad eventually became logographic.

The letter names were abandoned in Latin, which instead referred to the letters by adding a vowel (usually e) before or after the consonant; the two exceptions were Y and Z, which were borrowed from the Greek alphabet rather than Etruscan and were known as Y Graeca "Greek Y" (pronounced I Graeca "Greek I") and zeta (from Greek). This discrepancy was inherited by many European languages, as in the term zed for Z in all forms of English other than American English. Over time, names sometimes shifted or were added, as in double U for W ("double V" in French), the English name for Y, and American zee for Z. Comparing names in English and French gives a clear reflection of the Great Vowel Shift: A, B, C and D are pronounced /eɪ, biː, siː, diː/ in today's English, but /a, be, se, de/ in contemporary French. The French names (from which the English names are derived) preserve the qualities of the English vowels from before the Great Vowel Shift.
By contrast, the names of F, L, M, N and S remain the same in both languages, because "short" vowels were largely unaffected by the Shift.

The script was spread by the Phoenicians across the Mediterranean. In Greece, the script was modified to add vowels, giving rise to the ancestor of all alphabets in the West. It was the first alphabet in which vowels have independent letter forms separate from those of consonants. The Greeks chose letters representing sounds that did not exist in Greek to represent the vowels. Vowels are significant in the Greek language, and the syllabic Linear B script that was used by the Mycenaean Greeks from the 16th century BC had 87 symbols, including 5 vowels. In its early years, there were many variants of the Greek alphabet, a situation that caused many different alphabets to evolve from it.
http://60israel.org/88Vx25nS/xY5916qH/
Painting: "Homer singing his Iliad at the gate of Athens" by Guillaume Lethière, 1811. The language used by Homer is an archaic version of Ionic Greek, with admixtures from certain other dialects such as Aeolic Greek. It later served as the basis of Epic Greek, the language of epic poetry, typically written in dactylic hexameter verse. Greek is the mother tongue of the inhabitants of Greece and of the Greek population of the island of Cyprus. Greek is also the language of the Greek communities outside Greece, as in the United States, Canada, and Australia. There are Greek-speaking enclaves in Calabria (southern Italy) and in Ukraine. Two main varieties of the language may be distinguished: the local dialects, which may differ from one another considerably, and the Standard Modern Greek. The Greek alphabet derived from the North Semitic alphabet via that of the Phoenicians. The Greek alphabet was modified to make it more efficient and accurate for writing a non -Semitic language by the addition of several new letters and the modification or dropping of several others. Most important, some of the symbols of the Semitic alphabet, which represented only consonants, were made to represent vowels: the Semitic consonants ʾalef, he, yod, ʿayin, and vav became the Greek letters alpha, epsilon, iota, omicron, and upsilon, representing the vowels a,e,i,o, and u, respectively. The addition of symbols for the vowel sounds greatly increased the accuracy and legibility of the writing system for non-Semitic languages. The early Greek alphabet was written, like its Semitic forebears, from right to left. This gradually gave way to the boustrophedon style, and after 500 BC Greek was always written from left to right. Before the 5th century BC the Greek alphabet could be divided into two principal branches, the Ionic (eastern) and the Chalcidian (western); differences between the two branches were minor. The Chalcidian alphabet probably gave rise to the Etruscan alphabet of Italy in the 8th century BC and hence indirectly to the other Italic alphabets, including the Latin alphabet, which is now used for most European languages. In 403 BC, however, Athens officially adopted the Ionic alphabet as written in Miletus, and in the next 50 years almost all local Greek alphabets, including the Chalcidian, were replaced by the Ionic script, which thus became the classical Greek alphabet. The classical alphabet had 24 letters, 7 of which were vowels, and consisted of capital letters, ideal for monuments and inscriptions. From it were derived three scripts better suited to handwriting: uncial, which was essentially the classical capitals adapted to writing with pen on paper and similar to hand printing; and cursive and minuscule, which were running scripts similar to modern handwriting forms, with joined letters and considerable modification in letter shape. Uncial went out of use in the 9th century AD, and minuscule, which replaced it, developed into the modern Greek handwriting form. Our own word alphabet comes from the first two letters of the Greek alphabet: alpha and beta. Very interesting history – thank you.
https://www.meetcrete.com/learning-the-greek-alphabet/
The Georgian alphabet is an alphabetic writing system. With 33 letters, it is the largest true alphabet where each letter is graphically independent.

In the usual case, each alphabetic character represents either a consonant or a vowel rather than a syllable or a group of consonants and vowels. As a result, the number of characters required can be held to a relative few. A language that has 30 consonant sounds and five vowels, for example, needs at most only 35 separate letters. In a syllabary, on the other hand, the same language would require 30 × 5 symbols to represent each possible consonant-vowel syllable (e.g., separate forms for ba, be, bi, bo, bu; da, de, di; and so on) and an additional five symbols for the vowels, thereby making a total of 155 individual characters (the short sketch at the end of this passage works through the arithmetic). Both syllabaries and alphabets are phonographic symbolizations; that is, they represent the sounds of words rather than units of meaning.

Over the centuries, various theories have been advanced to explain the origin of alphabetic writing, and, since Classical times, the problem has been a matter of serious study. The Greeks and Romans considered five different peoples as the possible inventors of the alphabet—the Phoenicians, Egyptians, Assyrians, Cretans, and Hebrews. Among modern theories are some that are not very different from those of ancient days. Every country situated in or more or less near the eastern Mediterranean has been singled out for the honor. Egyptian writing, cuneiform, Cretan, hieroglyphic Hittite, the Cypriot syllabary, and other scripts have all been called prototypes of the alphabet. The Egyptian theory actually subdivides into three separate theories, according to whether the Egyptian hieroglyphic, the hieratic, or the demotic script is regarded as the true parent of alphabetic writing. Similarly, the idea that cuneiform was the precursor of the alphabet may also be subdivided into those theories singling out Sumerian, Babylonian, or Assyrian cuneiform.

In default of other direct evidence, it is reasonable to suppose that the actual prototype of the alphabet was not very different from the writing of the earliest North Semitic inscriptions now extant, which belong to the last two or three centuries of the 2nd millennium BCE. The North Semitic alphabet was so constant for many centuries that it is impossible to think that there had been any material changes in the preceding two to three centuries. Moreover, the North Semitic languages, based as they are on a consonantal root (i.e., a system in which the vowels serve mainly to indicate grammatical or similar changes), were clearly suitable for the creation of a consonant alphabet. Thus the primary classification of alphabets reflects how they treat vowels.
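To make the character-count comparison above concrete, here is a minimal Python sketch that works through the same arithmetic for a hypothetical language with 30 consonants and five vowels. The figures of 35 and 155 match those quoted in the passage; the language itself is invented purely for illustration, and real syllabaries rarely cover every possible consonant-vowel combination.

```python
# Back-of-the-envelope check of the symbol counts quoted above, for a
# hypothetical language with 30 consonants and 5 vowels.

consonants = 30
vowels = 5

alphabet_letters = consonants + vowels            # one letter per segment
syllabary_signs = consonants * vowels + vowels    # one sign per CV syllable,
                                                  # plus signs for bare vowels

print(f"alphabet:  {alphabet_letters} symbols")   # 35
print(f"syllabary: {syllabary_signs} symbols")    # 155
```

The gap only widens for languages that also allow consonant clusters or closed syllables, which is part of why segmental scripts stay so compact.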
For tonal languages, further classification can be based on their treatment of tone, though names do not yet exist to distinguish the various types. Some alphabets disregard tone entirely, especially when it does not carry a heavy functional load, as in Somali and many other languages of Africa and the Americas. Such scripts are to tone what abjads are to vowels. Most commonly, tones are indicated with diacritics, the way vowels are treated in abugidas. This is the case for Vietnamese (a true alphabet) and Thai (an abugida). In Thai, tone is determined primarily by the choice of consonant, with diacritics for disambiguation. In the Pollard script, an abugida, vowels are indicated by diacritics, but the placement of the diacritic relative to the consonant is modified to indicate the tone. More rarely, a script may have separate letters for tones, as is the case for Hmong and Zhuang. For most of these scripts, regardless of whether letters or diacritics are used, the most common tone is not marked, just as the most common vowel is not marked in Indic abugidas; in Zhuyin not only is one of the tones unmarked, but there is a diacritic to indicate lack of tone, like the virama of Indic.
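As a small illustration of tone carried by diacritics, the Python sketch below decomposes a few common Vietnamese syllables with Unicode normalization, showing that the tone sign is a separate, removable combining mark rather than part of the base letter. The syllables are ordinary dictionary examples chosen for illustration; the snippet says nothing about Vietnamese orthography beyond what the passage already states.

```python
# Tone marks as separable diacritics: NFD normalization splits each
# accented vowel into a base letter plus combining marks.

import unicodedata

for syllable in ["ma", "má", "mà", "mã", "mạ"]:
    decomposed = unicodedata.normalize("NFD", syllable)
    marks = [unicodedata.name(ch) for ch in decomposed
             if unicodedata.combining(ch)]
    print(syllable, "->", marks or ["(no tone mark)"])
```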
http://60israel.org/W6h4E02x/xH6027yY/
Have you ever imagined what the world today would be like if ancient people hadn't invented the alphabet? I don't dare to think about it, but it must be the worst of the worst. Without the alphabet, mankind would never have advanced to the levels we have today. If the alphabet hadn't been invented, books would never have appeared, and consequently people would be uneducated and life would be boring. You wouldn't know anything about history, human knowledge wouldn't have been preserved, and you also wouldn't be able to read my essay. We would be narrow-minded and shallow. Our imagination would be limited. There would be no paper, no printing press, no television, no Galileo Galilei, no William Shakespeare, no Charles Darwin, no Albert Einstein, and so on. Instead, tolerance would be low and the world would have been destroyed by war and cruelty. In my opinion, it would be quite a miserable world where no one could read, write, or think freely. To be honest, if the alphabet hadn't existed, some other way of communicating ideas would probably have emerged.

The Phoenician alphabet is acrophonic, meaning each letter represents the initial sound of the name of the letter. For example, the last letter in the Phoenician alphabet is called "taw" or "tah" (meaning mark) and led to our current letter T with the same sound. The Greeks adapted that alphabet in the eighth century BC by adding vowels. The Etruscans borrowed the Greek alphabet, which was later adopted by the Romans. These early scripts were most frequently written by pressing or scratching a stylus into a soft clay tablet which was then allowed to harden. By the first century BC, the Romans had developed several scripts. There was a cursive hand which could be quickly scratched into a wax tablet or written with a reed pen on paper made of papyrus. There was also a script called the Imperial Capital which was carved in stone and survives on monuments and buildings from the time. This script was also written using a brush on paper. All subsequent Western scripts have evolved from the Roman letters; in fact, the Imperial Capital script serves as the basis for our modern capital letters.

Government studies confirm that children who are read to and who recognize the letters of the alphabet have an easier time learning how to read and thus do better in school, so maximize your children's exposure to the letters of the alphabet. Give them alphabet books, alphabet blocks, alphabet magnets and clothing decorated with the alphabet. Here are some ideas for using such objects in simple alphabet and spelling games that you can play with your young children, to help them get off on the right foot, and do it all while making it an enjoyable part of everyday life. Make a game of naming the letters of the alphabet and pronouncing the sounds that they make. Practice them in alphabetical order and, later, in random order. Sing the alphabet song while pointing to the letters of the alphabet. Go around the room and hang signs on common objects with simple one- and two-syllable names: "table," "bed," "lamp," and such. Help your children name the letters in them and spell and pronounce the words. Later, once the spellings are known, remove the signs, but continue to have the children spell the words for you. Look at pictures of everyday objects and use movable letters, such as alphabet magnets or blocks or flash cards, to form the words and practice their spellings.

Simplified Chinese and traditional Chinese coexist.
Traditional Chinese, on this view, represents the genuine script, since the simplification of Chinese characters was due to foreign influence. The simplification of Chinese characters was accompanied by a change in writing direction, and the change in direction is consistent with, and also promotes, simplification. Simplification and the change from vertical to horizontal writing are, in effect, deviations of Chinese from its essential nature. The internal structure of characters receives less attention, and people are now accustomed to reasoning in terms of double- or multi-character words rather than single characters. There is debate over whether the simplification of Chinese is beneficial. Simply put, as the Chinese system has turned horizontal, further simplification is likely to be its future.

The Chinese still use this kind of writing, in which symbols stand for whole syllables. The trouble is that there are so many different syllables that it takes a scholar years to learn them all. In the Chinese language there are more than fifty thousand characters, and many of these are still in use today. A Chinese student does not master the writing of his language until he is beyond the age at which an American student may have graduated from college, say, twenty-five years old. In comparison with those thousands of characters, the American schoolboy has to learn only twenty-six letters. Therefore the next step in the development of the alphabet was to have a symbol, or letter, for each sound that was used in the language being spoken. There are many more sounds that a human being can use than we have in our alphabet, and the alphabets used for other languages have in them certain letters that we do not need in writing the English language. But we also have some letters that they do not need. No alphabet needs more than thirty or forty letters. A child can master these in a year or two. Writing with letters instead of with pictures is well over three thousand years old. Just as we got our alphabet from the Greeks, they got theirs from the Semitic peoples: the Phoenicians, the Jews, and other ancient peoples who spoke Semitic languages.

There are many types of worksheets you can use as a teaching aid. First come coloring pages. These are good for teaching kids the different colors and their names, and the proper way to color. With First Crafts, kids learn how to make simple crafts and enjoy the fruits of their hard work. There are also worksheets that teach how to read. These include the basic sounds each letter produces, and kids try to read the words displayed before them. In the First Alphabet worksheet, kids learn how to write the alphabet, and in the First Animals worksheet, kids try to recognize the animals in the picture and learn their names. There are many more worksheets available; they vary in the complexity of the activity depending on the age and grade level of the child.
https://theconsciouscraft.com/6xp05i/8is9r0i9mwh270wb/
The word "alphabet" came into Middle English from the Late Latin word Alphabetum, which in turn originated in the Ancient Greek Αλφάβητος Alphabetos, from alpha and beta, the first two letters of the Greek alphabet. Alpha and beta in turn came from the first two letters of the Phoenician alphabet, and meant ox and house respectively. There are dozens of alphabets in use today. Most of them are composed of lines (linear writing); notable exceptions are Braille, fingerspelling (Sign language), and Morse code. The term alphabet prototypically refers to a writing system that has characters (graphemes) which represent both consonant and vowel sounds, even though there may not be a complete one-to-one correspondence between symbol and sound. A grapheme is an abstract entity which may be physically represented by different styles of glyphs. There are many written entities which do not form part of the alphabet, including numerals, mathematical symbols, and punctuation. Some human languages are commonly written using a combination of logograms (which represent morphemes or words) and syllabaries (which represent syllables) instead of an alphabet. Egyptian hieroglyphs and Chinese characters are two of the best-known writing systems with predominantly non-alphabetic representations. Non-written languages may also be represented alphabetically. For example, linguists researching a non-written language (such as some of the indigenous Amerindian languages) will use the International Phonetic Alphabet to enable them to write down the sounds they hear. Most, if not all, linguistic writing systems have some means for phonetic approximation of foreign words, usually using the native character set. The history of the alphabet started in ancient Egypt. By 2700 BC Egyptian writing had a set of some 22 hieroglyphs to represent syllables that begin with a single consonant of their language, plus a vowel (or no vowel) to be supplied by the native speaker. These glyphs were used as pronunciation guides for logograms, to write grammatical inflections, and, later, to transcribe loan words and foreign names. However, although seemingly alphabetic in nature, the original Egyptian uniliterals were not a system and were never used by themselves to encode Egyptian speech. In the Middle Bronze Age an apparently "alphabetic" system known as the Proto-Sinaitic script is thought by some to have been developed in central Egypt around 1700 BC for or by Semitic workers, but only one of these early writings has been deciphered and their exact nature remains open to interpretation. Based on letter appearances and names, it is believed to be based on Egyptian hieroglyphs. This script eventually developed into the Proto-Canaanite alphabet, which in turn was refined into the Phoenician alphabet. It also developed into the South Arabian alphabet, from which the Ge'ez alphabet (an abugida) is descended. Note that the scripts mentioned above are not considered proper alphabets, as they all lack characters representing vowels. These early vowelless alphabets are called abjads, and still exist in scripts such as Arabic, Hebrew and Syriac. Phoenician was the first major phonemic script. In contrast to two other widely used writing systems at the time, Cuneiform and Egyptian hieroglyphs, it contained only about two dozen distinct letters, making it a script simple enough for common traders to learn. Another advantage of Phoenician was that it could be used to write down many different languages, since it recorded words phonemically. 
The script was spread by the Phoenicians, whose thalassocracy allowed it to spread across the Mediterranean. In Greece, the script was modified to add the vowels, giving rise to the first true alphabet. The Greeks took letters which did not represent sounds that existed in Greek and changed them to represent the vowels. This marks the creation of a "true" alphabet, with both vowels and consonants as explicit symbols in a single script. In its early years, there were many variants of the Greek alphabet, a situation which caused many different alphabets to evolve from it.

The Cumae form of the Greek alphabet was carried over by Greek colonists from Euboea to the Italian peninsula, where it gave rise to a variety of alphabets used to inscribe the Italic languages. One of these became the Latin alphabet, which was spread across Europe as the Romans expanded their empire. Even after the fall of the Roman state, the alphabet survived in intellectual and religious works. It eventually came to be used for the descendant languages of Latin (the Romance languages) and then for most of the other languages of Europe.

Another notable script is Elder Futhark, which is believed to have evolved out of one of the Old Italic alphabets. Elder Futhark gave rise to a variety of alphabets known collectively as the Runic alphabets. The Runic alphabets were used for Germanic languages from AD 100 to the late Middle Ages. Their usage was mostly restricted to engravings on stone and jewelry, although inscriptions have also been found on bone and wood. These alphabets have since been replaced with the Latin alphabet, except for decorative usage, for which the runes remained in use until the 20th century.

The Glagolitic alphabet was the script of the liturgical language Old Church Slavonic, and became the basis of the Cyrillic alphabet. The Cyrillic alphabet is one of the most widely used modern alphabets, and is notable for its use in Slavic languages and also for other languages within the former Soviet Union. Variants include the Serbian, Macedonian, Bulgarian, and Russian alphabets. The Glagolitic alphabet is believed to have been created by Saints Cyril and Methodius, while the Cyrillic alphabet was invented by the Bulgarian scholar Clement of Ohrid, who was their disciple. They feature many letters that appear to have been borrowed from or influenced by the Greek alphabet and the Hebrew alphabet.

Most alphabetic scripts of India and Eastern Asia are descended from the Brahmi script, which is often believed to be a descendant of Aramaic. In Korea, the Hangul alphabet was created by Sejong the Great in 1443. Understanding of the phonetic alphabet of the Mongolian Phagspa script aided the creation of a phonetic script suited to the spoken Korean language; the Mongolian Phagspa script was in turn derived from the Brahmi script. Hangul is a unique alphabet in a variety of ways: it is a featural alphabet, where many of the letters are designed from a sound's place of articulation (P to look like the widened mouth, the L sound to look like the tongue pulled in, etc.); its design was planned by the government of the time; and it places individual letters in syllable clusters with equal dimensions, in the same way as Chinese characters, to allow for mixed-script writing (one syllable always takes up one type-space no matter how many letters get stacked into building that one sound-block).

Zhuyin (sometimes called Bopomofo) is a semi-syllabary used to phonetically transcribe Mandarin Chinese in the Republic of China.
After the later establishment of the People's Republic of China and its adoption of Hanyu Pinyin, the use of Zhuyin today is limited, but it's still widely used in Taiwan, which the Republic of China still governs. Zhuyin developed out of a form of Chinese shorthand based on Chinese characters in the early 1900s and has elements of both an alphabet and a syllabary. Like an alphabet, the phonemes of syllable initials are represented by individual symbols, but like a syllabary, the phonemes of the syllable finals are not; rather, each possible final (excluding the medial glide) is represented by its own symbol. For example, luan is represented as ㄌㄨㄢ (l-u-an), where the last symbol ㄢ represents the entire final -an. While Zhuyin is not used as a mainstream writing system, it is still often used in ways similar to a romanization system, that is, for aiding in pronunciation and as an input method for Chinese characters on computers and cell phones.

The term "alphabet" is used by linguists and paleographers in both a wide and a narrow sense. In the wider sense, an alphabet is a script that is segmental at the phoneme level, that is, it has separate glyphs for individual sounds and not for larger units such as syllables or words. In the narrower sense, some scholars distinguish "true" alphabets from two other types of segmental script, abjads and abugidas. These three differ from each other in the way they treat vowels: abjads have letters for consonants and leave most vowels unexpressed; abugidas are also consonant-based, but indicate vowels with diacritics attached to the consonants or with a systematic graphic modification of them. In alphabets in the narrow sense, on the other hand, consonants and vowels are written as independent letters. The earliest known alphabet in the wider sense is the Wadi el-Hol script, believed to be an abjad, which through its successor Phoenician is the ancestor of modern alphabets, including Arabic, Greek, Latin (via the Old Italic alphabet), Cyrillic (via the Greek alphabet) and Hebrew (via Aramaic).

The number of letters in an alphabet can be quite small. The Book Pahlavi script, an abjad, had only twelve letters at one point, and may have had even fewer later on. Today the Rotokas alphabet has only twelve letters. (The Hawaiian alphabet is sometimes claimed to be as small, but it actually consists of 18 letters, including the ʻokina and five long vowels.) While Rotokas has a small alphabet because it has few phonemes to represent (just eleven), Book Pahlavi was small because many letters had been conflated, that is, the graphic distinctions had been lost over time, and diacritics were not developed to compensate for this as they were in Arabic, another script that lost many of its distinct letter shapes. For example, a comma-shaped letter represented g, d, y, k, or j. However, such apparent simplifications can perversely make a script more complicated. In later Pahlavi papyri, up to half of the remaining graphic distinctions of these twelve letters were lost, and the script could no longer be read as a sequence of letters at all; instead, each word had to be learned as a whole, that is, the words had become logograms, as in Egyptian Demotic.

The largest segmental script is probably an abugida, Devanagari. When written in Devanagari, Vedic Sanskrit has an alphabet of 53 letters, including the visarga mark for final aspiration and special letters for kš and jñ, though one of the letters is theoretical and not actually used.
The Hindi alphabet must represent both Sanskrit and modern vocabulary, and so has been expanded to 58 with the khutma letters (letters with a dot added) to represent sounds from Persian and English. The largest known abjad is Sindhi, with 51 letters. The largest alphabets in the narrow sense include Kabardian and Abkhaz (for Cyrillic), with 58 and 56 letters, respectively, and Slovak (for the Latin alphabet), with 46. However, these scripts either count di- and tri-graphs as separate letters, as Spanish did with ch and ll until recently, or use diacritics like Slovak č. The largest true alphabet where each letter is graphically independent is probably Georgian, with 41 letters. Syllabaries typically contain 50 to 400 glyphs (though the Múra-Pirahã language of Brazil would require only 24 if it did not denote tone, and Rotokas would require only 30), and the glyphs of logographic systems typically number from the many hundreds into the thousands. Thus a simple count of the number of distinct symbols is an important clue to the nature of an unknown script.

It is not always clear what constitutes a distinct alphabet. French uses the same basic alphabet as English, but many of the letters can carry additional marks, such as é, à, and ô. In French, these combinations are not considered to be additional letters. However, in Icelandic, the accented letters such as á, í, and ö are considered to be distinct letters of the alphabet. In Spanish, ñ is considered a separate letter, but accented vowels such as á and é are not. The ll and ch were also considered single letters, distinct from an l followed by an l and a c followed by an h, respectively, but in 1994 the Real Academia Española changed the collation so that ll now falls between lk and lm in the dictionary and ch between cg and ci (Real Academia Española, cited in "Spanish Pronto!: Spanish Alphabet," Spanish Pronto!, 22 April 2007).

In German, words starting with sch- (constituting the German phoneme /ʃ/) are intercalated between words with initial sca- and sci- (all incidentally loanwords) rather than this graphic cluster being filed after the letter s as though it were a single letter. Such a lexicographical policy would be de rigueur in a dictionary of Albanian, where dh-, gj-, ll-, nj-, rr-, th-, xh- and zh- (all representing phonemes and considered separate single letters) follow the letters d, g, l, n, r, t, x and z respectively. Nor, in a dictionary of English, is the lexical section with initial th- given a place after the letter t; it is inserted between te- and ti-. German words with umlaut are further alphabetized as if there were no umlaut at all, in contrast to Turkish, which is said to have adopted the Swedish graphemes ö and ü, and where a word like tüfek, "gun," comes after tuz, "salt," in the dictionary. The Danish and Norwegian alphabets end with æ – ø – å, whereas the Swedish and Finnish ones conventionally put å – ä – ö at the end.

Some adaptations of the Latin alphabet are augmented with ligatures, such as æ in Old English and Icelandic and Ȣ in Algonquian; by borrowings from other alphabets, such as the thorn þ in Old English and Icelandic, which came from the Futhark runes; and by modifying existing letters, such as the eth ð of Old English and Icelandic, which is a modified d. Other alphabets only use a subset of the Latin alphabet, such as Hawaiian, and Italian, which uses the letters j, k, x, y and w only in foreign words.
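The digraph-aware dictionary orderings described above, such as the pre-1994 Spanish treatment of ch and ll, can be made concrete with a few lines of code. The sketch below is a self-contained toy illustration of that traditional ordering, not a real collation library; it simply segments words into collation units (treating ch and ll as single letters) and ranks them.

```python
# A minimal sketch of pre-1994 "traditional" Spanish collation, in which the
# digraphs ch and ll were treated as single letters sorting after c and l.
# This is a toy illustration, not a substitute for a real collation library
# (accents and other details are ignored).

TRADITIONAL_ORDER = [
    "a", "b", "c", "ch", "d", "e", "f", "g", "h", "i", "j", "k", "l", "ll",
    "m", "n", "ñ", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
]
RANK = {letter: i for i, letter in enumerate(TRADITIONAL_ORDER)}

def traditional_key(word: str) -> list[int]:
    """Split a word into collation units (longest match first) and rank them."""
    word = word.lower()
    units, i = [], 0
    while i < len(word):
        two = word[i:i + 2]
        if two in RANK:          # ch / ll count as one unit
            units.append(RANK[two])
            i += 2
        else:
            units.append(RANK.get(word[i], len(RANK)))
            i += 1
    return units

words = ["cuna", "chico", "dama", "luna", "llama", "mano"]
print(sorted(words, key=traditional_key))
# -> ['cuna', 'chico', 'dama', 'luna', 'llama', 'mano']  (ch after cu, ll after lu)
```

Under the post-1994 rules the same words would sort purely letter by letter, which is why chico now precedes cuna and llama precedes luna in modern Spanish dictionaries.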
It is unknown whether the earliest alphabets had a defined sequence. Some alphabets today, such as Hanunoo, are learned one letter at a time, in no particular order, and are not used for collation where a definite order is required. However, a dozen Ugaritic tablets from the fourteenth century BC preserve the alphabet in two sequences. One, the ABGDE order later used in Phoenician, has continued with minor changes in Hebrew, Greek, Armenian, Gothic, Cyrillic, and Latin; the other, HMĦLQ, was used in southern Arabia and is preserved today in Ethiopic. Both orders have therefore been stable for at least 3000 years. The historical order was abandoned in Runic and Arabic, although Arabic retains the traditional "abjadi order" for numbering. The Phoenician letter names, in which each letter is associated with a word that begins with that sound, continue to be used in Samaritan, Aramaic, Syriac, Hebrew, and Greek. However, they were abandoned in Arabic, Cyrillic and Latin.

Each language may establish rules that govern the association between letters and phonemes, but, depending on the language, these rules may or may not be consistently followed. In a perfectly phonological alphabet, the phonemes and letters would correspond perfectly in two directions: a writer could predict the spelling of a word given its pronunciation, and a speaker could predict the pronunciation of a word given its spelling. However, languages often evolve independently of their writing systems, and writing systems have been borrowed for languages they were not designed for, so the degree to which letters of an alphabet correspond to phonemes of a language varies greatly from one language to another and even within a single language.

A language may represent a given phoneme with a combination of letters rather than just a single letter. Two-letter combinations are called digraphs and three-letter groups are called trigraphs. German uses the tetragraphs (four-letter combinations) "tsch" for the phoneme /tʃ/ and "dsch" for /dʒ/, although the latter is rare. Kabardian also uses a tetragraph for one of its phonemes. A language may also represent the same phoneme with two different letters or combinations of letters. An example is modern Greek, which may write the phoneme /i/ in six different ways: "ι", "η", "υ", "ει", "οι" and "υι" (although the last is very rare).
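Reading such multi-letter spellings amounts to segmenting a word into its longest known graphemes before mapping them to sounds. The sketch below illustrates that longest-match idea with a deliberately tiny, simplified inventory loosely based on German spellings; the inventory and the sound values attached to it are illustrative assumptions, not a complete or authoritative grapheme-to-phoneme table.

```python
# A minimal sketch of how multi-letter graphemes (digraphs, trigraphs,
# tetragraphs) can be segmented by longest-match scanning. The tiny inventory
# below is a simplified, illustrative subset of German-style spellings.

GRAPHEMES = {
    "tsch": "tʃ",   # tetragraph
    "sch": "ʃ",     # trigraph
    "ch": "x",      # digraph (the value varies by context in real German)
    "ei": "aɪ",
    # single letters are accepted as-is by the size == 1 fallback below
}
MAX_LEN = max(len(g) for g in GRAPHEMES)

def segment(word: str) -> list[str]:
    """Greedily split a word into the longest known graphemes."""
    units, i = [], 0
    while i < len(word):
        for size in range(min(MAX_LEN, len(word) - i), 0, -1):
            chunk = word[i:i + size]
            if chunk in GRAPHEMES or size == 1:
                units.append(chunk)
                i += size
                break
    return units

print(segment("schein"))   # -> ['sch', 'ei', 'n']
print(segment("deutsch"))  # -> ['d', 'e', 'u', 'tsch']  ("eu" is not in the toy table)
```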
A language may also spell some words with unpronounced letters that exist for historical or other reasons. National languages generally elect to address the problem of dialects by simply associating the alphabet with the national standard. However, with an international language with wide variations in its dialects, such as English, it would be impossible to represent the language in all its variations with a single phonetic alphabet. Some national languages like Finnish, Turkish and Bulgarian have a very regular spelling system with a nearly one-to-one correspondence between letters and phonemes. Strictly speaking, there is no word in Finnish, Turkish or Bulgarian corresponding to the verb "to spell" (meaning to split a word into its letters), the closest match being a verb meaning to split a word into its syllables. Similarly, the Italian verb corresponding to "spell", compitare, is unknown to many Italians because the act of spelling itself is almost never needed: each phoneme of Standard Italian is represented in only one way. However, pronunciation cannot always be predicted from spelling in cases of irregular syllabic stress.

In standard Spanish, it is possible to tell the pronunciation of a word from its spelling, but not vice versa; this is because certain phonemes can be represented in more than one way, but a given letter is consistently pronounced. French, with its silent letters and its heavy use of nasal vowels and elision, may seem to lack much correspondence between spelling and pronunciation, but its rules on pronunciation are actually consistent and predictable with a fair degree of accuracy. At the other extreme are languages such as English, where the spelling of many words simply has to be memorized because it does not correspond to sounds in a consistent way. For English, this is partly because the Great Vowel Shift occurred after the orthography was established, and because English has acquired a large number of loanwords at different times, retaining their original spelling at varying levels. Even English has general, albeit complex, rules that predict pronunciation from spelling, and these rules are successful most of the time; rules to predict spelling from pronunciation have a higher failure rate.

Sometimes countries have the written language undergo a spelling reform in order to realign the writing with the contemporary spoken language. These can range from simple spelling changes and word forms to switching the entire writing system itself, as when Turkey switched from the Arabic alphabet to the Roman alphabet.

The sounds of speech of all languages of the world can be written by a rather small universal phonetic alphabet. A standard for this is the International Phonetic Alphabet.

Daniels and Bright (1996), pp. 92–96.
《세종실록》 (The Annals of the Choson Dynasty: Sejong), 25th year, 12th month: "上親制諺文二十八字…是謂訓民正音" ("His majesty created 28 characters himself... It is Hunminjeongeum," the original name for Hangul).
Millard, A.R., "The Infancy of the Alphabet," World Archaeology 17, no. 3, Early Writing Systems (February 1986): 390–398, at p. 395.
http://maps.thefullwiki.org/Alphabet
As English speakers, it's pretty much a guarantee that we have the alphabet ingrained in our memory in the specific order of A-Z from a very tender age. You can most likely say the ABCs at a very quick speed without even thinking about it, and if you are a little special, you may even be able to say it backwards. A lot of us can't imagine the alphabet being arranged in any other order… but the big question is, why are the letters arranged in that order in the first place?

Looking closely at the alphabet, you will realise that it is not arranged by vowels and consonants, similar sounds, or how often the letters are used. These factors, however, actually vary by language; on French keyboards, the letter "Q" sits where the letter "A" is on an English keyboard. So how did the order of the English alphabet come to be? There's really not an easy answer. No one woke up and decided to put the letters in that order; the alphabet evolved slowly over a long period of time to become what it is today.

As a matter of fact, the English letters can be traced all the way back to ancient Egypt, where foreign workers developed alphabetic lettering while the Egyptians themselves were still using hieroglyphics. This first alphabet was adapted by the Phoenicians, whose Mediterranean civilization thrived from 1500 to 300 BC. The Greeks then began using it around the 8th century BC, and they are to thank for the vowels we have to this day. From Greece, the letters travelled to Rome, and it's the Romans who turned it into the "modern" alphabet, with the letters we know today. The Romans took "Z," which had been near the beginning of the Greek alphabet as "zeta" but had since disappeared from their own alphabet, and tacked it onto the end. They also did something similar with "Y."

The big question remains: why are the letters arranged in the order A-Z? No one knows for sure. Some scholars theorize that it was based on the order of Egyptian hieroglyphics. One of the most popular theories suggests that there was a numerical component: each letter had a number equivalent, and those have just been lost over time.
https://www.empiremedia.com.ng/2018/10/why-english-alphabets-are-arranged-from.html
Damiana Gaillard, November 15, 2019

Over the centuries, various theories have been advanced to explain the origin of alphabetic writing, and, since Classical times, the problem has been a matter of serious study. The Greeks and Romans considered five different peoples as the possible inventors of the alphabet: the Phoenicians, Egyptians, Assyrians, Cretans, and Hebrews. Among modern theories are some that are not very different from those of ancient days. Every country situated in or more or less near the eastern Mediterranean has been singled out for the honor. Egyptian writing, cuneiform, Cretan, hieroglyphic Hittite, the Cypriot syllabary, and other scripts have all been called prototypes of the alphabet. The Egyptian theory actually subdivides into three separate theories, according to whether the Egyptian hieroglyphic, the hieratic, or the demotic script is regarded as the true parent of alphabetic writing. Similarly, the idea that cuneiform was the precursor of the alphabet may also be subdivided into those singling out Sumerian, Babylonian, or Assyrian cuneiform.

It is difficult to overestimate the importance of the Phoenician alphabet in the history of writing. The earliest definitely readable inscription in the North Semitic alphabet is the so-called Ahiram inscription found at Byblos in Phoenicia (now Lebanon), which probably dates from the 11th century bce. There is, however, no doubt that the Phoenician use of the North Semitic alphabet went farther back. By being adopted and then adapted by the Greeks, the North Semitic, or Phoenician, alphabet became the direct ancestor of all Western alphabets. Despite the conflict in theories, scholars are generally agreed that, for about 200 years before the middle of the 2nd millennium bce, alphabet making was in the air in the Syro-Palestinian region. It is idle to speculate on the meaning of the various discoveries referred to. That they manifest closely related efforts is certain; what the exact relationship among these efforts was, and what their relationship with the North Semitic alphabet was, cannot be said with certainty.

In the Middle Bronze Age, an apparently "alphabetic" system known as the Proto-Sinaitic script appears in Egyptian turquoise mines in the Sinai peninsula dated to circa the 15th century BC, apparently left by Canaanite workers. In 1999, John and Deborah Darnell discovered an even earlier version of this first alphabet at Wadi el-Hol, dated to circa 1800 BC and showing evidence of having been adapted from specific forms of Egyptian hieroglyphs that could be dated to circa 2000 BC, strongly suggesting that the first alphabet had been developed about that time. Based on letter appearances and names, it is believed to be based on Egyptian hieroglyphs. This script had no characters representing vowels, although originally it probably was a syllabary, and unneeded symbols were discarded. An alphabetic cuneiform script with 30 signs, including three that indicate the following vowel, was invented in Ugarit before the 15th century BC. This script was not used after the destruction of Ugarit.

The Georgian alphabet is an alphabetic writing system. With 33 letters, it is the largest true alphabet where each letter is graphically independent.
The names were abandoned in Latin, which instead referred to the letters by adding a vowel (usually e) before or after the consonant; the two exceptions were Y and Z, which were borrowed from the Greek alphabet rather than Etruscan, and were known as Y Graeca "Greek Y" (pronounced I Graeca, "Greek I") and zeta (from Greek). This discrepancy was inherited by many European languages, as in the term zed for Z in all forms of English other than American English. Over time names sometimes shifted or were added, as in double U for W ("double V" in French), the English name for Y, and American zee for Z. Comparing names in English and French gives a clear reflection of the Great Vowel Shift: A, B, C and D are pronounced /eɪ, biː, siː, diː/ in today's English, but in contemporary French they are /a, be, se, de/. The French names (from which the English names are derived) preserve the qualities of the English vowels from before the Great Vowel Shift. By contrast, the names of F, L, M, N and S remain the same in both languages, because "short" vowels were largely unaffected by the Shift.
http://60israel.org/9s8MWz51/rB1594tK/
The Latin script, one of the most widely used scripts today, is believed to be derived from the Greek Chalcidian alphabet, with a strong lineage connecting it to some of the greatest ancient civilizations. How the Latin alphabet developed is quite the story, as many influential cultures helped evolve the script into what we know today.

In the Bronze Age, between 1500 and 1200 BC, the Mycenaeans, an early tribe of the ancient Greeks, adapted the Minoan syllabary, known as Linear B, to write an early form of Greek. It was hard for the Mycenaeans to adapt the Minoan script, which was difficult to decipher without knowing how the language was pronounced when written. This may be one of the main reasons why much of the script took thousands of years to decode, along with untold hours by thousands of linguists.

Meanwhile, a Semitic-speaking group in ancient Egypt adapted Egyptian hieroglyphs to represent the sounds of their language. The Proto-Sinaitic script is credited with being the first alphabetic system, which the Phoenicians and Arabs would later expand upon. The name Proto-Sinaitic comes from its place of origin on the Sinai Peninsula in Egypt. It is estimated to have been devised as a writing script some time between 2100 and 1600 BC. As the Phoenicians, known as Canaanites in the lower Levant, were vassals of the ancient Egyptians, the proto-script would reach the upper Levant, and there the Phoenicians created their own alphabet. The Phoenician alphabet and language extend from the Canaanite version, which is closely related to Hebrew. For example, the word for "son" is bar in Aramaic and ben in Hebrew and Phoenician. The Amarna letters confirm that the lineage of the Egyptian proto-alphabet and culture extended into the upper Levant.

The Phoenician Alphabet

The Phoenicians, known as masters of the sea for their naval navigation, expanded their influence across the Mediterranean with settlements in places such as Cyprus, Sardinia, Sicily, Spain, and Tunisia. One Phoenician alphabet, the Fayum alphabet, had origins in the ancient city of Kition on the eastern Mediterranean isle of Cyprus. This alphabet would later influence the Greek script. What made it particularly distinct was that it ended with the letter T (tau) like the Phoenician alphabet, while other variations ended with Y. An account by the Greek historian Herodotus states that a Phoenician named Cadmus introduced the alphabet to the ancient Greeks, though this has been argued to be a legend. Nonetheless, the Phoenicians played a major role in passing down the alphabet to the ancient Greeks.

The Greeks would become the first Europeans to learn and write with an alphabet. Spreading throughout the upper Mediterranean, as the Phoenicians had, they shared the knowledge of their writing system and established their own colonies. Particularly influential was Euboea, where another ancestor of the modern alphabet was established. The new Euboean alphabet was used as the official script in Greek colonies such as Pithekoussai and Cumae. The Euboean alphabet was a western variant of the early Greek script and was prominent from the eighth through the fifth centuries BC. This script allowed a more concrete recording of the sounds and pronunciation of the language. The Etruscans, the predecessors of the Romans, adopted the Greek alphabet, forming what would become the Latin script.
Later, the Romans would emulate Greek civilization, as Greek culture played a major role in influencing Roman language, architecture, and mythology. Rome would conquer Greece, but in turn, Greek culture would conquer Rome. The Chalcidian/Cumae alphabet was the western variant of the Greek alphabet that eventually gave rise to the Latin alphabet. Ancient human civilizations were able to spread knowledge and influence across the Mediterranean, from Egypt to the Levant, to Cyprus, to Greece, and then to Italy, whence it spread to the world. The history of the Latin alphabet also traces the evolution of ancient civilizations through their own scripts, a rich history that human civilization should never forget. Links available at www.languagemagazine.com/mcbride-links.
https://www.languagemagazine.com/2022/08/11/evolution-of-the-latin-script-across-ancient-civilizations/
There are over 150 laws concerning how the Hebrew alphabet must be written by the Jewish scribe. After all, isn't the closest letter in the English alphabet to the letter Vov a W? That question incorrectly assumes there is a connection between the English alphabet and the Hebrew alphabet. Finally, keep in mind that in Hebrew they also use the numerals 1234567890, and so even though they are not Hebrew characters, they are universally known and used.

The Early Hebrew alphabet, like the modern Hebrew variety, had 22 letters, with only consonants represented, and was written from right to left; but the early alphabet is more closely related in letter form to the Phoenician than to the modern Hebrew. The characters of the Hebrew alphabet are derived from the so-called Phoenician or Old Semitic letters, to which almost all systems of letters now in use, even the Roman, can be traced. In the case of alphabets having a highly developed system of ligature, like the Arabic, the writer might obtain good results by artistic grouping of letters, but in a block text, such as the Hebrew, in which every letter must be strictly separated, efforts in the direction of ornamentation were confined to the individual letter. In the Saracenic, or, as they were called, Sephardic (Spanish) lands, the Hebrew alphabet is distinguished for its roundness, for the small difference between the thickness of the horizontal and upright strokes, as well as for the inclined position of the letters.

The alphabet consists of twenty-two letters (five of which have a different form when they appear at the end of a word; more about that later). One particular style of the Hebrew alphabet is used in writing STA"M. There is another style used for handwriting, in much the same way that cursive is used for the Roman (English) alphabet.

There are many reasons to learn Hebrew, such as to read the Tenach (the Old Testament of the Bible, written in Hebrew) in its original language, or simply to learn how to pronounce Hebrew words such as those in Strong's Concordance without having to use the transliterations. Hebrew, on the other hand, uses "Alephbet," since Aleph and Bet are the first two letters of the Hebrew alephbet. One advantage of Hebrew is that the sound for each letter remains the same, unlike English, where one has to memorize many variations, as in the word circus, where one "c" is pronounced like an "S" and the other like a "K".

Hebrew is taught at the University of Oregon by the Department of Judaic Studies. Hebrew Language at the University of Texas offers a nice collection of video clips and sound bites that are used in the modern Hebrew courses at UT; you can replay small bits of the files to develop your listening and translation skills. Hebrew on the Net is the place to start for definitive information about reading and writing Hebrew over the internet.

Hebrew is written from right to left, and books therefore begin at the opposite end from a book written in the English language. Hebrew is the language in which the Torah is written; it is also used for prayer and study. Modern Hebrew is spoken in Israel today, although it had ceased to be a language of everyday use until its revival about a century ago.
Letters of the Hebrew alphabet are listed with initial/medial and final forms, the most standard Latin-letter transliteration values used in academic work, pronunciation using IPA symbols for modern standard Israeli pronunciation, reconstructed Tiberian pronunciation, reconstructed older pronunciation, and a SAMPA column giving a modern transliteration for those lacking fonts or browsers that properly handle the IPA characters.

Following the Babylonian exile, Jews gradually stopped using the Hebrew script, and instead adopted the Babylonian Aramaic script (which was also originally derived from the Phoenician script). Following the decline of Hebrew and Aramaic as the spoken languages of the Jews, the Hebrew alphabet was adopted in order to write down the languages of the Jewish diaspora (Yiddish and Judaeo-Spanish). It is my opinion that the Hebrew words, as written in the standard 22-letter Hebrew alphabet, contain the consonants and some of the vowels of the spoken words. As it happens, the Greeks seem to have borrowed their alphabet from the Hebrews, and the first Hebrew letter "aleph" corresponds to the first Greek letter "alpha". The modern Hebrew scholars claim that the Hebrew letter "heth" is pronounced in that manner, but this letter corresponds to the Greek letter "eta", which seems to be pronounced as a long A. If you are familiar with Greek, you will no doubt notice substantial similarities in letter names and in the order of the alphabet.

The line of text at the right would be pronounced (in Sephardic pronunciation, which is what most people today use): "V'ahavta l'rayahkhah kamokha" (And you shall love your neighbor as yourself, Leviticus 19:18). Another style is used in certain texts to distinguish the body of the text from commentary upon the text. This style is known as Rashi Script, in honor of Rashi, the most popular commentator on the Torah and the Talmud. The alefbet at the right is an example of Rashi Script.

Our goal, then, is to establish a one-to-one correspondence of Tarot to Hebrew letter through each Tarot card's astrological and alpha-numeric symbolism, and then bring the pattern into coherence by restoring the positions of seven Tarot trumps, which just happen to represent the seven classical planets, to their original order according to ancient cosmology and astrology. When this is done, the "locked" formative meanings of the Hebrew alphabet are opened, and may be used for decoding certain texts written in that language. There, readers will find the Hebrew alphabet, along with Shabbatai Donnolo in the 10th century, the Kalonymus family escaping lethal persecution in the 12th, and Abraham Abulafia in the 13th century in Italy, and can learn where Pico and Ficino got their material in the 15th, around the time the twenty-two images of the Tarot appeared.

Hebrew is one of the longest continuously recorded languages that has survived to the modern day. While the script on this inscription is called Old Hebrew, it is barely distinguishable from the Phoenician from which it originated. The Hebrew alphabet as adopted from Phoenician does not actually reproduce all the sounds in the Hebrew language, so some letters represent multiple sounds. In the late 19th and early 20th century the Zionist movement brought about the revival of Hebrew as a widely used spoken language, and it became the official language of Israel in 1948. Finally, keep in mind that in Hebrew they also use the numerals 1234567890, and so even though they are not Hebrew characters, they are universally known and used under the name Arabic numerals.
The Hebrew language is similar to English in that there are many different fonts which one can use to write the letters. Other Hebrew fonts, supporting not only Hebrew characters but punctuation, vowel and cantillation marks, are discussed on Mechon Mamre's font page. Hebrew numbers, on the other hand, simply add the values of each letter together, and the position doesn't matter.

Secondary evidence includes an Ugaritic tablet of the fourteenth century B.C. containing the 30 letters of the Ugaritic alphabet (the oldest known ABC), including the symbols of the twenty-two North Semitic letters in the same order as they appear in the modern Hebrew alphabet. In addition to ordinal values, the letters of the Hebrew alphabet are also assigned numerical values. This led to the approach of examining the Hebrew alphabet itself to determine what sets of letters could be shown to have a unique place in the alphabet, following the pattern found with "Israel," and the possible significance of such patterns.

We can learn the aleph-bet by examining various Biblical passages which are written as acrostics (alphabetically ordered verses, each first word commencing with each Hebrew letter of the alphabet in turn, from 1 through 22). Psalm 119 is a famous example, written with 8 verses for each of the Hebrew consonants in order, so verses 1-8 each have a first word beginning with 'aleph and verses 9-16 each have a first word beginning with beth, and so on. The first Hebrew letter, 'Aleph, originally represented an ox head and was similarly portrayed in Phoenician and Ancient Greek as well as Ancient Hebrew. Two characteristics of ancient Hebrew were the pure use of consonants and the use of an epicene personal pronoun (a personal pronoun that does not distinguish male and female; the same word is used for both "he" and "she", as in Genesis 3:15).

Systematic genetic correspondences are possible because the Hebrew, Greek and Roman alphabets derive from a common source, the North Semitic alphabet of c. 1700 BCE. It is worth noting that the Modern Hebrew alphabet, which is commonly used in esoteric work, is no closer in form to the original alphabet than are the Greek or Roman alphabets. This is because the Roman alphabet developed from an early Greek alphabet in which the letter shaped like X (with numerical value 600) had the sound /ks/.

The Hebrew alphabet is often called the "alef-bet," because of its first two letters. People who are fluent in the language do not need vowels to read Hebrew, and most things written in Hebrew in Israel are written without vowels. Hebrew codes contain 27 letters: the 22 basic letters of the Hebrew alphabet and the 5 final forms. The cantillation marks, also known as accents (in Hebrew, teamim or teamey hamiqra), are used with biblical texts to indicate precise punctuation and the notes for reading the text in public. Hebrew and Arabic are written from right to left, while numbers and other languages are written from left to right.
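Since the passage above notes that Hebrew letters double as numerals and that Hebrew numbers simply add the letter values together regardless of position, a short sketch can make that concrete. The value table below is the standard gematria assignment (aleph = 1 through tav = 400, with final forms counted the same as their ordinary forms, the most common convention); it is offered as an illustration, not as part of any source quoted here.

```python
# Standard gematria values for the 22 Hebrew letters; the five final forms
# are given the same values as their ordinary forms (the most common
# convention; some systems assign 500-900 to the finals instead).
GEMATRIA = {
    "א": 1, "ב": 2, "ג": 3, "ד": 4, "ה": 5, "ו": 6, "ז": 7, "ח": 8, "ט": 9,
    "י": 10, "כ": 20, "ל": 30, "מ": 40, "נ": 50, "ס": 60, "ע": 70, "פ": 80,
    "צ": 90, "ק": 100, "ר": 200, "ש": 300, "ת": 400,
    "ך": 20, "ם": 40, "ן": 50, "ף": 80, "ץ": 90,   # final forms
}

def gematria(word: str) -> int:
    """Sum letter values; position does not matter, and non-letters are skipped."""
    return sum(GEMATRIA.get(ch, 0) for ch in word)

print(gematria("חי"))   # chet (8) + yod (10) = 18, the well-known value of "chai"
print(gematria("אב"))   # aleph (1) + bet (2) = 3
```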
http://www.factbites.com/topics/Hebrew-alphabet
Pierretta Meyer, November 20, 2019

The basic ordering of the Latin alphabet (A B C D E F G H I J K L M N O P Q R S T U V W X Y Z), which is derived from the Northwest Semitic "abgad" order, is well established, although languages using this alphabet have different conventions for their treatment of modified letters (such as the French é, à, and ô) and of certain combinations of letters (multigraphs). In French, these are not considered to be additional letters for the purposes of collation. However, in Icelandic, the accented letters such as á, í, and ö are considered distinct letters representing different vowel sounds from the sounds represented by their unaccented counterparts. In Spanish, ñ is considered a separate letter, but accented vowels are not. The ll and ch were also considered single letters, but in 1994 the Real Academia Española changed the collating order so that ll falls between lk and lm in the dictionary and ch between cg and ci, and in 2010 the tenth congress of the Association of Spanish Language Academies changed it so they were no longer letters at all.

At the end of the 2nd millennium bce, with the political decay of the great nations of the Bronze Age—the Egyptians, Babylonians, Assyrians, Hittites, and Cretans—a new historical world began. In Syria and Palestine, the geographical center of the Fertile Crescent, three nations—Israel, Phoenicia, and Aram—played an increasingly important political role. To the south of the Fertile Crescent, the Sabaeans, a South Arabian people (also Semites, though South Semites), attained a position of wealth and importance as commercial intermediaries between the East and the Mediterranean.

National languages sometimes elect to address the problem of dialects by simply associating the alphabet with the national standard. Some national languages like Finnish, Armenian, Turkish, Russian, Serbo-Croatian (Serbian, Croatian and Bosnian) and Bulgarian have a very regular spelling system with a nearly one-to-one correspondence between letters and phonemes. Strictly speaking, these national languages lack a word corresponding to the verb "to spell" (meaning to split a word into its letters), the closest match being a verb meaning to split a word into its syllables. Similarly, the Italian verb corresponding to "spell (out)", compitare, is unknown to many Italians because spelling is usually trivial, as Italian spelling is highly phonemic. In standard Spanish, one can tell the pronunciation of a word from its spelling, but not vice versa, as certain phonemes can be represented in more than one way, but a given letter is consistently pronounced. French, with its silent letters and its heavy use of nasal vowels and elision, may seem to lack much correspondence between spelling and pronunciation, but its rules on pronunciation, though complex, are actually consistent and predictable with a fair degree of accuracy.

Originally, graphs were perhaps "motivated" pictorial signs that were subsequently used to represent the initial sound of the name of the pictured object. The North Semitic alphabet remained almost unaltered for many centuries. One early piece of evidence is a scratching of the first five letters of the early Hebrew alphabet in their conventional order, which belongs to the 8th or 7th century bce.

The evolution of the alphabet involved two important achievements. The first was the step taken by a group of Semitic-speaking people, perhaps the Phoenicians, on the eastern shore of the Mediterranean between 1700 and 1500 bce.
This was the invention of a consonantal writing system known as North Semitic. The second was the invention, by the Greeks, of characters for representing vowels. This step occurred between 800 and 700 bce. While some scholars consider the Semitic writing system an unvocalized syllabary and the Greek system the true alphabet, both are treated here as forms of the alphabet.

To the west, seeds were sown among the peoples who later constituted the nation of Hellas—the Greeks. As a result, an alphabet developed with four main branches: the so-called Canaanite, or main, branch, subdivided into Early Hebrew and Phoenician varieties; the Aramaic branch; the South Semitic, or Sabaean, branch; and the Greek alphabet, which became the progenitor of the Western alphabets, including the Etruscan and the Latin. The Canaanite and Aramaic branches constitute the North Semitic main branch.
http://60israel.org/2kHY626u/lN2627wD/
Because European mathematics is very heavily rooted in the mathematics of ancient Greece, and due to the need for many symbols to represent constants, variables, functions and other mathematical objects, mathematicians frequently use letters from the Greek alphabet in their work.

Are Greek letters still used today?
The Greek alphabet is still used for the Greek language today. The letters of the Greek alphabet are now also used as symbols for concepts in equations of the interrelated fields of mathematics and science—for example, the lowercase alpha (⍺) can be used to represent an angle in mathematics.

Why does statistics use Greek letters?
Greek letters represent population parameter values; Roman letters represent sample values. A Greek letter with a "hat" represents an estimate of the population value from the sample; i.e., μ_X represents the true population mean of X, while μ̂_X (mu-hat) represents its estimate from the sample.

Do Greeks use Greek letters in maths?
Pretty much every Greek letter is used as a mathematical variable.

Why do we use symbols in algebra?
We use symbols to explain concepts: simile and metaphor. Symbols in math are straightforward. Also, English is an emotional language, whereas math is logical. If at any place the logic is misinterpreted or covered with bias, it's no longer factual.

Why does Greek have 24 letters?
The Greeks borrowed the idea of a written language from the Phoenicians and then improved upon it by adding vowels to their alphabet. … In fact, our word "alphabet" comes from the first two letters of the Greek alphabet: alpha and beta! While the English alphabet has 26 letters, the Greek alphabet has 24 letters.

How does the Greek alphabet influence us today?
The Greek alphabet is still used today. It is even used in the United States, where Greek letters are popular as mathematical symbols and are used in college fraternities and sororities. The Greeks learned about writing and the alphabet from the Phoenicians. … They also assigned some of the letters to vowel sounds.

Why are Greek letters used in math and physics?
Greek letters are used in mathematics, science, engineering, and other areas where mathematical notation is used as symbols for constants, special functions, and also conventionally for variables representing certain quantities. … The archaic letter digamma (Ϝ/ϝ/ϛ) is sometimes used.

Which Greek letter is used in probability?
Why do we use the Greek letter μ (mu) to denote the population mean or expected value in probability and statistics? According to this Wikipedia entry, "Mu was derived from the Egyptian hieroglyphic symbol for water, which had been simplified by the Phoenicians and named after their word for water".

Are Greek letters used for statistics or for parameters?
In statistics, the difference between the statistic that describes the sample of the population and the parameter that describes the entire population is important. Greek letters are used for the population parameters; Latin letters are used for the sample statistics.

How did the Greeks do algebra?
The Greeks created a geometric algebra where terms were represented by sides of geometric objects, usually lines, that had letters associated with them, and with this new form of algebra they were able to find solutions to equations by using a process that they invented, known as "the application of areas".

What does chi mean in math?
In mathematics, the Greek letter chi is used as a symbol for the characteristic polynomial or characteristic function.
This Greek letter is also mentioned as a symbol in Plato's "Timaeus" and Thomas Browne's "The Garden of Cyrus".

Do you think using symbols in mathematics is important?
Symbols play crucial roles in advanced mathematical thinking by providing flexibility and reducing cognitive load, but they often have a dual nature since they can signify both processes and objects of mathematics. The limit notation reflects such duality and presents challenges for students.

How can you explain symbols as the language of mathematics?
In mathematics, a symbolic language is a language that uses characters or symbols to represent concepts, such as mathematical operations, expressions, and statements, and the entities or operands on which the operations are performed.

What is the importance of symbols in statistics?
Symbols will increase access to data from different sources and make them comparable across time and source. Provision of consistent data markers will enable the comparison of statistical products and create an environment in which multiple data sources can be integrated.
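The notational convention described above (Greek letters for population parameters, Latin or hatted symbols for sample estimates) can be illustrated in a few lines of code. The sketch below assumes NumPy is available and simply contrasts a known population mean μ with the sample estimate computed from a random draw; the numbers are illustrative only.

```python
# Illustrating the notation convention: Greek mu for the population parameter,
# a hatted/Latin symbol (here mu_hat, i.e. the sample mean x-bar) for its
# estimate from data. Assumes NumPy; values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 5.0, 2.0                  # population parameters (Greek letters)
sample = rng.normal(mu, sigma, 100)   # a sample of n = 100 observations

mu_hat = sample.mean()                # sample statistic: estimate of mu
print(f"population mean mu     = {mu}")
print(f"sample estimate mu_hat = {mu_hat:.3f}")
```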
https://picturesfrombulgaria.com/albania/you-asked-why-do-we-still-use-greek-symbols-in-math.html
# Romanisation of Sindhi

Sindhi romanisation, or Latinization of Sindhi, is a system for representing the Sindhi language using the Latin script. In Sindh, Pakistan, Sindhi is written in a modified Perso-Arabic script, while in India it is written in the Devanagari (Hindi) script. Sindhis living in Pakistan and Sindhis living in India can speak to and understand each other, but they cannot write to each other because of the two different scripts. The Indus Roman Sindhi script would allow Sindhis all over the world to communicate with each other through one common script. The "Indus Roman Sindhi" system is different from Haleem Brohi's Roman Sindhi (Haleem Brohee jee Roman Sindhee). Indus Roman Sindhi was developed by Fayaz Soomro.

## Indus Roman Sindhi

Indus Roman Sindhi (Sindhi: سنڌو رومن سنڌي لپي) is one system for the romanisation of Sindhi.

### Elongation chart

The alphabet of the Perso-Arabic Sindhi script is highly context sensitive. Many of the letters of the Sindhi alphabet share a common base form, with diacritical marks and diacritical points placed either above or below.

### Basics

"Alif" (Sindhi: ا) is written in Romanized Sindhi as "A". For example: Ambu/Anbu (Sindhi: انبُ). "Alif" (Sindhi: ا) is the first letter of the Sindhi alphabet and a base letter of the alphabet. Though Sindhi has no independent vowel letters as such, the letters and compound letters shown below are treated almost as vowels in the Sindhi language, and all of them are formed with the help of alif (Sindhi: ا).

Roman Sindhi vowels (رومن سنڌي سُر / सुर):

| Devanagari | अ | आ | इ | ई | उ | ऊ | ए | ऐ | ओ | औ | अं | ह |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Roman Sindhi | a | aa | i | ee | u | oo | e | ai | o | au | a'n | h |
| Perso-Arabic | اَ | آ | اِ | اِي | اُ | اُو | ئي | اَي | او | اَؤ | اَنّ | ھَ |

In Indus Roman Sindhi, the English letter "A" stands for alif (Sindhi: ا); "AA" stands for alif mad aa (Sindhi: آ), alif zabar (Sindhi: اَ) and ubho alif (vertical alif).

### Consonants

(Sindhi: ب) is the second letter of the Sindhi alphabet. In Indus Roman Sindhi, the English letter "B" stands for it (Sindhi: ب). For example: Badak (Sindhi: بَدَڪ). A chart (not reproduced here) shows the different sounds of "B" (Sindhi: ب).

### Peculiar sounds of the Sindhi language

There are six peculiar sounds in the Sindhi language. Four of them are known as "chaar choosinna aawaz" (Sindhi: چار چوسڻا آواز), sounds made with the back of the tongue, and the two others are known as nasal consonants or "noonaasik or nikwaan (weenjann) aawaz" (Sindhi: نوناسڪ يا نڪوان ”وينجڻ“ آواز). Charts (not reproduced here) list the four choosinna sounds and the two nasal sounds.

When you make a speech sound, air usually passes through your oral cavity and comes out of your mouth. But you can also direct the flow of air through your nose, making a nasal sound. To get the air to come out of your nose, you lower your velum. This opens up your nasal cavity and lets the air out through your nostrils. You can let air out through your nose and mouth at the same time: this makes a nasalized sound. Nasal consonants are made by closing the mouth at specific places of articulation and opening the velum. The resulting nasal consonants are called stops because the oral cavity is closed, but air still flows out through the nasal cavity.

The Sindhi letter (Sindhi: ٻ) represents one of the four peculiar "chaar choosinna aawaz" sounds, and in Indus Roman Sindhi it is written "BB".
For example: Bbakiri (Sindhi: ٻَڪِري). A chart (not reproduced here) shows the different sounds of "BB" (Sindhi: ٻ).

### Multi-purpose use of "D"

The Roman letter "D" is sometimes used for aspirates in Roman Sindhi, for example d' and dd, and sometimes in combination with the letter "H", as in dh or ddh, to render the peculiar sounds of the Sindhi language.

## Romanization of Sindhi words

There is a difference between transliteration and Romanisation. The present modified Perso-Arabic script of the Sindhi language is highly context sensitive. Many of the letters of the Sindhi alphabet share a common base form, with diacritical marks and diacritical points placed either above or below (zer, zabar and peshu). Transliteration alone therefore cannot produce a usable Romanization of Sindhi words; each word has to be Romanized separately from the Perso-Arabic script into the Roman Sindhi script.
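To make the letter-for-letter correspondences mentioned above concrete, here is a toy sketch of a character map covering only a handful of the equivalences actually given in this article (a for ا, aa for آ, b for ب, bb for ٻ, and two vowel compounds from the chart). It is a hypothetical, heavily simplified illustration, not the actual Indus Roman Sindhi specification, which, as the article stresses, has to deal with context-sensitive letter forms and word-by-word treatment.

```python
# A toy, partial mapping from Perso-Arabic Sindhi characters to Indus Roman
# Sindhi, limited to equivalences named in the article above. Illustration
# only; the real system is context sensitive and works word by word rather
# than character by character.
ROMAN_SINDHI = {
    "ا": "a",    # alif
    "آ": "aa",   # alif mad aa
    "ب": "b",
    "ٻ": "bb",   # one of the four "choosinna" sounds
    "اِي": "ee",
    "اُو": "oo",
}

def romanize(word: str) -> str:
    """Greedy, longest-match replacement over the toy table; unknown
    characters (including vowel diacritics) are passed through unchanged."""
    out, i = [], 0
    keys = sorted(ROMAN_SINDHI, key=len, reverse=True)
    while i < len(word):
        for key in keys:
            if word.startswith(key, i):
                out.append(ROMAN_SINDHI[key])
                i += len(key)
                break
        else:
            out.append(word[i])
            i += 1
    return "".join(out)

print(romanize("ٻاٻ"))   # -> 'bbabb' under this toy table
```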
https://en.wikipedia.org/wiki/Romanization_of_Sindhi
Perhaps it is fitting that the English language should use the Roman alphabet. English has passed through so many evolutions, with so many influences and borrowings, and its words have changed so much in meaning over time, it might as well use a borrowed alphabet that has just as checkered a history, and use it in an irregular and abnormal fashion. A language that can have two words, “cleave” and “cleave,” spelled and sounding the same but meaning “adhere” and “divide,” can handle using a letter—C—to represent two quite different sounds, [k] and [s]. A language that can have dozens of synonyms for some words but does not have an easy future or past tense for “can” (“will be able to”? yuck) probably should have at least three letters to represent the sound [k] but still needs to combine letters to represent other basic sounds such as those heard in “sheathe” (the closing sound of which English used to have a perfectly good single letter for, but which we discarded for the sake of fashion). And a language in which words such as “awful” and “doubt” can go through a complete change of connotation within a couple of centuries could hardly do without letters like F, U and Y, all of which began life millennia ago as the same letter, a letter that stood for the sound [w]—a sound that is now represented by a letter made by a doubling of the original shape of the letter U (a shape now used by the newer letter V). With this rather fascinating perversity and tortuous history as his focus, David Sacks—a writer on culture and a sometime classicist—decided it was about time to do an exploration of the full story behind each of the letters we use in English. And it is not a dry book; originally conceived as a 26-part series for the Ottawa Citizen, it has a reasonable amount of wit and a wealth of illustrations and tables, allowing the reader to see all the stages of evolution and to trace the whole process clearly. Sacks has done a lot of good research and illustration acquisition in support of his main through-line. The reader can see at a persuasive glance the various mutations the alphabet has gone through, from the first known use of signs to represent individual sounds in Egypt 4,000 years ago, through Phoenician adaptations and their evolution into Hebrew script, on to the adaptation and partial confusion of the Phoenician alphabet by the Greeks to include signs for vowel sounds, and on to the Latin forms by way of the Etruscans—who spoke a language quite different from Greek or Latin. And, from the perfection of the capital letter form in Roman inscription, Sacks shows the evolution of lowercase letter forms and the various changes in shape and sound value of the letters in the past two millennia, plus the sometimes hesitant addition of new letters, up to the completion of the set with the rather late and grudging acceptance of J and V as separate letters (they were not accorded full independence from their parents I and U until the 19th century). He does this in two ways: he gives each of our current 26 letters a chapter tracing its development from its earliest (and often surprising) precursor to its modern forms and usages, and he traces the development of the alphabet as a whole in the sidebars that follow many of the chapters. Sacks’s information on the history of the letters is reliable and well researched. 
He enlightens the reader on things that have surely long puzzled many, such as the association of "Rx" with pharmacy (from an abbreviation of Latin "recipe," meaning "take," with the x actually just a slash on the tail of the R to indicate the abbreviation) and why we put X's as signs of kisses at the end of a letter (because centuries ago the illiterate could sign an agreement with a cross or X and then kiss it as a promise to abide by the agreement). And he manages to get his facts straight on many details where many others have perpetuated misconceptions, for instance in the origins of "OK" and "mind your p's and q's" (I won't give these away—actually, they would take too much space to explain here). I am particularly tickled that he explains how the Y in "Ye Olde" is actually a corruption of a now disused letter representing a "th" sound (so it is really "The Olde")—now if only sign makers and marketers could get it straight. Sacks is an enthusiastic author. His interest in the subject is well communicated to the reader, and he is sure to provoke a similar enthusiasm for the subject in many. He shows how nearly all the alphabets in the world are descended from the same original alphabet. He notes that the Hebrew letter aleph, precursor to our A, is seen in the Kabbalistic tradition as representing the divine energy that preceded and initiated creation, which is why (the Kabbalists contend) the Bible begins with the second letter of the Hebrew alphabet, beth: Bereshith, "In the beginning." He points out that baby talk seems to influence adult talk—in particular words for "mother" and "father," which can be remarkably similar even in completely unrelated languages, tending to involve "ma" and "da" sounds. He also obliges the reader with explanations of such things as theories of how H ("aitch") got its name, why W is a "double u" and not a "double v," and how small i got its dot (I'll tell you this one: in medieval script, the dot—originally a stroke: í—was added so the letter would be more distinguishable on the page, quite useful in words such as "minimum"). Sacks does not stay in the past, either; he brings the letters right up to the present, discussing their popular usages (the viral spread of "e-" lately, for example, and appearances of M in books by Lewis Carroll and Ian Fleming) and connotations (very positive for A, for instance, and rather more negative for F). However, he does run a bit too far with some interpretations in order to press a nice, tidy point. Discoursing on O, he tells us (rather like an overzealous art student writing a museum placard) that brand names such as Veg-O-Matic "visually use O to suggest push-button ease," that a fairly run-of-the-mill capital O on the word "offered" in a real estate ad "seems to invite a bid, or at least a look," and that "the hidden source of O's commercial strength" is "its subliminal vaginal reference." Of S, he tells us it is "nearly an infinity symbol" and as such "can imply timeless continuity." Not to me, it can't. Sacks also stumbles in one other area of some importance: pronunciation. This is something of a pity, since his history of the alphabet follows the evolution not just of the letter forms but also of their pronunciation, in English and in other languages. To be fair, it can be difficult to draw a clean line between details of pronunciation that are germane to the topic at hand and those that are not.
Moreover, giving pronunciation guides in English can be cussedly difficult, what with the wild inconsistencies of our spelling, and Sacks could not get too technical without risking losing his audience. But he can only confuse readers by using “hard ‘ch’” to refer to a sound as in German “ach” on one page and to a basic [k] sound on the next, or by representing the sound heard at the beginning of “Zhivago” and French “je” as “shj” (what’s wrong with “zh”?). And if he is going to cite an example from another language, it would not be too hard to double-check it and avoid saying, for example, that the Mandarin pronunciation of “Beijing” ends in a hard [k]. There are also a few details of design that may not be Sacks’s fault but do detract slightly from the book. Sidebars that work well on a newspaper page can be more disruptive as multiple-page interruptions in a book. As well, one would think that a book that deals with the various forms of letters—even describing typefaces—would show careful attention to its own typography, but the designer has been confusingly inconsistent with the italic fonts in the sidebars. And a final irony: one would expect this book, of all books, to have a colophon, a little paragraph on an end page that describes the type faces used in the publication. It does not. In fact, the only place the word “colophon” is to be found in the book is on the bibliographic information page: “Knopf Canada and colophon are trademarks.” With the above protests registered, however, I will nonetheless recommend Language Visible. In the main thrust of the book, Sacks has produced a valuable, well-researched and highly readable work, and if I am going to keep it on my shelf and use it as a reference— and I am—I can hardly say others should not do likewise. What’s more, I would recommend it to a very wide audience: pretty much everyone who uses the alphabet, which includes all of the readers of the LRC and probably some of their pets as well.
http://reviewcanada.ca/magazine/2004/04/26-stories-to-tell/
An abjad is a type of writing system in which each symbol or glyph stands for a consonant, leaving the reader to supply the appropriate vowel. So-called impure abjads do represent vowels, either with optional diacritics, a limited number of distinct vowel glyphs, or both. The name abjad is based on the old Arabic alphabet's first four letters (a, b, j, d) and was coined to replace the common terms "consonantary", "consonantal alphabet" or "syllabary" in referring to the family of scripts called West Semitic.

The name "abjad" (Arabic: أبجد) is derived from pronouncing the first letters of the Arabic alphabet in order. The (abjadī) ordering of Arabic letters used to match that of the older Hebrew, Phoenician and Semitic alphabets. According to the formulations of Daniels, abjads differ from alphabets in that only consonants, not vowels, are represented among the basic graphemes. Abjads differ from abugidas, another category defined by Daniels, in that in abjads the vowel sound is implied by phonology, and where vowel marks exist for the system, such as nikkud for Hebrew and ḥarakāt for Arabic, their use is optional and not the dominant (or literate) form. Abugidas mark the vowels (other than the "inherent" vowel) with a diacritic, a minor attachment to the letter, or a standalone glyph. Some abugidas use a special symbol to suppress the inherent vowel so that the consonant alone can be properly represented. In a syllabary, a grapheme denotes a complete syllable, that is, either a lone vowel sound or a combination of a vowel sound with one or more consonant sounds.

The opposition of abjad versus alphabet, as formulated by Daniels, has been rejected by other scholars because "abjad" is used not only as a term for the Arabic numeral system but also, and most importantly in terms of historical grammatology, as a term for the alphabetic device (i.e., letter order) of ancient Northwest Semitic scripts in opposition to the "South Arabian" order. This has had damaging effects on terminology in general and especially in (ancient) Semitic philology. It also suggests that consonantal alphabets, in opposition to, for instance, the Greek alphabet, were not yet true alphabets and not yet entirely complete, lacking something important to be a fully working script system. It has also been objected that, as a set of letters, an alphabet is not the mirror of what should be there in a language from a phonological point of view; rather, it is the data stock of what provides maximum efficiency with least effort from a semantic point of view.

The first abjad to gain widespread usage was the Phoenician abjad. Unlike other contemporary scripts, such as cuneiform and Egyptian hieroglyphs, the Phoenician script consisted of only a few dozen symbols. This made the script easy to learn, and seafaring Phoenician merchants took the script wherever they went. The Phoenician abjad was a radical simplification of phonetic writing, since hieroglyphics required the writer to pick a hieroglyph starting with the same sound that the writer wanted to write in order to write phonetically, much as man'yōgana (Chinese characters, or kanji, used solely for their phonetic values) was used to represent Japanese phonetically before the invention of kana. Phoenician gave rise to a number of new writing systems, including the Greek alphabet and Aramaic, a widely used abjad. The Greek alphabet evolved into the modern Western alphabets, such as Latin and Cyrillic, while Aramaic became the ancestor of many modern abjads and abugidas of Asia.
Impure abjads have characters for some vowels, optional vowel diacritics, or both. The term pure abjad refers to scripts entirely lacking vowel indicators. However, most modern abjads, such as Arabic, Hebrew, Aramaic and Pahlavi, are "impure" abjads; that is, they also contain symbols for some of the vowel phonemes, although these non-diacritic vowel letters are also used to write certain consonants, particularly approximants that sound similar to long vowels. A "pure" abjad is exemplified (perhaps) by very early forms of ancient Phoenician, though at some point (at least by the 9th century BC) it and most of the contemporary Semitic abjads had begun to overload a few of the consonant symbols with a secondary function as vowel markers, called matres lectionis. This practice was at first rare and limited in scope but became increasingly common and more developed in later times.

In the 9th century BC the Greeks adapted the Phoenician script for use in their own language. The phonetic structure of the Greek language created too many ambiguities when vowels went unrepresented, so the script was modified. The Greeks did not need letters for the guttural sounds represented by aleph, he, heth or ayin, so these symbols were assigned vocalic values. The letters waw and yod were also adapted into vowel signs; along with he, these were already used as matres lectionis in Phoenician. The major innovation of Greek was to dedicate these symbols exclusively and unambiguously to vowel sounds that could be combined arbitrarily with consonants (as opposed to syllabaries such as Linear B, which usually have vowel symbols but cannot combine them with consonants to form arbitrary syllables).

Abugidas developed along a slightly different route. The basic consonantal symbol was considered to have an inherent "a" vowel sound; hooks or short lines attached to various parts of the basic letter modify the vowel. In this way, the South Arabian alphabet evolved into the Ge'ez alphabet between the 5th century BC and the 5th century AD. Similarly, around the 3rd century BC, the Brāhmī script developed (from the Aramaic abjad, it has been hypothesized). The other major family of abugidas, Canadian Aboriginal syllabics, was initially developed in the 1840s by the missionary and linguist James Evans for the Cree and Ojibwe languages. Evans used features of the Devanagari script and Pitman shorthand to create his initial abugida. Later in the 19th century, other missionaries adapted Evans' system to other Canadian aboriginal languages. Canadian syllabics differ from other abugidas in that the vowel is indicated by rotation of the consonantal symbol, with each vowel having a consistent orientation.

The abjad form of writing is well adapted to the morphological structure of the Semitic languages it was developed to write. This is because words in Semitic languages are formed from a root consisting of (usually) three consonants, with the vowels used to indicate inflectional or derived forms. For instance, in Classical Arabic and Modern Standard Arabic, from the root ذ ب ح Dh-B-Ḥ (to slaughter) can be derived the forms ذَبَحَ (he slaughtered), ذَبَحْتَ (you (masculine singular) slaughtered), يُذَبِّحُ (he slaughters), and مَذْبَح (slaughterhouse).
In most cases, the absence of full glyphs for the vowels makes the common root clearer, allowing readers to guess the meaning of unfamiliar words from familiar roots (especially in conjunction with context clues) and, for practiced readers, improving word recognition while reading.
http://everything.explained.today/Abjad/
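The extract above describes how Semitic words are built from consonantal roots and how an abjad records mainly the consonants. As a purely illustrative aid (not part of the source, and not a real linguistic tool), here is a minimal Python sketch under those assumptions: it interleaves the well-known root K-T-B ('write') with invented vowel patterns and then strips the vowels again to show what a 'pure' abjad spelling would preserve. The pattern strings and helper names are hypothetical.

```python
# Illustrative sketch only: a toy model of Semitic root-and-pattern morphology
# and of what a "pure" abjad spelling records. Not a real morphological analyzer.

CONSONANTS = set("bdfghjklmnpqrstvwz'")  # crude consonant set for Latin transliteration

def apply_pattern(root, pattern):
    """Interleave a three-consonant root with a vowel pattern.
    Digits 1-3 in the pattern mark the slots for the root consonants,
    e.g. root ('k', 't', 'b') + pattern '1a2a3a' -> 'kataba'."""
    return "".join(root[int(ch) - 1] if ch in "123" else ch for ch in pattern)

def abjad_spelling(word):
    """Drop the vowels, keeping only consonants (a rough 'pure abjad' spelling)."""
    return "".join(ch for ch in word if ch in CONSONANTS)

root = ("k", "t", "b")  # the textbook Semitic root K-T-B, 'write'
for pattern in ("1a2a3a", "ma12a3", "1aa2i3"):
    word = apply_pattern(root, pattern)
    print(f"{word:8} -> {abjad_spelling(word)}")
# kataba   -> ktb
# maktab   -> mktb
# kaatib   -> ktb
```

The point of the toy example is simply that different derived words keep a recognizable consonantal skeleton, which is why an experienced reader can often supply the missing vowels from context.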
The art of graphic design has its roots deep in the past, beginning with prehistoric images carved on fragments of bone or painted on the walls of caves. These images represent humanity's first attempts to communicate a message visually, which is the essence of graphic design. The prehistoric period can be divided into two parts: the Old Stone Age or Paleolithic, dating from 30,000-10,000 B.C., and the New Stone Age or Neolithic, dating from 10,000-5,000 B.C. The Cro-Magnons, our direct ancestors, developed innovations in the areas of technology (the making of tools, etc.), social organization, and the arts. They were also the first people to communicate visually. They lived mainly in southern France and northern Spain. The most dramatic examples of their early images can be found on the walls and ceilings of the caves at Lascaux in France. Here they recorded daily life experiences. The earliest images that have survived are called pictographs, symbols representing things. As society became more complex, so did visual communication. Pictographs began to take on extended meanings: they were no longer limited to representing objects or things; they could now express thoughts, such as actions or ideas. For example, the simple drawing of the sun no longer represented just the sun; it could also mean day or time. Or the symbol of a foot could mean to stand or to walk. We now refer to these expanded pictographs as ideographs, because they express ideas or actions. Chinese writing is a modern example of a script with a strong ideographic component. This evolution in visual communication from pictographs to ideographs represents a major step in the development of a written language.

The Egyptians were responsible for the invention of papyrus, a paper-like material made from a plant that grew along the Nile River; it allowed written communication to be carried over great distances, in contrast to the heavy clay tablets of the Sumerians. The Egyptians also produced a great amount of architecture, sculpture, and painting, much of it dedicated to their obsession with life after death. The interiors of tombs were covered with wall paintings illustrating the daily events of life. The earliest Egyptian writing dates back to around 3,000 B.C. and was a picture-writing system that utilized both pictographs and ideographs. This form of writing is called hieroglyphics, which in Greek means "sacred carving". The Rosetta Stone, discovered by a French officer in Napoleon's army in the town of Rosetta, Egypt, in 1799, is the most significant example of Egyptian hieroglyphics. The stone was taken by the British after the French surrender and has been in the British Museum since 1802. The Rosetta Stone dates from 197-196 B.C. The black slab, traditionally described as basalt, bears the same inscription in two languages, Egyptian and Greek, written in three scripts: Egyptian hieroglyphics, Egyptian Demotic, and Greek.

The origin of civilization in China is something of a mystery, but it is thought that by the year 2,000 B.C. a culture was evolving there, in virtual isolation from the civilizations of Mesopotamia and Egypt. Among the many inventions and innovations of the ancient Chinese are the compass, gunpowder, Chinese calligraphy, paper (105 A.D.), printing, and moveable type. Chinese calligraphy, an ancient writing system, is still used today by more people than any other visual language system. This form of calligraphy is purely visual, not alphabetic, and was invented around 1,800 B.C.
The invention of paper is credited to Ts'ai Lun in China in the year 105. The first paper was made from a variety of vegetable fibers such as mulberry bark, bamboo, silk, cotton, linen, and rope. The Phoenicians were an aggressive seafaring nation that settled along the coast of what is now Lebanon and northern Israel. Because the Phoenicians were successful merchants, they required an efficient writing system. They realized that only 22 important sounds existed in their spoken language, and they decided that if they designated a different sign for each sound they could write their language using just 22 signs. The Greeks made one important modification to the Phoenician letter forms: vowels were emphasized in the Greek spoken language, so they added A, E, I, O, U. The Greeks added the vowels and took away a few of the Phoenician characters, creating a 25-character system that accounted for all of the important sounds in their language. The Romans took the Greek alphabet and further modified it to create their own alphabet of 23 letters, which is the closest of the ancient alphabets to the one we use today. They also dropped the Greek practice of calling the letters alpha, beta, and gamma in favor of the simpler A, B, C we use today. The years from 500 to 1000 are referred to as the Early Middle Ages or the Dark Ages. It was during this time that monks found themselves heirs to the culture and traditions of the Roman Empire. Monasteries became literary and writing centers where scribes copied religious and secular works by hand.
http://anneserdesign.com/HisPrehistoric.html
Dyslexia is usually identified when an individual experiences difficulty with the following reading-related skills: spelling words, reading rapidly, writing out words, sounding out words mentally, pronouncing words and understanding what is being read. It is often not diagnosed until children reach school age. Diagnosis involves testing vision, spelling, memory, and reading. Individuals with dyslexia generally do not have a broader learning disorder; they have normal intelligence and a desire to learn. Both genetic and environmental factors are known to contribute to dyslexia, which specifically involves the brain's language-processing centers; it can coexist with other diagnoses such as attention deficit hyperactivity disorder (ADHD) and emotional difficulties. There is no cure for dyslexia, but there are techniques that can improve reading and resources that can reduce the frustration associated with it. Several techniques have been developed that can help an individual overcome or reduce the challenges associated with dyslexia. Instruction focuses on connecting the letters of the alphabet to the combined sounds they make, memorizing the sounds associated with specific letter groups, and combining activities such as reading and spelling. Reducing stress and external distraction can also help, for example by allowing the individual extra time and a comfortable, quiet place to focus. There are specific fonts that can be used which reduce the similarity of letter shapes that are easily confused in standard fonts; using a larger font size and adjusting the spacing of letters is also helpful. Students experiencing these difficulties may be assigned to an intervention specialist, who assists the child throughout the day and in various subjects and assists the teacher in applying techniques that will improve reading and comprehension.

Dysgraphia is classified as a writing or transcription disability that involves barriers in the motor skills required for writing and recording information. Those diagnosed with dysgraphia show difficulty with handwriting and with transferring mental thoughts to text, and show limited, slow or involuntary motions while writing, which makes their written text hard to recognize. Those with dysgraphia may also exhibit difficulties performing other fine motor skills; the condition is believed to be caused by genetics, but physical injury can also cause symptoms. Young children may first exhibit dysgraphia traits when coloring "outside" the lines, when tracing and copying letters appears difficult, or through an inability to tie a shoe. Various assessments, including writing tests and digital devices, can measure the specific movements found in atypical writing and help create a more direct plan for intervention. Those with dysgraphia may exhibit unusual writing posture, hand placement and finger usage, and they may find writing fatiguing. These difficulties often lead to emotional frustration, and affected children often want to avoid tasks that involve writing or activities requiring attention to detail using the hands. Dysgraphia can lead to poor self-esteem and decreased cognitive retention and learning potential. Individuals with dyslexia generally have normal intelligence and a desire to learn, but if not diagnosed and given intervention early they may fall behind on grade-level proficiency assessments. Children with dysgraphia benefit from occupational therapy to help strengthen muscles, build neurological connections and improve kinesthetic function.
In the classroom, students will work with an intervention specialist to reinforce writing techniques that improve comprehension, minimize frustration and improve self-esteem. Students may be given extra time, modified or specialized instructions, or may use technology to complete assignments and improve cognition and legibility. While it is possible for a child to have both dyslexia and dysgraphia, the majority of the time the child will have one or the other, with secondary traits of the other. Marsha Ferrick PhD BCC is a licensed clinical psychologist who can help in diagnosing dyslexia and dysgraphia. Get in touch today!
https://marshaferrickcoaching.com/dyslexia-vs-dysgraphia/
Dear students, please read the brief extracts below on the origins of the Greek alphabet. Keep in mind the importance of writing as one of the key elements of complex societies/civilizations.

1. Phoenician alphabet. The Phoenicians' most enduring achievement was a technology that transformed the ancient world: the alphabet. The oldest surviving example of an alphabet can be found on an inscription that runs round the top of the sarcophagus of Ahiram, the Phoenician king of Byblos, sometime around 1100 BC. Ominously enough it is a curse against anyone who dares to disturb the tomb, but the development of an alphabet by the Phoenicians was a blessing that we are still benefiting from today. The written alphabet was probably not a purely Phoenician invention; it seems most likely to have developed in Mesopotamia around the fifteenth century BC. But it was the Phoenicians who adapted the letters to make it simpler to use and did the most to disseminate it across the eastern Mediterranean. Earlier writing systems, such as Egyptian hieroglyphics or Akkadian cuneiform, were, broadly speaking, representational. This meant that they consisted of an array of symbols, sometimes hundreds of them, which stood for the things described. They were a kind of bureaucratic code, and the skills required were usually restricted to a class of trained specialists known as scribes. An alphabet works differently; it is more like a speech-recording device. Each letter indicates the sound of a spoken word, or part of it, so if you can pronounce the alphabet correctly you can sound out a word even if you do not know what it means (this is how children learn to read phonetically). Quicker and easier to use, the alphabet made literacy more widespread, and it also allowed literature to become more expressive and inventive, echoing the music and rhythms of speech. (Richard Miles, Ancient Worlds: The Search for the Origins of Western Civilization, 66)

2. Greek illiteracy. Our first surviving inscription in Greek characters is on a jug of about 750 BC. It shows how much the renewal of Aegean civilization owed to Asia. The inscription is written in an adaptation of Phoenician script; Greeks were illiterate until their traders brought home this alphabet. (Roberts and Westad (2013), The Penguin History of the World.)

***** Map and extracts shared privately with students for educational purposes only. Please see the relevant booksellers for copies of the books and consult the website https://historica.fandom.com/wiki/Phoenicia for the map and related materials.
https://englishreadingroom.com/category/west-eastern-civilisation-conflicted-and-hyrbid-histories/civilization-semester-1/phoenician-alphabet/
Mediums: Oil Paints, Water Colours, Pencils, Charcoal, Soft Pastels, Texture White. Portfolio: A portfolio is a collection of your work, which shows how your skills and ideas have developed over a period of time. It demonstrates your creativity, personality, abilities and commitment, and helps us to evaluate your potential. When we assess a portfolio, the research and processes you have used to develop your work are as important as the final work itself. We are particularly interested in your most recent work, presented in the best possible manner. Advanced Diploma in Fine Arts Online by Konsult. Fine art: In European academic traditions, fine art is art developed primarily for aesthetics or beauty, distinguishing it from decorative art or applied art, which also has to serve some practical function, such as pottery or most metalwork. In the aesthetic theories developed in the Italian Renaissance, the highest art was that which allowed the full expression and display of the artist's imagination, unrestricted by any of the practical considerations involved in, say, making and decorating a teapot. It was also considered important that making the artwork did not involve dividing the work between different individuals with specialized skills, as might be necessary with a piece of furniture, for example. Even within the fine arts, there was a hierarchy of genres based on the amount of creative imagination required, with history painting placed higher than still life. Historically, the five main fine arts were painting, sculpture, architecture, music, and poetry, with performing arts including theatre and dance. In practice, outside education the concept is typically applied only to the visual arts. The old master print and drawing were included as related forms to painting, just as prose forms of literature were to poetry. Today, the range of what would be considered fine arts (in so far as the term remains in use) commonly includes additional modern forms, such as film, photography, video production/editing, design, and conceptual art. One definition of fine art is "a visual art considered to have been created primarily for aesthetic and intellectual purposes and judged for its beauty and meaningfulness, specifically, painting, sculpture, drawing, watercolor, graphics, and architecture." In that sense, there are conceptual differences between the fine arts and the decorative arts or applied arts (these two terms covering largely the same media). As far as the consumer of the art was concerned, the perception of aesthetic qualities required a refined judgment usually referred to as having good taste, which differentiated fine art from popular art and entertainment. The word "fine" does not so much denote the quality of the artwork in question as the purity of the discipline according to traditional Western European canons. Except in the case of architecture, where a practical utility was accepted, this definition originally excluded the "useful" applied or decorative arts, and the products of what were regarded as crafts. In contemporary practice, these distinctions and restrictions have become essentially meaningless, as the concept or intention of the artist is given primacy, regardless of the means through which this is expressed. The term is typically only used for Western art from the Renaissance onwards, although similar genre distinctions can apply to the art of other cultures, especially those of East Asia.
The set of "fine arts" is sometimes also called the "major arts", with the "minor arts" equating to the decorative arts. This usage would typically apply to medieval and ancient art.
https://konsultart.com/product/advanced-diploma-in-fine-art-240-hours-campus/
Similarly, how do you define art? A visual product or experience purposefully made via an expression of ability or imagination, sometimes known as art or visual art (to separate it from other art genres). Painting, sculpture, printing, drawing, decorative arts, photography, and installation all fall under the umbrella of art. Also, it is asked, what is the meaning of the arts? The art of making friends is a skill that may be learned by experience, study, or observation. 2a: a kind of education: (1) a humanistic discipline; (2) liberal arts (plural). b archaic: scholarship, learning. Secondly, which is the best definition of the arts? Art is defined as the result of imagination and creativity, manifested in a tangible form. Art may be defined as a painting, a theatrical performance, or a sculpture. Also, what is art in your own opinion? Art is a personal expression of our feelings, emotions, intuitions, and aspirations, but it's also about communicating how we view the world, which for many is an extension of personality. It's the conveying of personal ideas that can't be adequately expressed by words alone. People also ask, why is art so important? Art aids in the processing of emotions and the comprehension of your environment. It helps you to perceive life in a new light and makes you feel more alive. Since the dawn of time, art has been an integral aspect of human culture. Art has long been used to facilitate cultural exchange, education, and expression. Related Questions and Answers. What is the meaning of the study of the arts? The arts are scholastic disciplines that contain an artistic or social component rather than being strictly utilitarian or job-related. You're studying the arts if you major in English and minor in music. What is your definition of art today? "Something constructed with vision and expertise that is lovely," according to one current definition. Art is often characterized by its beginnings in the human mind. Any work of art requires the use of imagination. Some people believe that art is what occurs when your creativity manifests itself in a tangible manner. Why do you study art? It allows us to make meaning of our own lives while also allowing us to identify with the lives of others. It's also becoming more widely acknowledged as a catalyst for the creative thought required to address the world's most serious issues. Learning and practicing art, as well as tapping into your creativity, may help you improve in any field. Why do you love the arts? You can't help but feel something when you're looking at art. When I look at art, I experience numerous emotions: yearning, lust, empathy, rage, contempt, want, and connection. Even art-induced boredom, which is uncommon, is a distinct sensation: a sensation of not experiencing. Do you agree with the definition of art? "The purposeful application of talent and creative imagination, particularly in the manufacture of attractive items," according to Webster's New Collegiate Dictionary. Art, on the other hand, is much more than a medium or a collection of words on a page. It's a way of expressing what we've gone through. Art is unmistakably human and inextricably linked to civilization. How important is art in our world today? It has the ability to teach people about nearly any subject. It has the ability to raise awareness and deliver information in a style that is readily digestible by a large number of people.
Art makes education an even bigger equalizer of society in a world where some people do not even have access to a decent education. Why is art important to humanity? Art stimulates multiple sections of the brain, which aids human growth in terms of learning and comprehending complex topics. It provides a visual representation rather than merely words or figures, allowing users to solve problems and grasp more complicated topics. What is the full form of ART? Antiretroviral therapy; antiretroviral medications are pharmaceuticals that are used to treat HIV. What are the 7 elements of art? Color, form, line, shape, space, texture, and value are the visual elements of art. How do the arts help you express yourself? Visual and performing arts may help people express themselves positively and gain confidence. The successes are most fulfilling and self-esteem is improved when art is inspired by people's particular interests, ideas, feelings, needs, or preferences. Why do people create art? Making our surroundings more attractive, establishing records of a certain time, place, person, or thing, and expressing and sharing ideas are just a few of the motivations that inspire the production of art. The human intellect is inspired and stimulated by art. Why is emotion important in art? Emotions in the arts have an impact on us on a subjective and physical level, influencing aesthetic judgments such as liking. As a result, emotions in the arts are not solely reflected in a perceiver through a cognitive or detached mode, as cognitivistic art theories sometimes suggest. What can we learn about art? Let's take a look at some lessons you may learn from art. Art may assist us in becoming more creative. Music has the ability to uplift you. Writing can be a form of therapy. An artwork might pique your interest. Any piece of art may assist us in appreciating beauty. Before you die, there are 100 things you must accomplish. Exploring and looking for solutions. Art may assist us in becoming better people. Why is art the best subject? Creating art improves concentration and attention, improves hand-eye coordination, necessitates practice and strategic thinking, and includes connecting with the tangible world using various tools and art materials. Art also helps children prepare for the future: people who are creative and open-minded are in great demand in all fields. How does art make us feel? Art brings happiness and improves one's attitude. Relaxation, creativity, and inspiration are all aided by art. In our brains, any sort of creativity may lower the stress hormone cortisol and increase the positive neurotransmitters endorphins and dopamine. How can art change the world? Art depicts reality in such a manner that it has the potential to alter the audience's perception of the world. Art functions as a catalyst, separating facts from preconceptions and blending them with imagination to generate new meaning. How does art benefit society? Art has an impact on society because it can change people's minds, inculcate ideals, and translate experiences across place and time. According to studies, art has an impact on one's core sense of self. The arts, such as painting, sculpture, music, literature, and other forms of expression, are sometimes seen as the storehouse of a society's collective memory. What is Art Everywhere? By bringing an abundance of public art into the public domain, Art Everywhere promotes art and artists. Why do we consider art to be everywhere? Art helps us to express our imagination while also allowing us to express our inventiveness.
Artistic works, whether in painting, sculpture, photography, or another medium, enable people to express their emotions, ideas, and observations while also helping us to appreciate what we have or to perceive the world in new ways. Is art everything? Why or why not? Art isn't everything, and what an "artist" claims to be "art" isn't always true. Art has an objective significance and satisfies an objective human need. Those seeking to profit off the prestige and worth associated with art have muddled the meaning. How many subjects are there in Arts? There are five disciplines that must be studied in addition to a few electives. Political Science, Economics, English, History, and Geography are among the most diverse fields of study in the arts. History, English, Hindi, Sanskrit, Economics, Political Science, Psychology, and other arts subjects are available. What is art in a paragraph? Art is a form of expression for imaginative or technical talent. It creates a finished result, a thing. Art encompasses a wide variety of human activities including the creation of visual and performance objects as well as the expression of the author's inventive imagination. Conclusion: The arts are the types of activities that use creativity, imagination, and skill to produce works of art. They can be anything from painting or drawing to music or dance.
https://theshavedhead.com/what-is-arts/
Similarly, how do you define art? A visual product or experience purposefully made via an expression of talent or imagination, sometimes known as art (to separate it from other art forms). Painting, sculpture, printing, drawing, decorative arts, photography, and installation all fall under the umbrella of art. Also, it is asked, what are the 4 definitions of art? Noun. The ability to execute successfully what one has created is referred to as art, skill, cunning, artifice, and craft. Art denotes an inexplicable, personal creative force. Secondly, what is the truest definition of art? Art is a means of communication in its widest definition. It has whatever meaning the artist intended it to have, and that meaning is formed by the materials, methods, and forms it employs, as well as the thoughts and sentiments it evokes in its audience. Art is a way of expressing emotions, ideas, and observations. Also, why is art so important? Art stimulates multiple sections of the brain, which aids human growth in terms of learning and comprehending complex topics. It provides a visual representation rather than merely words or figures, allowing users to solve problems and grasp more complicated topics. People also ask, what is art for you? Art is a personal expression of our feelings, emotions, intuitions, and aspirations, but it's also about communicating how we view the world, which for many is an extension of personality. It is the expression of personal feelings that cannot be adequately expressed by words alone. Related Questions and Answers. Why is it hard to define art? Because art has no fixed norms or rulebook, it may be difficult to define. Art may also refer to anything that the creator claims to be art. Art critics also use their expertise and experience to assess what they consider to be excellent or terrible art; they attempt to define art in order to understand what they are seeing. Do you agree with the definition for art? "The purposeful application of talent and creative imagination, particularly in the manufacture of attractive items," according to Webster's New Collegiate Dictionary. However, art is much more than a medium or a collection of words on a page. It is the manifestation of our feelings. Art is both uniquely human and inextricably linked to culture. What are the principles of art? The principles of art are balance, emphasis, movement, proportion, rhythm, unity, and variation; these are the methods by which an artist organizes components inside a piece of art. A visual pace or beat is created by the careful positioning of recurring components in a piece of art. What are the 7 elements of art? There are seven elements that are regarded as the foundations of all art. Line, color, value, shape, form, space, and texture are the seven elements. What are the branches of art? The visual arts (painting, drawing, sculpture, etc.), the graphic arts (painting, drawing, design, and other forms expressed on flat surfaces), the plastic arts (sculpture, modeling), and the decorative arts (enamelwork, etc.) are all traditional categories within the arts. What is a famous definition of art? Art is the transformation of basic natural principles into attractive forms appropriate for human use. Is it possible to define art? In current philosophy, the concept of art is debatable. The question of whether art can be defined has also sparked debate. The philosophical utility of defining art has also been questioned.
Contemporary definitions may be categorized according to the aspects of art that they stress. Why is there no single definition of art? There is no one definition of art, nor is there a single reason why or what an artist paints. Every artist is looking for their own solutions. The quotations that follow describe art through the eyes of several well-known painters. "All great art is a man's joy in God's work, not his own" (John Ruskin). Why do artists create art? What motivates artists to make artwork? Art may be developed for a variety of purposes, including the desire to improve the beauty of our surroundings, preserve information about time, location, people, or items, and communicate ideas to others. Art stimulates and inspires people's minds. What is the best form of art? The highest kind of art is still literature. What is an example of an art form? An art form is the more or less fixed framework, pattern, or system used in the creation of a piece of art: the sonata, the sonnet, and the novel are all art forms. Ballet, sculpture, opera, and other art forms are examples of mediums for creative expression. What is the space element of art? A sense of depth or three dimensions is referred to as space in a piece of art. It might also relate to how the artist uses the space inside the image plane. Negative space is the space surrounding the principal objects in a piece of art, while positive space is the space filled by the primary objects. What is the most important principle of art? Harmony. Harmony is a feeling of continuity or consistency across an artwork that establishes a link and a flow of purpose. It is one of the most significant and flexible art concepts. How do you describe the elements of art? Color, line, shape, form, value, texture, and space are the aspects of art that make up an artwork. They are the instruments that artists employ to create art. Contrast, rhythm, proportion, balance, unity, emphasis, movement, and variation are the design concepts that govern how those building pieces are placed. What are the 10 principles of art? Color, form, line, shape, space, texture, and value are the elements. Balance, emphasis, harmony, movement, pattern, proportion, repetition, rhythm, unity, and variation are the 10 basic principles of art. What are the themes of art? Themes explored in art include Conflict and Adversity, Social Change and Freedom, Leaders and Heroes, Humans and the Environment, Identity, Migration and Immigration, and Progress, Industry, and Invention. What are the two divisions of art? The liberal arts and the creative arts are the two disciplines of art. There are two types of creative art: the visual and performing arts. Performing arts is a kind of entertainment that combines music and acting. What are the 8 arts? Painting, sculpture, architecture, literature, music, film, and theater are examples of diverse genres of art. Why is art so important in society? Art has an impact on society through altering people's minds, teaching ideals, and transmitting experiences across time and distance. According to studies, art has an impact on one's core sense of self. Painting, sculpture, music, literature, and other forms of art are often seen as repositories of a society's collective memory. What is art according to authors? Aristotle remarked, "The objective of art is to reflect not the apparent aspect of things, but their internal meaning." Throughout art history, the idea of art as a reflection of beauty or nature has persisted. Is art universal?
Art is a common practice among youngsters in all cultures throughout the globe, and it is something that all people can do well into their senior years. Anybody who makes artwork may call themselves an artist; professional artists, on the other hand, get paid for their labor. How is art created? A new combination of components in the medium is created when a piece of art is created (tones in music, words in literature, paints on canvas, and so on). The ingredients were already there, albeit in different combinations; creativity is the re-formation of these pre-existing materials. Conclusion: Art is a word that means different things to different people. What is art in your own words? The "definition of art by different authors" is a list in which different authors provide their definitions of the word "art".
https://timeisart.org/what-is-art-definition/
Art covers a broad spectrum of human activities involving aesthetic appreciation, visual sense, physical ability, creative skill, technical ability, personal imagination, and personal motivation, together with the ability to produce a product, assemble something from assorted elements, use tools to build something, communicate something, or even act as a consumer and seller of art products. The word "art" can also refer to the visual arts, like painting, drawing, photography, printmaking, and sculpture, as well as to arts such as music. Art has been around since ancient times and is found in almost all cultures around the world, although in modern times many people associate the word with modern culture. Art is used to interpret and appreciate the visual world, and in doing so the term has sometimes been used pejoratively, to suggest that those who do not accept the authority of art are lacking in respect for the culture that produces and respects it. Art can be defined as a form of self-expression and communication, but it has different meanings for different people. For some it may mean creative knowledge or skill; for others it may mean a product of their own imagination. Some people consider only creative products or works of art to be art, while others use the term "visual art" to include theater, motion pictures, architecture, and arts such as music, painting, and sculpture. The term fine art, on the other hand, is sometimes used to describe such art more generally, and corresponds to the French term "beaux-arts." Modern artists have expanded the range of acceptable expressive mediums to include still life, photography, printmaking, video, and computer-generated work. Most of the visual art produced today is produced by the artist as a result of his or her personal vision and artistic inspiration. There are many different art forms and styles. There is figurative art, which typically flows from the figurative cast of the artist's mind; facial or portrait painting, which is highly personal; furniture, including pieces made especially for your bedroom or living room; and sculpture, which is very specific and concentrated on a particular form. There is so much more to this seemingly complex art form than meets the eye. It is a true art form that requires an artist's personal creativity and artistic vision.
https://msseawolves.com/visual-art-defined/
This is a tutorial group in which artists and art students bring to the group projects that they are working on and which they want to pursue. Guidance is given on an individual basis in direct relationship to the needs and interests of each individual participant, thus enabling artistic development on both beginner and advanced levels. Instruction and guidance are geared toward the natural direction and creative expression of each participant. The common ground within the group is that each person is progressing on their own creative and artistic trajectory. Areas of study can include the development of visual perception and imaginative exploration through all aspects of drawing, painting, mixed media and design, using a variety of media, including both figurative and abstract concepts. This class is designed for those with a serious interest in developing their art skills. It could apply to those preparing a college entrance portfolio and/or an Advanced Placement portfolio. It is also appropriate for artists of all levels wanting to develop their painting skills. All studies are built around the development of visual perception, imaginative exploration and problem solving. For further information and how to sign up for this class please contact me using the form below. This is an amazing workshop for any painter, and perfect preparation for those planning to take the following 'Figure Painting' workshop. Juliette will lead you on a creative journey which will give you a thorough understanding of the use of tone and color relationships, mixing colors, building forms and constructing space through an understanding of color dynamics. 9 hours of working from the model in oils with instruction from one of Dallas' best figure painters. Develop strategies for constructing figurative forms in space while developing your ability to see clearly when working from observation. Also, develop general strategies for working in oils, color mixing and compositional dynamics. Friday Oct 13th 2017 6 – 9pm, Saturday Oct. 14th 12:30 – 3:30pm, Sunday Oct. 15th 12:30 – 3:30pm. Workshop participants will be led through a number of exercises and artistic explorations that are designed to spur creativity and develop sensitivity to a visual arts vocabulary. Learn strategies to bridge the gap between outer reality and the world of inner imagination and fantasy; kick-start your 'out of the box' thinking. Friday Feb. 16th 2018 6 – 9pm, Saturday Feb. 17th 12:30 – 3:30pm, Sunday Feb. 18th 12:30 – 3:30pm.
http://www.juliettemccullough.com/workshops.html
However, today contemporary art is more than just painting and is defined by 7 disciplines of art: painting, sculpture, architecture, poetry, music, literature, and dance. What is today's art called? Contemporary art is the art of today, produced in the second half of the 20th century or the 21st century. Contemporary artists work in a globally influenced, culturally diverse and technologically advancing world. What are the 7 different art forms, with examples for each? The 7 different art forms are Painting, Sculpture, Literature, Architecture, Theater, Film and Music. What are the 10 forms of contemporary art? The different types of contemporary art are painting, sculpture, drawing, printing, collage, digital art/collage, photography, video art, installation art, land art, public intervention art and performance art. What are the 7 different forms of art and their meaning? Traditional categories in the arts include literature (including poetry, drama, history, etc.), visual arts (painting, drawing, sculpture, etc.), graphic arts (painting, drawing, design, and other forms expressed on flat surfaces), plastic arts (sculpture, modeling), and decorative arts (enamel work, …). What are the 3 styles of art? The three main general styles of art, as seen in class, are realism, abstract, and non-objective. What are 4 types of art styles? 9 Art Styles That Will Always Be Popular: - Abstract. Let's start with the most delicate! … - Modern. If you've ever been lucky enough to visit MoMA (the Museum of Modern Art) in New York, you'll know how captivating modern art can be. … - Impressionist. … - Pop Art. … - Cubism. … - Surrealism. … - Contemporary. … - Fantasy. What are the 3 types of art? There are countless art forms. When it comes to visual arts, there are usually 3 types: decorative, commercial and fine art. The broader definition of "art" covers everything from painting to theater, music, architecture, and more. What is group style in art? Group style: sometimes artists form alliances, exhibit together, publicize their goals as a group, and develop and promote a distinctive style. What is an art group called? Loosely defined, an art collective is a group of artists working together to achieve a common goal. COBRA members bring their work to the First International Exhibition of Experimental Artists, Stedelijk Museum, Amsterdam, November 1949. What are the visual arts? Visual arts are art forms that create works that are primarily visual in nature, such as ceramics, drawing, painting, sculpture, prints, design, crafts, photography, video, filmmaking, and architecture. What is the difference between fine art and visual art? Fine art includes art forms such as drawing, painting, sculpture, photography and printing. Visual arts include art forms such as painting, sculpture, ceramic art, prints, design, crafts, photography, architecture, filmmaking, graphic design, industrial design and fashion design. What kind of art is visual? Visual arts are art forms such as painting, drawing, printing, sculpture, ceramics, photography, video, film production, design, crafts and architecture. Many artistic disciplines, such as the performing arts, conceptual art, and textile arts, also involve aspects of the visual arts as well as arts of other types. What does visual art mean?
These are the arts that meet the eye and evoke emotion through the expression of skill and imagination. They include the oldest forms, such as painting and drawing, and the arts that were born thanks to the development of technology, such as printing, photography and installation art. Do artists have multiple styles? Yes, an artist can have various styles and mediums. … The number of different possible artistic styles is endless. Whether it's simply the artist's approach and technique in drawing or the materials they use, the art world is filled with countless opportunities to create something completely unique. Is it normal to have different styles of art? Artists can work in many different styles. … Now, artists have a lot more freedom to work as they choose. While it's perfectly fine to work in a variety of styles, it's usually best to focus on the ones you like the most, in order to fully develop your artistic potential in that style. How many styles of art are there? There are an infinite number of art styles. But what are the key ones? Looking back at those who have left a mark on art history, we can understand the changes that art has seen: 21 art styles, ranging from Romanticism to Modernism. What is an example of an art form? An art form is the more or less established structure, pattern, or scheme followed in the shaping of a work of art: the sonata, the sonnet, and the novel are all art forms. An art form is also a means of artistic expression: ballet, sculpture, opera, and so on. What is the meaning of the term art form? Definition of art form: 1: a form or means of expression recognized as fine art ("sees dance as both an art form and an entertainment"); 2a: an unconventional form or medium in which impulses considered artistic can be expressed ("describing pinball as a great form of American art" … Tom Buckley). What are the 3 types of art forms? The three arts of painting, sculpture, and architecture are sometimes referred to as the "major arts," with the "minor arts" referring to commercial or decorative art styles.
https://thecrazyjungle.com/how-painting
Feeling a little down? Need a shot of mental or physical energy? Go paint a painting or sing your heart out at karaoke, or go to an art museum or concert (virtually or otherwise) to experience someone else's work. Greater Prescott has a burgeoning cultural scene spearheaded by an array of art galleries downtown, local music ensembles ranging from orchestras to bar bands, community theater organizations, and regional and national acts at the Findlay Toyota Center, Yavapai College Performing Arts Center and Elks Theatre. Our options for seeing and otherwise participating in live events and even movies have been limited over the past year, but they are slowly coming back. Our local arts organizations need our support no matter what else is going on; it's a matter of community health. Many researchers report a link between the arts and improved health, and to further explore this link the World Health Organization (WHO) released a survey in 2019 of more than 3,000 European studies on the mental and physical health effects of participating in arts and cultural activities. In the most comprehensive research of its kind to date, the WHO found involvement with visual, performing, literary and/or online arts: - Supports child development by enhancing speech and language acquisition and supporting parent-child bonding. - Promotes healthy living and engagement with health care providers. - Helps to stop or slow down the progression of illness by increasing well-being, reducing the impact of trauma, lowering the risk of cognitive decline and even improving symptoms. - Supports recovery from mental illness and relieves its symptoms. - Improves the experience of and outcomes from hospital care for patients of all ages. - Supports patients with neurological disorders including autism spectrum diagnoses, cerebral palsy, stroke and dementia. People benefit from participating in the arts as well as looking at the work of others. This includes: improved sleep and concentration in children after being read to by parents; teens participating in dramatic reenactments of tough life situations they may face; and older people with dementia improving their memory by singing. People of all ages can benefit from taking up such art-related hobbies as painting or drawing, pottery, music, photography, writing, or even animation or other online arts. Following these activities throughout your life or taking them up as an older adult keeps your mind engaged and helps to ward off cognitive decline. At the community level, the WHO says residents and officials can help promote arts events, programs and organizations by ensuring art forms are available to everyone in the community, including minorities and lower-income groups. As an individual you can make sure your children have access to and participate in arts-related programs at school and attend museums, live shows and festivals whenever possible, streaming or in person when available. It's going to do you and everyone else good!
https://prescottlivingmag.com/support-the-arts-to-support-yourself/
The Visual Arts is a valued and integral learning area at Beaconsfield Primary School, with students from PP to Year 6 participating in a weekly one-hour session run by a specialist teacher in a modern, purpose-built art studio. These lessons are a doorway for students to discover and cultivate the exploration of imagination, creativity and curiosity through making and responding to art. Students learn about the visual elements of art through practical, hands-on activities that develop skills, techniques and processes. They engage with, and explore, the qualities and properties of different materials such as clay, wire, paint and textiles, as well as approaches such as painting, collage, sculpture, drawing, print making and ceramics. The Visual Arts provides opportunities for students to become 'Citizens of the World', making connections with new ideas. Getting to know the stories and viewpoints of artists, artworks and movements that have powerful social, cultural and historic contexts fosters empathy and a deeper understanding of the role of art in society. Special Visual Arts Projects: Over the years, when opportunities have arisen, special art projects and artist-in-residence works have enhanced the school grounds. They provide a 'master class' in various art techniques and styles, as well as fostering a sense of community and pride in the school. Art in the School Grounds: As part of the school's visual arts program, in partnership with the P&C and wider community, students have contributed to the installation of art throughout the school grounds. In recent years this has included:
https://www.beaconsfieldps.wa.edu.au/keyprograms/visual-arts/?s=
Paint with the imagination. A drawing and painting workshop by Kids No TV in collaboration with Think Big and MACBA. The aim of this new workshop is to bring you closer to drawing and painting through the creative travelling spirit of the artist Lawrence Weiner. We will explore our creativity without any rules or limits, letting our imagination fly and following a series of exercises related to aesthetics and the artist's attitudes. We will draw and paint in different colours on a variety of surfaces and forms, in a fun way and using incredible techniques. In this workshop we will play freely while allowing the imagination of other participants to surprise us.
https://www.macba.cat/en/exhibitions-activities/activities/paint-imagination
At Birkenhead Primary School we believe that through art, children learn to work and respond independently and imaginatively in ways that enrich their whole lives. We believe that art is vital to the curriculum, contributing to all aspects of learning. Success in Art encourages success in other areas of the curriculum and establishes a sense of achievement. It is fundamental to the beliefs of every culture and society. It is purposeful, enjoyable, real and valued here at BPS. Our aims are: to stimulate and nurture children's creativity and imagination by providing visual, tactile and sensory experiences; to develop children's understanding of colour, form, texture and pattern, and their ability to use materials and processes to communicate their ideas, feelings and understandings of the world around us; to explore the work of various artists, craftspeople and designers, and to help children learn about the functions of art, craft and design in their own lives and in different times and cultures; and to develop positive attitudes towards co-operative work. Our Visual Art Curriculum scheme aims to provide progression and continuity through the Pikiake, Whanake and Panuku teams. It sets out the key concepts and knowledge pupils should acquire, together with the skills to be taught. The visual elements of colour, line, tone, texture, pattern, shape and form are incorporated with the processes of drawing, painting, printing, collage, sculpture and design. Throughout their time at school, the above processes are revisited on a two-yearly cycle so that the children can build on their previous work and achievement, with painting and drawing revisited every year, as these are core skills and form the basis of everything else. Children's understanding and enjoyment of Art are developed through activities that bring together requirements from both investigating and making as well as knowledge and understanding, wherever possible. As a result, our children acquire an extensive knowledge of art techniques and artist models over their time at BPS and leave confident, inspired and able to discuss and express their ideas around Art.
https://www.birkenheadprimary.school.nz/visual-arts.html
The substantial motion of nature is to balance, to survive, and to reach perfection. Evolution in biological systems is a key signature of this quintessence. Survival cannot be achieved without understanding the surrounding world. How could a fruit fly live without searching for food, and thus without some form of perception to guide its behavior? The nervous system of the fruit fly, with its hundred thousand neurons, can perform very complicated tasks that are beyond the power of an advanced supercomputer. Recently developed computing machines are made of billions of transistors and are remarkably fast at precise calculations, yet these machines are unable to perform tasks that an insect can accomplish by means of a few thousand neurons. The complexity of information processing and data compression in a single biological neuron and in neural circuits is not comparable with what has been achieved so far in transistors and integrated circuits. The style of information processing in neural systems is also very different from that employed by microprocessors, which is mostly centralized. Almost all cognitive functions are generated by a combined effort of multiple brain areas. In mammals, cortical regions are organized hierarchically and are reciprocally interconnected, exchanging information from multiple senses. This hierarchy at the circuit level also preserves the sensory world at different levels of complexity and within the scope of multiple modalities. The main behavioral advantage is that the real world is understood through multiple sensory systems, which provides a robust and coherent form of perception. When the quality of a sensory signal drops, the brain can alternatively employ other information pathways to handle cognitive tasks, or even to calibrate the error-prone sensory node. The mammalian brain also takes good advantage of multimodal processing in learning and development, where one sensory system helps another sensory modality to develop. Multisensory integration is considered one of the main factors that generate consciousness in humans. However, we still do not know where exactly the information is consolidated into a single percept, or what the underlying neural mechanism of this process is. One straightforward hypothesis suggests that the unisensory signals are pooled in a poly-sensory convergence zone, which creates a unified form of perception. But it is hard to believe that there is just one single dedicated region that realizes this functionality. Using a set of realistic neuro-computational principles, I have explored theoretically how multisensory integration can be performed within a distributed hierarchical circuit. I argued that the interaction of cortical populations can be interpreted as a specific form of relation satisfaction, in which the information preserved in one neural ensemble must agree with incoming signals from connected populations according to a relation function. This relation function can be seen as a coherency function which is implicitly learnt through synaptic strengths. Apart from the fact that the real world is composed of multisensory attributes, sensory signals are subject to uncertainty. This requires a cortical mechanism that incorporates the statistical parameters of the sensory world into neural circuits and deals with the issue of inaccuracy in perception.
In this thesis I argue that the intrinsic stochasticity of neural activity provides a systematic mechanism for encoding probabilistic quantities, such as reliability and prior probability, within neural circuits. The systematic benefit of neural stochasticity is well illustrated by the Duns Scotus paradox: imagine a donkey with a deterministic brain that is exposed to two identical food rewards; indecision may make the animal suffer and starve to death. I introduce an optimal encoding framework that can describe the probability function of a Gaussian-like random variable in a pool of Poisson neurons. A distributed neural model is then proposed that optimally combines conditional probabilities over sensory signals in order to compute Bayesian multisensory causal inference. This is known to be a complex multisensory function of the cortex, and it has recently been found to be performed within a distributed hierarchy in sensory cortex. Our work is among the first successful attempts to put a mechanistic spotlight on the neural mechanism underlying multisensory causal perception in the brain and, more generally, on the theory of decentralized multisensory integration in sensory cortex. Engineering the brain's information-processing concepts into new computing technologies is a rapidly growing field, and neuromorphic engineering is the branch that undertakes this mission. In a dedicated part of this thesis, I propose a neuromorphic algorithm for event-based stereoscopic fusion. The algorithm is anchored in the idea of cooperative computing, which imposes the epipolar and temporal constraints of the stereoscopic setup onto the neural dynamics. Its performance is tested using a pair of silicon retinas.
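The Bayesian multisensory causal inference mentioned above is commonly formalized as model averaging over a "common cause" and an "independent causes" hypothesis for two cues. The sketch below is a generic illustration of that idea, not the thesis's neural implementation; all parameter values (cue noise, prior width, prior probability of a common cause) are assumptions, and the marginalization is done numerically on a grid to keep the code simple.

```python
# Hedged sketch: Bayesian causal inference for two sensory cues (e.g., visual
# and auditory location estimates). Parameters are illustrative assumptions.
import numpy as np

def gauss(x, mu, sigma):
    """Gaussian density N(x; mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

def causal_inference(x_v, x_a, sigma_v=1.0, sigma_a=2.0,
                     mu_p=0.0, sigma_p=10.0, p_common=0.5):
    """Return P(common cause | cues) and a model-averaged source estimate."""
    s = np.linspace(-40, 40, 4001)           # grid over candidate source locations
    ds = s[1] - s[0]
    prior_s = gauss(s, mu_p, sigma_p)

    # Likelihood of both cues under a single shared cause (marginalize over s).
    lik_common = np.trapz(gauss(x_v, s, sigma_v) * gauss(x_a, s, sigma_a) * prior_s, dx=ds)
    # Likelihood under two independent causes (one hidden source per cue).
    lik_v = np.trapz(gauss(x_v, s, sigma_v) * prior_s, dx=ds)
    lik_a = np.trapz(gauss(x_a, s, sigma_a) * prior_s, dx=ds)
    lik_indep = lik_v * lik_a

    # Posterior probability that the cues share a common cause.
    post_common = (lik_common * p_common /
                   (lik_common * p_common + lik_indep * (1 - p_common)))

    # Reliability-weighted fusion if causes are shared; unisensory estimate otherwise.
    w_v, w_a, w_p = 1 / sigma_v**2, 1 / sigma_a**2, 1 / sigma_p**2
    s_fused = (w_v * x_v + w_a * x_a + w_p * mu_p) / (w_v + w_a + w_p)
    s_visual_only = (w_v * x_v + w_p * mu_p) / (w_v + w_p)

    # Model averaging over the two causal structures.
    s_hat = post_common * s_fused + (1 - post_common) * s_visual_only
    return post_common, s_hat

if __name__ == "__main__":
    for x_a in (1.0, 8.0):                    # small vs. large cue conflict
        p_c, s_hat = causal_inference(x_v=0.0, x_a=x_a)
        print(f"x_a={x_a:4.1f}  P(common)={p_c:.2f}  estimate={s_hat:+.2f}")
```

With a small conflict between the cues the common-cause hypothesis dominates and the cues are fused; with a large conflict the estimate falls back toward the unisensory reading, which is the qualitative behaviour the abstract describes.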
https://edoc.ub.uni-muenchen.de/25966/
Multisensory integration processing during olfactory-visual stimulation: An fMRI graph theoretical network analysis. Ripp, Isabelle; zur Nieden, Anna-Nora; Blankenagel, Sonja; Franzmeier, Nicolai; Lundström, Johan N.; Freiherr, Jessica. Human Brain Mapping 39 (2018), No. 9, pp. 3713-3727. DOI: 10.1002/hbm.24206. ISSN: 1065-9471; 1097-0193. English. Journal article, Fraunhofer IVV.
Abstract: In this study, we aimed to understand how whole-brain neural networks compute sensory information integration based on the olfactory and visual systems. Task-related functional magnetic resonance imaging (fMRI) data were obtained during unimodal and bimodal sensory stimulation. Based on the identification of multisensory integration processing (MIP)-specific hub-like network nodes, analyzed with network-based statistics using region-of-interest-based connectivity matrices, we conclude the following brain areas to be important for processing the presented bimodal sensory information: the right precuneus, connected contralaterally to the supramarginal gyrus, for memory-related imagery and phonology retrieval; and the left middle occipital gyrus, connected ipsilaterally to the inferior frontal gyrus via the inferior fronto-occipital fasciculus, including functional aspects of working memory. Applying graph theory to quantify the resulting complex network topologies indicates a significantly increased global efficiency and clustering coefficient in networks that include aspects of MIP, reflecting simultaneously better integration and segregation. Graph theoretical analysis of positive and negative network correlations, which allows inferences about excitatory and inhibitory network architectures, revealed (not significantly, but very consistently) that MIP-specific neural networks are dominated by inhibitory relationships between the brain regions involved in stimulus processing.
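The graph-theoretical quantities named in the abstract (global efficiency and clustering coefficient) can be computed with standard tools. The following is a rough sketch rather than the study's pipeline: the connectivity matrix is random toy data, and the 0.4 threshold for keeping an edge is an arbitrary assumption.

```python
# Hedged sketch: global efficiency and clustering coefficient of a thresholded
# "functional connectivity" network. Toy data only, not the study's analysis.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_roi = 20                                     # pretend regions of interest

# Symmetric toy correlation matrix with zero diagonal.
corr = rng.uniform(-1, 1, size=(n_roi, n_roi))
corr = (corr + corr.T) / 2
np.fill_diagonal(corr, 0.0)

# Keep only strong positive correlations as edges (assumed threshold of 0.4).
adj = (corr > 0.4).astype(int)
G = nx.from_numpy_array(adj)

print("global efficiency     :", round(nx.global_efficiency(G), 3))
print("clustering coefficient:", round(nx.average_clustering(G), 3))
```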
http://publica.fraunhofer.de/documents/N-524335.html
The somatosensory system is part of the sensory nervous system, a complex system of sensory neurons and neural pathways that responds to changes at the surface of or inside the body. The axons (afferent nerve fibers) of sensory neurons connect with or respond to various receptor cells. These sensory receptor cells are activated by different stimuli (such as heat or noxious stimuli) and define the function of the responding sensory neurons (for example, thermoreceptors carry information about temperature changes). Other types include mechanoreceptors, chemoreceptors, and nociceptors, which send signals along sensory nerves to the spinal cord, where they can be processed by other sensory neurons and then relayed to the brain for further processing. Sensory receptors are distributed throughout the body, including the skin, epithelial tissue, muscles, bones and joints, internal organs, and the cardiovascular system.
Somatosensory Pathway
Incoming touch and vibration information ascends the spinal cord via the posterior (dorsal) column-medial lemniscus pathway. The somatosensory pathway usually comprises three neurons: first-order, second-order, and third-order. First-order neurons are pseudounipolar neurons whose cell bodies are located in the dorsal root ganglia of the spinal nerves. For regions of the head and neck that are not covered by the cervical nerves, the first-order neurons lie instead in the trigeminal ganglia or other sensory cranial nerve ganglia. The cell bodies of second-order neurons are located in the spinal cord or brainstem, and the ascending axon of this neuron crosses (decussates) to the opposite side in the spinal cord or brainstem. In the case of touch and certain types of pain, the cell bodies of third-order neurons are located in the ventral posterior nucleus of the thalamus, and their axons terminate in the postcentral gyrus of the parietal lobe, the primary somatosensory cortex (S1).
Fig. 1: CNS processing of sensory information.
Clinical Significance
Somatosensory deficits may be caused by peripheral neuropathy involving the peripheral nerves of the somatosensory system, and may manifest as numbness or paresthesias. Creative Biolabs provides a complete list of antibody and protein products to help our customers better understand the interaction between the somatosensory system and neurological diseases. With our easy-to-use guide below, choose the best marker tools you need for your research.
https://neuros.creative-biolabs.com/category-somatosensory-system-147.htm
ness and arousal.
Association Cortex and Perceptual Processing
The cortical association areas presented in Figure 7–14 are brain areas that lie outside the primary cortical sensory or motor areas but are adjacent to them. The association areas are not considered part of the sensory pathways, but they play a role in the progressively more complex analysis of incoming information. Although neurons in the earlier stages of the sensory pathways are necessary for perception, information from the primary sensory cortical areas undergoes further processing after it is relayed to a cortical association area. The region of association cortex closest to the primary sensory cortical area processes the information in fairly simple ways and serves basic sensory-related functions. Regions farther from the primary sensory areas process the information in more complicated ways. These include, for example, greater contributions from areas of the brain serving arousal, attention, memory, and language. Some of the neurons in these latter regions also integrate input concerning two or more types of sensory stimuli. Thus, an association area neuron receiving input from both the visual cortex and the "neck" region of the somatosensory cortex might integrate visual information with sensory information about head position. In this way, for example, a viewer understands a tree is vertical even if his or her head is tipped sideways.
Figure 7–14: Primary sensory areas and areas of association cortex (auditory, somatosensory, visual, and taste cortices; frontal, parietal, occipital, and temporal lobe association areas; central sulcus). The olfactory cortex is located toward the midline on the undersurface of the frontal lobes (not visible in this picture).
The specific ascending pathways that transmit information from somatic receptors (that is, the receptors in the framework or outer walls of the body, including skin, skeletal muscle, tendons, and joints) go to the somatosensory cortex. This is a strip of cortex that lies in the parietal lobe of the brain just posterior to the central sulcus, which separates the parietal and frontal lobes (see Figure 7–14). The specific ascending pathways from the eyes go to a different primary cortical receiving area, the visual cortex, which is in the occipital lobe. The specific ascending pathways from the ears go to the auditory cortex, which is in the temporal lobe. Specific ascending pathways from the taste buds pass to a cortical area adjacent to the region of the somatosensory cortex where information from the face is processed. The pathways serving olfaction project to portions of the limbic system and the olfactory cortex, which is located on the undersurface of the frontal lobes. Finally, the processing of afferent information does not end in the primary cortical receiving areas, but continues from these areas to association areas in the cerebral cortex where complex integration occurs. In contrast to the specific ascending pathways, neurons in the nonspecific ascending pathways are activated by sensory units of several different types (Figure 7–15) and therefore signal general information. In other words, they indicate that something is happening, without specifying just what or where.
A given ascending neuron in a nonspecific ascending pathway may respond, for example, to input from several afferent neurons, each activated by a different stimulus, such as maintained skin pressure, heating, and cooling. Such pathway neurons are called polymodal neurons. The nonspecific …
Figure 7–15: Diagrammatic representation of two specific ascending sensory pathways and a nonspecific ascending sensory pathway (touch and temperature inputs ascending through the spinal cord, thalamus and brainstem to the cerebral cortex).
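The head-tilt example above (judging that a tree is vertical even with the head tipped sideways) can be made concrete with a small sketch. This is not from the textbook; the additive angle rule and the sign conventions are illustrative assumptions about how a retinal orientation signal and a head-position signal could be combined.

```python
# Hedged toy example of the integration described above: combining the retinal
# (eye-centred) orientation of an edge with a head-tilt signal to judge its
# orientation in world coordinates.
def world_orientation(retinal_orientation_deg: float, head_tilt_deg: float) -> float:
    """Orientation of a seen edge in world coordinates.

    A line that lands on the retina at `retinal_orientation_deg` while the head
    is rolled by `head_tilt_deg` has a world orientation equal to their sum,
    taken modulo 180 degrees (a line has no direction).
    """
    return (retinal_orientation_deg + head_tilt_deg) % 180.0

# A vertical tree viewed with the head rolled 30 degrees lands on the retina at
# 60 degrees, yet is still judged vertical (90 degrees) in the world.
assert world_orientation(60.0, 30.0) == 90.0
```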
http://signinghispraises.com/Vander's%20Human%20Physiology%20The%20Mechanisms%20of%20Body%20Function/229
A recursive group utility function capable of bringing the group of sensors into consensus can be used to tackle the multi-sensor object identification problem.
- Generic model of an autonomous sensor (Computer Science, 1994).
- Modeling and fusing uncertain multi-sensory data (Computer Science, 1996). A probabilistic approach for modeling the uncertainty and cooperation in sensory teams; shows how the Information Variation measure can be used to capture both the quality of sensory data and the interdependence relationships that might exist between the different sensors.
- Dynamic across time autonomous-sensing, interpretation, model learning and maintenance theory (Computer Science, 1994).
- Informational Maneuvering in Dynamic Environment (Mathematics, 1995). By allowing sensors to continuously and autonomously correlate the outcomes of their previous observations and use them to plan for next observations, more effective sensory activities can be…
- Multisensor Integration and Fusion Model That Uses a Fuzzy Inference System (Computer Science, Dynamic Systems and Control: Volume 1, 2000). The results from the proposed work are a stepping stone towards the development of generic autonomous sensor models that are capable of data interpretation, self-calibration, data fusion from other sources and even learning, so as to improve their performance with time.
- Informational maneuvering in dynamic environment (Mathematics, 1995 IEEE International Conference on Systems, Man and Cybernetics: Intelligent Systems for the 21st Century, 1995).
- Multi-sensor integration system with fuzzy inference and neural network (Computer Science, [Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992). It is shown that a sensor integration system (SIS) with multiple sensors can expand the measurable region of an intelligent robotic system with high accuracy and that operators can use the system as…
- An approach towards smart fault-tolerant sensors (Computer Science, 2009 IEEE International Workshop on Robotic and Sensors Environments, 2009). This work aims at providing an architecture for fault-tolerant sensors, offering a uniform interface to the application by using a mathematical model to evaluate sensor data, and achieves a more reliable position estimation.
- Sensor Fusion in Time-Triggered Systems (Computer Science, 2002). A time-triggered approach for real-time sensor fusion applications that partitions the system into three levels and proposes two sensor fusion algorithms to accomplish this task: the systematic confidence-weighted averaging algorithm and the application-specific robust certainty grid algorithm.
References (showing 1-10 of 25):
- Information and multi-sensor coordination (Computer Science, UAI, 1986).
- Consistent integration and propagation of disparate sensor observations (Computer Science, 1987). The invariant topology of relations between uncertain geometric features is used to develop a method for propagating observations through the world model; this forces a consistent interpretation of the environment to be maintained and makes maximum use of sensor information.
- Information Maps for Active Sensor Control (Computer Science, 1987). It is shown how linear estimation theory can be applied to the problem of controlling a nonlinear observation system observing data implicitly related to parameters of interest, and the notion of an information map showing the information expected from sensor viewpoints is developed.
- The specification of distributed sensing and control (Computer Science, J. Field Robotics, 1985). This article demonstrates how control issues can be handled in the context of Logical Sensor System Specification, including a control mechanism which permits control information to flow from more centralized processing to more peripheral processes and to be generated locally in the logical sensor.
- Uncertain geometry in robotics (Computer Science, Proceedings 1987 IEEE International Conference on Robotics and Automation, 1987). This work considers uncertain points, curves and surfaces, and shows how they can be manipulated and transformed between coordinate frames in an efficient and consistent manner.
- Integration, coordination, and control of multi-sensor robot systems (Computer Science, 1986). A structure for Multi-Sensor Systems and The Team Decision Problem, with a focus on the Multi-Bayesian Team, and Implementation and Results.
- Object Recognition Using Vision and Touch (Computer Science, IJCAI, 1985). A system is described that integrates vision and tactile sensing in a robotics environment to perform object recognition tasks; it uses multiple sensor systems to compute three-dimensional primitives that can be matched against a model database of complex curved-surface objects containing holes and cavities.
- Redundant Sensors for Mobile Robot Navigation (Computer Science, 1985). Sonar and infrared sensors are used here in tandem, each compensating for deficiencies in the other, to build a representation which is more accurate than if either sensor were used alone.
- On Optimally Combining Pieces of Information, with Application to Estimating 3-D Complex-Object Position from Range Data (Computer Science, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986). The necessary techniques for optimal local parameter estimation and primitive boundary or surface type recognition for each small patch of data are developed, and these inaccurate locally derived parameter estimates are optimally combined to arrive at a roughly globally optimum object-position estimate. (A minimal sketch of this kind of confidence-weighted combination follows the list.)
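Several of the papers listed above revolve around combining uncertain sensor estimates, for example the "confidence-weighted averaging algorithm" and "optimally combining pieces of information". A minimal sketch of inverse-variance (confidence-weighted) fusion, under the assumption of independent Gaussian sensor errors, might look as follows; the sensor readings below are invented for illustration and are not taken from any of the cited systems.

```python
# Hedged sketch: inverse-variance weighted fusion of independent sensor readings.
from dataclasses import dataclass

@dataclass
class Reading:
    value: float     # sensor's estimate of the quantity (e.g., range in metres)
    variance: float  # sensor's self-reported uncertainty

def fuse(readings: list[Reading]) -> Reading:
    """Weight each reading by the inverse of its variance and combine."""
    weights = [1.0 / r.variance for r in readings]
    total = sum(weights)
    value = sum(w * r.value for w, r in zip(weights, readings)) / total
    return Reading(value=value, variance=1.0 / total)  # fused variance shrinks

# A noisy sonar and a more precise infrared ranger observing the same wall.
fused = fuse([Reading(2.30, 0.25), Reading(2.10, 0.04)])
print(f"fused distance = {fused.value:.2f} m, variance = {fused.variance:.3f}")
```

The fused estimate sits closer to the more confident sensor, and its variance is smaller than either input variance, which is the basic appeal of this family of methods.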
https://www.semanticscholar.org/paper/Sensor-Models-and-Multisensor-Integration-Durrant-Whyte/48e2e3de255b309cf1d0f00dda19a06f1cce0414
Project information for "NeuralCoding":
Coordinator: UNIVERSITY COLLEGE LONDON
Coordinator country: United Kingdom (UK)
Project website: https://www.ucl.ac.uk/wolfson-institute-biomedical-research/circuit-neuroscience
Total cost: 183,454 €
EC max contribution: 183,454 € (100%)
Programme: H2020-EU.1.3.2. (Nurturing excellence by means of cross-border and cross-sector mobility)
Call code: H2020-MSCA-IF-2015
Funding scheme: MSCA-IF-EF-ST
Starting year: 2017
Duration: 2017-09-01 to 2019-08-31
Partnership: 1. UNIVERSITY COLLEGE LONDON, UK (London), coordinator, 183,454.00.
How is information encoded in the brain? Sensory neurons transform information from the outside world into electrical signals which are transmitted via the sensory pathway into the neocortex. In the neocortex, the area crucially involved in higher cognitive functions, neurons form networks that exhibit complex time-varying patterns of activity. The nature of the neural code that these neuronal networks use to encode and pass on information by means of spatiotemporal activity patterns is largely unknown. I will combine large-scale neuronal recordings, advanced analysis tools and targeted manipulation of neuronal activity in the context of behaviour to extract population activity patterns that encode stimulus information and, most crucially, to identify their functional relevance in the behaving animal. Specifically, I will establish a fine-tuned texture discrimination task in head-fixed mice that depends on information processing in layer 2/3 of barrel cortex. I will use two-photon calcium imaging to detect activity in large populations of neurons during task performance. I will apply advanced analysis tools, including dimensionality reduction methods, dynamical systems approaches, and network simulations, to extract and characterise stimulus- and task-specific population activity patterns. In order to establish behavioural relevance, I will perturb neural activity during two-photon imaging in the behaving mouse using time-varying patterned optogenetic manipulation. This will allow me to directly probe the functional relevance of neural activity patterns and establish a causal link between identified population activity patterns and behaviour. This project will provide unprecedented insights into the nature of neural dynamics in neocortex, as well as constraints for computational models of neocortical function that will be used to provide a mechanistic understanding of the neural code.
The information about "NEURALCODING" is provided by the European Open Data Portal: CORDIS opendata.
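As a rough illustration of the dimensionality-reduction step mentioned in the project description, the sketch below applies PCA to simulated population activity. The synthetic "calcium traces" and the choice of plain PCA are assumptions for illustration, not the project's actual analysis pipeline.

```python
# Hedged sketch: extracting low-dimensional population activity patterns from
# many simultaneously "recorded" neurons. Simulated data only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_neurons, n_timepoints = 200, 1000

# Two latent population patterns (e.g., stimulus-related and movement-related)
# mixed into every neuron's trace, plus private noise.
latents = np.stack([np.sin(np.linspace(0, 20, n_timepoints)),
                    np.cos(np.linspace(0, 5, n_timepoints))])
mixing = rng.normal(size=(n_neurons, 2))
activity = mixing @ latents + 0.5 * rng.normal(size=(n_neurons, n_timepoints))

# PCA on the time-by-neurons matrix recovers a low-dimensional description.
pca = PCA(n_components=5)
components = pca.fit_transform(activity.T)          # time x components
print("variance explained:", np.round(pca.explained_variance_ratio_, 3))
```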
https://www.fabiodisconzi.com/open-h2020/projects/203076/index.html
The scientific goal of this research training group is to apply theoretical and computational tools in order to understand the principles underlying sensory processing and perception. Specifically, we plan to address perception at different scales and different levels of abstraction, aiming at the integration of computation, i.e. the algorithmic level, and neural processing, i.e. its implementation. Therefore, the scientific program:
- addresses sensory processing on the local (neurons and circuits) and global (brain networks and behavior) levels,
- combines experiment-dominated bottom-up (data → computational models → functional concepts) with theory-dominated top-down approaches (functional hypotheses → computational models → testable predictions),
- combines approaches from biophysical modeling, dynamical systems and stochastic processes with methods from machine learning and engineering, providing links between the different levels of abstraction,
- addresses sensory computation in its behavioral context.
Since we are interested in principles of computation that generalize across systems and species, we feel that it is beneficial to study a variety of systems and paradigms, ranging from invasive (electrophysiology, imaging) studies in animals to non-invasive (EEG, fMRI, behavior) studies in humans. Projects thus cover the range from single-neuron computation to human psychophysics, because we feel that it is highly important for the success of the research training group that doctoral researchers become familiar with the most important theoretical concepts on all relevant levels of abstraction. The research program is structured into two pillars:
- Pillar A: Local computations: Neurons, networks & invasive studies.
- Pillar B: Global computation: Brain networks, cognitive aspects & human neuroscience.
Within pillar A, we want to understand computations implemented by local circuits and the role of the observed spatiotemporal responses of networks in sensory processing. Projects address different sensory modalities, and are complemented by studies of the hippocampus as an example of a brain structure that supports sensory integration and contextual processing. Pillar B makes the link to perception and human neuroscience. More conceptually oriented projects will attempt to construct dynamical models of brain networks underlying sensory computation, combining for the first time high-resolution, whole-brain Diffusion Tensor Imaging data with fMRI measurements of resting-state and evoked activities. The projects will provide doctoral researchers with experience with a broad range of theoretical concepts. Classical methods from computational neuroscience will be complemented by less well-known methods from dynamical systems, stochastic processes, and control. Established methods from the machine learning and statistical pattern recognition fields will be complemented by recent developments in Bayesian inference and variational methods, reinforcement learning, information geometry, subspace methods, or transfer learning. With the proposed spectrum of projects, mechanisms underlying sensory computation and perception will be studied at many different levels.
What is more, doctoral researchers will directly gain hands-on experience in developing theoretical and computational methods for linking those different levels: for example, the biophysical and dynamical properties of single neurons with the effective dynamics of large populations, the representation of information with the observed neural signals, the computation being performed with the underlying neural implementation, and quantitative descriptions of behavior with the underlying computation and the underlying neural correlates. Hence doctoral researchers will be exposed to the problem of linking the activity of neurons and networks to task-dependent performance measures while, at the same time, being forced to formulate quantitative hypotheses about the ongoing computation and to put them to the test.
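One concrete flavor of the "linking levels" exercise described above is to relate single-neuron dynamics to a population firing rate. The sketch below simulates a noisy leaky integrate-and-fire population; all parameters are generic textbook-style assumptions rather than values from any project in the program.

```python
# Hedged sketch: population rate of a noisy leaky integrate-and-fire (LIF) model.
import numpy as np

def simulate_lif_population(n_neurons=500, t_max=1.0, dt=1e-4,
                            tau=0.02, v_thresh=1.0, v_reset=0.0,
                            drive=1.2, noise_std=0.5, seed=0):
    """Return (time, population firing rate in Hz) for a noisy LIF population."""
    rng = np.random.default_rng(seed)
    steps = int(t_max / dt)
    v = np.zeros(n_neurons)
    rate = np.zeros(steps)
    for t in range(steps):
        noise = noise_std * np.sqrt(dt) * rng.normal(size=n_neurons)
        v += dt * (-v + drive) / tau + noise      # leaky integration of the drive
        spiked = v >= v_thresh
        v[spiked] = v_reset                        # reset after a spike
        rate[t] = spiked.mean() / dt               # fraction of cells spiking per second
    return np.arange(steps) * dt, rate

time, rate = simulate_lif_population()
print(f"mean population rate ~ {rate.mean():.1f} Hz")
```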
https://www.eecs.tu-berlin.de/grk_15891/menue/research/parameter/en/font3/maxhilfe/
The laboratory performs complex numerical computations (CUDA) and processes large sets of text data (Big Data).
Services:
- Simulations of artificial and deep neural networks (deep learning).
- Simulations of neural networks similar to biological models (spiking neural networks).
- Simulations of mathematical models of neural cells and biological networks, in particular cortical columns, topological maps, and selected information-processing systems (e.g. hearing, vision, motor).
- EEG signal processing to control objects and to reconstruct the environment imaged by the brain on the basis of signals from sensory systems (a minimal band-power sketch follows this list).
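As a rough illustration of the EEG-processing service listed above, the sketch below estimates 8-12 Hz band power from a synthetic signal using Welch's method; the sampling rate, frequency band, and the signal itself are assumptions, not the laboratory's actual pipeline.

```python
# Hedged sketch: band-power estimation from a short synthetic EEG segment.
import numpy as np
from scipy.signal import welch

fs = 250.0                                      # assumed sampling rate in Hz
t = np.arange(0, 4.0, 1.0 / fs)                 # 4-second segment
rng = np.random.default_rng(2)
eeg = 10e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.normal(size=t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
band = (freqs >= 8) & (freqs <= 12)
alpha_power = np.trapz(psd[band], freqs[band])  # integrate the PSD over the band
print(f"8-12 Hz band power ~ {alpha_power:.2e} V^2")
```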
https://cnt.edu.pl/en/projekty/mcb-en/laboratory-located-at-the-national-information-processing-institute-national-research-institute/
The Internet began as a U.S. military computer network meant to survive a nuclear attack. ARPANET, developed in the 1960s, stored information in a broad network of computers linked by distributed hubs so that an attack against one or more hubs could not bring down the entire thing. Decentralized. Interconnected. Robust. Nuke-proof. Wouldn't it be great if you could get your students to build the same kind of neural networks around the subjects you teach? How can you move beyond superficial, short-term learning to create learning that sticks? Modern brain science suggests one answer that you can apply in your classroom today.
Providing a Sensory Banquet
The key to creating robust neural networks in students' brains is to give them a rich sensory diet as they engage with material. Here's what Ronald Kotulak says in Inside the Brain: "The brain gobbles up its external environment in bites and chunks through its sensory system: vision, hearing, smell, touch, and taste. Then the digested world is reassembled in the form of trillions of connections between brain cells that are constantly growing or dying, or becoming stronger or weaker, depending upon the richness of the banquet." By engaging many different senses, you lay the information down through multiple interconnected neural pathways, placing it in long-term memory and making it robust and "nuke-proof."
Map of the Reading Brain: Imagine that you ask your students to read a paragraph about the Treaty of Versailles. The information comes in through the eyes, imprints upon the retinas, and then travels through the optic nerves to the occipital lobe at the back of the brain. The information there is decoded into words, and impulses travel to the temporal lobes, where language processing occurs. The impulses then arc up to the frontal lobe, where the meaning of the words is evaluated.
Map of the Writing Brain: If you now ask students to freewrite to reflect on the Treaty of Versailles, the brain fires back and forth between the frontal lobe and the temporal lobe, putting thoughts into words. It also sends signals back and forth between the frontal lobe and the limbic system as students explore their feelings about the treaty. Meanwhile, neural pathways fire from the temporal lobe to the cerebellum, which controls the motor neurons that make the fingers move the pen or press keys on the keyboard. And, of course, when a student writes, he or she is also simultaneously reading, so the map of the reading brain gets overlaid on that of the writing brain. The parietal lobe monitors it all, helping with sensory integration.
Map of the Listening Brain: After students write paragraph reflections on the Treaty of Versailles, you ask them to share their thoughts with the class. One student speaks, and the others listen. The impulse this time comes in through the ears and gets processed in the temporal lobes and also the limbic system. Much of the meaning of oral communication comes through tone of voice, which reveals the speaker's feelings about the topic, the audience, the context, and the purpose. And don't forget about the eyes. Much more information comes from nonverbal cues such as posture, facial expression, and gesture. So the signal from the eyes goes to the occipital lobes and through the limbic system before converging with the auditory signals in the frontal cortex. Again, the parietal lobes help integrate the senses.
Map of the Speaking Brain: When the time comes for a given student to read his or her thoughts aloud, we have the reading brain map overlaid on a speaking brain map, which works much like the writing brain did. This time, however, the motor impulses go to the mouth instead of the fingers. And, of course, as we speak, we listen to ourselves and others around us, so the listening map is also overlaid. And the parietal lobe is once again integrating all of the impulses.
From Communicating to Collaborating
So, we've seen how multimodal presentation styles (visual, auditory, oral, and motor) fire the brain in different ways, creating diverse connections and pathways about a specific topic. But students can also go beyond reading, writing, listening, and speaking by collaborating toward a common goal. Imagine that you assign students to be ambassadors from the Allied Powers who constructed the Treaty of Versailles. Each student will represent one of the Allied nations: France, Britain, Italy, the United States, or Japan. Groups of five students must convene to create a new Treaty of Versailles to try to fix the problems with the actual treaty. Students must think about what their nations need, what other nations need, the problems with the original treaty, and ways to address the problems, all while acting, reacting, and interacting. Every sensory system and every motor system is engaged, and the emotional context is critical as well. Students are experiencing the Treaty of Versailles, not just reading about it and taking a test. This rich banquet of sensory and motor experience means the information will be well and truly learned, placed in long-term memory. In his article, "Neural Pathway Development," Dr. Gene Van Tassell shows how learning through varied sensory pathways can help shift information from short-term memory (STM) to long-term memory (LTM): "Memories can be stored with visual, audio, or other sensory perceptions. These types of storage are part of an LTM system. Educators are wise to encourage LTM storage as opposed to STM storage, which requires rehearsal and a minimum of cross-referencing in the labeling system. With more pathways sensitized and networked within the brain, it is easier to access items in the memory where they might be stored."
https://k12.thoughtfullearning.com/blogpost/networking-your-students-neurons
David Poeppel, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany, and Department of Psychology, New York University, New York, NY, U.S.A.; Pasko Rakic, Yale University, New Haven, CT, U.S.A.; Terry Sejnowski, Salk Institute, La Jolla, CA, U.S.A.; Wolf Singer, Ernst Strüngmann Institute, Frankfurt am Main, Germany; Peter Strick, Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, U.S.A.; Julia Lupp, Ernst Strüngmann Forum, Frankfurt, Germany.
Recent advances in novel and powerful methods have revolutionized neuroscience and led to an exponential growth of the database on the structural and functional organization of the brain. This is particularly true for the cerebral cortex, which appeared late in evolution, exhibits exceedingly complex circuitry, and is held responsible for the emergence of cognitive functions specific to humans. Yet our concepts have advanced less than our ability to characterize and intervene with neural circuits at ever greater resolutions.
Recent advances in artificial intelligence, referred to as "deep neural networks" (DNNs), have led to the design of artificial systems whose performance on selected cognitive tasks approaches that of humans. Striking structural similarities support the view that DNNs exploit the same computational principles as natural brains: (a) both DNNs and sensory systems consist of multiple, hierarchically organized layers of integrator units that are connected via diverging and converging feedforward pathways; (b) the gain of these connections is adjusted by an iterative learning process to generate invariant and well-classifiable responses to trained patterns at the output layer; (c) the response properties of the units "recorded" in DNNs trained with natural objects and scenes resemble those of neurons at comparable hierarchical levels of natural systems. However, the implementation of the learning process is different: unsupervised and supervised Hebbian synaptic modifications in biological systems versus an error-driven backpropagation algorithm in DNNs. Although backpropagation is biologically implausible, it leads to modifications similar to Hebbian learning. DNNs provide a model system for analyzing the principles by which information is distributed in neural populations. However, several arguments suggest that natural systems have additional strategies of information processing that are likely to differ radically from those currently used in AI systems. Network nodes in natural systems have a high propensity to oscillate. These new insights justify considering the brain as a complex, self-organized system with nonlinear dynamics, in which principles of distributed, parallel processing coexist with serial operations within highly interconnected networks. The observed dynamics suggest that cortical networks are capable of providing an extremely high-dimensional state space in which a large amount of evolutionarily and ontogenetically acquired information can coexist and be accessible to rapid parallel search for the interpretation of sensory signals and the generation of complex motor commands.
In 1987, a Dahlem Conference on the "Neurobiology of Neocortex" was convened "to identify principles of cortical operations, to challenge system specialists to become more eclectic in their interests, to exchange information about various systems, and to evaluate common properties among cytoarchitectonically and functionally distant areas of the cortex" (Cortex 1.0, see sidebar).
At the time, systems neuroscience was guided by a behaviorist stance. Stimulus-response paradigms prevailed, and the research strategy consisted of analyzing the transformation of response properties of individual neurons along processing streams extending from sensory organs to executive structures. This approach was extremely successful and supported the notion of serial processing across hierarchically organized cortical areas. This view agreed with early anatomical data, which emphasized that feedforward connections exhibit high topographic precision and possess strong driving synapses, whereas feedback connections are diffuse and only modulatory. However, advances in the analysis of the cortical connectome, the introduction of multisite recording techniques, and the development of imaging methods to assess whole-brain activity have generated data that (a) necessitate an extension of classical views, (b) raise novel questions, and (c) are likely to provide new solutions to old problems.
In 1987, genetic determinants of cortical organization were viewed in terms of the prevailing notions of the DNA-RNA-protein sequence, which establishes the initial cell phenotype and its subsequent connections. Evolutionary changes were considered to be the result of random gene mutations which, if reproducible and positive, help survival of the species. In the meantime, advances in -omic methods and concepts of molecular biology have elaborated and modified this schema to include the role of epigenetic mechanisms, including regulatory elements such as non-coding DNA and miRNA. The evolutionary expansion and elaboration of the cerebral cortex that culminates in humans is considered to be a result not only of the increased number of cortical neurons, but also of the genesis of new cell phenotypes, the modification of neuronal migration, and the introduction of new cortical areas along with their local and long-distance connections. The genetic and molecular origins of these evolutionary innovations are only beginning to be understood, and the data accumulated so far indicate the validity of the initial concepts, which need to be modified to include findings obtained with new techniques and approaches.
The novel anatomical and functional data suggested that processing is distributed in densely coupled, recurrent networks capable of supporting complex dynamics. Even simple cognitive and executive functions have been shown to involve widely distributed networks. Furthermore, it became increasingly clear that the brain plays an active part not only in the generation of movements but also in the processing of sensory information; this led to the notions of active sensing, predictive coding, and Bayesian inference. These newly discovered processing strategies obviously require a high degree of coordination of the distributed processes, suggesting that special mechanisms are implemented to dynamically bind local processes into coherent global states and to configure functional networks on the fly in a context- and goal-dependent way. To examine the mechanisms serving this self-organizing coordination, an Ernst Strüngmann Forum was convened in 2009 (Cortex 2.0, see sidebar). Since then there have again been dramatic advances in novel techniques for neuroscientific inquiry, and these have been game-changing in terms of our ability to characterize and intervene with neural circuits at ever higher resolutions. Also, new concepts on computational strategies have been developed that may be relevant for neuroscience.
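Predictive coding, mentioned above, can be illustrated with a deliberately minimal sketch: a latent estimate is refined until its top-down prediction explains the input, with only the residual prediction error driving the updates. The network size and parameters are illustrative assumptions, and the generative weights are taken as given (learned elsewhere, e.g. by something Hebbian-like); this is not a model proposed by the Forum.

```python
# Hedged toy illustration of predictive coding: error-driven refinement of a
# latent estimate until its top-down prediction matches the input.
import numpy as np

rng = np.random.default_rng(3)
n_input, n_latent = 8, 2
U = rng.normal(size=(n_input, n_latent))         # top-down weights, assumed already learned

def infer(x, U, n_steps=300, lr=0.05, prior=0.01):
    """Refine latent causes r until the prediction U @ r explains input x."""
    r = np.zeros(U.shape[1])
    for _ in range(n_steps):
        error = x - U @ r                         # prediction error carried bottom-up
        r += lr * (U.T @ error - prior * r)       # error-driven correction with weak prior
    return r, error

r_true = rng.normal(size=n_latent)
x = U @ r_true + 0.05 * rng.normal(size=n_input)  # noisy input generated by hidden causes
r_hat, residual = infer(x, U)
print("true causes   :", np.round(r_true, 2))
print("inferred      :", np.round(r_hat, 2))
print("residual norm :", round(float(np.linalg.norm(residual)), 3))
```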
Theoreticians have begun to explore and appreciate the computational power of self-organizing recurrent neuronal networks (SORNs), the respective concepts being referred to as reservoir, echo-state, and liquid computing (a minimal echo-state sketch appears at the end of this entry). Neurobiological investigations of these concepts are, however, still rare.
This Forum (Cortex 3.0) is being convened to explore to what extent the rich data accumulated over the past decade can be embedded in unifying conceptual frameworks. It is our hope that it will contribute to the identification of gaps in knowledge and generate suggestions for promising research directions. To do so, we will approach these goals from four angles, summarized briefly below.
Evolution: What distinguishes cerebral neocortex from other layered structures (cerebellum, tectum, hippocampus, pallium in reptiles and birds) and integrative centers of invertebrates (insects, cephalopods)? Is there any evidence for independent evolution of cortical structures? Are there some unique and novel principles of neuronal organization (a question also for Group 2)? Which factors cause the augmentation of cortical surface? What are the characteristics of areas added late in evolution? An even more intriguing and challenging question is how the human neocortex (generally considered the biological substrate of some unique cognitive abilities) acquires new types of neurons and patterns of synaptic connections that are not observed in other mammals. How are such additional cortical areas integrated in existing architectures (a question also for Group 2)?
Ontogeny: Are the radial unit and protomap hypotheses of cortical development and evolution, which postulate that the columnar, laminar and areal organization of the neocortex is formed by migration of postmitotic neurons from the proliferative ventricular (VZ) and subventricular (SVZ) zones of the dorsal telencephalon, sufficient and the final cornerstone? What is the role of the subplate (SP) zone? How are interneurons, which immigrate from the SVZ of the ventral telencephalon, incorporated? How are late-arriving neurons integrated into existing circuitry? When does neurogenesis in the cerebral cortex stop? What determines cytoarchitectonic differences and input-output connections between areas? Which components of cortical circuitry are genetically predetermined and fixed, and which are modifiable by epigenetic influences? What distinguishes early (classical critical period) and late (adolescence) developmental periods of the cortex? What is the criterion for maturity? What constrains adult plasticity? How does a deficit in cell production, placement and connectivity impact the cognitive abilities of offspring?
Intraareal connectivity: Is there a canonical circuit for the networks of excitatory and inhibitory neurons across areas and species? Are there specific features which characterize primates and humans? Do cytoarchitectonic differences reflect different computational functions, differences in afferent and efferent connectivity, or both? Is the concept of a cortical module justified? How do areal boundaries affect intrinsic connectivity (tangential connections, dendritic arbors, distribution of modulatory inputs)?
Interareal connectivity: Is the coexistence of cortico-cortical and cortico-thalamo-cortical feedforward loops universal? Does the same principle hold for top-down connections? Are there exceptions to the rule of reciprocity of interareal connections? What is the evidence on inhibitory long-range connections?
Are there rules explaining the heterogeneity of conduction velocities (myelinated vs. unmyelinated, thick vs. thin axons)? How are noncortical processors connected to neocortex (cerebellum, basal ganglia, hippocampus, tectum, limbic nuclei, modulatory systems), and are there overarching principles? Can graph theory provide unifying concepts? Can the cortical connectome be deconvolved to approximate a deep neuronal network, or are there principal differences?
Cellular level: Is there anything special with respect to the computational capacity of excitatory and inhibitory neuron classes (variability of conductances, pacemaker currents, coincidence sensitivity, etc.)? What are the rules for and mechanisms of synaptic plasticity? Does STDP account for all forms?
Microcircuits: How are the diverse receptive, response, and motion fields generated, and what are their functions (cardinal cells or members of Hebbian assemblies)? How and why are these responses modulated so extensively by cross-modal interactions, central states, self-generated movements, attention and reward expectation? What are the specific functions emerging from characteristic features of circuitry (recurrency, inhibitory networks)? Are the resulting dynamics functionally relevant or dysfunctional epiphenomena (oscillations in distinct frequency bands, correlations, synchrony, phase shifts, self-generated spatio-temporal patterns)? How is "activation" or "inactivation" defined in EEG, MEG, ECoG and fMRI signals, and how are variations in these signals related to underlying network activities? How are memories (short- and long-term) formed, encoded, and read out? How are priors stored in cortical networks, and how are they compared with input signals? What is propagated: error signals, signals matching expectations (priors), or both, and if so, via the same or different pathways?
Interareal interactions: Which are appropriate tools for the analysis of network interactions (e.g. coherence analysis, measures of Granger causality, dynamic causal modeling, multivariate techniques), and which are the relevant temporal and spatial scales (correlated BOLD fluctuations, phase locking and cross-frequency coupling of EEG/MEG/ECoG signals, spike-field correlations, spike-spike correlations)? Are functional networks dynamically configured, or do they just reflect the backbone of fixed anatomical connections? Is information contained in resting-state activity? Can we construct an inventory of computational primitives that hold across different areas (local vs. generic)? How are distributed representations of sensory objects configured and translated into executive commands? How can arbitrary correspondence between sensory and motor maps be established? How is top-down selection of inputs initiated and realized (e.g., feature-specific attentional selection)? How can the system distinguish between activity arising from computations toward a result and activity representing a result? What is the signature of consistent activation states that lead to "eureka" experiences, trigger activation of reward systems, and activate "now print" commands for memory formation? Are there computational principles beyond those considered in classical neuronal network theory that capitalize on the properties of complex, self-organizing dynamic systems with nonlinear dynamics?
The neurobiological and functional properties of cerebral cortex, as discussed and debated in detail in the other groups, must be accountable, in the end, to the attributes of human perception, cognition, and affect. The terrific progress at the implementational level of description must "align" with the complex and dynamic features that form the basis for human thought and action. Can we identify linking hypotheses that allow us to go beyond mere correlational descriptions of how cortex underpins the suite of functions comprising cognition, affect and action? Building on that broader question, what are the computational parts that form the basis for human experience? How might we account for compelling psychological phenomena such as drawing on one's "theory of mind," the ability to plan, hallucinate, and structure the future (mental time travel), or the sense of agency and volition? Can we make paradigm-shifting progress in our understanding of consciousness, binding, and reportability? Can a deeper understanding of processing sequences inform our mechanistic explanations of predictive coding, Bayesian hypotheses, and abstract structures such as syntax, and, more broadly, illuminate the temporal structure of perceptual and cognitive experience? How do we store and retrieve the information that lies at the very basis of who we are (self) and what we do (remember facts, process words)?
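To make the reservoir/echo-state idea mentioned earlier in this entry concrete, here is a minimal echo-state network: a fixed random recurrent "reservoir" is driven by an input signal, and only a linear readout is trained. The sizes, spectral radius, and the next-step sine-prediction task are illustrative assumptions, not anything proposed by the Forum.

```python
# Hedged minimal echo-state network (reservoir computing) sketch.
import numpy as np

rng = np.random.default_rng(4)
n_res, n_in = 200, 1

# Fixed random input and recurrent weights; only the readout is trained.
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()    # scale spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in[:, 0] * u_t)
        states.append(x.copy())
    return np.array(states)

# Task: predict the next sample of a sine wave from its current sample.
t = np.arange(3000)
signal = np.sin(2 * np.pi * t / 50)
states = run_reservoir(signal[:-1])
targets = signal[1:]

# Ridge-regression readout, discarding a 100-step washout period.
X, y = states[100:], targets[100:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("readout MSE:", round(float(np.mean((pred - y) ** 2)), 6))
```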
https://esforum.de/forums/esf27_Cerebral_Cortex_3.html
- In vitro Magnetic Stimulation: A Simple Stimulation Device to Deliver Defined Low Intensity Electromagnetic Fields. - A craniofacial-specific monosynaptic circuit enables heightened affective pain. - A device to study the effects of stretch gradients on cell behavior. - A disinhibitory circuit mediates motor integration in the somatosensory cortex. - A multi-channel whisker stimulator for producing spatiotemporally complex tactile stimuli. - A peripheral cannabinoid mechanism suppresses spinal fos protein expression and pain behavior in a rat model of inflammation. - Activation of p38 mitogen-activated protein kinase in spinal microglia contributes to incision-induced mechanical allodynia. - Altered urinary bladder function in mice lacking the vanilloid receptor TRPV1. - Antidepressant-like effects of cranial stimulation within a low-energy magnetic field in rats. - Behavioral detection of tactile stimuli during 7-12 Hz cortical oscillations in awake rats. - Behavioral modulation of tactile responses in the rat somatosensory system. - Behavioral properties of the trigeminal somatosensory system in rats performing whisker-dependent tactile discriminations. - Bilateral integration of whisker information in the primary somatosensory cortex of rats. - Biomechanical analysis of silicon microelectrode-induced strain in the brain. - Brain mechanisms supporting the modulation of pain by mindfulness meditation. - Care of preterm infants: programs of research and their relationship to developmental science. - Catechol-O-methyltransferase gene polymorphisms are associated with multiple pain-evoking stimuli. - Colonic irritation in the rat sensitizes urinary bladder afferents to mechanical and chemical stimuli: an afferent origin of pelvic organ cross-sensitization. - Computing with thalamocortical ensembles during different behavioural states. - ConceFT for Time-Varying Heart Rate Variability Analysis as a Measure of Noxious Stimulation During General Anesthesia. - Developmental chlorpyrifos effects on hatchling zebrafish swimming behavior. - Differential orientation of 10T1/2 mesenchymal cells on non-uniform stretch environments. - Distinct neural signatures of threat learning in adolescents and adults. - Does oral experience terminate ingestion? - Dynamic and distributed properties of many-neuron ensembles in the ventral posterior medial thalamus of awake rats. - Dynamic mechanical response of elastic spherical inclusions to impulsive acoustic radiation force excitation. - Dysferlin, annexin A1, and mitsugumin 53 are upregulated in muscular dystrophy and localize to longitudinal tubules of the T-system with stretch. - Effects of cardiac motion on right coronary artery hemodynamics - Estimates of echo correlation and measurement bias in acoustic radiation force impulse imaging. - Estimation of shear wave speed in the human uterine cervix. - Expanding the primate body schema in sensorimotor cortex by virtual touches of an avatar. - Fluid shear stress induces endothelial transforming growth factor beta-1 transcription and production. Modulation by potassium channel blockade. - Graded exposure therapy for addressing claustrophobic reactions to continuous positive airway pressure: a case series report. - Heterogeneous integration of bilateral whisker signals by neurons in primary somatosensory cortex of awake rats. - In vivo luminescent imaging of NF-κB activity and NF-κB-related serum cytokine levels predict pain sensitivities in a rodent model of peripheral neuropathy. 
- In vivo vomeronasal stimulation reveals sensory encoding of conspecific and allospecific cues by the mouse accessory olfactory bulb. - Integration of bilateral whisker stimuli in rats: role of the whisker barrel cortices. - Integration of oral habituation and gastric signals in decerebrate rat pups. - Interactions of parents and nurses with high-risk preterm infants. - Investigation of central pain processing in shoulder pain: converging results from 2 musculoskeletal pain models. - Lack of evidence for ectopic sprouting of genetically labeled Aβ touch afferents in inflammatory and neuropathic trigeminal pain. - Layer-specific somatosensory cortical activation during active tactile discrimination. - Light scattering by polymorphonuclear leukocytes stimulated to aggregate under various pharmacologic conditions. - Light touch induces ERK activation in superficial dorsal horn neurons after inflammation: involvement of spinal astrocytes and JNK signaling in touch-evoked central sensitization and mechanical allodynia. - Maternal satisfaction with administering infant interventions in the neonatal intensive care unit. - Mechanical sensation and pain thresholds in patients with chronic arthropathies. - Mechanoelectrical excitation by fluid jets in monolayers of cultured cardiac myocytes. - Mechanoreception at the cellular level: the detection, interpretation, and diversity of responses to mechanical signals. - Mechanosensitive ion channel Piezo2 is important for enterochromaffin cell response to mechanical forces. - Meniett clinical trial: long-term follow-up. - Methodological Considerations for the Temporal Summation of Second Pain. - Modeling the adaptive permeability response of porcine iliac arteries to acute changes in mural shear - Modulation of network excitability by persistent activity: how working memory affects the response to incoming stimuli. - Modulation of rat chorda tympani nerve activity by lingual nerve stimulation. - Monkey median nerve repaired by nerve graft or collagen nerve guide tube. - Monocyte rolling, arrest and spreading on IL-4-activated vascular endothelium under flow is mediated via sequential action of L-selectin, beta 1-integrins, and beta 2-integrins. - Neural substrate of modified and unmodified pathways for learning in monkey vestibuloocular reflex. - Neurogenic bowel treatments and continence outcomes in children and adults with myelomeningocele. - Neurophysiology: electrically evoking sensory experience. - Neurotrophins: peripherally and centrally acting modulators of tactile stimulus-induced inflammatory pain hypersensitivity. - Nociceptors are interleukin-1beta sensors. - Nuclear factor-kappa B regulates pain and COMT expression in a rodent model of inflammation. - PPI deficit induced by amphetamine is attenuated by the histamine H1 antagonist pyrilamine, but is exacerbated by the serotonin 5-HT2 antagonist ketanserin. - Paclitaxel-induced neuropathic hypersensitivity in mice: responses in 10 inbred mouse strains. - Peripheral axonal injury results in reduced mu opioid receptor pre- and post-synaptic action in the spinal cord. - Peripheral noxious stimulation induces phosphorylation of the NMDA receptor NR1 subunit at the PKC-dependent site, serine-896, in spinal cord dorsal horn neurons. - Persistent Catechol-O-methyltransferase-dependent Pain Is Initiated by Peripheral β-Adrenergic Receptors. - Phase I Safety Trial: Extended Daily Peripheral Sensory Stimulation Using a Wrist-Worn Vibrator in Stroke Survivors. 
- Prevalence and correlates of maternal early stimulation behaviors during pregnancy in northern Ghana: a cross-sectional survey. - Principal component analysis of neuronal ensemble activity reveals multidimensional somatosensory representations. - Radiation force imaging of viscoelastic properties with reduced artifacts. - Rapid neck muscle adaptation alters the head kinematics of aware and unaware subjects undergoing multiple whiplash-like perturbations. - Receptor endocytosis and dendrite reshaping in spinal neurons after somatosensory stimulation. - Repetitive transcranial magnetic stimulation to SMA worsens complex movements in Parkinson's disease. - Retention of VOR gain following short-term VOR adaptation. - Selective attention and multisensory integration: multiple phases of effects on the evoked brain activity. - Simultaneous encoding of tactile information by three primate cortical areas. - Simultaneous top-down modulation of the primary somatosensory cortex and thalamic nuclei during active tactile discrimination. - Spatiotemporal properties of layer V neurons of the rat primary somatosensory cortex. - Spatiotemporal structure of somatosensory responses of many-neuron ensembles in the rat ventral posterior medial nucleus of the thalamus. - Striatal firing rate reflects head movement velocity. - Supraventricular tachycardia terminated by external mechanical stimulation: a case of "pothole conversion". - TRPV4 mediates pain-related behavior induced by mild hypertonic stimuli in the presence of inflammatory mediator. - Tactile-kinesthetic stimulation effects on sympathetic and adrenocortical function in preterm infants. - Tactile/kinesthetic stimulation effects on preterm neonates. - Tension and combined tension-extension structural response and tolerance properties of the human male ligamentous cervical spine. - The force-driven conformations of heparin studied with single molecule force microscopy. - The relationship of the audible pop to hypoalgesia associated with high-velocity, low-amplitude thrust manipulation: a secondary analysis of an experimental study in pain-free participants. - The role of stimulus salience and attentional capture across the neural hierarchy in a stop-signal task. - The three-dimensional vestibulo-ocular reflex evoked by high-acceleration rotations in the squirrel monkey. - Trigeminal responses to thermal stimulation of the oral cavity in rattlesnakes (Crotalus viridis) before and after bilateral anesthetization of the facial pit organs. - Ultrasonic disruption of the blood-brain barrier enables in vivo functional mapping of the mouse barrel field cortex with manganese-enhanced MRI. - Ultrasonic tracking of acoustic radiation force-induced displacements in homogeneous media. - Ultrastructral effects of in virto experimentation on right ventricular papillary muscle from cats in hypovolemic shock (38487). - Uranyl acetate-induced sensorimotor deficit and increased nitric oxide generation in the central nervous system in rats. - Ventral Tegmental Dopamine Neurons Control the Impulse Vector during Motivated Behavior.
https://scholars.duke.edu/display/meshD010812
Both visual and auditory stimuli are sent to the brain via ganglion cells (retinal and spiral ganglion cells, respectively). Both are the first cells along their respective pathways that produce action potentials. My question concerns typical frequencies of action potentials sent along the axons of visual vs. auditory ganglion cells as a reaction to a "typical stimulus", i.e. a medium-length, medium-intensity signal of some fixed frequency (e.g. light: red; sound: 440 Hz) against a white (respectively silent) background. Are these frequencies of comparable range, or does one type of ganglion cell (retinal vs. spiral) fire at a significantly higher or lower rate than the other? (The question would not make sense if the physical frequencies of light and sound, which trigger the receptor cells, were coded by frequencies of action potentials. But I assume that this is not the case, is it?)
The auditory brainstem shows "phase-locking" typically up to 1-3 kHz at most; 3000 Hz is an incredibly high firing rate for a single neuron, but this phase-locking is achieved not by individual cells firing in phase with an auditory stimulus, but rather by a population of cells that tend to fire in phase, such that if you average across the population you get a phase-locked population volley. In some cases, in some animals, this phase locking can even reach higher frequencies (see here for example). However, this phase locking seems primarily important for sound localization via interaural time differences. Frequency itself is encoded by which population of hair cells is activated, according to the properties of the basilar membrane. Firing rates of individual spiral ganglion cells exceed 100 Hz only at very high stimulus intensities. Similar to the spiral ganglion cells, retinal ganglion cells primarily encode intensity information in their firing rates. However, in both cases, it's important to recognize how crucial adaptation is in sensory systems. RGCs in particular fire primarily to transients, so it is typical to use light flashes, drifting gratings, or other dynamic stimuli. The response to a "medium-length, medium-intensity signal of some fixed wavelength" is going to be brief, followed by silence, not a constant response like you imply.
A developmental switch in the response of DRG neurons to ETS transcription factor signaling
Two ETS transcription factors of the Pea3 subfamily are induced in subpopulations of dorsal root ganglion (DRG) sensory and spinal motor neurons by target-derived factors. Their expression controls late aspects of neuronal differentiation such as target invasion and branching. Here, we show that the late onset of ETS gene expression is an essential requirement for normal sensory neuron differentiation. We provide genetic evidence in the mouse that precocious ETS expression in DRG sensory neurons perturbs axonal projections, the acquisition of terminal differentiation markers, and their dependence on neurotrophic support. Together, our findings indicate that DRG sensory neurons exhibit a temporal developmental switch that can be revealed by distinct responses to ETS transcription factor signaling at sequential steps of neuronal maturation.
A developmental switch in the response of DRG neurons to ETS transcription factor signaling
Two ETS transcription factors of the Pea3 subfamily are induced in subpopulations of dorsal root ganglion (DRG) sensory and spinal motor neurons by target-derived factors. Their expression controls late aspects of neuronal differentiation such as target invasion and branching. Here, we show that the late onset of ETS gene expression is an essential requirement for normal sensory neuron differentiation. We provide genetic evidence in the mouse that precocious ETS expression in DRG sensory neurons perturbs axonal projections, the acquisition of terminal differentiation markers, and their dependence on neurotrophic support. Together, our findings indicate that DRG sensory neurons exhibit a temporal developmental switch that can be revealed by distinct responses to ETS transcription factor signaling at sequential steps of neuronal maturation.
[Figure captions: Figure 1. Replacement of Er81 by EWS-Pea3. (A) Generation of Er81 EWS-Pea3 mutant mice. Above… Figure 2. Rescue of Ia Proprioceptive Afferent Projections into the Ventral Spinal Cord in Er81… Figure 3. Defects in the Establishment of Sensory Afferent Projections upon Precocious Expression of EWS-Pea3… Figure 4. Neurotrophin-Independent Neurite Outgrowth In Vitro of DRG Neurons Expressing EWS-Pea3 Precociously. Figure 5. DRG Neurons Expressing EWS-Pea3 Isochronically Depend on Neurotrophins for Survival. Figure 6. Loss of Trk Receptor Expression and Increased Survival in DRG Neurons upon Precocious… Figure 7. Gene Expression Analysis upon Induction of Precocious or Isochronic ETS Signaling. Figure 8. Precocious ETS Signaling Induces Gene Expression Changes Cell-Autonomously. Figure 9. Progressive Neuronal Specification Is Paralleled by a Developmental Shift in Response to ETS…]
The Effect of Lipopolysaccharides on Primary Sensory Neurons in Crustacean Models
Many types of gram-negative bacteria are responsible for serious infections, such as septicemia. Lipopolysaccharides (LPS), the endotoxins released from these bacteria, are responsible for inducing the immune response of organisms such as crustaceans, which have well-conserved Toll-like receptors. Little is known about the direct impact LPS has on primary sensory neurons apart from this immune reaction. Previous studies have demonstrated that motor neurons increase both spontaneous and evoked firing frequencies with LPS, but differences have been observed across species. Here, the effects of LPS from two strains of gram-negative bacteria (Serratia marcescens and Pseudomonas aeruginosa) on the firing frequency of primary sensory proprioceptors in the crab propodite-dactylopodite (PD) organ and the crayfish muscle receptor organ (MRO) are examined. These sensory organs are relevant to mammalian proprioception, as the MRO is analogous to the mammalian muscle spindle, and the PD organ allows for the separation of motor nerve function from sensory neuronal transduction. The neuronal function of the two model organisms was studied through the stretch-activation of rapidly-adapting and slowly-adapting sensory neurons. Results indicated that there is no statistically significant impact on sensory transduction through the application of LPS; however, in the crab PD organ, the application of LPS from each strain decreased the nerve activity except when the LPS from both bacteria was applied together. In the crayfish MRO, there usually was an increase in nerve activity. In saline controls, there was also an increase in firing of the neurons in both preparations, but this also was not statistically significant. Interestingly, the MRO muscle fibers often contracted upon the addition of LPS, perhaps indicating that the known impact LPS has on motor nerve function is partially responsible for the results obtained.
The Brain ‘Rotates’ Memories to Save Them From New Sensations
Research in mice shows that neural representations of sensory information get rotated 90 degrees to transform them into memories. In this orthogonal arrangement, the memories and sensations do not interfere with one another.
Illustration: Samuel Velasco/Quanta Magazine
During every waking moment, we humans and other animals have to balance on the edge of our awareness of past and present. We must absorb new sensory information about the world around us while holding on to short-term memories of earlier observations or events. Our ability to make sense of our surroundings, to learn, to act, and to think all depend on constant, nimble interactions between perception and memory. But to accomplish this, the brain has to keep the two distinct; otherwise, incoming data streams could interfere with representations of previous stimuli and cause us to overwrite or misinterpret important contextual information. Compounding that challenge, a body of research hints that the brain does not neatly partition short-term memory function exclusively into higher cognitive areas like the prefrontal cortex. Instead, the sensory regions and other lower cortical centers that detect and represent experiences may also encode and store memories of them. And yet those memories can’t be allowed to intrude on our perception of the present, or to be randomly rewritten by new experiences. A paper published recently in Nature Neuroscience may finally explain how the brain’s protective buffer works. A pair of researchers showed that, to represent current and past stimuli simultaneously without mutual interference, the brain essentially “rotates” sensory information to encode it as a memory. The two orthogonal representations can then draw from overlapping neural activity without intruding on each other. The details of this mechanism may help to resolve several long-standing debates about memory processing. To figure out how the brain prevents new information and short-term memories from blurring together, Timothy Buschman, a neuroscientist at Princeton University, and Alexandra Libby, a graduate student in his lab, decided to focus on auditory perception in mice. They had the animals passively listen to sequences of four chords over and over again, in what Buschman dubbed “the worst concert ever.” These sequences allowed the mice to establish associations between certain chords, so that when they heard one initial chord versus another, they could predict what sounds would follow. Meanwhile, the researchers trained machine-learning classifiers to analyze the neural activity recorded from the rodents’ auditory cortex during these listening sessions, to determine how the neurons collectively represented each stimulus in the sequence. Buschman and Libby watched how those patterns changed as the mice built up their associations. They found that over time, the neural representations of associated chords began to resemble each other. But they also observed that new, unexpected sensory inputs, such as unfamiliar sequences of chords, could interfere with a mouse’s representations of what it was hearing—in effect, by overwriting its representation of previous inputs. The neurons retroactively changed their encoding of a past stimulus to match what the animal associated with the later stimulus—even if that was wrong.
The researchers wanted to determine how the brain must be correcting for this retroactive interference to preserve accurate memories. So they trained another classifier to identify and differentiate neural patterns that represented memories of the chords in the sequences—the way the neurons were firing, for instance, when an unexpected chord evoked a comparison to a more familiar sequence. The classifier did find intact patterns of activity from memories of the actual chords that had been heard—rather than the false “corrections” written retroactively to uphold older associations—but those memory encodings looked very different from the sensory representations. The memory representations were organized in what neuroscientists describe as an “orthogonal” dimension to the sensory representations, all within the same population of neurons. Buschman likened it to running out of room while taking handwritten notes on a piece of paper. When that happens, “you will rotate your piece of paper 90 degrees and start writing in the margins,” he said. “And that’s basically what the brain is doing. It gets that first sensory input, it writes it down on the piece of paper, and then it rotates that piece of paper 90 degrees so that it can write in a new sensory input without interfering or literally overwriting.” In other words, sensory data was transformed into a memory through a morphing of the neuronal firing patterns. “The information changes because it needs to be protected,” said Anastasia Kiyonaga, a cognitive neuroscientist at UC San Diego who was not involved in the study. This use of orthogonal coding to separate and protect information in the brain has been seen before. For instance, when monkeys are preparing to move, neural activity in their motor cortex represents the potential movement but does so orthogonally to avoid interfering with signals driving actual commands to the muscles. Still, it often hasn’t been clear how the neural activity gets transformed in this way. Buschman and Libby wanted to answer that question for what they were observing in the auditory cortex of their mice. “When I first started in the lab, it was hard for me to imagine how something like that could happen with neural firing activity,” Libby said. She wanted to “open the black box of what the neural network is doing to create this orthogonality.” Experimentally sifting through the possibilities, they ruled out the possibility that different subsets of neurons in the auditory cortex were independently handling the sensory and memory representations. Instead, they showed that the same general population of neurons was involved, and that the activity of the neurons could be divided neatly into two categories. Some were “stable” in their behavior during both the sensory and memory representations, while other “switching” neurons flipped the patterns of their responses for each use. To the researchers’ surprise, this combination of stable and switching neurons was enough to rotate the sensory information and transform it into memory. “That’s the entire magic,” Buschman said. In fact, he and Libby used computational modeling approaches to show that this mechanism was the most efficient way to build the orthogonal representations of sensation and memory: It required fewer neurons and less energy than the alternatives. Buschman and Libby’s findings feed into an emerging trend in neuroscience: that populations of neurons, even in lower sensory regions, are engaged in richer dynamic coding than was previously thought. 
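To make the stable-plus-switching idea concrete, the following toy sketch (an assumption-laden illustration, not the authors' model or data) builds a population in which half the neurons keep their chord selectivity between the sensory and memory epochs and half invert it. When the two subpopulations carry equal amounts of selectivity, the sensory and memory coding axes come out orthogonal.

```python
import numpy as np

n_stable, n_switch = 50, 50                    # toy population; equal selectivity in each half
rng = np.random.default_rng(1)

# Selectivity of each neuron for chord A versus chord B during the sensory epoch.
sel = rng.choice([-1.0, 1.0], size=n_stable + n_switch)

sensory_axis = sel.copy()                      # r(A) - r(B) while the chord is playing
memory_axis = sel.copy()
memory_axis[n_stable:] *= -1                   # "switching" neurons invert their selectivity

cos = memory_axis @ sensory_axis / (np.linalg.norm(memory_axis) * np.linalg.norm(sensory_axis))
print(f"cosine between sensory and memory coding axes: {cos:+.3f}")   # ~0 -> orthogonal
```

With unequal selectivity in the two subpopulations the axes would be only approximately orthogonal, which is one reason to read this strictly as a sketch of the geometry rather than of the recorded data.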
“These parts of the cortex that are lower down in the food chain are also fitted out with really interesting dynamics that maybe we haven’t really appreciated until now,” said Miguel Maravall, a neuroscientist at the University of Sussex who was not involved in the new study. The work could help reconcile two sides of an ongoing debate about whether short-term memories are maintained through constant, persistent representations or through dynamic neural codes that change over time. Instead of coming down on one side or the other, “our results show that basically they were both right,” Buschman said, with stable neurons achieving the former and switching neurons the latter. The combination of processes is useful because “it actually helps with preventing interference and doing this orthogonal rotation.” Buschman and Libby’s study might be relevant in contexts beyond sensory representation. They and other researchers hope to look for this mechanism of orthogonal rotation in other processes: in how the brain keeps track of multiple thoughts or goals at once; in how it engages in a task while dealing with distractions; in how it represents internal states; and in how it controls cognition, including attention processes. “I’m really excited,” Buschman said. Looking at other researchers’ work, “I just remember seeing, there’s a stable neuron, there’s a switching neuron! You see them all over the place now.” Libby is interested in the implications of their results for artificial intelligence research, particularly in the design of architectures useful for AI networks that have to multitask. “I would want to see if people pre-allocating neurons in their neural networks to have stable and switching properties, instead of just random properties, helped their networks in some way,” she said. All in all, “the consequences of this kind of coding of information are going to be really important and really interesting to figure out,” Maravall said. Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
Discussion
In this work, we asked how information from somatosensory and auditory inputs is integrated in the mouse neocortex. With two-photon Ca2+ imaging, we investigated large populations of layer 2/3 neurons across somatosensory and auditory areas with single-cell resolution. We found that neurons across somatosensory cortices are tuned to the frequency of tactile stimulation. The addition of concurrent sound resulted in modulation of these tactile responses in both S1 and S2, and this modulation typically manifested as a suppression of the response. Moreover, the degree of suppression depended on tactile frequency, with responses to low frequencies more inhibited than responses to high frequencies. We also identified a population of neurons in S2 responsive to sound but not to touch. Unlike in auditory cortex, sound responses of many (31 of 82) sound-selective neurons in S2 were strongly inhibited by the addition of tactile stimuli at high tactile frequencies. These neurons were spatially colocalized with S2 touch-selective neurons. The detection of the frequency of mechanical vibrations is important for animals to discern surface texture and to handle tools 30,31, and tuning to spectral frequency in the somatosensory system can encode texture information 32.
In our study, the presence of well-tuned neurons in both S1 and S2 supports the notion that tactile frequency tuning may be a general organizational feature for mouse tactile sensation. The higher proportion of neurons with tuning to lower tactile frequencies in S2 than in S1 may reflect differences in thalamocortical inputs to the two regions. S1 receives strong thalamic drive from the ventral posterior medial nucleus (VPM), while S2 receives a larger share of its thalamocortical input from the posterior medial nucleus (POm) 33,34. Interestingly, although both POm and VPM cells show adaptation, causing decreased response amplitude under high frequency stimulation, POm cells exhibit earlier adaptation than VPM cells 35 and as a result are tuned to lower frequencies than VPM cells. Thus, the tuning properties of neurons in S2 may be inherited from the response properties of thalamic neurons, although they could also reflect longer temporal integration windows in higher areas of cortex 36. We found that the addition of an auditory stimulus modulated tactile responses in both S1 and S2, consistent with the sound-driven hyperpolarizing currents previously observed in mouse S1 37. This modulation has three notable features: (1) Although a similar proportion of neurons in both S1 and S2 were facilitated by sound, more neurons in S2 were inhibited than in S1 (Figs. 3d and 5d). (2) Inhibition of neurons tuned to low tactile frequencies in both S1 and S2 was more severe than inhibition of neurons tuned to high tactile frequencies in the same regions (Figs. 3e and 5e). (3) Sound-driven suppression in S2 is tactile frequency dependent, with stronger inhibition occurring at lower tactile frequencies (Figs. 7 and 8). Previous studies in human and non-human primates have revealed that multimodal integration improves detection of events in the environment 3,38,39. The optimal integration of competing sensory cues involves dominance of the most reliable sensory cue to minimize variance in the estimation of the true stimulus 3. This evaluation of reliability between different sensory cues is a dynamic process, with the weight or value of each stimulus modality being continuously updated 39. Low frequency tactile stimulation is potentially a less salient signal than high frequency tactile stimulation, since it comprises lower velocity whisker motions. Indeed, we observed more suppression of tactile responses at lower tactile stimulus frequencies than at high frequencies (Figs. 7 and 8), indicating that auditory responses are more dominant when tactile stimuli are weak. This result is consistent with the prior observation that, during optimal multimodal integration, the more reliable stimulus modality dominates the response 40. On the other hand, this frequency-dependent integration is complementary to “inverse effectiveness,” where multimodal integration is largest for weak multimodal stimuli near threshold and decreases with increasing stimulus intensity, as has been reported in the superior colliculus 41,42.
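The statement above that the most reliable sensory cue dominates is often formalized as inverse-variance (reliability-weighted) cue combination. The sketch below is a generic illustration with made-up noise levels for a "tactile" and an "auditory" estimate of the same quantity; it is not an analysis from the study being discussed.

```python
import numpy as np

rng = np.random.default_rng(2)
true_value = 1.0
sigma_tactile, sigma_auditory = 0.1, 0.4       # illustrative noise levels only

n = 100_000
tactile = true_value + sigma_tactile * rng.standard_normal(n)
auditory = true_value + sigma_auditory * rng.standard_normal(n)

# Weights proportional to reliability (1 / variance); the more reliable cue dominates.
w_t = sigma_tactile**-2 / (sigma_tactile**-2 + sigma_auditory**-2)
combined = w_t * tactile + (1 - w_t) * auditory

print(f"tactile weight = {w_t:.2f}")
print(f"var tactile    = {tactile.var():.4f}")
print(f"var auditory   = {auditory.var():.4f}")
print(f"var combined   = {combined.var():.4f}   # below either single cue")
```

With these numbers the tactile cue receives most of the weight, and the combined estimate has lower variance than either cue alone, which is the sense in which the more reliable modality dominates.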
Sound-touch modulations may involve more than just direct interactions between the unimodal stimuli themselves. Attention, arousal, motor behavior, and hidden internal states can be influenced by sensory stimuli and they, in turn, can influence the response to a sensory stimulus 23,24,43,44,45. Indeed, multisensory integration, if relevant to behavior, should be associated with a change in the internal state of the animal. Pointing towards this complex interplay of stimuli and internal states, we found that locomotive behavior, while able to influence sensory responses, could not explain sound-driven inhibition of tactile responses on its own (Fig. 4). To untangle these potentially complex interconnections, the underlying cellular and network mechanisms that mediate these interactions need to be uncovered. While our present work focused on neurons in layers 2/3, a fruitful locus of study would be layers 1 and 6, where crossmodal 46 and neuromodulatory 47 inputs are known to be stronger and may thus gate sensory inputs and mediate attentional effects. Previously, it was believed that multimodal influences on activity within classically defined unimodal areas are mediated by feedback from multisensory integration in higher-order cortical regions 48,49. However, human studies using event-related potentials (ERPs) suggest that these multimodal influences may also be carried in the feedforward inputs coming from subcortical regions to unimodal regions 48,50,51. In the present study, we identified a small (1.2%) population of sound-selective neurons within S2 itself. Although prior studies have shown non-matching neurons in primary cortices that respond solely to inputs from another sensory modality 52, the sound-selective neurons we found may play a special computational role in multimodal integration. The sound-driven responses in these neurons were strongly suppressed at high tactile frequencies (Fig. 9a–e), and those neurons inhibited by tactile stimuli are clustered near the center of the whisker-responsive region of S2 (Fig. 10), similar to the spatial organization of non-matching neurons seen in other studies 51,52. The existence of touch-inhibited sound-selective neurons in S2 indicates that they may play a role in the local sound-driven suppression observed in tactile-selective neurons of S2. This winner-take-all circuit (Fig. 10a) could dynamically select a stimulus modality at each moment and, under the right conditions, would be consistent with divisive normalization, a model that has been proposed as a driving force behind multisensory interactions 53,54,55.
Organization of receptive field properties
There is a serial and hierarchical organization of receptive field properties. Each sensory modality is composed of multiple brain areas. As one proceeds from receptor to thalamus to the primary sensory cortex and higher cognitive areas of the brain, receptive fields demonstrate increasingly complex stimulus requirements. For example, in the auditory system, peripheral neurons may respond well to pure tones, whereas some central neurons respond better to frequency-modulated sounds. In the primary visual and somatosensory cortex, receptive fields are selective for the orientation or direction of motion of a stimulus, whereas in higher visual cortical areas, neurons may respond best to images of faces or objects. In the visual and somatosensory systems, receptive fields can be essentially circular or oval regions of retina or skin. By contrast, in the thalamus, visual and somatosensory receptive fields are circular and exhibit centre-surround antagonism, in which onset of a stimulus in the central skin or retinal region elicits activating responses while stimulation of the surrounding regions elicits inhibitory responses. Thus, the same stimulus produces opposite responses in those regions. The effects of stimulus antagonism at different locations are a manifestation of the phenomenon called lateral inhibition.
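A center-surround receptive field of the kind just described is commonly sketched as a difference of Gaussians. The toy 1-D example below (arbitrary widths and a linear response, both assumptions for illustration) shows the logic of lateral inhibition: a spatially uniform stimulus drives center and surround about equally and evokes little net response, while a small centered spot with contrast against its surround drives the unit strongly.

```python
import numpy as np

x = np.linspace(-5, 5, 1001)

def gaussian(x, sigma):
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

# Difference-of-Gaussians receptive field: narrow excitatory center, broad inhibitory surround.
rf = gaussian(x, sigma=0.5) - gaussian(x, sigma=2.0)

uniform = np.ones_like(x)                      # full-field, spatially uniform stimulus
spot = (np.abs(x) < 0.5).astype(float)         # small bright spot confined to the center

print(f"response to uniform field: {rf @ uniform:+.3f}")   # ~0: center and surround cancel
print(f"response to centered spot: {rf @ spot:+.3f}")      # clearly positive
```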
In lateral inhibition, the optimal stimulus is not spatially uniform across the receptive field; rather, it is a discrete spot of light (in the case of the eye) or contact (in the case of a body surface), with contrast between central and surrounding regions. Referring as it does to a region, a receptive field is fundamentally a spatial entity (a portion of the visual field or retina, or a portion of the body surface) that makes the most sense in the visual and somatosensory systems. In the auditory system, hair cells tuned to particular frequencies are located at different locations along the basilar membrane, implying a spatial relevance for auditory receptive fields. In the auditory system one could therefore define a cell’s receptive field as the specific set of frequencies to which the cell responds. In the nervous system generally, the receptive field of a sensory neuron is defined by its synaptic inputs; each cell’s receptive field results from the combination of the fields of all of the neurons providing input to it. Because inputs are not simply summed, the receptive field properties of a neuron commonly are described in terms of the stimuli that elicit responses from the cell.
Integration of Signals from Mechanoreceptors
The configuration of the different types of receptors working in concert in human skin results in a very refined sense of touch. The nociceptive receptors—those that detect pain—are located near the surface. Small, finely calibrated mechanoreceptors—Merkel’s disks and Meissner’s corpuscles—are located in the upper layers and can precisely localize even gentle touch. The large mechanoreceptors—Pacinian corpuscles and Ruffini endings—are located in the lower layers and respond to deeper touch. (Consider that the deep pressure that reaches those deeper receptors would not need to be finely localized.) Both the upper and lower layers of the skin hold rapidly and slowly adapting receptors. Both the primary somatosensory cortex and secondary cortical areas are responsible for processing the complex picture of stimuli transmitted from the interplay of mechanoreceptors.
Stochastic resonance
Stochastic resonance was first discovered in a study of the periodic recurrence of Earth's ice ages. The theory developed out of an effort to understand how the earth's climate oscillates periodically between two relatively stable global temperature states, one "normal" and the other an "ice age" state. The conventional explanation was that variations in the eccentricity of earth's orbital path occurred with a period of about 100,000 years and caused the average temperature to shift dramatically. The measured variation in the eccentricity had a relatively small amplitude compared to the dramatic temperature change, however, and stochastic resonance was developed to show that the temperature change due to the weak eccentricity oscillation, together with added stochastic variation due to the unpredictable energy output of the sun (known as the solar constant), could cause the temperature to move in a nonlinear fashion between two stable dynamic states. As an example of stochastic resonance, consider the following demonstration after Simonotto et al. The image to the left shows an original picture of the Arc de Triomphe in Paris. If this image is passed through a nonlinear threshold filter in which each pixel detects light intensity as above or below a given threshold, a representation of the image is obtained as in the images to the right.
It can be hard to discern the objects in the filtered image in the top left because of the reduced amount of information present. The addition of noise before the threshold operation can result in a more recognizable output. The image below shows four versions of the image after the threshold operation with different levels of noise variance; the image in the top right-hand corner appears to have the optimal level of noise, allowing the Arc to be recognized, but other noise variances reveal different features. The quality of the image resulting from stochastic resonance can be improved further by blurring, or subjecting the image to low-pass spatial filtering. This can be approximated in the visual system by squinting one's eyes or moving away from the image. This allows the observer's visual system to average the pixel intensities over areas, which is in effect a low-pass filter. The resonance breaks up the harmonic distortion due to the threshold operation by spreading the distortion across the spectrum, and the low-pass filter eliminates much of the noise that has been pushed into higher spatial frequencies. A similar output could be achieved by examining multiple threshold levels, so in a sense the addition of noise creates a new effective threshold for the measurement device.
Cuticular mechanoreceptors in crayfish
Evidence for stochastic resonance in a sensory system was first found in nerve signals from the mechanoreceptors located on the tail fan of the crayfish (Procambarus clarkii). An appendage from the tail fan was mechanically stimulated to trigger the cuticular hairs that the crayfish uses to detect pressure waves in water. The stimulus consisted of sinusoidal motion at 55.2 Hz with random Gaussian noise at varying levels of average intensity. Spikes along the nerve root of the terminal abdominal ganglion were recorded extracellularly for 11 cells and analyzed to determine the SNR. Two separate measurements were used to estimate the signal-to-noise ratio of the neural response. The first was based on the Fourier power spectrum of the spike time series response. The power spectra from the averaged spike data for three different noise intensities all showed a clear peak at the 55.2 Hz component with different average levels of broadband noise. The relatively low- and mid-level added noise conditions also show a second harmonic component at about 110 Hz. The mid-level noise condition clearly shows a stronger component at the signal of interest than either low- or high-level noise, and the harmonic component is greatly reduced at mid-level noise and not present in the high-level noise. A standard measure of the SNR as a function of noise variance shows a clear peak at the mid-level noise condition. The other measure used for SNR was based on the inter-spike interval histogram instead of the power spectrum. A similar peak was found on a plot of SNR as a function of noise variance for mid-level noise, although it was slightly different from that found using the power spectrum measurement. These data support the claim that noise can enhance detection at the single neuron level but are not enough to establish that noise helps the crayfish detect weak signals in a natural setting. Experiments performed after this at a slightly higher level of analysis establish behavioral effects of stochastic resonance in other organisms; these are described below.
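The threshold demonstration above, and the SNR-versus-noise-variance peak reported for the crayfish data, can be reproduced qualitatively in a few lines: a subthreshold sinusoid is passed through a hard threshold with different amounts of added Gaussian noise, and the output power at the signal frequency is largest at an intermediate noise level. The threshold, amplitudes, and frequencies below are arbitrary choices for illustration, not values from the experiments.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, f_sig, duration = 1000.0, 10.0, 20.0       # sample rate, signal frequency (Hz), seconds
t = np.arange(0, duration, 1 / fs)
signal = 0.5 * np.sin(2 * np.pi * f_sig * t)   # subthreshold: amplitude 0.5 < threshold 1.0
threshold = 1.0

def power_at_signal_freq(noise_sd):
    """Output power at f_sig after thresholding signal + noise."""
    out = (signal + noise_sd * rng.standard_normal(t.size) > threshold).astype(float)
    spectrum = np.abs(np.fft.rfft(out - out.mean()))**2
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - f_sig))]

for noise_sd in (0.05, 0.2, 0.5, 1.0, 3.0):
    print(f"noise sd {noise_sd:4.2f}: power at {f_sig:.0f} Hz = {power_at_signal_freq(noise_sd):10.1f}")
# Too little noise: the threshold is almost never crossed, so little of the signal gets through.
# Too much noise: crossings become nearly random. In between, the output tracks the sinusoid best.
```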
Cercal mechanoreceptors in crickets
A similar experiment was performed on the cricket (Acheta domestica), an arthropod like the crayfish. The cercal system in the cricket senses the displacement of particles due to air currents, utilizing filiform hairs covering the cerci, the two antenna-like appendages extending from the posterior section of the abdomen. Sensory interneurons in the terminal abdominal ganglion carry information about the intensity and direction of pressure perturbations. Crickets were presented with signal-plus-noise stimuli, and the spikes from cercal interneurons due to this input were recorded. Two types of measurements of stochastic resonance were conducted. The first, like the crayfish experiment, consisted of a pure tone pressure signal at 23 Hz in a broadband noise background of varying intensities. A power spectrum analysis of the signals yielded maximum SNR for a noise intensity equal to 25 times the signal stimulus, resulting in a maximum increase of 600% in SNR. Fourteen cells in 12 animals were tested, and all showed an increased SNR at a particular level of noise, meeting the requirements for the occurrence of stochastic resonance. The other measurement consisted of the rate of mutual information transfer between the nerve signal and a broadband stimulus combined with varying levels of broadband noise uncorrelated with the signal. The power spectrum SNR could not be calculated in the same manner as before because there were signal and noise components present at the same frequencies. Mutual information measures the degree to which one signal predicts another; independent signals carry no mutual information, while perfectly identical signals carry maximal mutual information. For varying low amplitudes of signal, stochastic resonance peaks were found in plots of mutual information transfer rate as a function of input noise, with a maximum increase in information transfer rate of 150%. For stronger signal amplitudes that stimulated the interneurons even in the absence of added noise, however, the addition of noise always decreased the mutual information transfer, demonstrating that stochastic resonance only works in the presence of low-intensity signals. The information carried in each spike at different levels of input noise was also calculated. At the optimum level of noise, the cells were more likely to spike, resulting in spikes with more information and more precise temporal coherence with the stimulus. Stochastic resonance is a possible contributor to the escape behavior of crickets from attacks by predators, such as the wasp Liris niger, that produce pressure waves in the tested frequency range at very low amplitudes. Similar effects have also been noted in cockroaches.
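Mutual information, used above as the second measure for the cricket data, can be estimated for discrete variables directly from a joint histogram. The sketch below uses a binary stimulus and Poisson spike counts with made-up rates, purely to show the quantity being computed; it is not an analysis of the cricket recordings.

```python
import numpy as np

def mutual_information_bits(x, y):
    """Plug-in estimate of I(X;Y) in bits for two discrete sequences."""
    xs, x_idx = np.unique(x, return_inverse=True)
    ys, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((xs.size, ys.size))
    np.add.at(joint, (x_idx, y_idx), 1)        # joint histogram of (x, y) pairs
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(4)
stimulus = rng.integers(0, 2, size=5000)                  # binary stimulus per time bin
rate = np.where(stimulus == 1, 3.0, 1.0)                  # spike count depends (noisily) on stimulus
spike_count = rng.poisson(rate)

print(f"I(stimulus; spike count) ~ {mutual_information_bits(stimulus, spike_count):.3f} bits")
print(f"I(stimulus; shuffled)    ~ {mutual_information_bits(stimulus, rng.permutation(spike_count)):.3f} bits")
```

Shuffling the spike counts destroys the dependence, so the second estimate should be close to zero, up to the small positive bias inherent in the plug-in estimator.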
Cutaneous mechanoreceptors in rats
Another investigation of stochastic resonance in broadband (or, equivalently, aperiodic) signals was conducted by probing cutaneous mechanoreceptors in the rat. A patch of skin from the thigh and its corresponding section of the saphenous nerve were removed and mounted on a test stand immersed in interstitial fluid. Slowly adapting type 1 (SA1) mechanoreceptors output signals in response to mechanical vibrations below 500 Hz. The skin was mechanically stimulated with a broadband pressure signal with varying amounts of broadband noise, using the up-and-down motion of a cylindrical probe. The intensity of the pressure signal was tested without noise and then set at a near-subthreshold intensity that would evoke 10 action potentials over a 60-second stimulation time. Several trials were then conducted with noise of increasing amplitude variance. Extracellular recordings were made of the mechanoreceptor response from the extracted nerve. The encoding of the pressure stimulus in the neural signal was measured by the coherence of the stimulus and response. The coherence was found to be maximized by a particular level of input Gaussian noise, consistent with the occurrence of stochastic resonance.
Electroreceptors in paddlefish
The paddlefish (Polyodon spathula) hunts plankton using thousands of tiny passive electroreceptors located on its extended snout, or rostrum. The paddlefish is able to detect electric fields that oscillate at 0.5–20 Hz, and large groups of plankton generate this type of signal. Due to the small magnitude of the generated fields, plankton are usually caught by the paddlefish when they are within 40 mm of the fish's rostrum. An experiment was performed to test the hunting ability of the paddlefish in environments with different levels of background noise. It was found that the paddlefish had a wider distance range of successful strikes in an electrical background with a low level of noise than in the absence of noise. In other words, there was a peak noise level, implying effects of stochastic resonance. In the absence of noise, the distribution of successful strikes has greater variance in the horizontal direction than in the vertical direction. With the optimal level of noise, the variance in the vertical direction increased relative to the horizontal direction and also shifted to a peak slightly below center, although the horizontal variance did not increase. Another measure of the increase in accuracy due to the optimal noise background is the number of plankton captured per unit time. Of four paddlefish tested, two showed no increase in capture rate, while the other two showed a 50% increase in capture rate. Separate observations of the paddlefish hunting in the wild have provided evidence that the background noise generated by plankton increases the paddlefish's hunting abilities. Each individual organism generates a particular electrical signal; these individual signals cause massed groups of plankton to emit what amounts to a noisy background signal. It has been found that the paddlefish does not respond to noise alone, without signals from nearby individual organisms, so it uses the strong individual signals of nearby plankton to acquire specific targets, while the background electrical noise provides a cue to their presence. For these reasons, it is likely that the paddlefish takes advantage of stochastic resonance to improve its sensitivity to prey.
Individual model neurons
Stochastic resonance was demonstrated in a high-level mathematical model of a single neuron using a dynamical systems approach. The model neuron was composed of a bi-stable potential energy function treated as a dynamical system that was set up to fire spikes in response to a pure tonal input with broadband noise, and the SNR was calculated from the power spectrum of the potential energy function, which loosely corresponds to an actual neuron's spike-rate output. The characteristic peak on a plot of the SNR as a function of noise variance was apparent, demonstrating the occurrence of stochastic resonance.
Inverse stochastic resonance
Another phenomenon closely related to stochastic resonance is inverse stochastic resonance. It occurs in bistable dynamical systems that have both a limit cycle and a stable fixed point solution.
In this case, noise of a particular variance can efficiently inhibit spiking activity by moving the trajectory to the stable fixed point. It was initially found in single-neuron models, including the classical Hodgkin-Huxley system. Inverse stochastic resonance was later confirmed in Purkinje cells of the cerebellum, where it may play a role in generating pauses in spiking activity in vivo.
Multi-unit systems of model neurons
An aspect of stochastic resonance that is not entirely understood has to do with the relative magnitude of stimuli and the threshold for triggering the sensory neurons that measure them. If the stimuli are generally of a certain magnitude, it seems that it would be more evolutionarily advantageous for the threshold of the neuron to match that of the stimuli. In systems with noise, however, tuning thresholds to take advantage of stochastic resonance may be the best strategy. A theoretical account was devised of how a large model network of summed FitzHugh–Nagumo neurons (up to 1000) could adjust the threshold of the system based on the noise level present in the environment. This can be equivalently conceived of as the system lowering its threshold, and it is accomplished such that the ability to detect suprathreshold signals is not degraded. Stochastic resonance in large-scale physiological systems of neurons (above the single-neuron level but below the behavioral level) has not yet been investigated experimentally. Psychophysical experiments testing the thresholds of sensory systems have also been performed in humans across sensory modalities and have yielded evidence that our systems make use of stochastic resonance as well.
Vision
The above demonstration using the Arc de Triomphe photo is a simplified version of an earlier experiment. A photo of a clocktower was made into a video by adding noise with a particular variance a number of times to create successive frames. This was done for different levels of noise variance, and a particular optimal level was found for discerning the appearance of the clocktower. Similar experiments also demonstrated an increased level of contrast sensitivity to sine wave gratings.
Tactility
Human subjects who undergo mechanical stimulation of a fingertip are able to detect a subthreshold impulse signal in the presence of a noisy mechanical vibration. The percentage of correct detections of the presence of the signal was maximized for a particular value of noise.
Audition
The auditory intensity detection thresholds of a number of human subjects were tested in the presence of noise. The subjects included four people with normal hearing, two with cochlear implants, and one with an auditory brainstem implant. The normal-hearing subjects were presented with two sound samples, one with a pure tone plus white noise and one with just white noise, and asked which one contained the pure tone. The level of noise which optimized the detection threshold in all four normal-hearing subjects was found to be between -15 and -20 dB relative to the pure tone, showing evidence for stochastic resonance in normal human hearing. A similar test in the subjects with cochlear implants found improved detection thresholds only for pure tones below 300 Hz, while improvements were found at frequencies greater than 60 Hz in the brainstem implant subject. The reason for the limited range of resonance effects is unknown. Additionally, the addition of noise to cochlear implant signals improved the threshold for frequency discrimination.
The authors suggest that adding some type of white noise to cochlear implant signals could well improve the utility of such devices.
HOW THE SEROTONIN SYSTEM INFLUENCES SENSORY PROCESSING [MURTHY LAB]
Brains process external information rapidly, at a sub-second time scale, which is set by the dynamic electrophysiological properties of neurons and the fast communication within neuronal populations. This fast neural processing is complemented by the so-called neuromodulatory systems (involving certain classes of neurotransmitters such as dopamine and serotonin). Neuromodulation has generally been thought to occur at slower time scales, for example during periods of alertness following some salient event or over circadian periods for sleep-wake modulation. The fast and slow systems working together allow the brain not only to react rapidly to external stimuli, but also to assign context or meaning to these stimuli. In our recent study, appearing in Nature Neuroscience, we set out to understand how a particular neuromodulatory system involving serotonin influences information processing in a sensory system. What we found was unexpected and exciting. Serotonin is a chemical that has been linked to high-level cognitive and affective features such as depression, aggression and mood. Although we are far from understanding the neuronal architecture underlying any of these effects, it is generally theorized that the serotonin system affects neural processing by slowly altering the properties of circuit elements, the neurons and synapses. In mammals, serotonin is secreted by neurons located in the raphe nuclei, which send their axons widely throughout the brain, including very dense projections to the early stages of the olfactory system. This fact, combined with the importance of olfaction for mice, prompted us to examine the involvement of the serotonin system in odor processing. Our experiments were enabled by the explosive advances in neuroscience techniques, including optogenetics (which allowed us to selectively activate specific neurons and axons with light) and optical reporters of activity (genetically encoded calcium indicators that transduce neural activity into light). We used multiphoton microscopy to look at the activity of two different populations of output neurons (mitral and tufted cells) in the olfactory bulb, the first odor processing stage in vertebrates. To our surprise, we found that even brief activation of raphe neurons caused immediate excitation of mitral and tufted cells. An even greater surprise was in store when we complemented our whole-animal experiments with mechanistic studies in ex vivo brain slices: in addition to releasing serotonin, raphe neurons also released a fast excitatory neurotransmitter, glutamate. In fact, glutamate mediates much of the excitation of mitral and tufted cells in our experiments, with serotonin release likely requiring more intense activity in raphe neurons. Sensory systems are not only required to detect external stimuli (odors in the case of the olfactory system), but they also need to make distinctions between different stimuli. We asked how the activation of raphe neurons modulates these functions of the olfactory system. Uncharacteristically for a neuromodulatory system, qualitatively distinct effects were seen in the two types of olfactory bulb neurons: activating raphe neurons enhanced the response of tufted cells to odors, but bi-directionally modulated the odor response of mitral cells.
A quantitative analysis of the population coding of odors revealed that raphe activation makes tufted cells more sensitive at detecting odors and mitral cells better at discriminating different odors. Overall, our study indicates that a “neuromodulatory” system, traditionally considered to have slow actions, can actually be part of fast ongoing information processing by releasing multiple types of neurotransmitters. Further, these modulatory neurons need not have monolithic effects, but can influence different channels of information processing in distinct ways. Conceptually, our study blurs the distinction between neuromodulation and computation itself. This research was supported by grants from the NIH, a seed grant from the Harvard Brain Initiative and fellowships from the NSF and the Sackler Foundation. Read more in Nature Neuroscience or in the accompanying News and Views.
Reflex Arcs
Reflex arcs are an interesting phenomenon for considering how the PNS and CNS work together. Reflexes are quick, unconscious movements, like automatically removing a hand from a hot object. Reflexes are so fast because they involve local synaptic connections in the spinal cord, rather than relay of information to the brain. For example, the knee reflex that a doctor tests during a routine physical is controlled by a single synapse between a sensory neuron and a motor neuron. While a reflex may only require the involvement of one or two synapses, synapses with interneurons in the spinal cord also transmit information to the brain to convey what happened after the event is already over (the knee jerked, or the hand was hot). So this means that the brain is not involved at all in the movement associated with the reflex, but it is certainly involved in learning from the experience – most people only have to touch a hot stove once to learn that they should never do it again! The simplest neuronal circuits are those that underlie muscle stretch responses, such as the knee-jerk reflex that occurs when someone hits the tendon below your knee (the patellar tendon) with a hammer. Tapping on that tendon stretches the quadriceps muscle of the thigh, stimulating the sensory neurons that innervate it to fire. Axons from these sensory neurons extend to the spinal cord, where they connect to the motor neurons that establish connections with (innervate) the quadriceps. The sensory neurons send an excitatory signal to the motor neurons, causing them to fire too. The motor neurons, in turn, stimulate the quadriceps to contract, straightening the knee. In the knee-jerk reflex, the sensory neurons from a particular muscle connect directly to the motor neurons that innervate that same muscle, causing it to contract after it has been stretched. Image credit: https://www.khanacademy.org/science/biology/ap-biology/human-biology/neuron-nervous-system/a/overview-of-neuron-structure-and-function, modified from “Patellar tendon reflex arc,” by Amiya Sarkar (CC BY-SA 4.0). The modified image is licensed under a CC BY-SA 4.0 license.
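As a toy illustration of the monosynaptic circuit described above, the sketch below wires a brief "stretch" input to a sensory unit that excites a motor unit, using two leaky integrate-and-fire neurons. All parameters (membrane constants, synaptic strength, the 2 ms delay) are arbitrary choices for illustration; this shows only the feed-forward timing of a reflex arc, not the physiology of the patellar reflex.

```python
import numpy as np

dt, duration = 1e-3, 0.2                        # 1 ms steps, 200 ms of simulated time
steps = int(duration / dt)
tau, v_thresh, v_reset = 0.02, 1.0, 0.0         # arbitrary membrane constants

stretch = np.zeros(steps)
stretch[50:80] = 2.5                            # tendon tap: brief stretch from 50 to 80 ms

def lif(drive):
    """Leaky integrate-and-fire unit; returns spike times in seconds."""
    v, spikes = 0.0, []
    for i, inp in enumerate(drive):
        v += dt / tau * (-v + inp)
        if v >= v_thresh:
            spikes.append(i * dt)
            v = v_reset
    return np.array(spikes)

sensory_spikes = lif(stretch)                   # sensory afferent fires during the stretch
synaptic_drive = np.zeros(steps)
for s in sensory_spikes:                        # each sensory spike gives a brief excitatory pulse
    i = int(round(s / dt)) + 2                  # ~2 ms synaptic + conduction delay (made up)
    synaptic_drive[i:i + 5] += 4.0
motor_spikes = lif(synaptic_drive)              # motor unit then drives the quadriceps

print("sensory spikes (ms):", np.round(sensory_spikes * 1e3).astype(int))
print("motor spikes   (ms):", np.round(motor_spikes * 1e3).astype(int))
```

Running it prints sensory spikes during the stretch and motor spikes a few milliseconds later, which is the whole point of the arc: the loop closes in the spinal cord without waiting on the brain.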
https://au.dualjuridik.org/9313-what-are-the-response-frequencies-of-sensory-neurons.html
The reasoning that neural reflexes maintain homeostasis in other body organs, and that the immune system is innervated, prompted a search for neural circuits that regulate innate and adaptive immunity. This elucidated the inflammatory reflex, a prototypical reflex circuit that maintains immunological homeostasis. Molecular products of infection or injury activate sensory neurons traveling to the brainstem in the vagus nerve. The arrival of these incoming signals generates action potentials that travel from the brainstem to the spleen and other organs. This culminates in T cell release of acetylcholine, which interacts with α7 nicotinic acetylcholine receptors (α7 nAChR) on immunocompetent cells to inhibit cytokine release in macrophages. Herein is reviewed the neurophysiological basis of reflexes that provide stability to the immune system, the neural- and receptor-dependent mechanisms, and the potential opportunities for developing novel therapeutic devices and drugs that target neural pathways to treat inflammatory diseases.
https://www.ncbi.nlm.nih.gov/pubmed/22224768?dopt=Abstract
The nervous system is a complex and sophisticated system that coordinates body activities such as walking and speaking. It is made up of two major divisions: the Central Nervous System and the Peripheral Nervous System. The Central Nervous System consists of the brain and spinal cord. The Peripheral Nervous System consists of all other neural elements, including the peripheral nerves and the autonomic nerves.
Structure
Spinal Cord - The spinal cord is a long, thin mass of bundled neurons that carries information through the vertebral cavity to the stem of the brain.
Neurons - Neurons are nerve cells that communicate within the body by transmitting electrochemical signals.
Nerves - Nerves are bundles of axons that act as information highways. They carry signals between the brain, your spinal cord, and the rest of your body. The wrapping of nerves helps protect them and increases the speed of communication within the body.
Meninges - Meninges are the protective coverings of the central nervous system.
Major functions
Integration - Integration is the processing of the many sensory signals that are passed into the CNS. The signals are then evaluated, compared, used for decision making, discarded, or committed to memory as deemed appropriate. This all takes place in the gray matter of the brain and spinal cord.
Motor - Once the networks of interneurons in the CNS evaluate sensory information and decide on an action, they stimulate efferent neurons. Motor neurons carry signals from the gray matter through the nerves of the peripheral nervous system to effector cells.
Body Systems Working Together
Cardiovascular System - The Cardiovascular System delivers oxygen, nutrients, and white blood cells by pumping blood around the body. The brain regulates heart rate and blood pressure in the Cardiovascular System. Baroreceptors in the Cardiovascular System send information to the brain about blood pressure.
Muscular System - The Muscular System enables motion in the body with its muscles. Muscles in this system also generate heat to maintain body temperature and contract the heart. Receptors in the muscles give the brain information about body position and movement.
Digestive System - The brain in the Nervous System controls the speed at which food moves through the digestive tract.
https://www.smore.com/cy7hk
Macmahon’s 100% owned QLD Civil business TMM Group has a proud history of operating in Queensland’s Bowen Basin for over fifteen years and specialise in earthwork operations and engineering for the mining and civil sectors. Committed to providing total project management services, we maintain a hands-on approach, staying involved to the very end to deliver the safest and most efficient results for our clients – we design, manage and execute. TMM was acquired by Macmahon Holdings (MAH:ASX) in 2018 to rebuild the civil business unit across Australia. With civil and mining projects across Australia we are able offer a wide range of projects and locations as well as various career progression opportunities. We are currently recruiting for a dedicated Project Manager to join the Macmahon Civil team. Based across multiple mine sites within the Bowen Basin and reporting to the Area Manager East, this role will be responsible for the management of one or multiple civil project sites within the mining and construction industry. The successful applicant will: • Provide leadership and monitor compliance to the agreed project plan and Macmahon business processes to ensure that projects deliver safety, production, maintenance and financial performance; • Have strong commercial acumen with an ability to monitor, report, and drive budgets and forecasts; • Maintain and implement Health and Safety on site by complying with relevant state legislation, acts, regulations, company procedures and site requirements; • Lead the cooperation and coordination between the client and senior management at site. • Ensure that the Coal Mining Act and Regulations are adhered to at all times; • Select, manage and develop the senior managers in order to optimize project performance. • Ensure the project is adequately resourced in line with the contract and in accordance with Macmahon’s Recruitment Procedure; • Monitor the management of the critical risk register for the entire project and ensure mandatory controls are in place; and • Monitor and control project costs, provide regular financial forecasts, develop and review claims, understand and report on the financial status of the project. The ideal candidate will have: • Minimum 5 years’ experience in Project Management with proven ability of leading large teams • Civil Engineering Degree (desirable) • Development and Management of budgets ranging from $20m - $100m • Implementation of Mining Statutory Compliance including HSEQ, Mine Regulations, Work Safe etc. • Management and coordinating of large scale multi-functional groups and team • Formal qualification in project management and frontline management or engineering (desirable) As a part of the recruitment process for this role you may be required to complete a pre-employment medical, inclusive of laboratory drug and alcohol screen and complete a criminal history and qualification checks. Additionally, you will be asked to provide proof of working rights in Australia i.e. copy of valid Australian passport, birth certificate, citizenship certificate or current visa grant notification where applicable. We will accept both local to the Bowen Basin or FIFO candidates. A company vehicle will be provided for the successful applicant along with camp accommodation and the flights to home base within Queensland for FIFO applicants.
https://careers.macmahon.com.au/job/Strathfield-Project-Manager-QLD-4742/580907710/
Bachelor’s Degree in Industrial Maintenance, Engineering, and/or equivalent experience. At least 3 years of Facilities Management experience in a production environment. Working knowledge of the plant support systems (Internal Substations, Compressed air, Chilled water, Central vacuum, Pneumatic waste collection etc. Strong financial acumen with experience developing and implementing large capital business cases / projects. Knowledge and experience in mechanical and electrical engineering fundamentals and technology. Skilled in organizing work and delegating to appropriate channels while working with other engineers and/or contractors. Strong aptitude for working with various telecom, security, and operating systems. Ability to initiate change through problem solving, decision-making, planning and utilization/development of employees and resources. Excellent written and verbal communication skills. Excellent interpersonal skills and decision-making ability. Experience in project management organization and implementation with delivered results. Ability to manage multiple priorities and work effectively to meet deadlines. Knowledge & experience in AutoCad and similar programs. Preferred Requirements Experienced in printing manufacturing environment. Lean Six-Sigma, Total Productive Maintenance, Continuous Improvement, similar experience and delivered results. Leading 2 Lean CMMS experience. Experience in equipment development and improvement. Job Description Our Plant Facilities Engineer is responsible for providing guidance and technical expertise for the evaluation, selection, installation, maintenance, reliability, modification and improvement of the building, infrastructure and manufacturing equipment for Jostens multi-plant Printing Business. The incumbent will develop business cases justifying large capital outlays for the aforementioned. He/she must also develop and manage large capital and expense budgets required to meet financial targets. The incumbent must also partner with all departments and team members to support and drive current and new Quality, Cost, Delivery and Safety initiatives utilizing Lean Continuous Improvement methodologies and other CI tools. The role coordinates activities with a variety of outside contractors, setting scope of work, evaluating costs, and ensuring desired outcomes. The incumbent is also responsible for facility and equipment disaster recovery planning, coordinating and facilitating physical facilities projects, equipment installation projects, monitoring / enhancing utility usage and requirements, and maintaining the Plant/Facilities layout in Autocad.
https://clarksvilleishiring.com/jobs/details/plant-facilities-engineer
Will effectively lead assigned Projects toward completion as aggressively as possible. • Will manage all aspects of a large scale project or project with a higher degree of complexity and customer impact • Effectively lead project teams and customer executives (not a note taker) • Have the ability to build and sustain strong relationships with project team members and customer executives • Facilitate issue resolution - Identify customer issues, develop effective solutions and successfully manage the communication to the customer • Accountable for project schedule, financials, and deliverables • Responsible for end-to-end management of project, i.e. all phases of project from concept, project planning, development, execution, monitoring to project close down. • Report on project performance on at local and global levels (have the ability to communicate factual information to project team, C-Suite Executives as well as project stakeholders • Produces weekly status reports to management and project stakeholders Requirements: • EIGHT (8) + years working as a project manager (14,560 hrs. +), including TWO (2) years in a managerial role • Managed project budgets in excess of $2M • Managed projects with 30+ resources • Project Owners or managers, Director or above or has a development background and moved into a project management role • Managed project teams over multiple geographical locations (preferably domestic and international) • Strong writing skills • Very organized and articulate • Strong business acumen • Analytical problem solving skills • PMP Certification is nice to have but not necessary Education Requirements:
https://aquent.com/our-expertise/job-description-library/Program-Manager--Sr-Project-Manager-600017806
The Portfolio / Program Manager is accountable for directing and integrating the activities of multiple, primary project operations. They ensure that the project's efforts are generally cohesive, consistent, and effective in supporting the mission, goals, and strategic plan of an organization. Moreover, the Portfolio / Program Manager establishes policies, strategies, and operating objectives consistent strong project management methodologies within the existing Information Technology Project Management Office framework. These strategies will ensure the efficient and effective implementation of major cross-organizational projects. The role will participate and supervise the development, performance, and maintenance of individual project objectives and short and long-range plans via standard project management tools. Additionally, this role develops, tracks, and evaluates programs to help accomplish established project goals and objectives critical to delivering business outcomes. The role will also manage and administer a large and diverse team of professional/technical professional and support staff, both directly and through project managers to effectively complete program level resource plans. Moreover, the portfolio / program manager will also establish and manage complex, multi-faceted budgets that involve large scale vendors contracts. It will also be important for this role to collaborate with direct and indirect reports to identify where management and technical reinforcement is needed, and the appropriate corrective actions are implemented. The role must oversee the development and implementation of training, communications, testing and change management plans to ensure the successful execution of project deliverables required for successful turnover to the various business units. This will require a diverse set of skills and innovative solutions to meet time sensitive schedules in a complex large multi-site organization. Job Responsibilities - Experience in running large/multiple portfolios, and teams of project managers - Ability to manage a portfolio of projects that span across various applications and integrations, departments, and sites • Work with internal business partners in corporate services, Finance, human resources, IT, PMO to clearly define and drive project scope, benefits, timeline, change management - Oversee project delivery process with key contributors to help manage scope and prepare change requests - including tasks, deliverables, milestones, resources, and estimated costs - Responsible for approval of all phases of the projects by implementing the Project Life-cycle Methodology, review and approval of the individual program and project charters requirements, scope, schedule, presented by outsourced partners - Review financial and status reports and adjust the portfolio plan when necessary - Ensure that internal client issues are addressed to meet the client's business goals, compliance with corporate and department policies, standards, and practices - Conduct periodic review of proposed projects to determine duplicate of effort, the ability to consolidate projects to achieve similar goals/objectives, and to validate continued need against evolving mission objectives and vision - Establish a formal procedure to accept and review submitted changes for in-flight projects - Provides strategic leadership and technical, operational, financial, and managerial leadership for successful implementation of project activities. 
- Ensures that the program is technically sound, evidence-based, and consistent with funder and stakeholders' priorities. - Provides oversight of program implementation including all activities, outputs, and outcomes related to project management and administration, including reporting, budget development and monitoring, financial transactions, execution of project plans, and project performance. - Oversees the selection and training of qualified program staff, assigning clear roles and responsibilities, providing effective supervision, and managing performance to ensure efficient operations. - Ensures the project produces the specified results in the annual workplan(s) to the required standard of quality and within the timeline and budget parameters. - Oversees budget pipeline development and budget monitoring. - Conducts monthly/weekly reviews to ensure accountability of all project activities as well as the accurate and timely reporting of financial deliverables and obligations. - Ensures that the project progresses in accordance with its contractual obligations and complies with donor regulations and internal organizational policies. Qualifications The Portfolio / Program Manager needs excellent skills and knowledge to execute these duties effectively, including: - Excellent verbal and written communication skills - Attentive to detail - Strong financial acumen - Knowledge of computer operating systems, hardware, and software - Strong leadership and business management skills - Good budgeting skills and the ability to reduce costs without making adjustments that affect quality - Strong interpersonal skills - Ability to act as a change agent and instill confidence to embrace change - Results-driven and detail-oriented - Proven experience in a leadership role is required - A degree in a relevant field from an accredited university with at least 7 years of experience in a leadership position managing a program is required - At least 5 years' experience managing, designing, implementing, and evaluating multi-million-dollar, multi-site projects. - Strong preference for candidates with expertise/experience in ERP systems and/or custom software - Demonstrated diplomatic, management, and communication skills to liaise and advocate with internal stakeholders, government agencies, external vendors, and other key stakeholders - Proven ability to write technical reports and program documents, and deliver presentations.
https://www.morsoncanada.com/fr/emploi/23-012-program-slash-portfolio-manager
Position Title: General Manager, Mines. Job Station: Ogun State. INTRODUCTION:- A General Manager, Mines job opportunity is available at Dangote Group for individuals who possess relevant qualifications. Job Summary - Oversee all mining activities including operations and maintenance, with a key focus on profitability and optimum utilisation of resources while ensuring the health and safety of mining staff. DUTIES AND RESPONSIBILITIES:- - Plan, manage, co-ordinate, and direct mining operations and maintenance of machinery. - Evaluate efficiency of mining sites to determine adequacy of personnel, equipment and technologies used, and make changes to work schedules or equipment when necessary. - Oversee the technical mining aspects of the operations, including drilling, blasting, loading and hauling, and provide expertise as required. - Identify performance optimization opportunities to enhance bottom-line financial benefits. - Prepare mining production reports for review by the Plant Director. - Monitor mining operational performance against budget and ensure that production quotas and procedures are met. - Perform any other duties as may be assigned by the Plant Director. DESIRED EXPERIENCE & QUALIFICATION:- Education and Work Experience: - Bachelor’s degree or its equivalent in Engineering, Mining or a related discipline. - Minimum of twenty-seven (27) years demonstrated operating experience in mining operations and maintenance practices. Skills and Competencies: - In-depth knowledge of mining operations, production, maintenance, process control and health and safety management. - Commercial acumen and experience regarding how to maximise the financial returns of the project, including the ability to manage contracts effectively and to compile and manage budgets and operating costs. - Strong leadership and people management skills. - Commitment to implementing safety and environmental regulations - Good data gathering and analysis skills. - Baseline problem analysis and solving skills. - Creativity and an ability to think out of the box. REMUNERATION:- - Private Health Insurance - Paid Time Off - Training and Development Apply Before:- Not Specified. INTERESTED? Interested and qualified candidates should: WHO IS Dangote Group? Dangote Group is one of Nigeria’s most diversified business conglomerates with a hard-earned reputation for excellent business practices and product quality, with its operational headquarters in the bustling metropolis of Lagos, Nigeria in West Africa.
https://www.myjobnigeria.com/general-manager-mines-vacancy-at-dangote-group/
My client, a high-performing, multi-million-pound, dynamic company with an innovative approach to business, is looking to source a talented and ambitious Finance Controller to join their medium-sized team. The business has continued to grow in both local and international markets and believes in continuous growth and profitability, offering development and progressive opportunities for their staff! The main purpose of this role is to provide leadership of the finance reporting group and be responsible for providing comprehensive reporting, budgeting and forecasting solutions. Key Duties - Own the month-end process including production of monthly P&L and balance sheets, budgets and forecasts - Review variances against forecast and propose action where needed - Own and manage cash flow forecasting, challenging projects and optimising cash where possible - Review cost structures, including presenting and implementing any recommended cost changes or initiatives - Work alongside the FP&A and Management Accountant team to lead production of analysis, forecasting and planning processes - Keep the Finance Business Partner aware of upcoming financial and cash flow issues - Manage the year-end audit process - Assist the Finance Business Partner in continuous improvement of processes by sharing best practice and maintaining strong relationships across all the different departments - Ensure appropriate financial governance and controls are in place - Provide decision-driving analysis to leverage opportunities and mitigate risk - Bonus scheme review and control - Effectively manage, motivate and develop the Finance Department. Personality and Skillset Requirements - The successful candidate will be an ACA/ACCA/CIMA qualified accountant with previous experience in a similar role for at least 3 - 5 years. - You will be able to establish and maintain effective working relationships across the organisation and have the ability to deliver results in a complex environment. - You will have strong commercial acumen and excellent communication skills at all levels. - You will remain calm and focused under pressure.
https://www.jobsatteam.com/jobs/financial-controller-2
Tunnels Project Manager (Ref MAX9135). Consultant: Richard Poulter. Region: Australia and NZ. Location: Auckland, New Zealand. Salary: NZ$350K ++ (Salary Band £150,000 Plus). Type: Permanent. Job Posted: 30/04/2021. Status: Live - Interviewing now. Our client, a global civil engineering contractor, is seeking to appoint an experienced Tunnelling Manager with demonstrable experience of major tunnel projects to join their team to deliver a landmark complex rail project underway in Auckland, New Zealand. Project Details The Tunnels Project Manager is responsible for the programming and delivery of construction works. The primary focus of this role will be growing the capability of the team, making sure they understand and adhere to the JV principles, and providing clear and positive leadership to ensure best-for-project outcomes are delivered. This includes producing clear and transparent reporting allowing the JV Management Team to make site-relevant decisions. The Tunnels Project Manager will be responsible for the safety of workers and the public within the scope of the tunnel works and will develop a high-performance team who are able to embrace and understand the JV principles and be good at problem solving. You will be based full time on the project site in Auckland, New Zealand. Responsibilities and Duties Provide Strategic Leadership - Help to bridge the gap between operational and strategic leadership, ensuring that all activities are aligned with the wider, long-term objectives of the JV strategy document to develop and grow the business - Understand and embed JV principles in the site team in a way that is understood by all workers. Planning and estimating - Participate in the production of project methodologies, budgets and programmes for construction. - Provide accurate advice on construction phasing and methodology to allow informed engineering and design decisions that are best for the project. Provide accurate estimates for budgets and monthly reporting. - Prepare and update the specific Construction Management Plans - Review and seek approval of all construction documentation - Plan and organize construction activities in your area, in coordination with specialty managers Outcome Management - Participate in the management of the contracts, ensuring contract performance meets JV requirements in terms of quality and timeframes - Manage subcontracts in your area or support specialty managers in managing their subcontractors - Implement the control framework and monitoring process, including project budgets, and report regularly to project stakeholders. - Verify and ensure sign-off of monthly forecasts, costs to complete, etc. Safety, Environment, Quality & Risk Management - Manage construction operations to achieve a superior level of safety, environmental and quality performance. - In conjunction with the Manager and Advisor, promote safe and environmentally sound working practices and a culture of safe and responsible behaviours and attitudes, ensuring safety and environmental initiatives and achievements are publicised and rewarded. - Take personal and team responsibility for Zero Harm. Develop safety, environment and sustainability performance measures to meet JV targets, e.g. LTI, TEIFR, energy targets. - Ensure health and safety and environment feature prominently in induction and ongoing staff development. - Identify and minimise business risks and compliance issues, including critical safety and environmental risks. 
- Ensure appropriate Project Management Methodology and quality assurance systems are in place. Ensure the relevant quality standards and accreditation are maintained, and that Health and Safety, environment and quality plans are aligned with JV strategy and policy, communicated to staff, and monitored and reported on monthly, with appropriate incentives in place. Operational Excellence - Critically challenge all aspects of the project, ensuring that the construction team is recognised as delivering operational excellence on behalf of the JV and is fully integrated with the rest of the team to deliver the best-for-project outcome. - Manage the interface between the tunnel work and the work on the Stations - Accountable for timely delivery and handover of works in their area, for control of construction costs, and for compliance with environmental requirements. Resource Management - Manage the allocation of people, plant, technology and resources to achieve outcomes. - Manage plant effectively, maximise utilisation of own plant. Liaise effectively with the Plant Manager. Promote mutualisation of resources between different sections of works/other stations to get the best result for the project. - Coordinate with other project managers and the Tunnel Construction Manager in order to allocate resources and priorities which are best for the overall project. People Management & Development - Communicate effectively using a respectful, personal manner - Provide positive leadership and mentoring to all team members within the project - Effective management of disciplinary procedures in cooperation with JV HR and the relevant Home Organisation - Involve team members in setting clear targets for their area of responsibility - Manage, coach and assist direct reports to achieve their targets and improve their competency - Coordinate with the Construction Manager to have a good understanding of the “bigger picture” and make sure that site supervisors that report to the superintendent have the allocated resources and understand the priorities for the whole project. Make sure that decisions are best for the project. Brand Management Ensure that all stakeholders impacted by the CRL construction works for your station are communicated with proactively and issues are managed effectively to ensure the JV is recognised as being an industry leader in the delivery of the CRL project. Desired Skills and Experience - Proven Project Management experience - At least 10 years’ experience in the underground mining or tunnelling sector, five of these in a management role - Ability to read, understand and ensure specifications and requirements are fully implemented - Demonstrable experience of managing multiple suppliers and sub-contractors - Demonstrable experience in preparing and submitting financial reports - Ability to meet deadlines and prioritise duties - Strong verbal and written communication skills - Computer skills - Strong commitment to Health & Safety initiatives - Strong proven leadership skills Qualifications/Educational Requirements - Tertiary qualification or equivalent in Engineering. - Highly desirable to have proven business management experience with responsibility and accountability for staff, plant and return on investment. 
Employing Company Overview and Profile Civil engineering recruitment continues apace as our client, a top-tier global civil engineering contractor with an impressive delivery record of major rail and infrastructure projects, seeks to strengthen its team in Auckland on a major underground rail project. Give a major boost to your construction career path and get in touch today!
https://www.maximrecruitment.com/tunnels-project-manager-job-auckland-new-zealand-9135
Why Work Here? "Manufacturing Sciences Corporation is a well-established, small company looking for others who want to work in a team environment." Manufacturing Sciences Corporation is a specialty metals processing company. We were founded in 1982 and started operations in Oak Ridge in 1985. We are the only commercial facility in the world capable of casting, rolling, and machining depleted uranium. We are currently seeking a Project Manager. This position is critical to ensure the execution and delivery of the company’s contracts, ensuring the work completed is within the scope of the agreement and fulfilled in a timely and accurate manner. The project manager will identify and minimize risk, work to maximize the financial performance of a project, and ensure the completion of day-to-day activities. Responsibilities: - Will provide oversight to one or more projects - Participates in resource planning such as staffing, material procurement, and facility modifications - At times will offer guidance to various team members, communicating scope, goals, and responsibilities. Will make sure that all stakeholders are aligned and have proper expectations. - Communicates regularly with the customer, clarifies, aligns, and sets expectations as appropriate, and will be the point of contact for customers - May, at times, ensure appropriate resources are allocated and maintained to secure successful completion of projects - Will manage and coordinate priorities within a project or between various projects. - Develop processes and procedures to monitor and track the status of projects to ensure progress - Creates and delivers reports to management providing the status of projects, highlighting the risks, issues, discrepancies, costs, and overall standing. - Will utilize analytical and problem-solving skills to minimize risk and delays and work with other team members to resolve issues. - Will plan and direct schedules as well as provide input for project budgets Successful candidates will possess the following skillsets: 1. Strong organizational skills 2. Ability to multi-task 3. Excellent attention to detail 4. Ability to work cross-functionally and with external parties: customers & vendors 5. Ability to influence others and ability to communicate effectively and concisely across all levels of employees and customers 6. Strong problem-solving and decision-making skills 7. Business acumen 8. Results-driven 9. Strong written and verbal communications skills, ability to set expectations Manufacturing Sciences Corporation is an equal opportunity company. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, marital status, disability, military or veteran status, genetic information, or any other characteristic protected by federal, state or local laws. About Manufacturing Sciences Corporation: Manufacturing Sciences Corporation is a specialty metals processing company. We are a unique organization as we are the only commercial company in the world with the capability to cast, roll, and machine depleted uranium. We provide these services to government and private sector companies, supporting their production and development of new equipment and materials. MSC offers design, metal casting, specialty metal rolling, fabrication, welding, and precision machining capabilities.
https://www.mfgsci.com/projectmanager
Job Description: In your new role, you’ll prepare, manage and provide reporting on all project plans, controls and resource plans. You’ll make sure that activities are in place to adequately prepare the business and engage all the appropriate stakeholders effectively to enable change to be implemented and handed over. Job Responsibilities: - Support the creation of a financial business case - Track and report project costs and make sure that the project is completed within the allotted budget - Make sure that all project deliverables satisfy the requirements and that they adhere to the agreed governance framework - Analyse the appropriate statements and prepare estimates for approval - Lead and define the production of a project initiation document and make sure that the solution clearly supports and is aligned to our strategic goals - Identify, track, manage and mitigate any project risks, assumptions, issues and dependencies Job Requirements: - Knowledge of flexible working environments and strategies - Experience of delivering project management in a technology or IT function - Strong organisation skills and astute attention to detail - Financial Crime experience would be an advantage - Experience of applications for systems and delivery Job Details:
https://www.getsyourvacancy.online/job/rbs-careers-02/
Join a dynamic, innovative and data-driven start-up named one of the 2021 Best Tech Startups in Chicago, which is changing the supply chain industry with real-time tracking and end-to-end visibility. As FourKites’ Financial Analyst, you will develop the Finance function in our high-growth environment. In this role, you will manage all financial analysis and reporting across departments. A strong candidate for this role will be familiar with working in a very fast-paced environment and will be responsible for ensuring financial competence and growth. What you will do: You will support the processes of posting high-risk monthly income and commissions as part of the month-end close. You will also be responsible for analyzing FourKites’ financial results and supporting operating budgets and annual financial forecasts. You will prepare monthly, quarterly and annual analyses of financial performance for investor and executive review. Working with teammates in the United States, India and the Netherlands, you will work in partnership with key operational teams and business leaders from our global organization to support the success of the business, while maintaining sound financial practices. - Play a key role in monthly closing with regard to revenue and commissions, with clearly defined responsibilities, processes and deadlines - Lead the preparation of the financial reporting package for internal management reports, highlighting key business issues, revenue trends, cash flow analysis and functional expenses - Analyze budget against actual income statement figures at company level, by region and by department, summarizing them in concise reports for business unit managers - Compile financial metrics for quarterly board meetings - Navigate financial reports under both GAAP and management books - Support the annual operating plan and monthly forecasting process - Support the implementation of the financial / planning reporting system to improve efficiency and visibility across the organization - Liaise with multiple departments as needed in terms of revenue recognition, commissions and budgets - Prepare journal entries and reconcile balance sheet accounts to ensure accuracy of financial reports - Identify potential efficiency gains in daily work and implement automation tools / techniques to streamline - Maintain and leverage tools and systems to ensure rapid and reliable access to data and information tools Who are you: - Extremely detail-oriented with strong analytical skills - Excellent problem-solving skills with a desire to learn, grow and seek increased responsibilities - Ability to work closely and effectively in a team environment across all facets of the business - Strong interpersonal and consultative communication skills with well-developed presentation skills - Strong organizational skills with the ability to effectively manage competing priorities and time pressures - Familiarity with business and financial modeling in P&L, balance sheet and cash flow - Strong technical acumen with an ease in learning new business software systems Preferred Qualifications: - Bachelor’s degree in finance / accounting / economics preferred (mathematics, computer science, engineering also welcome) - 1-2 years of experience in corporate finance, banking, management consulting or high-growth start-ups - Experience in an agile and rapidly changing environment - Advanced skills in Excel / PowerPoint - Familiarity with financial analysis and planning software applications (Intacct, NetSuite, Adaptive Insights) - 
Experience in SQL / Python / VBA and general systems integration is a plus - SFDC experience is a plus Named one of Forbes’ next billion-dollar startups, FourKites is on a mission to transform global supply chains with the most powerful technology on the planet. With a customer base that includes 18 of GPC’s top 20 food and beverage companies and nine of GPC’s top 10 companies, the company combines the largest network of real-time logistics data with machine learning to help customers reduce their costs, improve performance times and strengthen relationships with the end customer. We are a customer-obsessed organization fully committed to helping our customers succeed and generate long-term value. The empathy we show our customers also applies internally. At FourKites, a positive work culture is a priority, and team members benefit from structured employee development plans, mentoring and training programs, quarterly appraisal and promotion processes, and memberships in competitive affinity and benefits groups. Click here to learn more about our team and our corporate values. If you live in California, here is our California Applicant Privacy Notice.
https://i48.org/financial-analyst-fourkites-built-in-chicago/
- Detailed practical knowledge of project management methodologies - Experience in Logistics, Financial Services, or Consumer Financial Services - Change management - Demonstrated effectiveness in all the areas outlined in the role - Analytical Thinking: ability to identify issues and obtain relevant information - Building Partnerships: ability to develop and use collaborative relationships - People Management: ability to effectively manage and evaluate performance - Strong interpersonal skills: ability to communicate and work well in a team - Takes accountability and drives the same within the organization - Strong coordination skills - Strong presentation skills, both verbal and written - Ability to meet deadlines/milestones - Ability to manage business expectations - Ability to coach and mentor teammates - Strong organisation skills, with the ability to objectively prioritise tasks - Able to establish relationships with, and influence, people at all levels - Manage internal/external projects using the PM methodology & frameworks - Facilitate the project initiation process - Apply the Management Method and templates and domain-specific methodologies as applicable - Control project stages to ensure the project stays within acceptable tolerance levels - Close projects according to company policy. - Provide direction to the CEO to achieve the defined budgets, margins and key relationships - Identify, develop good relations with and manage business with key accounts. - Assess and target new relationships and maintain them. - Provide analytics and a daily management approach to the project pipeline - Offer a turn-key project solution from inception of specification to completion of the project - Meet monthly, quarterly, and annual sales and gross margin targets - Ensure all proposals, quotes and sales project files are in accordance with standards - Advise on stock availability, alternative suggestions on out-of-stock items and ETAs. - Follow the standard sales processes and company policies etc.
http://www.modulent.co.za/job/project-manager/
We have 9 late-stage development mining projects in progress at various mines in Nevada. We need a motivated, dedicated project controls specialist to help us remove risk from our projects and to ensure the contractors meet their agreed-to requirements. Responsibilities Assist Project Managers in defining and tracking project-specific controls procedures Assist in the development and tracking of Project budgets and ensure spending remains within acceptable variances Assist in preparing financial, expenditure and payment authorization forms, and administrative documents for authorization by the Project Manager, such as purchase requisitions and orders, invoices, etc. Review project metrics regularly and advise the Project Manager of discrepancies or issues, investigating variances in data as needed, especially concerning invoicing, accounts receivable, subcontractor invoices, approvals and payments, and other accounts payables, and escalating issues to the Project Manager Accountable for the project forecast (cost & quantities) and monthly internal control compliance Review and track invoices for consistency with contract requirements and rates, as well as tracking unbilled work and any issues Produce and/or assist in the compilation of all required internal and external Project reporting Obtain documentation to support subcontracts, material contracts and progress estimates Develop or maintain an accurate and up-to-date records management system Prepare Project meeting agendas, record and distribute meeting minutes, and maintain organized files of same Effectively build relationships with Management, Project Managers, Project/job teams and external contractors and stakeholders Assist Project Managers and Project administrators with Project setup, preparing contracts, change requests and contract modifications, and job closures, as may be needed Perform other duties as assigned Qualifications Bachelor's degree in a related field from an accredited college or relevant experience (5+ years) 1-2 years of experience in one of the following: Mining Industry, Natural Gas, Geo Thermal, Oil and Gas, Solar, Power/Substation, etc. Experience scheduling and controlling the costs in drilling exploration and development-type projects or similar capital projects. Experience controlling project costs with AFEs, work breakdown structures, earned value, contracts & contract terms for drilling or construction projects, budget tracking, accounting principles, and billing procedures Excellent analytical, organizational, math, finance and accounting skills Strong relationship-building, business acumen, problem solving and decision-making skills Strong Microsoft Word, Excel and PowerPoint skills Positive attitude, accountable, willingness to learn, and team player Ability to communicate effectively, both verbally and in writing Ability to develop and maintain awareness of occupational hazards and safety of yourself and others Willing to travel to sites 3-5 hours from Reno to verify work completed or cost variances, as needed. Typically, 10% of your time. Must be legal to work in the U.S. 
and able to show documentation Possess a valid Driver’s license and be able to operate a light vehicle Physical Demands: To be eligible for this position, an individual must be able to: Perform basic operational functions of climbing, stooping, kneeling, crouching, reaching, standing, walking, pushing, pulling, lifting, fingering, grasping, feeling, talking, hearing, and repetitive motion Wear and demonstrate to employees the use of personal protective equipment including hardhat, steel toed footwear, safety glasses, ear plugs, gloves (rubber, leather & cloth), and/or any other equipment required to do the job assigned safely Operate a telephone and mine phone/radio with sufficient voice and hearing to carry on a conversation for the purpose of communicating information, to management Hear sufficiently to distinguish various pitches of warning alarms Drive to remote sites when required – 10%. Thank you for considering this opportunity!
https://slr.catsone.com/careers/1700-General/jobs/10336503-Project-Controls-Specialist---Drilling/Construction/
Our client is a leading professional services consulting MNC and they are currently looking for a Senior Finance Business Partner to join their team. Reporting to the Financial Controller, you will be responsible for monitoring and analyzing the performance of various projects and market segments, identifying and understanding project financial and commercial risks, reviewing all major projects and working with the project team to identify areas of working capital requiring focus. You will be the key point of contact and will work closely with other corporate business partners, analyze management accounts, present financial reports and work with managers to prepare forecasts and budgets, provide assistance in financial modelling for different scenarios to help the business improve performance over the long term. You should possess a Degree in Accounting and be CA or CPA qualified. You will need to have exposure in consulting, engineering, construction or project related industries where you will have a strong knowledge of WIP. You will need to have strong commercial acumen; business partnering skills and have excellent communication skills to be able to work effectively with various stakeholders. MNC exposure is a must. Please send your resume, in WORD format only and quote reference number BL17042019, by clicking the apply button. Please note that only short-listed candidates will be contacted.
https://www.roberthalf.com.sg/job/singapore/senior-finance-business-partner/bl17042019-sgen?page=2
My client, an international development agency based near Vauxhall, is looking to recruit a Project Officer to join their expanding team. Salary is £26-28k per annum, depending on experience. The role reports to the Project Coordinator. Reporting requirements include: Attendance at all regular management team meetings, Provision of regular updates to the line manager, including updates on project risks and any other material matters and/or areas of concern, Monthly submission of financials and forecasts, and contributions to reporting; There are 4 main elements to the role: Project Management: Support project start-up and closeout; Participate in the development, review, and tracking of progress against project work plans; Support the development and presentation of internal and external reporting, project risk assessments, project reviews; Ensure project issues and risks are logged, monitored, and updated and escalate risks to leadership Learn to review budgets, reports, and other documents to ensure compliance Support development of project subcontractor and grants management procedures compliant with both internal policies and procedures; Lead on internal project meetings Work towards anticipating project needs Contribute to technical assistance through reviews, Aim to build skills with the goal of taking on a more senior role, such as manager of project delivery or a functional area. Financial Management: Support monitoring expenditure against budgets and forecasts, including staff costs, operating expenses and procurements; reconcile and report on all advances monthly; Assist with project financial and cash-flow planning, process monthly Funds Transfer Requests (FTRs) and payments for assigned projects; Help coordinate service providers and process work orders and invoices; Assist with internal financial reporting and tasks including accruals and reviewing client invoices. Support preparation of contract action requests Support audit processes and ensure retention of project records is compliant Travel Management Manage international travel Maintain the deployment register Support on travel logistics Head Contract and Client Relationship Management Support the monitoring and maintenance of compliance with head/prime contracts; Manage and support contract amendments Provide inputs and support to colleagues on any client communication or reporting needs. Person Specification Internship, customer service, or prior experience in any business setting is helpful. Basic understanding of management including organizational skills and attention to detail. Excellent written and verbal communication skills. Financial acumen and the ability to interpret and analyse financial reports. Sound problem solving and decision-making skills. Ability to travel and work in developing countries for extended periods of time if needed. Ability to work both independently and as a part of a team when required.
https://www.reedglobal.com/jobs/project-officer-105
Job description: ABOUT US: Founded in 1967, M.P. Lundy Construction (Lundy) protects and supports its clients by providing expert advice and oversight in all matters related to planning and executing complex construction projects across Eastern Ontario. Our dedication to creating strong professional relationships combined with a tremendous natural desire to see our clients succeed has enabled us to become a leader in customer-focused Design-Build and Construction Management solutions. Since our inception, our continuous evolution has been rooted in the belief that we must always look for ways to improve how we plan, communicate, and execute. With this in mind, our commitment to refining our key processes, year after year, has transformed our organization in a fundamental way and continues to propel us to succeed. We execute on what we say, we look forward and plan effectively, and we inspire confidence that we can make it happen and deliver predictable results. If you share our commitment to fostering long-term relationships, honouring commitments, and executing on plans, we want to talk to you. POSITION OVERVIEW: Reporting to the VP, Construction, as a Senior Construction Project Manager with Lundy you’ll take on a full range of project management responsibilities, including planning, budgeting, reporting, and stakeholder communication. As an ideal candidate for this role, you’re a natural leader with exceptional communication and organizational skills. You’re capable of administering contract controls and costs, developing and maintaining construction scheduling, and delivering predictable results through a relationship of trust with the Project Stakeholders, including the client, consultants, suppliers, and subcontractors. KEY RESPONSIBILITIES: - Plan and manage medium to large ICI construction projects ($20MM+). - Liaise with Estimating and Business Development to obtain and review all project information and become intimate with and fulfil all contract requirements; set project schedule and budgetary goals and finalize project procurement. - Work directly with the Estimator to establish and validate project budgets and may use the Estimating Department resources for tendering and budgeting of changes. - Validate construction cost estimates; create the schedule of values for budget and progress billing; reconcile all costs and mitigate budget overruns; review labour and material budgets with Construction Leads regularly. - Review and approve all Subtrade claims; interface with the Accounting department; review reports generated from accounting against project records to ensure accuracy and adjust cost to complete projections monthly; approve all project invoices; responsible for all cost control documentation. - Develop and update the Master Project Schedule; create a suitable work breakdown structure and set realistic target dates & milestones; report on progress against plan including written notification of project schedule delays to secure Lundy’s rights under the terms of the contract and communicate schedule updates to the owner/consultants monthly. - Prepare and maintain project risk registry; communicate and review project risks with Project Stakeholders regularly and in a timely manner; develop strategies for risk mitigation. - Provide regular progress reports to the Project Stakeholders on all aspects of the project, including safety, quality, budget, schedule, risk, etc. KEY QUALIFICATIONS: - 5+ years’ experience managing ICI construction projects. 
- Post-Secondary education in a relevant field. - Demonstrated success in effectively prioritizing tasks based on demand and deadlines and effectively working on multiple projects simultaneously. - A track record that clearly demonstrates effective leadership when interacting with multiple external clients and stakeholders. - Working knowledge of contract terms and conditions. - Fundamental understanding of project financial controls. - Ability to communicate professionally and effectively. - Demonstrated competency with Microsoft Office including: Outlook, Word, Excel and PowerPoint. - General understanding of and commitment to QA/QC policies and procedures as well as Health and Safety practices/requirements. - Design/build experience is an asset. - Committed team player with excellent interpersonal skills and a progressive leadership style. - Knowledge of estimating techniques and strategies is an asset. - Strong mathematical and financial aptitude. - Experience with project or construction management software is an asset. - An ability to read, understand and interpret construction plans and specifications is desired. - Must have excellent reading as well as writing skills. - Effective and collaborative communicator with the ability to develop productive working relationships with employees at all levels. - Results- and solutions-oriented individual. - Sense of urgency and drive to succeed. Must be comfortable working within a deadline-driven, fast-paced environment. - Resourceful, energetic and well organized, with the experience and proven ability to set and manage competing priorities. - Strong team player with the ability to work independently as required. - Driver’s license and the ability to travel to job sites. We thank all applicants for their interest; however, only those to be interviewed will be contacted. For more information about MP Lundy Construction Inc, please visit our website at mplundy.com. MP Lundy Construction respects the dignity and independence of people with disabilities and provides accommodations throughout the recruitment and hiring processes. Expected salary:
https://ca.ineedajob.online/job/senior-construction-project-manager/
We are delighted to be working with a globally recognised professional services business who are appointing a senior consulting professional to manage and lead their global HR projects. The role has a strong focus on project and programme management, combined with the ability to influence, advise and guide senior stakeholders across different locations and time zones. The role involves managing budgets and liaising with colleagues across HR as well as within the finance function, so strong financial acumen is essential. The opportunity is one to be part of a global HR transformation, influencing and shaping the HR function at a senior level. Critical for consideration in this role is a background as a management consultant within the area of HR, coming from the Big 4 or a similar large management consultancy firm. The role would be a good match for a senior manager looking to take their first step in-house from consultancy and will require core competency across HR transformation, project and programme management, communication, influencing and stakeholder management. The SR Group (UK) Limited is acting as an Employment Agency in relation to this vacancy.
https://www.frazerjones.it/job/senior-hr-projects-manager-1/
Location: Washington, D.C. Reports to: Senior Vice President, Programs & Policy Plan International USA Overview Plan International USA (Plan) is part of a global organization that advances children’s rights and equality for girls. We strive for a just world, working together with children, young people, our supporters, and partners to: - Empower children, young people, and communities to make vital changes that tackle the root causes of discrimination against girls, exclusion and vulnerability. - Drive change in practice and policy at local, national, and global levels through our reach, experience, and knowledge of the realities children face. - Work with children and communities to prepare for and respond to crises and to overcome adversity. - Support the safe and successful progression of children from birth to adulthood. Founded in 1937, Plan now works in more than 70 countries. Our programs in water and sanitation, health, education, child protection, economic empowerment and disaster response, and our advocacy work made progress in addressing girls’ rights by bringing girls’ voices to the forefront in policy discussions on early marriage, sexual and reproductive health and gender based violence. Plan International USA has two offices, located in Washington, DC and Warwick, Rhode Island. It is part of the global Federation, Plan International, which is comprised of collaborative partnerships between 21 national offices and 51 country offices. Plan International USA is registered 501(c)(3) and has an independent Board of Directors. Department Overview The Project Management Unit is a newly created team with Plan and was created to (1) standardize international project management across all technical teams and donors within Plan USA; and (2) improve the way resources are allocated to projects, ensuring that Plan’s programs are delivered in a compliant manner on time within budget and produce the expected results and (3) serve as a lead to continuously enhance Plan’s international project implementation. This unit performs a critical role working across Plan USA to ensure compliant project implementation of all Plan technical areas and requires working in close collaboration with Plan Country Offices at both the senior level and project implementation units (e.g., finance, Human Resources, procurement). The Project Management Unit is part of the Programs & Policy division, which is led by the Senior Vice President of Programs & Policy. The person in this role will report directly to the Senior Vice President of Programs & Policy, collaborating as required with the Chief Operations Officer and Chief Marketing Officer. The peers of this position are the Senior Director of Business Development, Director of Monitoring, Evaluation, Learning and Research (MERL) and Senior Director of Technical Services, along with senior leaders in other divisions within Plan International USA such as IMC (significant individual, corporate and philanthropy donors), Operations, Finance and Plan Country Offices. Position Overview The Senior/Director, Project Management Unit oversees a diverse portfolio of international projects funded by USG (predominantly USAID), significant individual donors, corporations and philanthropic foundations that are implemented in different Plan Country Offices. 
The Director of the Project Management Unit is responsible for ensuring that all Plan’s international programs follow project management standard operating procedures to ensure compliant international project implementation across multiple countries. S/he works closely with staff from Finance, Contracts, Operations & Awards (COA), Plan’s technical experts, representatives from IMC and designated counterparts in Plan Country Offices or Plan teaming partners to ensure startup, execution and close-out of Plan international projects implemented in country on time, within budget and in compliance with donor expectations. Performing this role requires an exceptional ability to collaborate with a range of professionals, seek continuous improvements, manage change through coaching and communication as well as motivate and grow a team. Given the cross-entity nature of this position, the Director must also be able to collaborate effectively with a range of professionals both in the US and internationally. Since this is a new role, the Senior/Director is expected to be comfortable and flexible in leading change, proactively engage in improving project implementation in collaboration with all involved units named above and positively motivate implementation of changes as quickly as possible within Plan USA and Country Offices. Critical determinants for success in this position are collaborative leadership, demonstrated capabilities in improving international project implementation, strong communication and listening skills, demonstrated international acumen and an ability to motivate a diverse team. This position oversees a staff of approximately five to ten senior project associates and project associates (when the Unit will be fully developed). The Senior/Director of the Project Management Unit is responsible to build and expand the team’s skill sets in project management and implementation, lead required improvements to project implementation and grow their technical expertise by partnering with the technical team and other units essential for project implementation. Main Responsibilities Project Management: - Lead, build and manage a team focused on implementation of Plan USA international project management approach. - Collaborate effectively across Plan to identify continuous improvement of processes, systems and procedures for project implementation. Collaborate on adaptation of identified systemic improvements. - Effectively engage with different levels across Plan to communicate and facilitate changes in project implementation. - Partner with technical, finance, compliance, IMC, Country Office staff and teaming partners to create standardized work plans and annual budgets to ensure appropriate resource allocation to implement the project. Perform an integral role in forecasting of budgets for projects and Plan USA. - In collaboration with team members, Plan Country Offices and Finance ensure the review of monthly budgets and, invoices. Monitor operational and financial status of each project. In collaboration with finance, review monthly updates on financial performance of projects. - Monitor ongoing issues and risks and work with peers and the SVP, CMO, and COO to mitigate risks and resolve issues, including working with Plan Country Offices where applicable. - Lead project presentations in quarterly portfolio reviews. 
Training, Supervision & Coaching: - Develop capabilities of team members or others on Plan’s project management approach - Direct and supervise the project management unit staff; distribute project portfolio workload allocations; plan field visits with the team based on risk, project materiality, etc.; work with team in all aspects of workload management, including time reporting and staff allocation. - Lead the team in the design and delivery of periodic reviews of process, systems and procedures, in the provision of in-house trainings and workshops related to Plan’s project management standard operating procedures. - Develop and manage relationships across teams through building trust, understanding requirements, and being proactive. Qualifications Critical determinants for success in this position are collaborative leadership, demonstrated capabilities in improving international project implementation, strong communication and listening skills, demonstrated international acumen and an ability to motivate a diverse team to achieve Plan’s mission. Competencies: - Excellent leadership and management skills. - Demonstrated experience leading and managing transformational change in international operations, preferably in an INGO setting - Demonstrated ability to work in a team environment and coordinate activities among teams, especially multinational teams. - Demonstrated ability to coach and build team capabilities. - Strong knowledge of project management methodologies and principles, including international issues that can arise while working with different country contexts. - Proven home office and international USG funded project experience; must have managed complex USAID funded projects or portfolios of such projects. - Ability to apply project management approaches to private sector or other institutional donors. - Ability to use financial and utilization data to build budgets and track adherence to goals. - Strong communication and facilitation skills. - Well organized and able to manage multiple priorities. - Ability to communicate clearly in English, both oral and written, with Spanish and/or French a plus. - Experience working with US based projects funded by private sector or institutional donors a plus - Competent computer skills, including MS Office and Salesforce Education and Experience: - Master’s degree, preferred; International Development or Business discipline - Knowledge and experience with USG Rules and Regulations and of other institutional funders within the field of international development. - 10+ years’ experience in international development, demonstrating increasing management responsibilities in a similar size organization implementing USG international contracts and cooperative agreements. Preference given to candidates that are also familiar with managing international projects for donors from private sector or institutional donors. - Preference given to candidates with INGO experience and candidates with field/home office project management experience; experience leading units that provided project management/operational support to a technical sector or group of technical sectors. - Experience in INGO with diverse (private and institutional) funding - 5+ years supervisory experience Physical and Mental Demands:
https://www.broderickhaightconsulting.com/plan-senior-dir-program-mgt-unit
A global leader in the enclosures and industrial components industry is now looking to recruit a Senior Financial Officer for their Milan office. The ideal candidate will have a degree in Accountancy and Management, have acquired 8 years of experience in a senior financial role for an international group, and have perfect knowledge of both Italian and English. The person will also have developed strong business acumen by having moved through other departments in previous roles. The ideal profile will also demonstrate clear experience with conflict resolution and networking skills. Main tasks: - Establish business objectives and formulate strategies to support the achievement of those objectives; - Evaluate the impact of the business strategies on the organization; - Lead, guide and provide input into financial decisions; - Oversee financial control of the factory; - Oversee the general functions of the Finance department, ensuring that professional practices are followed at all times; - Manage budgeting, forecasting and reporting for both the P&L and the balance sheet; - Budget and forecast all salaried compensation to be incorporated into departmental budgets; - Ensure that IT department standards, practices and policies are kept updated and enforced; - Manage the general accounting function to ensure timely and accurate monthly, quarterly and annual accounting closes; - Present results to both General and Financial Management. SR Group is acting as an Employment Agency in relation to this vacancy.
https://www.cartermurray.com/job/senior-financial-officer/
Vice President of Operations Suntuity is rapidly expanding and we need talented people to grow with us! If you are high-energy, motivated by opportunity, and have the business acumen, you need to join our team. Our VP of Operations will provide oversight, management, and implementation with experience in scalability for our operational strategies and objectives to ensure the delivery and profitability of the residential solar division’s production goals. Responsible for the management, oversight, and development of multiple domestic residential solar installation fulfillment centers. Our VP of Operations must be able to coordinate, communicate, and motivate service teams across various regions with the business acumen and expertise of developing, scheduling, and maintaining complex residential solar installation projects. Our VP of Operations will interface with the managers of design, permitting, survey, installation and inspection teams (internally, as well as externally), as well as intra- departmental teams (sales, customer service, human resources, finance, etc.) and will be required to ensure the stability and success of all residential solar operations. Dimensions: - Multiple company locations with full installation crews along the US eastern seaboard with expansion strategy into new markets. Must have expertise in market expansion and development in the Residential Solar industry - Direct responsibility for maintaining service fulfillment, vendor management, and operational excellence including budgeting - Responsibility for reduction in project deficiencies, maintaining SLA requirements, employee development, state and federal compliance, OSHA guidelines/regulations, as well as corporate best standards - Oversight of multiple operational departments including Field Operations, Corporate Operations, Fleet Management, Safety, Inventory, Procurement, and Project Management - Short-term departmental goals include (but are not limited to): on-time delivery, cost per watt management, quality and gross margin, and staff development - Long-term departmental goals include (but are not limited to): the formulation, planning, and implementation of strategies that will increase crew productivity and retention, mitigate project deficiencies, optimize process improvement through intra- departmental networking, and maintain excellence in service and delivery procedures Responsibilities: - Manage the operations function concurrent with: - Business growth - Introduction of new operational systems/protocols/standards - Meeting departmental financial objectives - Maintenance of SLA standards; maximizing productivity while mitigating loss/deficiencies - Development and management of suppliers/vendors/sub-contractors in the pursuit of maintaining budget/waste/turnover - Meeting divisional goals in relation to safety, quality, and on-time delivery of products. 
- Assess and assist in upgrading the management talent base within operations to achieve growth and meet market needs (examples include: reduced cost, shorter manufacturing/product introduction cycle times and on-time project delivery) - Conceive, research, plan, target, and control reductions in cost and product lead times on existing and new projects - Manage and assist in coordinating effort between support departments within the organization - Participate in the implementation of process improvement policies and protocols to decrease costs and promote on-time delivery - Create a productive department through increased written and verbal communication, streamlined project management practices, and project review standards - Is a key contributor to: - The overall long-range planning process - Establishment and assessment of the department’s annual operating budget and service delivery forecasting as agreed to by the company CEO - Achievement of monthly, quarterly, and yearly goals as set forth in the budget - OPS Reviews, covering current and future forecasts and delinquencies/deficiencies - Achieve the division’s yearly financial objectives, for all service locations reporting to the incumbent, through planning, directing, controlling, implementing, evaluating, monitoring, and forecasting as needed to achieve budgets and cost of sales - Project a positive image to peers and subordinates, to the customers we serve, to the industry in which we participate, and to the community in which we live by producing cost-efficient, quality products/services in a productive, responsive, and proactive environment - Plan, prepare, control, monitor, and forecast departmental direct and/or indirect budgets - Coordinate needed support to operations areas through intra-department interface for smooth work flow and cost-efficient product - Continuously improve customer satisfaction through programs to reduce deficiencies, provide on-time delivery, and meet customer quality and cost expectations. Actively interfaces to communicate and facilitate customer needs within the organization. - Participate in the implementation and management of operational service fulfillment processes, employee development and retention efforts, quality assurance protocols, as well as product and safety review - Provide a leadership role in the integration of efforts within operations, quality, sales, and engineering for the effective introduction of new quality systems and technology within operations - Establish, prepare, implement, revise, and maintain policies and procedures related to operations and safety protocols - Administer and manage the company’s safety and quality to provide an adequate and safe working environment Requirements and qualifications:
https://suntuity.com/corporate-information/careers/vp-operations/
A Cost Breakdown Structure (CBS) is a breakdown or hierarchical representation of the various costs in a project. The Cost Breakdown Structure represents the costs of the components in the Work Breakdown Structure (WBS). The CBS is a critical tool in managing the project lifecycle, especially the financial aspects of any project by creating a structure for applying measurable cost controls. Looking for ways to better track and manage your project costs? Learn more about project cost management and Project Business Automation. The image above is an example of a typical CBS. This one resides within the ERP and is directly linked to the WBS. The CBS is required for Project Business to appropriately manage the financial aspect of any project. It is the planning structure financial controllers use to create budgets and calculate various financial metrics such as: estimate at completion, cost to complete, variances and earned value. Typically, the Cost Breakdown Structure is missing completely in most project tools, including project management, scheduling and the ERP. Therefore, a CBS is either not utilized or it is created and managed in a spreadsheet. If a CBS is not used, then the WBS serves as both the financial and operational hierarchy of the project. This is a major project governance flaw for project-based companies. It is imperative to detach the cost and work breakdown structures, particularly for large projects. If your project involves hundreds or thousands of tasks, it will grow to a point where it becomes impractical to manage the project financials based on the schedule (WBS). At this point you need to employ a CBS parallel to your WBS. The CBS will enable you to manage costs at an appropriate account/GL level or category and create meaningful, real-time financial analysis directly connected to the WBS. Companies attempting to manage projects financially and operationally using a single hierarchy (the WBS) are setting themselves up for failure. A single hierarchy contains insufficient details for operational planning and is too granular for budget, estimation, cost collection, and variance management. Therefore, companies compromise on both sides and performance suffers as a result. If a CBS is maintained in a spreadsheet it is completely divorced from the operational aspects so much as it does not represent the current reality of the project. Changes from the WBS do not flow through to the CBS and vice-versa. Therefore, the CBS and WBS are almost always out of sync. To learn how a CBS should be integrated into the rest of your project processes and systems, Download the Project Business Automation Blueprint. Cost Breakdown Structure Example Adeaca One Project Business Automation has a dedicated financial multi-level hierarchy, the cost breakdown structure (CBS). This is the central planning structure against which all project financial activities are planned and managed. The CBS allows you to define the exact level of details required to efficiently and effectively manage a given project or contract. In order to provide one cohesive construct with the necessary level of granularity, the CBS manages and tracks: - Change orders - Revisions and transfers - Contingencies - Estimate at completion - Variances - Progress and productivity indicators All budget positions are processed within dedicated budget versions against the CBS. Budget versions are subject to approval, capturing cost and revenue estimates, as well as cash-flow projections. 
You can create project budgets using built-in templates and formula-driven estimation tools. Any changes to the original budget are processed as revisions, change orders, or transfers. Adeaca PBA provides a full audit trail of all budget versions including estimation of deliverables, contingencies, and undistributed budgets for rolling-wave planning. Additionally, the CBS in Adeaca PBA can also be directly and automatically updated from the WBS and vice-versa. Therefore, if a change in the schedule affects the financial aspects of a project (such as EAC, margin, and cash flow), it is known immediately.
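To make the idea of a cost hierarchy linked to the WBS concrete, here is a minimal Python sketch of a CBS node that rolls budgeted and actual cost up the hierarchy and maps each node to one or more WBS task IDs. This is an illustration of the concept only, not Adeaca's implementation; all class, field, and example names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CBSNode:
    """One line of a cost breakdown structure (illustrative only)."""
    code: str                      # e.g. "1.2" for a cost account
    name: str                      # e.g. "Electrical - Labor"
    budget: float = 0.0            # budgeted cost at this node
    actual: float = 0.0            # actual cost collected at this node
    wbs_ids: List[str] = field(default_factory=list)     # linked WBS tasks
    children: List["CBSNode"] = field(default_factory=list)

    def total_budget(self) -> float:
        return self.budget + sum(c.total_budget() for c in self.children)

    def total_actual(self) -> float:
        return self.actual + sum(c.total_actual() for c in self.children)

    def variance(self) -> float:
        # Positive = under budget, negative = over budget at this level.
        return self.total_budget() - self.total_actual()

# Tiny example: a project-level CBS with two cost accounts tied to WBS tasks.
root = CBSNode("1", "Residential install project", children=[
    CBSNode("1.1", "Materials", budget=40_000, actual=37_500, wbs_ids=["T-010", "T-020"]),
    CBSNode("1.2", "Labor",     budget=25_000, actual=27_100, wbs_ids=["T-030"]),
])
print(f"Variance at project level: {root.variance():+,.0f}")
```

Managing costs at this account level, while the WBS keeps the task-level detail, is the separation of financial and operational hierarchies the article argues for.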
https://www.adeaca.com/blog/faq-items/what-is-a-cost-breakdown-structure/
Job Description: A leading Oil & Gas Operator is looking for a Senior Cost Engineer to join their team in Zoetermeer.

Responsibilities:
- Deliver, maintain and manage robust cost management frameworks, procedures and tools
- Create and maintain a database of costs and norms for estimating purposes and future benchmarking
- Compile project budgets, support Corporate and Joint Venture budgeting requirements and prepare project financial approvals (AFEs) as and when required
- Track, monitor and accurately report project commitments, value of work done (VOWD) and forecast costs
- Provide accurate monthly reporting and input into overall project and wider business reporting cycles
- Support project change control, reviewing scope changes, assessing impacts on project costs and managing contingency and allowance provisions
- Ensure all project requisitions and invoiced costs are valid and coded correctly
- Provide monthly project accruals
- Analyze cost trends to offer early warnings to Project Management
- Support the project risk review / probabilistic analysis as necessary
- Interface with and build strong working relationships between the various Company Business Groups and all counterparts in Contractor organizations
- Maintain an audit trail on all project costs and participate in project audits as and when required
- Maintain the cost management system and appropriate databases
- Prepare for and contribute to project reviews
- Support other project control tasks as required
- Support department monthly reporting and strategic goals
- Actively participate in the HSEQ strategy to ensure expectations for the Project Team are met

Qualifications:
- Educated to a degree or equivalent in an appropriate discipline (preferred)
- 10-15 years’ experience in a cost engineering role supporting major E&P development projects (essential)
- Extensive working knowledge of project cost management – cost estimating principles, WBS structures, budgets, allocations, commitments, VOWD, forecasts and estimated final costs
- Track record of continuous professional development
- Ability to work proactively under own initiative with minimum supervision
- Willingness to learn and to share knowledge and learnings with others
- A good appreciation of both topsides and subsea works
- Working knowledge of SAP
- Extensive use of MS Office packages (specifically Excel) is essential
- Language: fluency in English and Dutch is mandatory

Please note only applicants eligible to work in the EU will be considered.
https://www.nesgt.com/job/107430061435/senior-cost-engineer-3/
Location: Melbourne CBD & Inner Suburbs
Work Type: Full Time

- A large, complex self-insured environment
- Lead a high-performing and well-respected team
- Lead a number of high-profile strategic projects

An undisputed market leader with a long and proud history of operating in Australia. Employing over 6,500 people across 100 sites around the country, this organisation is well regarded for its commitment to innovation, sustainability and, most importantly, its people.

About the Role
Reporting to the Executive Director People & Culture, the Head of Injury Management and Workers Compensation will be accountable for leading, directing and managing all injury management and workers compensation activity relating to the organisation's self-insurance licence, improving HSE and workers compensation performance. Additionally, you will:
- Determine the risk management compliance and internal controls necessary to ensure strategic and operational risks are effectively managed and minimised in relation to workers compensation.
- Lead your team to provide proactive and efficient guidance and expertise to the business units regarding workers compensation claims, issues, strategies and financial implications.
- Manage service providers to ensure the highest level of service is received.
- Manage the claims and injury management systems to meet internal and external audit standards and progress in line with self-insurance obligations.
- Manage the strategic regulator interface and relationship, including all licensing and compliance requirements.
- Successfully oversee/lead a number of large programs of work.

About You
- Tertiary qualifications in a relevant discipline
- Experience leading a self-insured workers compensation/injury management function at a national level
- In-depth knowledge and experience of applicable workers compensation legislation and regulations, and a detailed understanding of the self-insurance licence and compliance requirements
- Demonstrated experience in the development and monitoring of budgets in a complex business in relation to workers compensation
- Highly developed conceptual and analytical skills, with a proven capacity to establish strategic directions for large, matrix-style organisations as well as the successful action plans that achieve those directions
- Strong commercial acumen and an understanding of the business drivers
- A highly capable decision-maker with the ability to navigate ambiguous and complex scenarios
- Well-developed written communication and interpersonal skills, able to influence outcomes beyond your span of control and broker consensus among stakeholders with conflicting needs and expectations
- A strong continuous improvement mindset

Click "Apply for this job" below to apply for this role.
https://www.uworkin.com/job/15459330/head-of-injury-management-and-workers-compensation
The dividend discount model (DDM) is a quantitative method used for predicting the price of a company's stock based on the theory that its present-day price is worth the sum of all of its future dividend payments when discounted back to their present value. It attempts to calculate the fair value of a stock irrespective of the prevailing market conditions, taking into consideration the dividend pay-out factors and the market's expected returns. If the value obtained from the DDM is higher than the current trading price of the shares, then the stock is undervalued and qualifies as a buy, and vice versa. The variables of the DDM are g, the constant growth rate in perpetuity expected for the dividends; r, the cost of equity capital for the evaluating company or entity; and D1, the value of next year's dividends. These variables can be used to estimate the value, V, of a share or investment and determine whether it is under- or overvalued. The DDM assumes that a company's dividends are going to continue to rise at a constant growth rate indefinitely. You can use that assumption to figure out what a fair price is to pay for the share or investment today based on those future dividend payments. The cost (or return) of equity is the return a company requires to decide whether an investment meets capital return requirements. Firms often use it as a capital budgeting threshold for the required rate of return. A firm's cost of equity represents the compensation the market demands in exchange for owning the asset and bearing the risk of ownership. The traditional formula for the cost of equity is the dividend capitalization model. "D1" stands for the stock's expected dividend over the next year. For the purposes of this calculation, you can assume that next year's dividend will grow at the company's historical rate of dividend increases. If given the value of the current year's dividend (D0), the value of next year's dividend can be calculated by multiplying the current dividend by one plus the growth rate:

D1 = D0 * (1 + g)

The value of a share or investment (V) is the expected value of a share based on the DDM calculation. This value can then be compared to the most recent selling price of a share, currency, commodity, or precious metal that is traded on an exchange, as that price is the most reliable indicator of the security's present value. Should the estimated value be higher than the current market value, it can be assumed that the investment is undervalued on the market and may present an investment opportunity; conversely, should the estimated value be lower than the market value, the investment may be considered overvalued. Since the variables used in the formula are the dividend per share and the net discount rate (represented by the required rate of return, or cost of equity, and the expected rate of dividend growth), the model comes with certain assumptions. For example, let's say that a certain company's share is expected to pay a $2.00 dividend next year, and its dividend has historically grown by 4% per year, so it is fair to assume this same growth rate going forward. Assume an investor requires a rate of return of 10%. Using these input values, the investor can calculate the share's value using the dividend discount model as:

V = D1 / (r - g)
V = $2 / (0.10 - 0.04)
V = $33.33

Using the dividend discount model, an investor would value the company's shares at $33.33 per share.
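As a quick illustration of the arithmetic above, here is a minimal Python sketch of the constant-growth DDM. The function name and the hypothetical market price are my own; the example figures simply mirror the worked example in this article.

```python
def gordon_growth_value(d0=None, d1=None, r=0.10, g=0.04):
    """Constant-growth dividend discount model: V = D1 / (r - g).

    Supply either d0 (current dividend, grown one year) or d1 directly.
    Requires r > g, otherwise the perpetuity does not converge.
    """
    if d1 is None:
        d1 = d0 * (1 + g)          # D1 = D0 * (1 + g)
    if r <= g:
        raise ValueError("required return r must exceed growth rate g")
    return d1 / (r - g)

# Worked example from the text: D1 = $2.00, r = 10%, g = 4%
value = gordon_growth_value(d1=2.00, r=0.10, g=0.04)
print(f"Estimated fair value: ${value:.2f}")    # -> $33.33

# Compare with a quoted market price to flag under/overvaluation
market_price = 28.50                             # hypothetical quote
print("undervalued" if value > market_price else "overvalued")
```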
This value is important for valuation by investors as it indicates whether shares are under or overvalued when comparing them to the market value on a stock exchange. For instance, if the DDM value calculated is lower than that of the market value, an investor may consider the shares overvalued and could make the decision to forgo investing in the security. However, if the DDM value is higher than the market value on the stock exchange, it might flag an investment opportunity for the investor as the shares may seem undervalued compared to the market price. The dividend discount model is best used for larger blue-chip stocks because the growth rate of dividends tends to be predictable and consistent. For example, Coca-Cola has paid a dividend every quarter for nearly 100 years and has almost always increased that dividend by a similar amount annually. It makes a lot of sense to value Coca-Cola using the dividend discount model. It is important to note that since dividends and their growth rate are key inputs to the formula, the DDM is believed to be applicable only on companies that pay out regular dividends. However, it can still be applied to stocks which do not pay dividends by making assumptions about what dividend they would have paid otherwise. The dividend discount model was developed under the assumption that the intrinsic value of a stock reflects the present value of all future cash flows generated by a security. At the same time, dividends are essentially the positive cash flows generated by a company and distributed to the shareholders. A company produces goods or offers services to earn profits. The cash flow earned from such business activities determines its profits, which gets reflected in the company’s stock prices. Companies also make dividend payments to stockholders, which usually originates from business profits. The DDM model is based on the theory that the value of a company is the present worth of the sum of all of its future dividend payments. Dividend Discount Model is one of many formulas available on the On Equation Finance Calculator. Check out Dividend Discount Model and many more finance formulas by clicking the download button and get On Equation Finance Calculator on your mobile device! On Equation Finance Calculator is available on both iOS and Android platforms. Take some of the stress out of your finance equations today!
https://onequation.io/dividend-discount-model
The peso has gone through a lot of changes since the crisis in 1994. The peso is now the 8th most traded currency in the world and the most traded currency in Latin America. After the crisis in 1994, Mexico adopted a floating exchange rate regime. As a result, the currency's price is set by the forex market through supply and demand. Currently, Mexico is the 15th largest economy in the world. As of 2019, Mexico has a real GDP of 2.45 trillion dollars and a nominal GDP of 1.15 trillion dollars. Mexico is not in a recession at this time, but GDP growth has slowed in recent years. In 2018, GDP growth varied from a low of 1.2% to a high of 2.6%. At the moment, GDP growth is recorded at 1.8% and does not show signs of jumping drastically. The IMF recently projected that GDP growth will be 2.1% in 2019 and 2.2% in 2020; it had previously projected growth of 3.5% in 2019 and 3.6% in 2020. Increases in GDP result in an increase in the supply of pesos in foreign countries, which will depreciate the peso. In light of the slower growth in GDP, Mexico can expect minuscule depreciations in the peso as real GDP increases. The current political environment also has an effect on the value of the peso. On December 1st, Mexico inaugurated a new president, Andres Manuel Lopez. President Lopez promised many changes to the welfare of Mexico during his campaign. In fact, President Lopez has already proposed a moderate increase in government spending for 2019; government spending is expected to rise 6.1% in real terms. With an expansionary fiscal policy being implemented in 2019, the value of the peso can be expected to appreciate, thus reducing the peso-per-dollar exchange rate. Alongside increased government spending, President Lopez has already implemented an increase in the minimum wage to 102.68 pesos a day. These effects can be seen in figure 1a. This increase in the minimum wage can have several effects on Mexico's economy. One effect is increasing the consumption of low-wage households, thus increasing GDP and inflation. However, it is too early to determine the effects of the wage increase. Inflation rates have shown signs of softening in 2019. Inflation for Mexico dropped to 4.37% in January of 2019, as opposed to 4.83% at the end of 2018. With inflation rates decreasing, the supply of pesos to foreign countries will decrease, and the demand for dollars in foreign countries will increase. If the inflation rate continues to decrease, Mexico can expect the peso to appreciate. However, Mexico should expect the inflation rate to jump, as the newly elected president has increased the minimum wage. The increase in the minimum wage will increase consumer spending and firm costs and ultimately increase the inflation rate. However, inflation will always be challenged by the Central Bank with its monetary policies. As a result of softening inflation rates, the Central Bank of Mexico decided to keep the interest rate at a constant 8.25% on February 7, 2019; in fact, 8.25% is a decade-high interest rate for Mexico. With no change to the interest rate at the start of 2019, the peso's supply and demand in foreign countries should be unaffected by interest rates. With a floating exchange rate regime, the peso's value should be unaffected by the recent monetary policy. However, with the minimum wage increase on the horizon, the Central Bank of Mexico will combat inflation with an increase in interest rates.
With further increases in interest rates in the future, the value of the peso can be expected to increase. The effects can be seen in figure 1b. With GDP, the inflation rate, and interest rates considered, one can expect the value of the peso to appreciate in the future. With inflation expected to grow as a result of the increase in the minimum wage, the Central Bank of Mexico will be implementing tight monetary policies, which will increase the interest rate. President Lopez and his promises of expansionary fiscal policy will also increase the value of the peso. With these factors considered, it is time to look at the exchange rate between the peso and the US dollar. Currently, the exchange rate is sitting at 19.42 pesos for 1 US dollar. Since Mexico has a floating exchange rate regime, the forex market will determine the exchange rate between the peso and the dollar. Given the effects of monetary and fiscal policies on the forex market, the peso's value will be strongly affected by the policies mentioned previously, with the biggest effects coming from the expansionary fiscal policy and the tight monetary policies. A forecast of the peso can therefore be constructed. The peso is expected to appreciate based on current events and policies. The forecasting procedure can be seen in the econometric graphs in the appendix. A structural model and an ARMA model were run and compared in STATA. Squared residuals and root mean square errors were the main criteria for choosing the optimal model. After performing many tests on the models, forecasts were produced. The forecast of the peso in one year, that is, by the end of 2019, is 15.94 pesos to 1 US dollar. Based on the forecasting model, the peso will either appreciate or the dollar will depreciate in value. The 3-year forecast of the exchange rate is 15.35 pesos to 1 US dollar, and the 5-year forecast is 14.82 pesos to 1 US dollar. These forecasted values do seem unrealistic, since they predict that the exchange rate will fall by roughly 4 pesos. The biggest takeaway from these forecasted exchange rates is that the peso will gain value in the future, perhaps not as dramatically as the forecasts suggest, but Mexico can expect the exchange rate between the peso and the dollar to decrease.
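The write-up references a structural model and an ARMA model estimated in STATA; those models and the underlying data are not reproduced here. Purely as an illustration of the same workflow, the following Python sketch (using the statsmodels library) compares two candidate ARMA-type specifications by out-of-sample RMSE and produces a 12-step-ahead forecast of a monthly MXN/USD series. The file name, column name, and candidate orders are placeholders, not part of the original analysis.

```python
import pandas as pd
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Placeholder input: a CSV with a monthly MXN/USD exchange-rate series.
rates = pd.read_csv("mxn_usd_monthly.csv", parse_dates=["date"], index_col="date")["rate"]

# Hold out the last 12 observations to compare candidate models by RMSE,
# mirroring the root-mean-square-error comparison described in the text.
train, test = rates[:-12], rates[-12:]

candidates = {"ARMA(1,1)": (1, 0, 1), "AR(2)": (2, 0, 0)}
rmse = {}
for name, order in candidates.items():
    fit = ARIMA(train, order=order).fit()
    pred = fit.forecast(steps=len(test))
    rmse[name] = np.sqrt(np.mean((test.values - pred.values) ** 2))

best = min(rmse, key=rmse.get)
print("RMSE by model:", rmse, "-> best:", best)

# Refit the chosen specification on the full sample and forecast 12 months ahead.
final = ARIMA(rates, order=candidates[best]).fit()
print(final.forecast(steps=12))
```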
https://studydriver.com/essay-title-maker-minimum-wage-hike-harmful-effects/
Tourism is a major pillar of the Mauritian economy. According to estimates for 2010, the tourism industry contributed Rs 39,456 million to the Mauritian economy and provided direct employment to 27,161 workers. The contribution of tourism to GDP at basic prices stood at 7.4% in 2010. This is indicative of the importance of the tourism sector to the Mauritian economy.

To evaluate the impact of tourism on economic growth in Mauritius, a log-linear model will be estimated. However, economic growth is not influenced by tourism alone; other macroeconomic factors may also have an effect on growth. As such, these factors will be taken into consideration in the model. The model consists of standard variables such as investment (INV), exports (EXP) and inflation (CPI), as well as one variable, tourism receipts (TRP), which will be used to quantify the impact of tourism. Real GDP per capita is used as the reference variable in order to demonstrate the impact of tourism on economic growth.

4.2 Types of Data

4.2.1 Primary Data
Primary data is collected at source and has not been subjected to processing or any other manipulation. The most common methods of collecting primary data are surveys, interviews and focus groups. As such, primary research entails the use of first-hand data collected by the researcher specifically to meet the research objective of the project at hand. Making use of primary data implies that researchers are collecting information for the specific purposes of their study; the questions the researchers ask are therefore tailored to extract the data that will help them with their study. However, it is time-consuming and costly to collect such data.

4.2.2 Secondary Data
Secondary data consists of pre-existing information which was not gathered for the purpose of the current research. Secondary data is readily available and inexpensive to obtain, and such data can be examined over a longer period of time. Secondary data includes information from the census, a company's financial position and safety records such as injury rates, or other government statistical information such as the number of workers in different sectors. Secondary data relates to a past period and, as such, may lack timeliness and be of limited value. The drawback is that the reliability, accuracy and integrity of the data are often uncertain. However, it is easier to collect such data, and longitudinal studies may be possible.

4.3 Model Specification
A simple log-linear Cobb-Douglas production function is used to measure the impact of tourism on economic growth in Mauritius.
The equation is as follows:

GDP = f (INV, TRP, CPI, EXP)

Consider the following model, known as an exponential regression model:

GDPt = β0 · INVt^β1 · TRPt^β2 · CPIt^β3 · EXPt^β4 · e^εt   (4.2.1)

which may be expressed alternatively as

ln GDPt = ln β0 + β1 ln INVt + β2 ln TRPt + β3 ln CPIt + β4 ln EXPt + εt   (4.2.2)

where ln is the natural log (i.e. log to base e, where e ≈ 2.7183).

Equation 4.2.2 can be written as:

ln GDPt = C + β1 ln INVt + β2 ln TRPt + β3 ln CPIt + β4 ln EXPt + εt   (4.2.3)

where C = ln β0.

Therefore, the transformed model is:

ln GDPt = C + β1 ln INVt + β2 ln TRPt + β3 ln CPIt + β4 ln EXPt + εt

where
ln GDP : log of real gross domestic product per capita
ln INV : log of investment
ln TRP : log of tourism receipts per capita
ln CPI : log of the consumer price index, used as a proxy for inflation
ln EXP : log of exports
C : constant term
εt : white-noise disturbance term

In the above log-linear model, the dependent variable, GDP, is expressed as a linear function of four other independent variables, also known as the explanatory variables, namely INV, TRP, CPI and EXP. It is often assumed for such a log-linear model that any causal relationships flow in one direction only, namely from the explanatory variables to the dependent variable. The parameters of the model can be estimated using the Ordinary Least Squares method, provided the assumptions of the classical linear regression model are fulfilled. As such,

GDPt* = C + β1 INVt* + β2 TRPt* + β3 CPIt* + β4 EXPt* + εt

where GDPt* = ln GDPt, INVt* = ln INVt, TRPt* = ln TRPt, CPIt* = ln CPIt and EXPt* = ln EXPt.

The coefficient of each of the four explanatory variables measures the partial elasticity of the dependent variable GDP with respect to that variable. As such, the partial regression coefficients β1, β2, β3 and β4 are the partial elasticities of GDP with respect to INV, TRP, CPI and EXP respectively.

4.4 Explanation of Variables

4.4.1 Gross Domestic Product (GDP)
Gross Domestic Product is used to assess the market value of all final goods and services produced during a given period of time within an economy. It also measures the total income of an economy and, as such, is often correlated with the standard of living. GDP is used as the reference variable in order to assess the impact of tourism on economic growth in Mauritius. GDP is an important factor used to analyse the development of the tourism sector: if the tourism sector brings in large foreign earnings, there will be an increase in GDP, suggesting that the economy is flourishing. The GDP figures used for the regression have been adjusted for inflation using the GDP deflator.

4.4.2 Investment (INV)
Investment, which is a major component of the gross domestic product of an economy, refers to the acquisition of new capital goods. A positive change in investment may lead to a positive change in the income and output of an economy in the short run. A higher level of investment may contribute to aggregate demand, while a higher level of income may indirectly impact consumer demand. Investment, which is an injection into the circular flow of income, is a useful variable for analysing the impact of tourism on the economy of Mauritius.
Investment is expected to have the same impact on economic growth as propounded by empirical literature, such as Sargent and James (1997) who found a positive impact of physical capital and investment on growth in Canada over the period from 1947 to 1995. 4.4.3 Tourism Receipts (TRP) Tourism receipt is a major indicator of the contribution of the tourism sector to the local economy. Tourism receipt represents an inflow of foreign currency in the economy. Such receipts account for a major contribution to the gross domestic product of the Mauritian economy. As such, an increase in tourism earning is expected to have a positive impact on GDP. Most governments in developing countries encourage international tourism because such tourists bring capital to the country. Earnings of currencies permit governments to finance, at least in part, their development efforts. Tourism receipt is expected to impact positively on economic growth as postulated by Balaguer and Cantavella-Jorda (2002) or Dritsakis (2004) who claimed that economic growth and tourism are interrelated and established tourism as a driver of economic growth. 4.4.4 Inflation (CPI) Inflation is defined in economics as a rise in the general level of prices of goods and services in an economy over a period of time. As such, it is a sustained increase in the price level and it may be the consequence either of constant falls in aggregate supply or recurring increases in aggregate demand. As a result, inflation erodes the purchasing power of money, that is, there is a loss of real value in the internal medium of exchange and unit of account in the economy. An important measure of price inflation is the inflation rate, which can be calculated by taking the annualised percentage change in a general price index over time. This is referred to as the Consumer Price Index (CPI). In Mauritius, the Consumer Price Index is measured by computing the average change over time in the cost of a fixed basket of consumer goods and services. It represents changes over time in the general level of prices of goods and services acquired by Mauritian consumers. Inflation is then calculated by comparing the average level of prices during a 12-month period with the average level during the preceding 12-month period. One of the most fundamental objectives of macroeconomic policies of many countries, whether industrialised or developing, is to sustain high economic growth together with low inflation. Inflation can bring about uncertainty about the future profitability of investment projects particularly when high inflation is also linked with increased price variability. This would in turn generate more conservative investment strategies, which would ultimately result in lower levels of investment and economic growth. Inflation is expected to have a negative effect on growth as claimed by Barro (1995) who explored the inflation-economic growth relationship using a large sample covering more than 100 countries from 1960 to 1990. 4.4.5 Exports (EXP) Export entails the sale of goods and services produced in one country to other countries. There are two types of exporting: direct and indirect. For national accounts statistics, exports consist of transactions in goods and services from residents to non-residents. 
As such, an export of a good represents a change of ownership from a resident to a non-resident; this does not necessarily imply that the good in question physically crosses the frontier; while an export of services consists of all services rendered by residents to non-residents. The relationship between export growth, foreign direct investment and economic growth in both developed and developing countries is a question that continues to be of considerable interest. Cross-country trade and capital flows and interpreting the significance of these activities towards economic growth lie at the heart of the debate on economic development policy since the early literature on export and economic growth. Export is expected to impact positively on growth as postulated by Feder (1982), who mentioned that exports contribute to economic growth in a variety of ways: economies of scale and incentives for technological improvement. Thus, marginal factor productivities are expected to be higher in export industries than in non-export industries. 4.5 Data Sources For the purpose of this study, time series data has been used. A time series is an ordered chain of values of a variable at equally spaced time intervals. Time series analysis is used for economic and sales forecasting, budgetary analysis, inventory studies or stock market analysis. It encompasses techniques to investigate data in order to extract meaningful statistics and other characteristics of the data. A time series model indicates that observations close together in time will be more closely correlated than observations further apart. As such, time series models use the natural one-way ordering of time so that values for a given period can be expressed as deriving in some way from past values. Data has been collected for the period 1976 to 2009. Figures for the explanatory variables namely investment and exports and that for the dependent variable real gross domestic product were obtained from the Central Statistical Office. Data for inflation and tourism receipts was obtained from annual reports of the Bank of Mauritius. 4.6 Software The analysis of data will be done using the Microfit 4.0 software. Before carrying out the regression, the stationarity of the variable should be tested in order to avoid spurious results and invalidity of the model. The ARDL model will be evaluated. Furthermore, a co-integration test shall be performed to determine if an Error Correction Model (ECM) must be used.
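The study reports using Microfit 4.0 with unit-root, ARDL and cointegration tests; that software and the underlying 1976-2009 dataset are not reproduced here. Purely as an illustration of the log-linear specification in equation 4.2.3, the sketch below estimates the same functional form by OLS in Python with statsmodels, after a quick ADF stationarity screen. The file name and column names are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.tsa.stattools import adfuller

# Placeholder input: annual series for 1976-2009 with columns GDP, INV, TRP, CPI, EXP.
df = pd.read_csv("mauritius_1976_2009.csv", index_col="year")

# Log-transform, matching ln GDPt = C + b1 ln INVt + b2 ln TRPt + b3 ln CPIt + b4 ln EXPt + et
logs = np.log(df[["GDP", "INV", "TRP", "CPI", "EXP"]]).add_prefix("ln")

# Stationarity screen (ADF test) on each logged series, as the text recommends,
# to reduce the risk of spurious regression results.
for col in logs:
    stat, pval = adfuller(logs[col].dropna())[:2]
    print(f"ADF {col}: p-value = {pval:.3f}")

# OLS estimation of the log-linear model; the slope coefficients are partial elasticities.
model = smf.ols("lnGDP ~ lnINV + lnTRP + lnCPI + lnEXP", data=logs).fit()
print(model.summary())
```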
http://economicsessays.com/economic-impact-of-tourism-in-the-mauritian-economy-2/
Salman Ahmed Shaikh Growth rate of Gross Domestic Product (GDP) is one of the most frequently suggested alternative for Islamic finance transactions. GDP measures value of production in a given year. This alternative avoids reference to interest based benchmarks and reflects pure economic activities in a comprehensive way covering output of all sectors plus prices. A study in 2009 “Central Bank and Monetary Management in Islamic Financial Environment” provided preliminary evidence for the case of using nominal GDP growth rate as pricing benchmark in Islamic financial transactions. The study finds that in 12 out of 14 countries, equivalency of means test shows that null hypothesis, i.e. both interest rate and GDP growth rate are not significantly different from each other could not be rejected. Therefore, it is plausible to use growth in nominal GDP as the benchmark rate in money market. Indexing the primary public finance instrument based on nominal GDP growth rate will be appropriate as the benchmark used will be related to production. For specific regions, Gross Regional Product can be used which will be an even more stable average return in regional economies. Gross Regional Product can be computed by summing the weighted average growth of countries where weights will be based on the size of the economy. In other variations, GNP is a good measure to consider in trade finance in an open economy framework. Furthermore, since Jorgensen takes depreciation rate in user cost of capital, allowing for depreciation charge, Net National Product (NNP) or Net Domestic Product (NDP) are also appropriate measures. For individual households who are potential targets for consumer finance products, tax adjusted personal disposable income is more relevant. Another decision point is about whether to look at whole economy or a subset. In services sector, financial services value added is given separately. Should that be taken since that sub-sector is more relevant to the financial sector? Nominal GDP growth takes into account inflation. In stagflation, when inflation is driven by cost-push factors, it may be quite disadvantageous for commercial financing client who is suffering from input price increase and is asked to pay higher cost of finance in a recession. A possible solution is to use Real GDP growth rate plus moving average of core inflation growth rate. Even if output growth is low or negative, inflation is not and overall rate (Real GDP Growth + Core Inflation) will remain positive. However, there are some potential challenges in using this benchmark. First, output growth is not a function of time. There is no strict linear or compounded time value of economic activities. A solution is to add term premium separately. Secondly, production in some sectors may not be Shari’ah compliant. In that case, sectoral distribution in agriculture, industry and services will need to be looked at. Relationship between interbank rate and economic growth rate is not expected to be highly correlated. Theoretically, interbank rates and growth may go in opposite direction. When economy is pulled out of recession, policy rates are kept lower and when it heats up, then policy rates are raised to curtail aggregate demand. In that case, moving averages can help in ensuring better co-movement and alignment. GDP growth rate is not measured to be forward looking by design. Therefore, different approaches are needed to estimate expected growth. For this, there are time series tools, economic models and survey based methods. 
IMF also publishes World Economic Outlook in which it provides estimated growth rates in different countries. Another challenge is that in most developing economies, 30% of the economic activities are in the undocumented sector. However, what we need is a benchmark for growth. GDP is the broadest measure for illustrating growth in production. For growth in value of production, inflation rate measures the growth in prices. Even though GDP growth measures rate of change in value of new production and it is not necessarily a barometer of profitability, combining it with inflation can help in capturing the rise in value of production. If costs remain same, rise in output price reflects rise in profitability. Moreover, another challenge is that frequency of data availability is quarterly at best or monthly for some sectoral indices like industrial production index. However, monthly or even weekly inflation rates are also available. Thus, combining growth in value of production along with growth in change in prices, newer observations in the series can be obtained on weekly basis. Fixed income instruments have been issued with GDP growth rate as benchmark. However, in conventional space, such instruments use loan as underlying contract. In Islamic space, there is need for Shari’ah compliant structuring, for instance, Musharakah contracts, Musharakah certificates and Musharakah Sukuk. If sovereign securities are issued with this benchmark rate using underlying contracts of Islamic finance, then this rate will get assimilated in the financial contracts and markets at the micro level.
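As a rough illustration of the benchmark proposed above (real GDP growth plus a moving average of core inflation, smoothed to improve co-movement with financing cycles), the following Python sketch combines the two series and adds an optional term premium. The series values, window length and premium are assumptions for demonstration only, not prescriptions from the article.

```python
import pandas as pd

def benchmark_rate(real_gdp_growth: pd.Series,
                   core_inflation: pd.Series,
                   window: int = 4,
                   term_premium: float = 0.0) -> pd.Series:
    """Illustrative pricing benchmark: smoothed real growth + smoothed core inflation.

    Both inputs are period-over-period rates in percent (e.g. quarterly).
    A moving average is used, as suggested in the text, to dampen divergence
    between growth and policy-rate cycles; a separate term premium can be
    added for longer tenors.
    """
    smoothed_growth = real_gdp_growth.rolling(window).mean()
    smoothed_inflation = core_inflation.rolling(window).mean()
    return smoothed_growth + smoothed_inflation + term_premium

# Hypothetical quarterly data (percent):
idx = pd.period_range("2020Q1", periods=8, freq="Q")
growth = pd.Series([1.2, -0.5, 0.8, 1.0, 1.1, 0.9, 1.3, 1.0], index=idx)
core_cpi = pd.Series([2.0, 2.1, 2.2, 2.3, 2.2, 2.4, 2.5, 2.6], index=idx)

print(benchmark_rate(growth, core_cpi, window=4, term_premium=0.25).dropna())
```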
https://islamiceconomicsproject.com/2022/03/22/economic-growth-rate-as-benchmark-for-islamic-finance/
The following equation identifies our estimate of Social Security's accrued liability at a point in time: Accrued liabilities are equal to the present value of the benefits that all currently living taxpayers and retirees are expected to receive based on their earnings up to the year of the calculation. It is similar in concept to the method Kotlikoff (1995) suggested for crediting workers for their participation in Social Security. In the equation i identifies birth year, y0 is the starting year of the calculation, and r is the real rate at which future benefits are discounted. Nit represents the number of individuals born in year i who are alive in year t. Bit is the average benefit based on earnings up to year y0 received in year t by individuals born in year i. Because this calculation is based on accrued benefits it depends on earnings histories and is thus independent of earnings projections and future tax policy. In practice the benefit calculation and the number of individuals are further separated into sex and education categories. Elsewhere we have presented our methodology for projecting Social Security taxable earnings by birth year, sex and education categories (Rettenmaier and Saving (2000)). For individuals born in 1933 and later, the benefits are derived from our earnings projections using the scheduled benefit formula. For individuals who retire in the future, zeros enter their earnings history between year y0 and their years of retirement. For older workers the benefits are taken from the average benefits reported Table 5.A1 in the 1997 Annual Statistical Supplement. The data include the average benefits for men and women by age and by type of benefit for retired workers, wives, and widows. Benefit amounts are converted to y0 using the Social Security Trustees intermediate estimate price level changes and are discounted to y0 using a real discount rate of 5.5 percent. Population is from the Census Bureaus middle series projection. Extending the Budget Forecast to 2070 At the beginning of October the Congressional Budget Office released its long-term budget forecasts. The variant of the forecasts used here is referred to as the "Save the Off-Budget Surpluses" assumption. The CBO provides annual estimates out to 2049 of Gross Domestic Product (GDP) and federal tax receipts and the various expenditure categories as a percent of GDP. Federal debt held by the public and net interest payments also are provided. Beyond 2049 we assume that tax receipts, federal consumption expenditures and other expenditures retain the same share of GDP they had in 2049. GDP is estimated by letting GDP per worker grow at the rate of growth that existed over the last 20 years of the CBO forecast. The GDP estimates beyond 2049 are equal to the GDP per worker multiplied by the number of workers. The number of workers is obtained from the 2000 Trustees Report. Medicare and Social Security expenditures beyond 2049 are calculated by estimating expenditures per beneficiary by applying the growth rates in per beneficiary expenditures derived from the CBO's forecasts. Total Social Security or Medicare expenditures are obtained by multiplying the number of beneficiaries by the per-beneficiary estimates. Finally, total Medicaid expenditures are assumed to grow at the same rate as the Medicare expenditures.
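The accrued-liability definition at the start of this piece (the equation itself does not survive in this text) amounts to a double sum over birth cohorts and years of surviving beneficiaries times their accrued average benefits, discounted back to the base year y0 at the real rate r. A minimal sketch of that calculation is shown below; the 5.5 percent real discount rate follows the text, while the cohort arrays are made-up illustrative values, not the study's data.

```python
# Accrued liability ~= sum over birth cohorts i and years t >= y0 of
#   N[i][t] * B[i][t] / (1 + r) ** (t - y0)
# where N is the surviving population and B the average accrued benefit.

def accrued_liability(cohorts, y0, r=0.055):
    """cohorts: list of dicts with parallel lists 'years', 'N', 'B' (hypothetical data)."""
    total = 0.0
    for c in cohorts:
        for year, n, b in zip(c["years"], c["N"], c["B"]):
            if year >= y0:
                total += n * b / (1 + r) ** (year - y0)
    return total

# Toy example: one retiree cohort and one worker cohort (numbers are made up).
cohorts = [
    {"years": [2000, 2001, 2002], "N": [1_000_000, 980_000, 955_000],
     "B": [9_500, 9_700, 9_900]},           # current retirees
    {"years": [2010, 2011, 2012], "N": [1_200_000, 1_195_000, 1_190_000],
     "B": [6_000, 6_100, 6_200]},           # benefits accrued to date by workers
]
print(f"Accrued liability (year-2000 dollars): {accrued_liability(cohorts, y0=2000):,.0f}")
```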
http://www.ncpa.org/pub/st241?pg=9
The most recent and complete economic model documentation is available on Pardee's website. Although the text in this interactive system is, for some IFs models, often significantly out of date, you may still find the basic description useful to you. The economics model of the IFs forecasting system draws on two general modeling traditions. The first is the dynamic growth model of classical economics. Within IFs the growth rates of labor force, capital stock, and multifactor productivity largely determine the overall size of production and therefore of the economy. The second tradition is the general equilibrium model of neo-classical economics. IFs contains a six-sector (agriculture, raw materials, energy, manufactures, services, and ICT) equilibrium-seeking representation of domestic supply, domestic demand, and trade. Further, the goods and services market representation is embedded in a larger social accounting matrix structure that introduces the behavior of household, firm, and government agent classes and the financial flows they determine.

Contents
- 1 Goods and Services Market
- 2 Financial Flows / Social Accounting
- 3 Dominant Relations: Economics
- 4 Social Accounting Matrix Approach in IFs
- 5 Economic Flow Charts
- 6 Value Added
- 7 Multifactor Productivity
- 8 Economic Aggregates and Indicators
- 9 Household Accounts
- 10 Firm Accounts
- 11 Government Accounts
- 12 Savings and Investment
- 13 Trade
- 14 International Finance
- 15 Income Distribution
- 16 Poverty
- 17 The Goods and Services Market
- 18 The Production Function: Basic Overview
- 19 The Production Function: Detail
- 19.1 Annual Base Technology Growth in MFP
- 19.2 Driver Cluster 1: Human Capital
- 19.3 Driver Cluster 2: Social Capital
- 19.4 Driver Cluster 3: Physical Capital
- 19.5 Driver Cluster 4: Knowledge Capital
- 19.6 Issues Concerning Parameterization and Interaction Effects
- 19.7 The Relationship of Physical Models to the Economic Model

Goods and Services Market
- System/Subsystem: Goods and Services
- Organizing Structure: Endogenously driven production function represented within a dynamic general equilibrium-seeking model
- Stocks: Capital, labor, accumulated technology
- Flows: Production, consumption, trade, investment
- Key Aggregate Relationships (illustrative, not comprehensive): Production function with endogenous technological change; price movements equilibrate markets over time
- Key Agent-Class Behavior Relationships (illustrative, not comprehensive): Households and work/leisure, consumption, and female participation patterns; Firms and investment; Government decisions on revenues and on both direct expenditures and transfer payments

Households, firms, and the government interact via markets in goods and services. There are obvious stock and flow components of markets that are desirable and infrequently changed in model representation. Perhaps the most important key aggregate relationship is the production function. Although the firm is an implicit agent-class in that function, the relationships of production even to capital and labor inputs, much less to the variety of technological and social and human capital elements that enter a specification of endogenous productivity change (Solow 1957; Romer 1994), involve multiple agent-classes. In the representation of the market now in IFs there are also many key agent-class relationships as suggested by the table.
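The production function is named here as the key aggregate relationship, and the documentation below describes it as a Cobb-Douglas form in capital and labor scaled by multifactor productivity. The sketch that follows is a generic single-sector illustration of that form, not the actual IFs code; the parameter values are arbitrary.

```python
def value_added(mfp: float, capital: float, labor: float, alpha: float = 0.35) -> float:
    """Cobb-Douglas value added for one sector: VA = MFP * K**alpha * L**(1 - alpha).

    mfp     - multifactor productivity level (technology term)
    capital - capital stock in the sector
    labor   - labor input in the sector
    alpha   - capital share (illustrative value)
    """
    return mfp * capital ** alpha * labor ** (1 - alpha)

# Illustrative numbers only: rising MFP lifts output for unchanged K and L.
for year, mfp in enumerate([1.00, 1.02, 1.04], start=2025):
    print(year, round(value_added(mfp, capital=500.0, labor=100.0), 1))
```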
Financial Flows / Social Accounting
- System/Subsystem: Financial
- Organizing Structure: Market plus socio-political transfers in a Social Accounting Matrix (SAM)
- Stocks: Government, firm, household assets/debts
- Flows: Savings, consumption, FDI, foreign aid, IFI credits/grants, government expenditures (military, health, education, other) and transfers (pensions and social transfers)
- Key Aggregate Relationships (illustrative, not comprehensive): Exchange rate movements with the net asset/current account level; interest rate movements with savings and investment
- Key Agent-Class Behavior Relationships (illustrative, not comprehensive): Household savings/consumption; firm investment/profit returns and FDI decisions; government revenue, expenditure/transfer payments; IFI credits and grants

Households, firms, and the government interact in markets, but more broadly also via financial flows, including those related to the market (like foreign direct investment) but extending also to those that have a socio-political basis (like government-to-household transfers). A key structural representation is the Social Accounting Matrix (SAM). The structural system portrayed by SAMs is well represented by stocks, flows, and key relationships. Although the traditional SAM matrix itself is a flow matrix, IFs has introduced a parallel stock matrix that captures the accumulation of assets and liabilities across various agent-classes. The dynamic elements that determine the flows within the SAM involve key relationships, such as that which constrains government spending or forces increased revenue raising when government indebtedness rises. Many of these, as indicated in the table, represent agent-class behavior. The model can represent the behavior of households with respect to use of time for employment and leisure, the use of income for consumption and savings, and the specifics of consumption decisions across possible goods and services. And it represents the behavior of governments with respect to the search for income and the targeting of transfers and expenditures, in interaction with other agents including households, firms, and international financial institutions (IFIs). IFs thus represents equilibrating markets (domestically and globally) in goods and services and in financial flows. It does not yet include labor market equilibration.

Dominant Relations: Economics
In any long-term economic model the supply side has particular importance. In IFs, gross domestic product (GDP) is a function of multifactor productivity (MFP), capital stocks (KS), and labor inputs (LABS), all specified for each of six sectors. This approach is sometimes called a Solowian Cobb-Douglas specification, but IFs helps the user get inside the multifactor productivity term, rather than leaving it as a totally exogenous residual. The following key dynamics are directly related to the dominant relations:
- Multifactor productivity is a function partly of an exogenous specification of an annual growth rate for the systemic technology leader, of base rates of relative technological advance in other countries determined via an inverted U-shaped function that assumes convergence with the leader, and of an exogenously specified additive factor for control of specific regions or countries.
- Multifactor productivity is, however, largely an endogenous function of variables determined in other models of the IFs system representing the extent of human, social, physical, and knowledge capital; their influence on production involves coefficients that the user can control. - Capital stock is a function of investment and depreciation rates. Endogenously determined investment can be influenced exogenously by a multiplier and the lifetime of capital can be changed. - Labor supply is determined from population of appropriate age in the population model (see its dominant relations and dynamics) and endogenous labor force participation rates, influenced exogenously by the growth of female participation. The larger economic model provides also representation of and some control over sector-specific consumption patterns; trade including protectionism levels and terms of trade; taxation levels; economic freedom levels; and financial dynamics around foreign aid, borrowing, and external debt. Social Accounting Matrix Approach in IFs A SAM integrates a multi-sector input-output representation of an economy with the broader system of national accounts, also critically representing flows of funds among societal agents/institutions and the balance of payments with the outside world. Richard Stone is the acknowledged father of social accounting matrices, which emerged from his participation in setting up the first systems of national accounts or SNA (see Pesaran and Harcourt 1999 on Stone’s work and Stone 1986). Many others have pushed the concepts and use of SAMs forward, including Pyatt (Pyatt and Round 1985) and Thorbecke (2001). So, too, have many who have extended the use of SAMs into new frontiers. One such frontier is the additional representation of environmental inputs and outputs and the creation of what are coming to be known as social and environmental accounting matrices or SEAMs (see Pan 2000). Another very productive extension is into the connection between SAMs and technological systems of a society (see Khan 1998; Duchin 1999). It is fitting that the 1993 revision of the System of National Accounts by the United Nations began explicitly to move the SNA into the world of SAMs. The SAM of IFs is integrated with a dynamic general equilibrium-seeking model. The structural representation is a variant and to some degree an extension of the computable general equilibrium formulations that often surround SAMs. In wrapping SAMs into CGEs, Stone was a pioneer, leading the Cambridge Growth Project with Alan Brown. That project placed SAMs into a broader modeling framework so that the effects of changes in assumptions and coefficients could be analyzed, the predecessor to the development and use of computable general equilibrium (CGE) models by the World Bank and others. Some of the Stone work continued with the evolution of the Cambridge Growth Model of the British economy (Barker and Peterson, 1987). Kehoe (1996) reviewed the emergence of applied general equilibrium (GE) models and their transformation from tools used to solve for equilibrium under changing assumptions at a single point in time to tools used for more dynamic analysis of societies. The approach of IFs is both within these developing traditions and an extension of them on five fronts. The first extension is in universality of the SAM representation. Most SAMS are for a single country or a small number of countries or regions within them (e.g.. 
see Bussolo, Chemingui, and O’Connor 2002 for a multi-regional Indian SAM within a CGE). The IFs project has created a procedure for constructing relatively highly aggregated SAMs from available data for all of the countries it represents, relying upon estimated relationships to fill sometimes extensive holes in the available data. Jansen and Vos (1997: 400-416) refer to such aggregated systems as using a "Macroeconomic social Accounting Framework." Each SAM has an identical structure and they can therefore be easily compared or even aggregated (for regions of the world). The second extension is the connecting of the universal set of SAMs through representation of the global financial system. Most SAMs treat the rest of the world as a residual category, unconnected to anything else. Because IFs contains SAMs for all countries, it is important that the rest-of –the-world categories are mutually consistent. Thus exports and imports, foreign direct investment inflows and outflows, government borrowing and lending, and many other inter-country flows must be balanced and consistent. The third extension is a representation of stocks as well as flows. Both domestically and internationally, many flows are related to stocks. For instance, foreign direct investment inflows augment or reduce stocks of existing investment. Representing these stocks is very important from the point of view of understanding long-term dynamics of the system because those stocks, like stocks of government debt, portfolio investment, IMF credits, World Bank loans, reserve holdings, and domestic capital stock invested in various sectors, generate flows that affect the future. Specifically, the stocks of assets and liabilities will help drive the behavior of agent classes in shaping the flow matrix. The IFs stock framework has been developed with the asset-liability concept of standard accounting method. The stock framework is also an extension of the social accounting flow matrix, and the cumulative flows over time among the agents will determine the stocks of assets or liabilities for all agents. If the inflow demands repayment or return at some point in future, it is considered as liability for that agent and an asset for the agent from which the flow came. For example, in IFs, if a government receives loans (inflow) from other countries, the stock of those loans is a liability for the recipient government and an asset for the country or countries providing the loans. The fourth extension is temporal and builds on the third. The SAM structure described here has been embedded within a long-term global model. The economic module of IFs has many of the characteristics of a typical CGE, but the representation of stocks and related agent-class driven behavior in a consciously long-term structure introduces a quite different approach to dynamics. Instead of elasticities or multipliers on various terms in the SAM, IFs seeks to build agent-class behavior that often is algorithmic rather than automatic. To clarify this distinction with an example, instead of representing a fixed set of coefficients that determine how an infusion of additional resources to a government would be spent, IFs increasingly attempts partially to endogenize such coefficients, linking them to such longer-term dynamics as those around levels of government debt. 
Similarly, the World Bank as an actor or agent could base decisions about lending on a wide range of factors including subscriptions by donor states to the Bank, the development level of recipients, the governance capacity of recipients, existing outstanding loans, debt-to-export ratios, etc. Much of this kind of representation is in very basic form at this level of development, but the foundation is in place.

The fifth and final extension has already been discussed. In addition to the SAM, the IFs forecasting system also includes a number of other models relevant to the analysis of longer-term forecasts. For example, demographic, education, health, agriculture, and energy models all provide inputs to the economic model and SAM, as well as responding to behavior within it. The effort is to provide a dynamic base from which forecasts can be made well into the 21st century. It is important to emphasize that such forecasts are not predictions; instead they are scenarios to be used for thinking about possible alternative longer-term futures.

As a graduate student in what is now the Josef Korbel School of International Studies, Anwar Hossain worked with Barry Hughes in the development of the SAM structure and database for IFs (see Hughes and Hossain 2003); his help was much appreciated.

Economic Flow Charts

Overview
This section presents several block diagrams that are central to the two major components of the economics model: the goods and services market, with special emphasis on the production function, and the broader SAM. The economic model represents supply, demand, and trade in each of six economic sectors: agriculture, primary energy, raw materials, manufactures, services, and information/communications technology. The model draws upon data from the Global Trade and Analysis Project (GTAP) with 57 sectors as of GTAP 8; the pre-processor collapses those into the six IFs sectors and theoretically could collapse them into a different aggregated subset.

Inventories (or stocks) are the key equilibrating variable in three negative feedback loops. As they rise, prices fall, increasing final demand (one loop), decreasing production (a second loop), and thereby in total decreasing inventories in the pursuit over time of a target value and equilibrium. Similarly, as inventories rise, capacity utilization falls, decreasing production and restraining inventories. Physical investment and capital stocks are the key driving variables in an important positive feedback loop. As capital rises, it increases value added and GDP, increasing final demand and further increasing investment. Similarly, government social investment can increase productivity, production and inventories in another positive feedback loop. The figure below also shows some production detail. A-matrices, which are specified dependent on the level of development (GDP per capita), allow the computation from value added of gross production and of the production that is available, after satisfaction of intersectoral flows, for meeting final demand. It is the balance of this production for final demand with actual final demand that determines whether inventories grow or decline. The calculation of gross production (ZS) in value terms within the economic model is overridden by calculations of physical production converted to value in the agricultural and energy models when the respective switches (AGON and ENON) are thrown, as in the default of the IFs Base Case scenario.

Value Added
A Cobb-Douglas production function determines value added.
The two principal factors are thus capital and labor. Labor is responsive not just to population size and structure, but also to the labor participation rate, including the changing role of women in the work force. Accumulated growth in the level of technology or multifactor productivity (MFP), in a "disembodied" representation (TEFF), modifies these factors. Immediate energy shortages/shocks can also affect value added.

Multifactor Productivity

The technological factor in the production function is often called multifactor productivity (MFP). The basic value of MFP is a sum of a global productivity growth rate driven by the economically advanced or leading country/region (mfpleadr ), a technological premium dependent on GDP per capita, and an exogenous or scenario factor (mfpadd ). In addition, however, other factors affect productivity growth over time. These include a wide range of variables, such as the years of education that adults have (EDYRSAG25) and the level of economic freedom (ECONFREE), which are among the variables that affect change in MFP associated with human and social capital, respectively.

Economic Aggregates and Indicators

Based on value added and population, it is possible to compute GDP, GDP at purchasing power parity, and a substantial number of country/region-specific and global indicators, including several that show the extent of the global North/South gap.

Household Accounts

The most important drivers of household income are the size of value added and the share of it accruing to households. That share, divided further between unskilled and skilled households, is initialized with data from the Global Trade Analysis Project and changes as a function of GDP per capita. Household income is augmented by flows from government and firms (dividends and interest). Most household income will be used for consumption, but shares will go back to the government via taxes and to savings. Once the total of household consumption is known, it is divided across the sectors of IFs using Engel elasticities that recognize the changing pattern of consumption as levels per capita rise.

Firm Accounts

Firms retain as income the portion of total value added that is not sent to households in return for labor provided. The income of firms functioning within a country (ownership in IFs is not designated as domestic or international) benefits from inflows of portfolio and foreign direct investment. Firms direct their income to governments in the form of tax payments, to households as dividends and interest, to the outside world as portfolio or foreign direct investment, and to savings (available for investment).

Government Accounts

Government revenues come from taxes levied on households and firms. Total expenditures are the sum of two sub-categories, direct consumption and transfer payments (the latter in turn being a sum of payments for pensions/retirement and welfare). The annual government balance is the difference between revenues and expenditures and increments or decrements government debt in absolute terms and as a portion of GDP. That stock variable in turn sends signals back to both the revenue and expenditure sides of the model so as to keep the debt at reasonable levels over time. The level of government consumption and its distribution across targets are important policy-relevant variables in the model.
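The debt-feedback logic just described can be illustrated with a deliberately simple sketch. This is a toy loop, not IFs code: the target ratio and adjustment speed are arbitrary illustrative assumptions, and the real model applies such signals through more elaborate revenue and expenditure formulations.

```python
def step_government_accounts(revenues, expenditures, debt, gdp,
                             target_debt_ratio=0.6, adjustment_speed=0.2):
    """Toy illustration of the government debt feedback described above.

    The annual balance increments or decrements the debt stock; the resulting
    debt-to-GDP ratio then nudges revenues up and expenditures down when debt
    sits above the (illustrative) target, chasing a reasonable level over time.
    """
    balance = revenues - expenditures            # surplus (+) or deficit (-)
    debt = debt - balance                        # a deficit adds to the debt stock
    gap = debt / gdp - target_debt_ratio         # positive when debt is "too high"
    revenues = revenues * (1 + adjustment_speed * gap)          # signal to revenues
    expenditures = expenditures * (1 - adjustment_speed * gap)  # signal to spending
    return revenues, expenditures, debt
```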
Government consumption is spread across several target spending categories: military, health, education, traditional infrastructure, other infrastructure, research and development, other/residual, and foreign aid. The distribution to most of those categories is endogenously determined by functions, but other models in the IFs system provide special signals for military, education, and traditional infrastructure spending. Demand for military spending involves action-reaction dynamics (when a switch is turned on) and threat levels. Demand for education and infrastructure spending involves full models. Demands will not equal supply and all demands are normalized to the supply, but special protection can be given to the demands for education and infrastructure spending. Educational spending by level of education (primary, secondary, and tertiary) is further broken out of total educational spending in the education model, but targets can be changed via a multiplier.

Savings and Investment

Savings is the sum of savings by households, firms, the government (its fiscal balance), and net foreign savings. Investment is most immediately a sum of gross capital formation and changes in inventories. As in other parts of the IFs economic model, there will not be an exact equilibrium between savings and investment in any given time step. The system will chase equilibrium over time with the help of two mechanisms. The smoothed pattern of savings over time will affect investment. So, too, will interest rates that respond to changes in inventories or stocks of goods and services.

Trade

The trade algorithm of IFs relies on a pooled rather than bilateral trade approach. That is, it does not track exactly who trades with whom, maintaining instead information on gross exports and imports by sector and in total. The algorithm sums import demand and export capacity across all traders (in a given sector), defines world trade as the average of those two values, and then normalizes demand and capacity to the total of world trade to determine sectoral exports and imports by geographic unit.

International Finance

The current account depends on international loan repayments and foreign aid flows as well as on the trade balance. The exchange rate floats with the debt level (in turn responsive to the current account balance) and is the key equilibrating variable in two negative feedback loops that work via import demand and export capacity (see the description of trade).

Income Distribution

Domestic income distribution is represented by the Gini coefficient. That is calculated with a Lorenz curve that looks at the share of population and income held by the only two subgroups for which we have that information, namely unskilled and skilled households. Given domestic Gini indices, it is also possible to compute global Gini indices, both treating countries as wholes (GINI) and computing across the world at the household level (GINIFULL).

Poverty

The calculation of poverty levels is fairly straightforward if one has the average level of consumption per capita (or income) and its distribution as indicated by the Gini index (and if one assumes that the distribution underlying the Gini index is log-normal). The internal calculation using those variables will, however, almost inevitably produce a rate of poverty at odds with the rate provided by national surveys. We therefore compute a ratio of the two rates in the first year to allow adjustment, in forecast years, of the values from the lognormal calculation.
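The lognormal calculation described above can be sketched compactly. The function below uses the standard closed-form links between the Gini index and the log-normal dispersion parameter and between the mean and the log-scale parameter; the function name and inputs are ours for illustration and do not correspond to IFs variable names.

```python
from math import log, sqrt
from statistics import NormalDist

def poverty_headcount(mean_consumption, gini, poverty_line):
    """Share of people below poverty_line, assuming log-normal consumption.

    Uses the standard log-normal results Gini = 2*Phi(sigma/sqrt(2)) - 1 and
    mean = exp(mu + sigma**2/2) to recover the distribution's parameters.
    """
    phi = NormalDist()
    sigma = sqrt(2) * phi.inv_cdf((gini + 1) / 2)
    mu = log(mean_consumption) - sigma ** 2 / 2
    return phi.cdf((log(poverty_line) - mu) / sigma)

# Illustrative only: mean consumption of $4.00 a day, Gini of 0.40, $1.90-a-day line
print(round(poverty_headcount(4.0, 0.40, 1.90), 3))
```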
As a rough check on the values produced by the lognormal calculation, we also compute a value of poverty estimated from a cross-sectional function linking GDP per capita (at PPP) and Gini to rates of poverty.

Overview

The growth portion of the goods and services module responds to endogenous labor supply growth (from the demographic model), endogenous capital stock growth (with a variety of influences on the investment level), and a mixture of endogenous and exogenous specification of advance in multifactor productivity (MFP). The endogenous portion of MFP represents a combination of convergence and country-specific elements that together create a conditional convergence formulation.

The equilibrium-seeking portion of the goods and services market module uses increases or decreases in inventories and prices (by sector) to balance demand and supply. Inventory stocks in each sector serve as buffers to reconcile demand and supply temporarily. Prices respond to stock levels. The central equilibrium problem that the module must address is maintaining domestic and global balance between supply and demand in each of the sectors of the model. IFs relies on three principal mechanisms to assure equilibrium in each sector: (1) price-driven changes in domestic demand; (2) price-driven changes in trade (IFs represents trade and global financial flows in pooled rather than bilateral form); and (3) stock-driven changes in investment by destination (changes in investment patterns could also be price-driven, but IFs uses stocks because of its recursive structure, so as to avoid a 2-year time delay in the response of investment).

The economic model makes no attempt through iteration or simultaneous solution to obtain exact equilibrium at any time point. In addition to being observationally obvious, the absence of exact equilibrium has, of course, been argued by Kornai (1971) and others: real-world economic systems are not in exact equilibrium at any time point, in spite of the convenience of such assumptions for much of economic analysis. Similarly, the SARUM global model (Systems Analysis Research Unit 1977) and GLOBUS (Hughes 1987) used buffer systems similar to that of IFs, with the model "chasing" equilibrium over time.

Two "physical" or "commodity" models in IFs, agriculture and energy, have structures very similar to each other and to the economic model. They have partial equilibrium structures that optionally, and in the normal base case, replace the more simplified sectoral calculations of the goods and services market module.

The goods and services market sits within a larger social accounting matrix that tracks financial flows among households, firms, and the government. The integrated economic model also allows computation of income distribution and poverty rates.

The Goods and Services Market

The Production Function: Basic Overview

Cobb-Douglas production functions involving sector-specific technology or multifactor productivity (TEFF), capital (KS), and labor (LABS) provide potential value added (VADDP) in each sector, taking into account the level of capacity utilization (CAPUT), initially set exogenously (caputtar ). In a multi-sector model the functions require sectoral exponents for capital (CDALFS) and labor that, assuming constant returns to scale, sum to one within sectors.
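In standard notation, the form just described can be written as the sketch below. The exact IFs equation (for instance, the precise treatment of capacity utilization and the scaling coefficient) may differ in detail, so treat this as a reconstruction rather than the documented formula. For country/region r and sector s:

VADDP_{r,s} = CDA_{r,s} \cdot TEFF_{r,s} \cdot CAPUT_{r,s} \cdot KS_{r,s}^{\,CDALFS_{r,s}} \cdot LABS_{r,s}^{\,1 - CDALFS_{r,s}}

where CDA is the scaling parameter that matches first-year data and the capital and labor exponents sum to one under constant returns to scale.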
Solow (1956) long ago recognized that the standard Cobb-Douglas production function with a constant scaling coefficient in front of the capital and labor terms was inadequate because the expansion of capital stock and labor supply leaves a large portion of economic growth unexplained. It then became standard practice to represent an exogenously specified growth of technology term in front of the capital and labor terms as "disembodied" technological progress (Allen 1968: Chapter 13). Romer (1994) began to show the value of unpacking such a term and specifying its elements in the model, thereby endogenously representing this otherwise very large residual, which we can understand to represent the growth of multifactor productivity (MFP). In IFs that total endogenous productivity growth factor (TEFF) is the accumulation over time (hence a stock, like labor and capital) of annual values of growth in multifactor productivity (MFPGRO).

There are many components contributing to growth of productivity, and there is a vast literature around them. See, for example, Barro and Sala-i-Martin (1999) for theoretical and empirical treatment of productivity drivers; also see Barro (1997) for empirical analysis or McMahon (1999) for a focus on education.

In the development of IFs there was a fundamental philosophic choice to make. One option was to keep the multifactor productivity function very simple, perhaps to restrict it to one or two key drivers, and to estimate the function as carefully as possible. Suggestions included focusing on the availability/price of energy and the growth in electronic networking and the knowledge society. The second option was to develop a function that included many more factors known or strongly suspected to influence productivity and to attempt a more stylized representation of the function, using empirical research to aid the effort as much as possible. The advantages of the second approach include creating a model that is much more responsive to a wide range of policy levers over the long term. The disadvantages include some inevitable complications with respect to overlap and redundancy of factor representation, as well as some considerable complexity of presentation. Because IFs is a thinking tool and an extensively integrated multi-model system, the second approach was adopted, and the production function has become an element of the economic model that will be subject to regular revision and enhancement.

IFs groups the many drivers of multifactor productivity into five categories, recognizing that even the categories overlap somewhat. The base category is one that represents core technological development and transfer elements of convergence theory, with less developed countries gradually catching up with more developed ones. The four other categories incorporate factors that can either retard or accelerate such convergence, transforming the overall formulation into one of conditional convergence.

The convergence base. The base rate of multifactor productivity growth (MFPRATE) is the sum of the growth rate for technological advance or knowledge creation of a technological leader in the global system (mfpleadr ) and a convergence premium (MFPPrem) that is specific to each country/region. The basic concept is that it can be easier for less developed countries to adopt existing technology than for leading countries to develop it (assuming some basic threshold of development has been crossed).
The base rate for the leader remains an unexplained residual in the otherwise endogenous representation of MFP, but that has the value of making it available to model users to represent, if desired, technological cycles over time (e.g., Kondratieff waves). The base also includes a correction term (MFPCor) that is initially set to the difference between the empirical growth of MFP (calculated in the first year as a residual between factor growth and output growth) and the sum of the technological leader and convergence premium terms. Over time, the correction term is phased out, but the four other terms (below) become key drivers of country-specific productivity. In fact, significant change in the other terms can either undercut the foundational convergence process or greatly augment it.

Human capital. This term has multiple components, including changes in educational spending as a portion of GDP, educational attainment of adults, and changes in health expenditure. For example, Barro and Sala-i-Martin (1999: 433) estimated that a 1.5% increase in government expenditures on education translates into approximately a 0.3% increase in annual economic growth.

Social capital. Similarly, the social capital representation aggregates several components including economic freedom and the absence of overt social conflict. Illustratively, the value of the parameter for economic freedom (elgref ) was estimated in a cross-sectional relationship of change in GDP level from 1985 to 1995 with the level of economic freedom. Similarly, Barro places great emphasis in his estimation work on the "rule of law."

Physical capital. In collaborative work with the IFs project, Robert Ayres correctly emphasized the close relationship between energy supply availability and economic growth. For instance, a rapid increase in world energy prices (WEP) essentially makes much capital stock less valuable. IFs uses the world energy price relative to world energy prices in the previous year to compute an energy price term. The physical capital term also represents the extent of various types of infrastructure in a society.

Knowledge capital. This fourth term includes changes in R&D spending; government spending (GDS) on R&D as a portion of total government spending (GOVCON) contributes to knowledge creation, notably in the more developed countries. Globerman (2000) reviewed empirical work on the private and social returns to R&D spending and found them to be in the 30-40% range; see also Griffith, Redding, and Van Reenen (2000). Many other factors undoubtedly contribute to superior knowledge development and diffusion. This term especially represents the extent of economic integration with the global economy via trade.

All of the elements computed in the human, social, physical, and knowledge capital terms are used in shaping economic productivity and growth rates of the model on a differential basis – that is, they are computed and evaluated relative to underlying "expected" patterns given overall economic development levels. Their actual levels can be above or below expected ones, and they can therefore either add to or slow down the productivity growth rate. See the detailed equations of the production function for elaboration.
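Before turning to that detail, the structure just described can be summarized in compact form. This is a reconstruction in our own notation of the relationships described in the text, not the documented IFs equations, and the subscripting is simplified:

MFPGRO = MFPRATE + MFPHC + MFPSC + MFPPC + MFPKN

MFPRATE \approx mfpleadr + MFPPrem + MFPCor

MFP_X \approx \sum_j k_j \,\bigl(\text{Actual}_j - \text{Expected}_j(\text{GDP per capita at PPP})\bigr)

where MFPHC, MFPSC, MFPPC, and MFPKN are the human, social, physical, and knowledge capital cluster terms, each built from parameter-weighted gaps between actual values and the values expected at a country's income level.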
The Production Function: Detail

The core equation for the production function computes value added (VADD) from technology (TEFF), capital (KS), labor (LABS), and capacity utilization (CAPUT), with a time-constant scaling parameter (CDA) assuring that gross production is consistent with data in the first year. GDP is, of course, the sum across sectors of value added. Although the production function can serve all sectors of IFs, the parameters agon and enon act as switches; when their values are one, production in the agricultural and primary energy sectors, respectively, is determined in the larger, partial equilibrium models, and those values then override this computation (see documentation on those models for detail). http://www.du.edu/ifs/help/media/images/econ-module/econeq1.png http://www.du.edu/ifs/help/media/images/econ-module/econeq2.png where http://www.du.edu/ifs/help/media/images/econ-module/econeq3.png http://www.du.edu/ifs/help/media/images/econ-module/econeq4.png

Other topics in our documentation explain the dynamics underlying change in capital stock (through investment and depreciation) and labor supply (through demographic change and participation patterns). The rest of this topic will focus on the computation of the elements that go into the MFP or technology term (TEFF), working our way progressively deeper into their determinants. At the most basic level the stock term is simply an accumulation of the annual increments in MFP. http://www.du.edu/ifs/help/media/images/econ-module/econeq5.png where 1.0 http://www.du.edu/ifs/help/media/images/econ-module/econeq6.png 1.0

The annual growth in MFP (MFPGRO) consists of a base rate linked to systemic technology advance and convergence (MFPRATE) plus four terms that affect MFP growth over time as a result of human, social, physical, and knowledge capital. http://www.du.edu/ifs/help/media/images/econ-module/econeq7.png Each of the following subsections elaborates one of the five terms.

Annual Base Technology Growth in MFP

The base rate related to technology or other factors unexplained by the four capital terms is anchored by an exogenous specification of the sector-specific rate of advance in the systemic leader’s technology (mfpleadr ); the leader is assumed to be the United States. (The rate in the leader's ICT sector gradually converges over time to that of the service sector.) On top of that is an endogenously computed premium for the convergence of each country/region (MFPPrem), a term that is a function of GDP per capita at PPP; the function posits an inverted-V shape with the greatest potential for technological convergence to the leader among middle-income countries. In this calculation there needs to be a correction factor (MFPCor), computed to assure that the actual rate of this growth is consistent with the actual amount of growth for a country/region (MFPGRO) that is unexplained by capital and labor growth in the first model run year. That correction factor converges to 0 over the number of years specified by mfpconv . This convergence assumption has significant implications for model behavior because it tends to slow down growth in countries (like China) that have had a burst of growth beyond that which the rates of the leader and the catch-up factor would lead us to anticipate, and to speed up growth in countries (like the transition states of Central Europe) that have suffered a reduction similarly unexpected by the basic formulation.
http://www.du.edu/ifs/help/media/images/econ-module/econeq8.png where http://www.du.edu/ifs/help/media/images/econ-module/econeq9.png http://www.du.edu/ifs/help/media/images/econ-module/econeq10.png and http://www.du.edu/ifs/help/media/images/econ-module/econeq11.png http://www.du.edu/ifs/help/media/images/econ-module/econeq12.png

It is possible for the regionally and sectorally specific MFP correction terms (MFPCOR) to still leave a small discrepancy between the global economic growth in the data and the initial growth rate in the model. To avoid this, we also compute a global correction factor (MFPGloCor) as the region/country and value added weighted sum of the individual country-sector terms divided by the global and regional sums of value added.

In addition to the basic technology terms and the correction factors, there are three parameters in the MFPRATE equation (with zero values as the default) that allow the model user much control over assumptions of technological advance. The first is a basic parameter (mfpbasgr ) that allows a global growth increment or decrement; the second is a parameter (mfpbasinc ) that allows either a constant rise or slowing of the growth rate globally, year by year, where zy is the count of the model run years across time; the third is a frequently-used parameter (mfpadd ) allowing flexible intervention for any country/region.

Driver Cluster 1: Human Capital

The general logic of each of the four driver clusters around human, social, physical, and knowledge capital is the same. Each cluster aggregates several variables that generally contribute to productivity. For each variable, such as average years of adult education in the human capital cluster, there is an expected value and an actual value. It is the difference between actual and expected values that gives rise to a positive or negative contribution to productivity and growth. Most expected values are identified in a relationship with GDP per capita at PPP. That is, there is a tendency for most developmentally supportive variables to advance in a rough relationship with each other and with GDP per capita (Kuznets 1959 and 1966; Chenery and Syrquin 1975; Syrquin and Chenery 1989; Sachs 2005). To the degree that they do, such advance can be understood to be consistent also with the overall technological advance of the country. If, however, a variable such as years of formal education attained by adults exceeds the typical or expected value for a country at a given level of GDP per capita, we can expect that variable to add something more to productivity. Similarly, falling behind the expected value could retard productivity advance. To illustrate and emphasize this point, even a country in which adult education levels advance could find that education is not keeping up with the advance in other developmental variables, including GDP per capita, and that its education levels therefore move from contributing to productivity enhancement to decrementing it.
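A minimal sketch of this actual-versus-expected logic follows. The expected-value function and the elasticity used here are illustrative assumptions invented for the example, not IFs parameter values, and the function names are ours.

```python
from math import log

def cluster_contribution(actual, gdp_per_capita_ppp, expected_fn, elasticity):
    """Generic driver contribution: the elasticity times the gap between the
    actual value and the value expected at this income level."""
    expected = expected_fn(gdp_per_capita_ppp)
    return elasticity * (actual - expected)

def expected_ed_years(gdppc):
    """Hypothetical fit of expected adult education years to income (not an IFs function)."""
    return 2.0 + 2.5 * log(gdppc)

# Illustrative only: a country with 9 years of adult education at $10k GDP per capita (PPP)
print(cluster_contribution(actual=9.0, gdp_per_capita_ppp=10.0,
                           expected_fn=expected_ed_years, elasticity=0.01))
```

A country whose adult education exceeds the level expected at its income thus receives a small positive increment to MFP growth, and one that falls behind receives a decrement.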
In the human capital cluster there are six variables that add to or subtract from the human capital (MFPHC) term: the educational spending contribution (EdExpContrib), the years of adult education contribution (EdYearsContrib), the boost from life expectancy years (LifExpEdYrsBoost) assumed to generate (via mfpedlifexp ) extra years of education, the stunting contribution (StuntContrib) related to undernutrition of children, the disability contribution (DisabContrib) related to morbidity from the health model, and the vocational education contribution (EdVocContrib) resulting from growth (or decline) in the vocational share of lower and upper secondary enrollment. The first five of these six drivers have a similar formulation, while the formulation for vocational education is slightly different. The first five, as computed in IFs (often as a result of extended formulations and even other models of the system, as with life expectancy, computed in the health model), are compared with an expected value. In the case of disability, the expected value is set to the world average level (WorldDisavg), but all other expected values (EdExpComp, EdYrsComp, LifExpComp, and StuntingComp) are computed as functions of GDP per capita at PPP.

As the provision of vocational education does not follow any common pattern or trend and is rather a matter of policy choices made (or to be made) by the particular country, it was not possible to calculate an expected value for this variable. We have instead computed the vocational education contribution from changes in the vocational share over time, with appropriate moving averages to capture the lag required for such a contribution to materialize and to smooth the contribution over time. (Note: In the base case of the model, vocational shares do not change and as such EdVocContrib is zero.)

In each case a parameter drawn from study of the literature and/or our own analysis converts the difference between actual and expected values into a positive or negative contribution to MFP. (Because of the recursive structure of IFs, some terms rely on variables from the previous time step, estimated from the current time step with a correction factor based on initial GDP growth.)
http://www.du.edu/ifs/help/media/images/econ-module/econeq13.png where http://www.du.edu/ifs/help/media/images/econ-module/econeq14.png http://www.du.edu/ifs/help/media/images/econ-module/econeq15.png http://www.du.edu/ifs/help/media/images/econ-module/econeq16.png http://www.du.edu/ifs/help/media/images/econ-module/econeq17.png http://www.du.edu/ifs/help/media/images/econ-module/econeq18.png http://www.du.edu/ifs/help/media/images/econ-module/econeq19.png http://www.du.edu/ifs/help/media/images/econ-module/econeq20.png where the vocational contribution for lower secondary is calculated as: http://www.du.edu/ifs/help/media/images/econ-module/econeq21.png http://www.du.edu/ifs/help/media/images/econ-module/econeq22.png A similar calculation is done for upper secondary vocational education and the two are averaged as shown below: http://www.du.edu/ifs/help/media/images/econ-module/econeq23.png

Driver Cluster 2: Social Capital

The logic of comparison of actual with expected values is the same as that described above for human capital in the case of the six factors that contribute to social capital: economic freedom as in the Fraser Institute measure (EconFreeContrib), government effectiveness as in the World Bank measure (GovtEffContrib), corruption as in the Transparency International measure (CorruptContrib), democracy as in the Polity project measure (DemocPolicyContrib), freedom as in the Freedom House measure (FreedomContrib), and conflict as in the IFs project's own measure tied in turn to the work of the Political Instability Task Force (ConflictContrib). In each case other than that for conflict, the expected values (EconFreeComp, GovEffectComp, CorruptComp, DemocPolityComp, and FreeComp) are computed from functions with GDP per capita at PPP. In the case of conflict, the expected value is set at the initial year's value (and the comparison is reversed because lower conflict values contribute positively to MFP). http://www.du.edu/ifs/help/media/images/econ-module/econeq24.png where http://www.du.edu/ifs/help/media/images/econ-module/econeq25.png http://www.du.edu/ifs/help/media/images/econ-module/econeq26.png http://www.du.edu/ifs/help/media/images/econ-module/econeq27.png http://www.du.edu/ifs/help/media/images/econ-module/econeq28.png http://www.du.edu/ifs/help/media/images/econ-module/econeq29.png http://www.du.edu/ifs/help/media/images/econ-module/econeq30.png

Driver Cluster 3: Physical Capital

The logic of the physical capital cluster is again parallel to that of the human and social capital clusters and involves the comparison of an actual (that is, IFs-computed) value with an expected value. The formulation for MFPPC can actually take several forms depending on the value of a switching parameter (inframfpsw ), but the standard form involves four contributions, from traditional infrastructure (InfraTradContrib), ICT infrastructure (InfraICTContrib), the other infrastructure spending level (InfOthSpenContrib), and the price of energy (EnPriceTerm). The last term is included because higher prices of energy can make some forms of capital plant no longer efficient or productive. In the case of this cluster, only the expected value of the traditional infrastructure index (InfraIndTradComp) and the expected value of other infrastructure spending (InfraOthSpendComp) are computed as most other cluster elements are, namely as a function of GDP per capita at PPP.
In the case of the ICT index contribution, the technology has been evolving so rapidly that there is not really a basis for an expected value with some stability over time. Instead the contribution from ICT is computed in terms of a moving average value of change over time, such that faster rates of change contribute more to MFP as the moving-average expected value lags further behind the actual. In the case of the energy price term, the "expected" value is set equal to the energy price in the first year of the model run. As with other clusters and the variables in them, a single parameter links the discrepancy between actual and expected values to MFP. http://www.du.edu/ifs/help/media/images/econ-module/econeq31.png where http://www.du.edu/ifs/help/media/images/econ-module/econeq32.png http://www.du.edu/ifs/help/media/images/econ-module/econeq33.png where http://www.du.edu/ifs/help/media/images/econ-module/econeq34.png http://www.du.edu/ifs/help/media/images/econ-module/econeq35.png http://www.du.edu/ifs/help/media/images/econ-module/econeq36.png http://www.du.edu/ifs/help/media/images/econ-module/econeq37.png

Driver Cluster 4: Knowledge Capital

Following the pattern of other MFP driver clusters, the one for knowledge accumulation includes terms that compare actual model values with typical or expected values and use parameters to translate the differences into increments or decrements of MFP. In this case the three terms represent R&D spending (RDExpContrib), economic integration via trade with the rest of the world (EIntContrib), and the share of science and engineering among all tertiary degrees earned (EdTerSciContrib). In the first and third cases the expected value (RDExpComp; EdTerGRSciEnComp) is a function of GDP per capita at PPP. In the second instance, there is no clear relationship between the extent of economic integration and GDP per capita, so the model compares a moving average of trade openness (exports plus imports as a percentage of GDP) with the initial value of openness (because trade is computed later in the computational sequence for each year, the values of trade variables lag one year behind those of the production function). Given the extreme global range of trade openness, the elasticity term itself in this relationship is variable, with values decreasing when initial openness is greater (that is, countries that start with less openness gain more from the same percentage-point increases in it). http://www.du.edu/ifs/help/media/images/econ-module/econeq38.png where http://www.du.edu/ifs/help/media/images/econ-module/econeq39.png http://www.du.edu/ifs/help/media/images/econ-module/econeq40.png where http://www.du.edu/ifs/help/media/images/econ-module/econeq41.png http://www.du.edu/ifs/help/media/images/econ-module/econeq42.png http://www.du.edu/ifs/help/media/images/econ-module/econeq43.png http://www.du.edu/ifs/help/media/images/econ-module/econeq44.png

Issues Concerning Parameterization and Interaction Effects

Although our approach to the calculation of MFP creatively connects developments in many other models in the IFs system to it, parameterization of the effects individually and in interaction is complicated and uncertain. Hughes (2005) documented the original creation of the structure and its parameterization based on existing literature. One of the concerns with this approach is the possibility of double counting of effects from the large number of variables fed into the MFP calculations.
In general the project has dealt in part with this by selecting conservative values for the parameters when studies indicate possible ranges of contribution of the variables to productivity and/or growth. Another concern is that a very large or extreme advance by one or a small subset of variables could have inappropriately large impacts on productivity, given the fundamental conceptual foundation of the approach in the notion that development involves widespread and reinforcing structural changes across many variables. In order to limit this possibility, we have created an algorithmic function (MFPContribAdj) to adjust the multiple MFP contributions and dampen especially high positive or negative contributions of the four cluster terms (MFPHC, MFPSC, MFPPC, and MFPKN).

The Relationship of Physical Models to the Economic Model

IFs normally does not use the economic model's equations representing MFP and production for the first two economic sectors, because the agriculture and energy models provide gross production for them (unless those sectors are disconnected from economics using the agon and/or enon parameters). Instead, the two physical models provide gross production, translated to value terms, back to the economic model. See Section 3.2.3 for discussion of gross production.

In addition to this impact of the physical models on the economic model, there is one more of importance. Physical shortages of energy may constrain actual value added in each sector (VADD) relative to potential production. Economists typically do not accept such shortages as a real-world phenomenon because (at least in theory) prices rise to clear markets; yet in periods like the 1970s, when governments intervened in those markets, such shortages did appear, and they can appear in some IFs scenarios. In those situations, IFs assumes that energy shortages (ENSHO), as a portion of domestic energy demand (ENDEM) and export commitments (ENX), lower actual production in all sectors through a physical shortage multiplier factor (SHOMF). A parameter/switch (squeeze ) controls this linkage and can turn it off. http://www.du.edu/ifs/help/media/images/econ-module/econeq45.png where http://www.du.edu/ifs/help/media/images/econ-module/econeq46.png

In addition, the translation of potential into actual production depends on the imports of manufactured goods (MKAV), which serve as a proxy both for the availability of intermediate goods and for technological imports. A parameter (PRODME) also controls this relationship.

Although the IFs model represents prices in real terms (no monetary sector and no inflation), there are relative sectoral price changes (PRI). Some of those can be quite dramatic over time, especially in the agricultural and energy sectors, where the equilibration of physical representations of supply and demand can swing those prices. Such relative price swings can, in the real world, give added drag or boost to the value added in certain sectors and for economies as a whole. To represent this we compute a relative-price-adjusted version of value added (VADDRPA), which is the normal value added weighted by world sectoral prices (WP); the prices are lagged a year because of the recursive model structure.
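For the shortage linkage described above, one plausible reduced form, offered only as a sketch consistent with the prose and not as the documented IFs equation (the elasticity symbol k is our placeholder), would be:

SHOMF_r = 1 - k \cdot \frac{ENSHO_r}{ENDEM_r + ENX_r}, \qquad VADD_{r,s} = VADDP_{r,s} \cdot SHOMF_r

so that larger shortages relative to total domestic demand and export commitments translate into proportionally lower realized value added in every sector.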
https://pardee.du.edu/w/index.php?title=Economics&oldid=1803
The terminal growth rate is a constant rate at which a firm’s expected free cash flows are assumed to grow indefinitely. This growth rate is used beyond the forecast period in a discounted cash flow (DCF) model: from the end of the forecasting period until perpetuity, we will assume that the firm’s free cash flow will continue to grow at the terminal growth rate, rather than projecting the free cash flow for every period in the future.

When making projections for a firm’s free cash flow, it is common practice to assume there will be different growth rates depending on which stage of the business life cycle the firm currently operates in. We assume a high growth rate (usually over 10%) for a business in its early stage of expansion. The business has established its position in the industry and is seeking to increase its market share. As such, it will experience rapid growth in revenue, income and, thus, free cash flow. The rapid-growth stage is often followed by a relatively decelerated growth stage, as the company will likely struggle to maintain its high growth rate due to rising competition within the industry. The business will continue to grow, but no longer at the substantial growth rate it had previously experienced. However, as the company evolves closer to maturity, it is expected to hold a steady market share and revenue. We often assume a relatively lower growth rate for this stage, usually 5% to 8%.

We assume the company will grow at the terminal growth rate when it reaches a mature stage. At this stage, the company’s growth is minimal as more of the company’s resources are diverted to defending its existing market share from emerging competitors within the industry. A positive terminal growth rate implies that the company will grow into perpetuity, whereas a negative terminal growth rate implies the discontinuance of the company’s operations. Terminal growth rates typically range between the historical inflation rate (2%-3%) and the average GDP growth rate (4%-5%). A terminal growth rate higher than the average GDP growth rate indicates that the company expects its growth to outperform that of the economy forever.

The terminal growth rate is widely used in calculating the terminal value of a firm. The “terminal value” of a firm is the net present value of its cash flows at points of time beyond the forecast period. There are two approaches to calculating terminal value: (1) perpetual growth, and (2) an exit multiple. The calculation of a firm’s terminal value is an essential step in a multi-staged discounted cash flow analysis and allows for the valuation of said firm. In a discounted cash flow (DCF) model, the terminal value usually makes up the largest component of value for a company (more than the forecast period). We need to keep in mind that the terminal value found through this model is the value of future cash flows at the end of the forecasting period. In order to calculate the present value of the firm, we must not forget to discount this value to the present period. This step is critical and yet often neglected.

Although the multi-stage growth rate model is a powerful tool for discounted cash flow analysis, it is not without drawbacks. To start, it is often challenging to define the boundaries between the maturity stages of the company. A significant amount of judgment is required to determine if, and when, the company has progressed into the next stage. In practice, it is difficult to convert the qualitative characteristics into specific time periods. Moreover, this model assumes that high growth rates transform immediately into low growth rates upon the firm entering the next maturity level. Realistically, however, the changes tend to happen gradually over time. These concepts are outlined in more detail in our free introduction to corporate finance course.
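A minimal sketch of the perpetual-growth terminal value and the discounting step emphasized above; the cash flow numbers, discount rate, and growth rate are illustrative only.

```python
def dcf_value(fcf_forecast, discount_rate, terminal_growth):
    """Two-stage DCF: discount the explicit forecast cash flows, then add the
    discounted terminal value from the perpetual-growth formula
    TV = FCF_n * (1 + g) / (r - g)."""
    assert terminal_growth < discount_rate, "g must stay below the discount rate"
    pv_forecast = sum(cf / (1 + discount_rate) ** t
                      for t, cf in enumerate(fcf_forecast, start=1))
    n = len(fcf_forecast)
    terminal_value = fcf_forecast[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv_terminal = terminal_value / (1 + discount_rate) ** n   # the often-neglected discounting step
    return pv_forecast + pv_terminal

# Illustrative: five years of forecast free cash flow, 10% discount rate, 3% terminal growth
print(round(dcf_value([100, 112, 123, 132, 140], 0.10, 0.03), 1))
```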
https://corporatefinanceinstitute.com/resources/knowledge/valuation/what-is-terminal-growth-rate/
The Black-Scholes formula (also called Black-Scholes-Merton) was the first widely used model for option pricing. It is used to calculate the theoretical value of European-style options using the current stock price, expected dividends, the option's strike price, expected interest rates, time to expiration and expected volatility. The Black-Scholes pricing model is partially responsible for the options market and options trading becoming so popular; before it was developed, there was no standard method for pricing options.

The derivation of the Black-Scholes equation rests on several assumptions: (1) a constant risk-free interest rate r; (2) constant drift and volatility of the stock price; (3) the stock pays no dividend; (4) no arbitrage; and (5) no transaction fees or costs. Plain (vanilla) options have slightly more complex payoffs than digital options, but the principles for calculating the option value are the same.

The sensitivities of the option price, the Greeks (delta, vega, etc.), can be obtained by differentiation of the Black-Scholes formula; in practice, pricing software does much of the heavy lifting on the math. Since the option value, whether put or call, is increasing in the volatility parameter, the formula can be inverted to produce a "volatility surface" that is then used to calibrate other models.
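As a hedged illustration of the formula just described, here is the standard Black-Scholes price of a European call on a non-dividend-paying stock (the put follows from put-call parity); the inputs in the example are arbitrary.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_call(spot, strike, years_to_expiry, rate, volatility):
    """Black-Scholes value of a European call on a non-dividend-paying stock."""
    n = NormalDist().cdf
    d1 = (log(spot / strike) + (rate + 0.5 * volatility ** 2) * years_to_expiry) \
         / (volatility * sqrt(years_to_expiry))
    d2 = d1 - volatility * sqrt(years_to_expiry)
    return spot * n(d1) - strike * exp(-rate * years_to_expiry) * n(d2)

# Illustrative inputs only: spot 100, strike 105, six months, 2% rate, 25% volatility
print(round(black_scholes_call(100, 105, 0.5, 0.02, 0.25), 2))
```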
http://hubbabubbanascar.tk/wado/options-binaires-black-scholes-221.php
Every business must assess where to invest its funds and regularly reevaluate the quality and risk of its existing investments. Investment theory specifies that firms should invest in assets only if they expect them to earn more than their risk-adjusted hurdle rates. Knowing a business’ cost of capital allows a comparison of different ways of financing its operations. For example, increasing the proportion of debt may allow a company to lower its cost of capital and accept more investments. Knowing the cost of capital also permits a company to determine its value of operations and evaluate the effects of alternative strategies. In value-based management, a business’ current value of operations is calculated as the present value of the expected future free cash flows discounted at the cost of capital. This analysis is useful as a guide in decision making as well as for projecting future financing needs. The cost of capital is also used in the computation of economic value added (EVA). EVA is useful to the managers of the company as well as to external financial analysts. It is a measure of the economic value created by a company in a single year. EVA is computed by subtracting a capital charge (operating invested capital multiplied by weighted average cost of capital) from after-tax operating income. Financial theorists agree that using a correct risk-adjusted discount rate is needed to analyze a company’s potential investments and evaluate overall or divisional performance. Risk-adjusted discount rates should incorporate business and operating risk as well as financial risk. Business risk is measured by the nature of the products and services the business provides (discretionary vs. nondiscretionary), the length of the product’s life cycle (shorter life cycles create more risk), and the size of the company (economies of scale can reduce risk). Operating risk is determined by the cost structure of the firm (higher fixed assets relative to sales increases operating risk). Financial risk is determined by a company’s level of debt. For example, industries that exhibit high operating leverage and short life cycles, and have discretionary products such as technology, have very high beta measurements. Borrowing money will only exaggerate the impact of the risk. Computing the cost of capital for a growth company, however, can be problematic. Changing product mixes, changing cost structures, rapidly changing capital structures, and increasing size are inherent qualities of growth firms. Furthermore, because growing companies typically do not pay dividends, using the constant dividend growth model to compute the cost of equity yields a cost of equity equal to the company’s growth rate. In light of this, there is a need to deal more explicitly with risk when establishing hurdle rates for growth companies. The abundance of information available on the Internet makes computing a risk-adjusted hurdle rate simple. Yet the literature to date has provided little to explain the computation of a risk-adjusted cost of capital using readily available information. The use of a bottom-up beta in computing the cost of equity component of the cost of capital is an exceptional method of capturing all types of risk. An example using Community Health Systems, Inc., a hospital business, illustrates the procedures. The application presented would be especially useful to investors who hold growth stocks in their portfolios, equity research analysts, venture capitalists, and managers of growth firms. 
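The EVA computation described above is a one-line calculation; the function name and numbers below are purely illustrative.

```python
def economic_value_added(after_tax_operating_income, invested_capital, wacc):
    """EVA: after-tax operating income minus a capital charge
    (operating invested capital multiplied by the weighted average cost of capital)."""
    return after_tax_operating_income - invested_capital * wacc

# Illustrative: $150M of after-tax operating income, $1,000M invested at a 10% WACC
print(economic_value_added(150.0, 1000.0, 0.10))   # 50.0, i.e., $50M of value created
```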
A bottom-up beta is estimated from the betas of firms in a specified business, thereby addressing problems associated with computing the cost of capital. First, by eliminating the need for historical stock prices to estimate the firm’s beta, the standard error, created by regression betas, is reduced. Second, the problem of a changing product mix is eliminated because the business computes a different cost of capital for each product line. Third, the levered beta is computed from the company’s current financial leverage, rather than from the average leverage over the period of the regression. Overall, bottom-up betas are designed to be a better measure of the market risk associated with the industry or sector of the business. Because betas measure the risk of a firm relative to a market index, the more sensitive a business is to market conditions, the higher its beta. Bottom-up betas also capture the operating and financial risk of a company. Intuitively, the more financial risk or operating risk a firm has, the higher its beta. Exhibit 1 illustrates the computation of a bottom-up beta for Community Health Systems, Inc. (CHS). CHS has an average three-year historical sales growth rate of 25.6%. The August 12, 2002, Fortune reported it to be one of the top 40 companies traded, based on both value and growth indicators. It went public on June 9, 2000, and does not have a reported beta. The first step in estimating a bottom-up beta is to identify the business and a set of comparable established companies. Compustat, Value Line, and Hoovers.com, all report companies by industry and sector. Panel A of Exhibit 1 shows a set of eight comparable companies identified by Hoovers. The unlevered beta (Bu) in Equation 1 is the beta of a firm with no debt and is determined by the types of businesses in which it operates and its operating leverage (risk). The degree of operating leverage is a function of a company’s cost structure, and is usually defined in terms of the relationship between fixed costs and total costs. A company that has high operating leverage (high fixed costs relative to total costs) will also have higher variability in earnings before interest and taxes than a company producing a similar product with low operating leverage. Other things being equal, the higher variance in operating income will lead to a higher beta for companies with high operating leverage. The debt to equity ratio (D/E) in Equation 1 represents the amount of financial leverage or the debt level of the company. Other things being equal, an increase in financial leverage will increase the beta. The obligated payments on debt increase the variance in net income, with higher leverage increasing income during good times and decreasing income during economic downturns. The tax advantage of debt financing is represented in the formula by (1 – tax rate). The higher the tax rate, the more favorable the debt financing. The third step is to compute a weighted-average unlevered beta of the comparable companies, and from this, using Equation 1, compute the levered beta (BL) for the company being evaluated. The computation of the unlevered beta of the comparable company is weighted according to company size measured as the market value (MV) of equity plus debt. As shown in Panel B of Exhibit 1, the weighted average unlevered beta for the comparable companies is 0.72. The levered beta (BL) for CHS is then computed using the weighted-average unlevered beta of the comparable companies. 
The debt-to-equity ratio and tax rate of the company under consideration are used to lever up the unlevered beta in Equation 1. This procedure adjusts the beta for the financial risk and tax benefits associated with the individual firm, project, or division in question. The levered beta (BL) of a firm is a function of its operating leverage, the type of businesses in which it operates, and its financial leverage. The levered beta computed for CHS is 0.917. The debt-to-MV ratio of 0.42, obtained from CHS’s financial statements, is slightly higher than the debt-to-MV ratios for comparable companies and is factored into the computation of CHS’s levered beta of 0.917. If the company’s project or division being analyzed has a higher (lower) operating leverage than the comparable firms, the unlevered beta should be adjusted upward (downward). CHS’s operating leverage of 1.1 and three-year historical growth rate of 25.6% appear to be fairly consistent with the operating leverage and growth rates of the comparable companies. Thus the computed levered beta for CHS is a good estimate of the company’s market risk with regard to operating leverage and growth.

The unlevered betas appear to be associated with a combination of size and operating leverage. Because the largest companies have the highest operating leverage, it appears that these companies attempt to balance business risk with operating risk by increasing their operating leverage. The unlevered betas, however, do not appear to correlate with growth rates. The annual growth rates for the most recent three years of the comparable companies range from 2.5% to 42.7%. The five-year annual projected growth rate for 2002 through 2007 of the hospital industry is reported by Yahoo Finance to be 16.3% (finance.yahoo.com; as of September 4, 2002). Growth companies generally tend to have significant fixed costs associated with setting up infrastructure and developing new products. Once these costs have been incurred, however, the variable costs are relatively low. For growth companies in high-risk industries, such as technology, higher growth leads to higher fixed costs and higher betas. The low betas of companies in the hospital industry indicate a low market risk, even for growth companies. This can be explained by the industry’s nondiscretionary products and longer product life cycles.

Exhibit 2 shows the computation of the cost of equity for CHS using the capital asset pricing model (CAPM), Equation 3, and the bottom-up beta computed in Exhibit 1. The components that go into measuring the cost of equity using the CAPM include the riskless rate, the market risk premium, and the beta of the firm, product, or division. A riskless asset is one in which the investor knows the expected return with certainty. Consequently, there is no default risk and no uncertainty about reinvestment rates. To eliminate uncertainty about reinvestment rates, the maturity of the security should be matched with the length of the evaluation. In practice, using a long-term government rate—which can be obtained from Bondsonline (www.bondsonline.com)—as a riskless rate in all types of analyses will yield a close approximation of the true value. The market risk premium measures the extra return that would be demanded by investors for shifting their money from a riskless investment to an average-risk investment. It should be a function of how risk-averse the investors are and how risky they perceive stocks and other risky investments to be, in comparison to a riskless investment.
The most common approach to estimating the market risk premium is to estimate the historical premium earned by risky investments (stocks) over riskless investments (government bonds). The average historical market risk premium over the period 1926 to 1999 for small companies is 12.1%, as reported by Ibbotson’s. Exhibit 2 illustrates the computation of the cost of equity for CHS. Using a risk-free rate of 5.5%, a market risk premium of 12.1%, and the bottom-up beta of 0.917, the cost of equity for CHS is estimated to be 16.6%. The cost of debt measures the current cost of borrowing funds to finance projects. The cost of debt is measured by the current level of interest rates, the default risk of the company, and the tax advantage associated with debt. The default spread is the difference between the long-term Treasury bond rate and the company’s bond yield. Default spreads can be found at Bondsonline if the company has a bond rating. Bond ratings can be found at www.standardandpoors.com. For companies that are not rated (CHS is not rated by Standard & Poor’s), the rating may be obtained by computing the company’s interest coverage ratio and adjusting for industry standards or expected future interest coverage. The interest coverage ratio for CHS is 2, and the associated ratings for the comparable companies indicate a bond rating of B+ for CHS. The default spread for this rating from Bondsonline is 8.5%. Exhibit 3 illustrates the computation of the cost of debt for CHS, from Equation 4, as 14%. The estimated cost of capital should be based on the market values of a company’s debt and equity, since a company has to earn more than its market value cost of capital to generate value. From a practical standpoint, using the book value cost of capital will tend to understate the cost for most companies, especially highly levered companies. These companies have more equity in their capital structures, and equity is more likely to have a higher market value than book value. The market value of equity (E) for CHS is $2,354 million, calculated as the number of shares outstanding times the stock price as of December 31, 2001. The stock price can easily be obtained from Yahoo Finance, and the number of shares outstanding is reported on the financial statements, which can be obtained from either Hoovers or Compustat. Generally the book value of debt is an adequate proxy for the market value unless interest rates have changed drastically. Exhibit 4 illustrates the computation of the cost of capital for CHS, which is 14.40%.
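The numbers in the exhibits can be tied together in a short sketch. The levering formula is the standard relationship the article describes as Equation 1; the 35% tax rate is our assumption (the article does not state it), reading the reported 0.42 as a debt-to-equity ratio is also our assumption, and the function names are ours. Under those assumptions the sketch reproduces the reported levered beta of roughly 0.917, the 16.6% cost of equity, and the 14% pre-tax cost of debt.

```python
def levered_beta(unlevered_beta, debt_to_equity, tax_rate):
    """Levering relationship described in the article: BL = Bu * (1 + (1 - t) * D/E)."""
    return unlevered_beta * (1 + (1 - tax_rate) * debt_to_equity)

def capm_cost_of_equity(risk_free, beta, market_risk_premium):
    """CAPM: ke = rf + beta * market risk premium."""
    return risk_free + beta * market_risk_premium

def pre_tax_cost_of_debt(risk_free, default_spread):
    """Long-term riskless rate plus the default spread implied by the bond rating."""
    return risk_free + default_spread

bl = levered_beta(0.72, 0.42, 0.35)           # ~0.92, close to the reported 0.917
ke = capm_cost_of_equity(0.055, bl, 0.121)    # ~16.6% with the article's inputs
kd = pre_tax_cost_of_debt(0.055, 0.085)       # 14%, as in Exhibit 3
print(round(bl, 3), round(ke, 3), round(kd, 3))
```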
http://archives.cpajournal.com/2003/0503/dept/d056603.htm
Financial variables, both prices as well as quantities, provide useful information in assessing economic conditions and consequently serve as important inputs for policy making. In this article, we attempt to study the movement in equity prices using a fundamental valuation tool termed the Dividend Discount Model (DDM). The DDM framework values equities using the present discounted value approach and helps to attribute changes in equity prices to factors including growth expectations, the risk-free rate and the equity risk premium (ERP). Decomposition of changes in equity prices indicates that the rise in equity prices from 2016 to early 2020 was mainly supported by the decrease in interest rates and ERP, with the increase in forward earnings expectations contributing to a lesser extent. Thereafter, a spike in ERP on COVID-19 concerns initially contributed significantly to equity prices declining sharply to compensate for increased risks. However, equity prices subsequently registered an impressive recovery, aided by the easing of ERP.

Introduction

Movements in equity prices, alongside those of a range of other assets, reveal expectations of economic agents and provide important information to central banks for shaping appropriate policy actions in pursuit of their mandate of price stability, financial stability and economic growth. Equity prices contain information about both current and future economic conditions. Transmission of monetary policy actions to the broader economy takes place through various channels, including the asset price channel, which leads to changes in the market value of equities and other securities. Assessment of equity prices may also help to identify financial imbalances or risks in an economy, as it contains information about the degree of uncertainty around the economic outlook. The analysis of equity prices and understanding of the drivers of change in equity prices assume significance as they interact with monetary policy and could have different implications for policy actions. Against this backdrop, this article attempts to decompose equity price movements in India from 2005 to 2020 into the contributions of changes in growth expectations, interest rates and the equity risk premium (ERP) using the DDM. Section II discusses the relevance of understanding movement in equity prices. Section III discusses the DDM framework and the specification of the model deployed for the study. Section IV outlines the estimates of the unknown variable, i.e., ERP. Section V discusses the results of the DDM model, followed by the conclusion.

II. Why Equity Prices are Relevant?

Financial variables provide useful information in assessing economic conditions and consequently serve as an important input in monetary policy making. Further, monetary policy transmission also takes place through the financial channel, which eventually influences the real sector. The immediate impact of monetary policy actions is mirrored in the prices and returns of financial assets, which is transmitted to the broader economy through the resultant actions of economic agents including households and firms. Unlike lagging macroeconomic indicators such as GDP and inflation, these variables are available on a continuous basis and are also not subject to revisions, and hence can be used for real-time monitoring of macroeconomic conditions. In this context, understanding the movement of equity prices, which directly reveals the expectations of economic agents about future economic activity, assumes critical importance.
This is premised on the traditional equity valuation model, which suggests that stock prices equal the present value of expected future earnings. Ceteris paribus, rising equity prices reflect optimism over the future profitability of companies. Beyond the economic outlook, equity prices may also convey information about risk perception. High risk perception is associated with low equity prices and vice versa, because investors demand a higher premium to compensate for higher risk relative to alternative, safer investments in bonds, thereby driving equity prices lower. Thus, the movement of equity prices represents the interplay of different forces, and hence there is a need to identify these separate factors for effective policy making.

Monitoring equity prices is also relevant from the standpoint of their linkages with the macroeconomy within the demand-side framework, i.e., consumption and investment. Easy monetary policy boosts asset prices, including equity prices, which increases households' wealth and prompts them to consume more. This is popularly known as the wealth effect. Moreover, higher equity prices also increase business demand by reducing the cost of equity finance, enabling firms to finance investment at a reduced cost. This relationship between the equity market and investment was propounded by James Tobin and is known as Tobin's 'q' theory. Conversely, tighter monetary policy lowers equity prices and dampens the aggregate demand of both consumers and business firms. However, the quantum and timing of the effect of monetary policy changes on equity prices will depend on the extent to which the policy changes were anticipated ex ante by market participants and also the extent to which future policy expectations are altered in response to the central bank's actions.

Central banks also monitor equity prices in pursuit of their objective of maintaining financial stability, which is a prerequisite for price and economic stability. Financial stability risks may arise when equity prices deviate from the fundamental levels dictated by the present value of the future income stream and the market is characterised by wide fluctuations in prices. Easy monetary policy sustained over a long period may lead to a build-up of financial excesses: rising equity prices can foster excessive credit and risk taking, which translates into excessive investment and fuels asset prices further. Such unsustainable booms not only lead to misallocation of resources but also create systemic risk, with serious ramifications for the real economy in the event of an inevitable market correction, which would warrant appropriate action from monetary policymakers.

III. Equity Valuation: Dividend Discount Model (DDM) Approach

Equities can be valued using the Gordon Growth Dividend Discount Model (DDM), a fundamental valuation tool that involves discounting the expected future cash flows at an appropriate discount rate. The basic intuition underlying this model is that the value of a stock is measured by the cash flows it generates for its shareholders, which primarily comprise dividend payments. The basic DDM is represented by the following equation:

P0 = D1/(1+ke) + D2/(1+ke)^2 + D3/(1+ke)^3 + … + Dn/(1+ke)^n

In this equation, P0 is the current equity price, D1, D2, D3, …, Dn are the expected dividend payouts by companies to shareholders in periods 1, 2, 3 up to n, and ke is the cost of equity or expected return on equity. Assuming the expected growth rate of dividends (g) to be constant and the cost of equity (ke) to be greater than the growth rate of dividends (i.e.
ke > g), the equation may be re-written as the Gordon growth formula:

P0 = D1 / (ke - g)

To incorporate greater flexibility in the set of assumptions, this article uses a two-stage DDM, which assumes a high-growth phase in the short term and a terminal growth rate in the long term. Further, the model considers potential dividends instead of actual dividends, as companies hold back cash and do not pay out the entire amount that they can afford to in the form of dividends, or sometimes distribute cash through stock buybacks owing to tax considerations and other reasons. Potential dividends are measured using free cash flow to equity (FCFE)1. Incorporating these modifications, the DDM is re-written as:

P0 = FCFE1/(1+ke) + FCFE2/(1+ke)^2 + … + FCFEn/(1+ke)^n + [FCFEn × (1+gn) / (ke - gn)] / (1+ke)^n

In this equation, n represents the number of years of high growth, P0 is the current equity price, FCFEt is the expected free cash flow to equity (potential dividends) in period t, ke is the cost of equity or expected return on equity and gn is the long-term stable growth rate. All these factors drive the valuation of equity indices. The required return on equity can be computed from the model by combining the current price, which is observable, with the free cash flow to equity and the long-term stable growth rate, which are estimable.

In our DDM framework, the value of the equity benchmark Sensex is taken as the representative price of Indian equities. Expected FCFE is assumed to be 60 per cent of the expected net profits of the constituent Sensex companies, which are derived using consensus forward earnings estimates of equity analysts for a three-year period (the high-growth phase) available on Bloomberg. The terminal growth rate is assumed to be equal to the 10-year G-sec yield, which is also taken as the risk-free rate. Further, the expected return on equity is equal to the risk-free rate plus a risk premium that investors demand for taking on the additional risk of investing in equities. Since the risk-free rate is observable, the implied ERP can be computed as the residual of the expected return on equity (ke) over the risk-free rate (Rf). The implied ERP calculated using this approach is a forward-looking estimate of ERP and is consistent with the generally accepted belief that the return on equities is driven by expectations. However, the reliability of the implied ERP largely depends upon the accuracy of estimated future earnings, which may be subject to miscalculation and/or the bias of analysts. It differs from the historical ERP, which computes the premium over the risk-free rate earned by equity investors in the past. ERP is influenced by multiple factors, such as the risk profile of investors and volatility in markets. The details of ERP are discussed in the following section.

Thus, the DDM helps to attribute changes in equity prices to factors including growth expectations, the risk-free rate and the ERP implied from the model. For instance, a rise in equity prices can be attributed to an improved growth outlook, which raises profitability expectations, or to a decline in ERP. However, a higher contribution from low ERP over a prolonged period is likely to signal stretched valuations, raising concerns over financial stability. Similarly, equity prices might fall not only because of weak growth prospects but also because of a rise in ERP.

IV. Equity Risk Premium (ERP)

ERP is conceptualised as the excess return that makes an investor indifferent between holding a risk-free investment, usually a government bond, and a risky equity investment.
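As an illustration of the implied-ERP calculation described above, the sketch below backs out the cost of equity from the two-stage DDM and takes ERP as ke minus the risk-free rate. This is a minimal sketch, not the authors' code: the index level and cash-flow inputs are hypothetical, and the terminal value follows the standard two-stage closed form, which may differ in detail from the article's specification.

```python
# Minimal sketch (hypothetical inputs): solve the two-stage DDM for the cost of
# equity ke and report the implied ERP = ke - risk-free rate.
from scipy.optimize import brentq

def implied_erp(price, fcfe_forecasts, terminal_growth, risk_free):
    """Solve P0 = sum_t FCFE_t/(1+ke)^t + FCFE_n*(1+g)/((ke-g)*(1+ke)^n) for ke."""
    n = len(fcfe_forecasts)

    def pricing_error(ke):
        pv_high_growth = sum(f / (1 + ke) ** (t + 1)
                             for t, f in enumerate(fcfe_forecasts))
        terminal = fcfe_forecasts[-1] * (1 + terminal_growth) / (ke - terminal_growth)
        return pv_high_growth + terminal / (1 + ke) ** n - price

    # The closed form requires ke > gn; search between gn and a generous upper bound.
    ke = brentq(pricing_error, terminal_growth + 1e-6, 1.0)
    return ke, ke - risk_free

# Hypothetical index level, three years of expected FCFE (60 per cent of
# consensus forward earnings) and the 10-year G-sec yield used as both the
# risk-free rate and the terminal growth rate.
ke, erp = implied_erp(price=40000.0, fcfe_forecasts=[1500.0, 1750.0, 2000.0],
                      terminal_growth=0.06, risk_free=0.06)
print(f"implied cost of equity: {ke:.2%}; implied ERP: {erp:.2%}")
```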
ERP is an indicator of uncertainty and depends upon various factors, such as investors' risk preferences, macroeconomic fundamentals, the savings rate, market liquidity, political stability, government policies and monetary policy. ERP serves as a key input in determining the cost of equity and thus warrants monitoring. While the central bank's actions directly influence part of this cost by affecting the cost of debt, ERP is indirectly altered through the ability of monetary policy to influence the risk-taking behaviour of investors. From a financial stability point of view, an ERP significantly lower than what is implied by economic factors may lead to a sharp and sudden drop in asset prices in an adverse event.

The first key determinant of ERP is the risk aversion of investors: the premium is higher when investors are more risk averse, and vice versa. Risk aversion amongst investors varies with age, i.e., older investors are more risk averse and, therefore, demand a higher premium compared to younger investors. Risk aversion also increases if investors value current consumption more than future consumption, and vice versa. Since risk aversion varies across individual investors, it is the collective risk aversion that determines the movement in the equity risk premium. Another important determinant is economic risk. A more predictable economic scenario offers a conducive environment for investment in equity, resulting in a lower risk premium and higher equity prices. Similarly, concerns over future equity returns in an environment of heightened macroeconomic uncertainty translate into a higher ERP and lower equity prices. In this regard, several studies have established the relationship between the risk premium and macroeconomic factors. In a study of the impact of macroeconomic variables on stock prices as well as the risk premium, the impact of GDP and inflation on ERP was found to be significant (Chen, Roll and Ross, 1986). Similarly, inflation and GDP growth rates were found to influence the risk premium in the US during the period 1802-2002 (Arnott and Bernstein, 2002). Also, the declining ERP and persistently high stock valuations in the US during the 1990s were ascribed to a fall in macroeconomic risk, or volatility in the aggregate economy (Lettau et al., 2008). One important risk arising from investment in equity is liquidity risk. An illiquid market means that the transaction cost of liquidating positions will be higher, making investors demand a higher ERP. A study of US stock returns between 1973 and 1997 concluded that liquidity accounts for a significant component of the overall ERP, and that its effect varies over time (Gibson and Mougeot, 2004). Another study showed that liquidity can partially explain the differences in equity returns and risk premiums across emerging markets (Bekaert, Harvey and Lundblad, 2007). Further, several studies examined macroeconomic disasters, which are low-probability, high-consequence events, and contributed to the strand of literature on the key reasons for the existence of a risk premium on equity (Rietz, 1988; Barro, 2006; Gabaix, 2012; Nakamura, Steinsson, Barro and Ursua, 2013).

Broadly, there are three methods to compute ERP: a survey-based approach, an ex-post calculation giving the historical ERP, and an ex-ante or implied ERP based on valuation models that consider current market prices and interest rates.
While the survey-based approach suffers from individual biases, the historical ERP yields a backward-looking estimate. In this respect, the implied ERP is considered a better indicator and is consistent with the generally accepted belief that the return on equities is driven by market expectations. The DDM framework for the Indian equity market yields an average ERP estimate of 4.7 per cent for the period under study (2005-2020). ERP peaked at 8.2 per cent during the global financial crisis in 2008, followed by a surge to 6.0 per cent in 2020 on account of coronavirus-induced stress in the market (Chart 1). Since this framework is based on a set of assumptions, the ERP estimates are uncertain and, therefore, it is prudent to focus on the change in ERP over time rather than on its precise level. In the Indian context, the ex-post ERP, calculated as the difference between the actual return on the Sensex and the 10-year G-sec yield for the period 2005-20, averaged 4.8 per cent, which is close to the implied ERP estimated using the DDM framework.

The DDM also enables identification of the factors behind the changing cost of equity. During the global financial crisis of 2008, the sharp spike in ERP translated into a higher cost of equity/required return on equity even as risk-free rates were declining (Chart 2). The subsequent policy actions in the form of monetary and fiscal stimulus aided the lowering of risk premia and the cost of equity. Thereafter, the decline in the cost of equity is attributable to the easing of policy rates by the Reserve Bank amidst a low inflation environment and a reduction in the risk-free rate. However, escalating coronavirus-induced stress during 2020 pushed ERP higher and consequently resulted in a higher cost of equity. Subsequent policy actions lowered interest rates and led to an easing of ERP, helping to bring down the cost of equity from elevated levels.

ERP and Economic Activity

In an economic sense, rising uncertainty, as reflected in a higher ERP, tends to have a negative impact on economic activity. This is mainly because both businesses and consumers will prefer to postpone their investment and consumption decisions, respectively, in an environment of increased uncertainty, thus weighing on economic growth. However, a decrease in the risk premium need not necessarily translate into increased economic growth (Stein, 2014). A low ERP sustained over a long period could potentially lead to a build-up of financial vulnerability, which can translate into macroeconomic instability (Annex 1). The ERP computed through the DDM method shows a high correlation with other measures of risk premium, including the India VIX and corporate bond spreads. Chart 3 depicts the co-movement of the India VIX, which measures market perceptions of risk, and ERP. A surge in volatility (VIX) is associated with the unwillingness of investors to hold risky assets, raising the premium (ERP) they demand for bearing risk, or with a flight to safety against sharp changes in asset prices. Another way of looking at risk perceptions is through measures of credit risk, determined by the spread of corporate bond yields over the risk-free rate. Changes in the corporate bond spread reflect the risk perceptions of investors: an increase in the spread means an increase in the yield that investors demand for holding a corporate bond over an equivalent-maturity government bond without default risk, suggesting a prevalence of risk aversion, and vice versa. Chart 4 outlines the co-movement of the corporate bond spread and ERP for the period under study.
However, the relationship weakened somewhat during 2018, which can plausibly be attributed to the asymmetric impact on the corporate bond market of the default by Infrastructure Leasing & Financial Services (IL&FS) on its debt obligations.

V. Decomposing Indian Equity Prices using DDM

A second important application of the DDM is that it helps in understanding the underlying factors driving movements in equity prices by enabling a decomposition of the change in equity prices into the contributions of changes in the risk-free rate, ERP and growth expectations. Such a decomposition is arrived at by varying, one at a time, each of the three terms on the right-hand side of the DDM equation and determining the associated impact on the change in equity prices. The impact on equity prices due to simultaneous changes in two or more variables is captured by an interaction term.

The Indian equity market witnessed a strong rally during 2005-08 before the contagion from the global financial crisis, beginning in 2007, triggered a sharp decline in the BSE Sensex (Chart 5). The domestic equity market recovered from the stress at a faster pace during 2009-10, declining thereafter between 2010 and 2013 amidst renewed global uncertainty owing to the euro area crisis and the taper tantrum. During this period, the negative sentiment was compounded by domestic factors, including the downgrade of India's long-term rating outlook from stable to negative, worries over retrospective taxation and the general anti-avoidance rules (GAAR), and a sharp slide of the Indian rupee. Positive momentum in the equity market was restored from 2013 by the Reserve Bank's measures to rebuild appropriate liquidity buffers, coupled with the accommodative monetary policy stances of the European Central Bank (ECB) and the Bank of Japan (BoJ) as well as the relaxed approach of the US Federal Reserve to monetary policy normalisation. Subsequently, concerns over the slowdown in China and, on the domestic front, over asset quality in the banking system contributed to the fall in the equity market during 2015-16. From February 2016, the equity market generally exhibited an uptrend, barring transient blips, till mid-January 2020. The Indian equity markets retreated thereafter in sync with global markets on fears of a COVID-19 induced recession. Subsequently, markets made an impressive recovery, enthused by extraordinary monetary policy support and fiscal stimulus measures, coupled with an improvement in macro indicators amidst the gradual unlocking of the economy. All these market-driving factors manifest in the components of the DDM: earnings expectations, the risk premium and interest rates.

Application of the model

Chart 6 illustrates the contribution of different factors in driving Indian equity prices since 2005:

August 2005 - January 2008: The sharp rally in the Indian equity market during this period pushed the BSE Sensex nearly threefold to the 20,000 level. The DDM decomposition attributes this surge mainly to the increase in earnings expectations, with the equity risk premium playing a relatively minimal role. The cumulative positive impact of these factors was large enough to offset the negative contribution from interest rates.

January 2008 - March 2009: This phase was marked by the global financial crisis, which roiled financial markets across the globe. During this period, the BSE Sensex declined around 60 per cent in a short span of 15 months. According to the DDM decomposition, earnings expectations and the equity risk premium contributed to the fall in the equity market.
Earnings expectations corrected sharply as the economic outlook deteriorated, and the equity risk premium spiked owing to an increased perception of risk and uncertainty. However, the impact of the accommodative monetary policy stance is reflected in the positive contribution of interest rates to the movement in equity prices.

March 2009 - November 2010: Equity prices rebounded sharply during this period, and the DDM results illustrate the positive impact of ERP and earnings expectations. ERP fell sharply from its crisis-period peak of 8.2 per cent to an average of 4.8 per cent during this phase. This can be attributed to the revival of risk sentiment due to both monetary and fiscal support in the aftermath of the crisis. Further, the Indian economy weathered the financial crisis relatively well, which meant the outlook on earnings growth also recovered quickly.

November 2010 - September 2013: The DDM decomposition suggests that the decline in the Indian equity market during this phase is explained by ERP and interest rates, even though earnings expectations contributed positively. The market was unsettled by the euro area crisis beginning in 2010, followed by the taper tantrum episode during May-August 2013, which drove ERP higher. A higher ERP amidst improved earnings expectations suggests that even as the economic outlook remained positive, the uncertainty surrounding that outlook had risen.

September 2013 - January 2015: With concerted policy actions after the taper tantrum episode, the equity market resumed its upward momentum, helped by both ERP and interest rates, as the DDM decomposition highlights. However, the role of earnings expectations in driving equity prices declined compared to the previous period.

January 2015 - February 2016: A downturn in equity prices was witnessed during this phase, led by a worsening outlook on earnings and a rise in ERP, with interest rates having a negligible impact, as illustrated by the DDM decomposition.

February 2016 - January 2020: From 2016, the BSE Sensex gained over 80 per cent, touching a lifetime high of 41,953 on January 14, 2020. This uptrend was driven largely by a combination of ERP and interest rates, with earnings expectations contributing to a lesser extent. A low interest rate environment, attributable to low inflation, prevailed alongside a compression of the risk premium. ERP touched a low of 3.7 per cent during this period and remained lower than its long-term average.

January 2020 - March 2020: This phase saw a sharp correction in the Indian equity market in sync with global markets due to the COVID-19 pandemic. The BSE Sensex plummeted over 30 per cent during this period, and the DDM decomposition suggests a significant impact from the increase in ERP, followed by faltering earnings expectations. At the same time, monetary accommodation is reflected in the positive contribution from interest rates.

March 2020 - September 2020: The BSE Sensex recovered nearly 50 per cent from its low of 25,981 on March 23, 2020, aided by sizeable policy support on both the monetary and fiscal fronts along with a rally in global equity markets. The DDM decomposition suggests a significant contribution from the easing of ERP. However, unlike previous episodes of upswing, there was a negligible contribution from earnings expectations.

Conclusion

The study uses the Dividend Discount Model framework to determine the implied Equity Risk Premium for Indian equities. The average ERP during 2005-2020 is estimated at 4.7 per cent.
The empirical evidence suggests that ERP shot up sharply during the global financial crisis of 2008 and during the COVID-19 pandemic in March 2020. The relationship between changes in ERP and GDP growth is found to be inverse and asymmetric, with increases in ERP being significant in explaining falls in GDP growth. ERP is also found to be highly correlated with other measures of uncertainty such as the India VIX and the corporate bond spread. Decomposition of changes in equity prices indicates that the rise in equity prices during 2016 to early 2020 was mainly supported by a decrease in interest rates and ERP, with the increase in forward earnings expectations contributing to a lesser extent. Thereafter, the spike in ERP on COVID-19 concerns contributed significantly to the plunge in equity prices. Subsequently, post-March 2020, markets registered an impressive recovery aided by the easing of ERP, even as the contribution of earnings expectations was negligible.

References

Dison, Will and Rattan, Alex (2017), "An Improved Model for Understanding Equity Prices", Bank of England Quarterly Bulletin, Vol. 57(2), pages 86-97.
Damodaran, Aswath (2019), "Equity Risk Premiums (ERP): Determinants, Estimation and Implications – The 2019 Edition".
Cao, Guangye, Molling, Daniel and Doh, Taeyoung (2015), "Should Monetary Policy Monitor Risk Premiums in Financial Markets?", Economic Review, Federal Reserve Bank of Kansas City.
Norges Bank (2016), "Discussion Note on the Equity Risk Premium".
Bernanke, Ben S. (2003), "Monetary Policy and the Stock Market: Some Empirical Results", remarks at the Fall 2003 Banking and Finance Lecture, Widener University, Chester, Pennsylvania.
Panigirtzoglou, Nikolaos and Scammell, Robert (2002), "Analysts' Earnings Forecasts and Equity Valuations", Bank of England Quarterly Bulletin.
Chen, Nai-Fu, Roll, Richard and Ross, Stephen A. (1986), "Economic Forces and the Stock Market", The Journal of Business, Vol. 59, No. 3.
Arnott, Robert D. and Bernstein, Peter L. (2002), "What Risk Premium is 'Normal'?", Financial Analysts Journal, Vol. 58, No. 2, pages 64-85.
Lettau, M., Ludvigson, S. and Wachter, J. (2008), "The Declining Equity Risk Premium: What Role Does Macroeconomic Risk Play?", Review of Financial Studies, 21, pages 1653-1687.
Gibson, R. and Mougeot, N. (2004), "The Pricing of Systematic Liquidity Risk: Empirical Evidence from the US Stock Market", Journal of Banking and Finance, 28, pages 157-178.
Bekaert, G., Harvey, C. and Lundblad, C. (2007), "Liquidity and Expected Returns: Lessons from Emerging Markets", Review of Financial Studies, 20(6), pages 1783-1831.
Rietz, T. (1988), "The Equity Risk Premium: A Solution", Journal of Monetary Economics, 22, pages 117-131.
Barro, R. (2006), "Rare Disasters and Asset Markets in the Twentieth Century", Quarterly Journal of Economics, 121(3), pages 823-866.
Nakamura, E., Steinsson, J., Barro, R. and Ursua, J. (2013), "Crises and Recoveries in an Empirical Model of Consumption Disasters", American Economic Journal: Macroeconomics, 5(3), pages 35-74.
Gabaix, Xavier (2012), "Variable Rare Disasters: An Exactly Solved Framework for Ten Puzzles in Macro-Finance", The Quarterly Journal of Economics, Vol. 127(2), pages 645-700.

Annex I: ERP and Economic Activity

To understand the macroeconomic implications of changes in the equity risk premium, we have regressed both IIP and GDP on ERP as well as its past values. A distinction is made between increases and decreases in ERP, which are included separately to estimate their asymmetric impact.
In evaluating the impact on IIP, monthly IIP is regressed on the previous six months' changes in ERP and, similarly, quarterly GDP on the previous two quarters' changes in ERP (Chart 7). Lagged values of IIP and GDP are included in the regressions to gauge the predictive power of ERP over and above the past values of the dependent variables (IIP and GDP). Lagged terms are selected based on the improvement to the overall fit of the model. The negative coefficients on both increases and decreases in ERP establish the inverse relationship between the economic activity indicators (IIP and GDP) and the equity risk premium. However, both regressions suggest that while increases in ERP are significant in explaining the dependent variables, i.e., IIP and GDP, decreases in ERP are insignificant, in line with economic theory. This is largely consistent with the divergence between the real economy and the market observed in 2019, wherein ERP stayed low, contributing to a surge in equity markets to record highs, while GDP growth stayed muted. Overall, while the ERP has stayed below 4 per cent since 2016, real GDP growth has remained below its 2016 level of 8.7 per cent.

* This article is prepared by Priyanka Sachdeva and Abhinandan Borad under the guidance of Mohua Roy and Subrat Kumar Seet in the Division of Financial Markets of the Department of Economic and Policy Research, Reserve Bank of India. The views expressed in this article are those of the authors and do not represent the views of the Reserve Bank of India.

1 Free cash flow to equity (FCFE) is the amount of cash a business generates that is available to be potentially distributed to shareholders. In other words, FCFE is the cash left over after taxes, re-investment needs and debt repayments.
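As a companion to the Annex I regression described above, the sketch below shows one way such an asymmetric specification could be set up. It is a minimal illustration with simulated placeholder data, not the authors' estimation code; the lag structure and controls follow the text only loosely.

```python
# Minimal sketch (simulated placeholder data): regress GDP growth on its own
# lag and on lagged increases and decreases in ERP entered separately, so that
# the asymmetric impact described in Annex I can be tested.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def asymmetric_erp_regression(gdp_growth: pd.Series, erp: pd.Series, lags: int = 2):
    d_erp = erp.diff()
    df = pd.DataFrame({"gdp_growth": gdp_growth,
                       "gdp_growth_lag1": gdp_growth.shift(1)})
    for lag in range(1, lags + 1):
        df[f"erp_up_lag{lag}"] = d_erp.clip(lower=0.0).shift(lag)    # increases in ERP
        df[f"erp_down_lag{lag}"] = d_erp.clip(upper=0.0).shift(lag)  # decreases in ERP
    df = df.dropna()
    X = sm.add_constant(df.drop(columns="gdp_growth"))
    return sm.OLS(df["gdp_growth"], X).fit()

# Hypothetical quarterly series standing in for actual GDP growth and ERP data.
idx = pd.period_range("2005Q1", periods=60, freq="Q")
rng = np.random.default_rng(0)
erp = pd.Series(4.7 + np.cumsum(rng.normal(0, 0.3, 60)), index=idx)
gdp = pd.Series(7.0 + rng.normal(0, 0.5, 60), index=idx) - 0.5 * erp.diff().clip(lower=0).shift(1).fillna(0)
print(asymmetric_erp_regression(gdp, erp).summary())
```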
https://www.rbi.org.in/scripts/BS_ViewBulletin.aspx?Id=19838
The security market line (SML) is a visual representation of the capital asset pricing model, or CAPM. It shows the relationship between the expected return of a security and its risk, as measured by its beta coefficient. In other words, the SML displays the expected return for any given beta, or equivalently the risk associated with any given expected return. As mentioned above, the security market line is based on the following CAPM equation:

E(Ri) = RF + βi × (E(RM) - RF)

where E(Ri) is the expected return of a security, RF is the risk-free rate, βi is the security's beta coefficient, and E(RM) is the expected market return. The x-axis of the SML graph represents beta, and the y-axis represents the expected return. The risk-free rate is the intercept of the line, i.e., the expected return at a beta of zero. The SML can also shift when key economic fundamentals change, such as the expected inflation rate, GDP, or the unemployment rate.

Let's assume the current risk-free rate is 4.75%, and the expected market return is 15.50%. Thus, the SML equation will be as follows:

E(Ri) = 4.75 + βi × (15.50 - 4.75) = 4.75 + 10.75 × βi

Suppose that Security A has a beta of 0.6, and Security B has a beta of 1.2. The expected return of Security A is 11.20%, and the expected return of Security B is 17.65%.

E(RA) = 4.75 + 10.75 × 0.6 = 11.20%
E(RB) = 4.75 + 10.75 × 1.2 = 17.65%

So, lower risk (lower beta) means a lower expected return, and vice versa.

The security market line has the same limitations as the CAPM because it is based on the same assumptions. Real market conditions can't be characterized by strong efficiency: market participants have different abilities to lend or borrow money at the risk-free rate, and transaction costs differ. Thus, in the real world, the position of a given stock can be above or below the SML, as shown in the figure below. A graph of actual betas and expected returns of stocks therefore looks like a set of points rather than a single line. Stocks above the line are undervalued, because they offer a higher expected return for a given risk (beta) than the CAPM requires. Stocks below the security market line are overvalued, meaning they offer a lower expected return for a given risk than the CAPM estimate.
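A minimal sketch of the SML calculation above, reproducing the worked example's numbers (risk-free rate 4.75%, expected market return 15.50%); the function name is illustrative, not part of any library.

```python
# Minimal sketch: expected return from the SML/CAPM relationship, using the
# illustrative inputs from the worked example above (returns in per cent).
def sml_expected_return(beta, risk_free=4.75, market_return=15.50):
    return risk_free + beta * (market_return - risk_free)

for name, beta in [("Security A", 0.6), ("Security B", 1.2)]:
    print(f"{name}: E(R) = {sml_expected_return(beta):.2f}%")
# Prints 11.20% for Security A and 17.65% for Security B, matching the text.
```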
https://financialmanagementpro.com/security-market-line-sml/
The Reserve Bank of India switched back to the gross domestic product (GDP) model from the gross value added (GVA) methodology to provide its estimate of economic activity in the country. The switch to GDP is mainly to conform to international standards and global best practices.

Key Facts

The GVA methodology gives a picture of the state of economic activity from the producers' or supply side, whereas the GDP model gives the picture from the consumers' or demand side. Globally, the performance of most economies is gauged in terms of GDP. This is also the approach followed by multilateral institutions, international analysts and investors, because it facilitates easy cross-country comparisons.

Background

The government had started analysing growth estimates using the GVA methodology from January 2015 and had also changed the base year to 2011-12. The Central Statistical Office (CSO) has likewise used the GDP model as its main measure of economic activity since January 2018.

Highlights of Economic Survey 2017-18

Union Finance Minister Arun Jaitley tabled the Economic Survey 2017-18 in Parliament during the 2018 budget session. The survey was authored by the chief economic adviser in the finance ministry, Arvind Subramanian. The survey projects the economy to grow in the range of 7% to 7.5% in the next fiscal year, 2018-19, in the post-demonetisation period. The Survey 2017-18 was printed in pink colour to highlight gender issues.

Key Highlights of the Survey

Growth Forecast: Real GDP growth for 2017-18 is projected at around 6.75%, and is expected to reach 7-7.5% in 2018-19, driven by major reforms initiated by the government. There was a reversal of the declining trend of GDP growth in the second quarter of 2017-18. The growth during this period was led by the industry sector. The agriculture, industry and services sectors are expected to grow at 2.1%, 4.4%, and 8.3% respectively in 2017-18.
https://currentaffairs.gktoday.in/tags/growth-forecast/page/2
Estimating The Fair Value Of ProCook Group plc (LON:PROC)

How far off is ProCook Group plc (LON:PROC) from its intrinsic value? Using the most recent financial data, we'll take a look at whether the stock is fairly priced by taking the expected future cash flows and discounting them to their present value. We will use the Discounted Cash Flow (DCF) model on this occasion. Before you think you won't be able to understand it, just read on! It's actually much less complex than you'd imagine.

We would caution that there are many ways of valuing a company and, like the DCF, each technique has advantages and disadvantages in certain scenarios. Anyone interested in learning a bit more about intrinsic value should have a read of the Simply Wall St analysis model. Check out our latest analysis for ProCook Group.

What's the estimated valuation?

We have to calculate the value of ProCook Group slightly differently to other stocks because it is a specialty retail company. In this approach dividends per share (DPS) are used, as free cash flow is difficult to estimate and often not reported by analysts. This often underestimates the value of a stock, but it can still be good as a comparison to competitors. The 'Gordon Growth Model' is used, which simply assumes that dividend payments will continue to increase at a sustainable growth rate forever. For a number of reasons a very conservative growth rate is used that cannot exceed the growth of a country's Gross Domestic Product (GDP). In this case we used the 5-year average of the 10-year government bond yield (0.9%). The expected dividend per share is then discounted to today's value at a cost of equity of 7.1%. Relative to the current share price of UK£0.4, the company appears around fair value at the time of writing. Remember though, that this is just an approximate valuation, and like any complex formula - garbage in, garbage out.

Value Per Share = Expected Dividend Per Share / (Discount Rate - Perpetual Growth Rate) = UK£0.02 / (7.1% – 0.9%) = UK£0.4

The assumptions

The calculation above is very dependent on two assumptions. The first is the discount rate and the other is the cash flows. You don't have to agree with these inputs; I recommend redoing the calculations yourself and playing with them. The DCF also does not consider the possible cyclicality of an industry, or a company's future capital requirements, so it does not give a full picture of a company's potential performance. Given that we are looking at ProCook Group as potential shareholders, the cost of equity is used as the discount rate, rather than the cost of capital (or weighted average cost of capital, WACC) which accounts for debt. In this calculation we've used 7.1%, which is based on a levered beta of 1.265. Beta is a measure of a stock's volatility, compared to the market as a whole. We get our beta from the industry average beta of globally comparable companies, with an imposed limit between 0.8 and 2.0, which is a reasonable range for a stable business.

Next Steps:

Valuation is only one side of the coin in terms of building your investment thesis, and it shouldn't be the only metric you look at when researching a company. It's not possible to obtain a foolproof valuation with a DCF model. Instead the best use for a DCF model is to test certain assumptions and theories to see if they would lead to the company being undervalued or overvalued. For example, changes in the company's cost of equity or the risk free rate can significantly impact the valuation.
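To make the formula above easy to experiment with, here is a minimal sketch of the Gordon Growth calculation. The inputs are the rounded figures quoted in the article; because the displayed dividend per share is rounded, recomputing from these inputs will not exactly reproduce the quoted UK£0.4.

```python
# Minimal sketch of the Gordon Growth Model calculation described above.
# Inputs are the article's rounded figures; swap in your own assumptions.
def gordon_growth_value(dps, cost_of_equity, growth_rate):
    """Value per share = expected dividend per share / (discount rate - perpetual growth rate)."""
    if cost_of_equity <= growth_rate:
        raise ValueError("cost of equity must exceed the perpetual growth rate")
    return dps / (cost_of_equity - growth_rate)

value = gordon_growth_value(dps=0.02, cost_of_equity=0.071, growth_rate=0.009)
print(f"estimated value per share: UK£{value:.2f}")
```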
For ProCook Group, there are three additional aspects you should assess:

- Risks: Case in point, we've spotted 2 warning signs for ProCook Group you should be aware of.
- Future Earnings: How does PROC's growth rate compare to its peers and the wider market? Dig deeper into the analyst consensus number for the upcoming years by interacting with our free analyst growth expectation chart.
- Other Solid Businesses: Low debt, high returns on equity and good past performance are fundamental to a strong business. Why not explore our interactive list of stocks with solid business fundamentals to see if there are other companies you may not have considered!

PS. Simply Wall St updates its DCF calculation for every British stock every day, so if you want to find the intrinsic value of any other stock just search here.

Have feedback on this article? Concerned about the content? Get in touch with us directly. Alternatively, email editorial-team (at) simplywallst.com. This article by Simply Wall St is general in nature. We provide commentary based on historical data and analyst forecasts only using an unbiased methodology and our articles are not intended to be financial advice. It does not constitute a recommendation to buy or sell any stock, and does not take account of your objectives, or your financial situation. We aim to bring you long-term focused analysis driven by fundamental data. Note that our analysis may not factor in the latest price-sensitive company announcements or qualitative material. Simply Wall St has no position in any stocks mentioned.
https://nz.finance.yahoo.com/news/estimating-fair-value-procook-group-082456139.html
A substantial portion of the socio-political model of IFs is scattered throughout the other models. There are "policy handles" or intervention points throughout those models. For instance, in the population model, multipliers on the total fertility rate can reflect policy decisions (although they can also reflect the model user's judgment concerning social changes in the country or region, independent of policy). Patterns of regulation, subsidy, tax incidence, and provision of state services are so diffuse and complicated that we resort to looking at their aggregate consequences through various "policy handles" rather than trying to represent them explicitly. For more information on this module, please use the links below or read more at Socio-Political Equations Overview.

Structure and Agent System: Socio-Political

- System/Subsystem: Socio-political
- Organizing Structure: Social fabric
- Stocks: Levels of human well-being and institutional development (human and social capital); cultural structures
- Flows: Social expenditures; value change
- Key Aggregate Relationships (illustrative, not comprehensive): Growth in literacy and human development; democratic development; state failure
- Key Agent-Class Behavior Relationships (illustrative, not comprehensive): Government efforts to develop human capital through spending on health, education, and R&D

Unlike the use of cohort-component structures in demographics and of markets and social accounting matrices for economics, there is no standard organizing structure that is widely used for representing socio-political systems. In the context of the TERRA project, IFs developed a multi-component approach to structure that might be called the "social fabric" (a la Robert Pestel). Although representation of agent-class behavior would be of special interest in a socio-political module, most relationships in IFs remain at the level of aggregate specifications.

Dominant Relations: Socio-political

Domestic Socio-Political Change: Dominant Relations

Social and political change occurs on three dimensions: social characteristics or individual life conditions, values, and socio-political institutions and processes. Although GDP per capita is strongly correlated with all dimensions of change, it might be more appropriate to conceptualize a syndrome or complex of developmental change than to portray an economically-driven process. For a causal diagram see Socio-Political Flow Charts Overview. For equations see, for example, Socio-Political Equations Overview.

Key dynamics are directly linked to the dominant relations:

- The model computes some key social characteristics/life conditions, including life expectancy and fertility rates in the demographic model, but the user can affect them via multipliers (mortm, tfrm). Literacy rate is an endogenous function of education spending, which the user can influence (via gdsm).
- The model computes value or cultural change on three dimensions: traditional versus secular-rational, survival versus self-expression, and modernism versus postmodernism, which the user can affect via additive factors (tradsrateadd, survseadd, matpostradd).
- Freedom, democracy (the POLITY measure), autocracy, economic freedom, and the status of women are all computed endogenously but can all be shifted by the user via multipliers (freedomm, democm, autocm, econfreem, gemm).

Domestic Socio-Political Change: Selected Added Value

The larger socio-political model provides representation and control over government spending on education, health, the military, R&D, foreign aid, and a residual category. Military spending is linked to interstate politics, both as a driver of threat and as a result of action-and-reaction based arms spending. The sub-model provides aggregated indicators of the physical quality of life and the human development index.

Socio-political Flow Charts Overview

The social and political module represents a complex of interacting structures and processes. These include:

- The various social characteristics or life conditions of individuals
- Human values, beliefs, and orientations
- Social and political structures, informal as well as formal
- Social and political processes, both domestic and international

Cultural foundations frame all of these components. And all of the components interact closely with human demographic and economic systems. The socio-political elements of IFs are among the most dynamically evolving aspects of the overall modeling system. Much, but not everything, in the above figure has been fully represented within IFs; the figure indicates the direction of development and shows implemented elements in italics. For more, please read the links below.

Social Characteristics: Life Conditions

Individuals are the foundations of society. Many social indicators are actually aggregated indicators of their condition. The Human Development Index (HDI) is a widely-used summary measure of that life condition, based on life expectancy, educational attainment, and GDP per capita.

Physical Quality of Life (PQLI)

The Overseas Development Council (then under the leadership of Jim Grant) developed and publicized a measure of (physical) quality of life (the PQLI) many years ago. It combines the literacy rate, infant mortality rate, and life expectancy, using scales from the lowest to the highest values in the global system, and it weights the three scales equally. The literacy rate is, in turn, a function of per capita spending levels on education, estimated cross-sectionally. In many respects the PQLI was a predecessor of the human development index (HDI). Based on country/region-specific Physical Quality of Life, it is possible to compute world quality of life (WPQLI) and the North-South gap in quality of life (NSPQLI). Given country-specific literacy rates, it is also possible to compute world literacy (WLIT). A minimal sketch of the PQLI construction is shown below.

Income Distribution

Income distribution is represented by the share of national income earned by the poorest 20 percent of the population. That share is obtained from data whenever possible, but is estimated from a cross-sectional relationship when necessary and changed over time by that relationship (the values tend, however, to be very stable both in the real world and in the model). Because initial conditions of variables affected by income share, such as fertility and mortality rates, already reflect existing income distributions, it is only the changes in that distribution relative to the expected value that the model uses in such relationships. A parameter (incshrm) is available to change income share and thus affect those variables influenced by it.
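As referenced above, here is a minimal sketch of the PQLI construction: each indicator is rescaled to 0-100 between the worst and best values observed globally and the three scales are averaged with equal weights. The bounds follow the approximate ranges given in the equations section further below (life expectancy 28-80 years; infant mortality 229-9 per thousand); the inputs in the example are hypothetical.

```python
# Minimal sketch of the PQLI: rescale each indicator to 0-100 and average.
def rescale(value, worst, best):
    """Map a value onto 0-100, where `worst` maps to 0 and `best` to 100."""
    return 100.0 * (value - worst) / (best - worst)

def pqli(literacy_pct, life_expectancy, infant_mortality):
    life_index = rescale(life_expectancy, worst=28.0, best=80.0)
    # For infant mortality a lower value is better, so the bounds are reversed.
    infmor_index = rescale(infant_mortality, worst=229.0, best=9.0)
    return (literacy_pct + life_index + infmor_index) / 3.0

# Hypothetical country values.
print(round(pqli(literacy_pct=74.0, life_expectancy=69.0, infant_mortality=30.0), 1))
```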
Social Characteristics: Networking

Being electronically networked is an increasingly important aspect of the human life condition. The number of networked persons (NUMNWP) is a function primarily of the growth rate in that number (NUMNWPGR). It is ultimately constrained, however, by the size of the population and by the number of connections and organizational memberships that people can have (numnwplim). The growth in the number of networked persons slows as it approaches the ultimate limit. The model user can affect the growth pattern via a multiplier on the growth rate (numnwpgrm). This approach was added to IFs during the TERRA project and draws on the thinking of Tom Tesch and Pol Descamps.

Social Values and Cultural Evolution

IFs computes change in three cultural dimensions identified by the World Values Survey (Inglehart 1997). Those are dimensions of materialism/post-materialism, survival/self-expression, and traditional/secular-rational values. Inglehart has identified large cultural regions that have substantially different patterns on these value dimensions, and IFs represents those regions, using them to compute shifts in value patterns specific to them. Levels on the three cultural dimensions are predicted not only for the country/regional populations as a whole, but also for each of six age cohorts. Not shown in the flow chart is the option, controlled by the parameter "wvsagesw", of computing country/region change over time in the three dimensions by functions for each cohort (value of wvsagesw = 1) or by computing change only in the first cohort and then advancing that through time (value of wvsagesw = 2). The model uses country-specific data from the World Values Survey project to compute a variety of parameters in the first year by cultural region (English-speaking, Orthodox, Islamic, etc.). The key parameters for the model user are the three country/region-specific additive factors on each value/cultural dimension (matpostradd, etc.). Finally, the model contains data on the size (percentage of population) of the two largest ethnic/cultural groupings. At this point these parameters have no forward linkages to other variables in the model.

Social Organization and Change

The socio-political module computes change in freedom (political and economic) and the status of women. For freedom it uses both the measure of Freedom House and the combined measure for democracy (building on democracy and autocracy) of the POLITY project. It also computes a measure of economic freedom and of gender equality.

Social Organization: Stability/State Failure

The State Failure project has analyzed the propensity for different types of state failures within countries, including those associated with revolution, ethnic conflict, genocide-politicide, and abrupt regime change (using categories and data pioneered by Ted Robert Gurr). Upon the advice of Gurr, IFs groups the first three as internal war and the last as political instability. IFs uses the same primary variables (infant mortality, democracy, and trade openness) as the State Failure project to drive forecasts of the probability of individual events of state failure, of ongoing episodes of it, and of the magnitude of episodes. In addition, it allows the use in the formulation of GDP per capita and years of education. Many other linkages have been and can be explored, including cultural regions.

Government Spending

The economic submodel provides total government spending.
Government spending by category begins as a simple product of total government consumption and fractional shares by spending category. Spending by type (military, health, education, research and development, other, and foreign aid) is largely specified exogenously, building on the initial conditions for each country/region. In addition, an action-reaction (arms-race) dynamic can be established in military spending if the action-reaction switch is turned on. After adjustments to foreign aid and military spending, spending in all categories is re-normalized to equal total governmental spending. Educational spending by level is further broken out of total educational spending. The user can shift spending across the three educational levels (primary, secondary, and tertiary) through the use of an educational multiplier. See also the specifications of detailed final demand and of international finance.

Drug Demand

The UNODC drug report finds that illicit drug use is concentrated amongst the youth, notably young males living in an urban environment. The UNODC report also finds a pronounced gender gap in relation to illicit drug consumption. Gender equality and empowerment seem to act as a key driver in determining drug consumption. For example, in the United States, characterized by a small gender gap, female drug use is about two thirds that of males, whereas in some other countries, including India and Indonesia, female drug use is as low as one tenth that of males, though there is a risk that female drug use may be underreported. In addition, we have also found poverty, inequality and government health expenditure to be drivers of specific types of drug prevalence. Policy options with respect to drug prevalence are represented in the model using multipliers, which can be used to simulate an increase or decrease in drug prevalence. The table below lists the driving variables for each of the drug types.

| Drug Type | Driving Variables | Driving Variables in IFs |
| --- | --- | --- |
| Amphetamines | Youth Bulge, Gender Inequalities | YTHBULGE, GEM |
| Cocaine | Consumption levels, Gender Empowerment Measure and Income Inequality | (C/POP), GEM, GINIDOM |
| Opiates | Poverty, Youth Bulge and Urban Population | INCOMELT310LN, YTHBULGE, POPURBAN |
| Prescription Opioids | Health Expenditure | HLEXPEND |

The figure below shows a diagrammatic representation of the drug demand model in IFs.

Violence

Mortality from conflict is driven by the probability of internal war (SFINTLWARALL). Mortality from homicides and from violence against women and children is driven by the youth bulge (YTHBULGE) and the Gini coefficient (GINIDOM). Police violence deaths are driven by homicides (SVDTHSOTHERINTERPERSON) and the corruption index in IFs (GOVCORRUPT). Finally, mortality from self-harm is calculated using mental health deaths (which are calculated in the health model) and deaths of women and children (SVDTHSWOMENANDCHILDREN). There are user-controllable parameters available in the model to adjust the death rates (svmulm) and the total number of deaths (svdthsadd) for each of the categories of violence. Finally, the homicide index (HOMICIDEINDEX) is calculated using each of the death rates mentioned above excluding self-harm. The homicide index itself is used in computing a conflict component of the security index in IFs (GOVINDSECUR). The figure below shows a visual representation of the violence model in IFs.

Socio-political Equations Overview

A substantial portion of the policy model of IFs is scattered throughout the other models.
There are "policy handles" or intervention points throughout those models. For instance, in the population model, multipliers on the total fertility rate can reflect policy decisions (although they can also reflect the model user's judgment concerning social changes in the country or region, independent of policy). Similarly, in the energy model, the multiplier on energy demand can represent conservation policy. Likewise, the ultimate energy resource base and the rate of resource discovery remain uncertain in part because they are subject to a wide range of government interventions, and multipliers can introduce assumptions about such interventions. In the economic module, the level of trade protection is very clearly a policy parameter, as is the multiplier on the tax rate. Patterns of regulation, subsidy, tax incidence, and provision of state services are so diffuse and complicated that we resort to looking at their aggregate consequences through various "policy handles" rather than trying to represent them explicitly. IFs contains other categories of socio-political activity, however, that it represents in a more integrated fashion in the socio-political module as a four-dimensional social fabric: social characteristics/life condition, values, social structures (formal and informal), and social processes. For help understanding the equations see Notation.

Socio-political Equations: Life Conditions

Literacy changes from the initial level for the region because of a multiplier (LITM). The function upon which the literacy multiplier is based represents the global cross-sectional relationship between educational expenditures per capita (EDEX) from the government submodel and the literacy rate (LIT). Rather than imposing the typical literacy rate on a region (and thereby being inconsistent with initial empirical values), the literacy multiplier is the ratio of typical literacy at current expenditure levels to the normal literacy level at initial expenditure levels. This formulation predates the development of an educational module that calculates the numbers of those with a primary education (one common definition of literacy). As that module is refined, we will likely derive literacy dynamics from it. Educational expenditures (and thus implicitly literacy and labor efficiency) are tied back to the economic model via the economic production function.

Given life expectancy, literacy, and infant mortality levels from the mortality distribution, it is possible to compute the Physical Quality of Life Index (PQLI) that the Overseas Development Council developed (ODC, 1977: 147-154). This measure averages the three quality of life indicators, first normalizing each indicator so that it ranges from zero to 100. The normalization is not needed for literacy; for life expectancy it converts the range of approximately 28 (LIFEXPMIN) to 80 (LIFEXPMAX) years into 0 to 100; for infant mortality it converts the range of approximately 229 per thousand (INFMORMAX) to 9 per thousand (INFMORMIN) into 0 to 100.

For most users, the United Nations Development Program's human development index (HDI) has replaced the PQLI as an integrated measure of life condition. It is a simple average of three sub-indices for life expectancy, education, and GDP per capita (using purchasing power parity). The life expectancy sub-index is the same as was used for the PQLI. The literacy sub-index is again the literacy rate. The GDP per capita index is a logged form that runs from a minimum of $100 to a maximum of $40,000 per capita.
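Assembling the pieces just described, a minimal sketch of this HDI-style calculation might look as follows; the logged GDP sub-index and the 0-100 scaling follow the text, but the exact functional forms in IFs may differ, and the inputs are hypothetical.

```python
# Minimal sketch of the HDI described above: the average of a life-expectancy
# index (28-80 years), the literacy rate, and a logged GDP-per-capita (PPP)
# index running from $100 to $40,000, all expressed on a 0-100 scale.
import math

def hdi(life_expectancy, literacy_pct, gdppcp):
    life_index = 100.0 * (life_expectancy - 28.0) / (80.0 - 28.0)
    gdp_index = 100.0 * (math.log(gdppcp) - math.log(100.0)) / (math.log(40000.0) - math.log(100.0))
    return (life_index + literacy_pct + gdp_index) / 3.0

# Hypothetical country values.
print(round(hdi(life_expectancy=69.0, literacy_pct=74.0, gdppcp=6500.0), 1))
```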
The measure in IFs differs slightly from the HDI version, because it does not put educational enrollment rates into a broader educational index with literacy; that will be changed as the educational model of IFs is better tested.

Although the HDI is a wonderful measure for looking at past and current life conditions, it has some limitations when looking at the longer-term future. Specifically, the fixed upper limits for life expectancy and GDP per capita are likely to be exceeded by many countries before the end of the 21st century. IFs has therefore introduced a floating version of the HDI, in which the maximums for those two index components are calculated from the maximum performance of any state in the system in each forecast year. The floating measure, in turn, has some limitations because it introduces relative attainment into the equation rather than absolute attainment. IFs therefore uses still a third version of the HDI, one that allows users to specify probable upper limits for life expectancy and GDPPC in the twenty-first century. Those enter into a fixed calculation of which the normal HDI could be considered a special case.

It is useful to compute several additional global indicators: a world physical quality of life index (WPQLI), a world life expectancy (WLIFE), a world literacy rate (WLIT), and a North-South gap index, the ratio of quality of life in the "developed" (D) regions to that in the "less developed" (L) regions (NSPQLI).

Socio-political Equations: Income Distribution

The income share of the poorest 20 percent of the population (INCSHR) depends on GDP per capita at PPP (GDPPCP) and on an exogenous income share multiplier (incshrm). The introduction of different household types into the social accounting matrix structure of IFs made possible the computation of a more sophisticated measure of income distribution tied directly to the model's computation of household income (HHINC) and household size (HHPOP) by type. A domestic Gini value (GINIDOM) is calculated from a function that uses the normal Lorenz curve foundation for Gini indices. Because that function can calculate values that are quite different from the empirical initial values, a ratio of the empirical value to the initial computed value (GINIDOMRI) is used for scaling purposes. The model's formulation of the relative household income levels of different household types, and therefore the calculation of a domestic Gini based on those income levels, are in early versions and are still rather crude.

One value of a domestic Gini calculation is that it, in turn, makes possible the calculation of the percentage of population living on less than one dollar per day (INCOMELT1) or two dollars per day (INCOMELT2). Functions were estimated linking GDP per capita at purchasing power parity (GDPPCP) and the Gini index to those percentages. Again, IFs uses initial conditions for scaling purposes. IFs also calculates a global Gini index across all countries/regions in the model, again using the standard Lorenz curve approach to areas of inequality and equality. It does not yet take into account intra-regional income differentials, but the foundation is now in place to do so. The user interface of IFs now uses the same Lorenz-curve approach to allow the user to calculate a specialized-display Gini for any variable that can be represented across all countries/regions of the model.
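As an illustration of the Lorenz-curve approach mentioned above, here is a minimal sketch of a grouped-data Gini computation; it is not the IFs code, it ignores the model's rescaling to the empirical initial-year value, and it uses hypothetical income and population groups.

```python
# Minimal sketch: Gini index from grouped income data via the Lorenz curve,
# using trapezoidal integration of the area under the curve.
def lorenz_gini(incomes, populations):
    groups = sorted(zip(incomes, populations), key=lambda g: g[0] / g[1])  # by per capita income
    total_income = float(sum(incomes))
    total_pop = float(sum(populations))
    cum_pop = cum_inc = area = 0.0
    for income, pop in groups:
        next_pop = cum_pop + pop / total_pop
        next_inc = cum_inc + income / total_income
        area += (next_pop - cum_pop) * (cum_inc + next_inc) / 2.0  # area under Lorenz curve
        cum_pop, cum_inc = next_pop, next_inc
    return 1.0 - 2.0 * area

# Hypothetical household groups: total incomes and population sizes.
print(round(lorenz_gini(incomes=[10.0, 30.0, 60.0], populations=[40.0, 40.0, 20.0]), 3))
```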
Socio-political Equations: Networking

The focal point of this portion of the model is the computation of the total number of networked persons (NUMNWP). The rate of growth in that number (NUMNWPGR) is subject to several forces. The initial value of that rate is set in the data preprocessor of the model from empirical data. When no data are available for a country or region, the rate is set at a level determined via a cross-sectional relationship between GDP per capita (PPP) and the portion of the population that is networked.

Over time the growth rate of networked persons is subject to a saturating function, as the actual number of networked persons approaches a limit. The limit is set by an exogenous multiplier (numnwplim) on total population; networked persons can exceed total population because of the multiple affiliations of individuals (households, NGOs, companies). The user of the model can accelerate or decelerate the process of networking via a multiplier on the growth rate (numnwpgrm). Although of interest in its own right, the number of networked persons is also carried forward in the model to the production function of the economy.

Socio-political Equations: Values

IFs computes change in three cultural dimensions identified by the World Values Survey (Inglehart 1997). Those are dimensions of materialism/post-materialism (MATPOSTR), survival/self-expression (SURVSE), and traditional/secular-rational values (TRADSRAT). On each dimension the process of calculation is somewhat more complicated than for freedom or gender empowerment, however, because the dynamics of change in the cultural dimensions involve the aging of population cohorts. IFs uses the six population cohorts of the World Values Survey (1 = 18-24; 2 = 25-34; 3 = 35-44; 4 = 45-54; 5 = 55-64; 6 = 65+). It calculates change in the value orientation of the youngest cohort (c = 1) from change in GDP per capita at PPP (GDPPCP), but then maintains that value orientation for the cohort and all others as they age. Analysis of different functional forms led to the use of an exponential form with GDP per capita for materialism/post-materialism and to the use of logarithmic forms for the two other cultural dimensions (both of which can take on negative values).

The user can influence values on each of the cultural dimensions via two parameters. The first is a cultural shift factor (e.g. CultSHMP) that affects all of the IFs countries/regions in a given cultural region as defined by the World Values Survey. Those factors have initial values assigned to them from empirical analysis of how the regions differ on the cultural dimensions (determined by the pre-processor of raw country data in IFs), but the user can change those further, as desired. The second parameter is an additive factor specific to individual IFs countries/regions (e.g. matpostradd). The default values for the additive factors are zero.

Some users of IFs may not wish to assume that aging cohorts carry their value orientations forward in time, but rather want to compute the cultural orientation of cohorts directly from cross-sectional relationships. Those relationships have been calculated for each cohort to make such an approach possible. The parameter (wvsagesw) controls the dynamics associated with the value orientation of cohorts in the model. The standard value for it is 2, which results in the "aging" of value orientations. Any other value for wvsagesw (the WVS aging switch) will result in the use of the cohort-specific functions with GDP per capita.
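To make the cohort "aging" dynamic concrete, here is a minimal sketch; the function driving the youngest cohort is a hypothetical placeholder, not the estimated IFs equation, and the simple one-cohort shift abstracts from cohort widths and the treatment of the oldest cohort.

```python
# Minimal sketch of the wvsagesw = 2 dynamic: only the youngest cohort's value
# orientation is recomputed (here from a placeholder logarithmic function of
# GDP per capita at PPP); older cohorts carry their orientation forward.
import math

def youngest_cohort_value(gdppcp):
    # Hypothetical placeholder for the estimated cohort-1 function.
    return -1.0 + 0.4 * math.log(gdppcp / 1000.0)

def age_cohort_values(cohort_values, gdppcp):
    """Shift the six WVS cohorts (18-24 ... 65+) one step older and refresh cohort 1."""
    return [youngest_cohort_value(gdppcp)] + cohort_values[:-1]

values = [0.2, 0.1, 0.0, -0.1, -0.2, -0.3]   # hypothetical initial orientations
values = age_cohort_values(values, gdppcp=9000.0)
print(values)
```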
Regardless of which approach to value-change dynamics is used, IFs calculates the value orientation for a total region/country as a population cohort-weighted average.

IFs uses an approach similar to the one for literacy in order to estimate the future of another measure created by the United Nations Development Program, the Gender Empowerment Measure (GEM). The closer the values of that measure approach "1", the closer women are to men in political and social power.

Socio-political Equations: Structures or Institutions

IFs endogenizes the level of freedom (FREEDOM), based on the Freedom House measures, by linking change from initial conditions to GDP per capita at purchasing power parity in an analytic function. For discussion of the relationship between GDP and democracy, see Londregan and Poole (1996) and Przeworski and Limongi (1997). The latter view it as a probabilistic relationship in which there are a variety of reasons (often external pressure) at all levels of economic development for the conversion of dictatorships to democracies, and in which the conversion of democracies to dictatorships occurs commonly at low but not high levels of development. That pattern creates a positive correlation between economic development and democratic government. A multiplier on freedom level (freedomm) increases or decreases the level of freedom.

The Economic Freedom Institute (with leadership from the Fraser Institute; see Gwartney and Lawson with Samida, 2000) has also introduced a measure of economic freedom. IFs represents that in similar fashion.

The POLITY project provides an alternative to the Freedom House measure of freedom or democracy level. In fact, it provides multiple variables related to the political system. IFs earlier included formulations of two of those, democracy (DEMOC) and autocracy (AUTOC), which worked in completely analogous fashion. More recently, IFs has (1) combined the two Polity project measures into a single one, as is often done with the Polity measures, setting POLITYDEMOC equal to democracy minus autocracy plus 10, a measure that runs from 0 to 20; and (2) introduced a more complicated, multi-level forecast for the new measure.

Specifically, the project identified three levels of analysis for factors that affect democratic change: domestic, regional, and systemic. At each of the three levels there are multiple factors that can affect democracy within states. At the domestic level we can identify two categories of factors in particular:

- GDP per capita. This variable correlates highly with almost all measures of social condition; GDP provides the resources for democratization and other social change.
- Values/culture. Values clearly do differ across countries and regions of the world and almost certainly affect the propensity to democratize.

At the regional level (or, more accurately, the "swing-states" level) we can also identify three prospective drivers:

- World average effects. It is possible that the world average exerts a pull effect on states around the world (for instance, increasing globalization could lead to homogenization of a wide variety of social structures around the world).
- Swing state effects. Some states within regions quite probably affect/lead others (obviously the former Soviet Union was a prime example of such a swing state within its sphere of influence, but there is reason to believe in lesser and less coercive effects elsewhere).
- Regional average.
States within a region possibly affect each other more generally, such that "swing states" are moved by regional patterns and not simply movers of them.

At the system level we identify three:

- Systemic leadership impetus. It is often suggested that the United States and other developed countries can affect democratization in less developed countries, either positively or negatively.
- Snowballing of democracy (Huntington 1991). The wave character of democratization suggests that there may be an internal dynamic, a self-reinforcing positive feedback loop, of the process globally, partially independent of other forces that act on the process. Such a conclusion is consistent with the fact that idea spread and global regime development influence many types of social change (Hughes 2001).
- Miscellaneous other forces. Historic analysis would identify world war, economic depression, and other factors to explain the global pattern of democratization, especially the surge or retreat of waves.

A project document prepared for the CIA's Strategic Assessment Group (SAG) analyzed historic data and, in cooperation with David Epstein and Larry Diamond, fit an approach to it that cut across these three levels (see Hughes 2002: 59-74 for elaboration and documentation of the empirical work). The empirical work is not documented again here. The work did not find significant and consistent regional-level effects, however, and the regional variables are therefore normally turned off. The resulting formulation uses the domestic level as an initial base calculation because it is the empirically strongest piece, and later adds (optionally) the regional-level effects and the systemic effects. The base calculation is further tied to the actual empirical levels in the initial year of the run, with the impact of the driving variables being felt only in change of those levels.

An "expected" democracy level (DEMOCEXP) is computed using an analytic function that uses GDP per capita at purchasing power parity (GDPPCP) and the World Values Survey's survival/self-expression dimension (SURVSE). These were found to be quite powerful in their correlation with democracy, and the WVS dimension, interestingly, carries a cultural component into the formulation. The user can further modify this basic formulation with an exogenous multiplier (democm).

It is also useful to have a separate calculation of the empirically strongest piece of the formulation, namely the domestic effects, but without any adjustment to the initial empirical values. The expected democracy variable (DEMOCEXP) carries that. It can be compared with the fully computed values to see the degree to which there may be tension in countries between the democracy levels that GDP per capita and values would predict, on the one hand, and those that are in the initial data, on the other. The greatest tension levels tend to be in Middle Eastern countries, where democracy is considerably below "expected" levels.

The initial conditions of democracy in countries carry a considerable amount of idiosyncratic, country-specific influence, much of which can be expected to erode over time. Therefore a revised base level is computed that converges over time from the base component with the empirical initial condition built in to the value expected purely on the basis of the analytic formulation.
The user can control the rate of convergence with a parameter that specifies the years over which convergence occurs (polconv) and can, in effect, shut off convergence by setting the years very high.

On top of the country-specific calculation sits the (optional) regional or swing state effect calculation (SwingEffects), turned on by setting the swing states parameter (swseffects) to 1. The swing effects term has three components. The first is a world effect, whereby the democracy level in any given state (the "swingee") is affected by the world average level, with a parameter of impact (swingstdem) and a time adjustment (timeadj). The second is a regionally powerful state factor, the regional "swinger" effect, with similar parameters. The third is a swing effect based on the average level of democracy in the region (RgDemoc). David Epstein of Columbia University did extensive estimation of the parameters (the adjustment parameter on each term is 0.2). Unfortunately, the levels of significance were inconsistent across swing states and regions. Moreover, the term with the largest impact is the global term, already represented somewhat redundantly in the democracy wave effects. Hence, these swing effects are normally turned off and are available for optional use.

Also on top of the country-level effects sits the effect of global waves (DemGlobalEffects). Those depend on the amplitude of waves (DEMOCWAVE) relative to their initial condition and on a multiplier (EffectMul) that translates the amplitude into effects on states in the system. Because the democracy and democratic-wave literature often suggests that countries in the middle of the democracy range are most susceptible to movements in the level of democracy, the analytic function enhances the effect in the middle range and dampens it at the high and low ends.

The democratic wave amplitude is a level that shifts over time (DemocWaveShift) with a normal maximum amplitude (democwvmax) and wave length (democwvlen), both specified exogenously, with the wave shift controlled by an endogenous parameter of wave direction that changes with the wave length (DEMOCWVDIR). The normal wave amplitude can also be affected by impetus towards or away from democracy by a systemic leader (DemocImpLead), assumed to be the exogenously specified impetus from the United States (democimpus) compared to the normal impetus level from the U.S. (democimpusn), plus the net impetus from other countries/forces (democimpoth).

Given both the global and regional/swing-state effects, it is possible to add these to the basic country calculation for the final computation of the level of democracy on the Polity scale. The size of the swing effects is constrained by an external parameter (swseffmax).

Socio-political Equations: Stability/State Failure

The State Failure project has analyzed the propensity for different types of state failure within countries, including those associated with revolution, ethnic conflict, genocide-politicide, and abrupt regime change (using categories and data pioneered by Ted Robert Gurr). Upon the advice of Gurr, IFs groups the first three as internal war and the last as political instability. The extensive database of the project includes many measures of failure.
IFs has variables representing three measures in each of the two categories, corresponding to the probability of the first year of a failure event (SFINSTABY1 and SFINTLWARY1), the probability of a first or continuing year (SFINSTABALL and SFINTLWARALL), and the magnitude of a first-year or continuing event (SFINSTABMAG and SFINTLWARMAG). Using data from the State Failure project, formulations were estimated for each variable using up to five independent variables that exist in the IFs model: democracy as measured on the Polity scale (DEMOCPOLITY), infant mortality (INFMOR) relative to the global average (WINFMOR), trade openness as indicated by exports (X) plus imports (M) as a percentage of GDP, GDP per capita at purchasing power parity (GDPPCP), and the average number of years of education of the population at least 25 years old (EDYRSAG25). The first three of these terms were used because of the State Failure project's findings of their importance; the last two were introduced because they were found to have very considerable predictive power with historic data.

The IFs project developed an analytic function capability for functions with multiple independent variables that allows the user to change the parameters of the function freely within the modeling system. The default values seldom draw upon more than two or three of the independent variables, because of the high correlation among many of them. Those interested in the empirical analysis should look to a project document (Hughes 2002) prepared for the CIA's Strategic Assessment Group (SAG), or to the model for the default values.

One additional formulation issue grows out of the fact that the initial values predicted for countries or regions by the six estimated equations are almost invariably somewhat different, and sometimes quite different, from the empirical rate of failure. There may well be additional variables, some perhaps country-specific, that determine the empirical experience, and it would be unfortunate to lose that information. Therefore the model computes three different forecasts of the six variables, depending on the user's specification of a state-failure history use parameter (sfusehist). The estimated formulation differs across the six variables, and the analytic function handles various forms, including linear and logarithmic.

- If the value of sfusehist is 0, forecasts are based on the predictive equations only (no use of history).
- If the value of sfusehist is 1, the historical values determine the initial level for forecasting, and the predictive functions are used to change that level over time.
- If the value of sfusehist is 2, the historical values determine the initial level for forecasting, the predictive functions are used to change the level over time, and the forecast values converge over time to the purely predictive ones, gradually eliminating the influence of the country-specific empirical base. That is, the second formulation converges linearly towards the first over the years specified by a parameter (polconv), using the CONVERGE function of IFs.

Probability of state failure from different causes

The variables represent the probability of failure with respect to distinct conceptual groups of drivers.
- SFDEM (demography)
- SFECONDEV (economic/development)
- SFGOV (governance)
- SFIMBAL (structural imbalances)

Input variables needed to compute the probabilities:

| Drivers | Coeff. | Units | Transformation | Other specification |
| --- | --- | --- | --- | --- |
| Demography | | | | |
| Infant mortality | 0.77919 | Deaths/1000 births | Ln | |
| Population | 0.30204 | Millions | Ln | |
| Population growth | 0.07767 | Percent | | |
| Youth bulge (15-29/15+) | 0.0077 | Percent | | |
| Net migration | -0.29432 | Millions | | |
| _cons | -8.23582 | - | | |
| Economic/Development | | | | |
| GDP/cap | -0.30591 | Thousands (2011 PPP) | Ln | |
| GDP/cap (log) growth | -0.06393 | Percent | | |
| Life expectancy | -0.02537 | Years | | |
| _cons | -2.06558 | - | | |
| Governance | | | | |
| Polity | 0.03273 | -10 to 10 | | |
| Polity^2 | -0.02155 | Polity squared | | |
| _cons | -2.89726 | - | | |
| Structural Imbalances | | | | |
| Polity v GDP/cap | 0.04735 | [Polity - Expected] | Ln(GDP/cap) | Pooled |
| Life Exp. v GDP/cap | -0.0558 | [Life Exp. - Expected] | Ln(GDP/cap) | Partial Pool (re) |
| Youth Bulge v Polity | 0.0131 | [Yth Blg % - Expected] | | Based on year 2013 |
| _cons | -4.23404 | - | | |

The probabilities are computed as a logistic function of the drivers, where β0 is the constant and β1…k are the parameters listed above, applied to the driver values X1…k:

Probability = 1 / (1 + exp(-(β0 + β1·X1 + … + βk·Xk)))

Economic Inequality and Political Conflict

IFs does not yet include this important relationship. See Lichbach (1989) and Moore, Lindstrom, and O'Regan (1996) for analyses of how difficult this relationship is to specify. One critical problem is the conceptualization of political conflict, political repression, political instability, political violence, political protest, and so on. There are clearly many interacting, but separate, dimensions for consideration. As Lichbach (1989: 448) says, "robust EI-PC laws have not been discovered."

Drug Model Equations

We use linear regressions for each of the variables described above and fit these linear equations to logistic curves to derive the final prevalence rates. The methodology is similar to that used in the water and sanitation model of the International Futures tool to compute access to water and sanitation. The equations use the following drivers:

- C is the amount of household consumption in billion USD
- POP is the population
- YTHBULGE is the youth bulge (population aged between 15-29 years as a percent of the total population)
- INCOMELT310LN is the number of people living in poverty (earning less than USD 3.10 per day)
- POPURBAN is the number of people living in urban areas
- HLEXPEND is the amount of health spending (private and public)
- GDP is the gross domestic product

Pre-Processor and first year

The values for drug prevalence are initialized using illicit drug demand data from the UNODC. However, data availability from this source is low; Appendix II shows the data coverage across countries from the UNODC. Therefore, filling holes for the first year where no data are available is crucial. There are three options available to the user when filling holes:

- Using IHME equations to fill holes. The Institute for Health Metrics and Evaluation (IHME) also provides data on drug prevalence, and this source has much higher coverage (184 countries from 1990 to 2016). However, this data pertains to treatment of drug prevalence. We developed regression equations to estimate levels of illicit drug use from the IHME drug prevalence data set.
Appendix III describes these regression equations in detail.
- Using forecast year equations. This method uses the forecast year equations to derive the drug prevalence value for the first year of the model.
- Using regional averages from the UNODC. Alternatively, we can use regional averages of illicit drug prevalence to fill in holes for individual countries.

The user can choose the initialization method using the parameter druginitsw. By default, the model chooses the first option, i.e., using IHME equations to fill in holes for the first year of the model.

Forecast Years

Computing Drug Demand Using the Bottom Up Approach

In the forecast years, logistic regressions are used first to estimate the drug prevalence rates; the equations for amphetamines illustrate the approach. The estimated value is then used to compute the prevalence rate for each of the four drug types. These values are then adjusted for the shift factor, for user multipliers, and for a cap on the maximum possible value, where:

- DrugShift is the shift factor computed in the first year of the model, used to chain the forecast values to the historical values from the data.
- 2.3 is the cap on drug prevalence for amphetamines.
- AMIN is the function used to take the minimum of the computed drug prevalence and the cap (2.3).

Since the prevalence of drug usage tends to be slow moving over time, we have also capped the rate of growth of the prevalence rate for all four drug types. The growth rate in the drug prevalence rate is capped at 5 percent for every country for every year. However, this growth cap is not applied when the parameters on the drug prevalence rate are activated by a user. Finally, total drug use is computed as the average of the four drug types divided by a drugusepolyindex parameter, which is set to 1.2. This is done to account for users who use multiple drugs.

Adjusting Drug Use Using the Top Down Approach

The section above described the computation of drug prevalence using the bottom-up approach (i.e., drug prevalence is computed for each drug type individually and this is used to compute total drug demand). Another approach is to compute total drug demand first and distribute it across drug types (i.e., a top-down approach). The model computes total drug demand using this top-down approach and then converges the drug demand computed through the bottom-up approach to that value. The top-down model uses youth bulge and household consumption as the two main drivers. The total drug use from the bottom-up approach is converged to the top-down value over a period of 100 years. Note that there is a restriction of 2 percent on the yearly growth and decline rate of total drug use.

Violence Model Equations

Pre-processor and first year

In the pre-processor, each of the violence variables is initialized using death rate data from the Institute for Health Metrics and Evaluation (IHME). Please note that we only forecast mortality; the model currently does not have a representation of the prevalence of violence. For conflict deaths, instead of using the latest data point for initialization, we use a weighted average of conflict deaths from the previous 10 years, which is then divided by two to generate a more realistic number for the initialization. Where no data are available for any particular type of violence, we use the forecast equations to fill in holes for the first year of the model.
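The conflict-deaths initialization just described can be sketched as follows. The weighting scheme and the data are hypothetical, since the documentation does not specify the actual weights used.

```python
# Illustrative sketch of the conflict-deaths initialization described above: a weighted
# average of the previous 10 years of deaths, divided by two. The weights and the data
# are hypothetical.

def initialize_conflict_deaths(last_10_years):
    """Return an initial conflict-death level from the last 10 annual observations."""
    n = len(last_10_years)
    weights = [i + 1 for i in range(n)]          # more recent years get more weight (illustrative)
    weighted_avg = sum(w * d for w, d in zip(weights, last_10_years)) / sum(weights)
    return weighted_avg / 2.0                     # halved, per the documentation

if __name__ == "__main__":
    deaths = [1200, 900, 400, 150, 800, 2500, 600, 300, 100, 50]  # hypothetical annual deaths
    print(round(initialize_conflict_deaths(deaths), 1))
```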
In the first year of the model, we need to make sure that the total deaths from violence match the total deaths from intentional injuries in the health model. Hence we normalize the total violence deaths to the total intentional-injury deaths. Please note that this normalization is optional (i.e., the user can activate it with the switch svvionormsw). The normalization is also activated if the user turns on the forward-linkage switch from the violence model to the health model, svtohlsw. For the normalization we first calculate the total deaths from intentional injuries in the health model; this term is called the AdjustedViolenceTerm. Next, we calculate the total deaths from the violence model and call this term SVTerm. The deaths from the violence model are then normalized to the deaths from the health model (the normalization shown for conflict deaths is applied in the same way to the other types of violence), where POP is the total population.

Shift factors are then calculated in the first year to chain the forecast values to the historical data.

Forecast Years

In the forecast years, estimated values are calculated using forecast equations for each type of violence. The forecast equations are summarized in Table 1 below. Each type of violence is calculated using this estimated value and the respective shift factor calculated in the first year of the model, and the multipliers on the death rates are applied, where:

- ConflictEst, HomicideEst, WomenandChilEst, PoliceEst and SelfHarmEst are the estimated levels of deaths calculated using the forecast equations.
- ConflictShift, HomicideShift, WomenandChilShift, PoliceShift and SelfHarmShift are the shift factors calculated in the first year of the model.

Table 1. Forecast equations for deaths from violence

| No | Function | R-squared | Independent variable | Coefficient | Constant |
| --- | --- | --- | --- | --- | --- |
| 1 | Conflict deaths computation | 0.5885 | Internal war magnitude | 0.5501 | 0.0991 |
| 2 | Police violence deaths computation | 0.1447 | Log of homicides | 0.25879 | -3.3145 |
| 2 | Police violence deaths computation | 0.1447 | Log of corruption | 0.28308 | -3.3145 |
| 3 | Interpersonal violence deaths computation | 0.21 | Youth bulge | 1.04344 | -10.5462 |
| 3 | Interpersonal violence deaths computation | 0.21 | GINI | 2.4341 | -10.5462 |

After this, the total number of deaths is calculated for each category. For this purpose, we first calculate the total populations of adult males, women and children from the population model as AdultMaleTerm, WomenTerm and ChildrenTerm respectively. Next, we calculate the total number of deaths for each of the categories and apply the additive parameters on total deaths (svdthsadd). After this stage, we calculate the total deaths from societal violence as a simple sum of each of the above categories. Because we have applied additive parameters, we then recalculate the total death rates using the total number of deaths from each category of violence and compute the total death rate from societal violence. Finally, the homicide index is calculated using each of the above categories except self-harm. The contribution of each term to the homicide index can be changed using the parameter svindexm; each term is set to a value of 1 in the Base Case.

Policy Equations: Government Expenditures

The fiscal model of IFs is quite simple and builds on the computation of government consumption (GOVCON) in the economic model.
IFs expenditures fall into six categories: military, health, education, research and development, other, and foreign aid. IFs divides total government consumption (GOVCON) among these destination sectors (GDS) with a vector of government spending coefficients (GK) based on initial conditions. The user can change that default pattern of government spending over time with a multiplier parameter (gdsm). The model normalizes the allocation to assure that the money spent is no more and no less than total government consumption.

The last category of spending complicates the allocation of spending to destination categories. It is traditional not to think of foreign aid in terms of its percentage of the governmental budget (as we often think of defense or educational expenditures), but in terms of a percentage of GDP. For instance, the United Nations has called for foreign aid spending equal to 0.7% (earlier 1.0%) of the GDP of donor countries. Moreover, for some governments, foreign aid is not an expenditure but a receipt and an addition to government revenues. Therefore IFs actually calculates foreign aid expenditures and receipts first and fixes those amounts (see the foreign aid equations). It then allocates the amount of government spending that remains in the coffers of aid donors (or the augmented amount available to aid recipients) among the other categories, normalizing the allocation to the sum of the coefficients in those other categories.

There are several forward linkages of government spending that are important. A mortality multiplier (MORTMG) is computed for the demographic model, using changes in health spending from the initial year and a parameter for the impact of that spending (elashc). Three of the forward linkages carry information on spending to the calculation of multifactor productivity in the economic production function, for additive rather than multiplicative use. One variable tracks change in education spending (CNGEDUC), modified by an elasticity of education on MFP (elmfped), and carries it forward. Another tracks changes in health spending (CNGHLTH) using a parameter (elmfphl). The third tracks changes in R&D spending with a parameter of impact (elmfprd). In each case there is a lag involved because of the computational sequence. Essentially because an older variable form for the education term is still used in the agricultural model's production function, the first of the three terms is transferred to that older variable (LEFMG).

Policy Equations: Foreign Aid

IFs uses a "pool" approach to aid (AID) rather than indicating bilateral flows from particular donors to particular recipients. That is, all aid from all donors flows into the pool and then all recipients draw proportions of the pool. IFs uses the aid value parameter (AIDDON) to calculate the aid (AID) from donors and AIDREC to calculate the targeted aid to recipients. The pool of aid donations determines the actual total level of interstate aid flows, however, and is allocated among potential recipients according to the proportions targeted for each. Aid outflows are negative, and the total aid pool given (AIDP) is the sum of the negative flows, while the total desired aid of recipients (AIDR) is the sum of the positive flows. A recomputation of aid for recipients distributes the aid pool across their demands.
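The pool approach just described can be illustrated with a short sketch. The country data and percentages below are hypothetical, and AIDDON/AIDREC are used only as labels for the donation and receipt targets.

```python
# Minimal sketch of the aid "pool" allocation described above. Country data and
# parameter values are hypothetical; AIDDON/AIDREC are used only as labels.

def allocate_aid(donors, recipients):
    """
    donors: dict of {country: (gdp, aiddon_percent_of_gdp)}
    recipients: dict of {country: (gdp, aidrec_percent_of_gdp)}
    Returns dict of aid flows (negative = outflow from donor, positive = receipt).
    """
    # Total pool given (AIDP): sum of donor contributions (outflows are negative)
    outflows = {c: -(gdp * pct / 100.0) for c, (gdp, pct) in donors.items()}
    pool = -sum(outflows.values())

    # Total desired aid of recipients (AIDR) and proportional redistribution of the pool
    targets = {c: gdp * pct / 100.0 for c, (gdp, pct) in recipients.items()}
    total_target = sum(targets.values())
    inflows = {c: pool * t / total_target for c, t in targets.items()}

    return {**outflows, **inflows}

if __name__ == "__main__":
    donors = {"DonorA": (20000.0, 0.7), "DonorB": (5000.0, 0.3)}      # GDP ($B), % of GDP given
    recipients = {"RecipX": (300.0, 2.0), "RecipY": (150.0, 4.0)}     # GDP ($B), % of GDP targeted
    for country, flow in allocate_aid(donors, recipients).items():
        print(country, round(flow, 2))
```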
https://pardeewiki.du.edu/index.php?title=Socio-Political
What are the formulas for unlevering and levering Beta?

Which is less expensive capital, debt or equity?

Debt is less expensive for two main reasons. First, interest on debt is tax deductible (i.e. the tax shield). Second, debt is senior to equity in a firm's capital structure. That is, in a liquidation or bankruptcy, the debt holders get paid first before the equity holders receive anything. Note, debt being less expensive capital is the equivalent of saying the cost of debt is lower than the cost of equity.

When using the CAPM for purposes of calculating WACC, why do you have to unlever and then relever Beta?

In order to use the CAPM to calculate our cost of equity, we need to estimate the appropriate Beta. We typically get the appropriate Beta from our comparable companies (often the mean or median Beta). However, before we can use this "industry" Beta we must first unlever the Beta of each of our comps. The Beta that we will get (say from Bloomberg or Barra) will be a levered Beta. Recall what Beta is: in simple terms, how risky a stock is relative to the market. Other things being equal, stocks of companies that have debt are somewhat more risky than stocks of companies without debt (or that have less debt). This is because even a small amount of debt increases the risk of bankruptcy and also because any obligation to pay interest represents funds that cannot be used for running and growing the business. In other words, debt reduces the flexibility of management, which makes owning equity in the company more risky. Now, in order to use the Betas of the comps to conclude an appropriate Beta for the company we are valuing, we must first strip out the impact of debt from the comps' Betas. This is known as unlevering Beta. After unlevering the Betas, we can now use the appropriate "industry" Beta (e.g. the mean of the comps' unlevered Betas) and relever it for the appropriate capital structure of the company being valued. After relevering, we can use the levered Beta in the CAPM formula to calculate cost of equity.

What is Beta?

Beta is a measure of the riskiness of a stock relative to the broader market (for broader market, think S&P 500, Wilshire 5000, etc.). By definition, the "market" has a Beta of one (1.0). So a stock with a Beta above 1 is perceived to be more risky than the market and a stock with a Beta of less than 1 is perceived to be less risky. For example, if the market is expected to outperform the risk-free rate by 10%, a stock with a Beta of 1.1 will be expected to outperform by 11% while a stock with a Beta of 0.9 will be expected to outperform by 9%. A stock with a Beta of -1.0 would be expected to underperform the risk-free rate by 10%. Beta is used in the capital asset pricing model (CAPM) for the purpose of calculating a company's cost of equity. For those few of you that remember your statistics and like precision, Beta is calculated as the covariance between a stock's return and the market return divided by the variance of the market return.

How do you calculate the cost of equity?

To calculate a company's cost of equity, we typically use the Capital Asset Pricing Model (CAPM). The CAPM formula states that the cost of equity equals the risk-free rate plus Beta times the equity risk premium. The risk-free rate (for a U.S. company) is generally considered to be the yield on a 10- or 20-year U.S. Treasury Bond.
Beta (see the preceding question on Beta) should be levered and represents the riskiness (equivalently, the expected return) of the company's equity relative to the overall equity markets. The equity risk premium is the amount that stocks are expected to outperform the risk-free rate over the long term. Prior to the credit crisis, most banks tended to use an equity risk premium of between 4% and 5%. However, today it is assumed that the equity risk premium is higher.

The WACC (Weighted Average Cost of Capital) is the discount rate used in a Discounted Cash Flow (DCF) analysis to present-value projected free cash flows and terminal value. Conceptually, the WACC represents the blended opportunity cost to lenders and investors of a company or set of assets with a similar risk profile. The WACC reflects the cost of each type of capital (debt ("D"), equity ("E") and preferred stock ("P")) weighted by the respective percentage of each type of capital assumed for the company's optimal capital structure. Specifically, the formula for WACC is: Cost of Equity (Ke) times % of Equity (E/(D+E+P)), plus Cost of Debt (Kd) times % of Debt (D/(D+E+P)) times (1 - tax rate), plus Cost of Preferred (Kp) times % of Preferred (P/(D+E+P)). To estimate the cost of equity, we typically use the Capital Asset Pricing Model ("CAPM") (see the preceding topic). To estimate the cost of debt, we can analyze the interest rates/yields on debt issued by similar companies. Similar to the cost of debt, estimating the cost of preferred requires us to analyze the dividend yields on preferred stock issued by similar companies.

In order to do a DCF analysis, first we need to project free cash flow for a period of time (say, five years). Free cash flow equals EBIT less taxes plus D&A less capital expenditures less the change in working capital. Note that this measure of free cash flow is unlevered or debt-free, because it does not include interest and so is independent of debt and capital structure. Next we need a way to estimate the value of the company/assets for the years beyond the projection period (five years). This is known as the Terminal Value. We can use one of two methods for calculating terminal value, either the Gordon Growth (also called Perpetuity Growth) method or the Terminal Multiple method. To use the Gordon Growth method, we must choose an appropriate rate at which the company can grow forever. This growth rate should be modest, for example, average long-term expected GDP growth or inflation. To calculate terminal value, we multiply the last year's free cash flow (year 5) by 1 plus the chosen growth rate, and then divide by the discount rate less the growth rate. The second method, the Terminal Multiple method, is the one that is more often used in banking. Here we take an operating metric for the last projected period (year 5) and multiply it by an appropriate valuation multiple. The most common metric to use is EBITDA. We typically select the appropriate EBITDA multiple by taking what we concluded from our comparable company analysis on a last-twelve-months (LTM) basis. Now that we have our projections of free cash flows and terminal value, we need to "present value" these at the appropriate discount rate, also known as the weighted average cost of capital (WACC). For discussion of calculating the WACC, see the previous topic. Finally, summing up the present value of the projected cash flows and the present value of the terminal value gives us the DCF value.
Note that because we used unlevered cash flows and WACC as our discount rate, the DCF value is a representation of Enterprise Value, not Equity Value.
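The mechanics discussed in this section can be tied together in a short script. This is an illustrative sketch only, with made-up inputs: it applies the commonly used Hamada formulas for unlevering and relevering Beta (one standard answer to the question posed at the top of this section), the CAPM, a simplified WACC weighting (preferred stock omitted), and a Gordon Growth terminal value.

```python
# Illustrative sketch: unlevering/relevering Beta (Hamada formulas), CAPM cost of
# equity, a simplified WACC, and a DCF with a Gordon Growth terminal value.
# All inputs are hypothetical.

def unlever_beta(levered_beta, debt, equity, tax_rate):
    """Strip the effect of leverage out of a comparable company's Beta."""
    return levered_beta / (1 + (1 - tax_rate) * debt / equity)

def relever_beta(unlevered_beta, debt, equity, tax_rate):
    """Relever the industry Beta at the target company's capital structure."""
    return unlevered_beta * (1 + (1 - tax_rate) * debt / equity)

def capm_cost_of_equity(risk_free, beta, equity_risk_premium):
    return risk_free + beta * equity_risk_premium

def wacc(cost_equity, cost_debt, tax_rate, equity, debt):
    # Preferred stock is omitted here for brevity; the section's formula includes it.
    total = equity + debt
    return cost_equity * equity / total + cost_debt * (1 - tax_rate) * debt / total

def dcf_value(free_cash_flows, discount_rate, terminal_growth):
    """PV of projected free cash flows plus a Gordon Growth terminal value."""
    pv_fcf = sum(fcf / (1 + discount_rate) ** t
                 for t, fcf in enumerate(free_cash_flows, start=1))
    terminal = free_cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv_terminal = terminal / (1 + discount_rate) ** len(free_cash_flows)
    return pv_fcf + pv_terminal

if __name__ == "__main__":
    # Hypothetical comparable: levered Beta 1.3, D/E of 0.5, 25% tax rate
    beta_u = unlever_beta(1.3, debt=50.0, equity=100.0, tax_rate=0.25)
    # Relever at the target's capital structure (D/E of 0.8)
    beta_l = relever_beta(beta_u, debt=80.0, equity=100.0, tax_rate=0.25)
    ke = capm_cost_of_equity(risk_free=0.04, beta=beta_l, equity_risk_premium=0.055)
    rate = wacc(cost_equity=ke, cost_debt=0.06, tax_rate=0.25, equity=100.0, debt=80.0)
    value = dcf_value([10.0, 11.0, 12.0, 13.0, 14.0], discount_rate=rate, terminal_growth=0.02)
    print(round(beta_u, 3), round(beta_l, 3), round(ke, 4), round(rate, 4), round(value, 1))
```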
http://www.ibankingfaq.com/category/interviewing-technical-questions/discounted-cash-flow-analysis/
Long Term Growth: Have the Facts Changed?

The difference between a capitalization rate, used to value a single sum, and a discount rate, used to value varying future benefit streams, is the expected long-term growth rate. While not the result of a scientific sampling, it is my impression that most valuators use historic inflation or historic growth in Gross Domestic Product as the basis for their projection. Many use both as support for their estimate. Short- to mid-term expectations are that both of those measures are going to be very different from what we have become accustomed to. The question is how much you should let recent developments impact your long-term growth estimate. And what do you do when those two indicators go in opposite directions?

Government spending as a percent of GDP for the last three years has been 24 percent. This is the highest level since World War II. Government spending is expected to grow as the demographics of this country age and more citizens are eligible for full Social Security and Medicare. Government spending is expected to grow as "Obamacare" is implemented over the next several years. As government spending makes up a larger percentage of GDP there is, historically, a drag on the growth that can be expected in the private sector. Barry Eichengreen (Exorbitant Privilege, Oxford University Press, 2010) estimated that without any changes to Social Security and Medicare, government spending will reach 40 percent of GDP within 25 years. This raises the question of whether the U.S. will be able to borrow the funds needed to finance the spending. At some point the perception of risk will begin to drive interest rates higher. Whether any, or all, of this means that expected growth in GDP will be less than the historical averages is a decision that each practitioner should evaluate. If you come to the conclusion that 1.8 percent annual growth in GDP is the new normal, then continuing to use the historic 3.8 percent will result in values that are too high.

The U.S. government has been borrowing in excess of $1 trillion per year for the last four years, and that borrowing has been used to finance consumption rather than production. To the extent that the holders of U.S. debt lose confidence in the fiscal policies of the country (and hence the perceived risk of the debt rises), interest rates will be driven up. The U.S. government has engaged in a program of refinancing debt, replacing long-term debt with shorter-term instruments. If the market begins to perceive higher risk, the impact on the U.S. economy will be fast and severe.

A general economic recovery is underway and could be expected to evolve into a very robust period of economic growth. This is based on the pent-up demand for housing, etc., and the huge volume of cash that has been sidelined in government securities as a safe harbor. Federal debt has grown from $5 trillion in 2000 to about $17 trillion in 2012, and there is no projection or expectation that debt will stop growing in the next several years. The most likely way to get out from under that kind of debt is to pay it back with cheaper dollars, that is, inflation. Historically, high inflation (in excess of historical averages) is a short-term event. Projections to the contrary are concerning but should be viewed in the context of history. In 1979 reported inflation was 13.3 percent. By 1982 the inflation rate had fallen back down to 3.4 percent.

The purpose of this article is not to make political calculations or commentary.
It is designed to cause practitioners to consider how they view long-term growth and to be prepared to explain and defend their estimate. When we enter a period of high inflation there will be pressure to increase long-term growth estimates to match those apparent realities. Alternatively, if the GDP growth rate remains below historical rates there will be pressure to lower long-term growth expectations. These opposite pressures will change the current environment, in which the expert can base long-term growth on both inflation and GDP growth. One of the roles of a financial expert is to understand the assumptions we are making, balance all the conflicting indications such as long-term growth, and be able to define, explain and defend those assumptions. It may get a lot more interesting and challenging in coming years.
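To illustrate the sensitivity the article describes, the following short sketch capitalizes a single benefit stream under the article's 3.8 percent and 1.8 percent growth assumptions. The benefit amount and discount rate are hypothetical.

```python
# Hypothetical illustration of how the long-term growth assumption drives value when
# capitalizing a single benefit stream: value = benefit / (discount rate - growth rate).

def capitalized_value(benefit, discount_rate, long_term_growth):
    cap_rate = discount_rate - long_term_growth
    return benefit / cap_rate

if __name__ == "__main__":
    benefit = 1_000_000      # next year's economic benefit, dollars (hypothetical)
    discount_rate = 0.18     # hypothetical discount rate
    for g in (0.038, 0.018): # historic vs. "new normal" GDP growth from the article
        print(f"g = {g:.1%}: value = ${capitalized_value(benefit, discount_rate, g):,.0f}")
```

Holding everything else constant, the higher historic growth rate produces the materially higher value, which is the point the article makes about continuing to use 3.8 percent when 1.8 percent is the better forecast.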
https://keitercpa.com/blog/view/long-term-growth-have-the-facts-changed-2/
All securities are valued on the basis of the cash inflows that they are expected to provide to their owners or investors. Value or current price should equal the present value (PV) of expected future cash flows, which represents the intrinsic value of the asset. The formula is also applied to the case of bond pricing. The cash flows from a typical bond are straightforward: the bond has a known and definite life, has fixed coupon payments paid on a regular basis, pays a known par value or principal when the bond matures, and should have a discount rate (yield to maturity) close to that of bonds with similar credit ratings. Although the principle for determining an appropriate stock price is the same as that for determining a bond price, equity does not offer the certainty of bond cash flows. Common and preferred stocks are generally assumed to have infinite lives. For common stock, relevant cash flows (dividend payments) will likely vary over time. Finally, determining an appropriate rate at which to discount future dividends is difficult. Despite these difficulties, in this section we will see that the present value of all future dividends should equal a stock's intrinsic value and that some simplifying assumptions can make the task of determining stock value easier.

It may seem rather strange to treat the stock price as nothing more than the present value of all future dividends. Who buys a stock with no intention of ever selling it, even after retirement? Investors generally buy stock with the intention of selling it at some future time ranging from a few hours to 30 years later or longer. Despite the length of any one investor's time horizon, the value of any dividend-paying common stock should equal the present value of all future dividends.

What if a corporation currently pays no dividends and has no plans to pay dividends in the foreseeable future? The value of this company's stock will not be zero. Here's why. First, just because the firm has no plans to pay dividends does not mean that it never will. To finance rapid growth, young firms often retain all their earnings; when they mature, they often pay a portion of earnings as dividends. Second, although the firm may not pay dividends to shareholders, it may be generating cash (or have the potential to do so). Someone buying the entire firm can claim the cash or profits, so its current price should reflect this value. Third, the firm's stock should be worth, at the least, the per-share liquidation value of its assets. But for a going concern, which is our focus in this chapter, the firm is worth the discounted cash flow value that can be captured by an acquirer.

Estimating all future dividend payments is impractical. Matters can be simplified considerably if we assume that the firm's dividends will remain constant or will grow at a constant rate over time.

Valuing Stocks with Constant Dividends

If the firm's dividends are expected to remain constant, so that D0 = D1 = D2 …, we can treat its stock as a perpetuity. A perpetuity is an annuity that never ends! It keeps going and going, paying cash flows on a regular basis throughout time. That is, if you purchase a perpetuity, you are buying a cash flow stream that you will receive for the rest of your life… which can be passed on to your children, your grandchildren, and so forth, as long as the payer is in business and financially able to make the payments. Is such a concept practical? Do perpetuities exist?
In the past, some governments (e.g., those in the UK) have issued perpetuity bonds that will pay interest as long as the government stands. Preferred stock pays a fixed annual dividend forever (that is, as long as the firm exists and can pay the dividend), another example of a perpetuity. The present value of a perpetuity is the cash flow divided by the discount rate. For stocks with constant dividends, this means the equation becomes the following:

P0 = D0 / rs

Many preferred stocks are valued using this equation, since preferred stocks typically pay a constant dollar dividend and do not usually have finite lives or maturities. For example, if the FY Corporation's preferred stock currently pays a $2.00 dividend and investors require a 10 percent rate of return on preferred stocks of similar risk, the preferred stock's present value is the following:

$2.00 / 0.10 = $20.00

For a preferred stock with no stated maturity and a constant dividend, changes in price will occur only if the rate of return expected by investors changes.

Valuing Stocks with Constant Dividend Growth Rates

Many firms have sales and earnings that increase over time; their dividends may rise as well. If we assume that a firm's dividends grow at an annual rate of g percent, next year's dividend, D1, will be D0(1 + g); the dividend in two years' time will be D0(1 + g)^2. As long as the dividend growth rate g is less than the discount rate rs, each future term will be smaller than the preceding term. Although technically there are an infinite number of terms, the present value of dividends received farther into the future will move closer to zero. By accepting that the sum of all these terms is finite, the equation becomes the following:

P0 = D1 / (rs - g)

Here D1 equals next year's dividend, namely D0(1 + g), the current dividend increased by the constant growth rate g. This result is known as the Gordon model, or the constant dividend growth model. The model assumes that a dividend is currently being paid, and that this dividend will grow or increase at a constant rate over time. Of course, the assumption of constant growth in dividends may be unrealistic for a firm that is experiencing a period of high growth (or negative growth; that is, declining revenues). Neither will constant dividend growth be a workable assumption for a firm whose dividends rise and fall over the business cycle.

Let's assume the cash dividend per share for XYZ Company for last year was $1.89 and is expected to be $2.05 at the end of this year. This represents a percentage increase of 8.5 percent [($2.05 - $1.89)/$1.89]. If investors expect a 12 percent rate of return, then the estimated current stock value (P0) would be the following:

P0 = $1.89(1.085)/(0.12 - 0.085) = $2.05/0.035 = $58.59

Thus, if investors believed that the cash dividends would grow at an 8.5 percent rate indefinitely into the future and expected a 12 percent rate of return, they would pay $58.59 for the stock.

Risk in Stock Valuation

Investors in common stocks face a number of risks that bondholders do not. This additional risk leads them to require a higher rate of return on a firm's stock than on its debt securities. For example, in the event of corporate failure, the claims of stockholders have lower priority than those of bondholders, so stockholders face a greater risk of loss than bondholders. Dividends can be variable and can be omitted, whereas there is a legal obligation to meet bond cash flows.
Poor ethical decisions and poor management are another source of risk for stock investors, in that such decisions can lower future cash flows and raise the required rate of return demanded by future investors. Accounting gimmickry and decisions by self-serving managers can hurt stock prices, as happened with Enron, WorldCom, and Tyco. Poor customer or supplier relations, allegations of poor-quality products, and poor communications can do the same, as occurred between Ford Motor Company and one of its tire suppliers, Firestone, a situation that hurt both companies and their shareholders. Volkswagen's admission of "fixing" emissions tests on its diesel car engines led to declines of over 35 percent in its stock price in a matter of a few days.

If the general level of interest rates rises, investors will demand a higher rate of return on stocks to maintain their risk premium differential over debt securities. This will force stock prices downward. Therefore, stockholders risk losses from any general upward movement of market interest rates. Also, future dividends, or dividend growth rates, are not known with certainty at the time stock is purchased. If poor corporate performance or adverse general economic conditions lead investors to lower their expectations about future dividend payments, this will lower the present value of shares of the stock, leaving the stockholder with the risk of capital loss. Stock analysts systematically review economic, industry, and firm conditions in great detail to gain insight into corporate growth prospects and the appropriate level of return that an investor should require of a stock.

Valuation and the Financial Environment

The price of an asset is the present value of future cash flows; the discount rate used in the present value calculation is the required rate of return on the investment. Future cash flows of firms and the required returns of investors are affected by the global and domestic economic environments and the competition faced by firms. Slower sales or higher expenses can harm a firm's ability to pay its bond interest or dividends or to reinvest in its future growth. Besides affecting cash flows, these factors can affect investors' required rates of return by increasing risk premiums or credit spreads. Inflation pressures and capital market changes influence the level of interest rates and required returns.

Global Economic Influences on Stock Valuation

Two main overseas influences will affect firms. The first is the condition of overseas economies. Growth in foreign economies will increase the demand for U.S. exports. Similarly, sluggish foreign demand will harm overseas sales and hurt the financial position of firms doing business overseas. The rate of economic growth overseas can affect the conditions faced by domestic firms, too, as growing demand globally may make it easier to raise prices and sluggish demand overseas may lead to intense competition in the U.S. market. The second influence is the behavior of exchange rates, the price of a currency in terms of another currency. A change in exchange rates over time has two effects on the firm. Changing exchange rates lead to higher or lower U.S. dollar cash flows from overseas sales, more competitively priced import goods, or changing input costs. Thus, changing exchange rates affect profitability by influencing sales, price competition, and expenses. Changing exchange rates also affect the level of domestic interest rates. Expectations of a weaker U.S. dollar can lead to higher U.S. interest rates; to attract capital, U.S.
rates will have to rise to compensate foreign investors for expected currency losses because of the weaker dollar. Conversely, a stronger dollar can result in lower U.S. interest rates.

Domestic Economic Influences on Stock Valuation

Individuals can spend only what they have (income and savings) or what their future earning capacity will allow them to borrow. Consumption spending (spending by individuals for items such as food, cars, clothes, computers, and so forth) comprises about two-thirds of gross domestic product (GDP) in the United States. Generally, higher disposable incomes (that is, income after taxes) lead to higher levels of consumer spending. Higher levels of spending mean inventories are reduced and companies need to produce more and hire additional workers to meet sales demand. Corporations will spend to obtain supplies and workers based upon expectations of future demand. Similarly, they will invest in additional plant and equipment based on expected future sales and income. Economic growth results in higher levels of consumer spending and corporate investment, which in turn stimulates job growth and additional demand. Slow or negative growth can lead to layoffs, pessimistic expectations, and reduced consumer and corporate spending. These effects will directly influence company profits and cash flows.

Economic conditions affect required returns, too. Investors will be more optimistic in good economic times and more willing to accept lower risk premiums on bond and stock investments. In poor economic times, credit spreads will rise as investors want to place their funds in safer investments. Governments shape the domestic economy through fiscal policy (government spending and taxation decisions) and monetary policy. These decisions may affect consumer disposable income (fiscal policy) and the level of interest rates as well as inflation expectations (monetary policy) and, therefore, affect the valuation of the bond and stock markets.

Some industry sectors are sensitive to changes in consumer spending. Sales by auto manufacturers, computer firms, and other manufacturers of high-priced items will rise and fall by greater amounts over the business cycle than will those of food or pharmaceutical firms. Changes in interest rates affect some industries more than others, too; banks and the housing industry (and sellers of large household appliances) are more sensitive to changes in interest rates than, say, book and music publishers or restaurants.

Influence of Industry Competition on Stock Valuation

A firm's profits are determined by its sales revenues, expenses, and taxes. We have mentioned taxes and some influences on sales and expenses in our discussion of the global and domestic economies, but industry competition and the firm's position within the industry will have a large impact on its ability to generate profits over time. Tight competition means it will be difficult to raise prices to increase sales revenue or profitability. Nonprice forms of competition, such as customer service, product innovation, and the use of technology to the fullest extent in the manufacturing and sales processes, may hurt profits by increasing expenses if the features do not generate sufficient sales. Competition may not come only from similar firms; for example, a variety of "entertainment" firms, from music to theater to movies to sports teams, vie for consumers' dollars. Trucking firms and railroads compete for freight transportation.
Cable and satellite firms compete in the home television markets (and for Internet service, along with telephone service providers). Changes in the cost and availability of raw materials, labor, and energy can adversely affect a firm’s competitive place in the market. The influences of competition and supply ultimately affect a firm’s profitability and investors’ perceptions of the firm’s risk. This, in turn, will affect its bond and stock prices. The most attractive firms for investing in will be those with a competitive advantage over their rivals. They may offer a high-quality product, be the low-cost producer, be innovators in the use of technology, or offer the best customer support. Whatever the source of the firm’s advantage is, if it can build and maintain this advantage over time it will reap above-normal profits and be an attractive investment.
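The two dividend-based valuation formulas discussed earlier in this section can be reproduced in a short sketch using the FY Corporation and XYZ Company figures from the text. This is an illustration of the formulas only, not investment guidance.

```python
# Illustrative sketch of the two dividend-based valuation formulas discussed above,
# using the FY Corporation and XYZ Company figures from the text.

def constant_dividend_value(dividend, required_return):
    """Perpetuity: P0 = D0 / rs (e.g., preferred stock with a constant dividend)."""
    return dividend / required_return

def gordon_growth_value(last_dividend, growth_rate, required_return):
    """Constant dividend growth (Gordon model): P0 = D1 / (rs - g)."""
    next_dividend = last_dividend * (1 + growth_rate)
    return next_dividend / (required_return - growth_rate)

if __name__ == "__main__":
    # FY Corporation preferred stock: $2.00 dividend, 10% required return -> $20.00
    print(round(constant_dividend_value(2.00, 0.10), 2))
    # XYZ Company common stock: $1.89 last dividend, 8.5% growth, 12% required return -> about $58.59
    print(round(gordon_growth_value(1.89, 0.085, 0.12), 2))
```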
https://highincomesource.com/how-to-calculate-the-value-of-a-stock/
Black Scholes equation

What is the use of the Black Scholes equation?

Also called Black-Scholes-Merton, it was the first widely used model for option pricing. It's used to calculate the theoretical value of options using current stock prices, expected dividends, the option's strike price, expected interest rates, time to expiration and expected volatility.

What is the Black Scholes option pricing model?

Definition: Black-Scholes is a pricing model used to determine the fair price or theoretical value for a call or a put option based on six variables: volatility, type of option, underlying stock price, time, strike price, and risk-free rate.

How accurate is the Black Scholes model?

Regardless of which curved line is considered, the Black-Scholes method is not an accurate way of modeling the real data. While the lines follow the overall trend of an increase in option value over the 240 trading days, neither one predicts the changes in volatility at certain points in time.

Is Black Scholes risk neutral?

From the partial differential equation in the model, known as the Black-Scholes equation, one can deduce the Black-Scholes formula, which gives a theoretical estimate of the price of European-style options and shows that the option has a unique price regardless of the risk of the security and its expected return.

What interest rate is used in Black Scholes?

For a standard option pricing model like Black-Scholes, the risk-free one-year Treasury rate is used. It is important to note that changes in interest rates are infrequent and of small magnitude (usually in increments of 0.25%, or 25 basis points only).

How is an option price calculated?

Options prices, known as premiums, are composed of the sum of intrinsic and time value. Intrinsic value is the price difference between the current stock price and the strike price. An option's time value, or extrinsic value, is the amount of premium above its intrinsic value.

What is d1 in the Black Scholes formula?

N(d1) is the factor by which the discounted expected value of contingent receipt of the stock exceeds the current value of the stock. By putting together the values of the two components of the option payoff, we get the Black-Scholes formula: C = S N(d1) − e^(−rτ) X N(d2).

How is a call price calculated?

Calculate the call price by calculating the cost of the option. The bond has a par value of $1,000 and a current market price of $1,050. This is the price the company would pay to bondholders. The difference between the market price of the bond and the par value is the price of the call option, in this case $50.

What is volatility in the Black Scholes model?

Implied volatility is an estimate of the future variability of the asset underlying the options contract. The inputs for the Black-Scholes equation are volatility, the price of the underlying asset, the strike price of the option, the time until expiration of the option, and the risk-free interest rate.
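As a concrete illustration of the call-price formula quoted above, here is a minimal sketch of the standard closed-form Black-Scholes value for a European call. The inputs are hypothetical.

```python
import math

# Minimal sketch of the Black-Scholes price for a European call:
# C = S*N(d1) - X*exp(-r*tau)*N(d2), matching the formula quoted above.
# All inputs are hypothetical.

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, X, r, sigma, tau):
    """S: spot price, X: strike, r: risk-free rate, sigma: volatility, tau: time to expiry (years)."""
    d1 = (math.log(S / X) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * norm_cdf(d1) - X * math.exp(-r * tau) * norm_cdf(d2)

if __name__ == "__main__":
    print(round(black_scholes_call(S=100.0, X=105.0, r=0.03, sigma=0.25, tau=0.5), 2))
```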
https://estebantorreshighschool.com/faq-about-equations/black-scholes-equation.html
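The formula quoted above can be checked numerically. Below is a minimal Python sketch of Black-Scholes pricing for a European option, assuming no dividends and using the standard definitions of d1 and d2; the spot price, strike, rate, volatility and maturity in the example are illustrative values, not figures from the source.

```python
# Minimal Black-Scholes sketch (European option, no dividends).
# Illustrative only; the inputs below are assumptions, not values from the source text.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes(S: float, X: float, r: float, sigma: float, tau: float, kind: str = "call") -> float:
    """Theoretical option value from spot S, strike X, risk-free rate r,
    volatility sigma and time to expiration tau (in years)."""
    d1 = (log(S / X) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    if kind == "call":
        return S * norm_cdf(d1) - X * exp(-r * tau) * norm_cdf(d2)
    return X * exp(-r * tau) * norm_cdf(-d2) - S * norm_cdf(-d1)  # put

# Example: at-the-money one-year call, 5% rate, 20% volatility -> about 10.45
print(round(black_scholes(100, 100, 0.05, 0.20, 1.0), 2))
```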
In appraisal proceedings and other disputes relating to valuation, the Delaware courts prefer the discounted cash flow (DCF) method if deal price cannot be trusted to indicate fair value. Under the DCF method, the usual approach is to calculate value year-by-year for the coming five years (the projection period) and to use projected average cash flow to calculate value for the period thereafter (the terminal period). Since cash flow differs from GAAP earnings primarily by netting out funds reinvested in the firm (plowback), future returns can be expected to grow. Thus, one must adjust for expected growth during the terminal period. The standard practice is to reduce the discount rate by the projected inflation rate plus the projected GDP growth rate (or often half thereof) -- since a firm must keep up with inflation (lest it disappear over time) and since economic growth comes from returns generated by business in the aggregate. Needless to say, projections of economic growth are speculative at best. So the use of any such prognostication in the context of an appraisal proceeding should be inherently suspect. Moreover, at a discount rate of 10% – about the average market rate currently used by the courts – terminal value amounts to more than sixty percent of total value. And to reduce the applicable discount rate from 10% to 7% (for example) increases the value of each dollar of terminal period return from about $6.21 to about $10.19. In other words, the effect of adjusting the discount rate for growth can be quite dramatic. The temptation for manipulation is obvious. But there is no need to adjust the discount rate at all if plowback generates return at the same rate ordinarily required of the firm. If so, growth in value is equal to plowback. Thus, it would be far simpler and much less speculative to use projected GAAP earnings as the measure of return. To use cash flow together with an adjusted discount rate is akin to making Maraschino cherries – which are first soaked in lye to remove color and flavor and then soaked in food coloring and sugar to put it back. The problem (if it is a problem) is that to use projected GAAP earnings rather than cash flow with a growth-adjusted discount rate is to presume that long-term growth in firm value is in fact limited to growth from plowback. But there is good reason to think that this is true since opportunities to generate above-normal returns (economic rents) are likely to dissipate quickly because of competition. Still, it is possible that firms do grow by more than can be explained by plowback. The data presented in this piece suggests that indeed growth comes wholly from plowback together with reinvestment of dividends. For the period since 1930, growth in the value of the S&P 500 can be fully explained by plowback (GAAP earnings less dividends) together with reinvestment by investors. While plowback has been just enough to match inflation, remaining growth in stock prices is slightly less than would be expected from dividend reinvestment (which is consistent with diversion by investors of some portion to consumption). The data since 2000 is somewhat different in that plowback has been less than inflation, but stock prices have nonetheless increased consistent with reinvestment. The bottom line is that stock prices seem to grow slightly more than the real GDP growth rate but a bit less than plowback plus the likely reinvestment rate.
It follows that there is no reason for stockholders to expect any more growth than can be generated through plowback, and thus no need for courts to struggle with estimating growth rates. By using projected GAAP earnings as the measure of average long-term return, the courts can use an unadjusted discount rate to calculate terminal value. This post is based on the paper ‘Appraisal Rights and Economic Growth’. Richard A. Booth is the Martin G. McGuinn Professor of Business Law at Villanova University — Widger School of Law.
https://www.law.ox.ac.uk/business-law-blog/blog/2018/01/appraisal-rights-and-economic-growth
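The arithmetic behind the $6.21 and $10.19 figures in the post above can be reproduced directly: each is today's value of $1 per year received in perpetuity starting after a five-year projection period, discounted at 10% or 7% respectively. A small sketch:

```python
# Present value today of $1/year in perpetuity that begins after a five-year
# projection period, at the two discount rates discussed in the post.
def pv_of_terminal_dollar(rate: float, projection_years: int = 5) -> float:
    perpetuity_value = 1.0 / rate                               # value at the end of year 5
    return perpetuity_value / (1.0 + rate) ** projection_years  # discount back to today

for rate in (0.10, 0.07):
    print(f"{rate:.0%}: ${pv_of_terminal_dollar(rate):.2f}")
# prints roughly: 10%: $6.21 and 7%: $10.19
```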
The Gordon Growth Model, also known as the dividend discount model, measures the value of a publicly traded stock by summing the values of all of its expected future dividend payments, discounted back to their present values. It essentially values a stock based on the net present value (NPV) of its expected future dividends.

Gordon Growth Model: stock price = (dividend payment in the next period) / (cost of equity - dividend growth rate)

The main advantage of the Gordon Growth Model is that it is the most commonly used model to calculate share price and is therefore the easiest to understand. It values a company's stock without taking into account market conditions, so it is easier to make comparisons across companies of different sizes and in different industries. There are many disadvantages to the Gordon Growth Model. It does not take into account nondividend factors such as brand loyalty, customer retention and the ownership of intangible assets, all of which increase the value of a company. The Gordon Growth Model also relies heavily on the assumption that a company's dividend growth rate is stable and known. If a stock does not pay a current dividend, as is the case with many growth stocks, an even more general version of the Gordon Growth Model must be used, with an even greater reliance on assumptions. The model also makes a company's stock price hypersensitive to the dividend growth rate chosen, and it assumes the growth rate cannot exceed the cost of equity, which may not always be true. There are two types of Gordon Growth Models: the stable growth model and the multistage growth model.
https://www.investopedia.com/ask/answers/032415/what-are-advantages-and-disadvantages-gordon-growth-model.asp
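The formula above is simple enough to state in a few lines of code. This is a minimal sketch only; the dividend, cost of equity and growth rate in the example are illustrative assumptions, and the guard clause reflects the constraint noted above that the growth rate cannot exceed the cost of equity.

```python
# Gordon Growth Model: price = next period's dividend / (cost of equity - growth rate).
def gordon_growth_price(next_dividend: float, cost_of_equity: float, growth_rate: float) -> float:
    if growth_rate >= cost_of_equity:
        raise ValueError("dividend growth rate must be below the cost of equity")
    return next_dividend / (cost_of_equity - growth_rate)

# Example: $2.00 expected dividend, 8% cost of equity, 3% stable growth -> $40.00
print(gordon_growth_price(2.00, 0.08, 0.03))
```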
When you hear the phrase genome engineering, your mind might instinctively go to a place of science fiction, conjuring up images of augmented superhumans like Khan from Star Trek. You might think of real-life history, such as the horrors of the eugenics movement. Or you might gravitate toward the present-day debates over ideas such as cloning and GMOs. Genome engineering is a wide-ranging topic encompassing many schools of thought, and it has continually fascinated scientists and creatives alike. Yet despite this, it is also often something that many people know very little about and often fear due to misconceptions and generalizations. Over the past few years, there has been something of a renaissance in genome engineering technology and research, which has led to increased awareness and renewed interest from the public. At the forefront of this are organizations such as the Genome Writers Guild (GWG), a "group of stakeholders interested in promoting the safe, effective and ethical use of gene delivery and gene editing technologies for society." While the GWG hopes to promote ethical research into genome engineering technology, it is also making a concerted effort to engage the public—challenging people's misconceptions and building a greater understanding of the benefits that these technologies can present to our world. From July 19 through July 21, 2018, the GWG is hosting its second annual professional conference at the University of Minnesota. On the first day of the conference, the group will open up to the public from 7:00 to 8:30 p.m. at the McNamara Alumni Center for an event dubbed the Science Café, in which scientists, professors, and artists will deliver short talks and encourage questions and discussions on current theories and research related to genome engineering. Dr. David Largaespada, president and cofounder of the Genome Writers Guild, hopes this event will engage individuals—particularly young people—by piquing curiosity and cultivating a sense of excitement for the field. "Our mission is to engage the public, effectively describe new and ongoing genome engineering technologies, listen to concerns, and help drive policy," he explained to Twin Cities Geek. While the CRISPR/Cas9 technology has shown immense potential—it has arguably revolutionized the field and paved the way for a great deal of progress—the use of this tool has also been met with some controversy. An ongoing intellectual property dispute over the initial patenting of CRISPR technology has led to court cases since 2016, and the low cost of producing and distributing this technology has inadvertently led to a market of mail-order gene editing kits, giving rise to "garage scientists" and DIY-inspired genome editing projects that many experts argue may pose legitimate risks to society. On top of that, some scientists have challenged the effectiveness and safety of the technology as a whole. A study published in Nature last year argued that the use of CRISPR may be associated with unexpected genetic mutations, but this was subsequently criticized and then challenged in another study and ultimately retracted from the journal altogether. A more recent study in the journal Nature Biotechnology has argued that cells edited through CRISPR technology struggle to repair themselves after the process, thus causing significant genetic damage. Dr.
Largaespada and his colleagues understand and recognize the validity of these concerns but also hope that these setbacks will not stand in the way of all of the positive developments that have been coming out of the field. Largaespada believes that these issues are “still important to keep talking about” and hopes to continue having “an ongoing dialog with all stakeholders including the general public.” Effectively, the members of the Genome Writers Guild believe that it is continued engagement, discussion, and a greater general understanding of the concepts and technologies that will help the field progress meaningfully. This is why they hope to see a large turnout at their conference this weekend and especially at the public Science Café event—for which you can register here.
http://twincitiesgeek.com/2018/07/scientists-gather-in-minnesota-to-foster-understanding-of-genome-engineering-technology/
The Advocacy and Outreach Programme conducts advocacy by engaging with national, regional and international stakeholders to influence policies and ensure that they embody principles of human rights. Through these engagements, we strive to ensure that the concerns of grassroots stakeholders are represented during the decision making processes of domestic, regional and international institutions and actors and that Libyan civil society has access to these institutions and can influence them. We aim to be active, self reflective participants in the development of a Libya which embodies the values and principles of human rights, justice and the rule of law by supporting the development of a stronger civil society in Libya, engaged in a meaningful way. At the international and regional levels, we advocate before institutions such as the United Nations (UN) Human Rights Council, the African Commission on Human and Peoples’ Rights and the Assembly of States Parties to the International Criminal Court. We inform key decision-makers of the human rights situation in Libya and shed light on issues that require particular attention through meetings and side events. At the national level, we encourage the state to develop effective policy decisions to implement Libya’s human rights commitments. We work to effect change to draft legislation dealing with human rights and accountability concerns by addressing weaknesses and gaps in Libyan law, while promoting compliance with international human rights standards. As such, the Advocacy and Outreach Programme corresponds with members of the Constitutional Drafting Assembly (CDA) to encourage the improvement of human rights standards within the current draft constitution and the consideration of the concerns of all the Libyan public within their deliberations. In complementarity with our advocacy work, we carry out outreach campaigns and activities aiming to involve grassroots actors in the legal and political decision making processes shaping Libyan society. Our outreach activities, including public service announcements (PSAs), podcasts, public consultations and events, seek to engage the public and civil society on key issues and to initiate an open dialogue with grassroots actors in an inclusive and accessible manner. Through these activities, we aim to build a deeper understanding and culture of human rights by raising public awareness of key issues currently affecting human rights in Libya and priorities that need to be addressed to facilitate the development of a Libya which embodies the values and principles of human rights, justice and the rule of law. In turn, the priority concerns voiced by Libyan stakeholders inform our advocacy efforts to influence political and decision making processes. As well as undertaking its own advocacy, the Advocacy and Outreach Programme works with national NGOs to engage with international and regional human rights mechanisms and pursue joint advocacy targets. To that end, LFJL brought together the Coalition of Libyan Human Rights Organisations (the Coalition), a diverse network of Libyan civil society organisations from different geographical areas and working on a wide range of human rights issues. Together we monitor, document and report on the human rights situation from the perspective of the Coalition organisations’ diverse expertise. 
We then develop joint strategies for outreach and advocacy through events, campaigns and engagement with human rights mechanisms, based on the priority concerns of diverse groups of Libyan grassroots stakeholders. Through our long-term, holistic support of the Coalition, we intend to encourage, support and expand the engagement of Libyan civil society with human rights mechanisms and to strengthen the resilience of civil society in Libya. The Coalition is progressively expanding and is now formed of 11 organisations that cover issues including access to nationality and conditions in migrant detention facilities. The media have a massive impact in setting the tone in specific situations and have the potential to support democracy efforts and promote sustainable peace. Their coverage has direct consequences for how public opinion perceives and understands a certain issue, in both a positive and a negative sense. Very often, distortion of facts does not happen willingly, but because of the media agenda and the priorities of the news cycle. This often results in crucial aspects of a conflict, such as its impact on civilian lives and rights, being overlooked. This is particularly worrying when applied to a context like Libya, with an ongoing conflict, an inconsistent approach by international actors, a divided population and a polarised national media environment. LFJL decided to challenge this narrative and bring a human rights perspective to the table.

LIBYA MATTERS: A NEW PERSPECTIVE ON LIBYA
Hosted by Elham Saudi and Marwa Mohamed, LFJL's Head of Advocacy and Outreach, every week Libya Matters will focus on an overlooked and neglected aspect of the Libya story. In a casual conversation intended to bring a candid insight, guest experts explore issues of justice, human rights, the rule of law and much more. Libya Matters aims to challenge the mainstream coverage of Libya and focus on under-reported parts of the Libyan story.

LFJL's UPRna initiative brought together and trained the Coalition members to engage with the UN human rights mechanism of the Universal Periodic Review (UPR), marking the first engagement of Libyan civil society in Libya's UPR. In doing so, the Coalition expressed human rights concerns in relation to their fields of expertise and underlined shortcomings by Libya in implementing past UPR recommendations. The Coalition advocated for recommendations to be made by UN Member States to Libya during joint advocacy training and missions to Geneva, Switzerland, and made suggestions to overcome existing human rights challenges. As a result of these joint efforts, the majority of the recommendations submitted by UN Member States and accepted during Libya's UPR reflected concerns raised by the Coalition. Following the adoption of UPR recommendations, the Coalition has entered the stage of monitoring the state's compliance with, and implementation of, the recommendations it accepted, in order to engage in Libya's next UPR. Alongside its engagement in Libya's UPR, the Coalition encourages other civil society organisations to become involved in such efforts, and generates public awareness of Libya's human rights obligations through creative and accessible media, such as PSAs and podcasts. Our Destoori project aims to raise Libyans' awareness of the constitution-making process, to gather public opinion, and to create a connection and sense of ownership between the Libyan people and their constitution.
In order to realise the aspiration of a lasting constitution that defends and is in turn defended by all Libyans, we seek to foster engagement from the Libyan public in the constitutional draft to ensure that all stakeholders are represented in the process and invested in the outcome. Based on our outreach activities with the Libyan public, we advocate to national decision makers, including the CDA, by sharing our legal analysis of the constitutional draft to ensure that it protects the human rights of all people in Libya and reflects their aspirations. Under the Destoori project, we engage with grassroots stakeholders to encourage their participation in the drafting of Libya's law and constitution through awareness campaigns. Outreach initiatives include the Rehlat Watan ('Journey of a Nation') constitutional tour, during which the Destoori team travelled across Libya to engage with over 3,000 people from 37 different communities. Through interactive activities, Q&A sessions and surveys, we discussed sensitive issues with thousands of Libyans, including the constitutional process and the rights that they would like to see protected in their constitution. The findings from Rehlat Watan and the views and aspirations of the population formed the basis of LFJL's Destoori Report and Recommendations, and informed our advocacy before the Constitutional Drafting Assembly. We seek to find accessible, creative ways of involving diverse audiences in constitutional discussions, through interactive projects such as graffiti competitions, songwriting and educational videos.

In April 2018, we attended the 62nd Ordinary Session of the African Commission on Human and Peoples' Rights (ACHPR) to raise awareness of the issues of enforced disappearance and the dire situation of migrants in Libya. Participating in a side event entitled "Enforced Disappearances in Africa: the fight for truth and justice", LFJL presented the current state of affairs with regard to enforced disappearances in Libya and made suggestions on the role the ACHPR should play in assisting victims and their families in their search for truth and justice. LFJL also met with key ACHPR Commissioners and advocated for the ACHPR to call upon the Libyan state to respect its international human rights obligations towards migrants. We highlighted the importance of promptly taking measures to end the situation of migrants in Libya, including by ending the practice of automatic detention of migrants and the culture of discrimination against them. Our full recommendations published ahead of the mission are available here.

In December 2017, LFJL attended the 16th session of the Assembly of States Parties to the Rome Statute (ASP). During this mission, LFJL pushed for stronger accountability for human rights violations and abuses in Libya, including those against migrants, before states and representatives of the International Criminal Court (ICC) (read LFJL's recommendations to the ASP here). We held a side event entitled The Importance of Deterrence and the ICC's Role in Current Violations, during which the panel addressed the crucial role that the ICC can play in the pursuit of deterrence and the current challenges to achieving justice in Libya. LFJL insisted on the need for the ICC to restore its relationship with the people on the ground in Libya, including civil society, as a priority.
We further called on the ICC to investigate the crimes of trafficking and slavery in Libya and to adopt sanctions against the perpetrators of such crimes. In September 2017, we attended the 36th regular session of the United Nations Human Rights Council, along with members of the Coalition, to advocate for stronger human rights protections for migrants and IDPs in Libya. The Coalition held a side event entitled Fortress Europe: Threatening the Human Rights of Migrants which aimed to shed light on the role of the European Union and its member states’ migration policies in the human rights crisis facing migrants. Additionally, the Coalition took part in the Interactive Dialogue on the High Commissioner’s Oral Update on Libya to deliver two oral statements calling for the protection of the human rights of migrants and of freedom of expression. The Research and Capacity Building Programme undertakes activities that aim to identify new opportunities for participation, to share our understanding of human rights issues and to address the knowledge deficit around Libya. The Advocacy and Outreach Programme ensures that core human rights concerns of grassroots stakeholders are a key consideration during the decision making processes of domestic, regional and international institutions and actors.
https://www.libyanjustice.org/en/programmes/advocacy-and-outreach
What is the class about?
Emerging technologies such as AI, Big Data, and the Internet of Everything are going to reshape every business and be part of the foundation for most future jobs. The CT 100 Future Technology Innovations course is an introductory class that seeks to engage students in hands-on learning as well as informative and critical discussions about future trends in technology — providing students with an early understanding of what these technologies are about and what their impact will be on work and society.

Could you describe the "light bulb moment" that led your department to create this course?
Technology has greatly impacted the way we live. Advancements in technology over the last century have made both positive and negative impacts on the ways we work and communicate. Society has become heavily dependent upon technology as new and better technologies emerge, and this dependence has continued to increase as a result of the Covid-19 pandemic. With this increased reliance and notoriety come misconceptions and fears about emerging technology trends and how they will affect future generations. This was the "light-bulb" moment for the whole department, as all of us are aware that we must do better in ensuring that people understand the benefits and consequences of these technology innovations. As a result of this awareness, through discussions and brainstorming sessions, our department director, Dr. Dennis Trinkle, and professors Dr. Edward Lazaros and Dr. David Hua created this course. My approach is to provide an engaging and informative class environment where our students can learn about various technology innovations and how these technologies will impact society.

What is your teaching philosophy?
I utilize a multi-modal teaching approach that is structured to create open, informative and critical discussions. I am still relatively new to teaching, but I have noticed that students are much more engaged when you try to find ways to have open and honest discussions where they feel like they positively contributed to the conversation.

What teaching methodology will you use for this class?
As a way to address various learning styles, I will use a combination of electronic media, lectures, critical discussions, and reflections in the hopes of engaging the students while addressing the misconceptions about future technology innovations. Students will also get hands-on learning experience with the technologies studied in class.

What are your learning objectives?
At the conclusion of the course, students will be able to:
- Think critically about future technology scenarios, analyzing both the scenarios themselves and their positive and negative implications.
- Explain each of the following technologies, why they matter, and project what their impact might be:
  - Artificial Intelligence (AI) & Machine Learning
  - Big Data, Data Analytics, Data Visualization
  - Augmented & Virtual Reality
  - Internet of Everything (IoE), Next Gen Networks, and Cloud Computing
  - Automation & Robotics
  - Blockchain, Encryption & Digital Currencies
  - Massive Computational Power
- Understand and describe both the professional implications of these technologies and their potential impact on society.
- Understand the possible impacts of these technologies on human health, wellness, and well-being.

What material will be studied in class?
This course will be based upon a wide array of academic resources, textbooks, articles, and videos about the various technology innovations, with group discussions reinforcing the learning objectives.

Are there any prerequisites for this class?
There are no prerequisites for this class.

Which students can register to attend the class?
This course is open to all students of any major or field of study. The course hopes to attract students towards the technology field, but it is ultimately designed to make students better informed about technology and how it affects society.

How will students be evaluated?
Students will be evaluated using a wide range of methods including comparative essays, discussions, quizzes, and projects. They will also be evaluated on the quality of their work and their ability to demonstrate their understanding of the course objectives.

Finish the sentence: You will do well in this class if you…
… are actively engaged in the class, keep an open mind, and produce quality work that shows your understanding of the objectives of the course.

* * *

Interested in joining the class?
- Speak to your academic advisor or follow the course registration process outlined here and look up TC100.
- For more information, please contact Kyle Church at [email protected].
https://blogs.bsu.edu/ccim/2021/04/19/ct100-future-technology-innovation/
Public Engagement & Foresight

We are using public and stakeholder engagement, foresight and inclusive community building to help lay the ground for a research infrastructure based on the principles of responsible research and innovation. One of the Human Brain Project's major innovations is to develop the new digital research infrastructure EBRAINS to help advance neuroscience, medicine, and computing. Our public engagement activities support public acceptance and the Human Brain Project's commitment to Responsible Research and Innovation: a wide umbrella term connecting different aspects of the relationships between research, innovation, and society. We work to engage citizens and stakeholders in the discussion of ethical and societal dilemmas that may arise from research within the Human Brain Project, providing an opportunity for society to influence how research is carried out in accordance with the principles of responsible research and innovation. Public engagement also actively informs citizens and stakeholders about the potential within brain research and sheds light on some of the dilemmas that may arise. 'Foresight', on the other hand, is the practice of making 'forward looks', of anticipating change, and of studying future possibilities. We have developed scenarios that serve as frameworks and stimuli for evaluating the possible consequences of the Human Brain Project on society. These have then been discussed with key informants from a range of communities to generate a series of best practice recommendations for researchers and Human Brain Project managers. This simultaneously enables adopting strategies that optimise scientific and social benefits while enhancing preparedness for possible ethical concerns and dilemmas. To inform the work with RRI, we organise and facilitate public engagement on potentially controversial issues of relevance to the Human Brain Project. Public engagement gives society outside of the Project a chance to influence how research is carried out in accordance with the principles of responsible research and innovation. Furthermore, public engagement also actively informs citizens and stakeholders about the potential within brain research and sheds light on potential ethical dilemmas and future societal benefits. Engagement methods are central to our Community Building activities. Broad stakeholder dialogues and involvement support a thriving community, which can make use of EBRAINS and ensure its further development.

Public Engagement & Foresight in the HBP
Engaging citizens and stakeholders, making 'forward looks', and building a community of neuroscientists and EBRAINS users.

Citizen & Stakeholder Engagement

Responding to ethical and societal issues can be challenging, particularly in a research setting. We help researchers understand how they may utilise and implement public insight and opinions while planning and carrying out their research, helping them to heighten the positive impact and benefit for society. The Human Brain Project's public engagement activities address the concerns and opinions of EU citizens about social, ethical, cultural, and legal issues that are related to the Project's activities. With a stronger understanding of the perspectives of citizens, we can communicate better and promote more responsible research that takes the public's perspectives into account.
To understand the views and opinions of the public, we have designed and carried out a series of citizen engagement processes, facilitating an informed discussion on neuroethical subjects of interest identified by researchers in the Project and the International Brain Initiative (IBI). Large-scale public engagement has previously been conducted on the neuroethics of Artificial Intelligence, Dual Use and Data Governance. The next large-scale public engagement will be on the topic of Disease Signatures and will be held under the title Mixing of Minds (MoM). This series of public engagement workshops takes place in six countries across Europe.

Long-Term Implications & Foresight

Our work focuses on finding ways to design and embed responsible research practices into EBRAINS. A key objective is to develop 'foresight', which is the practice of looking ahead to envision potential future developments and changes. The foresight activities cover both the short- and long-term ethical and societal issues that may form roadblocks for the Human Brain Project and EBRAINS. We are developing a toolkit for EBRAINS to support responsible research and innovation in practice. This approach is both anticipatory and reflective by design. The toolkit makes approaches accessible and easy for researchers to use. Researchers across the Project are involved in the design of this toolkit through a series of workshops. The intention of this toolkit is to provide the best possible framework for foresight through public engagement activities. The toolkit enhances preparedness for possible ethical concerns and empowers researchers and stakeholders to reflect on their work, their role in it, and the work's justification and broader implications.

Community Building

Another important building block in the groundwork for the EBRAINS infrastructure is Inclusive Community Building: creating the foundation for a growing and self-sustaining community around the EBRAINS research infrastructure. The goal of community building is to increase societal benefit by ensuring that a broad range of stakeholders are engaged in the infrastructure, its development and its research. The community will be built across the scientific disciplines represented in the Human Brain Project, including both users of the scientific tools and data of EBRAINS and a wider circle of relevant stakeholders, for example industry partners, clinicians, patient associations, relevant networks, and research funders, allowing for novel collaborations and support for end-user-directed research. Anyone can request to address ethical, regulatory and social issues raised by HBP research.

Ethics & RRI

The Human Brain Project will have an impact on both science and society. We promote RRI practices within the HBP, and help to shape the direction of its research in ethically sound ways that serve the public interest.
https://www.humanbrainproject.eu/en/science-development/ethics-and-society/public-engagement-foresight/
A couple of weeks ago I had some friends over for dinner, and it struck me afterwards that we had spent a lot of the evening talking about Artificial Intelligence. As a venture capital investor focussed on emerging technologies, I am used to my work day being filled with conversations about technology trends and advances. However over the last 12 months I have found myself having more of these conversations outside the office too, and they are almost always focussed around AI. Looking at mentions of key technologies in a set of general news sources, you can see that AI has caught public interest to a far greater extent than other innovations. Since early 2016 when AI overtook the Internet of Things, interest in the topic has grown rapidly, and news volume now exceeds IoT by 50% and other key technologies by 6 times. Technologies such as Autonomous Vehicles and Industry 4.0 are seen as well defined and with perceived benefit to a clear set of problems. AI though has captured the public imagination at another level, due to a sense of its broader significance and potential impact, both positive and negative. I see this as being driven by four significant factors: We frequently dismiss the fears without acknowledging that they are based in a little bit of truth. Humans have built technical systems for a long time, and they’ve often had unintended consequences … What would happen if we took the fears seriously enough not to just dismiss them, going, “You’ve watched too many Terminator movies”? But actually took them seriously and said, “Here are the guardrails we’re implementing; here are the things we’re going to do differently this time around and here are the open questions we still have.” In a fascinating survey from the UK report ‘Public views of Machine Learning’ by the Royal Society, you can see that the cumulative impact of the factors discussed is to leave people’s opinions divided, with a 30/35/30 split between people who feel the risks of machine learning were greater, equal to or less than its benefits. Interestingly though, looking deeper you see a significant variation in opinion between potential uses. While 61% of respondents were positive on the benefits of using computer vision on CCTV to catch criminals (a use case which seems to have limited downside), 45% were negative on driverless vehicles and 48% negative on autonomous military robots, which in both cases have implicit safety concerns alongside potential for job losses. For those working in and around AI today, the challenge is to harness the public engagement in a positive way. We need to help educate and convert those who are undecided to enable faster adoption of this new technology, while dispelling the myths and misconceptions that could slow this down. Most importantly though, we need to work with the public to understand and mitigate the real concerns around the potential negative impacts. And while some of this will need to be done in government and the media, some of this can probably be done round the dinner table too.
https://hackernoon.com/why-ai-is-now-on-the-menu-at-dinner-even-with-my-non-tech-friends-44c666348de4
Firms in various industries have become more and more active in engaging in corporate social responsibility (CSR). However, CSR imposes non-negligible costs on a firm, which may hurt its short-term financial performance. As a consequence, not all of these activities will be appreciated by financial stakeholders. Some CSR may be regarded as unnecessary expenses rather than benefits to a firm. A recent study by Dr. Yijing Wang and Dr. Guido Berens shows that financial stakeholders think positively about legal CSR, are indifferent to philanthropic CSR, and think negatively about ethical CSR related to communities or the environment. The impact of CSR on financial stakeholders is reflected in how CSR influences the perceptions among them. These perceptions, namely corporate reputation among financial stakeholders, are the result of evaluations of the likelihood that a firm can meet their expectations. A good corporate reputation can help resolve the information asymmetry problem between firm managers and financial stakeholders, and consequently benefit the financial performance of a firm in the long run. Therefore, to understand the impact of CSR on financial stakeholders, a key step is to investigate a firm's reputation among financial stakeholders. Another important step, then, is to categorize CSR activities into specific groups. The classification is based on the degree to which different expectations among stakeholders are fundamental to a firm's role in society, which includes the legal, ethical and philanthropic responsibilities. Legal CSR refers to respecting the laws and regulations that a firm must adhere to. Ethical CSR corresponds to the expectation of society that firms should carry out their business within the framework of social norms; it is not necessarily codified into law. Philanthropic CSR refers to voluntary activities, such as philanthropic contributions, about which society has no clear-cut message for business. Through this classification, Wang and Berens's study finds that legal CSR is an important determinant of reputation among financial stakeholders. Financial stakeholders perceive commitments to legal CSR positively. On the contrary, ethical CSR related to secondary stakeholders, such as communities or the environment, is perceived negatively by financial stakeholders. These intriguing findings pose another question: are financial stakeholders really so special, or do other stakeholders perceive CSR in the same way? Wang and Berens's study further compared the impact of different types of CSR on public stakeholders to that on financial stakeholders. In contrast, public stakeholders think positively about ethical CSR related to communities or the environment, as well as about philanthropic CSR, but they hold a neutral view of legal CSR. The different impacts of CSR on financial stakeholders and public stakeholders point to a potential conflict of interest between the two groups. For example, investors and environmental activists may value a firm's commitment to conforming to social norms differently. The conflicting interests of financial and public stakeholders may make it difficult for a firm to allocate limited resources to match the expectations of stakeholders. Sometimes firms may only be able to serve the interests of certain stakeholder groups, and one group may benefit at the expense of another.
Therefore, when managers communicate about CSR aimed at fulfilling legal, ethical, or philanthropic expectations, they should choose which CSR activities to emphasize depending on the specific stakeholder group they are targeting. For example, when managing investor relations, emphasizing environmental commitment may not be a sensible idea. This article is based on the paper: Wang, Yijing, and Guido Berens. "The Impact of Four Types of Corporate Social Performance on Reputation and Financial Performance." Journal of Business Ethics (2014): 1-23.
https://www.tias.edu/en/item/what-if-financial-stakeholders-care-about-some-csr-but-not-others/
To be effective, a business must promote future-mindedness in its workforce. Future-minded employees are often more committed to their jobs and are less likely to leave them. In addition, firms with high future-mindedness have higher employee retention rates. Future-mindedness also increases with firm size, and this is especially true for R&D departments. Future-oriented innovation includes the development of new technologies and their application. Learners should be aware of the current regulatory and legal issues associated with new technologies and their potential impact on society; moreover, they should understand the significance of these developments in the business world. Courses at Hult focus on emerging technologies and their business implications. The courses are created by faculty who are experts in their fields. They cover the basics of each technology, potential applications, and the key players that drive innovation. They also explore the timeline of these technologies and the important legal and regulatory issues. In order to create future-oriented innovations, companies must engage in collaborative efforts with stakeholders. They must identify the goals of their stakeholders and the obstacles that inhibit innovation. After identifying these barriers, they need to engage in dialogic evaluation. With successful change, new products and services can be developed that benefit both the business and the community. These innovations are often a product of a company's collaborative efforts with stakeholders. However, to be effective, these transformations require a change in mindset and the leadership of a company. Future-oriented innovation can be defined as the creation of new knowledge that addresses future challenges. This requires cooperation across stakeholders, institutions, and processes. For this to work, a thorough analysis of existing obstacles and opportunities is required. The process of co-creation is crucial because it requires a dialogic approach.
https://appliedsustainabilitygroup.com/uncategorized/future-oriented-innovations/
Why Are Secondary Stakeholders Important to a Company?

In general, stakeholders are people or entities that have an interest in a company; they affect the business or are affected by the operations of the business. Stakeholders are classified as primary stakeholders and secondary stakeholders. Primary stakeholders are people or entities that participate in direct economic transactions with an organization. Examples of primary stakeholders are employees, customers and suppliers. Secondary stakeholders are people or entities that do not engage in direct economic transactions with the company. According to the American Society for Quality, secondary stakeholders are indirectly affected by an organization's operational activities. Examples of secondary stakeholders are local communities, local workforce boards, activist groups, business support groups and the media.

Secondary Stakeholders' Importance
Secondary stakeholders are important to a company because they affect the company's reputation. Secondary stakeholders tend to be more vocal than primary stakeholders. Primary stakeholders are smaller groups compared to secondary stakeholders. The concerns raised by primary stakeholders, such as suppliers, stay well within that supplier's group and the business owners. Public perception of the organization might not be affected even if the organization takes its time in addressing the supplier's issue. The concerns raised by secondary stakeholders, such as local communities, receive wide media coverage and disseminate to the general public quickly. Any delay in addressing their concerns could damage the company's reputation.

Secondary Stakeholders' Effects on Businesses
If a company's production activities damage the air and underground water, local communities and activist groups will likely speak out. For example, in 2018, Sterlite Copper, a subsidiary of Vedanta Group of India, was forced to shut down its operations after continuous protests from residents claiming that the company caused severe damage to air and water. Secondary stakeholders can also create positive word-of-mouth about an organization in the market. For example, participating in community fundraisers or offering tuition for children of employees can motivate local communities, activist groups and the media to speak in glowing terms about a company.

How to Deal With Secondary Stakeholders
A company should treat secondary stakeholders with dignity and respect. If they voice a genuine concern, the organization should take appropriate steps to address it. If a local workforce board claims that outsourcing activities are increasing layoffs and the company feels the concern is genuine, it should negotiate with the board to arrive at a win-win situation. Organizations that act aggressively toward secondary stakeholders and try to impose their will on them may face severe criticism and negative publicity. The company should show in its actions that it cares for the well-being of secondary stakeholders and values their opinions.

Primary and Secondary Stakeholders in Project Management
Managing stakeholders is an important principle of project management. While planning a project, the project manager should draw a stakeholder matrix to prioritize stakeholders based on their ability to influence the project and the interest they have in the project. According to the Food and Agriculture Organization of the United Nations, projects with a complex set of overlapping stakeholders must have a clear framework for identifying and ranking them.
The stakeholder matrix states that powerful stakeholders in a project should be managed carefully. Secondary stakeholders such as local communities, activist groups and media are influential and can affect the success of projects.
https://smallbusiness.chron.com/secondary-stakeholders-important-company-23877.html
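The stakeholder matrix mentioned in the excerpt above ranks stakeholders by their ability to influence a project and their interest in it. The sketch below uses the common power/interest grid as one way to implement that ranking; the quadrant labels, scoring scale and example stakeholders are assumptions for illustration, not categories prescribed by the source article.

```python
# Hedged sketch of a power/interest stakeholder matrix.
from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    power: int     # 1 (low) to 5 (high): ability to influence the project
    interest: int  # 1 (low) to 5 (high): interest in the project

def quadrant(s: Stakeholder, threshold: int = 3) -> str:
    """Map a stakeholder to a management strategy based on power and interest."""
    if s.power >= threshold and s.interest >= threshold:
        return "manage closely"
    if s.power >= threshold:
        return "keep satisfied"
    if s.interest >= threshold:
        return "keep informed"
    return "monitor"

stakeholders = [
    Stakeholder("suppliers (primary)", power=4, interest=5),
    Stakeholder("local community (secondary)", power=3, interest=4),
    Stakeholder("media (secondary)", power=4, interest=2),
]
for s in stakeholders:
    print(f"{s.name}: {quadrant(s)}")
```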
While improved service delivery and return on investment are top-of-mind procurement objectives when choosing a Software as a Service (SaaS) partner, federal agencies must equally prioritize "security first" measures to ensure vulnerable legacy systems are protected in today's digitally dominated climate. Zero Trust The Defense Information Systems Agency (DISA) is still in the prototyping stage with its zero-trust solution but already is looking ahead to the next version. Thunderdome, the prototype being developed by Booz Allen Hamilton under a six-month contract awarded in January, is DISA's solution for implementing zero-trust cybersecurity. It is a comprehensive effort requiring cooperation across the agency, as well as with the military services, combatant commands and others. The U.S. Defense Information Systems Agency (DISA) intends to double down on the security of its classified networks in the coming months as it experiments with the zero-trust prototype known as Thunderdome. Julian Breyer, DISA's senior enterprise and security architect, reported a change in priorities while discussing Thunderdome during a panel session at AFCEA's TechNet Cyber conference in Baltimore, April 26. By the end of 2022, leaders at the Defense Information Systems Agency (DISA) anticipate having a production decision as part of its zero-trust prototype officials call Thunderdome, Brian Hermann, director of the agency's Cyber Security and Analytics Directorate, said during a micro-keynote session Tuesday during AFCEA's annual TechNet Cyber conference, taking place April 26-28 in Baltimore. Thunderdome, the Defense Information Systems Agency's zero-trust solution, may enhance cybersecurity while also transforming the way the agency does business. The U.S. Department of Defense might learn a thing or two about the software-defined world from non-defense industry companies such as Netflix and Mazda, Jason Weiss, chief software officer, U.S. Defense Department, recently suggested to the AFCEA Cyber Committee. Weiss, who serves on the committee, relayed an incident from Mazda that he said keeps him up at night. The incident was reported by BBC News in a February 10th article. This article is part of a series that explores zero trust, cyber resiliency and similar topics. Over the past year or so, I've discovered the secret weapon that IT leaders of various U.S. government entities have deployed as they implement zero trust architectures. Their first step has been to create a comprehensive educational pathway for their workers. This is because no one can implement zero trust alone. Zero trust: Only education can move you forward This article is part of a series that explores zero trust, cyber resiliency and similar topics. The recently released federal zero-trust strategy from the Office of Management and Budget (OMB) and the Homeland Security Department's Cybersecurity and Infrastructure Security Agency (CISA) has one action area that has raised a few eyebrows within the zero trust community: Go ahead and open your applications to the Internet. Wait… what? More than just a technology focus, zero trust (ZT) is an invitation for all of us to think differently about cybersecurity. We are losing on the cybersecurity battlefield, and continued investment in more advanced versions of the same architecture patterns will not change that. The Defense Information Systems Agency (DISA) has announced the award of a $6.8 million contract to Booz Allen Hamilton for a Thunderdome prototype, a zero-trust security model.
During this six-month effort, the agency will operationally test how to implement DISA’s Zero Trust Reference Architecture, published in March 2020 for the Defense Department, by taking advantage of commercial technologies such as secure access service edge (SASE) and software-defined wide area networks (SD-WANs). Thunderdome will also incorporate greater cybersecurity centered around data protection and integrate with existing endpoint and identity initiatives aligned to zero trust, according to the press release. Researchers at the Massachusetts Institute of Technology Lincoln Laboratory that developed the Linux-based open-source zero-trust architecture called Keylime are now seeing it deployed more significantly. The COVID-19 pandemic changed how government agencies do business by requiring remote work and videoconferencing for meetings, creating a growing need for securing these virtual workspaces. One way to achieve this security, and one that is being mandated across the federal government, is with zero-trust architecture. Zero trust requires a change of perspective about securing data versus securing networks because data can be anywhere on a device, Joel Bilheimer, a strategic account architect with Pexip, told SIGNAL Magazine Senior Editor Kimberly Underwood during a SIGNAL Executive Video Series discussion. The human factor looms as the most imposing challenge to implementing zero-trust security, say experts. Aspects of this factor range from cultural acceptance to training, and sub-elements such as organizations and technologies also will play a role. Ultimately, change will have to come from the top of an organization to be truly effective. All security measures depend to a large degree on human cooperation, but that is only part of the picture for zero trust. Its implementation will entail a massive change in security procedures both for users and for network architects. And, the ability to share information across organizational boundaries will be strongly affected at all government levels. The Cybersecurity and Infrastructure Security Agency may soon release an initial playbook for departments and agencies to follow while transitioning to a zero-trust cybersecurity architecture. The new guidance will be based on lessons learned from various pilot programs across the government. The U.S. Defense Department has chalked up a number of accomplishments in a short amount of time aimed at achieving a vision of connecting sensors and weapon systems from all of the military services. However, officials still are assessing the best way to achieve zero trust. The use of zero trust could prove to be a boon for 5G networks by providing vital security across networks made up of a variety of innovative devices and capabilities. Fully established zero trust could allow unprecedented network visibility and situational awareness while ensuring that potential attack points are closed to cyber marauders. Yet, implementing zero trust runs the risk of slowing down the network’s fast data flow if it is not applied properly. The U.S. Space Force Space Launch Delta 45’s addition of zero-trust architecture to the launch enterprise could bring earth-shattering flexibility to its mission operations, its commander says. 
Under a year-long pilot effort, officials at Patrick Space Force Base, Florida, Space Launch Delta 45’s headquarters, and nearby Cape Canaveral Space Force Station, its launch range, have installed zero trust-related software and hardware into the launch mission system and are conducting beta testing and evaluation of the capabilities. Make no mistake: zero trust represents a cultural shift from today’s approach. It will change the way information is secured and the way users access it. Yet, it also must be applied in ways that do not prevent the secured data from being effectively exploited by its users. The president has issued an executive order to implement the necessary security to stay ahead of our adversaries. But ultimately, the challenge of zero trust is less one of technology and architecture and more one of integration into the operation and workflows. The key to a successful zero-trust implementation is to secure the data that people need to use while simultaneously enabling them to access it. Known mostly for its large-scale physical projects, the Army Corps of Engineers is erecting a digital infostructure to allow it to engage in operations in a host of different settings. What will be a mobile Corps of Engineers will rely on many top-shelf information technologies, including zero trust. The U.S. Indo-Pacific Command will deliver an initial mission partner environment next summer. The capability ultimately will allow U.S. forces to access classified and unclassified networks with one device. It also will provide more effective information sharing with allies and coalition forces. When I hear of zero trust, I think of “In God We Trust,” the motto printed on U.S. currency and Florida’s official motto. More than just a buzzword phrase, though, zero trust is better understood as an approach to security. There is a lot of information available about zero trust—at times inconsistent and unreliable. Talk to different vendors and you are likely to get different answers as to exactly what zero trust is and how to adopt it within your agency. What you need to know is this: The U.S. Navy is looking to quickly implement commercial information technologies while it concurrently conducts a cattle drive to rid itself of obsolete capabilities, said its chief information officer (CIO). Aaron Weis allowed that industry will play a key role in providing innovation in an outside the box approach that addresses serious shortcomings. “We have an infrastructure that for the most part is not supporting the mission,” Weis said. The Defense Information Systems Agency intends next month to award a contract for its Thunderdome zero-trust architecture and to begin implementing a prototype within six months. The new architecture is expected to enhance security, reduce complexity and save costs while replacing the current defense-in-depth approach to network security. The U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency, or CISA, released two key documents meant to raise the cybersecurity practices of government agencies and organizations. The documents, the Cloud Security Technical Reference Architecture (TRA) and Zero Trust Maturity Model are open for public comment through September 30, the agency reported. Defense Information Systems Agency (DISA) officials do not plan to try to force others in the Defense Department or military services to use its zero-trust solution known as Thunderdome. 
Thunderdome is a fledgling program that offers a range of capabilities, including secure access service edge (SASE), software-defined wide area networking (SD-WAN), identity credential access management (ICAM) and virtual security stacks. SASE, which is pronounced “sassy,” is a technology package that includes SD-WAN, firewall as a service and cloud access security broker. While SASE has been implemented across much of the commercial world, it has not yet been widely adopted by the government. “Never Trust, Always Verify”: that’s the essence of Zero Trust security. But to be effective, agencies need to validate more than just their users. Tanium can help you validate devices too. With Tanium’s comprehensive endpoint visibility and control, you can collect real-time data to authenticate devices within zero-trust models. This will help close vulnerabilities, improve cyber hygiene and raise the barrier to entry into your network. Tanium is the ideal partner for your Zero Trust journey. Visit Tanium.com to learn more. Led by the Air Combat Command, the U.S. Air Force is pursuing zero-trust architecture on a level not seen before. One of the service’s first main use cases applies the cybersecurity measure to agile combat employment (ACE). ACE operations provide a more lean, agile and lethal force that can generate airpower from multiple locations. ACE requires a different kind of command and control (C2) environment, as well as advanced planning concepts and logistical supply line support. Following the success of some initial, smaller-scale efforts, the U.S. Air Force is pursuing zero trust architecture on a level not seen before. The service’s Air Combat Command is leading the charge into many more initiatives with a comprehensive view to employ zero trust architecture across its bases, weapon systems and missions. A delayed focus on IT modernization could create a gap between frequent high-impact cyber breaches and the U.S. Department of the Navy’s preparedness to address them. From the SolarWinds hack to ransomware, new cyber threats emerge almost weekly. Advances in technology to help defend against such threats occur so quickly that current acquisition and infrastructure programs cannot keep pace. As the Department of Defense migrates more mission-critical systems and software to cloud environments, it must also consider innovative ways of securing this new environment from potential cyber attacks. It is up to DoD organizations like the Defense Information Systems Agency (DISA) to work out the details of such efforts and ensure the military’s considerable inventory of legacy equipment and systems can continue to interoperate smoothly with the latest technologies. But integrating different technologies is never an easy process. As more federal agencies and businesses move to the cloud, managing their security needs in this new environment becomes critical. One way to do this is to implement zero-trust architectures as part of an identity cloud environment, said Sean Frazier, federal chief security officer at Okta Inc. Zero-trust architecture, where it is assumed that the network is or will be compromised, is the latest phase of security development. This is important as the Defense Department modernizes its cloud-based systems under constant pressure from foreign cyber attacks. Many federal government agencies are interested in improving their cybersecurity by moving to a zero trust architecture model.
But such a move, while very beneficial to the organization, is a complex and involved process that requires some fundamental changes in how security and operations are approached, says Don Maclean, chief cybersecurity technologist for DLT Solutions. Zero trust architecture is a cybersecurity concept that assumes a network is or will be compromised and takes steps to protect data at every potential point of access. Cybersecurity in the federal government, especially for the Department of Defense, is a complex dance between agencies and commercial partners. To get things right, companies working with the government need to be adaptable and resilient in helping government customers meet their mission goals, said Dana Barnes, senior vice president of public sector at Palo Alto Networks. The revolutionary advantages offered by defense use of 5G technology could be undone if the United States doesn’t begin now to meet and overcome a set of challenges, said an expert from the National Security Agency (NSA). These challenges range from developing effective security measures to ensuring the supply chain is not contaminated by parts made by foreign adversaries. The federal government has been taking zero trust more seriously. Although a significant part of it has yet to be implemented, some initial work has been completed with zero trust network access, yet that work has taken an outside-in approach to zero trust and complexity remains. But the more important aspect of zero trust relates to application and workload connections, which is what attackers care about and is not being protected today. This “other side” of zero trust and a host-based micro-segmentation approach will lead to greater security and will stop the lateral movement of malware. Establishing multiple pilot projects is the best way forward in the inside-out approach to zero trust. Ask someone in federal IT what zero trust means and you’re likely to hear that it’s about access control: never granting access to any system, app or network without first authenticating the user or device, even if the user is an insider. The term “Never trust; always verify” has become a common way to express the concept of zero trust, and the phrase is first on the list of the Defense Information Systems Agency’s (DISA’s) explanation. The Federal Bureau of Investigation (FBI) has a unique role as a federal law enforcement agency as well as a national security organization. Its vast information technology enterprise must support its functionality in carrying out these roles, which have different rules of engagement. And when adding new tools, processes or software, the bureau has to consider solutions carefully. Zero trust architecture—a method that combines user authentication, authorization and monitoring; visibility and analytics; automation and orchestration; end user device activity; applications and workload; network and other infrastructure measures; and data tenets to provide more advanced cybersecurity—is gaining use in the U.S. Like most organizations during the pandemic, the Defense Information Systems Agency, or DISA, is doing things a bit differently this year. Naturally, the agency is leveraging virtual events to increase its engagement with key mission partners, as well as government, industry and academia, including at the annual TechNet Cyber conference, noted Vice Adm. Nancy Norton, USN, DISA’s director and commander of Joint Force Headquarters-Department of Defense Information Network (JFHQ-DODIN).
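As a rough, hypothetical illustration of how zero-trust pillars like those enumerated in the FBI item above (user, device, network, applications and workloads, data, visibility and analytics, automation and orchestration) might be tracked during an assessment, the following Python sketch models a per-pillar maturity checklist. The pillar names and maturity stages are paraphrased for illustration and are not DISA's, CISA's or the FBI's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class Maturity(Enum):
    TRADITIONAL = 1
    ADVANCED = 2
    OPTIMAL = 3

# Hypothetical pillar names paraphrased from the list above; real models
# (e.g., CISA's Zero Trust Maturity Model) define their own pillars and stages.
PILLARS = [
    "user identity and authentication",
    "device activity",
    "network and infrastructure",
    "applications and workloads",
    "data",
    "visibility and analytics",
    "automation and orchestration",
]

@dataclass
class PillarAssessment:
    pillar: str
    maturity: Maturity
    notes: str = ""

def summarize(assessments: list) -> str:
    """Return a one-line readiness summary: the least mature pillar caps overall maturity."""
    weakest = min(assessments, key=lambda a: a.maturity.value)
    return f"Overall maturity capped at {weakest.maturity.name} by '{weakest.pillar}'"

if __name__ == "__main__":
    report = [PillarAssessment(p, Maturity.TRADITIONAL) for p in PILLARS]
    report[0].maturity = Maturity.ADVANCED  # e.g., MFA already rolled out for users
    print(summarize(report))
```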
The Defense Department’s Joint Enterprise Defense Infrastructure, or JEDI, cloud effort has been tied up in the Court of Federal Claims since a preliminary injunction was issued in February. And although that has prevented the DOD from implementing Microsoft Azure cloud computing solutions, the department is not sitting idle, according to Chief Information Officer Dana Deasy. “Cloud for me has always been first and foremost about supporting the warfighter,” Deasy told a group of reporters yesterday during a virtual Defense Writers Group meeting. “And when we got put on hold with JEDI, that didn't mean we were going to stop working on figuring out ways to support the warfighter.” Over the last few months, Zero Trust Architecture (ZTA) conversations have been top-of-mind across the DoD. We have been hearing the chatter during industry events all while sharing conflicting interpretations and using various definitions. In a sense, there is an uncertainty around how the security model can and should work. From the chatter, one thing is clear—we need more time. Time to settle in on just how quickly mission owners can classify a comprehensive and all-inclusive acceptable definition of Zero Trust Architecture. Over the last few months, the Defense Information Systems Agency, known as DISA, has been working with the National Security Agency, the Department of Defense (DoD) chief information officer and others to finalize an initial reference architecture for zero trust. The construct, according to DISA’s director, Vice Adm. Nancy Norton, USN, and commander, Joint Force Headquarters-Department of Defense Information Network, will ensure every person wanting to use the DoD Information Network, or DODIN, is identified and every device trying to connect is authenticated. Federal agencies and especially the DOD are quickly embracing cloud computing for many IT requirements. Traditional computing paradigms are giving way to distributed computing that is fundamental to the dynamic and ephemeral cloud environment. At the same time, the user base is also becoming much more distributed, particularly in this era of increased remote work. Teams of globally dispersed personnel from the DOD, partner organizations and even supporting contractors are now regularly leveraging the cloud to share information critical to mission fulfillment. The U.S. Defense Department by the end of the calendar year will release an initial zero trust architecture to improve cybersecurity across the department, says Vice Adm. Nancy Norton, USN, director, Defense Information Systems Agency, and commander, Joint Force Headquarters-Department of Defense Information Network. Norton’s agency, commonly known as DISA, is working with the National Security Agency, the Department of Defense (DOD) chief information officer and others on what she calls an initial “reference” architecture for zero trust, which essentially ensures every person wanting to use the DOD Information Network, or DODIN, is identified and every device trying to connect is authenticated. The U.S. Army’s near future will include an increased focus on adopting “zero trust” cybersecurity practices, better protecting its network endpoints and consolidating its plethora of cloud computing contracts, according to Lt. Gen. Bruce Crawford, the Army’s outgoing CIO/G-6. It also will likely include tightening defense budgets. 
The general indicated during a keynote address for the Army’s virtual 2020 Signal Conference, which is hosted by AFCEA, that the 2021 fiscal year “is going to be all about driving on priorities.”

Sponsored: The Zero Trust Future: Using DevSecOps and Containerization to Secure Cloud Infrastructure

Zero Trust, a strategic security model to “never trust, always verify,” centers on preventing successful breaches by eliminating the whole concept of trust from an organization’s digital environment; instead, everything must be proven. In today’s environment, the network no longer can be considered a safe zone. Every asset an organization possesses and every transaction it conducts must be secured as if it were a standalone item continually exposed to the full range of cyber threats. The realization that perimeter protection alone is not sufficient has led to the security concept of Zero Trust. In this never-trust/always-verify approach, all entities and transactions rely on multiple solutions to work together and secure digital assets.
https://www.afcea.org/content/related-content/zero-trust
For years, organizations have been engaging in digital transformation efforts to improve internal processes, cut costs, and enhance the customer experience. In 2020, the COVID-19 pandemic turned what had been a gradual process into a mad dash, forcing organizations to accelerate their digital transformation timelines by several years. Cybersecurity frequently fell by the wayside, overshadowed by organizations’ pressing need to rapidly build and scale extensive remote-work infrastructures. This was especially the case in small and medium-sized businesses (SMBs) without dedicated IT security departments. Unfortunately, cybercrime didn’t fall by the wayside during the pandemic. It accelerated as cybercriminals took advantage of a perfect storm: the chaos and confusion wrought by the pandemic, combined with the fact that organizations’ potential attack surfaces were expanding with each new technology they deployed. Cybersecurity is essential to a successful, sustainable digital transformation. Here are 4 best practices for integrating cybersecurity into your digital transformation plans.

1 – Involve IT security personnel in all key decisions

Too often, organizations don’t give IT security personnel a seat at the table when important digital transformation decisions are being made, resulting in security vulnerabilities for cybercriminals to exploit. Forbes reports that 82% of respondents to a survey about digital transformation cyber risks said that their digital transformation projects had resulted in at least one breach. Instead of involving security personnel only after a cyberattack occurs, organizations must solicit their input throughout the digital transformation process. This includes not only decisions to deploy new technologies but also to adapt existing technologies to fit new use cases.

2 – Educate employees on cybersecurity risks

Employees can’t avoid cybersecurity risks that they aren’t aware of. Unfortunately, many organizations aren’t properly educating their employees about the new cybersecurity risks posed by digital transformation initiatives, including not just new digital tools but new ways of working. Over half of respondents (56%) to a survey by the Ponemon Institute report that their organizations have not provided remote workers with cybersecurity training, despite the fact that 56% expect remote work to become the new post-pandemic normal.

3 – Adopt a zero-trust security model

Historically, network security models were based around the premise that all users and devices within the network perimeter could be trusted; only those outside needed to be verified. This model falls apart in modern, distributed data environments, which have no defined network perimeter. A zero-trust model assumes that all users and devices could potentially be compromised, and everyone, human or machine, must be verified before they can access organizational network resources. With an emphasis on password security, role-based access control (RBAC), and least-privileged access, zero-trust models support a secure digital transformation by helping prevent cyberattacks caused by compromised user credentials or stolen devices.

4 – Secure your users’ passwords

Over 80% of successful data breaches are due to stolen or compromised passwords, making password security integral to a secure digital transformation.
Organizations must establish and enforce a comprehensive password security policy throughout the enterprise, including the use of strong, unique passwords for every account, enabling multi-factor authentication (MFA) on all accounts that support it, and using a password manager.

Secure your digital transformation with Keeper’s enterprise password management platform

Keeper’s enterprise password management and security platform provides organizations the visibility and control over employee password practices that they need to support a secure digital transformation. IT administrators can monitor and control password use across the entire organization, both remote and on-prem, and set up and enforce RBAC and least-privileged access. Keeper utilizes a zero-knowledge encryption model; we cannot access our users’ master passwords, nor can we access customers’ encryption keys to decrypt their data. Keeper also integrates with SSO deployments through SSO Cloud Connect, a fully managed, SAML 2.0 SaaS solution that can be deployed on any instance or in any Windows, Mac OS, or Linux environment, in the cloud or on-prem. Keeper SSO Cloud Connect easily and seamlessly integrates with all popular SSO IdP platforms, including Microsoft 365, Azure, ADFS, Duo, Okta, Ping, JumpCloud, Centrify, OneLogin, and F5 BIG-IP APM. Keeper is easily and rapidly deployed on all devices, with no upfront equipment or installation costs. Whether your organization is an emerging business or a multinational enterprise, Keeper scales to the size of your company. Not a Keeper business customer yet? Sign up for a 14-day free trial now! Want to find out more about how Keeper can help your business prevent security breaches? Reach out to our team today.
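To make the password-policy guidance above concrete, here is a minimal Python sketch of the kind of audit such a policy implies: checking password strength, flagging reuse across accounts, and flagging accounts without MFA. It is illustrative only; the account records are hypothetical and the code does not use Keeper's actual APIs.

```python
import re

# Hypothetical account records; a real deployment would pull these from a
# password manager or identity provider API rather than a hard-coded list.
accounts = [
    {"name": "vpn",   "password": "Tr0ub4dor&3-horse-staple", "mfa_enabled": True},
    {"name": "email", "password": "Summer2023!",              "mfa_enabled": False},
    {"name": "wiki",  "password": "Summer2023!",              "mfa_enabled": True},
]

def password_is_strong(pw: str, min_length: int = 14) -> bool:
    """Very rough strength check: minimum length plus a mix of character classes."""
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return len(pw) >= min_length and sum(bool(re.search(c, pw)) for c in classes) >= 3

def audit(accounts):
    findings = []
    seen = {}
    for acct in accounts:
        if not password_is_strong(acct["password"]):
            findings.append(f"{acct['name']}: password does not meet strength policy")
        if not acct["mfa_enabled"]:
            findings.append(f"{acct['name']}: multi-factor authentication is disabled")
        seen.setdefault(acct["password"], []).append(acct["name"])
    for pw, names in seen.items():
        if len(names) > 1:  # reuse across accounts violates the "unique per account" rule
            findings.append(f"password reused across: {', '.join(names)}")
    return findings

if __name__ == "__main__":
    for finding in audit(accounts):
        print("FINDING:", finding)
```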
https://www.keepersecurity.com/blog/2020/12/16/four-best-practices-for-a-secure-digital-transformation/
The advent of emerging technologies such as robotic process automation, artificial intelligence, and blockchain, as well as heightened security concerns due to the pandemic, brings new cybersecurity risks and challenges. This evolving technology landscape has made it even more imperative for organizations to better manage cybersecurity risks and become security resilient, and many are turning to a Zero Trust approach to do this.

What is Zero Trust?

According to the National Institute of Standards and Technology (NIST), Zero Trust (ZT) refers to an evolving set of cybersecurity paradigms that move defenses from static, network-based perimeters to focus on users, assets, and resources. A Zero Trust architecture (ZTA) uses Zero Trust principles to plan industrial and enterprise infrastructure and workflows. Zero Trust assumes there is no implicit trust granted to assets or user accounts based solely on their physical or network location (i.e., local area networks versus the internet) or based on asset ownership. Zero Trust is not a tool or a new technology, but a strategic architectural concept, and should be aligned with business objectives.

Why does Zero Trust matter to an organization?

The primary goal of a solid Zero Trust strategy is to extend the control plane from the internal secure assets of the environment as far outward as possible. ZTA is all about “verify, and then trust.” The COVID-19 pandemic has resulted in many organizations shifting to remote working due to the safety risk and government guidelines, like city lockdowns. As a result, it has impacted nearly every organization’s cybersecurity strategy. This has posed a challenge for cybersecurity leaders: how could they trust personal devices and home networks to be secured in line with the organization’s security policies and procedures? In many cases, it has become necessary for cybersecurity leaders to consider implementing controls without trusting the resources in the interest of maintaining business continuity. However, many organizations are now reassessing these practices and transitioning where possible to a Zero Trust approach.

Implementing Zero Trust

The newly released ISO/IEC 27002:2022 Information security, cybersecurity and privacy protection – Information security controls standard provides guidance on implementing Zero Trust principles that organizations can consider, such as:
• Assuming the organization’s information systems are already breached and thus not reliant on network perimeter security alone.
• Employing a “never trust and always verify” approach for access to information systems.
• Ensuring that requests to information systems are encrypted end-to-end.
• Verifying each request to an information system as if it originated from an open, external network, even if these requests originated internally to the organization.
• Using “least privilege” and dynamic access control techniques.
• Always authenticating requesters and always validating authorization requests to information systems based on information such as authentication information, user identities, data about the user endpoint device, and data classification (e.g., by enforcing strong multi-factor authentication).

Why are boardrooms supporting Identity and Zero-Trust initiatives?

Like any security initiative, Zero Trust requires commitment from the board.
Zero Trust should involve the board, the chief information security officer, and other leaders to determine priorities and ensure that they will be effectively implemented across the organization. In general, boardrooms tend to trust insiders, that is, authorized users, rather than outsiders. However, Zero Trust begins with not differentiating insiders and outsiders. An organization’s existing controls may not suffice to address new cybersecurity risks and it may need to implement additional and/or new controls. Zero Trust is built on the premise that trust cannot be granted forever and needs to be evaluated on a continual basis. Today, many boardrooms are already driving the change and supporting identity and ZT. Many boardrooms are convinced of the value of ZT after realizing these business benefits:
• Reduction in overall cost and expenditures.
• Reduction in the scope of requirements for compliance related to cybersecurity, as it entails accurately mapping assets, inventories, and data, which decreases the risk of unauthorized access.
• Greater control in the cloud environment through authorized workloads.
• Lower breach potential through verified and approved communications.
• Lower compliance risk.
• Business agility and speed.
• More streamlined user experience, allowing users to be less encumbered by security as part of their daily job.

Summary

In summary, achieving Zero Trust does not require the adoption of any new technologies. It’s simply a new approach to cybersecurity to “never trust, always verify,” or to eliminate any and all implicit trust, as opposed to the more traditional perimeter-based security approach that assumes user identities have not been compromised and all human actors are responsible and can be trusted. Zero Trust does not eliminate trust completely but uses technologies to enforce the principle that no user and no resource has access until it has been proven it can and should be trusted—and, in the process, strengthens cybersecurity defenses.
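As a minimal sketch of the ISO/IEC 27002:2022 principles listed earlier in this article (treat every request as if it came from an open, external network; always authenticate; require end-to-end encryption; apply least privilege with deny by default), the following Python fragment shows one way such checks could be chained for a single request. The policy table, field names, and users are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_id: str
    mfa_passed: bool
    transport_encrypted: bool   # e.g., TLS/mTLS terminated end-to-end
    resource: str
    action: str

# Hypothetical least-privilege policy: each user is allowed only specific
# (resource, action) pairs. A real system would also evaluate dynamic
# attributes such as device posture, data classification, and behavior.
POLICY = {
    "alice": {("payroll-db", "read")},
    "bob":   {("wiki", "read"), ("wiki", "write")},
}

def authorize(req: Request) -> bool:
    """Treat every request as if it came from an open, external network."""
    if not req.transport_encrypted:
        return False                      # principle: requests must be encrypted end-to-end
    if not req.mfa_passed:
        return False                      # principle: always authenticate, e.g. with strong MFA
    allowed = POLICY.get(req.user_id, set())
    return (req.resource, req.action) in allowed   # principle: least privilege, deny by default

if __name__ == "__main__":
    print(authorize(Request("alice", True, True, "payroll-db", "read")))    # True
    print(authorize(Request("alice", True, True, "payroll-db", "write")))   # False: not granted
    print(authorize(Request("bob", False, True, "wiki", "read")))           # False: no MFA
```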
https://techgraph.co/opinions/reaping-the-benefits-of-zero-trust/
Like it or not, cybercrime is big business. Organizations that understand this invest in the right technologies to protect their assets. VPNs have been a staple in cybersecurity and networking for some time, while the zero trust concept is a relatively new kid on the block. VPNs remain a popular choice for many organizations but come with a debilitating flaw: They do not provide end-to-end security. While VPNs can encrypt data transmitted between the client and the VPN server, the data is decrypted at the server and is potentially vulnerable to interception or tampering at that point. Segmenting local area networks (also known as micro-segmentation) offers a higher level of security by partitioning a single corporate local area network into multiple isolated LANs and interconnecting them via VPNs or firewalls. However, this approach burdens network administrators, who have to maintain policies and configurations for more network equipment, and still provides free rein to attackers within each network island as it doesn’t adhere to the core tenets of the zero trust paradigm. The zero-trust approach to network security addresses these issues by treating all network traffic, including traffic within the VPN tunnel, as untrusted and requiring authentication and authorization. In a zero-trust network, each access request is independently evaluated based on a set of defined security policies, rather than relying on the VPN connection alone to provide security. This approach helps ensure that only authorized users and devices can access network resources, even if the VPN connection is compromised. In addition to reviewing some of the shortcomings of VPNs in this article, we’ll explore VPNs and zero trust, their advantages and disadvantages, and how you can find the right solution for your organization. We’ll also explain why the term “zero trust VPN” is a bit of a misnomer.

What is a VPN?

Virtual private networking was developed as a cost-effective solution for connecting different parts of an organization or allowing outside organizations to access the organization’s network securely. This site-to-site connectivity, commonly used to connect corporate locations, has made it possible to replace expensive leased lines and direct connect circuits with internet-based solutions, saving organizations money on costly infrastructure. More commonly deployed is the client-to-site VPN, where VPN software on a user’s PC connects to the company’s VPN server and effectively places the PC on the corporate network. For all practical purposes, the PC is now treated as if it is located within the company perimeter.

How Does a VPN Work?

To access their organization’s network, a user will typically enter the details for their VPN gateway into their VPN client. If the gateway is responding, it will negotiate with the user’s software to agree on specific cryptographic algorithms to authenticate and create a secure connection. VPNs generally support multiple forms of authentication, with simple passwords or pre-shared keys still commonly deployed. In this model, the VPN gateway will first check its local database. If the user isn’t found there, it will reach out to any authentication servers specified in its configuration. In many environments, this means contacting the organization’s RADIUS or Microsoft Active Directory server to validate user credentials.
After the user’s credentials have been successfully verified, authentication is complete and the connection is secured and encrypted with the negotiated cryptographic algorithm.

How is a VPN Typically Secured?

Modern VPNs support multi-factor authentication (MFA) versus simply using a username and password – or even worse – a shared password. This technology allows the authentication server to challenge the user with an additional verification method, like a phone number, an authenticator application, or challenge questions. Once verified, the VPN establishes a user connection and grants access to the network.

Challenges With Using a VPN

While VPNs solve many problems, they can be difficult to install and manage. A corporate VPN involves installing and configuring costly hardware. When the organization needs to support more remote users or higher data volumes, it requires upgrading to even more expensive hardware. The hardware is augmented with expensive service and support contracts, and requires expertise to administer properly. It also becomes a single point of congestion, a single point of failure, and a single point of attack from intruders. Additionally, the number of ingress and egress points increases the complexity of investigating and troubleshooting issues caused by policy changes or software disparities. As the number of requests for access by end users grows, so do the costs to manage their access. Despite the additional spending, the end-user experience leaves much to be desired. Network compliance can become tedious as Zero Trust best practices dictate that an organization’s network should be segmented, ideally using micro-segmentation. Each segment will have its own VPN server, separately-managed policies, and firewall rules between network segments. This approach makes administering and auditing policies challenging. Organizations eventually start demanding solutions to control access across the organization’s network better and enable faster, more accurate provisioning of new entries.

The Case for Zero Trust Network Access

Zero trust network access (ZTNA) is a concept of explicit authentication, authorization, and end-to-end security. In this framework, verification is based on authenticating the user, client device, server, and service and verifying that the authorization policy permits access. Clients and servers must be authenticated and authorized, and all traffic must be encrypted. By implementing the principle of least privilege, zero trust ensures that users and services can only access the services they need and no more. It also assumes everything is compromised until end-to-end verification is complete. In other words, zero trust is about having no assumptions about implicit trust. Zero trust is policy-driven and posits that perimeter-based protections are ineffective; hence, every communications session must be established and authorized. It doesn’t matter if it’s a remote user talking to a server, a user within the organization, or one service interacting with another service. In other words, where legacy security was based on protecting perimeters and trusting anyone within the perimeter, ZTNA takes a perimeterless approach. ZTNA consists of a set of technologies and processes that allow organizations to securely access their networks and applications from any device, anywhere, without requiring traditional VPNs.
The goal of ZTNA is to replace the reliance on the conventional perimeter-based security model with a more secure, granular, and adaptive approach.

Zero Trust Network Access (ZTNA), Explained

The building blocks of Zero Trust Network Access are:

Explicit Verification in Zero Trust Networks

Explicit verification in zero trust networks authenticates users and devices, and restricts access to services on those devices to ensure a secure, trusted computing environment. This process involves verifying user identities, validating device identity and integrity, and providing secure communication between devices. It also includes inspecting traffic flows and verifying that all communications are safe and trustworthy. Organizations can protect their network infrastructure and data from unauthorized access and malicious activity by implementing explicit verification in a zero trust network. Specifically, explicit verification involves evaluating the following:

1. User Identity

Verifying the user’s identity with a trustworthy authentication protocol, such as passwords or multi-factor authentication, is a core requirement of zero trust. Zero Trust Networks (ZTN) rely on explicit authentication to confirm the identity of a user. If a user fails to provide the correct credentials, their access to the requested endpoint is denied. ZTN can also monitor user activity to look for abnormal behavior, such as logging in from an unexpected location or multiple failed login attempts. If any suspicious activity is detected, the user can be further verified before access is granted.

2. Device Location

Ascertain the device’s physical location and determine whether it is allowed to be present. This verification process can involve various techniques, such as GPS coordinates or IP geolocation, which helps ensure that the device is located in a secure area and that access is only granted to legitimate users. Additionally, this verification process can detect suspicious activity, such as an attempt to access the network from an unauthorized location.

3. Device Health

Ensure the device is suitable for the organization’s environment. Verify the health of all devices that are trying to access the network. This process ensures that each device is free of malware and is not being used as part of a malicious attack. The verification can include checking the device’s operating system version, patch level, anti-malware software, and other security measures. The verification process can also involve scans to ensure the device is free of malware and safe to connect to the network.

4. Service or Workload Security

Provides additional layers of security to the service or workload and helps protect against malicious actors. This can be done through authentication and authorization, encryption, and data integrity verification. Authentication and authorization ensure that only authorized users can access resources and services. Encryption provides data privacy, and data integrity verification ensures that data has not been modified or tampered with.

5. Types of Access

Determine which services a user (or any authenticated system) can access. The implementation uses various specifications:
- Access Control Lists (ACLs) can be used to define which users have access to which resources and what type of access is allowed. Users as well as resources may be grouped to simplify administration.
For instance, a Type Enforcement (TE) model enables the definition of user groups, resource groups, and rules that define which groups of users can access which groups of objects.
- Role-Based Access Control (RBAC) can be used to grant access to users depending on the role they have or their job title. The role of an individual user may change over time.
- Network Access Control (NAC) can control network access based on a user’s identity, device type, and other attributes. NAC can be used to enforce policies and monitor user behavior to ensure that only authorized users have access to the network.

6. Anomaly Detection

Additional services can be deployed to provide advanced analytics and machine learning to identify anomalous user and system behavior. This can include monitoring user activity, network traffic, and application usage to detect patterns that may indicate malicious activity. These patterns can then trigger alerts and further investigation into the security incident. Additionally, organizations can leverage tools such as honeypots and honeynets (two or more honeypots on a network) to detect and respond to malicious activity.

Privilege Access Enforcement in Zero Trust Networks

The principle of least privilege is enforced by limiting access to only what is required to perform needed tasks. Zero trust security blocks all access until the user is authenticated and authorized. After that, the computer can access only the systems and services that the access policy allows for that user. This approach differs from traditional perimeter-based security rules, where firewalls may restrict access to specific hosts and ports without source authentication. VPNs may authenticate the user but provide broad access within the internal network. Perimeter-based protections often make it easy to configure policies that allow an administrator to block specific traffic but otherwise allow all other traffic. Compromised user computers can be identified through authentication failure alerts or attempts to connect to unauthorized services.

Breach Assumption in Zero Trust Networks

Breach assumption is the assumption that security breaches will occur. It is a mindset rather than an architectural framework that leads to deploying a Zero Trust Architecture. A zero trust architecture supports this mindset by allowing each system to access only explicitly authorized systems and services. This prevents a compromised machine from probing the network, doing port scans, and attempting to connect to arbitrary services within the network. A zero trust architecture constantly scrutinizes connections and logs information about authentication, connectivity, and data traffic. Armed with this data, an organization’s security operations center can effectively mitigate breaches because all the relevant information is constantly being preserved.

Zero Trust Network Access (ZTNA) vs. Zero Trust Architecture (ZTA)

Now that we’ve discussed zero trust, let’s focus on “zero trust network access”. The distinctions between Zero Trust Architecture and Zero Trust Network Access vary among vendors. ZTA and ZTNA are often used interchangeably. ZTNA provides identity-based access to applications regardless of their network segment and the connecting user’s location. ZTNA hides the applications from discovery by unauthenticated and unauthorized systems and users. In general, ZTA is an overarching goal, while ZTNA focuses on the practical elements of delivering ZTA within the scope of data networking.
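Returning to the access-control models described earlier in this section, the sketch below shows a Type Enforcement style policy in Python: user groups, resource groups, and explicit allow rules, with everything else denied by default. The group names and rules are invented for illustration; real TE implementations such as SELinux use a dedicated policy language rather than application code.

```python
# Hypothetical groups and rules for illustration only.
USER_GROUPS = {
    "engineering": {"alice", "bob"},
    "finance": {"carol"},
}

RESOURCE_GROUPS = {
    "source-code": {"git.internal", "ci.internal"},
    "ledgers": {"erp.internal"},
}

# Each rule names (user group, resource group, permitted actions); anything
# not listed here is denied, which enforces least privilege by default.
RULES = [
    ("engineering", "source-code", {"read", "write"}),
    ("finance", "ledgers", {"read"}),
]

def is_allowed(user: str, resource: str, action: str) -> bool:
    for user_group, resource_group, actions in RULES:
        if (user in USER_GROUPS.get(user_group, set())
                and resource in RESOURCE_GROUPS.get(resource_group, set())
                and action in actions):
            return True
    return False  # default deny

if __name__ == "__main__":
    print(is_allowed("alice", "git.internal", "write"))  # True
    print(is_allowed("carol", "git.internal", "read"))   # False: wrong group
    print(is_allowed("carol", "erp.internal", "write"))  # False: action not granted
```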
ZTNA is different from a VPN because it does not require an appliance or server to protect the boundary between the Internet and the organization’s network. With ZTNA, the concept of a protected perimeter vanishes. Instead, ZTNA is a host-based solution that uses client software to provide point-to-point security. Clients and servers connect to the ZTNA provider’s cloud. This cloud connection has secure access to the organization’s directory services, allowing policy management to take place in the cloud environment. ZTNA helps organizations respond faster to zero-day vulnerabilities and attacks, providing higher protection for their data and systems. A ZTNA provider offers organizations a unified network security model that covers all networks. With it, administrators can enforce policies on users no matter what device they choose to use and where the devices are located. End-user satisfaction is improved through direct connectivity. Security teams will have increased visibility into network traffic due to the inherent nature of zero trust and the fact that every connection is authenticated and authorized. ZTNA providers have the power to offer greater visibility and faster mitigation of security vulnerabilities across their customer base. Traditional VPNs will remain popular in the coming years, especially in organizations that have yet to adopt ZTNA. They offer quick remote access, and the necessary hardware is readily available. Plus, not all organizations are equipped to take on the subscription costs of ZTNA, as they may have already invested in a VPN solution. Despite those initial adoption challenges, and because of its superior security model, zero trust is being deployed as a more secure alternative to VPNs and firewalls across large enterprises and government agencies. The adoption curve has been accelerated by the U.S. government mandate that federal agencies have until September 2024 to deploy zero trust architectures. Conclusion VPN networking is no longer the preferred choice of security experts within IT organizations due to its lack of scalability, security vulnerabilities, and limited flexibility. While traditional perimeter-based security is being challenged, a new generation of VPNs like ZTMesh have come to market providing secure end-to-end tunneling all the way to the endpoint. These enable the end-to-end secure connectivity that is needed for ZTNA and enables them to serve as building blocks for a zero trust network architecture. Organizations are increasingly adopting zero trust network access (ZTNA) as a secure way to access their networks. ZTNA provides more robust authentication and access control, allowing organizations to protect their networks from unauthorized access. This technology is reliable and constantly improved, making it far superior to traditional VPNs. The advantages of ZTNA-based access are vast, including enhanced visibility into user activities, improved security controls, and simplified access management. At the same time, its disadvantages are few and far between, making it the new go-to option for many organizations.
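Pulling together a few of the explicit-verification signals this article describes (device health, patch level, anti-malware status, and location), here is a hedged Python sketch of a device-posture gate that returns a decision plus the reasons for any denial. The thresholds and allowed locations are arbitrary examples, not recommendations, and a real deployment would collect these signals from an endpoint-management agent rather than self-reported fields.

```python
from dataclasses import dataclass

@dataclass
class DevicePosture:
    os_patch_age_days: int       # days since the last OS patch was applied
    antimalware_running: bool
    country: str                 # e.g., derived from IP geolocation

# Hypothetical posture policy values chosen only for the example.
MAX_PATCH_AGE_DAYS = 30
ALLOWED_COUNTRIES = {"US"}

def device_is_trustworthy(posture: DevicePosture):
    """Return an allow/deny decision plus the reasons for any denial."""
    reasons = []
    if posture.os_patch_age_days > MAX_PATCH_AGE_DAYS:
        reasons.append("operating system patches are out of date")
    if not posture.antimalware_running:
        reasons.append("anti-malware is not running")
    if posture.country not in ALLOWED_COUNTRIES:
        reasons.append(f"access from unapproved location: {posture.country}")
    return (not reasons, reasons)

if __name__ == "__main__":
    ok, why = device_is_trustworthy(DevicePosture(os_patch_age_days=45,
                                                  antimalware_running=True,
                                                  country="US"))
    print(ok, why)   # False, ['operating system patches are out of date']
```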
https://optm.com/zero-trust-pillars/zero-trust-vpn
July 14, 2022 What is a Zero-Trust Security Model? Traditional computer security models ensure that people without the proper authorization cannot access an organization’s network. However, a single set of compromised login credentials can lead to a breach of the entire network. A Zero-Trust Security Model goes some way to solving this problem by requiring users to continually verify their identity, even if they’re already inside the secure digital perimeter. This approach restricts users to the minimum amount of information necessary to do their job. In the event of a breach, hackers will find it difficult or impossible to move laterally through a network and gain access to more information. A Zero-Trust Security Model doesn’t mean that you don’t trust the people you’re sharing data with. Instead, a zero-trust security model implements checkpoints throughout a system so you can be confident that your trust in each user is justified. What is a Zero-Trust Security Model? Imagine for a moment that a computer network is like a country. In a traditional security model, the country would have border checkpoints around its perimeter. Employees who presented the correct login info would be allowed to enter, and bad actors trying to gain access would be kept outside. While this is a good idea in theory, in practice, problems emerge. For example, bad actors who breached the perimeter would get much or all of the information inside the network. Likewise, employees who are past the first barrier may gain access to documents or other information that they shouldn’t see. These problems with the traditional model of cybersecurity drove the U.S. Department of Defense to adopt a new strategy in the early 2000s. Those responsible for network security treated their systems as though they had already been breached, and then asked the question: “Given that the system has been breached, how do we limit the collateral damage?” To meet that objective, they developed an approach that required users, consisting of both humans and machines, to continually prove that they were allowed to be present every time they attempted to access a new resource. To return to our metaphor from earlier, employees would have to show ID at the country’s border, and show ID every time they tried to access a new building, which in this example represents resources within the system. This approach meant that bad actors would find it harder to move through the system with a single breach, and also made it easy to restrict employees to the appropriate areas in the network based on their security clearance. Zero-Trust Security Comes of Age The external and internal benefits of a Zero-Trust Security Model quickly became clear to the private sector, too. While many businesses adapted the system for their own use, or offered it as a service to others, it wasn’t until August 2020 that the National Institute of Standards and Technology (NIST) released the first formal specification for Zero-Trust Security Model implementation. NIST Special Publication 800-207 details how to implement a Zero-Trust Architecture (ZTA) in a system. The Seven Tenets of Zero Trust form the core of this approach. 
- All data sources and computing services are resources
- All communication is secured regardless of network location
- Access to individual enterprise resources is granted on a per-session basis
- Access to resources is determined by a dynamic policy
- The enterprise monitors and measures the integrity and security posture of all owned and associated assets
- All resource authentication and authorization are dynamic and strictly enforced before access is allowed
- The enterprise collects as much information as possible about the current state of assets, network infrastructure, and communications and uses that information to improve its security posture

Of these seven tenets, two especially speak to what’s different between ZTA and more traditional approaches. Session-based access (#3) means that access permissions are reevaluated each time a new resource is accessed, or if sufficient time has passed between access requests to the same resource. This approach reduces the potential for bad actors to exploit lost devices or gain access through an unattended workstation. Dynamic policy controls (#4) look beyond user credentials, such as a username and password. For example, a dynamic policy may also consider other factors such as the type of device, what network it is on, and possibly previous activity on the network to determine if the request is legitimate. This kind of observation improves detection of external malicious actors, even when the correct login credentials are provided. Access control is run through a Policy Decision Point. The Policy Decision Point is composed of a Policy Engine, which holds the rules for granting access, and the Policy Administrator, which carries out the allowance or disallowance of access to resources when a request is made.

Benefits of Zero-Trust Security

Many powerful benefits emerge when a system is set up to align with ZTA standards. Arguably, the most important of these is the compartmentalization of system resources. When resources are compartmentalized, hackers who gain access to one area of your network won’t gain access to other resources. For example, a breached email account wouldn’t give the hacker access to your project documentation or financial systems. Compartmentalization also holds benefits for managing your employees. With a compartmentalized system, you won’t have to and shouldn’t give your employees access to more resources than they need to do their jobs. This approach reduces the risk of the employee intentionally or accidentally viewing sensitive information. Compartmentalization also minimizes the damage done by leaks, as employees generally won’t have access to documentation beyond their immediate needs. Because a core policy of ZTA is the continuous collection of data about how each user behaves on the network, it becomes far easier to spot breaches. In many cases, organizations with ZTA systems detect breaches not because of failed authentication but rather because a feature of the access request, such as location, time, or type of resource requested, differs from regular operation and is flagged by the Policy Decision Point. For example, a request for a resource from Utah to a server for a company based in Virginia would raise flags, even if a bad actor provided a valid username and password.

Zero-Trust Security Model Integration

While Zero-Trust Security Models hold many benefits for many companies, it’s essential to acknowledge that it’s not a “plug-and-play” system.
The approach differs significantly from traditional security practices. Most companies will need a total overhaul of their network to apply it. That can be a disruptive process and will likely lower productivity in the short term as new systems are implemented and employees adapt to the new policies. That doesn’t make moving to a Zero-Trust system the wrong choice, but it does mean that the transition has some tradeoffs. However, if you’re looking for the absolute best industry standard for security, Zero-Trust is the way to go. If you’re contemplating increasing your security, you need to know exactly what data you’ll be securing. Mage Data helps organizations find and catalog their data, including highlighting Personally Identifiable Information, which you’d want to give an extra layer of security in a Zero-Trust system. Schedule a demo today to see what Mage can do to help your organization better secure its data.
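As a rough illustration of the Policy Decision Point described above, the sketch below separates a Policy Engine, which holds the rules and weighs context, from a Policy Administrator, which issues or refuses a short-lived, per-session grant. The class names follow the NIST terminology quoted in the article, but the fields, rules, and time-to-live are hypothetical.

```python
import time
import uuid

class PolicyEngine:
    """Holds the rules for granting access (the policy engine role in NIST SP 800-207)."""
    def __init__(self, rules):
        self.rules = rules  # mapping: user -> set of resources that user may reach

    def decide(self, user: str, resource: str, context: dict) -> bool:
        # A dynamic policy weighs context (device, location, time) alongside identity.
        if context.get("device_compliant") is not True:
            return False
        return resource in self.rules.get(user, set())

class PolicyAdministrator:
    """Carries out the engine's decision by issuing or refusing a short-lived session."""
    def __init__(self, engine: PolicyEngine, session_ttl_seconds: int = 300):
        self.engine = engine
        self.ttl = session_ttl_seconds
        self.sessions = {}

    def request_access(self, user, resource, context):
        if not self.engine.decide(user, resource, context):
            return None  # access disallowed; no session is established
        token = str(uuid.uuid4())
        self.sessions[token] = (user, resource, time.time() + self.ttl)
        return token  # per-session grant; it expires and must be re-evaluated

    def session_is_valid(self, token):
        entry = self.sessions.get(token)
        return bool(entry) and time.time() < entry[2]

if __name__ == "__main__":
    pdp = PolicyAdministrator(PolicyEngine({"alice": {"finance-report"}}))
    token = pdp.request_access("alice", "finance-report", {"device_compliant": True})
    print(token is not None, pdp.session_is_valid(token))
```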
https://magedata.ai/what-is-a-zero-trust-security-model/
Everyone in the cybersecurity community is talking about zero trust, and although it is not a new concept, there is renewed interest in implementing zero-trust principles. This introduces challenges for an organization’s mobile administrators. But what does zero trust really mean for mobile? In May 2021, President Biden issued an Executive Order on Improving the Nation’s Cybersecurity, requiring the federal government to develop a plan to implement zero trust architecture (ZTA) across agency infrastructure, which includes mobile devices. This new development has spilled over into industry and created a lot of activity around zero trust, but not necessarily with clarity on how to incorporate it into current mobile infrastructure. Due to the pandemic, many employees have transitioned to remote/telework options to accomplish their daily work activities. The portability of mobile devices makes it easier to respond promptly to emails, attend virtual meetings, and use special work apps from anywhere, even in your own home. They also serve as backup devices when the primary computing devices are not functioning properly at remote sites. In this new environment, mobile devices are now another endpoint connected to enterprise resources and can put the entire enterprise at risk if compromised or stolen. ZTAs can minimize this impact by applying cybersecurity practices that assume no implicit trust, constant monitoring, and restricted access to the enterprise resources based on the criticality of resources and user and device identity and posture. Here’s how to get started When considering implementing a ZTA, it helps to first clarify the fundamental tenets. NIST Special Publication 800-207 Zero Trust Architecture defines the basic zero-trust tenets as the following: - All data sources and computing services are considered resources. - All communication is secured regardless of network location. - Access to individual enterprise resources is granted on a per-session basis. - Access to resources is determined by dynamic policy—including the observable state of client identity, application/service, and the requesting asset—and may include other behavioral and environmental attributes. - The enterprise monitors and measures the integrity and security posture of all owned and associated assets. - All resource authentication and authorization are dynamic and strictly enforced before access is allowed. - The enterprise collects as much information as possible about the current state of assets, network infrastructure, and communications and uses it to improve its security posture. The good news Some of the zero-trust tenets are common cybersecurity practices that are already in place in most organizations. Many organizations may not realize they are already applying several zero-trust principles; they may just need to be integrated with new systems with additional features that may be missing from the current architecture. The impact on the mobile world A secure mobile infrastructure consists of many components that align with current cybersecurity practices. Recently, the NIST National Cybersecurity Center of Excellence (NCCoE) published NIST SP 1800-21 Mobile Device Security: Corporate-Owned Personally-Enabled, which describes some of these components, including enterprise mobility management (EMM) / unified endpoint management (UEM) solutions, mobile threat defense (MTD) / endpoint security tools, and mobile application vetting services. 
The purpose of this special publication is to demonstrate how organizations can equip themselves with the tools they need to address their mobile security concerns. In addition, the NCCoE is working to provide examples of ZTA, including the management of enterprise mobile devices. This ZTA project will demonstrate how to use several components described in NIST SP 1800-21, along with other components and features, to apply ZTA principles to an organization’s mobile infrastructure. These example solutions will include commercially available products for use by industry and government. With an eye toward the future, the hope is that this example solution will provide a clear roadmap for the cybersecurity community as they develop their own mobile device security strategies that include robust ZTA principles.
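To show how the components named above (EMM/UEM compliance, mobile threat defense signals, and application vetting results) might feed a zero-trust access decision for a mobile device, here is a small, hypothetical Python sketch that maps device posture to an access tier. The signal names and tiers are invented for illustration and do not come from NIST SP 1800-21 or the NCCoE example builds.

```python
from dataclasses import dataclass

@dataclass
class MobileDeviceSignals:
    uem_compliant: bool      # reported by an EMM/UEM agent (enrollment, encryption, passcode)
    mtd_threat_level: str    # reported by a mobile threat defense tool: "none", "low", "high"
    blocked_apps_found: bool # result of a mobile application vetting service

def access_tier(signals: MobileDeviceSignals) -> str:
    """Map hypothetical posture signals to a coarse access tier."""
    if not signals.uem_compliant or signals.mtd_threat_level == "high":
        return "deny"        # no implicit trust for unmanaged or compromised devices
    if signals.blocked_apps_found or signals.mtd_threat_level == "low":
        return "limited"     # e.g., webmail only, no access to file shares
    return "standard"        # routine per-session access, still re-verified on each request

if __name__ == "__main__":
    print(access_tier(MobileDeviceSignals(True, "none", False)))   # standard
    print(access_tier(MobileDeviceSignals(True, "low", False)))    # limited
    print(access_tier(MobileDeviceSignals(False, "none", False)))  # deny
```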
https://www.rsaconference.com/Library/blog/zero-trust-applied-to-the-mobile-world