Hanspeter Pfister

Hanspeter Pfister is a Swiss computer scientist. He is the An Wang Professor of Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences and an affiliate faculty member of the Center for Brain Science at Harvard University. His research in visual computing lies at the intersection of scientific visualization, information visualization, computer graphics, and computer vision and spans a wide range of topics, including biomedical image analysis and visualization, image and video analysis, and visual analytics in data science.
Biography
Hanspeter Pfister received his master's degree in electrical engineering from ETH Zurich in 1991 and moved to the United States for his PhD in computer science at Stony Brook University. In 1992 he began working with Arie Kaufman on Cube-3, a hardware architecture for volume visualization. By the time of his graduation in 1996, he had finished the architecture for Cube-4 and licensed it to Mitsubishi Electric Research Laboratories. He joined Mitsubishi Electric Research Laboratories in 1996 as a research scientist and worked there for over a decade. He was the chief architect of VolumePro, Mitsubishi Electric's real-time volume rendering graphics card, for which he received the Mitsubishi Electric President's Award in 2000. He joined the faculty at Harvard University in 2007. In 2012 he was appointed the An Wang Professor of Computer Science and founded his research lab, the Visual Computing Group. In the same year, he also served as Technical Papers Chair at SIGGRAPH and became a consultant for Disney Research. From 2013 to 2017, he was the director of the Institute for Applied Computational Science at the Harvard John A. Paulson School of Engineering and Applied Sciences.
Awards and prizes
2019, Elected as ACM Fellow
2019, Elected into the IEEE Visualization Academy in recognition of his achievements in the scientific visualization and information visualization research communities
2011, Dean's Thesis Prize, Harvard Extension School ALM in Information Technology, for Michael Tracey Zellman's thesis “Creating and Visualizing Congressional Districts”
2010, IEEE Visualization Technical Achievement Award.
2009, IEEE Golden Core Award.
2009, IEEE Meritorious Service Award.
2009, Petra T. Shattuck Excellence in Teaching Award.
2009, Dean's Thesis Prize, Harvard Extension School ALM in Information Technology, for Manish Kumar's thesis “View-Dependent FTLV”
2007, Dean's Thesis Prize, Harvard Extension School ALM in Information Technology, for Joseph Weber's thesis “ProteinShader: Cartoon-Type Visualization of Macromolecules Using Programmable Graphics Cards”
2005, Dean's Thesis Prize, Harvard Extension School ALM in Information Technology, for George P. Stathis’ thesis “Aspect-Oriented Shade Trees”
2002, 2003, and 2004, Distinguished Teaching Performance, Harvard Extension School
2000, Mitsubishi Electric President's Award.
1999, Innovation Awards and Top 100 Products Award for VolumePro
1994, The Jack Heller Award for Outstanding Contribution to the CS Department, SUNY Stony Brook
1992, Swiss Academy of Technical Sciences Fellowship
1991 and 1992, ABB Switzerland Research Fellowship
1991–1996, U.S. Government Fulbright Scholarship
Most relevant publications
As of December 2019, according to Google Scholar, Hanspeter Pfister's most-cited publications are:
Pfister, H., Zwicker, M., Van Baar, J., & Gross, M. (2000). Surfels: Surface elements as rendering primitives. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, 335–342.
Zwicker, M., Pfister, H., Van Baar, J., & Gross, M. (2001). Surface splatting. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, 371–378.
Marks, J., Andalman, B., Beardsley, P. A., Freeman, W., Gibson, S., Hodgins, J., Kang, T., Mirtich, B., Pfister, H., Ruml, W., et al. (1997). Design galleries: A general approach to setting parameters for computer graphics and animation. In Proceedings of the 24th annual conference on Computer graphics and interactive techniques, 389–400.
Matusik, W., & Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM Transactions on Graphics (TOG), 23(3), 814–824.
Vlasic, D., Brand, M., Pfister, H., & Popović, J. (2005). Face transfer with multilinear models. ACM Transactions on Graphics (TOG), 24(3), 426–433.
Pfister, H., Hardenbergh, J., Knittel, J., Lauer, H., and Seiler, L. (1999). The VolumePro real-time ray-casting system. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 99, 251–260.
Kasthuri, N., Hayworth, K. J., Berger, D. R., Schalek, R. L., Conchello, J. A., Knowles-Barley, S., Lee, D., Vázquez-Reina, A., Kaynig, V., Jones, T. R., et al. (2015). Saturated reconstruction of a volume of neocortex. Cell, 162(3), 648–661.
Lex, A., Gehlenborg, N., Strobelt, H., Vuillemot, R., & Pfister, H. (2014). UpSet: visualization of intersecting sets. IEEE transactions on visualization and computer graphics, 20(12), 1983–1992.
Pfister, H., Lorensen, B., Bajaj, C., Kindlmann, G., Schroeder, W., Avila, L. S., Raghu, K. M., Machiraju, R. & Lee, J. (2001). The transfer function bake-off. IEEE Computer Graphics and Applications, 21(3), 16–22.
Borkin, M. A., Vo, A. A., Bylinskii, Z., Isola, P., Sunkavalli, S., Oliva, A., & Pfister, H. (2013). What makes a visualization memorable? IEEE Transactions on Visualization and Computer Graphics, 19(12), 2306–2315.
A complete list of Hanspeter Pfister's publications can be found on his research group's website.
Venus (mythology)

Venus is a Roman goddess whose functions encompassed love, beauty, desire, sex, fertility, prosperity, and victory. In Roman mythology, she was the ancestor of the Roman people through her son, Aeneas, who survived the fall of Troy and fled to Italy. Julius Caesar claimed her as his ancestor. Venus was central to many religious festivals, and was revered in Roman religion under numerous cult titles.
The Romans adapted the myths and iconography of her Greek counterpart Aphrodite for Roman art and Latin literature. In the later classical tradition of the West, Venus became one of the most widely referenced deities of Greco-Roman mythology as the embodiment of love and sexuality.
She is usually depicted nude in paintings.
Etymology
The Latin name Venus ('love, charm') stems from Proto-Italic *wenos- ('desire'), ultimately from a Proto-Indo-European (PIE) root meaning 'desire' (compare Messapic Venas, Old Indic vánas 'desire').
It is cognate with the Latin venia ("favour, permission") through a common PIE root meaning "to strive for, wish for, desire, love". The Latin verb venerārī ("to honour, worship, pay homage") is a derivative of Venus.
Origins
Venus has been described as perhaps "the most original creation of the Roman pantheon", and "an ill-defined and assimilative" native goddess, combined "with a strange and exotic Aphrodite". Her cults may represent the religiously legitimate charm and seduction of the divine by mortals, in contrast to the formal, contractual relations between most members of Rome's official pantheon and the state, and the unofficial, illicit manipulation of divine forces through magic. The ambivalence of her persuasive functions has been perceived in the relationship of the root *wenos- with its Latin derivative venenum ('poison'; from *wenes-no 'love drink' or 'addicting'), in the sense of "a charm, magic philtre".
In myth, Venus-Aphrodite was born, already in adult form, from the sea foam (Greek αφρός, aphros) produced by the severed genitals of Caelus-Uranus. Roman theology presents Venus as the yielding, watery female principle, essential to the generation and balance of life. Her male counterparts in the Roman pantheon, Vulcan and Mars, are active and fiery. Venus absorbs and tempers the male essence, uniting the opposites of male and female in mutual affection. She is essentially assimilative and benign, and embraces several otherwise quite disparate functions. She can give military victory, sexual success, good fortune and prosperity. In one context, she is a goddess of prostitutes; in another, she turns the hearts of men and women from sexual vice to virtue. Varro's theology identifies Venus with water as an aspect of the female principle. To generate life, the watery matrix of the womb requires the virile warmth of fire. To sustain life, water and fire must be balanced; excess of either one, or their mutual antagonism, are unproductive or destructive.
Prospective brides offered Venus a gift "before the wedding"; the nature of the gift, and its timing, are unknown. The wedding ceremony itself, and the state of lawful marriage, belonged to Juno – whose mythology allows her only a single marriage, and no divorce from her habitually errant spouse, Jupiter – but Venus and Juno are also likely "bookends" for the ceremony; Venus prepares the bride for "conubial bliss" and expectations of fertility within lawful marriage. Some Roman sources say that girls who come of age offer their toys to Venus; it is unclear where the offering is made, and others say this gift is to the Lares. In dice-games played with knucklebones, a popular pastime among Romans of all classes, the luckiest, best possible roll was known as "Venus".
Epithets
Like other major Roman deities, Venus was given a number of epithets that referred to her different cult aspects, roles, and her functional similarities to other deities. Her "original powers seem to have been extended largely by the fondness of the Romans for folk-etymology, and by the prevalence of the religious idea nomen-omen which sanctioned any identifications made in this way."
Venus Acidalia, in Virgil's Aeneid (1.715–22, as mater acidalia). Servius speculates that this refers to a "Fountain of Acidalia" (fons acidalia) where the Graces (Venus' daughters) were said to bathe; but he also connects it to the Greek word for "arrow", whence "love's arrows" and love's "cares and pangs". Ovid uses acidalia only in the latter sense. It is likely a literary conceit, not a cultic epithet.
Venus Caelestis (Celestial or Heavenly Venus), used from the 2nd century AD for Venus as an aspect of a syncretised supreme goddess. Venus Caelestis is the earliest known Roman recipient of a taurobolium (a form of bull sacrifice), performed at her shrine in Pozzuoli on 5 October 134. This form of the goddess, and the taurobolium, are associated with the "Syrian Goddess", understood as a late equivalent to Astarte, or the Roman Magna Mater, the latter being another supposedly Trojan "Mother of the Romans".
Venus Calva ("Venus the bald one"), a legendary form of Venus, attested only by post-Classical Roman writings which offer several traditions to explain this appearance and epithet. In one, it commemorates the virtuous offer by Roman matrons of their own hair to make bowstrings during a siege of Rome. In another, king Ancus Marcius' wife and other Roman women lost their hair during an epidemic; in hope of its restoration, unafflicted women sacrificed their own hair to Venus.
Venus Cloacina ("Venus the Purifier"); a fusion of Venus with the Etruscan water goddess Cloacina, who had an ancient shrine above the outfall of the Cloaca Maxima, originally a stream, later covered over to function as Rome's main sewer. The rites conducted at the shrine were probably meant to purify the culvert's polluted waters and noxious airs. Pliny the Elder, remarking on Venus as a goddess of union and reconciliation, identifies the shrine with a legendary episode in Rome's earliest history, in which the Romans, led by Romulus, and the Sabines, led by Titus Tatius, met there carrying branches of myrtle to make peace following the rape of the Sabine women. In some traditions, Titus Tatius was responsible for the introduction of lawful marriage, and Venus-Cloacina promoted, protected and purified sexual intercourse between married couples.
Venus Erycina ("Erycine Venus"), a Punic idol of Astarte captured from Sicily and worshipped in Romanised form by elite and respectable matrons at a temple on the Capitoline Hill. A later temple, outside the Porta Collina and Rome's sacred boundary, may have preserved some Erycine features of her cult. It was considered suitable for "common girls" and prostitutes.
Venus Frutis, honoured by all the Latins with a federal cult at the temple named Frutinal in Lavinium. Inscriptions found at Lavinium attest the presence of federal cults, without giving precise details.
Venus Felix ("Lucky Venus"), probably a traditional epithet, later adopted by the dictator Sulla. Hadrian built a temple to Venus Felix et Roma Aeterna on the Via Sacra. The same epithet is used for a specific sculpture at the Vatican Museums.
Venus Genetrix ("Venus the Mother"), as a goddess of motherhood and domesticity, with a festival on September 26, a personal ancestress of the Julian lineage and, more broadly, the divine ancestress of the Roman people. Julius Caesar dedicated a Temple of Venus Genetrix in 46 BC. This name has attached to an iconological type of statue of Aphrodite/Venus.
Venus Heliopolitana ("Venus of Heliopolis Syriaca"), a Romano-Syrian form of Venus at Baalbek, variously identified with Ashtart, Dea Syria and Atargatis, though inconsistently and often on very slender grounds. She has been historically identified as one third of a so-called Heliopolitan Triad, and thus a wife to presumed sun-god "Syrian Jupiter" (Baal) and mother of "Syrian Mercury" (Adon). The "Syrian Mercury" is sometimes thought another sun-god, or a syncretised form of Bacchus as a "dying and rising" god, and thus a god of Springtime. No such Triad seems to have existed prior to Baalbek's 15 BC colonisation by Augustus' veterans. It may be a modern scholarly artifice.
Venus Kallipygos ("Venus with the beautiful buttocks"), a statue, and possibly a statue type, after a lost Greek original. From Syracuse, Sicily.
Venus Libertina ("Venus the Freedwoman"), probably arising through the semantic similarity and cultural links between libertina (as "a free woman") and lubentina (possibly meaning "pleasurable" or "passionate"). Further titles or variants acquired by Venus through the same process, or through orthographic variance, include Libentia, Lubentina, and Lubentini. Venus Libitina links Venus to a patron-goddess of funerals and undertakers, Libitina, who also became synonymous with death; a temple was dedicated to Venus Libitina in Libitina's grove on the Esquiline Hill, "hardly later than 300 BC."
Venus Murcia ("Venus of the Myrtle"), merging Venus with the little-known deity Murcia (or Murcus, or Murtia). Murcia was associated with Rome's Mons Murcia (the Aventine's lesser height), and had a shrine in the Circus Maximus. Some sources associate her with the myrtle-tree. Christian writers described her as a goddess of sloth and laziness.
Venus Obsequens ("Indulgent Venus"), Venus' first attested Roman epithet. It was used in the dedication of her first Roman temple, on August 19 in 295 BC during the Third Samnite War by Quintus Fabius Maximus Gurges. It was sited somewhere near the Aventine Hill and Circus Maximus, and played a central role in the Vinalia Rustica. It was supposedly funded by fines imposed on women found guilty of adultery.
Venus Physica: Venus as a universal, natural creative force that informs the physical world. She is addressed as "Alma Venus" ("Mother Venus") by Lucretius in the introductory lines of his vivid, poetic exposition of Epicurean physics and philosophy, De Rerum Natura. She seems to have been a favourite of Lucretius' patron, Memmius. Pompeii's protective goddess was Venus Physica Pompeiana, who had a distinctive, local form as a goddess of the sea, and trade. When Sulla captured Pompeii from the Samnites, he resettled it with his veterans and renamed it for his own family and divine protector Venus, as Colonia Veneria Cornelia (for Sulla's claims of Venus' favour, see Venus Felix above).
Venus Urania ("Heavenly Venus"), used as the title of a book by Basilius von Ramdohr, a relief by Pompeo Marchesi, and a painting by Christian Griepenkerl. (cf. Aphrodite Urania.)
Venus Verticordia ("Venus the Changer of Hearts"). See Festivals and Veneralia below.
Venus Victrix ("Venus the Victorious"), a Romanised aspect of the armed Aphrodite that Greeks had inherited from the East, where the goddess Ishtar "remained a goddess of war, and Venus could bring victory to a Sulla or a Caesar." Pompey vied with his patron Sulla and with Caesar for public recognition as her protégé. In 55 BC he dedicated a temple to her at the top of his theater in the Campus Martius. She had a shrine on the Capitoline Hill, and festivals on August 12 and October 9. A sacrifice was annually dedicated to her on the latter date. In neo-classical art, her epithet as Victrix is often used in the sense of 'Venus Victorious over men's hearts' or in the context of the Judgement of Paris (e.g. Canova's Venus Victrix, a half-nude reclining portrait of Pauline Bonaparte).
Cult history and temples
The first known temple to Venus was vowed to Venus Obsequens ("Indulgent Venus") by Q. Fabius Gurges in the heat of a battle against the Samnites. It was dedicated in 295 BC, at a site near the Aventine Hill, and was supposedly funded by fines imposed on Roman women for sexual misdemeanours. Its rites and character were probably influenced by or based on Greek Aphrodite's cults, which were already diffused in various forms throughout Italian Magna Graeca. Its dedication date connects Venus Obsequens to the Vinalia rustica festival.
In 217 BC, in the early stages of the Second Punic War with Carthage, Rome suffered a disastrous defeat at the battle of Lake Trasimene. The Sibylline oracle suggested that if the Venus of Eryx (a Roman understanding of the Punic goddess Astarte), patron goddess of Carthage's Sicilian allies, could be persuaded to change her allegiance, Carthage might be defeated. Rome laid siege to Eryx, offered its goddess a magnificent temple as reward for her defection, captured her image, and brought it to Rome. It was installed in a temple on the Capitoline Hill as one of Rome's twelve dii consentes. Shorn of her more overtly Carthaginian characteristics, this "foreign Venus" became Rome's Venus Genetrix ("Venus the Mother"). As far as the Romans were concerned, this was the homecoming of an ancestral goddess to her people. Roman tradition made Venus the mother and protector of the Trojan prince Aeneas, ancestor of the Roman people. Soon after, Rome's defeat of Carthage confirmed Venus's goodwill to Rome, her links to its mythical Trojan past, and her support of its political and military hegemony.
The Capitoline cult to Venus seems to have been reserved to higher status Romans. A separate cult to Venus Erycina as a fertility deity was established in 181 BC, in a traditionally plebeian district just outside Rome's sacred boundary, near the Colline Gate. The temple, cult and goddess probably retained much of the original's character and rites. Likewise, a shrine to Venus Verticordia ("Venus the changer of hearts"), established in 114 BC but with links to an ancient cult of Venus-Fortuna, was "bound to the peculiar milieu of the Aventine and the Circus Maximus" – a strongly plebeian context for Venus's cult, in contrast to her aristocratic cultivation as a Stoic and Epicurean "all-goddess".
Towards the end of the Roman Republic, some leading Romans laid personal claims to Venus' favour. The general and dictator Sulla adopted Felix ("Lucky") as a surname, acknowledging his debt to heaven-sent good fortune and his particular debt to Venus Felix, for his extraordinarily fortunate political and military career. His protégé Pompey competed for Venus' support, dedicating (in 55 BC) a large temple to Venus Victrix as part of his lavishly appointed new theatre, and celebrating his triumph of 54 BC with coins that showed her crowned with triumphal laurels.
Pompey's erstwhile friend, ally, and later opponent Julius Caesar went still further. He claimed the favours of Venus Victrix in his military success and Venus Genetrix as a personal, divine ancestress – apparently a long-standing family tradition among the Julii. When Caesar was assassinated, his heir, Augustus, adopted both claims as evidence of his inherent fitness for office, and divine approval of his rule. Augustus' new temple to Mars Ultor, divine father of Rome's legendary founder Romulus, would have underlined the point, with the image of avenging Mars "almost certainly" accompanied by that of his divine consort Venus, and possibly a statue of the deceased and deified Caesar.
Vitruvius recommends that any new temple to Venus be sited according to rules laid down by the Etruscan haruspices, and built "near to the gate" of the city, where it would be less likely to contaminate "the matrons and youth with the influence of lust". He finds the Corinthian style, slender, elegant, enriched with ornamental leaves and surmounted by volutes, appropriate to Venus' character and disposition. Vitruvius recommends the widest possible spacing between the temple columns, producing a light and airy space, and he offers Venus's temple in Caesar's forum as an example of how not to do it; the densely spaced, thickset columns darken the interior, hide the temple doors and crowd the walkways, so that matrons who wish to honour the goddess must enter her temple in single file, rather than arm-in-arm.
In 135 AD the Emperor Hadrian inaugurated a temple to Venus and Roma Aeterna (Eternal Rome) on Rome's Velian Hill, underlining the Imperial unity of Rome and its provinces, and making Venus the protective genetrix of the entire Roman state, its people and fortunes. It was the largest temple in Ancient Rome.
Festivals
Venus was offered official (state-sponsored) cult in certain festivals of the Roman calendar. Her sacred month was April (Latin Mensis Aprilis) which Roman etymologists understood to derive from aperire, "to open," with reference to the springtime blossoming of trees and flowers.
Veneralia (April 1) was held in honour of Venus Verticordia ("Venus the Changer of Hearts"), and Fortuna Virilis (Virile or strong Good Fortune), whose cult was probably by far the older of the two. Venus Verticordia was invented in 220 BC, in response to advice from a Sibylline oracle during Rome's Punic Wars, when a series of prodigies was taken to signify divine displeasure at sexual offenses among Romans of every category and class, including several men and three Vestal Virgins. Her statue was dedicated by a young woman, chosen as the most pudica (sexually pure) in Rome by a committee of Roman matrons. At first, this statue was probably housed in the temple of Fortuna Virilis, perhaps as divine reinforcement against the perceived moral and religious failings of its cult. In 114 BC Venus Verticordia was given her own temple. She was meant to persuade Romans of both sexes and every class, whether married or unmarried, to cherish the traditional sexual proprieties and morality known to please the gods and benefit the State. During her rites, her image was taken from her temple to the men's baths, where it was undressed and washed in warm water by her female attendants, then garlanded with myrtle. Women and men asked Venus Verticordia's help in affairs of the heart, sex, betrothal and marriage. For Ovid, Venus's acceptance of the epithet and its attendant responsibilities represented a change of heart in the goddess herself.
Vinalia urbana (April 23), a wine festival shared by Venus and Jupiter, king of the gods. It offered opportunity to supplicants to ask Venus' intercession with Jupiter, who was thought to be susceptible to her charms, and amenable to the effects of her wine. Venus was patron of "profane" wine, for everyday human use. Jupiter was patron of the strongest, purest, sacrificial grade wine, and controlled the weather on which the autumn grape-harvest would depend. At this festival, men and women alike drank the new vintage of ordinary, non-sacral wine (pressed at the previous year's vinalia rustica) in honour of Venus, whose powers had provided humankind with this gift. Upper-class women gathered at Venus's Capitoline temple, where a libation of the previous year's vintage, sacred to Jupiter, was poured into a nearby ditch. Common girls (vulgares puellae) and prostitutes gathered at Venus' temple just outside the Colline gate, where they offered her myrtle, mint, and rushes concealed in rose-bunches and asked her for "beauty and popular favour", and to be made "charming and witty".
Vinalia Rustica (August 19), originally a rustic Latin festival of wine, vegetable growth and fertility. This was almost certainly Venus' oldest festival and was associated with her earliest known form, Venus Obsequens. Kitchen gardens and market-gardens, and presumably vineyards were dedicated to her. Roman opinions differed on whose festival it was. Varro insists that the day was sacred to Jupiter, whose control of the weather governed the ripening of the grapes; but the sacrificial victim, a female lamb (agna), may be evidence that it once belonged to Venus alone.
A festival of Venus Genetrix (September 26) was held under state auspices from 46 BC at her Temple in the Forum of Caesar, in fulfillment of a vow by Julius Caesar, who claimed her personal favour as his divine patron, and ancestral goddess of the Julian clan. Caesar dedicated the temple during his extraordinarily lavish quadruple triumph. At the same time, he was pontifex maximus and Rome's senior magistrate; the festival is thought to mark the unprecedented promotion of a personal, family cult to one of the Roman state. Caesar's heir, Augustus, made much of these personal and family associations with Venus as an Imperial deity. The festival's rites are not known.
Mythology and literature
As with most major gods and goddesses in Roman mythology, the literary concept of Venus is mantled in whole-cloth borrowings from the literary Greek mythology of her counterpart, Aphrodite. In some Latin mythology, Cupid was the son of Venus and Mars, the god of war. At other times, or in parallel myths and theologies, Venus was understood to be the consort of Vulcan. Virgil, in compliment to his patron Augustus and the gens Julia, embellished an existing connection between Venus, whom Julius Caesar had adopted as his protectress, and Aeneas. Virgil's Aeneas is guided to Latium by Venus in her heavenly form, the morning star, shining brightly before him in the daylight sky; much later, she lifts Caesar's soul to heaven. In Ovid's Fasti Venus came to Rome because she "preferred to be worshipped in the city of her own offspring". In Virgil's poetic account of Octavian's victory at the sea-battle of Actium, the future emperor is allied with Venus, Neptune and Minerva. Octavian's opponents, Antony, Cleopatra and the Egyptians, assisted by bizarre and unhelpful Egyptian deities such as "barking" Anubis, lose the battle.
In the interpretatio romana of the Germanic pantheon during the early centuries AD, Venus became identified with the Germanic goddess Frijjo, giving rise to the loan translation "Friday" for dies Veneris.
Iconography
Signs and symbols
Images of Venus have been found in domestic murals, mosaics and household shrines (lararia). Petronius, in his Satyricon, places an image of Venus among the Lares (household gods) of the freedman Trimalchio's lararium.
Venus' signs were for the most part the same as Aphrodite's. They include roses, which were offered in Venus' Porta Collina rites, and above all, myrtle (Latin myrtus), which was cultivated for its white, sweetly scented flowers, aromatic, evergreen leaves and its various medical-magical properties. Venus' statues, and her worshipers, wore myrtle crowns at her festivals. Before its adoption into Venus' cults, myrtle was used in the purification rites of Cloacina, the Etruscan-Roman goddess of Rome's main sewer; later, Cloacina's association with Venus' sacred plant made her Venus Cloacina. Likewise, Roman folk-etymology transformed the ancient, obscure goddess Murcia into "Venus of the Myrtles, whom we now call Murcia".
Myrtle was thought a particularly potent aphrodisiac. As goddess of love and sex, Venus played an essential role at Roman prenuptial rites and wedding nights, so myrtle and roses were used in bridal bouquets. Marriage itself was not a seduction but a lawful condition, under Juno's authority; so myrtle was excluded from the bridal crown. Venus was also a patron of the ordinary, everyday wine drunk by most Roman men and women; the seductive powers of wine were well known. In the rites to Bona Dea, a goddess of female chastity, Venus, myrtle and anything male were not only excluded, but unmentionable. The rites allowed women to drink the strongest, sacrificial wine, otherwise reserved for the Roman gods and Roman men; the women euphemistically referred to it as "honey". Under these special circumstances, they could get virtuously, religiously drunk on strong wine, safe from Venus' temptations. Outside of this context, ordinary wine (that is, Venus' wine) tinctured with myrtle oil was thought particularly suitable for women.
Roman generals given an ovation, a lesser form of Roman triumph, wore a myrtle crown, perhaps to purify themselves and their armies of blood-guilt. The ovation ceremony was assimilated to Venus Victrix ("Victorious Venus"), who was held to have granted and purified its relatively "easy" victory.
Classical art
Roman and Hellenistic art produced many variations on the goddess, often based on the Praxitelean type Aphrodite of Cnidus. Many female nudes from this period of sculpture whose subjects are unknown are conventionally called "Venuses" in modern art history, even though they may originally have portrayed a mortal woman rather than served as a cult statue of the goddess.
Examples include:
Venus de Milo (130 BC)
Venus Pudica
Capitoline Venus
Venus de' Medici
Esquiline Venus
Venus Felix
Venus of Arles
Venus Anadyomene (also here)
Venus, Pan and Eros
Venus Genetrix
Venus of Capua
Venus Kallipygos
The Venus types 'Venus Pompeiana' and 'Venus Pescatrice' are found almost exclusively in Pompeii.
Post-classical culture
Medieval art
Venus is remembered in De Mulieribus Claris, a collection of biographies of historical and mythological women by the Florentine author Giovanni Boccaccio, composed in 1361–62. It is notable as the first collection devoted exclusively to biographies of women in Western literature.
Art in the classical tradition
Venus became a popular subject of painting and sculpture during the Renaissance period in Europe. As a "classical" figure for whom nudity was her natural state, it was socially acceptable to depict her unclothed. As the goddess of sexuality, a degree of erotic beauty in her presentation was justified, which appealed to many artists and their patrons. Over time, Venus came to refer to any artistic depiction in post-classical art of a nude woman, even when there was no indication that the subject was the goddess.
The Birth of Venus (Botticelli) (c. 1485)
Sleeping Venus (c. 1501)
Venus of Urbino (1538)
Venus with a Mirror (c. 1555)
Rokeby Venus (1647–1651)
Olympia (1863)
The Birth of Venus (Cabanel) (1863)
The Birth of Venus (Bouguereau) (1879)
Venus of Cherchell, Gsell museum in Algeria
Venus Victrix, and Venus Italica by Antonio Canova
In the field of prehistoric art, since the discovery in 1908 of the so-called "Venus of Willendorf", small Paleolithic sculptures of rounded female forms have been conventionally referred to as Venus figurines. Although the name of the actual deity is not known, the knowing contrast between the obese and fertile cult figures and the classical conception of Venus has raised resistance to the terminology.
Medieval and modern music
In Wagner's opera Tannhäuser, which draws on the medieval German legend of the knight and poet Tannhäuser, Venus lives beneath the Venusberg mountain. Tannhäuser breaks his knightly vows by spending a year there with Venus, under her enchantment. When he emerges, he has to seek penance for his sins.
The Dutch band Shocking Blue had a number one hit on the Billboard Hot 100 in 1970 with the song titled "Venus", which was also a hit when covered by Bananarama in 1986. The song "Venus" by the band Television, from the 1978 album Marquee Moon, references the Venus de Milo. There is also a song named "Venus" co-written, co-produced and sung by Lady Gaga; a song named "Birth of Venus Illegitima" by the Swedish symphonic metal band Therion, on the album Vovin; and the song "Venus as a Boy" by the Icelandic artist Björk. Another reference to Venus is from Billy Idol's album Cyberpunk, in the track "Venus".
See also
Love goddess
Planets in astrology#Venus
Hottentot Venus
Venus (planet)
Venus symbol
Trusted Computing
Trusted Computing (TC), also often referred to as Confidential Computing, is a technology developed and promoted by the Trusted Computing Group. The term is taken from the field of trusted systems and has a specialized meaning. The core idea of trusted computing is to give hardware manufacturers control over what software does and does not run on a system by refusing to run unsigned software. With Trusted Computing, the computer will consistently behave in expected ways, and those behaviors will be enforced by computer hardware and software. Enforcing this behavior is achieved by loading the hardware with a unique encryption key that is inaccessible to the rest of the system and the owner.
TC is controversial as the hardware is not only secured for its owner, but also secured against its owner. Such controversy has led opponents of trusted computing, such as free software activist Richard Stallman, to refer to it instead as treacherous computing, even to the point where some scholarly articles have begun to place scare quotes around "trusted computing".
Trusted Computing proponents such as International Data Corporation, the Enterprise Strategy Group and Endpoint Technologies Associates claim the technology will make computers safer, less prone to viruses and malware, and thus more reliable from an end-user perspective. They also claim that Trusted Computing will allow computers and servers to offer improved computer security over that which is currently available. Opponents often claim this technology will be used primarily to enforce digital rights management policies (imposed restrictions to the owner) and not to increase computer security.
Chip manufacturers Intel and AMD, hardware manufacturers such as HP and Dell, and operating system providers such as Microsoft include Trusted Computing in their products if enabled. The U.S. Army requires that every new PC it purchases comes with a Trusted Platform Module (TPM). As of July 3, 2007, so does virtually the entire United States Department of Defense.
In 2019, the Confidential Computing Consortium (CCC) was established by the Linux Foundation with the mission to "improve security for data in use". The consortium now has over 40 members, including Microsoft, Intel, Baidu, Red Hat, and Meta.
Key concepts
Trusted Computing encompasses six key technology concepts, all of which are required for a fully trusted system, that is, a system compliant with the TCG specifications:
Endorsement key
Secure input and output
Memory curtaining / protected execution
Sealed storage
Remote attestation
Trusted Third Party (TTP)
Endorsement key
The endorsement key is a 2048-bit RSA public and private key pair that is created randomly on the chip at manufacture time and cannot be changed. The private key never leaves the chip, while the public key is used for attestation and for encryption of sensitive data sent to the chip, as occurs during the TPM_TakeOwnership command.
This key is used to allow the execution of secure transactions: every Trusted Platform Module (TPM) is required to be able to sign a random number (in order to allow the owner to show that he has a genuine trusted computer), using a particular protocol created by the Trusted Computing Group (the direct anonymous attestation protocol), in order to ensure its compliance with the TCG standard and to prove its identity. This makes it impossible for a software TPM emulator with an untrusted endorsement key (for example, a self-generated one) to start a secure transaction with a trusted entity. The TPM should be designed to make the extraction of this key by hardware analysis hard, but tamper resistance is not a strong requirement.
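The challenge-response idea behind the endorsement key can be illustrated with a toy sketch. This uses textbook RSA with tiny fixed primes purely for illustration (a real TPM uses a 2048-bit key and a dedicated protocol): the private exponent never leaves the "chip", while any verifier can check a signature over a fresh nonce using only the public part.

```python
import hashlib

# Toy illustration of the endorsement-key idea (NOT real TPM code).
# Textbook RSA with tiny primes; all constants are illustrative.
P, Q = 61, 53
N = P * Q            # public modulus
E = 17               # public exponent
D = 2753             # private exponent (E * D = 1 mod lcm(P-1, Q-1))

def ek_sign(nonce: bytes) -> int:
    """The 'TPM' signs a hash of the verifier's nonce with the private key."""
    m = int.from_bytes(hashlib.sha256(nonce).digest(), "big") % N
    return pow(m, D, N)

def ek_verify(nonce: bytes, signature: int) -> bool:
    """A remote party checks the signature using only the public key (N, E)."""
    m = int.from_bytes(hashlib.sha256(nonce).digest(), "big") % N
    return pow(signature, E, N) == m

challenge = b"verifier-nonce-1234"
sig = ek_sign(challenge)
assert ek_verify(challenge, sig)
```

The fresh nonce ensures the response cannot be replayed from an earlier session; only hardware holding the private exponent can produce a valid signature over it.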
Memory curtaining
Memory curtaining extends common memory protection techniques to provide full isolation of sensitive areas of memory—for example, locations containing cryptographic keys. Even the operating system does not have full access to curtained memory. The exact implementation details are vendor specific.
Sealed storage
Sealed storage protects private information by binding it to platform configuration information, including the software and hardware being used. This means the data can be released only to a particular combination of software and hardware. Sealed storage can be used to enforce DRM. For example, users who keep a song on their computer that has not been licensed to be listened to will not be able to play it. Currently, a user can locate the song, listen to it, send it to someone else, play it in the software of their choice, or back it up (and in some cases, use circumvention software to decrypt it). Alternatively, the user may use software to modify the operating system's DRM routines to have it leak the song data once, say, a temporary license was acquired. Using sealed storage, the song is securely encrypted using a key bound to the trusted platform module, so that only the unmodified and untampered music player on his or her computer can play it. In this DRM architecture, this might also prevent people from listening to the song after buying a new computer, or upgrading parts of their current one, except after explicit permission of the vendor of the song.
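A minimal sketch of the sealing idea, assuming a PCR-style running hash of measured software components (component names are hypothetical; a real TPM uses the TPM_Seal command and proper authenticated encryption, not the toy XOR cipher below):

```python
import hashlib, hmac

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """PCR-extend: new PCR = H(old PCR || H(measurement))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def platform_state(components) -> bytes:
    """Fold all boot-chain measurements into one 32-byte state value."""
    pcr = bytes(32)
    for c in components:
        pcr = extend(pcr, c)
    return pcr

def seal(secret: bytes, pcr: bytes) -> bytes:
    """Encrypt a short secret under a key derived from the platform state."""
    key = hashlib.sha256(b"seal" + pcr).digest()
    stream = hashlib.sha256(b"ks" + key).digest()
    ct = bytes(a ^ b for a, b in zip(secret, stream))
    tag = hmac.new(key, ct, hashlib.sha256).digest()
    return ct + tag

def unseal(blob: bytes, pcr: bytes) -> bytes:
    """Recover the secret only if the same platform state is reproduced."""
    key = hashlib.sha256(b"seal" + pcr).digest()
    ct, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
        raise ValueError("platform state changed: unseal refused")
    stream = hashlib.sha256(b"ks" + key).digest()
    return bytes(a ^ b for a, b in zip(ct, stream))

good = platform_state([b"bios v1", b"bootloader", b"trusted-player"])
bad = platform_state([b"bios v1", b"bootloader", b"patched-player"])
blob = seal(b"song-decryption-key", good)
assert unseal(blob, good) == b"song-decryption-key"
```

Because the sealing key is derived from the measurements, swapping in a patched player changes the PCR value and unsealing is refused, which is the mechanism behind the DRM scenario above.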
Remote attestation
Remote attestation allows changes to the user's computer to be detected by authorized parties. For example, software companies can identify unauthorized changes to software, including users tampering with their software to circumvent commercial digital rights restrictions. It works by having the hardware generate a certificate stating what software is currently running. The computer can then present this certificate to a remote party to show that unaltered software is currently executing. Numerous remote attestation schemes have been proposed for various computer architectures, including Intel, RISC-V, and ARM.
Remote attestation is usually combined with public-key encryption so that the information sent can only be read by the programs that requested the attestation, and not by an eavesdropper.
To take the song example again, the user's music player software could send the song to other machines, but only if they could attest that they were running an authorized copy of the music player software. Combined with the other technologies, this provides a more restricted path for the music: encrypted I/O prevents the user from recording it as it is transmitted to the audio subsystem, memory locking prevents it from being dumped to regular disk files as it is being worked on, sealed storage curtails unauthorized access to it when saved to the hard drive, and remote attestation prevents unauthorized software from accessing the song even when it is used on other computers. To preserve the privacy of attestation responders, Direct Anonymous Attestation has been proposed as a solution, which uses a group signature scheme to prevent revealing the identity of individual signers.
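The attestation exchange described above can be sketched as follows. Only the structure is shown (a reported PCR value bound to a verifier-chosen nonce for freshness); the HMAC key stands in for the real attestation key pair purely to keep the sketch self-contained, and all names are hypothetical.

```python
import hashlib, hmac, json, os

AIK = b"stand-in-for-attestation-identity-key"   # hypothetical key material
KNOWN_GOOD_PCR = hashlib.sha256(b"approved software stack").hexdigest()

def make_quote(pcr_hex: str, nonce: bytes) -> dict:
    """The platform signs its reported PCR value together with the nonce."""
    payload = json.dumps({"pcr": pcr_hex, "nonce": nonce.hex()}).encode()
    return {"pcr": pcr_hex, "nonce": nonce.hex(),
            "sig": hmac.new(AIK, payload, hashlib.sha256).hexdigest()}

def verify_quote(quote: dict, nonce: bytes) -> bool:
    """Check signature, freshness, and that the stack matches a known-good value."""
    payload = json.dumps({"pcr": quote["pcr"], "nonce": quote["nonce"]}).encode()
    ok_sig = hmac.compare_digest(
        quote["sig"], hmac.new(AIK, payload, hashlib.sha256).hexdigest())
    fresh = quote["nonce"] == nonce.hex()       # replay protection
    trusted = quote["pcr"] == KNOWN_GOOD_PCR    # software unmodified?
    return ok_sig and fresh and trusted

nonce = os.urandom(16)
assert verify_quote(make_quote(KNOWN_GOOD_PCR, nonce), nonce)
assert not verify_quote(make_quote("deadbeef", nonce), nonce)  # tampered stack
```

In the song example, the sender would only release the file to a peer whose quote both verifies and reports the approved player's measurement.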
Proof of space (PoSpace) has been proposed for malware detection, by determining whether the L1 cache of a processor is empty (e.g., has enough space to evaluate the PoSpace routine without cache misses) or contains a routine that resisted being evicted.
Trusted third party
One of the main obstacles that had to be overcome by the developers of the TCG technology was how to maintain anonymity while still providing a “trusted platform”. The main object of obtaining “trusted mode” is that the other party (Bob), with whom a computer (Alice) may be communicating, can trust that Alice is running un-tampered hardware and software. This will assure Bob that Alice will not be able to use malicious software to compromise sensitive information on the computer. Unfortunately, in order to do this, Alice has to inform Bob that she is using registered and “safe” software and hardware, thereby potentially uniquely identifying herself to Bob.
This might not be a problem where one wishes to be identified by the other party, e.g., during banking transactions over the Internet. But in many other types of communicating activities people enjoy the anonymity that the computer provides. The TCG acknowledges this, and has allegedly developed a process of attaining such anonymity while at the same time assuring the other party that he or she is communicating with a "trusted" party. This was done by developing a “trusted third party”. This entity works as an intermediary between a user and his own computer and between a user and other users. The focus here is on the latter process, referred to as remote attestation.
When a user requires an AIK (Attestation Identity Key) the user wants its key to be certified by a CA (Certification Authority). The user through a TPM (Trusted Platform Module) sends three credentials: a public key credential, a platform credential, and a conformance credential. This set of certificates and cryptographic keys will in short be referred to as "EK". The EK can be split into two main parts, the private part "EKpr" and the public part "EKpub". The EKpr never leaves the TPM.
Disclosure of the EKpub is, however, necessary (version 1.1). The EKpub will uniquely identify the endorser of the platform, the model, what kind of software is currently being used on the platform, details of the TPM, and that the platform (PC) complies with the TCG specifications. If this information were communicated directly to another party as part of the process of attaining trusted status, it would at the same time be impossible to obtain an anonymous identity. Therefore, this information is sent to the privacy certification authority (trusted third party). When the CA (privacy certification authority) receives the EKpub sent by the TPM, the CA verifies the information. If the information can be verified, it will create a certified secondary key pair, the AIK, and send this credential back to the requestor. This is intended to provide the user with anonymity. Once the user has this certified AIK, he or she can use it to communicate with other trusted platforms.
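The v1.1 Privacy CA flow described above can be sketched as follows. All names are hypothetical, and a real CA would issue public-key certificates rather than the shared-secret tag used here for brevity; the point is that only the CA ever sees the identifying EKpub, while other parties see just the certified AIK.

```python
import hashlib, os

KNOWN_EKS = {"ek-pub-from-vendor-A"}   # hypothetical list of genuine manufacturer EKs
CA_SECRET = os.urandom(32)             # stands in for the CA's signing key

def ca_certify(ek_pub: str, aik_pub: str):
    """The Privacy CA checks the EKpub and, if genuine, certifies a fresh AIK."""
    if ek_pub not in KNOWN_EKS:        # software emulator / self-generated EK: refuse
        return None
    return hashlib.sha256(CA_SECRET + aik_pub.encode()).hexdigest()

def verifier_check(aik_pub: str, cert: str) -> bool:
    """A remote party validates the AIK certificate without learning the EKpub.
    (In reality this would use the CA's public key; the secret is shared here
    only to keep the sketch self-contained.)"""
    return cert == hashlib.sha256(CA_SECRET + aik_pub.encode()).hexdigest()

aik = "fresh-aik-pub-0001"
cert = ca_certify("ek-pub-from-vendor-A", aik)
assert cert is not None and verifier_check(aik, cert)
assert ca_certify("self-generated-ek", aik) is None   # emulator rejected
```

The anonymity property rests entirely on the CA not linking the EKpub to the AIK it certifies, which is the weakness that DAA in version 1.2 was designed to remove.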
In version 1.2, the TCG has developed a new method of obtaining a certified AIK. This process is called Direct Anonymous Attestation (DAA). This method does not require the user to disclose his/her EKpub to the TTP. The unique new feature of DAA is that it has the ability to convince the remote entity that a particular TPM (trusted platform module) is a valid TPM without disclosing the EKpub or any other unique identifier. Before the TPM can send a certification request for an AIK to the remote entity, the TPM has to generate a set of DAA credentials. This can only be done by interacting with an issuer. The DAA credentials are created from a TPM-unique secret that remains within the TPM. The TPM secret is similar but not analogous to the EK. When the TPM has obtained a set of DAA credentials, it can send these to the Verifier. When the Verifier receives the DAA credentials, it will verify them and send a certified AIK back to the user. The user will then be able to communicate with other trusted parties using the certified AIK. The Verifier may or may not be a trusted third party (TTP). The Verifier can determine whether the DAA credentials are valid, but the DAA credentials do not contain any unique information that discloses the TPM platform. An example would be where a user wants trusted status and sends a request to the Issuer. The Issuer could be the manufacturer of the user's platform, e.g. Compaq. Compaq would check whether the TPM it has produced is a valid one, and if so, issue DAA credentials. In the next step, the DAA credentials are sent by the user to the Verifier. As mentioned, this might be a standard TTP, but could also be a different entity. If the Verifier accepts the DAA credentials supplied, it will produce a certified AIK. The certified AIK will then be used by the user to communicate with other trusted platforms. In summary, the new version introduces a separate entity that will assist in the anonymous attestation process.
By introducing the Issuer, which supplies the DAA credentials, one is able to sufficiently protect the user's anonymity towards the Verifier/TTP. The Issuer will most commonly be the platform manufacturer. Without such credentials, it would probably be difficult for a private customer, small business, or organization to convince others that they have a genuine trusted platform.
Known applications
The Microsoft products Windows Vista, Windows 7, Windows 8 and Windows RT make use of a Trusted Platform Module to facilitate BitLocker Drive Encryption. Other known applications with runtime encryption and the use of secure enclaves include the Signal messenger and the e-prescription service ("E-Rezept") by the German government.
Possible applications
Digital rights management
Trusted Computing would allow companies to create a digital rights management (DRM) system which would be very hard to circumvent, though not impossible. An example is downloading a music file. Sealed storage could be used to prevent the user from opening the file with an unauthorized player or computer. Remote attestation could be used to authorize play only by music players that enforce the record company's rules. The music would be played from curtained memory, which would prevent the user from making an unrestricted copy of the file while it is playing, and secure I/O would prevent capturing what is being sent to the sound system. Circumventing such a system would require either manipulation of the computer's hardware, capturing the analogue (and thus degraded) signal using a recording device or a microphone, or breaking the security of the system.
New business models for use of software (services) over the Internet may be boosted by the technology. By strengthening the DRM system, one could base a business model on renting programs for specific time periods or on "pay as you go" models. For instance, one could download a music file which could only be played a certain number of times before it becomes unusable, or the music file could be used only within a certain time period.
Preventing cheating in online games
Trusted Computing could be used to combat cheating in online games. Some players modify their game copy in order to gain unfair advantages in the game; remote attestation, secure I/O and memory curtaining could be used to determine that all players connected to a server were running an unmodified copy of the software.
Verification of remote computation for grid computing
Trusted Computing could be used to guarantee that participants in a grid computing system are returning the results of the computations they claim to be returning instead of forging them. This would allow large-scale simulations to be run (say, a climate simulation) without expensive redundant computations to guarantee that malicious hosts are not undermining the results to achieve the conclusion they want.
Criticism
Trusted Computing opponents such as the Electronic Frontier Foundation and Free Software Foundation claim trust in the underlying companies is not deserved and that the technology puts too much power and control into the hands of those who design systems and software. They also believe that it may cause consumers to lose anonymity in their online interactions, as well as mandating technologies Trusted Computing opponents say are unnecessary. They suggest Trusted Computing as a possible enabler for future versions of mandatory access control, copy protection, and DRM.
Some security experts, such as Alan Cox and Bruce Schneier, have spoken out against Trusted Computing, believing it will provide computer manufacturers and software authors with increased control to impose restrictions on what users are able to do with their computers. There are concerns that Trusted Computing would have an anti-competitive effect on the IT market.
There is concern amongst critics that it will not always be possible to examine the hardware components on which Trusted Computing relies, the Trusted Platform Module, which is the ultimate hardware system where the core 'root' of trust in the platform has to reside. If not implemented correctly, it presents a security risk to overall platform integrity and protected data. The specifications, as published by the Trusted Computing Group, are open and available for anyone to review. However, the final implementations by commercial vendors will not necessarily be subjected to the same review process. In addition, the world of cryptography can often move quickly, and hardware implementations of algorithms might create an inadvertent obsolescence. Trusting networked computers to controlling authorities rather than to individuals may create digital imprimaturs.
Cryptographer Ross Anderson, University of Cambridge, has great concerns that:
TC can support remote censorship [...] In general, digital objects created using TC systems remain under the control of their creators, rather than under the control of the person who owns the machine on which they happen to be stored [...] So someone who writes a paper that a court decides is defamatory can be compelled to censor it — and the software company that wrote the word processor could be ordered to do the deletion if she refuses. Given such possibilities, we can expect TC to be used to suppress everything from pornography to writings that criticize political leaders.
He goes on to state that:
[...] software suppliers can make it much harder for you to switch to their competitors' products. At a simple level, Word could encrypt all your documents using keys that only Microsoft products have access to; this would mean that you could only read them using Microsoft products, not with any competing word processor. [...]
The [...] most important benefit for Microsoft is that TC will dramatically increase the costs of switching away from Microsoft products (such as Office) to rival products (such as OpenOffice). For example, a law firm that wants to change from Office to OpenOffice right now merely has to install the software, train the staff and convert their existing files. In five years' time, once they have received TC-protected documents from perhaps a thousand different clients, they would have to get permission (in the form of signed digital certificates) from each of these clients in order to migrate their files to a new platform. The law firm won't in practice want to do this, so they will be much more tightly locked in, which will enable Microsoft to hike its prices.
Anderson summarizes the case by saying:
The fundamental issue is that whoever controls the TC infrastructure will acquire a huge amount of power. Having this single point of control is like making everyone use the same bank, or the same accountant, or the same lawyer. There are many ways in which this power could be abused.
Digital rights management
One of the early motivations behind trusted computing was a desire by media and software corporations for stricter DRM technology to prevent users from freely sharing and using potentially copyrighted or private files without explicit permission.
An example could be downloading a music file from a band: the band's record company could come up with rules for how the band's music can be used. For example, they might want the user to play the file only three times a day without paying additional money. Also, they could use remote attestation to only send their music to a music player that enforces their rules: sealed storage would prevent the user from opening the file with another player that did not enforce the restrictions. Memory curtaining would prevent the user from making an unrestricted copy of the file while it is playing, and secure output would prevent capturing what is sent to the sound system.
Users unable to modify software
A user who wanted to switch to a competing program might find that it would be impossible for that new program to read old data, as the information would be "locked in" to the old program. It could also make it impossible for the user to read or modify their data except as specifically permitted by the software.
Remote attestation could cause other problems. Currently, web sites can be visited using a number of web browsers, though certain websites may be formatted such that some browsers cannot decipher their code. Some browsers have found a way to get around that problem by emulating other browsers. With remote attestation, a website could check the internet browser being used and refuse to display on any browser other than the specified one (like Internet Explorer), so even emulating the browser would not work.
Users unable to exercise legal rights
The law in many countries allows users certain rights over data whose copyright they do not own (including text, images, and other media), often under headings such as fair use or public interest. Depending on jurisdiction, these may cover issues such as whistleblowing, production of evidence in court, quoting or other small-scale usage, backups of owned media, and making a copy of owned material for personal use on other owned devices or systems. The steps implicit in trusted computing have the practical effect of preventing users exercising these legal rights.
Users vulnerable to vendor withdrawal of service
A service that requires external validation or permission, such as a music file or game that requires connection with the vendor to confirm permission to play or use, is vulnerable to that service being withdrawn or no longer updated. A number of incidents have already occurred where users, having purchased music or video media, have found their ability to watch or listen to it suddenly stop due to vendor policy, cessation of service, or server inaccessibility, at times with no compensation. Alternatively, in some cases the vendor refuses to provide services in the future, which leaves purchased material usable only on the present (and increasingly obsolete) hardware for as long as it lasts, but not on any hardware that may be purchased in the future.
Users unable to override
Some opponents of Trusted Computing advocate "owner override": allowing an owner who is confirmed to be physically present to allow the computer to bypass restrictions and use the secure I/O path. Such an override would allow remote attestation to a user's specification, e.g., to create certificates that say Internet Explorer is running, even if a different browser is used. Instead of preventing software change, remote attestation would indicate when the software has been changed without owner's permission.
Trusted Computing Group members have refused to implement owner override. Proponents of trusted computing believe that owner override defeats the trust in other computers since remote attestation can be forged by the owner. Owner override offers the security and enforcement benefits to a machine owner, but does not allow him to trust other computers, because their owners could waive rules or restrictions on their own computers. Under this scenario, once data is sent to someone else's computer, whether it be a diary, a DRM music file, or a joint project, that other person controls what security, if any, their computer will enforce on their copy of those data. This has the potential to undermine the applications of trusted computing to enforce DRM, control cheating in online games and attest to remote computations for grid computing.
Loss of anonymity
Because a Trusted Computing equipped computer is able to uniquely attest to its own identity, it will be possible for vendors and others who possess the ability to use the attestation feature to zero in on the identity of the user of TC-enabled software with a high degree of certainty.
Such a capability is contingent on the reasonable chance that the user at some time provides user-identifying information, whether voluntarily, indirectly, or simply through inference of many seemingly benign pieces of data. (e.g. search records, as shown through simple study of the AOL search records leak). One common way that information can be obtained and linked is when a user registers a computer just after purchase. Another common way is when a user provides identifying information to the website of an affiliate of the vendor.
While proponents of TC point out that online purchases and credit transactions could potentially be more secure as a result of the remote attestation capability, this may cause the computer user to lose expectations of anonymity when using the Internet.
Critics point out that this could have a chilling effect on political free speech, the ability of journalists to use anonymous sources, whistle blowing, political blogging and other areas where the public needs protection from retaliation through anonymity.
The TPM specification offers features and suggested implementations that are meant to address the anonymity requirement. By using a third-party Privacy Certification Authority (PCA), the information that identifies the computer could be held by a trusted third party. Additionally, the use of direct anonymous attestation (DAA), introduced in TPM v1.2, allows a client to perform attestation while not revealing any personally identifiable or machine information.
The kind of data that must be supplied to the TTP in order to get trusted status is at present not entirely clear, but the TCG itself admits that "attestation is an important TPM function with significant privacy implications". It is, however, clear that both static and dynamic information about the user's computer may be supplied (EKpub) to the TTP (v1.1b); it is not clear what data will be supplied to the “verifier” under v1.2. The static information will uniquely identify the endorser of the platform, the model, details of the TPM, and that the platform (PC) complies with the TCG specifications. The dynamic information is described as software running on the computer. If a program like Windows is registered in the user's name, this in turn will uniquely identify the user. Another dimension of privacy-infringing capabilities might also be introduced with this new technology; for instance, how often you use your programs might be among the information provided to the TTP. In an exceptional but practical situation, where a user purchases a pornographic movie on the Internet, the purchaser nowadays must accept the fact that he has to provide credit card details to the provider, thereby possibly risking being identified. With the new technology a purchaser might also risk someone finding out that he (or she) has watched this pornographic movie 1000 times. This adds a new dimension to the possible privacy infringement. The extent of data that will be supplied to the TTP/Verifiers is at present not exactly known; only when the technology is implemented and used will we be able to assess the exact nature and volume of the data that is transmitted.
TCG specification interoperability problems
Trusted Computing requires that all software and hardware vendors follow the technical specifications released by the Trusted Computing Group in order to allow interoperability between different trusted software stacks. However, since at least mid-2006, there have been interoperability problems between the TrouSerS trusted software stack (released as open source software by IBM) and Hewlett-Packard's stack. Another problem is that the technical specifications are still changing, so it is unclear which is the standard implementation of the trusted stack.
Shutting out of competing products
People have voiced concerns that trusted computing could be used to keep or discourage users from running software created by companies outside of a small industry group. Microsoft has received a great deal of bad press surrounding their Palladium software architecture, evoking comments such as "Few pieces of vaporware have evoked a higher level of fear and uncertainty than Microsoft's Palladium", "Palladium is a plot to take over cyberspace", and "Palladium will keep us from running any software not personally approved by Bill Gates". The concerns about trusted computing being used to shut out competition exist within broader consumer concerns about vendors using product bundling to obscure prices and to engage in anti-competitive practices. Trusted Computing is seen as harmful or problematic to independent and open-source software developers.
Trust
In the widely used public-key cryptography, keys can be created on the local computer, and the creator has complete control over who has access to them and, consequently, over their own security policies. In some proposed encryption-decryption chips, a private/public key pair is permanently embedded into the hardware when it is manufactured, and hardware manufacturers would have the opportunity to record the key without leaving evidence of doing so. With this key it would be possible to access data encrypted with it and to authenticate as its holder. It would be trivial for a manufacturer to give a copy of this key to the government or to software manufacturers, as the platform must go through steps so that it works with authenticated software.
Therefore, to trust anything that is authenticated by or encrypted by a TPM or a trusted computer, an end user has to trust the company that made the chip, the company that designed the chip, the companies allowed to make software for the chip, and the ability and interest of those companies not to compromise the whole process. A security breach breaking that chain of trust happened to SIM card manufacturer Gemalto, which in 2010 was infiltrated by US and British spies, resulting in compromised security of cellphone calls.
It is also critical that one be able to trust that the hardware manufacturers and software developers properly implement trusted computing standards. Incorrect implementation could be hidden from users, and thus could undermine the integrity of the whole system without users being aware of the flaw.
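The "chain of trust" discussed above rests on hash-chained measurements. The sketch below is not any vendor's implementation; it merely illustrates the well-known TPM-style extend operation, in which each boot component's digest is folded into a Platform Configuration Register (PCR) so that the final value depends on every measured component and their order:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM 1.2-style extend: new PCR = SHA-1(old PCR || SHA-1(measurement))."""
    digest = hashlib.sha1(measurement).digest()
    return hashlib.sha1(pcr + digest).digest()

# PCRs start zeroed at power-on; each boot component is measured in turn.
pcr = bytes(20)
for component in [b"firmware", b"bootloader", b"kernel"]:
    pcr = extend(pcr, component)

# Changing or reordering any component yields a different final PCR value,
# which is what a remote verifier would notice during attestation.
print(pcr.hex())
```

Because the extend operation is one-way and order-sensitive, a verifier who knows the expected digests can detect any substituted or reordered component from the single final PCR value.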
Hardware and software support
Since 2004, most major manufacturers have shipped systems that have included Trusted Platform Modules, with associated BIOS support. In accordance with the TCG specifications, the user must enable the Trusted Platform Module before it can be used.
The Linux kernel has included trusted computing support since version 2.6.13, and there are several projects to implement trusted computing for Linux. In January 2005, members of Gentoo Linux's "crypto herd" announced their intention of providing support for TC—in particular support for the Trusted Platform Module. There is also a TCG-compliant software stack for Linux named TrouSerS, released under an open source license.
Some limited form of trusted computing can be implemented on current versions of Microsoft Windows with third-party software.
With Intel Software Guard Extensions (SGX) and AMD Secure Encrypted Virtualization (SEV) processors, hardware is available for runtime memory encryption and remote attestation features.
Major cloud providers such as Microsoft Azure, AWS and Google Cloud Platform have virtual machines with trusted computing features available.
There are several open-source projects that facilitate the use of confidential computing technology. These include EGo, EdgelessDB and MarbleRun from Edgeless Systems, as well as Enarx, which originates from security research at Red Hat.
The Intel Classmate PC (a competitor to the One Laptop Per Child) includes a Trusted Platform Module.
PrivateCore vCage software can be used to attest x86 servers with TPM chips.
Mobile T6 secure operating system simulates the TPM functionality in mobile devices using the ARM TrustZone technology.
Samsung smartphones come equipped with Samsung Knox, which depends on features such as Secure Boot, TIMA, MDM, TrustZone and SE Linux.
See also
Glossary of legal terms in technology
Hardware restrictions
Next-Generation Secure Computing Base (formerly known as Palladium)
Trusted Computing Group
Trusted Network Connect
Trusted Platform Module
References
External links
Cryptography
Copyright law
Microsoft Windows security technology |
30797416 | https://en.wikipedia.org/wiki/Simcad%20Pro | Simcad Pro | Simcad Pro simulation software is a product of CreateASoft Inc. used for simulating process-based environments including manufacturing, warehousing, supply lines, logistics, and healthcare. It is a tool used for planning, organizing, optimizing, and engineering real process-based systems. Simcad Pro allows the creation of a virtual computer model, which can be manipulated by the user and represents a real environment. Using the model, it is possible to test for efficiency as well as locate points of improvement among the process flow. Simcad Pro's dynamic computer model also allows for changes to occur while the model is running for a fully realistic simulation. It can also be integrated with live and historical data.
Simulation software is part of a broader category of Industry 4.0 technologies, or technologies that move organizations to digitization of operations.
Model Building
User Interface
Simcad Pro has a user-friendly graphical user interface for building virtual models of process-based environments, and the entire model can be built and edited without coding, including any advanced model characteristics. In addition to this, Simcad Pro allows for data import from almost any data source, including Microsoft Access, Excel, Visio, and SQL Server databases for easier model creation. Simcad Pro also supports real-time tracking using devices including RFID tags and barcode scanners. Simulations can then be viewed in either 2D or 3D.
Simulation Engine
Simcad Pro uses a patented simulation engine that is 64-bit and fully multi-threaded, distinguishing it from the SIMAN simulation engine used in many competing software products. Models built with the Simcad engine feature patented "On-The-Fly" simulation, meaning that models can be manipulated, edited, and transformed as they are performing a simulation run.
Applied Industries
Any industry engaged in a process-flow environment can use Simcad Pro to model their operations. The main industries where Simcad Pro is used are manufacturing, warehousing, supply chain & logistics, and healthcare.
Manufacturing
Simcad Pro can be used to simulate manufacturing operations, with the most common goals being to:
Optimize production schedules
Optimize labor allocation
Perform capacity analysis and growth projection
Manage and reduce WIP
Identify bottlenecks and constraints
Optimize facility layout
Simcad Pro supports high-mix manufacturing operations with the ability to model:
Detailed part routing
Multiple sub-assemblies/components and detailed BOM support
Tooling and manpower constraints
Work orders, detailed cell implementation and sequencing
Kanban pull and material handling
Warehousing
Simcad Pro can be used to simulate warehouses and distribution centers, with the most common goals being to:
Optimize pick paths
Perform congestion analysis
Optimize staging areas
Analyze receiving and put-away operations
Optimize slotting
Validate and implement automation
Analyze storage media
Supply Chain & Logistics
Simcad Pro can be used to simulate supply chains and logistics operations, with the most common goals being to:
Manage inventory and delivery to ensure product availability
Determine optimal locations for distribution centers
Determine impact of changes to supply and demand
Reduce transportation costs
Optimize delivery routes
Plan capacity and scheduling changes
Reduce delivery lead time
Healthcare
Simcad Pro can be used to simulate healthcare operations, such as emergency rooms and operating rooms, with the most common goals being to:
Optimize patient flow and reduce patient wait times
Optimize bed allocation
Analyze length-of-stay
Optimize staff utilization and scheduling
Optimize ED capacity planning
Optimize OR department scheduling & efficiency
Version History
See also
List of discrete event simulation software
Discrete event simulation
List of computer simulation software
Simulation in manufacturing systems
References
External links
Createasoft website
Simcad Pro simulation software
Simcad Pro Healthcare simulation software
SimTrack predictive analytics software
SimTrack healthcare predictive analytics software
smaRTLS Real-time locating system
SimData Time and motion studies software
ianimate3D 3D animation software
Simulation software
Companies based in Chicago |
69286580 | https://en.wikipedia.org/wiki/System%20Source%20Computer%20Museum | System Source Computer Museum | The System Source Computer Museum, located in Hunt Valley, Maryland, exhibits notable computing devices from ancient times until the present. Over 5,000 objects are on display, and many of the computation devices are operational. STEM activities are offered to organized tour groups. As of 2021 admission is free. The museum is open weekdays from 9:00AM until 6:00PM, and other times by appointment. Docents are available to lead tours.
History
The museum's origins date to 1981 when a Baltimore ComputerLand franchise had computers in inventory that instantly became historic artifacts with the introduction of the IBM Personal Computer.
The museum was incorporated as a non-profit 501c3 in 2018 as the Maryland Technology Museum d/b/a the System Source Computer Museum. In 2021, the museum became the new home of the DigiBarn Computer Museum.
Exhibits
Apples: Apple 1, Apple II, Apple ///, Apple Lisa, and most other Apple products
Cray Computer: Cray 1, Cray 2, Cray T90
DEC Computers: PDP-5, PDP-8, LINC, PDP-12, VAX
Computer Memory: Delay-line memory Magnetic-core memory
Pre-Industrial Computers: Abacus, Quipu, Napier's bones, Slide rule
Tic-Tac-Toe and Computers: Charles Babbage, Relay Tic Tac Toe Machine, Matchbox Educable Noughts and Crosses Engine (MENACE)
Univac: Univac 490 Univac 418
Xerox: Xerox Alto
STEM programs
Hardware Workshop
Programming a Virtual PET
Squeak (Etoys Programming)
References
External links
Museums established in 1981
Museums in Maryland |
56059754 | https://en.wikipedia.org/wiki/Kiersten%20Todt | Kiersten Todt | Kiersten Todt is the Chief of Staff of the U.S. Cybersecurity and Infrastructure Security Agency (CISA). She previously served as the managing director of the Cyber Readiness Institute and as a resident scholar in Washington, DC with the University of Pittsburgh Institute for Cyber Law, Policy, and Security, a position to which she was appointed on June 1, 2017. Before that, she worked under Barack Obama on the national cybersecurity commission. She was president and partner of Liberty Group Ventures, LLC, and a partner with Good Harbor Consulting, where she oversaw the organization's North America crisis management practice.
Background and career
Todt graduated from Princeton University with a public policy degree from the Woodrow Wilson School of Public and International Affairs in 1994. Her Master's degree is from the John F. Kennedy School of Government at Harvard. In 1999, she was named a Presidential Management Fellow.
Government service
Before working for Good Harbor, she worked for Business Executives for National Security (BENS). In this position she worked to bring together state and local emergency organizations and businesses. Ms. Todt served as a Professional Staff Member on the U.S. Senate Committee on Governmental Affairs; she worked for the Committee Chairman, Senator Joseph Lieberman, and was responsible for drafting the cybersecurity, infrastructure protection, emergency preparedness, bioterror, and science and technology directorates of the legislation that created the Department of Homeland Security. She also developed and executed federal and regional port and cyber security projects. Prior to BENS, she was a consultant for Sandia National Laboratories and worked with the California Governor’s Office and Bay Area Economic Forum to develop the homeland security preparedness plan for the Bay Area. Todt has been an adjunct lecturer at Stanford University.
Before working in the Senate, Todt worked in Vice President Gore's domestic policy office and was responsible for coordinating federal resources with locally defined needs, with priority on energy and housing. She was also the senior adviser on demand-reduction issues to Director Barry R. McCaffrey at the White House Office of National Drug Control Policy (ONDCP), where she received the outstanding service award.
On March 23, 2016, Todt joined the National Institute of Standards and Technology (NIST). Her role there was to help define the actions, both short- and long-term, that the federal government would develop together with other government departments. Secretary Pritzker endorsed the appointment, describing Todt as having proven expertise in risk management. Todt served as the Executive Director of the Commission on Enhancing National Cybersecurity, working on the team with Chairman Tom Donilon. "A recognized and highly accomplished leader in the field, Kiersten's experience in both the public and private sectors make her uniquely qualified to assist the Commission as it develops and recommends an agenda to enhance the nation's cybersecurity," Donilon said.
References
University of Pittsburgh faculty
People from Pittsburgh
Harvard Kennedy School alumni
Princeton School of Public and International Affairs alumni |
1592215 | https://en.wikipedia.org/wiki/SafeDisc | SafeDisc | SafeDisc is a copy protection program for Microsoft Windows applications and games distributed on optical disc. Created by Macrovision Corporation, it was aimed to hinder unauthorized disc duplication. The program was first introduced in 1998 and was discontinued on March 31, 2009.
Although the stated use is to prevent piracy, many, including the Electronic Frontier Foundation, believe it is used to restrict one's fair-use rights.
History
There have been several editions of SafeDisc over the years. Each one has the goals of making discs harder to copy. The current revision is marketed as SafeDisc Advanced.
The early versions of SafeDisc did not make the discs very difficult to copy. Recent versions 2.9+ could produce discs that are difficult to copy or reverse engineer, requiring specific burners capable of emulating the "weak sectors" and odd data formats that are characteristic of SafeDisc.
Withdrawal of support
Shortly after the release of Windows 10 in 2015, Microsoft announced that games with SafeDisc DRM would not run on the operating system, citing security concerns over the software due to the way in which it becomes "deeply embedded" in the system. Microsoft stated that supporting SafeDisc could have been a possible loophole for computer viruses to exploit. Support for SafeDisc on earlier versions of Windows was withdrawn upon the release of update number 3086255 in 2015.
Circumvention
Previous versions of SafeDisc were overcome by disc image emulator software such as Daemon Tools and Alcohol 120%. SafeDisc currently blacklists such software, meaning that those who want to use this method must install additional software to cloak the mounter; examples include CureRom and Y.A.S.U.
Another potential attack on SafeDisc is to pull the encrypted application out of the archive it is contained in. All SafeDisc encrypted discs contain an ICD file, an encrypted format used by SafeDisc to ensure that the original CD is loaded. UnSafeDisc circumvents and decrypts SafeDisc encrypted files by opening the ICD file format, decrypting it, and converting it to an EXE file. However, each program requires a specific patch to enable full functionality.
Operation
SafeDisc adds a unique digital signature to the optical media at the time of replication. Each time a SafeDisc-protected program runs, the SafeDisc authenticator performs various security checks and verifies the SafeDisc signature on the optical media. The authentication process takes about 10 to 20 seconds. Once verification has been established, the sequence is complete and the program will start normally. The SafeDisc signature is designed to be difficult to copy or transfer from the original media. (For example, it might change as a result of error correction during the copying process.) Certain multimedia programs are designed to run from the PC's hard drive without accessing files from the program disc after the initial installation. SafeDisc will permit this as long as the consumer retains the original CD or DVD, which is required for authentication each time the program is launched. Failure to place the original disc in the drive when loading the program will prevent validation of the SafeDisc signature.
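The launch-time gate described above can be sketched conceptually. SafeDisc's actual signature scheme is proprietary and undocumented, so the code below is purely illustrative: it uses an HMAC as a stand-in for the on-disc signature, and all names (`REPLICATION_KEY`, `sign_media`, `launch`) are hypothetical:

```python
import hmac
import hashlib

REPLICATION_KEY = b"set-at-mastering"  # hypothetical; the real scheme is proprietary

def sign_media(media_id: bytes) -> bytes:
    """Stand-in for the signature added to the disc at replication time."""
    return hmac.new(REPLICATION_KEY, media_id, hashlib.sha256).digest()

def launch(media_id: bytes, signature: bytes) -> str:
    """Gate program start on verifying the media signature."""
    expected = sign_media(media_id)
    if hmac.compare_digest(expected, signature):
        return "program starts normally"
    return "validation failed: original disc required"

disc_sig = sign_media(b"ORIGINAL-DISC")
print(launch(b"ORIGINAL-DISC", disc_sig))  # original disc: check passes
print(launch(b"COPIED-DISC", disc_sig))    # copy: signature no longer matches
```

The point of the sketch is the article's claim that the signature is bound to the original media: a copy whose identifying data changed (for example through error correction during duplication) no longer verifies, so the program refuses to start.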
Version history
SafeDisc (V1) (1998–2001)
SafeDisk V1 protected CDs can be recognized by several files on the CD:
00000001.TMP
CLCD16.DLL
CLCD32.DLL
CLOKSPL.EXE
DPLAYERX.DLL
And also by the existence of two files, GAME.EXE and GAME.ICD (where "GAME" is replaced with the actual game's name).
The EXE executable is only a loader, which decrypts and loads the protected game executable contained in the encrypted ICD file.
The initial version of SafeDisc was easy for home users and professional duplicators alike to copy, because the ICD file can be decrypted and converted into an EXE file.
SafeDisc (V2) (November 2000–2003)
The following files should exist on every original CD:
00000001.TMP
00000002.TMP (not always present)
The loader file (.EXE) is now integrated into the main executable, making the .ICD file obsolete. Also the CLOKSPL.EXE file, which was present in SafeDisc v1, no longer exists.
The SD2 version can be found inside the .EXE file through its string: BoG_ *90.0&!! Yy>, followed by three unsigned longs: the version, subversion, and revision numbers (in hex). When making a backup, read errors will be encountered between sectors 806–10663.
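The version-detection step described above can be automated. The text does not specify the byte order or exact offset of the three unsigned longs, so the sketch below assumes they directly follow the marker and are stored little-endian; the function name and synthetic test data are illustrative:

```python
import struct

MARKER = b"BoG_ *90.0&!! Yy>"

def safedisc_version(data: bytes):
    """Return (version, subversion, revision) if the SD2 marker is present, else None."""
    pos = data.find(MARKER)
    if pos == -1:
        return None
    # Assumed layout: three little-endian unsigned 32-bit values after the marker.
    return struct.unpack_from("<III", data, pos + len(MARKER))

# Synthetic executable image containing the marker followed by version 2.90.040.
blob = b"\x00" * 64 + MARKER + struct.pack("<III", 2, 90, 40) + b"\x00" * 64
print(safedisc_version(blob))  # (2, 90, 40)
```

A tool like this only identifies the protection version; knowing the version matters because, as the article notes, the burning requirements differ sharply between 2.50, 2.90, and later revisions.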
The protection also has "weak" sectors, introduced with this version, which causes synchronization problems with certain CD-Writers. Digital signatures are still present in this version. But this has no effect on disc images mounted in Daemon Tools or similar programs. In addition, SafeDisc Version 2.50 added ATIP detection making it impossible to use a copy in a burner unless software that masks this is used (CloneCD has the ability to do this.) SafeDisc Versions 2.90 and above make burning copies more difficult requiring burners that are capable of burning the "weak sectors"; these drives are uncommon. However, there are software solutions that eliminate the need for specialized hardware.
SafeDisc (V3) (2003–2005)
SafeDisc v3 uses a key to encrypt the main executable (EXE or DLL) and creates a corresponding digital signature which is added to the CD-ROM/DVD-ROM when they are replicated. The size of the digital signature varies from 3 to 20 MB depending how good the encryption must be. The authentication process takes about 10 to 20 seconds.
SafeDisc v3 is capable of encrypting multiple executables over one or more CDs/DVDs, as long as the executables are encrypted with the same key and the digital signature is added to each media.
SafeDisc v3 supports Virtual Drives as long as the original CD/DVD is available. Once the CD has been authenticated the game should continue to run from the virtual drive, provided the virtual drive software has not been blacklisted.
CloneCD is able to make fair use copies of V3.
SafeDisc (V4) (2005–2008)
The final major SafeDisc version was Version 4, released in February 2005. It lost ground to SecuROM over time, with the final build being version 4.90.010 in May 2008; with the product being discontinued on March 30, 2009.
SafeDisc driver vulnerabilities
On November 7, 2007, Microsoft stated that there was a vulnerability in the Macrovision SECDRV.SYS driver on Windows that could allow elevation of privilege. The vulnerability was patched by Microsoft on December 11, 2007, and did not affect Windows Vista. The driver, secdrv.sys, is used by games which use Macrovision SafeDisc; without it, games with SafeDisc protection would be unable to play on Windows. Ultimately, this would prove to be one of the factors that led Microsoft to drop support for the program in 2015.
See also
SecuROM
CD-Cops
XCP
TAGES
LaserLock
Y.A.S.U.
References
External links
SafeDisc product description
SafeDisc 2 Explained
SafeDisc 1–4 Explained
Weak Sectors explained
Weak Sector Utility
Compact Disc and DVD copy protection |
1500146 | https://en.wikipedia.org/wiki/Scientific%20Linux | Scientific Linux | Scientific Linux (SL) was a Linux distribution produced by Fermilab, CERN, DESY and by ETH Zurich. It is a free and open-source operating system based on Red Hat Enterprise Linux.
This product is derived from the free and open-source software made available by Red Hat, but is not produced, maintained or supported by them.
In April 2019, it was announced that feature development for Scientific Linux would be discontinued, but that maintenance would continue to be provided for the 6.x and 7.x releases through the end of their life cycles. Fermilab planned to use CentOS 8 for its deployments instead.
History
Fermilab already had a Linux distribution known as Fermi Linux, a long-term support release based on Red Hat Enterprise Linux. CERN was creating their next version of CERN Linux, also based on RHEL. CERN contacted Fermilab about doing a collaborative release. Connie Sieh was the main developer and driver behind the first prototypes and initial release. The first official release of Scientific Linux was version 3.0.1, released on May 10, 2004.
In 2015, CERN began migrating away from Scientific Linux to CentOS.
Scientific Linux is now maintained by a cooperative of science labs and universities. Fermilab is its primary sponsor.
Design philosophy
The primary purpose of Scientific Linux is to produce a common Linux distribution for various labs and universities around the world, thus reducing duplicated effort. The main goals are to have everything compatible with Red Hat Enterprise Linux with only minor additions and changes, and to allow easy customization for a site, without disturbing the Linux base. Unlike other distributions such as Poseidon Linux, it does not contain a large collection of scientific software as its name may suggest. However, it provides good compatibility to install such software.
Features
Scientific Linux is derived from Red Hat Enterprise Linux, with protected components, such as Red Hat trademarks, removed, thus making it freely available. New releases are typically produced about two months after each Red Hat release. As well as a full distribution equal to two DVDs, Scientific Linux is also available in LiveCD and LiveDVD versions.
Scientific Linux offers wireless and Bluetooth out of the box, and it comes with a comprehensive range of software, such as multimedia codecs, Samba, and Compiz, as well as servers and clients, storage clients, networking, and system administration tools.
It also contains a set of tools for making custom versions, thus allowing institutions and individuals to create their own variant.
Release history
Historical releases of Scientific Linux are the following. Each release is subjected to a period of public testing before it is considered 'released'.
Support
Security updates are provided for as long as Red Hat continues to release updates and patches for their versions.
See also
Fermi Linux, Fermilab's own custom version of Scientific Linux
CentOS, another distribution based on Red Hat Enterprise Linux
Rocks Cluster Distribution, a Linux distribution intended for high-performance computing clusters
References
External links
Scientific Linux Homepage
2004 software
Enterprise Linux distributions
RPM-based Linux distributions
State-sponsored Linux distributions
X86-64 Linux distributions
CERN software
Linux distributions |
1262642 | https://en.wikipedia.org/wiki/Trojan%E2%80%93Tauranac%20Racing | Trojan–Tauranac Racing |
Trojan was an automobile manufacturer and a Formula One constructor, in conjunction with Australian Ron Tauranac, from the United Kingdom.
The car producer Trojan Limited was founded by Leslie Hounsfield in 1914 in Clapham, South London, and later in Purley Way, Croydon, Surrey. It produced cars and especially delivery vans until 1964.
Around 1960, the Trojan business was sold to Peter Agg who imported Lambretta scooters for the British market. In 1962, the rights to manufacture the Heinkel microcar were acquired and the production line was moved from Dundalk, Ireland to Croydon. Production then commenced, renaming the bubble car as Trojan Cabin Cruiser. Production continued until 1965, when some 6,000 cars had been produced. Speaking to Motor Cycle magazine in 1965 after cessation of production, Peter Agg confirmed that a 1962 British government reduction in purchase tax from 50% to 25% aligning car taxation with three-wheelers and motorbikes, had given a big boost to the cheaper end of the car market, adversely affecting sales of the economy-sector three-wheeler, making continued production uneconomical.
Also in 1962, Trojan acquired the Elva sports car business and started to make the Mk IV Elva Courier. This in turn led to the manufacturing of McLaren racing cars until vehicle production finally ceased in the early 1970s. Trojan Limited still exists as an independent company though the factory was sold in the 1970s.
The Trojan T101 Formula 5000 model met with success when Jody Scheckter won the 1973 SCCA L&M Championship driving a T101 and a Lola T330.
They participated in eight grands prix, entering a total of eight cars. In 1974 David Purley won the Brighton Speed Trials driving a Trojan-Chevrolet T101. While Formula One remained the major series, sports cars were also fashionable on either side of the Atlantic. The McLaren M1 was put into production by Peter Agg's Lambretta Trojan Group in Rye, Sussex, which built around 200 McLarens over ten years.
Complete Formula One World Championship results
(key)
See also
Trojan (automobile)
References
All Formula One results are taken from formula1.com
External links
Trojan Museum Trust
F1 Rejects article about the team
Formula One constructors
Formula One entrants
Defunct motor vehicle manufacturers of the United Kingdom
British auto racing teams
British racecar constructors |
23879650 | https://en.wikipedia.org/wiki/License%20manager | License manager | A software license manager is a software management tool used by Independent software vendors or by end-user organizations to control where and how software products are able to run. License managers protect software vendors from losses due to software piracy and enable end-user organizations to comply with software license agreements. License managers enable software vendors to offer a wide range of usage-centric software licensing models, such as product activation, trial licenses, subscription licenses, feature-based licenses, and floating licensing from the same software package they provide to all users.
A license manager is different from a software asset management tool, which end-user organizations employ to manage the software they have licensed from many software vendors. However, some software asset management tools include license manager functions. These are used to reconcile software licenses and installed software, and generally include device discovery, software inventory, license compliance, and reporting functions.
An additional benefit of these software management tools are that they reduce the difficulty, cost, and time required for reporting and can increase operational transparency in order to prevent litigation costs associated with software misuse, as set forth by the Sarbanes-Oxley Act.
License management solutions provided by non-vendor companies can be more valuable to end-users, since most vendors do not provide detailed license-usage information: a vendor's license manager typically reports limited data, while third-party license management solutions are developed to help end-users get the most out of the licenses they own.
Most license managers support a range of software licensing models, such as license dongles or license USB keys, floating licenses, network licenses, and concurrent licenses.
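The floating/concurrent model mentioned above can be sketched as a fixed pool of seats that a license server checks out to clients and reclaims when they finish. The class and method names below are illustrative, not any particular product's API:

```python
class FloatingLicensePool:
    """Minimal concurrent-license counter: at most `seats` distinct users at once."""

    def __init__(self, seats: int):
        self.seats = seats
        self.checked_out = set()

    def checkout(self, client: str) -> bool:
        if client in self.checked_out:
            return True                   # client already holds a seat
        if len(self.checked_out) >= self.seats:
            return False                  # pool exhausted; client must wait
        self.checked_out.add(client)
        return True

    def checkin(self, client: str) -> None:
        self.checked_out.discard(client)  # seat returns to the pool

pool = FloatingLicensePool(seats=2)
print(pool.checkout("alice"))  # True
print(pool.checkout("bob"))    # True
print(pool.checkout("carol"))  # False: both seats are taken
pool.checkin("alice")
print(pool.checkout("carol"))  # True: a seat was freed
```

Real license servers add details omitted here, such as heartbeats and timeouts so that seats held by crashed clients eventually return to the pool, but the core accounting is this simple counter.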
References
License Management Software
External links
Floating License Management
License4J Floating License Server User Guide
Cloud Software Licensing
System administration
Manager |
57722909 | https://en.wikipedia.org/wiki/AdultSwine | AdultSwine | AdultSwine is malware discovered in 2018 by Check Point Software Technologies. The malware was found programmed into around 60 apps on the Google Play Store, primarily those aimed at children. It would display pornographic ads that, when clicked, instructed victims to download further malicious software in an attempt to steal personal data. It is estimated that between 3 and 7 million users may have been infected.
According to a representative for Google, the apps have been removed from the Play Store and all developer accounts associated with the apps have been locked. Check Point cautions users to "be extra vigilant when installing apps, particularly those intended for use by children.”
References
External links
List of infected apps
Android (operating system) malware |
57927278 | https://en.wikipedia.org/wiki/History%20of%20NATO | History of NATO | The history of NATO begins in the immediate aftermath of World War II when British diplomacy set the stage to contain the Soviet Union and to stop the expansion of communism in Europe. The United Kingdom and France signed, in 1947, the Treaty of Dunkirk, a defensive pact, which was expanded in 1948 with the Treaty of Brussels to add the three Benelux countries (Belgium, the Netherlands, and Luxembourg) and committed them to collective defense against an armed attack for fifty years. The British worked with Washington to expand the alliance into NATO in 1949, adding the United States and Canada as well as Italy, Portugal, Norway, Denmark, and Iceland. West Germany joined in 1955 and Spain joined later still in 1982.
Beginnings
The Treaty of Brussels was a mutual defense treaty against the Soviet threat at the start of the Cold War. It was signed on 17 March 1948 by Belgium, the Netherlands, Luxembourg, France, and the United Kingdom and was the precursor to NATO. The Soviet threat became immediate with the Berlin Blockade in 1948, leading to the creation of a multinational defense organization, the Western Union Defence Organisation, in September 1948. However, the parties were too weak militarily to counter the Soviet Armed Forces. In addition, the communist 1948 Czechoslovak coup d'état had overthrown a democratic government, and British Foreign Secretary Ernest Bevin reiterated that the best way to prevent another Czechoslovakia was to evolve a joint Western military strategy. He got a receptive hearing in the United States, especially with the American anxiety over Italy and the Italian Communist Party.
In 1948, European leaders met with US defense, military, and diplomatic officials at the Pentagon, exploring a framework for a new and unprecedented association. The talks resulted in the North Atlantic Treaty, and the United States signed on 4 April 1949. It included the five Treaty of Brussels states, as well as the United States, Canada, Portugal, Italy, Norway, Denmark and Iceland. The first NATO Secretary General, Lord Ismay, stated in 1949 that the organization's goal was "to keep the Russians out, the Americans in, and the Germans down". Popular support for the Treaty was not unanimous, and some Icelanders participated in a pro-neutrality, anti-membership riot in March 1949. The creation of NATO can be seen as the primary institutional consequence of a school of thought called Atlanticism, which stressed the importance of trans-Atlantic cooperation.
The members agreed that an armed attack against any of them in Europe or North America would be considered an attack against them all. Consequently, they agreed that if an armed attack occurred, each of them, in the exercise of the right of individual or collective self-defense, would assist the member being attacked and take such action as it deemed necessary, including the use of armed force, to restore and maintain the security of the North Atlantic area. The treaty does not require members to respond with military action against an aggressor. Although obliged to respond, they maintain the freedom to choose the method by which they do so. That differs from Article IV of the Treaty of Brussels, which clearly states that the response is military in nature. NATO members are nonetheless assumed to aid the attacked member militarily. The treaty was later clarified to include both the member's territory and their "vessels, forces or aircraft" North of the Tropic of Cancer, including some overseas departments of France.
The creation of NATO brought about some standardization of allied military terminology, procedures, and technology, which, in many cases, meant European countries adopting US practices. Roughly 1,300 Standardization Agreements (STANAGs) codified many of the common practices that NATO has achieved. The 7.62×51mm NATO rifle cartridge was thus introduced in the 1950s as a standard firearm cartridge among many NATO countries. Fabrique Nationale de Herstal's FAL, which used the 7.62mm NATO cartridge, was adopted by 75 countries, including many outside NATO. Also, aircraft marshaling signals were standardized so that any NATO aircraft could land at any NATO base. Other standards such as the NATO phonetic alphabet have made their way beyond NATO into civilian use.
Cold War
The outbreak of the Korean War in June 1950 was crucial for NATO, as it raised the apparent threat of all Communist countries working together and forced the alliance to develop concrete military plans. Supreme Headquarters Allied Powers Europe (SHAPE) was formed to direct forces in Europe and began work under Supreme Allied Commander Dwight Eisenhower in January 1951. In September 1950, the NATO Military Committee called for an ambitious buildup of conventional forces to meet the Soviets and reaffirmed that position at the February 1952 meeting of the North Atlantic Council in Lisbon. The conference, seeking to provide the forces necessary for NATO's Long-Term Defence Plan, called for an expansion to 96 divisions. However, that requirement was dropped the following year to roughly 35 divisions, with heavier use to be made of nuclear weapons. At this time, NATO could call on about 15 ready divisions in Central Europe and another 10 in Italy and Scandinavia. Also at Lisbon, the post of Secretary General of NATO as the organization's chief civilian was created, and Lord Ismay was eventually appointed to the post.
In September 1952, the first major NATO maritime exercises began. Exercise Mainbrace brought together 200 ships and over 50,000 personnel to practice the defence of Denmark and Norway. Other major exercises that followed included Exercise Grand Slam and Exercise Longstep, naval and amphibious exercises in the Mediterranean Sea; Italic Weld, a combined air-naval-ground exercise in northern Italy; Grand Repulse, involving the British Army on the Rhine (BAOR), the Netherlands Corps, and Allied Air Forces Central Europe (AAFCE); Monte Carlo, a simulated atomic air-ground exercise involving the Central Army Group; and Weldfast, a combined amphibious landing exercise in the Mediterranean Sea involving American, British, Greek, Italian, and Turkish naval forces.
Greece and Turkey also joined the alliance in 1952, which forced a series of controversial negotiations, mainly between the United States and Britain, over how to bring both countries into the military command structure. While that overt military preparation was going on, covert stay-behind arrangements initially made by the Western European Union to continue resistance after a successful Soviet invasion, including Operation Gladio, were transferred to NATO control. Ultimately, unofficial bonds began to grow between NATO's armed forces, such as the NATO Tiger Association and competitions such as the Canadian Army Trophy for tank gunnery.
In 1954, the Soviet Union suggested that it should join NATO to preserve peace in Europe. The NATO countries, fearing that the Soviet Union's motive was to weaken the alliance, ultimately rejected that proposal. On 17 December 1954, the North Atlantic Council approved MC 48, a key document in the evolution of NATO nuclear thought. MC 48 emphasized that NATO had to use atomic weapons from the outset of a war with the Soviet Union, whether or not the Soviets chose to use them first. That gave SACEUR the same prerogatives for automatic use of nuclear weapons that existed for the commander-in-chief of the US Strategic Air Command.
The incorporation of West Germany into the organization on 9 May 1955 was described as "a decisive turning point in the history of our continent" by Halvard Lange, then the Norwegian Foreign Affairs Minister. A major reason was that German manpower was necessary to have enough conventional forces to resist a Soviet invasion. One of the immediate results of West German entry was the creation of the Warsaw Pact, which was signed on 14 May 1955 by the Soviet Union, Hungary, Czechoslovakia, Poland, Bulgaria, Romania, Albania, and East Germany, thereby delineating the two opposing sides of the Cold War.
Three major exercises were held concurrently in the northern autumn of 1957. Operation Counter Punch, Operation Strikeback, and Operation Deep Water were the most ambitious military undertaking for the alliance so far, involving more than 250,000 men, 300 ships, and 1,500 aircraft operating from Norway to Turkey.
French withdrawal
NATO's unity was breached early in its history by a crisis during Charles de Gaulle's presidency of France. De Gaulle protested the strong role of the United States in NATO and what he perceived as a special relationship between it and the United Kingdom. In a memorandum sent to US President Dwight Eisenhower and British Prime Minister Harold Macmillan on 17 September 1958, he argued for the creation of a tripartite directorate, which would put France on an equal footing with the US and the UK.
Considering the response to be unsatisfactory, de Gaulle began constructing an independent defense force for his country. He wanted to give France, in the event of an East German incursion into West Germany, the option of coming to a separate peace with the Eastern bloc, instead of being drawn into a larger war between NATO and the Warsaw Pact. In February 1959, France withdrew its Mediterranean Fleet from NATO command, and it later banned the stationing of foreign nuclear weapons on French soil. That caused the United States to transfer 300 military aircraft out of France and to return control of the air force bases that it had operated in France since 1950 to the French by 1967.
Though France showed solidarity with the rest of NATO during the Cuban Missile Crisis in 1962, de Gaulle continued his pursuit of an independent defense by removing France's Atlantic and Channel fleets from NATO command. In 1966, all French armed forces were removed from NATO's integrated military command, and all non-French NATO troops were asked to leave France. US Secretary of State Dean Rusk was later quoted as asking de Gaulle whether his order included "the bodies of American soldiers in France's cemeteries". The withdrawal forced the relocation of SHAPE from Rocquencourt, near Paris, to Casteau, north of Mons, Belgium, by 16 October 1967. France remained a member of the alliance and committed to the defense of Europe from possible Warsaw Pact attack with its own forces stationed in West Germany throughout the Cold War. A series of secret accords between the US and French officials, the Lemnitzer–Ailleret Agreements, detailed how French forces would dovetail back into NATO's command structure if East-West hostilities broke out.
When de Gaulle announced his decision to withdraw from the integrated NATO command, US President Lyndon Johnson suggested that when de Gaulle "comes rushing down like a locomotive on the track, why the Germans and ourselves, we just stand aside and let him go on by, then we are back together again."
That vision came true when France announced its return to full participation at the 2009 Strasbourg–Kehl summit.
Détente and escalation
During most of the Cold War, NATO's watch against the Soviet Union and Warsaw Pact did not actually lead to direct military action. On 1 July 1968, the Treaty on the Non-Proliferation of Nuclear Weapons opened for signature. NATO argued that its nuclear sharing arrangements did not breach the treaty since US forces controlled the weapons until a decision was made to go to war when the treaty would no longer be controlling. Few states then knew of the NATO nuclear sharing arrangements, which were not challenged. In May 1978, NATO countries officially defined two complementary aims of the Alliance: to maintain security and pursue détente. That was supposed to mean matching defenses at the level rendered necessary by the Warsaw Pact's offensive capabilities without spurring a further arms race.
On 12 December 1979, in light of a build-up of Warsaw Pact nuclear capabilities in Europe, ministers approved the deployment of US ground-launched cruise missiles (GLCMs) and Pershing II theatre nuclear weapons in Europe. The new warheads were also meant to strengthen the West's negotiating position regarding nuclear disarmament. That policy was called the Dual Track policy. Similarly, in 1983 and 1984, responding to the stationing of Warsaw Pact SS-20 medium-range missiles in Europe, NATO deployed modern Pershing II missiles tasked to hit military targets such as tank formations in the event of war. That action led to peace movement protests throughout Western Europe, and support for their deployment wavered, as many doubted whether the push for deployment could be sustained.
The membership of the organization was then largely static. In 1974, as a consequence of the Turkish invasion of Cyprus, Greece withdrew its forces from NATO's military command structure but, with Turkish co-operation, was readmitted in 1980. The Falklands War between the United Kingdom and Argentina did not result in NATO involvement because Article 6 of the North Atlantic Treaty specifies that collective self-defense is applicable only to attacks on member state territories north of the Tropic of Cancer. On 30 May 1982, NATO gained a new member when the newly democratic Spain joined the alliance, as was confirmed by referendum in 1986. At the peak of the Cold War, 16 member nations maintained an approximate strength of 5,252,800 active military personnel, including as many as 435,000 forward-deployed US forces, under a command structure that reached a peak of 78 headquarters, organized into four echelons.
After the Cold War
The Revolutions of 1989 and the dissolution of the Warsaw Pact in 1991 removed the de facto main adversary of NATO and caused a strategic re-evaluation of NATO's purpose, nature, tasks, and focus on the continent of Europe. The shift started with the 1990 signing in Paris of the Treaty on Conventional Armed Forces in Europe between NATO and the Soviet Union, which mandated specific military reductions across the continent; these reductions continued after the dissolution of the Soviet Union in December 1991. European countries then accounted for 34 percent of NATO's military spending; by 2012, that had fallen to 21 percent. NATO also began a gradual expansion to include newly autonomous countries of Central and Eastern Europe and extended its activities into political and humanitarian situations that had not previously been thought of as NATO concerns.
An expansion of NATO came with German reunification on 3 October 1990, when the former East Germany became part of the Federal Republic of Germany and of the alliance. That had been agreed in the Two Plus Four Treaty earlier that year. To secure Soviet approval of a united Germany remaining in NATO, it was agreed that foreign troops and nuclear weapons would not be stationed in the east. There was no formal commitment in the agreement not to expand NATO to the east, but there are diverging views on whether negotiators gave informal commitments regarding further NATO expansion. Jack Matlock, the American ambassador to the Soviet Union during its final years, said that the West gave a "clear commitment" not to expand, and declassified documents indicate that Soviet negotiators were given the impression that NATO membership was off the table for countries such as Czechoslovakia, Hungary, or Poland. Hans-Dietrich Genscher, then the West German foreign minister, said in a conversation with Eduard Shevardnadze, "For us, however, one thing is certain: NATO will not expand to the east." In 1996, Gorbachev wrote in his Memoirs that "during the negotiations on the unification of Germany they gave assurances that NATO would not extend its zone of operation to the east," and he repeated that view in an interview in 2008. However, in 2014 Gorbachev stated the opposite: "The topic of 'NATO expansion' was not discussed at all [in 1990], and it wasn't brought up in those years. I say this with full responsibility. Western leaders didn't bring it up, either." According to Robert Zoellick, a US State Department official involved in the Two Plus Four negotiating process, that appears to be a misperception, and no formal commitment regarding enlargement was made. Harvard University historian Mark Kramer also rejects that an informal agreement existed.
As part of restructuring, NATO's military structure was cut back and reorganized, with new forces such as the Headquarters Allied Command Europe Rapid Reaction Corps established. The changes brought about by the collapse of the Soviet Union on the military balance in Europe were recognized in the Adapted Conventional Armed Forces in Europe Treaty, which was signed by 30 countries in 1999, ratified by Russia in 2000, but never ratified by any NATO member, and therefore never came into effect. The policies of French President Nicolas Sarkozy resulted in a major reform of France's military position, culminating with the return to full membership on 4 April 2009, which also included France rejoining the NATO Military Command Structure but maintaining an independent nuclear deterrent.
Enlargement and reform
Between 1994 and 1997, wider forums for regional cooperation between NATO and its neighbors were set up, such as the Partnership for Peace, the Mediterranean Dialogue initiative, and the Euro-Atlantic Partnership Council. In 1998, the NATO–Russia Permanent Joint Council was established. On 8 July 1997, three former communist countries (Hungary, the Czech Republic, and Poland) were invited to join NATO, an invitation accepted by all three, with Hungarian acceptance endorsed in a referendum in which 85.3% of voters supported joining NATO.
Czech President Václav Havel welcomed the expansion: "Never have we been part of such a broad, solid and binding security alliance, which at the same time respects in its essence the sovereignty and will of our nation." Polish Foreign Minister Bronisław Geremek also welcomed the expansion: "Poland forever returns where she has always belonged: the free world." Hungarian Foreign Minister János Martonyi stated that the expansion showed that Hungary was returning "to her natural habitat." The expansion was also welcomed by US Secretary of State Madeleine Albright, who stated that the expansion would do "for Europe's east what NATO has already helped to do for Europe's west: steadily and systematically, we will continue erasing – without replacing – the line drawn in Europe by Stalin's bloody boot."
The expansion was criticized in the US by some policy experts as a "policy error of historic proportions." According to George F. Kennan, an American diplomat and an advocate of the containment policy, the decision "may be expected to have an adverse effect on the development of Russian democracy; to restore the atmosphere of the cold war to East-West relations, to impel Russian foreign policy in directions decidedly not to our liking."
Membership went on expanding with the accession of seven more Central and Eastern European countries to NATO: Estonia, Latvia, Lithuania, Slovenia, Slovakia, Bulgaria, and Romania. They were first invited to start talks of membership during the 2002 Prague summit and joined NATO on 29 March 2004, shortly before the 2004 Istanbul summit. Slovenian membership was endorsed in a referendum in which 66.02% of voters supported joining.
New NATO structures were also formed while old ones were abolished. In 1997, NATO reached agreement on a significant downsizing of its command structure from 65 headquarters to 20. The NATO Response Force (NRF) was launched at the 2002 Prague summit on 21 November, the first summit in a former Comecon country. On 19 June 2003, a further restructuring of the NATO military commands began as the Headquarters of the Supreme Allied Commander, Atlantic were abolished and a new command, Allied Command Transformation (ACT), was established in Norfolk, Virginia, United States, and the Supreme Headquarters Allied Powers Europe (SHAPE) became the Headquarters of Allied Command Operations (ACO). ACT is responsible for driving transformation (future capabilities) in NATO, while ACO is responsible for current operations. In March 2004, NATO's Baltic Air Policing began, which supported the sovereignty of Latvia, Lithuania, and Estonia by providing jet fighters to react to any unwanted aerial intrusions. Eight multinational jet fighters are based in Lithuania; the number was increased from four in 2014. Also at the 2004 Istanbul summit, NATO launched the Istanbul Cooperation Initiative with four Persian Gulf nations.
The 2006 Riga summit was held in Riga, Latvia, and highlighted the issue of energy security. It was the first NATO summit to be held in a country that had been part of the Soviet Union. At the April 2008 summit in Bucharest, Romania, NATO agreed to the accession of Croatia and Albania, both of which joined NATO in April 2009. Ukraine and Georgia were also told that they could eventually become members. The issue of Georgian and Ukrainian membership in NATO prompted harsh criticism from Russia, as did NATO plans for a missile defence system. Studies for the system had begun in 2002, with negotiations centered on anti-ballistic missiles being stationed in Poland and the Czech Republic. Though NATO leaders gave assurances that the system was not targeting Russia, Russian Presidents Vladimir Putin and Dmitry Medvedev criticized the system as a threat.
In 2009, US President Barack Obama proposed using the ship-based Aegis Combat System, but the plan still includes stations being built in Turkey, Spain, Portugal, Romania, and Poland. NATO will also maintain the "status quo" in its nuclear deterrent in Europe by upgrading the targeting capabilities of the "tactical" B61 nuclear bombs stationed there and deploying them on the stealthier Lockheed Martin F-35 Lightning II. After the 2014 annexation of Crimea by Russia, NATO committed to forming a new "spearhead" force of 5000 troops at bases in Estonia, Lithuania, Latvia, Poland, Romania, and Bulgaria.
The 2014 Russian annexation of Crimea led to strong condemnation by NATO nations, and Poland invoked Article 4 in meetings. Then, at the 2014 Wales summit, the leaders of NATO's member states formally committed for the first time to spend the equivalent of at least 2% of their gross domestic product on defence by 2024, which was previously only an informal guideline. In 2015, five of its 28 members met that goal. At the beginning of 2018, eight members either were meeting the target or were close to it; six others had laid out plans to reach the target by 2024 as promised; and Norway and Denmark had unveiled plans to substantially boost defense spending, including Norway's planned purchase of 52 new F-35 fighter jets.
On 15 June 2016, NATO officially recognized cyberwarfare as an operational domain of war, just like land, sea, and aerial warfare. That means that any cyber attack on NATO members can trigger Article 5 of the North Atlantic Treaty. Montenegro became the 29th member of NATO on 5 June 2017, amid strong objections from Russia.
On 1 August 2018, the US Department of Treasury sanctioned two senior Turkish government ministers who were involved in the detention of American pastor Andrew Brunson. Turkish President Recep Tayyip Erdoğan said that the US behavior would force Turkey to look for new friends and allies. The US–Turkey dispute appears to be one of the most serious diplomatic crises between the NATO allies in years.
On 4 December 2019, NATO officially recognized space warfare as an operational domain of war, just like land, sea, aerial, and cyber warfare. That means that any space attack on NATO members can trigger Article 5 of the North Atlantic Treaty.
On 27 March 2020, North Macedonia became the 30th and newest member after its naming dispute with Greece had been resolved.
Structural changes
The Defence Planning Committee was a former senior decision-making body on matters relating to the integrated military structure of the Alliance. It was dissolved after a major committee review in June 2010, with its responsibilities absorbed by the North Atlantic Council.
Civilian structure
In NATO: The First Five Years, Lord Ismay described the civilian structure as follows:
The ... Office of the Secretary-General [is] directed by an Executive Secretary, Captain R.D. Coleridge (UK), who is also Secretary to the Council. He is responsible for supervising the general processing of the work of the Council and their committees, including the provision of all secretarial assistance, as well as supervision of the administrative services of the Staff/Secretariat itself. Thus the Secretariat provides secretaries to all the Council's principal committees and working groups - apart from those of a strictly technical nature - and ensures coordination between them... On the Staff side, there are three main divisions corresponding to the three principal aspects of NATO's work, each under an Assistant Secretary-General. Ambassador Sergio Fenoaltea (Italy) heads the Political Affairs Division, M. René Sergent (France) the Economics and Finance Division, and Mr. Lowell P. Weicker (USA) the Production and Logistics Division. The Divisions' tasks are to prepare, in close touch with delegations, proposed action in their respective fields for consideration by the appropriate committee or by the Council. In addition to the main divisions, there are three other offices working directly to the Secretary-General. These are the Office of Statistics (Mr. Loring Wood of the USA), the Financial Comptroller's Office (M. A. J. Bastin of Belgium), and the Division of Information (Mr. Geoffrey Parsons, Jr. of the USA). The Information Division, besides providing material about NATO for the use of member governments (it does not engage in independent operations), is also the press and public relations branch of the civilian authority.
Military structure
The Strategic Commanders were the former Major NATO Commanders, who sat atop a command hierarchy consisting of Major Subordinate Commanders (MSCs), Principal Subordinate Commanders (PSCs), and Sub-PSCs. The Military Committee had an executive body, the Standing Group, made up of representatives from France, the United States, and the United Kingdom. The Standing Group was abolished during the major reform of 1967 that resulted from France's departure from the NATO Military Command Structure.
Beginnings
A key step in establishing the NATO Command Structure was the North Atlantic Council's selection of General Dwight Eisenhower as the first Supreme Allied Commander Europe (SACEUR) in December 1950. After Eisenhower arrived in Paris in January 1951, he and the other members of the multinational Supreme Headquarters Allied Powers Europe (SHAPE) Planning Group immediately began to devise a structure for the new Allied Command Europe. NATO official documents state, "The cornerstone of the NATO Military Command Structure was laid... when the North Atlantic Council approved D.C. 24/3 on 18 December 1951." They quickly decided to divide Allied Command Europe into three regions: Allied Forces Northern Europe, containing Scandinavia, the North Sea, and the Baltic; Allied Forces Central Europe; and Allied Forces Southern Europe (AFSOUTH), covering Italy and the Mediterranean. SHAPE was established at Rocquencourt, west of Paris.
The British post of Commander in Chief Mediterranean Fleet was given a dual-hatted role as NATO Commander in Chief of Allied Forces Mediterranean in charge of all forces assigned to NATO in the Mediterranean Area. The British made strong representations in discussions regarding the Mediterranean NATO command structure since they wished to retain their direction of NATO naval command in the Mediterranean to protect their sea lines of communication running through the Mediterranean to the Middle East and the Far East.
In 1952, after Greece and Turkey joined NATO, Allied Land Forces South-Eastern Europe (LANDSOUTHEAST) was created in Izmir, Turkey, under a US Army general because of both countries' geographic distance from the LANDSOUTH headquarters, as well as political disagreements over which nation should provide the overall commander for its ground forces.
With the establishment of Allied Command Atlantic (ACLANT) on 30 January 1952, the Supreme Allied Commander Atlantic joined the previously created Supreme Allied Commander Europe as one of the alliance's two Major NATO Commanders. A third was added when Allied Command Channel was established on 21 February 1952 to control the English Channel and North Sea area and deny them to the enemy and to protect the sea lanes of communication. The establishment of this post and the agreement that it was to be filled by the British Commander-in-Chief, Portsmouth, was part of the compromise that allowed an American officer to take up the SACLANT post. Previously Commander-in-Chief Portsmouth had controlled multinational naval operations in the area under WUDO auspices. In due course, the CINCHAN role was assumed by the British Commander-in-Chief Fleet.
In 1966, when French President Charles de Gaulle withdrew French forces from the military command structure, NATO's headquarters was forced to move to Belgium. SHAPE was moved to Casteau, north of the Belgian city of Mons. Headquarters Allied Forces Central Europe was moved from the Château de Fontainebleau, near Paris, to Brunssum, in the Netherlands.
Structure in 1989
NATO Military Committee, led by the Chairman of the NATO Military Committee, in Brussels, Belgium
Allied Command Europe (ACE), led by Supreme Allied Commander Europe (SACEUR), in Mons, Belgium
ACE Mobile Force, in Seckenheim, Germany
United Kingdom Air Forces, in High Wycombe, United Kingdom
NATO Airborne Early Warning Force, in Maisieres, Belgium
Allied Forces Northern Europe (AFNORTH), in Kolsås, Norway
Allied Forces North Norway (NON), in Bodø, Norway
Allied Forces South Norway (SONOR), in Stavanger, Norway
Allied Forces Baltic Approaches (BALTAP), in Karup, Denmark
Allied Land Forces Schleswig-Holstein and Jutland (LANDJUT), in Rendsburg, Germany
Allied Land Forces in Zealand (LANDZEALAND), in Ringsted, Denmark
Allied Air Forces Baltic Approaches (AIRBALTAP), in Karup, Denmark
Allied Naval Forces Baltic Approaches (NAVBALTAP), in Karup, Denmark
Allied Forces Central Europe (AFCENT), in Brunssum, Netherlands
Northern Army Group (NORTHAG), in Rheindahlen, West Germany
Central Army Group (CENTAG), in Heidelberg, West Germany
Allied Air Forces Central Europe (AAFCE), in Ramstein, West Germany
Second Allied Tactical Air Force (2 ATAF), in Rheindahlen, West Germany
Fourth Allied Tactical Air Force (4 ATAF), in Ramstein, West Germany
Allied Forces Southern Europe (AFSOUTH), in Naples, Italy
Allied Land Forces Southern Europe (LANDSOUTH), in Verona, Italy
Allied Land Forces South-Eastern Europe (LANDSOUTHEAST), in İzmir, Turkey
Allied Air Forces Southern Europe (AIRSOUTH), in Naples, Italy
Fifth Allied Tactical Air Force (5 ATAF), in Vicenza, Italy
Sixth Allied Tactical Air Force (6 ATAF), in İzmir, Turkey
Allied Naval Forces Southern Europe (NAVSOUTH), in Naples, Italy
Naval Striking and Support Forces Southern Europe (STRIKFORSOUTH), afloat, centered around US Sixth Fleet
Allied Command Atlantic (ACLANT), led by Supreme Allied Commander Atlantic (SACLANT), in Norfolk, United States
Eastern Atlantic Area (EASTLANT), in Northwood, United Kingdom
Northern Sub-Area (NORLANT), in Rosyth, United Kingdom
Central Sub-Area (CENTLANT), in Plymouth, United Kingdom
Submarine Force Eastern Atlantic (SUBEASTLANT), in Gosport, United Kingdom
Maritime Air Eastern Atlantic (MAIREASTLANT), in Northwood, United Kingdom
Maritime Air Northern Sub-Area (MAIRNORLANT), in Rosyth, United Kingdom
Maritime Air Central Sub-Area (MAIRCENTLANT), in Plymouth, United Kingdom
Island Command Iceland (ISCOMICELAND), in Keflavík, Iceland
Island Command Faroes (ISCOMFAROES), in Tórshavn, Faroe Islands
Western Atlantic Area (WESTLANT), in Norfolk, United States
Ocean Sub-Area (OCEANLANT), in Norfolk, United States
Canadian Atlantic Sub-Area (CANLANT), in Halifax, Canada
Island Command Bermuda (ISCOMBERMUDA), in Hamilton, Bermuda
Island Command Azores (ISCOMAZORES), in Ponta Delgada, Azores
Island Command Greenland (ISCOMGREENLAND), in Grønnedal, Greenland
Submarine Force Western Atlantic (SUBWESTLANT), in Norfolk, United States
Iberian Atlantic Area (IBERLANT), in Oeiras, Portugal
Island Command Madeira (ISCOMADEIRA), in Funchal, Madeira
Striking Fleet Atlantic (STRIKFLTLANT), in Norfolk, United States
Carrier Striking Force (CARSTRIKFOR), in Norfolk, United States
Carrier Striking Group One (CARSTRIKGRUONE), in Norfolk, United States
Carrier Striking Group Two (CARSTRIKGRUTWO), in Plymouth, United Kingdom
Submarines Allied Command Atlantic (SUBACLANT), in Norfolk, United States
Allied Command Channel (ACCHAN), in Northwood, United Kingdom
Nore Sub-Area Channel Command (NORECHAN), in Rosyth, United Kingdom
Plymouth Sub-Area Channel Command (PLYMCHAN), in Plymouth, United Kingdom
Benelux Sub-Area Channel Command (BENECHAN), in Den Helder, Netherlands
Allied Maritime Air Force Channel (MAIRCHAN), in Northwood, United Kingdom
Maritime Air Nore Sub-Area Channel Command (MAIRNORECHAN), in Rosyth, United Kingdom
Maritime Air Plymouth Sub-Area Channel Command (MAIRPLYMCHAN), in Plymouth, United Kingdom
Standing Naval Force Channel (STANAVFORCHAN), afloat
After the Cold War
By June 1991, it was clear that Allied Forces Central Europe, a Major Subordinate Command, could be reduced as the Soviet threat disappeared. Announcements that month presaged main defensive forces of six multinational corps, replacing the previous eight. Two were to be under German command, one with a US division; one under Belgian command, with a pending offer of a US brigade; one under US command, with a German division; one under joint German-Danish command (LANDJUT); and one under Dutch command. The new German IV Corps was to be stationed in eastern Germany and was not to be associated with the NATO structure.
On 1 July 1994, the Alliance disestablished Allied Command Channel but retained many of its subordinate structures after the reshuffling. Most of the headquarters were absorbed within ACE, particularly within the new Allied Forces Northwestern Europe.
From 1994 to 1999, ACE had three Major Subordinate Commands: AFNORTHWEST, AFCENT, and AFSOUTH. In 1995, NATO began a Long Term Study to examine post-Cold War strategy and structure. Recommendations from the study for a new, streamlined structure emerged in 1996. The European and Atlantic commands were to be retained, but the number of major commands in Europe was to be cut from three to two: Regional Command North Europe and Regional Command South Europe. Activation of the new RC SOUTH occurred in September 1999, and in March 2000 Headquarters AFNORTHWEST closed and the new RC NORTH was activated. The headquarters of the two Regional Commands were known as Regional Headquarters South (RHQ SOUTH) and Regional Headquarters North (RHQ NORTH), respectively. Each was to supervise air, naval, and land commands for its region as well as a number of Joint Subregional Commands (JSRCs). Among the new JSRCs was Joint Headquarters Southwest, which was activated in Madrid in September 1999.
Organizations and agencies
Prior to the reorganization, the NATO website listed 43 different agencies and organizations and five project committees/offices as of 15 May 2008, including:
Logistics committees, organizations and agencies, including:
NATO Maintenance and Supply Agency
Central Europe Pipeline System
NATO Pipeline System
Production Logistics organizations, agencies, and offices including the NATO Eurofighter and Tornado Management Agency
Standardisation organization, committee, office, and agency, including the NATO Standardization Agency, which also plays an important role in international standards determination.
Civil Emergency Planning committees and center
Air Traffic Management and Air Defence committees, working groups, organization, and centre, including:
NATO ACCS Management Agency (NACMA), based in Brussels, which manages around a hundred staff in charge of the Air Command and Control System (ACCS), due for 2009.
NATO Programming Centre
The NATO Airborne Early Warning and Control Programme Management Organisation (NAPMO)
NATO Consultation, Command and Control Organisation (NC3O)
NATO Consultation, Command and Control Agency (NC3A), reporting to the NATO Consultation, Command and Control Organization (NC3O). This agency was formed when the SHAPE Technical Centre (STC) in The Hague (Netherlands) merged in 1996 with the NATO Communications and Information Systems Operating and Support Agency (NACISA) based in Brussels (Belgium). The agency comprises around 650 staff, of which around 400 are located in The Hague and 250 in Brussels.
NATO Communications and Information Systems Services Agency (NCSA), based in Mons (BEL), was established in August 2004 from the former NATO Communications and Information Systems Operating and Support Agency (NACISA).
NATO Headquarters C3 Staff (NHQC3S), which supports the North Atlantic Council, Military Committee, International Staff, and the International Military Staff.
NATO Electronic Warfare Advisory Committee (NEWAC)
Military Committee Meteorological Group (MCMG)
The Military Oceanography Group (MILOC)
NATO Research and Technology Organisation (RTO),
Education and Training college, schools and group
Project Steering Committees and Project Offices, including:
Alliance Ground Surveillance Capability Provisional Project Office (AGS/PPO)
Battlefield Information Collection and Exploitation System (BICES)
NATO Continuous Acquisition and Life-Cycle Support Office (CALS)
NATO FORACS Office
Munitions Safety Information Analysis Center (MSIAC)
Committee of Chiefs of Military Medical Services in NATO (COMEDS)
See also
Able Archer 83
Eight-Nation Alliance
History of the Common Security and Defence Policy
References
Attribution: contains content originally in the NATO article.
Further reading
Asmus, Ronald D. "Europe's Eastern Promise: rethinking NATO and EU enlargement." Foreign Affairs (2008): 95-106 online.
Asmus, Ronald D. Opening NATO’s Door. How the Alliance Remade Itself for a New Era (2002)
Axelrod, Robert, and Silvia Borzutzky. "NATO and the war on terror: The organizational challenges of the post 9/11 world." Review of International Organizations 1.3 (2006): 293–307. online
Baylis, John. The Diplomacy of Pragmatism: Britain and the Formation of NATO, 1942-1949 (Basingstoke, Macmillan, 1993).
Gaddis, John Lewis. "History, grand strategy and NATO enlargement." Survival 40.1 (1998): 145-151, DOI: 10.1093/survival/40.1.145
Chourchoulis, Dionysios. The Southern Flank of NATO, 1951–1959: Military Strategy or Political Stabilization (Lexington Books, 2014).
Colbourn, Susan. "NATO as a political alliance: continuities and legacies in the enlargement debates of the 1990s." International Politics (2020): 1-18.
Cornish, Paul: Partnership in Crisis: The US, Europe and the Fall and Rise of NATO (Royal Institute of International Affairs, 1997).
Gallagher, Tom. "Balkan But Different: Romania and Bulgaria's Contrasting Paths to NATO Membership 1994–2002." Journal of Communist Studies and Transition Politics 20.4 (2004): 1-19.
Grosser, Alfred. The Western Alliance: European-American Relations Since 1945 (1980).
Hanrieder, Wolfram F. Germany, America, Europe: Forty Years of German Foreign Policy (Yale UP, 1989).
Hatzivassiliou, Evanthis. Nato and Western Perceptions of the Soviet Bloc: Alliance Analysis and Reporting, 1951-69 (Routledge, 2014).
Hawes, Derek. "Enduring alliance: a history of NATO and the post-war global order," Journal of Contemporary European Studies (2019), DOI: 10.1080/14782804.2019.1657731
Hendrickson, Ryan C. "NATO’s next secretary general: Rasmussen’s leadership legacy for Jens Stoltenberg." Journal of Transatlantic Studies 14.3 (2016): 237-251.
Heuser, Beatrice. NATO, Britain, France, and the FRG: Nuclear Strategies and Forces for Europe, 1949-2000 (St. Martin’s Press, 1997).
Hofmann, Stephanie C. "Party preferences and institutional transformation: revisiting France’s relationship with NATO (and the common wisdom on Gaullism)." Journal of Strategic Studies 40.4 (2017): 505-531.
Johnston, Seth A. How NATO adapts: strategy and organization in the Atlantic Alliance since 1950 (JHU Press, 2017).
Kaplan, Lawrence S. NATO divided, NATO united: the evolution of an alliance (Greenwood, 2004).
Kaplan, Lawrence S. The United States and NATO: the formative years (UP of Kentucky, 2014) links.
Lundestad, Geir. "Empire by Invitation? The United States and Western Europe, 1945-1952." Journal of Peace Research 23.3 (1986): 263-277 online.
March, Peter R. Freedom of the Skies: An Illustrated History of Fifty Years of NATO Airpower (1999)
Miles, Simon. "The War Scare That Wasn't: Able Archer 83 and the Myths of the Second Cold War." Journal of Cold War Studies (Summer 2020) 22#3 pp 86–118. Able Archer 83 was a routine NATO exercise in 1983.
Milloy, John C. The North Atlantic Treaty Organization, 1948-1957: Community or Alliance? (McGill-Queen's Press-MQUP, 2006).
Münch, Philipp. "Creating common sense: getting NATO to Afghanistan." Journal of Transatlantic Studies (2021): 1-29 online.
NATO Office of Information and Press. NATO Handbook : Fiftieth Anniversary Edition (NATO, Brussels, 1998–99, Second Reprint),
“NATO at 70: Balancing Collective Defense and Collective Security,” Special issue of Journal of Transatlantic Studies 17#2 (June 2019) pp: 135–267.
Norris, John. Collision Course: NATO, Russia and Kosovo (2005)
Osgood, Robert E. NATO. The Entangling Alliance (1962).
Park, W. H. Defending the West: A History of NATO (Wheatsheaf Books, 1986)
Pedlow, Gregory W. "NATO and the Berlin Crisis of 1961: Facing the Soviets While Maintaining Unity" (US National Archives, 2011); short essay; not copyrighted.
Perot, Elie. "The art of commitments: NATO, the EU, and the interplay between law and Politics within Europe’s collective defence architecture." European Security 28.1 (2019): 40-65.
Reid, Escott. Time of Fear and Hope. The Making of the North Atlantic Treaty 1947-1949 (McClelland & Stewart, 1977).
Risso, Linda. Propaganda and intelligence in the cold war: The NATO information service (Routledge, 2014).
Riste, Olav, ed. Western Security. The Formative Years. European and Atlantic Defence 1947-1953. (Norwegian UP, 1985).
Sayle, Timothy Andrews. Enduring Alliance: A History of NATO and the Postwar Global Order (Cornell UP, 2019) online review
Schmidt, Gustav, ed. A History of NATO: The First Fifty Years (3 vol, Palgrave, 2001), with 60 contributors.
Smith, E. Timothy. The United States, Italy and NATO, 1947-52 (1991).
Smith, Joseph. The Origins of NATO (Liverpool University Press, 1990)
Till, Geoffrey. "Holding the bridge in troubled times: the Cold War and the navies of Europe." Journal of Strategic Studies 28.2 (2005): 309-337.
Historiography
Békés, Csaba. "The Bibliography of New Cold War History." (2nd ed. Cold War History Research Center, Budapest, 2018) online, 350pp.
Dülffer, Jost. "Cold War history in Germany." Cold War History 8.2 (2008): 135-156.
Kaplan, Lawrence S. "The Cold War and European Revisionism." Diplomatic History 11.2 (1987): 143-156.
Mariager, Rasmus. "Danish Cold War Historiography." Journal of Cold War Studies 20.4 (2019): 180-211.
Mastny, Vojtech. "The new history of Cold War alliances." Journal of Cold War Studies 4.2 (2002): 55-84.
Olesen, Thorsten B., ed. The Cold War and the Nordic countries: Historiography at a crossroads (University Press of Southern Denmark, 2004).
Primary sources
"NATO Strategy Documents" 1949-1969"
External links
Gunmetal Gray
Gunmetal Gray is the sixth novel by Mark Greaney, published on February 14, 2017, by Berkley Books. It is also the sixth book in the Gray Man series. Picking up after the events of Back Blast, Court Gentry, back in the employ of the Central Intelligence Agency after five years as a fugitive, has to capture a rogue hacker working for the Chinese military who is on the run from his former employers. The book was dedicated to prominent thriller writer Dalton Fury, who died in 2016. The novel debuted at number 10 on The New York Times bestseller list.
Plot summary
For his first operation back with the Central Intelligence Agency, Court Gentry is tasked with capturing Fan Jiang in Hong Kong. Fan is a former member of PLA Unit 61398, an ultra-secret computer warfare unit responsible for testing China's own security systems. Gentry works through his former handler Sir Donald Fitzroy, who has been contracted by the Chinese government for a similar operation. Unbeknownst to Gentry, his arrival in the country is discovered by the Ministry of State Security (MSS), which sends two agents to surveil him. The agents' principal boss at the Ministry of Defense (MOD), Colonel Dai Longhai, becomes frustrated with this routine surveillance operation and orders them to eliminate Gentry, who instead manages to kill them.
After the attempt on his life, Gentry decides to go dark in order to continue his operation. His later inquiries into Fitzroy's whereabouts attract Colonel Dai's attention, and Dai orders his henchmen to kidnap Gentry. Court is brought to Fitzroy, who has also been kidnapped by Colonel Dai. Sir Donald had dispatched two kill teams after Fan on behalf of Colonel Dai's MOD, but they were killed by the Wo Shing Wo, part of the Triad criminal organization in Hong Kong, whom Fan had hired for protection while on the run. After hearing of this failure, Colonel Dai takes Fitzroy hostage and decides to supervise the hunt himself. Aware of Gentry's reputation as the Gray Man and his past relationship with Fitzroy, but unaware that Gentry is working on behalf of the CIA, Colonel Dai hires Court to find Fan and eliminate him.
Gentry follows up on the last lead on Fan: the hacker escaped the city by ship from the island of Po Toi with the help of the Wo Shing Wo. After a violent bar fight, he finds out that Fan has in fact escaped to Vietnam and is now under the protection of the Con Ho Hoang Da (Wild Tigers), a Vietnamese criminal organization. Unbeknownst to him, a secret Russian foreign intelligence (SVR) paramilitary unit led by intelligence officer Zoya Zakharova is also interested in locating Fan, and had in fact found the ship first, raiding it and gleaning the same information from the ship's captain at the same time that Court learns of Fan's whereabouts.
Court, as well as Zoya's team, goes to Vietnam to continue the hunt. He plans to infiltrate the Wild Tigers' known headquarters in Ho Chi Minh City in order to capture Fan. However, an impatient Colonel Dai sends a paramilitary force to the headquarters in an attempt to capture Fan without consulting Court, leaving Gentry to give chase to a black sedan (which, unbeknownst to him, is carrying Wild Tigers leader Tu Van Duc) escaping from the building to a Wild Tigers compound near the Cambodian border.
However, Zoya's team is one step ahead of Gentry once again, as she has determined the compound to be Fan's hiding place by interrogating a Vietnamese police officer on the Wild Tigers' payroll. Her team infiltrates the compound that night, setting off a full-scale gunfight with the Wild Tigers as well as soldiers from the People's Army of Vietnam (PAVN) who are there as extra protection for Fan. Court eventually finds Fan trying to escape the compound with Tu Van Duc and one of his henchmen, and captures him while killing his protectors, but not before evading the Russian paramilitary unit trying to extract the hacker, killing one of them in the process.
After the attack on the compound, which has failed to capture Fan once again and cost the life of one of her men, Zoya is recalled to Moscow for seemingly ordering her men to storm the compound even though they were outnumbered by PAVN forces (although she was not actually supervising the operation; Vasily, the team leader of the SVR Zaslon paramilitary unit, ordered the raid). Now desperate and wanting to salvage her mission on her own, Zoya tries to recruit the services of the Chamroon Syndicate, a Thai criminal organization, but is later burned by the SVR for not reporting in to her employers sooner.
Meanwhile, while making their way to Cambodia, Fan tells Court why he defected from China. When his parents died in a car accident, leaving him with no family members and making him liable to be executed under the "family collateral" rule among members of Unit 61398 (hackers with no next of kin are deemed untrustworthy by the government), Fan was helped in defecting by his parents' guardian Song Julong, whom Court later deduces to be a double agent working for the CIA. Fan originally wanted to go to Taiwan through Hong Kong with Taiwanese papers provided by a contact of Song's, but he was left behind at the border when Colonel Dai's men chased him. Court realizes that he was not told the full story about his mission.
Court and Fan are captured soon after by river bandits, whom Fan deduces are Thai smugglers. Court is forced to leave Fan behind when he escapes, and he is rescued by the United States Navy on behalf of his CIA handler Suzanne Brewer. Court decides to go to Bangkok, Thailand, to resume his search for Fan and to prevent Fitzroy from being killed by Colonel Dai, who has tortured Fitzroy and is by now getting desperate.
The CIA intercepts an encrypted message from Fan to Taiwanese authorities which reveals that he is being held captive by the Chamroon Syndicate, which has its headquarters in Bangkok. Court plans to capture one of its operational leaders, Nattapong Chamroon, in a nightclub and interrogate him about Fan's whereabouts. He informs Colonel Dai of his operation, and Dai sends some of his men to the club. Unfortunately, the Russian paramilitary team (now led by SVR operations officer Oleg Utkin) is also present with the same objective. Impatient once again, Colonel Dai orders his men, led by his second-in-command Major Xi, to storm the nightclub in pursuit of Fan and Chamroon's senior leadership. Major Xi's men find themselves in a violent gunfight with the Thai gangsters and Utkin's forces. Meanwhile, Court manages to rescue Chamroon from the gunfight, along with his five surviving call girls. However, the last call girl he saves turns out to be Zoya, who then takes the Thai gangster away from Court.
Zoya then presses Chamroon for Fan's whereabouts, and when he tells her that his brother Kulap is with the hacker, she kills him and exits the nightclub, where she makes contact with her surviving paramilitary team and escapes with them. In the car, Zoya and Utkin argue about the failed operation, and when Utkin tries to kill the burned SVR operative, Zoya kills him. Meanwhile, Court has found Chamroon's dead body and has also exited the nightclub. He then finds Zoya tending to her former teammates and follows them using a tracking device. After hearing of her colleague's death, he captures her, but then decides to enlist her help in finding Fan.
Zoya and Court go to the island of Phuket, Thailand, where the Chamroon estate is located. The two grow close over two days of surveilling the estate, and Court recruits Zoya into the CIA as an asset. Meanwhile, his handler Brewer informs him that Fan has sent another encrypted message, this time to the U.S. embassy in Bangkok, asking to be rescued from the Chamroon estate, and that CIA Ground Branch operatives will supervise his rescue instead of Court; Court will instead take charge of Zoya. After the call, however, Zoya and Court are kidnapped by Major Xi's men under orders from Colonel Dai, who is also in Phuket. They are put in a ranch house, where the colonel tells Court that he himself will supervise the attack on the Chamroon estate, and that Court and his Russian girlfriend will stay behind until the operation is over. Fitzroy is also present in the house, and he tells Gentry that he was originally contracted by MI6 to assassinate Song before Song could kill Fan's parents, but the kill team he sent spooked Song in Shanghai, leading him to kill Fan's parents in order to force the hacker's defection to the United States.
Court reveals his identity as the Gray Man to Zoya. The two deduce that the encrypted e-mail from Fan is a red herring, since they both know that the hacker truly wanted to go to Taiwan rather than be rescued by the United States. They further conclude that Kulap Chamroon forced Fan to send the encrypted e-mail from a nearby yacht, which Court later learns is owned by the Italian crime syndicate 'Ndrangheta, which has business ties to Kulap, in order to lead the Chinese into an ambush as revenge for his brother's death. They later escape the ranch house, leaving Fitzroy behind, and decide to infiltrate the yacht and rescue Fan themselves. Court makes contact with Brewer and tells her to abort the CIA Ground Branch's impending raid on the estate, which would have led to a violent gun battle with Colonel Dai's forces as well as an ambush by Chamroon's men. Meanwhile, Colonel Dai's men are brutally ambushed at Kulap's own estate by Thai insurgents he hired.
After rescuing Fan, Court calls Colonel Dai and arranges a prisoner exchange of Fan for Fitzroy. However, Court decides to go off grid with Fan, leaving Zoya to be rescued by the CIA. Days later, Court contacts his boss at the CIA, Matthew Hanley, who confirms his suspicions about his operation. Hanley then calls in the CIA Ground Branch team to try to extract Gentry from where he is calling, a hotel in Phang Nga, Thailand. However, the paramilitary unit ends up rescuing Fitzroy from Colonel Dai's men, killing Major Xi in the process.
Three days later, Court enlists Fitzroy's help in securing Fan's move to Taiwan. However, when Court sends Fan off to Taiwanese intelligence officers at an airport, they are both captured by the CIA Ground Branch team. After being whisked onto the plane, Court finds out that Fan will be forced to work for the United States, since he is safer with them, and that Colonel Dai made a deal with the CIA to escape death at the hands of the Chinese military for failing to capture Fan. After meeting with Zoya in Frankfurt, Germany, before she is shipped off to be vetted into the CIA, Court goes off grid.
Characters
Courtland "Court" Gentry: The Gray Man, code name Violator — freelance assassin/contract agent for the Central Intelligence Agency
Matthew Hanley: Director of the National Clandestine Service, Central Intelligence Agency
Suzanne Brewer: Officer, National Clandestine Service, Central Intelligence Agency
Fan Jiang: Chief Sergeant Class 3, cyber intrusion specialist, People’s Liberation Army, Unit 61398 (Red Cell Detachment), 2nd Bureau, General Staff Department (3rd Department)
Dai Longhai: Colonel, department director of security and counterintelligence, People’s Liberation Army, 2nd Bureau, General Staff Department (3rd Department)
Xi: Major, counterintelligence officer, People’s Liberation Army, 2nd Bureau, General Staff Department (3rd Department)
Sir Donald Fitzroy: Director and CEO of Cheltenham Security Services; former handler of Court Gentry
Zoya Feodorova Zakharova: Code name Banshee — officer, Russian Foreign Intelligence Service (SVR)
Oleg Utkin: Code name Fantom — officer, Russian Foreign Intelligence Service (SVR)
Vasily: “Anna One” — paramilitary officer and team leader, Russian Foreign Intelligence Service (SVR), Zaslon (Shield) Unit
Tu Van Duc: Leader of Con Ho Hoang Da (Wild Tigers), Vietnam-based criminal organization
Bui Ton Tan: Officer, Vietnam People’s Police and employee of Con Ho Hoang Da
Kulap Chamroon: Co-leader of the Chamroon Syndicate, Thailand-based transnational crime syndicate
Nattapong Chamroon: Brother of Kulap, co-leader of the Chamroon Syndicate, Thailand-based transnational crime syndicate
Song Julong: Major and security officer, People’s Liberation Army, People’s Republic of China
Reception
Commercial
Gunmetal Gray debuted at number 10 in the Combined Print & E-Book Fiction category of the New York Times bestseller list, as well as number 11 in the Hardcover Fiction category of the same list. This was the first time a Gray Man novel had charted on the list.
Critical
The book received generally positive reviews. In a starred review, Kirkus Reviews praised the novel as "fat, fast, and fun". Carol Memmott of The Washington Post said: "Fans of RPG, Hong Kong action films and high-octane storytelling will love the Gray Man, who battles full-bore through this fast-paced series." In a starred review, Publishers Weekly hailed the novel as "outstanding" and added that "Gray Man fans will close the book happily fulfilled and eagerly awaiting his next adventure." Prominent literary reviewer The Real Book Spy remarked: "From start to finish, Gunmetal Gray impresses with a well-laid-out plot and enough action to satisfy even the pickiest thriller fans. Between the Clancy books and the Gray Man series, nobody is on a hotter streak right now than Mark Greaney."
References
2017 American novels
American thriller novels
Node graph architecture
Node graph architecture is a software design structured around the notion of a node graph. Both the source code and the user interface are designed around the editing and composition (or linking) of atomic functional units.
The source code for the software application is organized into atomic functional units called nodes. This is typically done using classes derived from a base class for all nodes. Each node can have inputs and outputs, which are typically also implemented using classes derived from base classes for all inputs and all outputs. Outputs and inputs can refer to each other, typically by holding pointers to instances of other outputs or inputs. When a node executes its functionality, it retrieves its inputs by following the pointers stored in its inputs to retrieve data output by other nodes. The node then executes its operation on these inputs to produce its own outputs. The ability to link nodes together in this way allows complex tasks or problems to be broken down into atomic nodal units that are easier to understand.
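The organization described above can be sketched in a few dozen lines. The following is a minimal, hypothetical Python illustration (all class and function names are invented for this example, not taken from any particular application): a base Node class, Input and Output classes that refer to one another, and pull-based evaluation in which a node fetches upstream values before computing its own.

```python
class Output:
    """Holds a value produced by its owning node's execution."""
    def __init__(self, node):
        self.node = node      # the node that produces this value
        self.value = None


class Input:
    """Refers to another node's Output, analogous to holding a pointer."""
    def __init__(self):
        self.source = None    # set to an Output instance when nodes are linked

    def fetch(self):
        # Pull evaluation: ask the upstream node to execute, then read its value.
        self.source.node.execute()
        return self.source.value


class Node:
    """Base class for all nodes in the graph."""
    def __init__(self, n_inputs):
        self.inputs = [Input() for _ in range(n_inputs)]
        self.output = Output(self)

    def execute(self):
        raise NotImplementedError


class Constant(Node):
    """A source node whose output is a fixed value."""
    def __init__(self, value):
        super().__init__(0)
        self.output.value = value

    def execute(self):
        pass  # the value is already stored


class Add(Node):
    """Adds the values of its two inputs."""
    def __init__(self):
        super().__init__(2)

    def execute(self):
        a, b = (inp.fetch() for inp in self.inputs)
        self.output.value = a + b


def link(upstream, downstream, slot):
    """Connect upstream's output to one of downstream's input slots."""
    downstream.inputs[slot].source = upstream.output


# Build and evaluate a tiny graph computing 2 + 3.
two, three, add = Constant(2), Constant(3), Add()
link(two, add, 0)
link(three, add, 1)
add.execute()
print(add.output.value)  # → 5
```

Real applications differ in detail (push versus pull evaluation, caching of outputs, cycle detection), but the division into node, input, output, and link classes is the common skeleton.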
The user interface of the software application will often visually display the node graph to the user. This is often accomplished by using the GPU to perform the rendering which is subsequently displayed on the desktop to the user. Common APIs for the GPU are OpenGL and DirectX. Nodes are often drawn as rectangles, and connections between nodes are drawn with lines or splines.
The use of node graph architecture started in the 1960s. Today the use of node graphs is widespread, with the fields of graphics, games, and machine learning as the main adopters of this software design; the majority of tools in these fields use node graph architecture.
To this day, there is some debate as to the benefits of visual programming and node graph architecture. Advocates highlight how the abstraction that node graphs provide makes tools easier to use. Critics highlight how visual programming is too restrictive and how users must resort to modifying source code or scripts to accomplish their tasks.
History
There is an ongoing effort to collect snapshots of node graph user interfaces across software applications, documenting their evolution and proliferation from their initial roots. This visual history is hosted on a blog page called Visual Programming Languages - Snapshots. Work leading to node graph architectures and visual programming appears to have started in the 1960s, in the area known as "man-machine communications".
In his 1966 MIT thesis, "Online Graphical Specification of Procedures", William Robert Sutherland describes and analyses topics around a 2D pictorial language. This is one of the first investigations into dataflow-based workflows or programs. His thesis has since been cited as "prior art" to quash lawsuits over dataflow ideas. His work is often thought to have led the way to what is known today as computer-aided design (CAD).
A pictorial program is a natural way of expressing parallel processes. The two-dimensional nature of the language helps in visualizing many things happening at once.
The ease of debugging programs, particularly parallel ones, will be enhanced by a pictorial language form. Being able to attach data probes and to see a program run gives one a grasp of detail that is hard to obtain in any other way.
A program's execution need not be controlled by the usual explicit sequential flow conventions. The movement of data through a program may determine its operation. A data controlled convention corresponds closely to our intuitive ideas of how a graphical program should operate and also allows parallel programming without explicit flow designations.
In 1969, T. O. Ellis, J. F. Heafner, and W. L. Sibley published a paper concerning a Graphical Input Language (GRAIL). Their work was related to the RAND Tablet, which began with research conducted by Ivan Sutherland on Sketchpad, a system where users could write computer commands directly on a tablet. The GRAIL system used a flowchart-based graphical programming language and could recognize handwritten letters and gestures. Alan Kay has given a number of demos of the GRAIL system; however, he was not involved in its creation.
Important organizational concepts in the GRAIL system are the sequential flow of control, the hierarchy of subroutines, and the language (flow diagrams) for pictorially relating the organization within the concepts of the first two.
The sequential nature of control allows the man to envision isolated processes that are adapted to specific functions--which, in turn, allow the organizer to think of the total program in terms of manageable subparts.
The subroutine hierarchy emphasizes the notion of isolated processes even more strongly.
Flow diagrams help the man to picture his control options and the relationship between processes by expressing these interrelationships in two dimensions.
Some of the more recent uses of node graph architectures started around 2005. Node graphs in this time frame started to develop paradigms for dealing with complexity in the graph, which arose as the number of nodes and links increased. One of the main ideas for dealing with complexity was the concept of a group or package node, which hides nodes inside itself, exposing only the inputs and outputs of the group.
Katana, Foundry
Houdini, SideFX
Nuke, Foundry
Mari, Foundry
Maya, Autodesk
Blender
Abstraction and Complexity
In the paper Hierarchical Small Worlds in Software Architecture, author Sergi Valverde argues that most large software systems are built in a modular and hierarchical fashion, and that node graphs can be used to analyze large software systems. Many other software-analysis papers use node graphs to analyze large software systems, suggesting that node graphs are good models of the internal structure and operation of software.
Visual Programming Debate
Node graphs are a subset of the broader class of visual programming languages. They allow users to design programs in a visual and structured way instead of through the authoring of source code. In the film and video game industries node graphs are synonymous with visual programming. There is currently some debate on the power, abstraction, and necessity of node graphs and visual programming languages.
Advocates of visual programming generally emphasize how it simplifies programming because it abstracts away many details and only exposes controls that are necessary for their domain. These controls are the parameters on the nodes which control their behavior and the links between nodes.
Critics of visual programming generally emphasize how it does not offer enough control, and how for more complex tasks it becomes necessary to author source code. However, these more complex tasks often fall outside the intended usage or domain of the node graph.
This remains an active area of debate with new discussions occurring in open forums to this day. The following are a few of the largest discussions to date.
Discussion on Hacker News, 2014
Discussion on Hacker News, 2019
Reddit discussion, 2019
Research studies shed more light on these discussions and highlight further advantages and disadvantages of node graphs. They indicate that node graphs and visual programming are easy for new users to understand, but as users move to more complex tasks they often need to resort to authoring textual source code. Another survey focuses on people's beliefs about the cognitive effects of visual programming, finding that professional programmers are the most skeptical of visual programming. Other studies have shown in psychological experiments that visual programming can have significant positive effects on performance in cognitive tasks.
Node Graph
A node graph in the context of software architecture refers to an organization of software functionality into atomic units known as nodes, where nodes can be connected to each other via links. The manipulation of nodes and links in the node graph can often be accomplished through a programmable API or through a visual interface using the mouse. In the diagram above, the node graph appears on the right-hand side.
In modern-day usage, the term "node graph" is an open compound word. However, in older software it was referred to as a "nodegraph", a closed compound word.
Nodegraph, Valve Software
Nodegraph, Notch
Node
Nodes perform some type of computation. They encapsulate this executable functionality and will often take inputs and produce outputs as a by-product of execution. A simple example is a node that adds two numbers together. The inputs are the two numbers to add and the output is the sum of the two numbers.
Nodes are analogous to mathematical functions of the following form.
y = f(x),
where f is the node's computation, x is a vector of the node's input values, and y is a vector of the node's output values.
Visually nodes are often represented by rectangles. However, this is not a convention that is followed by all applications. In the diagram above there are three nodes labeled "Video", "Add Star" and "Add Circle".
Node Parameters
Nodes often have additional parameters that define their execution. These parameters are backed by data types in the node's source code.
Mathematically they can be thought of as additional input values to the node's compute function. The only difference is that these values are controlled directly by the user instead of being output by another node as a by-product of its execution. For example, in the simple example above regarding a node that adds two numbers, we can introduce a bias parameter on the node so that the node can add an extra fixed number onto the sum.
Visually the node's parameters are often exposed after the user clicks on the node. This helps to reduce visually cluttering the node graph. In the diagram above we see a parameter window opening up beside the "Add Star" node.
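The add-with-bias example can be made concrete with a short, self-contained sketch (the class name and attribute are illustrative, not from any real application). The bias is a parameter set directly by the user, while the two addends would arrive over links:

```python
class AddWithBias:
    """Adds two linked inputs plus a user-set bias parameter."""
    def __init__(self, bias=0):
        self.bias = bias              # parameter: edited by the user, not linked

    def execute(self, a, b):
        # a and b would arrive over links; bias comes from the parameter window.
        return a + b + self.bias


node = AddWithBias(bias=10)
print(node.execute(2, 3))  # → 15
```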
Node Inputs and Outputs
Nodes often have inputs and outputs, as discussed above. Inputs and outputs are backed by data types in the node's source code and store the values consumed and produced by the node's execution.
Mathematically the node's inputs and outputs are analogous to input and output values of functions.
y = f(x),
where f is the node's computation, x is a vector of the node's input values and y is a vector of the node's output values.
Visually the inputs and outputs of nodes are often represented with circles.
Node Links
Links transfer the values stored in data types between different nodes. They are analogous to mathematical composition. For example, if node A is feeding its outputs to node B, this can be represented mathematically as follows.
y = f_B(f_A(x)),
where f_B and f_A are the operations performed by node B and node A, x is a vector of node A's input values and y is a vector of node B's output values.
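The link-as-composition idea above can be sketched in Python: node A feeds its outputs into node B. The function names and the operations they perform are illustrative only.

```python
# Sketch of a link transferring values between two nodes, i.e. composition.
def node_a(x):
    # Node A: scales each input value.
    return [2 * v for v in x]

def node_b(x):
    # Node B: sums its input vector into a single output.
    return [sum(x)]

def linked(x):
    # The link feeds node A's outputs into node B's inputs.
    return node_b(node_a(x))

assert linked([1, 2, 3]) == [12]  # 2 + 4 + 6
```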
Node Types
The type of a node indicates which compute operation it will perform when executed. There are often many different node types participating in the node graph. The following are some examples:
Nuke, a popular visual effects compositing program, includes hundreds of nodes, each performing specific tasks related to compositing.
Katana, a popular look development and lighting software, includes hundreds of nodes, each performing specific tasks related to lighting computer graphics scenes.
Mari, a popular 3D painting software, includes hundreds of nodes, each performing specific tasks related to 3D painting.
The most important node type for managing complexity is the group node. This node type does not execute software code in the same way as other nodes. It simply groups a subset of connected nodes together and manages the inputs and outputs into or out of the group. This hides complexity inside the group nodes and limits their coupling with other nodes outside the group, leading to a hierarchy where smaller graphs are embedded in group nodes. The following are examples of group nodes which are used to group a subset of connected nodes and to help simplify the graph.
Group nodes in Nuke.
Group nodes in Katana.
User Interface
Software applications using node graph architecture will typically expose the node graph visually or graphically to the user, allowing the user to make changes to the node graph. Using the mouse, users will typically be able to:
create new nodes
edit parameters on nodes
connect nodes together
evaluate the graph up to a certain node
view the current output values on nodes
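The mouse-driven operations listed above have direct programmatic counterparts. The following sketch shows a hypothetical minimal API offering them; all class, method and node names are invented for illustration.

```python
# Hypothetical minimal node-graph API mirroring the typical UI operations:
# create nodes, edit parameters, connect nodes, evaluate up to a node.
class NodeGraph:
    def __init__(self):
        self.nodes = {}   # node name -> parameter dict
        self.links = {}   # downstream node -> upstream node
        self.funcs = {}   # node name -> compute function

    def create_node(self, name, func, **params):
        self.nodes[name] = params
        self.funcs[name] = func

    def edit_parameter(self, name, key, value):
        self.nodes[name][key] = value

    def connect(self, upstream, downstream):
        self.links[downstream] = upstream

    def evaluate(self, name, value):
        # Evaluate the graph up to a certain node, pulling on upstream links.
        if name in self.links:
            value = self.evaluate(self.links[name], value)
        return self.funcs[name](value, **self.nodes[name])

g = NodeGraph()
g.create_node("scale", lambda v, factor: v * factor, factor=2)
g.create_node("offset", lambda v, amount: v + amount, amount=1)
g.connect("scale", "offset")
assert g.evaluate("offset", 10) == 21   # (10 * 2) + 1
g.edit_parameter("offset", "amount", 5)
assert g.evaluate("offset", 10) == 25   # (10 * 2) + 5
```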
With the increasing usage of node graphs, increased attention is being paid to creating user-friendly interfaces, often designed by user interface specialists and graphical designers. The following are some user interfaces designed by artists and designers.
Node graphs on Dribbble
Nodes on Dribbble
Directed Acyclic Graphs
Many theoretical results from graph theory apply to node graphs, especially with regard to topology. This subject area, in which nodes are linked together to form graphs, is well studied.
One particular area of concern during node graph evaluation is cycles. When cycles are present in the node graph, the evaluation never ends, as nodes are continually executed by following links. To avoid these problems many node graph architectures restrict themselves to a subset of graphs known as directed acyclic graphs.
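A simple way to enforce the acyclic restriction is to check for cycles before evaluation begins. The sketch below uses a standard depth-first search with node coloring; it is a generic technique, not tied to any particular product.

```python
# Detect cycles in a node graph before evaluation: a "gray" node reached
# again during the depth-first search indicates a back edge, i.e. a cycle.
def has_cycle(links):
    """links maps each node to the list of nodes it feeds into."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in links}

    def visit(n):
        color[n] = GRAY
        for m in links.get(n, []):
            if color.get(m, WHITE) == GRAY:   # back edge: a cycle exists
                return True
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in links)

assert not has_cycle({"a": ["b"], "b": ["c"], "c": []})  # a DAG
assert has_cycle({"a": ["b"], "b": ["a"]})               # a -> b -> a
```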
Use in Computer Graphics
The use of node graph architecture in software design is especially popular in the film and video game industries. The diagram above shows a simplified user interface for an artistic tool for editing and creating videos. The nodes are represented as rectangles and are connected to each other through curved lines (Bézier curves). In this software's operational model, a video sequence is passed along the lines from one node to the next, and each node performs some additional modification to the video sequence. In this example one video is translated in 2D, another is pixelated, and finally both streams are merged.
The following are some examples of software using node graph architecture in the film and video game industries.
Katana, Foundry
Houdini, SideFX
Nuke, Foundry
Mari, Foundry
Maya, Autodesk
Blender
Use in Machine Learning
The use of node graph architecture in software design has recently become very popular in machine learning applications. The diagram above shows a simple neural network composed of 3 layers: the input layer, the hidden layer, and the output layer. The elements in each layer are nodes (neurons), connected to nodes in adjacent layers through weighted links. During inference, the machine learning algorithm evaluates the values in the output layer through a sequence of functional evaluations over the values from previous layers. During training, the machine learning algorithm uses optimization to minimize a loss function, where the loss function depends on the difference between the values in the output layer and the expected values. Node graphs are used to visualize, configure and debug these neural network layers.
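The layered evaluation described above can be sketched as a tiny forward pass over plain Python lists. The weights, biases and layer sizes below are made-up illustrative values, not from any real model.

```python
# Tiny forward pass through a 3-layer network (input, hidden, output).
def dense(inputs, weights, biases):
    # Each output is a weighted sum of all inputs plus a bias.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

x = [1.0, 2.0]                                            # input layer
hidden = dense(x, [[0.5, -0.5], [1.0, 1.0]], [0.0, 0.0])  # hidden layer
output = dense(hidden, [[1.0, 1.0]], [0.5])               # output layer
print(output)  # -> [3.0]
```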
The following are examples of machine learning software using node graph architecture without a graphical interface for the node graphs.
PyTorch, GitHub, Facebook
TensorFlow, GitHub, Google
The following are some examples of machine learning software using node graph architecture.
PerceptiLabs, KDnuggets
Deep Cognition, Deep Cognition Inc
Neural Network Modeler, IBM
Neural Network Console, Sony
DIGITS, NVIDIA
Notes
References
Metrics of Software Architecture Changes Based on Structural Distance
Representation and Analysis of Software
A History of Visual Programming: From Basic to Bubble
Scratch Software
Blockly Software
What is visual programming
Software architecture
28826929 | https://en.wikipedia.org/wiki/The%20Trojan%20Horse%20%28film%29 | The Trojan Horse (film) | The Trojan Horse () is a 1961 film set in the tenth and final year of the Trojan War. The film focuses primarily on the exploits of the Trojan hero Aeneas during this time. The film was directed by Giorgio Ferroni and stars Steve Reeves as Aeneas and John Drew Barrymore as Odysseus.
In 2004 it was restored and shown as part of the retrospective "Storia Segreta del Cinema Italiano: Italian Kings of the Bs" at the 61st Venice International Film Festival.
Cast
Steve Reeves as Aeneas
Juliette Mayniel as Creusa
John Drew Barrymore as Odysseus
Edy Vessel as Helen
Lidia Alfonsi as Cassandra
Warner Bentivegna as Paris
Luciana Angiolillo as Andromache
Arturo Dominici as Achilles
Mimmo Palmara as Ajax
Nerio Bernardi as Agamemnon
Nando Tamberlani as Menelaus
Carlo Tamberlani as Priam
Antun Mateš
Production
The battle scenes were shot in Yugoslavia.
Release
The Trojan Horse was released in Italy on 26 October 1961 with a 115-minute running time. It was released in July 1962 in the United States with a 105-minute running time.
See also
List of historical drama films
Greek mythology in popular culture
Sword-and-sandal
Footnotes
References
External links
1961 films
1960s historical films
Peplum films
Italian films
French films
Yugoslav films
Films directed by Giorgio Ferroni
Trojan War films
Films shot in Montenegro
Films shot in Yugoslavia
Siege films
Cultural depictions of Helen of Troy
Films scored by Giovanni Fusco
Sword and sandal films
Agamemnon
13084795 | https://en.wikipedia.org/wiki/IBM%203745 | IBM 3745 | The IBM 3745 is the latest and last of a 37xx family of communications controllers for the IBM mainframe environment. As of mid-2009 there were an estimated 7,000+ of the larger 3745 models still in active production status, down from 20,000 or more in 2007. The 3745 and associated 3746 models were once heavily used within financial, insurance and retail industries as well as within government agencies globally. However, today most organizations have migrated away from the use of 3745s. IBM's Enterprise Extender and the Communication Controller for Linux on System z (CCL) have largely displaced the older 3745s. IBM announced in September 2002 that it would no longer manufacture new 3745s, but IBM continues to support the hardware by providing worldwide maintenance service, by providing microcode releases and by supporting the associated software including NCP (Network Control Program) and the virtual telecommunications access method (VTAM). IBM has announced end-of-service dates for Japan, Europe and the Middle East, but has not yet announced end-of-service for the Americas and parts of Asia.
The latest and most commonly used models of the 3745 are the 3745-31A single CCU and 3745-61A dual CCU models. These are usually operated in conjunction with the 3746-900 expansion unit (aka 900 frame). The 900 frame provides multiple T1, token ring, V.35 and V.24 attachments on the front end, and connects on the back end to the mainframe host with multiple ESCON serial fiber optic channels. An operator and service interface to the 3745 and 900 frame is provided by an IBM Service Processor which operates under the control of the IBM OS/2 operating system and proprietary code.
Production
IBM maintained a contract manufacturing facility for the 3745/3746 product set in Havant, Hampshire, United Kingdom until the end of 2002. This facility was operated by Xyratex Technology Limited. When production at Havant ceased, the remaining inventory of new product, features, parts and components was purchased by Mid-Atlantic Research and Services, Inc. That company built and operated a re-manufacturing facility in Maryland, USA for the purpose of providing upgrade kits and service parts for those large companies that continued to rely upon the 3745/3746 for critical network applications. Re-manufacturing continued in the US at the Mid-Atlantic Research Maryland facility until late 2013. In 2014 Enterprise Infrastructure Solutions, LLC (EIS) of St. Charles, Illinois purchased Mid-Atlantic Research for an undisclosed amount. EIS consults and sells data center solutions including active equipment and physical layer solutions.
Replacements
IBM does not market a direct hardware replacement for the IBM 3745 providing all of the 3745/3746 interfaces. However, IBM offers a software emulation product that provides a subset of 3745/3746 function, IBM's Communications Controller for Linux on System z. CCL is a software emulation that runs on the mainframe under Linux on System z. The NCP (Network Control Program) licensed for use in the 3745/3746 continues to be licensed for use with CCL. CCL employs the IBM OSA adapter for physical connectivity. Low and medium speed lines must be supported through router ports. Alternatively, in many cases it is possible to migrate away from networking protocols supported by the 3745 and CCL, relying solely on the SNA and TCP/IP protocol support provided by z/OS Communications Server.
References
3745
618171 | https://en.wikipedia.org/wiki/Resource%20Reservation%20Protocol | Resource Reservation Protocol | The Resource Reservation Protocol (RSVP) is a transport layer protocol designed to reserve resources across a network using the integrated services model. RSVP operates over an IPv4 or IPv6 and provides receiver-initiated setup of resource reservations for multicast or unicast data flows. It does not transport application data but is similar to a control protocol, like Internet Control Message Protocol (ICMP) or Internet Group Management Protocol (IGMP). RSVP is described in .
RSVP can be used by hosts and routers to request or deliver specific levels of quality of service (QoS) for application data streams. RSVP defines how applications place reservations and how they can relinquish the reserved resources once no longer required. RSVP operations will generally result in resources being reserved in each node along a path. RSVP is not a routing protocol but was designed to interoperate with current and future routing protocols.
RSVP by itself is rarely deployed in telecommunications networks. In 2003, development effort was shifted from RSVP to RSVP-TE for teletraffic engineering. Next Steps in Signaling (NSIS) was a proposed replacement for RSVP.
Main attributes
RSVP requests resources for simplex flows: a traffic stream in only one direction from sender to one or more receivers.
RSVP is not a routing protocol but works with current and future routing protocols.
RSVP is receiver oriented in that the receiver of a data flow initiates and maintains the resource reservation for that flow.
RSVP maintains soft state (the reservation at each node needs a periodic refresh) of the host and routers' resource reservations, hence supporting dynamic automatic adaptation to network changes.
RSVP provides several reservation styles (a set of reservation options) and allows for future styles to be added in protocol revisions to fit varied applications.
RSVP transports and maintains traffic and policy control parameters that are opaque to RSVP.
History and related standards
The basic concepts of RSVP were originally proposed in 1993.
RSVP is described in a series of RFC documents from the IETF:
: The version 1 functional specification was described in RFC 2205 (Sept. 1997) by IETF. Version 1 describes the interface to admission (traffic) control that is based "only" on resource availability. Later RFC2750 extended the admission control support.
defines the use of RSVP with controlled-load RFC 2211 and guaranteed RFC 2212 QoS control services. More details in Integrated Services. Also defines the usage and data format of the data objects (that carry resource reservation information) defined by RSVP in RFC 2205.
specifies the network element behavior required to deliver Controlled-Load services.
specifies the network element behavior required to deliver guaranteed QoS services.
describes a proposed extension for supporting generic policy based admission control in RSVP. The extension included a specification of policy objects and a description on handling policy events. (January 2000).
, "RSVP-TE: Extensions to RSVP for LSP Tunnels" (December 2001).
, "Generalized Multi-Protocol Label Switching (GMPLS) Signaling Resource ReserVation Protocol-Traffic Engineering (RSVP-TE) Extensions" (January 2003).
, "Procedures for Modifying the Resource reSerVation Protocol (RSVP)" (October 2004), describes current best practices and specifies procedures for modifying RSVP.
, "A Resource Reservation Protocol (RSVP) Extension for the Reduction of Bandwidth of a Reservation Flow" (May 2006), extends RSVP to enable the bandwidth of an existing reservation to be reduced instead of tearing down the reservation.
, "Node-ID Based Resource Reservation Protocol (RSVP) Hello: A Clarification Statement" (June 2006).
Key concepts
The two key concepts of RSVP reservation model are flowspec and filterspec.
Flowspec
RSVP reserves resources for a flow. A flow is identified by the destination address, the protocol identifier, and, optionally, the destination port. In multiprotocol label switching (MPLS) a flow is defined as a label switched path (LSP). For each flow, RSVP also identifies the particular quality of service (QoS) required by the flow. This QoS information is called a flowspec and RSVP passes the flowspec from the application to the hosts and routers along the path. Those systems then analyse the flowspec to accept and reserve the resources.
A flowspec consists of:
Service class
Reservation spec - defines the QoS
Traffic spec - describes the data flow
Filterspec
The filterspec defines the set of packets that shall be affected by a flowspec (i.e. the data packets to receive the QoS defined by the flowspec). A filterspec typically selects a subset of all the packets processed by a node. The selection can depend on any attribute of a packet (e.g. the sender IP address and port).
The currently defined RSVP reservation styles are:
Fixed filter - reserves resources for a specific flow.
Shared explicit - reserves resources for several flows and all share the resources
Wildcard filter - reserves resources for a general type of flow without specifying the flow; all flows share the resources
An RSVP reservation request consists of a flowspec and a filterspec and the pair is called a flowdescriptor. The flowspec sets the parameters of the packet scheduler at a node and the filterspec sets the parameters at the packet classifier.
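The flowspec/filterspec pairing described above can be sketched as plain data structures. The field names below are illustrative for exposition; they do not follow the wire format defined in RFC 2205.

```python
# Sketch of an RSVP flowdescriptor: a flowspec (the QoS to reserve)
# paired with a filterspec (the packets that receive that QoS).
from dataclasses import dataclass

@dataclass
class FlowSpec:
    service_class: str   # e.g. "controlled-load" or "guaranteed"
    reservation: dict    # reservation spec: defines the QoS
    traffic: dict        # traffic spec: describes the data flow

@dataclass
class FilterSpec:
    sender_address: str
    sender_port: int

@dataclass
class FlowDescriptor:
    flowspec: FlowSpec      # sets the parameters of the packet scheduler
    filterspec: FilterSpec  # sets the parameters of the packet classifier

fd = FlowDescriptor(
    FlowSpec("controlled-load", {"rate_bps": 1_000_000}, {"peak_bps": 2_000_000}),
    FilterSpec("192.0.2.1", 5004),
)
assert fd.filterspec.sender_port == 5004
```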
Messages
There are two primary types of messages:
Path messages (path)
The path message is sent from the sender host along the data path and stores the path state in each node along the path.
The path state includes the IP address of the previous node, and some data objects:
sender template to describe the format of the sender data in the form of a Filterspec
sender tspec to describe the traffic characteristics of the data flow
adspec that carries advertising data (see RFC 2210 for more details).
Reservation messages (resv)
The resv message is sent from the receiver to the sender host along the reverse data path. At each node the IP destination address of the resv message will change to the address of the next node on the reverse path and the IP source address to the address of the previous node on the reverse path.
The resv message includes the flowspec data object that identifies the resources that the flow needs.
The data objects on RSVP messages can be transmitted in any order. For the complete list of RSVP messages and data objects see RFC 2205.
Operation
An RSVP host that needs to send a data flow with specific QoS will transmit an RSVP path message every 30 seconds that will travel along the unicast or multicast routes pre-established by the working routing protocol. If the path message arrives at a router that does not understand RSVP, that router forwards the message without interpreting the contents of the message and will not reserve resources for the flow.
Receivers that want to listen to the flow send a corresponding resv (short for reserve) message which then traces the path back to the sender. The resv message contains a flowspec. The resv message also has a filterspec object; it defines the packets that will receive the requested QoS defined in the flowspec. A simple filterspec could be just the sender's IP address and optionally its UDP or TCP port. When a router receives the RSVP resv message it will:
Make a reservation based on the request parameters. Admission control processes the request parameters and can either instruct the packet classifier to correctly handle the selected subset of data packets or negotiate with the upper layer how the packet handling should be performed. If the request cannot be supported, a reject message is sent to let the listener know.
Forward the request upstream (in the direction of the sender). At each node the flowspec in the resv message can be modified by a forwarding node (e.g. in the case of a multicast flow reservation the reservations requests can be merged).
The routers then store the nature of the flow and optionally set up policing according to the flowspec for it.
If nothing is heard for a certain length of time the reservation will time out and will be canceled. This solves the problem if either the sender or the receiver crash or are shut down without first canceling the reservation.
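The soft-state behaviour described above — refresh or expire — can be sketched with a simple table of expiry times. The interface and timing values are illustrative, not taken from any RSVP implementation.

```python
# Sketch of RSVP soft state: each reservation must be refreshed
# periodically or it times out and is removed.
class SoftStateTable:
    def __init__(self, lifetime):
        self.lifetime = lifetime
        self.expiry = {}  # flow id -> time at which the state expires

    def refresh(self, flow, now):
        # A periodic path/resv message re-arms the timer for this flow.
        self.expiry[flow] = now + self.lifetime

    def sweep(self, now):
        # Drop reservations whose refresh never arrived (e.g. a crashed host).
        self.expiry = {f: t for f, t in self.expiry.items() if t > now}

table = SoftStateTable(lifetime=90)  # e.g. 3 missed 30-second refreshes
table.refresh("flow-1", now=0)
table.sweep(now=60)
assert "flow-1" in table.expiry      # still alive: within its lifetime
table.sweep(now=120)
assert "flow-1" not in table.expiry  # timed out: no refresh arrived
```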
Other features
Integrity
RSVP messages are appended with a message digest created by combining the message contents and a shared key using a message digest algorithm (commonly MD5). The key can be distributed and confirmed using two message types: integrity challenge request and integrity challenge response.
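The keyed digest can be sketched as follows. This is a simplified illustration using MD5 over the key concatenated with the message contents; it is not the exact keyed-digest construction specified for RSVP cryptographic authentication.

```python
# Simplified sketch of message integrity: sender and receiver share a key,
# and each computes a digest over the key combined with the message.
import hashlib

def message_digest(message: bytes, shared_key: bytes) -> str:
    return hashlib.md5(shared_key + message).hexdigest()

key = b"secret"
msg = b"RESV flow-1 rate=1Mbps"
digest = message_digest(msg, key)

# The receiver recomputes the digest with the same key and compares:
assert message_digest(msg, key) == digest                      # accepted
assert message_digest(b"RESV flow-1 rate=9Mbps", key) != digest  # tampered
```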
Error reporting
When a node detects an error, an error message is generated with an error code and is propagated upstream on the reverse path to the sender.
Information on RSVP flow
Two types of diagnostic messages allow a network operator to request the RSVP state information on a specific flow.
Diagnostic facility
An extension to the standard which allows a user to collect information about the RSVP state along a path.
RFCs
References
External links
Internet architecture
Internet protocols
Transport layer protocols
33455 | https://en.wikipedia.org/wiki/Web%20server | Web server | A web server is computer software and underlying hardware that accepts requests via HTTP (the network protocol created to distribute web content) or its secure variant HTTPS. A user agent, commonly a web browser or web crawler, initiates communication by making a request for a web page or other resource using HTTP, and the server responds with the content of that resource or an error message. A web server can also accept and store resources sent from the user agent if configured to do so.
The hardware used to run a web server can vary according to the volume of requests that it needs to handle. At the low end of the range are embedded systems, such as a router that runs a small web server as its configuration interface. A high-traffic Internet website might handle requests with hundreds of servers that run on racks of high-speed computers.
A resource sent from a web server can be a preexisting file (static content) available to the web server, or it can be generated at the time of the request (dynamic content) by another program that communicates with the server software. The former usually can be served faster and can be more easily cached for repeated requests, while the latter supports a broader range of applications.
Technologies such as REST and SOAP, which use HTTP as a basis for general computer-to-computer communication, as well as support for WebDAV extensions, have extended the application of web servers well beyond their original purpose of serving human-readable pages.
History
This is a very brief history of web server programs, so some information necessarily overlaps with the histories of web browsers, the World Wide Web and the Internet; therefore, for the sake of clarity, some key historical information below may also be found in one or more of those history articles.
Initial WWW project (1989-1991)
In March 1989, Sir Tim Berners-Lee proposed a new project to his employer CERN, with the goal of easing the exchange of information between scientists by using a hypertext system. The proposal, titled "HyperText and CERN", asked for comments and it was read by several people. In October 1990 the proposal was reformulated and enriched (having as co-author Robert Cailliau), and finally it was approved.
Between late 1990 and early 1991 the project resulted in Berners-Lee and his developers writing and testing several software libraries along with three programs, which initially ran on NeXTSTEP OS installed on NeXT workstations:
a graphical web browser, called WorldWideWeb;
a portable line mode web browser;
a web server, later known as CERN httpd.
Those early browsers retrieved web pages from web server(s) using a new basic communication protocol that was named HTTP 0.9.
In August 1991 Tim Berners-Lee announced the birth of WWW technology and encouraged scientists to adopt and develop it. Soon after, those programs, along with their source code, were made available to people interested in their usage. In practice CERN informally allowed other people, including developers, to experiment with and further develop what had been made up to that moment. This was the official birth of CERN httpd. From then on, Berners-Lee promoted the adoption and usage of those programs along with their porting to other OSs.
Fast and wild development (1991-1995)
In December 1991 the first web server outside of Europe was installed at SLAC (U.S.A.). This was a very important event because it started trans-continental web communications between web browsers and web servers.
In 1991-1993 the CERN web server program continued to be actively developed by the www group; meanwhile, thanks to the availability of its source code and the public specifications of the HTTP protocol, many other implementations of web servers started to be developed.
In April 1993 CERN issued a public official statement stating that the three components of Web software (the basic line-mode client, the web server and the library of common code), along with their source code, were put in the public domain. This statement freed web server developers from any possible legal issue about the development of derivative work based on that source code (a threat that in practice never existed).
At the beginning of 1994, the most notable among new web servers was NCSA httpd which ran on a variety of Unix-based OSs and could serve dynamically generated content by implementing the POST HTTP method and the CGI to communicate with external programs. These capabilities, along with the multimedia features of NCSA's Mosaic browser (also able to manage HTML FORMs in order to send data to web server) highlighted the potential of web technology for publishing and distributed computing applications.
In the second half of 1994, the development of NCSA httpd stalled to the point that a group of external software developers, webmasters and other professional figures interested in that server started to write and collect patches, thanks to the NCSA httpd source code being in the public domain. At the beginning of 1995 those patches were all applied to the last release of the NCSA source code and, after several tests, the Apache HTTP Server project was started.
At the end of 1994 a new commercial web server, named Netsite, was released with specific features. It was the first one of many other similar products that were developed first by Netscape, then also by Sun Microsystems and finally by Oracle Corporation.
In mid-1995 the first version of IIS was released, for Windows NT OS, by Microsoft. This was a notable event because it marked the entry, into the field of World Wide Web technologies, of a very important commercial developer and vendor that has played and continues to play a key role on both sides (client and server) of the web.
In the second half of 1995 the CERN and NCSA web servers started to decline (in global percentage usage) because of the widespread adoption of new web servers which had a much faster development cycle along with more features, more fixes applied and better performance than the previous ones.
Explosive growth and competition (1996-2014)
At the end of 1996 there were already over fifty known (different) web server software programs available to everybody who wanted to own an Internet domain name and/or host websites. Many of them were short-lived and were replaced by other web servers.
The publication of RFCs about protocol versions HTTP/1.0 (1996) and HTTP/1.1 (1997, 1999) forced most web servers to comply (not always completely) with those standards. The use of TCP/IP persistent connections (HTTP/1.1) required web servers both to greatly increase the maximum number of concurrent connections allowed and to improve their level of scalability.
Between 1996 and 1999 Netscape Enterprise Server and Microsoft's IIS emerged among the leading commercial options whereas among the freely available and open-source programs Apache HTTP Server held the lead as the preferred server (because of its reliability and its many features).
In those years there was also another commercial, highly innovative and thus notable web server called Zeus (now discontinued), known as one of the fastest and most scalable web servers on the market, at least until the first decade of the 2000s, despite its low percentage of usage.
Apache was the most used web server from mid-1996 to the end of 2015 when, after a few years of decline, it was surpassed initially by IIS and then by Nginx. Afterwards IIS dropped to much lower percentages of usage than Apache (see also market share).
Since 2005-2006, Apache began to improve its speed and scalability by introducing new performance features (e.g. the event MPM and a new content cache). Because those new performance features were initially marked as experimental, they were left disabled by its users for a long time. Apache therefore suffered even more from the competition of commercial servers and, above all, of other open-source servers which had already achieved far superior performance (mostly when serving static content) since the beginning of their development, and which by the time of Apache's decline could also offer a long list of well-tested advanced features.
In fact, in the years after 2000, not only other commercial and highly competitive web servers (e.g. LiteSpeed) but also many other open-source programs emerged, often of excellent quality and very high performance, among which should be noted Hiawatha, Cherokee HTTP server, Lighttpd, Nginx and other derived/related products, also available with commercial support.
Around 2007-2008 most popular web browsers increased their previous default limit of 2 persistent connections per host-domain (a limit recommended by RFC-2616) to 4, 6 or 8 persistent connections per host-domain, in order to speed up the retrieval of heavy web pages with lots of images, and to mitigate the problem of the shortage of persistent connections dedicated to dynamic objects used for bi-directional notifications of events in web pages. Within a year, these changes, on average, nearly tripled the maximum number of persistent connections that web servers had to manage. This trend (of increasing the number of persistent connections) definitely gave a strong impetus to the adoption of reverse proxies in front of slower web servers, and it also gave one more chance to the emerging new web servers that could show all their speed and their capability to handle very high numbers of concurrent connections without requiring too many hardware resources (expensive computers with lots of CPUs, RAM and fast disks).
New challenges (2015 and later years)
In 2015, new RFCs published the new protocol version HTTP/2; as the implementation of the new specifications was not trivial at all, a dilemma arose among developers of less popular web servers (e.g. those with a percentage of usage lower than 1%-2%) about whether or not to add support for that new protocol version.
In fact, supporting HTTP/2 often required radical changes to their internal implementation due to many factors (practically always required encrypted connections; the capability to distinguish between HTTP/1.x and HTTP/2 connections on the same TCP port; binary representation of HTTP messages; message priority; compression of HTTP headers; use of streams, also known as TCP/IP sub-connections, and related flow-control; etc.), and so a few developers of those web servers opted for not supporting the new HTTP/2 version (at least in the near future), also for these main reasons:
protocols HTTP/1.x would be supported anyway by browsers for a very long time (maybe forever), so that there would be no incompatibility between clients and servers in the near future;
implementing HTTP/2 was considered a task of overwhelming complexity that could open the door to a whole new class of bugs that until 2015 did not exist, and so it would have required notable investments in developing and testing the implementation of the new protocol;
adding HTTP/2 support could always be done in future in case the efforts would be justified.
Instead, developers of the most popular web servers rushed to offer the availability of the new protocol, not only because they had the workforce and the time to do so, but also because usually their previous implementation of the SPDY protocol could be reused as a starting point and because most used web browsers implemented it very quickly for the same reason. Another reason that prompted those developers to act quickly was that webmasters felt the pressure of ever-increasing web traffic and really wanted to install and try, as soon as possible, something that could drastically lower the number of TCP/IP connections and speed up accesses to hosted websites.
In 2020-2021 the HTTP/2 dynamics about its implementation (by top web servers and popular web browsers) were partly replicated after the publication of advanced drafts of the future RFC about the HTTP/3 protocol.
Technical overview
The following technical overview should be considered only an attempt to give a few limited examples of features that may be implemented in a web server and of tasks that it may perform, in order to present a sufficiently wide view of the topic.
A web server program plays the role of the server in a client–server model by implementing one or more versions of the HTTP protocol, often including the secure HTTPS variant and other features and extensions that are considered useful for its planned usage.
The complexity and the efficiency of a web server program may vary a lot depending on, e.g.:
common features implemented;
common tasks performed;
the performance and scalability level aimed at;
the software model and techniques adopted to achieve the desired performance and scalability level;
the target hardware and category of usage, e.g. embedded system, low- to medium-traffic web server, high-traffic Internet web server.
Common features
Although web server programs differ in how they are implemented, most of them offer the following common features.
These are the basic features that most web servers usually have.
Static content serving: the ability to serve static content (web files) to clients via the HTTP protocol.
HTTP: support for one or more versions of the HTTP protocol, in order to send versions of HTTP responses compatible with the versions of client HTTP requests, e.g. HTTP/1.0, HTTP/1.1 (possibly also with encrypted connections, i.e. HTTPS), plus, if available, HTTP/2 and HTTP/3.
Logging: usually web servers also have the capability of logging some information, about client requests and server responses, to log files for security and statistical purposes.
A few other, more advanced and popular features (only a very short selection) are the following.
Dynamic content serving: the ability to serve dynamic content (generated on the fly) to clients via the HTTP protocol.
Virtual hosting: the ability to serve many websites (domain names) using only one IP address.
Authorization: the ability to allow, forbid, or authorize access to portions of website paths (web resources).
Content cache: the ability to cache static and/or dynamic content in order to speed up server responses.
Large file support: the ability to serve files larger than 2 GB on a 32-bit OS.
Bandwidth throttling: the ability to limit the speed of content responses in order not to saturate the network and to be able to serve more clients.
Rewrite engine: the ability to map parts of clean URLs (found in client requests) to their real names.
Custom error pages: support for customized HTTP error messages.
Common tasks
A web server program, when it is running, usually performs several general tasks, e.g.:
starts, optionally reads and applies settings found in its configuration file(s) or elsewhere, optionally opens the log file, and starts listening for client connections / requests;
optionally tries to adapt its general behavior according to its settings and its current operating conditions;
manages client connection(s) (accepting new ones or closing the existing ones as required);
receives client requests (by reading HTTP messages):
reads and verifies each HTTP request message;
usually performs URL normalization;
usually performs URL mapping (which may default to URL path translation);
usually performs URL path translation along with various security checks;
executes or refuses requested HTTP method:
optionally manages URL authorizations;
optionally manages URL redirections;
optionally manages requests for static resources (file contents):
optionally manages directory index files;
optionally manages regular files;
optionally manages requests for dynamic resources:
optionally manages directory listings;
optionally manages program or module processing, checking the availability, the start and possibly the stop of the execution of external programs used to generate dynamic content;
optionally manages the communications with external programs / internal modules used to generate dynamic content;
replies to client requests by sending proper HTTP responses (e.g. requested resources or error messages), possibly verifying or adding HTTP headers to those sent by dynamic programs / modules;
optionally logs (partially or totally) client requests and/or its responses to an external user log file or to a system log file by syslog, usually using the common log format;
optionally logs process messages about detected anomalies or other notable events (e.g. in client requests or in its internal functioning) using syslog or some other system facilities; these log messages usually have a debug, warning, error or alert level, which can be filtered (not logged) depending on some settings; see also severity level;
optionally generates statistics about managed web traffic and/or its performance;
other custom tasks.
Read request message
Web server programs are able:
to read an HTTP request message;
to interpret it;
to verify its syntax;
to identify known HTTP headers and to extract their values.
Once a request message has been decoded and verified, its values can be used to determine whether that request can be satisfied or not, and many other steps (including security checks) are performed to do so.
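The reading and verification steps above can be sketched as a minimal parser. This is only an illustration (the function name and simplifications are assumptions, not a real server's API): a production server also enforces size limits, validates token characters and handles many protocol edge cases.

```python
# Minimal sketch of reading and verifying an HTTP/1.x request message
# (illustrative only; not a complete or robust parser).
def parse_request(raw: bytes):
    head, _, _body = raw.partition(b"\r\n\r\n")
    lines = head.decode("iso-8859-1").split("\r\n")
    method, target, version = lines[0].split(" ", 2)  # e.g. GET /path HTTP/1.1
    if not version.startswith("HTTP/"):
        raise ValueError("malformed request line")
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()  # names are case-insensitive
    return method, target, version, headers

method, target, version, headers = parse_request(
    b"GET /path/file.html HTTP/1.1\r\nHost: www.example.com\r\n\r\n")
```

After this step the server knows the request method, the target URL path and the header values (e.g. the Host header used for virtual hosting).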
URL normalization
Web server programs usually perform some type of URL normalization (of the URL found in most HTTP request messages) in order:
to make the resource path always a clean, uniform path from the root directory of the website;
to lower security risks (e.g. by more easily intercepting attempts to access static resources outside the root directory of the website, or to access portions of the path below the website root directory that are forbidden or require authorization);
to make the paths of web resources more recognizable by human beings and by web log analysis programs (also known as log analyzers / statistical applications).
The term URL normalization refers to the process of modifying and standardizing a URL in a consistent manner. There are several types of normalization that may be performed, including the conversion of the URL's domain name to lowercase; the most important are the removal of "." and ".." path segments and the addition of a trailing slash to a non-empty path component.
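The most important normalizations mentioned above can be sketched with Python's standard library (the function name is an assumption for illustration, not a standard API):

```python
import posixpath
from urllib.parse import unquote, urlsplit

# Sketch of URL path normalization: percent-decoding, collapsing "." and
# ".." segments, and refusing paths that try to climb above the website root.
def normalize_path(url: str) -> str:
    path = unquote(urlsplit(url).path) or "/"
    norm = posixpath.normpath(path)
    if norm.startswith(".."):
        raise ValueError("path escapes the website root")
    if path.endswith("/") and norm != "/":
        norm += "/"                 # keep the trailing slash of a directory
    return norm

print(normalize_path("http://www.example.com/a/./b/../c/"))  # /a/c/
```

After normalization, the same resource is always identified by the same clean path, which makes the later security checks and log analysis simpler.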
URL mapping
"URL mapping is the process by which a URL is analyzed to figure out what resource it is referring to, so that that resource can be returned to the requesting client. This process is performed with every request that is made to a web server, with some of the requests being served with a file, such as an HTML document, or a gif image, others with the results of running a CGI program, and others by some other process, such as a built-in module handler, a PHP document, or a Java servlet."
In practice, web server programs that implement advanced features beyond simple static content serving (e.g. a URL rewrite engine, dynamic content serving) usually have to figure out how a URL has to be handled, e.g.:
as a URL redirection, a redirection to another URL;
as a static request of file content;
as a dynamic request of:
directory listing of files or other sub-directories contained in that directory;
other types of dynamic requests, in order to identify the program / module processor able to handle that kind of URL path and to pass to it the other URL parts, i.e. usually path-info and query string variables.
One or more configuration files of the web server may specify the mapping of parts of the URL path (e.g. initial parts of the file path, filename extension and other path components) to a specific URL handler (file, directory, external program or internal module).
When a web server implements one or more of the above-mentioned advanced features, the path part of a valid URL may not always match an existing file system path under the website directory tree (a file or a directory in the file system), because it can refer to a virtual name of an internal or external module processor for dynamic requests.
URL path translation to file system
Web server programs are able to translate a URL path (all of it, or a part of it) that refers to a physical file system path into an absolute path under the target website's root directory.
The website's root directory may be specified by a configuration file or by some internal rule of the web server, using the name of the website, which is the host part of the URL found in the HTTP client request.
Path translation to file system is done for the following types of web resources:
a local, usually non-executable, file (static request for file content);
a local directory (dynamic request: directory listing generated on the fly);
a program name (dynamic request that is executed using a CGI or SCGI interface and whose output is read by the web server and resent to the client who made the HTTP request).
The web server takes the path found in the requested URL (HTTP request message) and appends it to the path of the (host) website's root directory. On an Apache server, this is commonly /home/www/website (on Unix machines, usually it is /var/www/website). See the following examples of how this may result.
URL path translation for a static file request
Example of a static request of an existing file specified by the following URL:
http://www.example.com/path/file.html
The client's user agent connects to www.example.com and then sends the following HTTP/1.1 request:
GET /path/file.html HTTP/1.1
Host: www.example.com
Connection: keep-alive
The result is the local file system resource:
/home/www/www.example.com/path/file.html
The web server then reads the file, if it exists, and sends a response to the client's web browser. The response will describe the content of the file and contain the file itself, or an error message will be returned saying that the file does not exist or that access to it is forbidden.
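The translation performed in the example above can be sketched as follows. The docroot layout (/home/www/<host>) is taken from the example; the function name and the simple virtual-hosting rule are assumptions for illustration:

```python
import posixpath

# Sketch of URL path to file system translation: the normalized URL path is
# appended to the requested host's root directory, and the result must stay
# inside that directory (a basic security check against path traversal).
def translate(host: str, url_path: str, base: str = "/home/www") -> str:
    docroot = f"{base}/{host}"                      # simple virtual-hosting rule
    fs_path = posixpath.normpath(docroot + url_path)
    if fs_path != docroot and not fs_path.startswith(docroot + "/"):
        raise PermissionError("path outside the website root")
    return fs_path

print(translate("www.example.com", "/path/file.html"))
# /home/www/www.example.com/path/file.html
```

Note how a traversal attempt such as /../etc/passwd is rejected because the normalized result no longer lies under the website's root directory.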
URL path translation for a directory request (without a static index file)
Example of an implicit dynamic request of an existing directory specified by the following URL:
http://www.example.com/directory1/directory2/
The client's user agent connects to www.example.com and then sends the following HTTP/1.1 request:
GET /directory1/directory2 HTTP/1.1
Host: www.example.com
Connection: keep-alive
The result is the local directory path:
/home/www/www.example.com/directory1/directory2/
The web server then verifies the existence of the directory; if it exists and can be accessed, the server tries to find an index file (which in this case does not exist), and so it passes the request to an internal module or to a program dedicated to directory listings, finally reading the data output and sending a response to the client's web browser. The response will describe the content of the directory (the list of contained subdirectories and files), or an error message will be returned saying that the directory does not exist or that access to it is forbidden.
URL path translation for a dynamic program request
For a dynamic request, the URL path specified by the client should refer to an existing external program (usually an executable file with a CGI interface) used by the web server to generate dynamic content.
Example of a dynamic request using a program file to generate output:
http://www.example.com/cgi-bin/forum.php?action=view&orderby=thread&date=2021-10-15
The client's user agent connects to www.example.com and then sends the following HTTP/1.1 request:
GET /cgi-bin/forum.php?action=view&orderby=thread&date=2021-10-15 HTTP/1.1
Host: www.example.com
Connection: keep-alive
The result is the local file path of the program (in this example a PHP program):
/home/www/www.example.com/cgi-bin/forum.php
The web server executes that program, passing to it the path-info and the query string action=view&orderby=thread&date=2021-10-15, so that the program knows what to do (in this case, to return, as an HTML document, a view of forum entries ordered by thread since October 15th, 2021). Besides this, the web server reads the data sent by that external program and resends it to the client that made the request.
Manage request message
Once a request has been read, interpreted and verified, it has to be managed depending on its method, its URL and its parameters, which may include the values of HTTP headers.
In practice, the web server has to handle the request by using one of these response paths:
if something in the request was not acceptable (in the status line or in the message headers), the web server has already sent an error response;
if the request has a method (e.g. OPTIONS) that can be satisfied by the general code of the web server, then a successful response is sent;
if the URL requires authorization, then an authorization error message is sent;
if the URL maps to a redirection, then a redirect message is sent;
if the URL maps to a dynamic resource (a virtual path or a directory listing), then its handler (an internal module or an external program) is called and the request parameters (query string and path info) are passed to it, in order to allow it to reply to that request;
if the URL maps to a static resource (usually a file on the file system), then the internal static handler is called to send that file;
if the request method is not known or if there is some other unacceptable condition (e.g. resource not found, internal server error, etc.), then an error response is sent.
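The response paths above can be sketched as a dispatcher. Everything here is a toy stand-in (the Request class, the handler arguments and the numeric status codes are assumptions, not a real server's API):

```python
from dataclasses import dataclass

# Toy sketch of the request dispatch logic listed above.
@dataclass
class Request:
    method: str
    path: str
    authorized: bool = False

def handle(req: Request, redirects=(), protected=(), static=()):
    if req.method not in ("GET", "HEAD", "OPTIONS", "POST"):
        return 501                  # method not implemented
    if req.path in protected and not req.authorized:
        return 401                  # authorization required
    if req.path in redirects:
        return 301                  # redirect to a new location
    if req.path in static:
        return 200                  # internal static handler sends the file
    return 404                      # not found / other unacceptable condition

print(handle(Request("GET", "/index.html"), static={"/index.html"}))  # 200
print(handle(Request("GET", "/secret/"), protected={"/secret/"}))     # 401
```

A real server evaluates these conditions in a configurable order and with far richer rules, but the overall shape of the decision chain is the same.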
Serve static content
If a web server program is capable of serving static content and has been configured to do so, then it is able to send file content whenever a request message has a valid URL path matching (after URL mapping, URL translation and URL redirection) that of an existing file under the root directory of a website, and the file has attributes matching those required by the internal rules of the web server program.
That kind of content is called static because it usually is not changed by the web server when it is sent to clients, and because it remains the same until it is modified (file modification) by some program.
NOTE: when serving static content only, a web server program usually does not change the file contents of the served websites (as they are only read, never written), and so it suffices to support only these HTTP methods:
OPTIONS
HEAD
GET
Responses with static file content can be sped up by a file cache.
Directory index files
If a web server program receives a client request message with a URL whose path matches that of an existing directory, that directory is accessible, and serving directory index file(s) is enabled, then the web server program may try to serve the first of the known (or configured) static index file names (a regular file) found in that directory; if no index file is found, or other conditions are not met, an error message is returned.
The most used names for static index files are index.html, index.htm and Default.htm.
Regular files
If a web server program receives a client request message with a URL whose path matches the file name of an existing file, that file is accessible by the web server program, and its attributes match the internal rules of the web server program, then the web server program can send that file to the client.
Usually, for security reasons, most web server programs are pre-configured to serve only regular files and to avoid special file types such as device files, along with symbolic links or hard links to them. The aim is to avoid undesirable side effects when serving static web resources.
Serve dynamic content
If a web server program is capable of serving dynamic content and has been configured to do so, then it is able to communicate with the proper internal module or external program (associated with the requested URL path) in order to pass to it the parameters of the client request; after that, the web server program reads the data response (which the module or program has generated, often on the fly) and resends it to the client program that made the request.
NOTE: when serving static and dynamic content, a web server program usually also has to support the following HTTP method, in order to be able to safely receive data from client(s) and thus to host also websites with interactive form(s) that may send large data sets (e.g. lots of data entry or file uploads) to the web server / external programs / modules:
POST
In order to be able to communicate with its internal modules and/or external programs, a web server program must have implemented one or more of the many available gateway interfaces (see also Web Server Gateway Interfaces used for dynamic content).
The three standard and historical gateway interfaces are the following.
CGI
An external CGI program is run by the web server program for each dynamic request; the web server program then reads the generated data response from it and resends it to the client.
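The CGI mechanism can be sketched as follows: the server passes request data to the program through environment variables, runs it once per request, and reads its output (CGI headers, a blank line, then the body) from stdout. The inline script below is a hypothetical stand-in for a real external CGI program:

```python
import os
import subprocess
import sys

# Stand-in CGI program: prints a CGI header block, a blank line, then a body
# built from the QUERY_STRING environment variable set by the "server".
cgi_script = (
    "import os\n"
    "print('Content-Type: text/plain')\n"
    "print()\n"
    "print('query was:', os.environ.get('QUERY_STRING', ''))\n"
)

# The "server" side: set the standard CGI meta-variables, run the program
# once for this request, and capture its stdout as the response data.
env = dict(os.environ,
           REQUEST_METHOD="GET",
           QUERY_STRING="action=view&orderby=thread",
           SCRIPT_NAME="/cgi-bin/forum.py")
out = subprocess.run([sys.executable, "-c", cgi_script],
                     env=env, capture_output=True, text=True).stdout
print(out)
```

The per-request process creation is exactly what makes plain CGI slower than SCGI or FastCGI, where the external process is started once and reused.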
SCGI
An external SCGI program (usually a process) is started once, by the web server program or by some other program / process, and then waits for network connections; every time there is a new request for it, the web server program makes a new network connection to it in order to send the request parameters and to read its data response, and then the network connection is closed.
FastCGI
An external FastCGI program (usually a process) is started once, by the web server program or by some other program / process, and then waits for a network connection that is established permanently by the web server; through that connection the request parameters are sent and the data responses are read.
Directory listings
A web server program may be capable of managing the dynamic generation (on the fly) of a directory index listing files and sub-directories.
If a web server program is configured to do so, a requested URL path matches an existing directory, access to it is allowed, and no static index file is found under that directory, then a web page (usually in HTML format) containing the list of files and/or subdirectories of the above-mentioned directory is dynamically generated (on the fly). If it cannot be generated, an error is returned.
Some web server programs allow the customization of directory listings by allowing the usage of a web page template (an HTML document containing placeholders, e.g. $(FILE_NAME), $(FILE_SIZE), etc., that are replaced with the field values of each file entry found in the directory by the web server), e.g. index.tpl, or the usage of HTML and embedded source code that is interpreted and executed on the fly, e.g. index.asp, and/or by supporting the usage of dynamic index programs such as CGIs, SCGIs and FCGIs, e.g. index.cgi, index.php, index.fcgi.
The usage of dynamically generated directory listings is usually avoided, or limited to a few selected directories of a website, because the generation takes many more OS resources than sending a static index page.
The main usage of directory listings is to allow the download of files (usually when their names, sizes, modification date-times or file attributes may change randomly / frequently) as they are, without requiring further information from the requesting user.
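The on-the-fly generation described above can be sketched as a small function that builds an HTML page from directory entries (the function name is an assumption; a real server would add file sizes, dates, sorting options and access checks):

```python
import html
import os
import tempfile

# Sketch of dynamic directory listing generation: one escaped hyperlink per
# directory entry, wrapped in a minimal HTML page.
def listing(directory: str) -> str:
    items = "".join(
        f'<li><a href="{html.escape(name)}">{html.escape(name)}</a></li>'
        for name in sorted(os.listdir(directory)))
    title = html.escape(directory)
    return f"<html><body><h1>Index of {title}</h1><ul>{items}</ul></body></html>"

# usage on a throwaway directory containing one file
d = tempfile.mkdtemp()
open(os.path.join(d, "file.txt"), "w").close()
page = listing(d)
```

Escaping each file name (html.escape) matters here: directory listings that embed raw file names in HTML have historically been a source of cross-site scripting bugs.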
Program or module processing
An external program or an internal module (processing unit) can execute some sort of application function that may be used to get data from or to store data to one or more data repositories, e.g.:
files (file system);
databases (DBs);
other sources located in local computer or in other computers.
A processing unit can return any kind of web content, also by using data retrieved from a data repository, e.g.:
a document (e.g. HTML, XML, etc.);
an image;
a video;
structured data, e.g. data that may be used to update one or more values displayed by a dynamic page (DHTML) of a web interface and that may have been requested by an XMLHttpRequest API (see also: dynamic page).
In practice, whenever there is content that may vary depending on one or more parameters contained in the client request or in configuration settings, it is usually generated dynamically.
Send response message
Web server programs are able to send response messages as replies to client request messages.
An error response message may be sent because a request message could not be successfully read, decoded, analyzed or executed.
NOTE: the following sections are reported only as examples to help understand what a web server, more or less, does; these sections are by no means exhaustive or complete.
Error message
A web server program may reply to a client request message with many kinds of error messages; anyway, these errors fall mainly into two categories:
HTTP client errors, due to the type of request message or to the availability of requested web resource;
HTTP server errors, due to internal server errors.
When an error response / message is received by a client browser, if it is related to the main user request (e.g. a URL of a web resource such as a web page), then usually that error message is shown in some browser window / message.
URL authorization
A web server program may be able to verify whether the requested URL path:
can be freely accessed by everybody;
requires user authentication (a request for user credentials, such as user name and password);
is forbidden to some or all kinds of users.
If the authorization / access rights feature has been implemented and enabled, and access to the web resource is not granted, then, depending on the required access rights, a web server program:
can deny access by sending a specific error message (e.g. access forbidden);
may deny access by sending a specific error message (e.g. access unauthorized) that usually forces the client browser to ask the human user to provide the required user credentials; if authentication credentials are provided, then the web server program verifies and accepts or rejects them.
URL redirection
A web server program may have the capability of performing URL redirections to new URLs (new locations), which consists in replying to a client request message with a response message containing a new URL suited to access a valid or existing web resource (the client should redo the request with the new URL).
URL redirection of location is used:
to fix a directory name by adding a final slash '/';
to give a new URL for a no-longer-existing URL path, pointing to a new path where that kind of web resource can be found;
to give a new URL pointing to another domain, when the current domain has too much load.
Example 1: a URL path points to a directory name but it does not have a final slash '/', so the web server sends a redirect to the client in order to instruct it to redo the request with the fixed path name.
From:
/directory1/directory2
To:
/directory1/directory2/
Example 2: a whole set of documents has been moved inside the website in order to reorganize their file system paths.
From:
/directory1/directory2/2021-10-08/
To:
/directory1/directory2/2021/10/08/
Example 3: a whole set of documents has been moved to a new website and now it is mandatory to use secure HTTPS connections to access them.
From:
http://www.example.com/directory1/directory2/2021-10-08/
To:
https://docs.example.com/directory1/2021-10-08/
The above examples are only a few of the possible kinds of redirection.
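The redirect of Example 1 can be sketched as the raw HTTP response a server might build (the host and path come from the examples above; the function name is an assumption):

```python
# Sketch of an Example 1 redirect: a directory path lacking its final slash
# is answered with a 301 response whose Location header carries the fixed URL.
def redirect_missing_slash(host: str, path: str) -> str:
    location = f"http://{host}{path}/"
    return ("HTTP/1.1 301 Moved Permanently\r\n"
            f"Location: {location}\r\n"
            "Content-Length: 0\r\n"
            "\r\n")

resp = redirect_missing_slash("www.example.com", "/directory1/directory2")
print(resp)
```

On receiving this response, the client browser automatically repeats the request using the URL found in the Location header.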
Successful message
A web server program is able to reply to a valid client request message with a successful message, optionally containing requested web resource data.
If web resource data is sent back to the client, then it can be static or dynamic content, depending on how it has been retrieved (from a file or from the output of some program / module).
Content cache
In order to speed up web server responses by lowering average HTTP response times and hardware resources used, many popular web servers implement one or more content caches, each one specialized in a content category.
Content is usually cached by its origin, e.g.:
static content:
file cache;
dynamic content:
dynamic cache (module / program output).
File cache
Historically, static content found in files that had to be accessed frequently, randomly and quickly has been stored mostly on electro-mechanical disks since the mid-late 1960s / 1970s; regrettably, reads from and writes to those kinds of devices have always been considered very slow operations compared to RAM speed, and so, since early OSs, first disk caches and then OS file cache sub-systems were developed to speed up I/O operations on frequently accessed data / files.
Even with the aid of an OS file cache, the relative / occasional slowness of I/O operations involving directories and files stored on disks soon became a bottleneck in the increase of performance expected from top-level web servers, especially since the mid-late 1990s, when Internet web traffic started to grow exponentially along with the constant increase in the speed of Internet / network lines.
The problem of how to further speed up the serving of static files, thus increasing the maximum number of requests/responses per second (RPS), started to be studied / researched in the mid-1990s, with the aim of proposing useful cache models that could be implemented in web server programs.
In practice, nowadays, many popular / high-performance web server programs include their own userland file cache, tailored for web server usage and using their specific implementation and parameters.
The widespread adoption of RAID and/or fast solid-state drives (storage hardware with very high I/O speed) has slightly reduced, but of course not eliminated, the advantage of having a file cache incorporated in a web server.
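A userland file cache of the kind described above can be sketched as follows. This is a deliberately tiny model (class and attribute names are assumptions): real server caches also bound total memory usage, not just the entry count, and handle concurrency.

```python
import collections
import os
import tempfile

# Sketch of a tiny userland file cache: recently used file bodies are kept
# in RAM; an entry is invalidated when the file's modification time changes.
class FileCache:
    def __init__(self, max_entries=256):
        self.entries = collections.OrderedDict()    # path -> (mtime, data)
        self.max_entries = max_entries

    def read(self, path):
        mtime = os.stat(path).st_mtime_ns
        hit = self.entries.get(path)
        if hit and hit[0] == mtime:
            self.entries.move_to_end(path)          # LRU bookkeeping
            return hit[1]                           # cache hit: served from RAM
        with open(path, "rb") as f:                 # cache miss: read from disk
            data = f.read()
        self.entries[path] = (mtime, data)
        if len(self.entries) > self.max_entries:
            self.entries.popitem(last=False)        # evict least recently used
        return data

# usage on a throwaway file
cache = FileCache()
path = os.path.join(tempfile.mkdtemp(), "index.html")
with open(path, "wb") as f:
    f.write(b"<html>hello</html>")
first = cache.read(path)    # miss: loaded from disk
second = cache.read(path)   # hit: served from RAM
```

Checking the modification time on every access keeps the cache coherent with the file system at the cost of one stat() per request, a common trade-off in real web server file caches.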
Dynamic cache
Dynamic content, output by an internal module or an external program, may not always change very frequently (given a unique URL with keys / parameters), and so, maybe for a while (e.g. from 1 second to several hours or more), the resulting output can be cached in RAM or even on a fast disk.
The typical usage of a dynamic cache is when a website has dynamic web pages about news, weather, images, maps, etc. that do not change frequently (e.g. every n minutes) and that are accessed by a huge number of clients per minute / hour; in those cases it is useful to return cached content too (without calling the internal module or the external program), because clients often do not have an updated copy of the requested content in their browser caches.
Anyway, in most cases, those kinds of caches are implemented by external servers (e.g. a reverse proxy) or by storing dynamic data output in separate computers managed by specific applications (e.g. memcached), in order not to compete for hardware resources (CPU, RAM, disks) with the web server(s).
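The time-based caching described above can be sketched as a small TTL cache (class and method names are assumptions for illustration):

```python
import time

# Sketch of a time-based (TTL) cache for dynamic content: the output
# generated for a given URL (including its query string) is reused until it
# expires, sparing a call to the internal module / external program.
class DynamicCache:
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}                        # url -> (expiry, content)

    def get(self, url, generate):
        entry = self.store.get(url)
        now = time.monotonic()
        if entry and entry[0] > now:
            return entry[1]                    # still fresh: cached copy
        content = generate()                   # expired / missing: regenerate
        self.store[url] = (now + self.ttl, content)
        return content

calls = []
cache = DynamicCache(ttl_seconds=60)
gen = lambda: calls.append(1) or "sunny"       # stand-in dynamic handler
cache.get("/weather?city=rome", gen)
result = cache.get("/weather?city=rome", gen)  # second call hits the cache
```

The second lookup returns the cached output without invoking the generator again, which is exactly the saving that makes a dynamic cache worthwhile for frequently requested, slowly changing pages.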
Kernel-mode and user-mode web servers
Web server software can either be incorporated into the OS and executed in kernel space, or be executed in user space (like other regular applications).
Web servers that run in kernel mode (usually called kernel-space web servers) can have direct access to kernel resources and so can be, in theory, faster than those running in user mode; anyway, there are disadvantages in running a web server in kernel mode, e.g. difficulties in developing (debugging) the software, whereas run-time critical errors may lead to serious problems in the OS kernel.
Web servers that run in user mode have to ask the system for permission to use more memory or more CPU resources. Not only do these requests to the kernel take time, but they might not always be satisfied, because the system reserves resources for its own usage and has the responsibility to share hardware resources with all the other running applications. Executing in user mode can also mean using more buffer / data copies (between user space and kernel space), which can lead to a decrease in the performance of a user-mode web server.
Nowadays almost all web server software is executed in user mode (because many of the aforementioned small disadvantages have been overcome by faster hardware, new OS versions, much faster OS system calls and new optimized web server software). See also the comparison of web server software to discover which of them run in kernel mode or in user mode (also referred to as kernel space or user space).
Performance
To improve the user experience (on the client / browser side), a web server should reply quickly (as soon as possible) to client requests; unless the content response is throttled (by configuration) for some types of files (e.g. big or huge files), the returned data content should also be sent as fast as possible (high transfer speed).
In other words, a web server should always be very responsive, even under a high load of web traffic, in order to keep the total user wait time (sum of browser time + network time + web server response time) for a response as low as possible.
Performance metrics
For web server software, the main key performance metrics (measured under varying operating conditions) usually are at least the following:
number of requests per second (RPS, similar to QPS, depending on HTTP version and configuration, type of HTTP requests and other operating conditions);
number of connections per second (CPS), i.e. the number of connections per second accepted by the web server (useful when using HTTP/1.0 or HTTP/1.1 with a very low limit of requests / responses per connection, i.e. 1 .. 20);
latency + response time for each new client request; usually the benchmark tool shows how many requests have been satisfied within a scale of time lapses (e.g. within 1 ms, 3 ms, 5 ms, 10 ms, 20 ms, 30 ms, 40 ms) and/or the shortest, the average and the longest response time;
throughput of responses, in bytes per second.
Among the operating conditions, the number (1 .. n) of concurrent client connections used during a test is an important parameter, because it allows one to correlate the concurrency level supported by the web server with the results of the tested performance metrics.
Software efficiency
The specific web server software design and model adopted (e.g.):
single process or multi-process;
single thread (no thread) or multi-thread for each process;
usage of coroutines or not;
... and other programming techniques, such as (e.g.):
zero copy;
minimization of possible CPU cache misses;
minimization of possible CPU branch mispredictions in critical paths for speed;
minimization of the number of system calls used to perform a certain function / task;
other tricks;
... used to implement a web server program, can greatly affect the performance, and in particular the scalability level, that can be achieved under heavy load or when using high-end hardware (many CPUs, disks and lots of RAM).
In practice, some web server software models may require more OS resources (especially more CPUs and more RAM) than others in order to work well and achieve target performance.
Operating conditions
There are many operating conditions that can affect the performance of a web server; performance values may vary depending on, e.g.:
the settings of the web server (including whether the log file is or is not enabled, etc.);
the HTTP version used by client requests;
the average HTTP request type (method, length of HTTP headers and optional body);
whether the requested content is static or dynamic;
whether the content is cached or not cached (by server and/or by client);
whether the content is compressed on the fly (when transferred), pre-compressed (i.e. when a file resource is stored on disk already compressed so that web server can send that file directly to the network with the only indication that its content is compressed) or not compressed at all;
whether the connections are or are not encrypted;
the average network speed between web server and its clients;
the number of active TCP connections;
the number of active processes managed by web server (including external CGI, SCGI, FCGI programs);
the hardware and software limitations or settings of the OS of the computer(s) on which the web server runs;
other minor conditions.
Benchmarking
The performance of a web server is typically benchmarked by using one or more of the available automated load testing tools.
Load limits
A web server (program installation) usually has pre-defined load limits for each combination of operating conditions, also because it is limited by OS resources and because it can handle only a limited number of concurrent client connections (usually between 2 and several tens of thousands for each active web server process; see also the C10k problem and the C10M problem).
When a web server is near to, or over, its load limits, it gets overloaded and so may become unresponsive.
Causes of overload
At any time web servers can be overloaded due to one or more of the following causes:
Excess legitimate web traffic. Thousands or even millions of clients connecting to the website in a short amount of time, e.g., Slashdot effect.
Distributed Denial of Service attacks. A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a computer or network resource unavailable to its intended users.
Computer worms that sometimes cause abnormal traffic because of millions of infected computers (not coordinated with each other).
XSS worms can cause high traffic because of millions of infected browsers or web servers.
Internet bot traffic that is not filtered/limited on large websites with very few network resources (e.g. bandwidth) and/or hardware resources (CPUs, RAM, disks).
Internet (network) slowdowns (e.g. due to packet losses) so that client requests are served more slowly and the number of connections increases so much that server limits are reached.
Web servers serving dynamic content that must wait for slow responses from back-end computer(s) (e.g. databases), perhaps because of too many queries mixed with too many inserts or updates of DB data; in these cases web servers have to wait for back-end responses before replying to HTTP clients, but during these waits too many new client connections/requests arrive and so they become overloaded.
Partial unavailability of web servers (computers). This can happen because of required or urgent maintenance or upgrades, or because of hardware or software failures such as back-end (e.g. database) failures; in these cases the remaining web servers may get too much traffic and become overloaded.
Symptoms of overload
The symptoms of an overloaded web server usually include the following:
Requests are served with (possibly long) delays (from 1 second to a few hundred seconds).
The web server returns an HTTP error code, such as 500, 502, 503, 504, 408, or even an intermittent 404.
The web server refuses or resets (interrupts) TCP connections before it returns any content.
In very rare cases, the web server returns only a part of the requested content. This behavior can be considered a bug, even if it usually arises as a symptom of overload.
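On the client side, the error codes listed above are commonly handled by retrying with exponential backoff. A minimal sketch (the set of retryable codes and the helper name are illustrative choices, not a standard):

```python
import time
import urllib.error
import urllib.request

# Status codes that typically signal server overload or transient failure.
OVERLOAD_CODES = {408, 429, 500, 502, 503, 504}


def fetch_with_backoff(url, max_retries=4, base_delay=0.5):
    """GETs `url`, retrying with exponential backoff when the server
    answers with a status code that typically signals overload."""
    for attempt in range(max_retries + 1):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as e:
            if e.code not in OVERLOAD_CODES or attempt == max_retries:
                raise
            # Honor an explicit Retry-After header if the server sent one.
            retry_after = e.headers.get("Retry-After")
            delay = float(retry_after) if retry_after else base_delay * (2 ** attempt)
            time.sleep(delay)
```

Well-behaved clients also add random jitter to the delay so that many clients do not retry in lockstep and re-overload the server.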
Anti-overload techniques
To partially overcome load limits and to prevent overload, most popular websites use common techniques like the following:
Tuning OS parameters for hardware capabilities and usage.
Tuning web server(s) parameters to improve their security and performance.
Deploying web cache techniques (not only for static content but, whenever possible, for dynamic content too).
Managing network traffic, by using:
Firewalls to block unwanted traffic coming from bad IP sources or having bad patterns;
HTTP traffic managers to drop, redirect or rewrite requests having bad HTTP patterns;
Bandwidth management and traffic shaping, in order to smooth down peaks in network usage.
Using different domain names, IP addresses and computers to serve different kinds of content (static and dynamic); the aim is to separate big or huge files (download.*, a domain that might also be served by a CDN) from small and medium-sized files (static.*) and from the main dynamic site (www.*, where some content may be stored in a back-end database). The idea is to serve big or huge (over 10–1000 MB) files efficiently (perhaps throttling downloads) and to fully cache small and medium-sized files, without affecting the performance of the dynamic site under heavy load, by using different settings for each group of web server computers, e.g.:
https://download.example.com
https://static.example.com
https://www.example.com
Using many web servers (computers) that are grouped together behind a load balancer so that they act or are seen as one big web server.
Adding more hardware resources (e.g. RAM, fast disks) to each computer.
Using more efficient computer programs for web servers (see also: software efficiency).
Using the most efficient Web Server Gateway Interface to process dynamic requests (spawning one or more external programs every time a dynamic page is retrieved kills performance).
Using other programming techniques and workarounds, especially if dynamic content is involved, to speed up HTTP responses (e.g. by avoiding dynamic calls to retrieve objects, such as style sheets, images and scripts, that never change or change very rarely: copying that content to static files once and then keeping it synchronized with the dynamic content).
Using the latest efficient versions of HTTP (e.g. beyond common HTTP/1.1, also enabling HTTP/2 and maybe HTTP/3, whenever the available web server software has reliable support for the latter two protocols), in order to greatly reduce the number of TCP/IP connections started by each client and the size of the data exchanged (because of more compact HTTP header representation and possibly data compression).
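The bandwidth management and traffic shaping mentioned above are often implemented with a token bucket: tokens refill at a fixed rate and each request (or byte) consumes one, so short bursts are allowed while the sustained rate is capped. A minimal sketch (class name and parameters are illustrative):

```python
import time


class TokenBucket:
    """A minimal token-bucket limiter: tokens refill at `rate` per second
    up to `capacity`; each request consumes one token. Requests that find
    the bucket empty are rejected (a real shaper might queue them)."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last_refill = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last_refill) * self.rate,
        )
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The same idea, applied per client IP or per connection and counting bytes instead of requests, underlies the rate-limiting and shaping features of production servers and firewalls.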
Caveats about using HTTP/2 and HTTP/3 protocols
Market share
Below are the latest statistics on the market share of all sites of the top web servers on the Internet, according to Netcraft.
NOTE: (*) percentages are rounded to integer numbers because their decimal values are not publicly reported by the source page (only rounded values are reported in the graph).
Apache, IIS and Nginx are the most used web servers on the World Wide Web.
See also
Server (computing)
Application server
Comparison of web server software
HTTP server (core part of a web server program that serves HTTP requests)
HTTP compression
Web application
Open source web application
List of AMP packages
Variant object
Virtual hosting
Web hosting service
Web container
Web proxy
Web service
Standard Web Server Gateway Interfaces used for dynamic content:
CGI Common Gateway Interface
SCGI Simple Common Gateway Interface
FastCGI Fast Common Gateway Interface
A few other Web Server Interfaces (server- or programming-language-specific) used for dynamic content:
SSI Server Side Includes (rarely used): static HTML documents containing SSI directives are interpreted by server software to include small dynamic data on the fly when pages are served, e.g. date and time, contents of other static files, etc.
SAPI Server Application Programming Interface:
ISAPI Internet Server Application Programming Interface
NSAPI Netscape Server Application Programming Interface
PSGI Perl Web Server Gateway Interface
WSGI Python Web Server Gateway Interface
Rack Rack Web Server Gateway Interface
JSGI JavaScript Web Server Gateway Interface
Java Servlet, JavaServer Pages
Active Server Pages, ASP.NET
References
External links
Mozilla: what is a web server?
Netcraft: news about web server survey
Servers (computing)
Web server software
Website management
Web development
World Wide Web
English inventions
Glass–Steagall in post-financial crisis reform debate

Following the financial crisis of 2007–08, legislators unsuccessfully tried to reinstate Glass–Steagall Sections 20 and 32 as part of the Dodd–Frank Wall Street Reform and Consumer Protection Act. Currently, bills are pending in the United States Congress that would revise banking law regulation based on Glass–Steagall inspired principles. Both in the United States and elsewhere, banking reforms have been proposed that also refer to Glass–Steagall principles. These proposals raise issues that were addressed during the long Glass–Steagall debate in the United States, including issues of "ring fencing" commercial banking operations and "narrow banking" proposals that would sharply reduce the permitted activities of commercial banks.
Please see the main article, Glass–Steagall in post-financial crisis reform debate, for information about the following topics:
Failed 2009-10 efforts to restore Glass–Steagall Sections 20 and 32 as part of Dodd–Frank
Post-2010 efforts to enact Glass–Steagall inspired financial reform legislation
Volcker Rule ban on proprietary trading as Glass–Steagall lite
Further financial reform proposals that refer to Glass–Steagall
UK and EU "ring fencing" proposals
Similar issues debated in connection with Glass–Steagall and "firewalls"
Limited purpose banking and narrow banking
Wholesale financial institutions in Glass–Steagall reform debate
Glass–Steagall references in reform proposal debate
There have been several efforts or appeals in the United States to reinstate repealed sections of the Glass–Steagall Act following the financial crisis of 2007–08, as well as efforts elsewhere to adopt similar financial reforms.
Efforts to restore Sections 20 and 32 as part of Dodd–Frank
During the 2009 United States House of Representatives consideration of H.R. 4173, the bill that became the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010, Representative Maurice Hinchey (D-NY) proposed an amendment to the bill that would have reenacted Glass–Steagall Sections 20 and 32, which had been repealed by the 1999 Gramm–Leach–Bliley Act (GLBA), and also prohibited bank insurance activities. The amendment was not voted on by the House.
On December 16, 2009, Senators John McCain (R-AZ) and Maria Cantwell (D-WA) introduced in the United States Senate the "Banking Integrity Act of 2009" (S.2886), which would have reinstated Glass–Steagall Sections 20 and 32, but was not voted on by the Senate.
Before the Senate acted on its version of what became the Dodd–Frank Act, the Congressional Research Service issued a report describing securities activities banks and their affiliates had conducted before the GLBA. The Report stated Glass–Steagall had "imperfectly separated, to a certain degree" commercial and investment banking and described the extensive securities activities the Federal Reserve Board had authorized for "Section 20 affiliates" since the 1980s.
The Obama Administration has been criticized for opposing Glass–Steagall reenactment. In 2009, Treasury Secretary Timothy Geithner testified to the Joint Economic Committee that he opposed reenacting Glass–Steagall and that he did not believe "the end of Glass–Steagall played a significant role" in causing the financial crisis.
Post-2010 efforts
On April 12, 2011, Representative Marcy Kaptur (D-OH) introduced in the House the "Return to Prudent Banking Act of 2011" (H.R. 129), which would have (1) amended the Federal Deposit Insurance Act to add prohibitions on FDIC insured bank affiliations instead of reenacting the affiliation restrictions in Glass–Steagall Sections 20 and 32, (2) directed federal banking regulators and courts to interpret these affiliation provisions and Glass–Steagall Sections 16 and 21 in accordance with the Supreme Court decision in Investment Company Institute v. Camp, and (3) repealed various GLBA changes to the Bank Holding Company Act. The bill was referred to a House subcommittee but was not acted upon further before the 112th Congress adjourned.
On May 16, 2013, Senator Tom Harkin (D-IA) introduced S. 985 to restore the original Glass-Steagall Act, on the 80th anniversary of the original act. On July 13, 2013 Senator Elizabeth Warren (D-MA) introduced alongside John McCain (R-AZ), Maria Cantwell (D-WA), and Angus King (I-ME) the 21st Century Glass–Steagall Act (S.1282). Warren wrote in a press release: "The 21st Century Glass-Steagall Act will reestablish a wall between commercial and investment banking, make our financial system more stable and secure, and protect American families." Instead of restoring repealed Glass–Steagall sections 20 and 32, the bill would
separate traditional banks that offer savings and checking accounts insured by the Federal Deposit Insurance Corporation from riskier financial services like investment banking, swaps dealings, hedge funds and private equity activities, as well as from structured and synthetic products, and other products, that did not exist when Glass–Steagall was originally passed.
define the "business of banking" to prevent national banks from engaging in risky activities and bar non-banking activities from being treated as "closely related" to banking. These restrictions would counter regulatory loopholes for risky activities, because the Office of the Comptroller of the Currency and the Federal Reserve had used these terms to allow traditional banks and bank-holding companies to engage in high-risk activities.
address the issue of too big to fail: by separating depository institutions from riskier activities, large financial institutions would shrink in size, become smaller and safer, and would not be able to rely on federal deposit insurance as a safety net for their risky activities. Although some financial institutions may still be large, the implicit government guarantee of a bailout would be reduced because such institutions would no longer be intertwined with depository institutions.
institute a five-year transition period and penalties for any violation.
Many of her colleagues remain opposed, such as the Senators from Delaware, home to the corporate offices of many major banks.
Volcker rule as "Glass–Steagall lite"
The Dodd–Frank Act included the Volcker Rule, which among other things limited proprietary trading by banks and their affiliates. This proprietary trading ban will generally prevent commercial banks and their affiliates from acquiring non-governmental securities with the intention of selling those securities for a profit in the "near term." Some have described the Volcker Rule, particularly its proprietary trading ban, as "Glass–Steagall lite."
As described in the discussion of Glass–Steagall's prohibitions on dealing in and underwriting or distributing securities, Glass–Steagall restricted commercial bank "dealing" in, not "trading" of, non-government securities the bank was permitted to purchase as "investment securities." After the GLBA became law, Glass–Steagall Section 16 continued to restrict bank securities purchases. The GLBA, however, expanded the list of "bank-eligible" securities to permit banks to buy, underwrite, and deal in municipal revenue bonds, not only "full faith and credit" government bonds.
The Volcker Rule permits "market making" and other "dealer" activities in non-government securities as services for customers. Glass–Steagall Section 16 prohibits banks from being a "market maker" or otherwise "dealing" in non-government (i.e., "bank-ineligible") securities. Glass–Steagall Section 16 permits a bank to purchase and sell (i.e., permits "trading") for a bank's own account non-government securities that the OCC approves as "investment securities." The Volcker Rule will prohibit such "proprietary trading" of non-government securities.
Before and after the late-2000s financial crisis, banking regulators worried that banks were incorrectly reporting non-traded assets as held in their "trading account" because of lower regulatory capital requirements for assets held in a "trading account." Under the Volcker Rule, U.S. banking regulators have proposed that banks and their affiliates be prohibited from holding any asset (other than government securities and other listed exceptions) as a "trading position."
Senators Jeff Merkley (D-OR) and Carl Levin (D-MI) have written that "proprietary trading losses" played "a central role in bringing the financial system to its knees." They wrote that the Volcker Rule's proprietary trading ban contained in statutory language they proposed is a "modern Glass–Steagall" because Glass–Steagall was both "over-inclusive" (in prohibiting some "truly client-oriented activities that could be managed by developments in securities and banking law") and "under-inclusive" in failing to cover derivatives trading.
In 2002, Arthur Wilmarth wrote that from 1990 to 1997 the nine U.S. banks with the greatest securities activities held more than 20% of their assets as trading assets. By 1997, 40% of J.P. Morgan's revenue was from trading. A 1995 study by the federal banking regulators of commercial bank trading activity from June 30, 1984, to June 30, 1994, concluded that "trading activities are an increasingly important source of revenue for banks" and that "[n]otwithstanding the numerous press reports that focus on negative events, the major commercial banks have experienced long-term success in serving customers and generating revenues with these activities." In reporting the study results, the American Banker described "proprietary trading" as "basically securities trading not connected to customer-related bank activities" and summarized the study as finding that "proprietary trading has been getting a bad rap."
Paul Volcker supported the Volcker Rule prohibition on proprietary trading as part of bringing commercial banks back to "concentrating on continuing customer interest." As described in the article Decline of the Glass–Steagall Act, Volcker had long testified to Congress in support of repealing Glass–Steagall Sections 20 and 32. In 2010 he explained that he understood Glass–Steagall as preventing banks from being principally engaged in underwriting and dealing in corporate securities. Volcker stated that with securitization and other developments he believes it is a proper bank function to underwrite corporate securities "as serving a legitimate customer need." He, therefore, did not believe "repeal of Glass–Steagall was terrible" but that Congress "should have thought about what they replace it with." Volcker's criticism was that Congress "didn't replace it with other restrictions."
Separate from its proprietary trading ban, the Volcker Rule restricts bank and affiliate sponsorship and ownership of hedge funds and private equity funds. The GLBA amended the Bank Holding Company Act to permit "merchant banking" investments by bank affiliates subject to various restrictions. It also authorized the Treasury Department and Federal Reserve Board to permit such merchant banking activities by direct bank subsidiaries ("financial subsidiaries") after five years, but they have not provided such permission. This was not a Glass–Steagall change but a change to the Bank Holding Company Act, which previously limited the size of investments bank affiliates could make in a company engaged in activities not "closely related to banking." Such merchant banking investments may be made through private equity funds. The Volcker Rule will affect the ability of bank affiliates to make such investments.
Other reform proposals that have been compared to Glass–Steagall
Both in the US and elsewhere, a number of reform proposals have been put forward that bear some resemblance to Glass–Steagall.
"Ring fencing" proposals
In the UK, the Independent Commission on Banking's (ICB) proposal to "ring fence" retail and small business commercial banking from investment banking seeks to isolate the "retail banking" functions of a banking firm within a separate corporation that would not be affected by the failure of the overall firm so long as the "ring fenced" retail bank itself remained solvent. Bank of England Governor Mervyn King expressed concern the European Commission could block implementation of the ICB proposal as a violation of Commission standards. Although Michel Barnier, European Union internal market Commissioner, proposed limits on capital requirements for banks that could have hindered the UK ring fencing proposal and indicated support for the French and German position against breaking up banking groups, in November 2011 he announced an "expert commission" would "study the mandatory separation of risky investment banking activities from traditional retail lenders."
In 2016 it was reported that Bank of England "would press ahead with plans for banks to ringfence their retail operations by 2019".
On October 2, 2012, the EU committee appointed to study the issue recommended, through its Liikanen report, a form of "ring fencing" similar to the proposal in the United Kingdom. In April 2013 the Bank for International Settlements issued a working paper comparing the Volcker Rule, the ICB proposals, the Liikanen report proposals, and other international proposals for bank structural reform.
Debate on "firewalls"
Congressional and bank regulator efforts to "repeal", "reform" or apply Glass–Steagall were based on isolating a commercial banking firm's expanded securities activities in a separately capitalized bank affiliate. Much of the debate concerned whether such affiliates could be owned by a bank (as with "operating subsidiaries" in the 1990s) or would be bank holding company subsidiaries outside the chain of bank ownership. In either case, "firewalls" were intended to isolate the bank from the affiliate.
Banking regulators and commentators debated whether "firewalls" could truly separate a bank from its affiliate in a crisis and often cited the early 1980s' statement by then Citicorp CEO Walter Wriston that "it is inconceivable that any major bank would walk away from any subsidiary of its holding company." Alan Greenspan and Paul Volcker testified to Congress that firewalls so strong that they truly separated different businesses would eliminate the benefits of combining the two activities. Both testified that in a crisis the owners of the overall firm would inevitably find ways to use the assets of any solvent part of the firm to assist the troubled part. Thus, "firewalls" sufficient to prevent a bank from assisting its affiliate would eliminate the purpose of the combination, but "workable" firewalls would be insufficient to prevent such assistance. Both Volcker and Greenspan proposed that the solution was adequate supervision, including sufficient capital and other requirements.
In 1998 and 1999 Greenspan testified to Congress in opposition to the Clinton Administration proposal to permit national bank subsidiaries to engage in expanded securities and other activities. He argued such direct bank subsidiary activities would be "financed by the sovereign credit of the United States" through the "federal safety net" for banks, despite the Treasury Department's assurance that "firewalls" between the bank and its operating subsidiary would prevent the expansion of the "federal safety net."
As described in commentator response to Section 20 and 32 repeal, Gary Stern, Arthur Wilmarth, and others questioned whether either operating subsidiaries or separate holding company affiliates could be isolated from an affiliated bank in a financial crisis and feared that the "too big to fail" doctrine gave competitive benefits to banking firms entering the securities or insurance business through either structure. Greenspan did not deny that the government might act to "manage an orderly liquidation" of a large financial "intermediary" in a crisis, but he suggested that only insured creditors would be fully repaid, that shareholders would be unprotected, and that uninsured creditors would receive less than full payment through a discount or "haircut." Commentators pointed to the 1990 failure of Drexel Burnham Lambert as suggesting "too-big-to-fail" considerations need not force a government rescue of creditors to a failing investment bank or other nonbank, although Greenspan had pointed to that experience as questioning the ability of firewalls to isolate one part of a financial firm from the rest.
After the late-2000s financial crisis commentators noted that the Federal Reserve Board used its power to grant exemptions from Federal Reserve Act Section 23A (part of the 1933 Banking Act and the "principal statutory" firewall between banks and their affiliates) to permit banks to "rescue" various affiliates or bank sponsored participants in the "shadow banking system" as part of a general effort to restore liquidity in financial markets. Section 23A generally prevented banks from funding securities purchases by their affiliates before the financial crisis (i.e., prevented the affiliates from "using insured deposits to purchase risky investments") by "limiting the ability of depository institutions to transfer to affiliates the subsidy arising from the institutions' access to the federal safety net," but the Federal Reserve Board's exemptions allowed banks to transfer such investments from the shadow banking market to FDIC insured banks during the crisis. The Federal Reserve Board's General Counsel has defended these actions by arguing that all the Section 23A exemptions required that bank funding of affiliate or shadow banking investments be "fully collateralized" on a daily basis with qualifying collateral, so that the bank was "very much protected," and that in the end the exemptions did not prove very "useful."
The ICB proposes to erect a barrier between the "ring-fenced bank" and its "wider corporate group" that will permit banking regulators to isolate the ring-fenced bank "from the rest of the group in a matter of days and continue the provision of its services without providing solvency support."
Limited purpose banking and narrow banking
Laurence Kotlikoff was disappointed the ICB did not adopt the "limited purpose banking" he proposed to the ICB. This would require a bank to operate like a mutual fund in repaying "deposits" based on the current market value of the bank's assets. Kotlikoff argues there will always be financial crises if banks lend deposits but are required to repay the full amount of those deposits "on demand." Kotlikoff would only permit a bank (restructured as a mutual fund) to promise payment of deposits at "par" (i.e., $1 for every $1 deposited) if the bank (i.e., mutual fund) held 100% of all deposits in cash as a trustee.
As Kotlikoff notes, in 1987 Robert Litan proposed "narrow banking." Litan suggested commercial banking firms be freed from Glass–Steagall limits (and other activity restrictions) so long as they isolated FDIC insured deposits in a "narrow bank" that was only permitted to invest those deposits in "safe securities" approved by the FDIC. In 1995 Arthur Wilmarth proposed applying Litan's "narrow bank" proposal to U.S. banks ("global banks") that had become heavily involved in "capital markets" activities through "Section 20 affiliates," derivatives, and other activities. Under Wilmarth's proposal (which he repeated in 2001 after the GLBA became law) only banks that limited their activities to taking deposits and making commercial loans would be permitted to make commercial loans with FDIC insured deposits. Wilmarth expected only "community banks" specialized in making consumer and small business loans would continue to operate as such traditional banks. The large "global banks" would fund their lending through the capital markets just like investment banks and other "shadow banking" lenders.
Narrow banking and wholesale financial institutions
The Litan and Wilmarth proposals were very different from the Kotlikoff limited purpose banking proposal in that they would only limit the activities of companies that owned FDIC insured banks. Whereas Kotlikoff would require a company to hold the full amount of its "demand deposits" in cash, Litan and Wilmarth would permit companies to issue such "demand deposits" without restriction, so long as the demand deposits were not FDIC insured. Congress considered this type of proposal when it debated the repeal of Glass-Steagall sections 20 and 32.
In 1997 the Clinton Administration proposed that "wholesale financial institutions" (known as "woofies") be authorized to be members of the Federal Reserve System but not "banks" under the Bank Holding Company Act because they would own non-FDIC insured banks that would only take deposits of $100,000 or more. Whereas "narrow banks" would be FDIC insured, but only invest in FDIC approved "safe securities," "woofies" would be free to lend, purchase securities, and make other investments, because they would not hold any FDIC insured deposits. The proposal was intended to permit securities firms to continue to maintain ownership of commercial firms while gaining access to the Federal Reserve's "payment system" and "discount window", so long as the firm did not take FDIC insured deposits.
"Woofies" were not authorized by the GLBA because of a dispute between Senator Phil Gramm and the Clinton Administration over the application of the Community Reinvestment Act (CRA) to "woofies." In their October 1999 compromise on CRA provisions in the GLBA, the Clinton Administration agreed with Gramm that CRA would not apply to woofies so long as only a company that did not then own any FDIC insured depository institution would be permitted to qualify as a "wholesale financial institution." The Clinton Administration wanted this restriction to prevent existing bank holding companies from disposing of their FDIC insured banks to qualify as "woofies," which could reduce the deposit base subject to CRA requirements. When Chase and J.P. Morgan lobbied to change the final legislation to permit them to become woofies, they complained only Goldman Sachs and "a few others" could qualify as a woofie. When negotiators decided they could not resolve the dispute, permission for woofies was eliminated from the final GLBA.
"Woofies" were similar to the "global bank" structure suggested by Arthur Wilmarth because they would not use FDIC insured deposits to make commercial loans. They would, however, be subject to Federal Reserve supervision unlike lenders in the unsupervised "shadow banking" system. Because woofies would have had access to the Federal Reserve discount window and payments service, critics (including the Independent Bankers Association of America and Paul Volcker) opposed woofies (and a similar 1996 proposal by Representative James A. Leach) for providing unfair competition to banks. Although October 1999 press reports suggested bank holding companies were interested in becoming woofies, the New York Times reported in July 1999 that banking and securities firms had lost interest in becoming woofies.
Shadow banking in financial reform proposals
The ICB Report rejected "narrow banking" in part because it would lead to more credit (and all credit during times of stress) being provided by a "less regulated sector." In 1993 Jane D'Arista and Tom Schlesinger noted that the "parallel banking system" had grown because it did not incur the regulatory costs of commercial banks. They proposed to equalize the cost by establishing "uniform regulation" of banks and the lenders and investors in the parallel banking system. As with Kotlikoff's "limited purpose banking" proposal, only investment pools funded 100% from equity interests would remain unregulated as banks. Although D'Arista and Schlesinger acknowledged the regulation of banks and of the parallel banking system would end up only being "comparable," their goal was to eliminate so far as possible the competitive advantages of the "parallel" or "shadow" banking market.
Many commentators have argued that the failure to regulate the shadow banking market was a primary cause of the financial crisis. There is general agreement that the crisis emerged in the shadow banking markets not in the traditional banking market. As described in the article Decline of the Glass–Steagall Act, Helen Garten had identified the "consumerization" of banking regulation as producing "a largely unregulated, sophisticated wholesale market," which created the risk of the "underproduction of regulation" of that market.
Laurence Kotlikoff's "limited purpose banking" proposal rejects bank regulation (based on rules and supervision to ensure "safety and soundness") and replaces it with a prohibition on any company operating like a traditional "bank." All limited liability financial companies (not only today's "banks") that receive money from the public for investment or "lending" and that issue promises to pay amounts in the future (whether as insurance companies, hedge funds, securities firms, or otherwise) could only issue obligations to repay amounts equal to the value of their assets. All "depositors" in or "lenders" to such companies would become "investors" (as in a mutual fund) with the right to receive the full return on the investments made by the companies (minus fees) and obligated to bear the full loss on those investments.
Thomas Hoenig rejects both "limited purpose banking" and the proposal to regulate shadow banking as part of the banking system. Hoenig argues it is not necessary to regulate "shadow banking system" lenders as banks if those lenders are prohibited from issuing liabilities that function like bank demand deposits. He suggests that requiring money market funds to redeem shares at the funds' fluctuating daily net asset values would prevent those funds from functioning like bank checking accounts and that eliminating special Bankruptcy Code treatment for repurchase agreements would delay repayment of those transactions in a bankruptcy and thereby end their treatment as "cash equivalents" when the "repo" was funding illiquid, long term securities. By limiting the ability of "shadow banks" to compete with traditional banks in creating "money-like" instruments, Hoenig hopes to better assure that the safety net is not ultimately called upon to "bail them [i.e., shadow banks such as Bear Stearns and AIG during the financial crisis] out in a crisis." He proposes to deal with actual commercial banks by imposing "Glass–Steagall-type boundaries" so that banks "that have access to the safety net should be restricted to certain core activities that the safety net was intended to protect—making loans and taking deposits—and related activities consistent with the presence of the safety net."
Glass–Steagall references in financial reform proposals
Although the UK's ICB and the commentators presenting the proposals described above to modify banks or banking regulation address issues beyond the scope of the Glass–Steagall separation of commercial and investment banking, each specifically examines Glass–Steagall. The ICB stated Glass–Steagall had been "undermined in part by the development of derivatives." The ICB also argued that the development before 1999 of "the world's leading investment banks out of the US despite Glass–Steagall in place at the time" should caution against assuming the "activity restrictions" it recommended in its "ring fencing" proposal would hinder UK investment banks from competing internationally.
Boston University economist Laurence J. Kotlikoff suggests commercial banks only became involved with CDOs, SIVs, and other "risky products" after Glass–Steagall was "repealed," but he rejects Glass–Steagall reinstatement (after suggesting Paul Volcker favors it) as a "non-starter" because it would give the "nonbank/shadow bank/investment bank industry" a "competitive advantage" without requiring it to pay for the "implicit" "lender-of-last-resort" protection it receives from the government. Robert Litan and Arthur Wilmarth presented their "narrow bank" proposals as a basis for eliminating Glass–Steagall (and other) restrictions on bank affiliates. Writing in 1993, Jane D'Arista and Tom Schlesinger noted that "the ongoing integration of financial industry activities makes it increasingly difficult to separate banking and securities operations meaningfully" but rejected Glass–Steagall repeal because "the separation of banking and securities functions is a proven, least-cost method of preventing the problems of one financial sector from spilling over into the other" (which they stated was "most recently demonstrated in the October 1987 market crash.")
During the Senate debate of the bill that became the Dodd–Frank Act, Thomas Hoenig wrote Senators Maria Cantwell and John McCain (the co-sponsors of legislation to reinstate Glass–Steagall Sections 20 and 32) supporting a "substantive debate" on "the unintended consequences of leaving investment banking commingled with commercial banking" and reiterating that he had "long supported" reinstating "Glass–Steagall-type laws" to separate "higher risk, often more leveraged, activities of investment banks" from commercial banking. As described above in post 2010 efforts to enact Glass–Steagall inspired financial reform legislation Elizabeth Warren and other Senators have joined Cantwell and McCain in their effort to legislate Glass–Steagall inspired restrictions. Hoenig agreed with Paul Volcker, however, that "financial market developments" had caused underwriting corporate bonds (the prohibition of which Volcker described as the purpose of Glass–Steagall), and also underwriting of corporate equity, revenue bonds, and "high quality asset-backed securities," to be "natural extensions of commercial banking." Instead of reinstating Glass–Steagall prohibitions on such underwriting, Hoenig proposed restoring "the principles underlying the separation of commercial and investment banking firms."
In Mainland Europe, some scholars have suggested Glass–Steagall should be a model for any in-depth reform of bank regulation: notably in France where SFAF and World Pensions Council (WPC) banking experts have argued that "a new Glass–Steagall Act" should be viewed within the broader context of separation of powers in European Union law.
This perspective has gained ground after the unraveling of the Libor scandal in July 2012, with mainstream opinion leaders such as the Financial Times editorialists calling for the adoption of an EU-wide "Glass Steagall II".
On July 25, 2012, former Citigroup Chairman and CEO Sandy Weill, considered one of the driving forces behind the considerable financial deregulation and "mega-mergers" of the 1990s, surprised financial analysts in Europe and North America by "calling for splitting up the commercial banks from the investment banks. In effect, he says: bring back the Glass–Steagall Act of 1933 which led to half a century, free of financial crises." However, Weill reversed his position a year later, arguing that "big banks don't have to be split if the 'right regulation' is in place," and instead that "[banks] should decide on their own to 'split if they figure that's the best way that they can provide their services.'"
See also
Gramm–Leach–Bliley Act
Securities regulation in the United States
References
See also the References list (citations) in the main article, Glass–Steagall Act.
21st century in American law
73rd United States Congress
Federal Deposit Insurance Corporation
United States federal banking legislation
United States repealed legislation
Financial regulation in the United States
Separation of investment and retail banking |
3093382 | https://en.wikipedia.org/wiki/Program%20information%20file | Program information file | A program information file (PIF) defines how a given DOS program should be run in a multi-tasking environment, especially in order to avoid giving it unnecessary resources which could remain available to other programs. TopView was the originator of PIFs; they were then inherited and extended by DESQview and Microsoft Windows, where they are most often seen. PIFs are seldom used today in software due to the absence of DOS applications.
Basic overview
The PIF file originally contained only one block of data storing the parameters needed to run under TopView. These included fields like an ASCII string for the window title, the maximum and minimum amount of RAM needed, and bitmaps for switches like whether or not the window should be closed when the program exits.
When the system was adapted for use under Windows, the developers faced the problem that there were additional switches that did not apply to TopView. Instead of simply adding the new switches to the end of the file, they instead re-imagined the file as a database file containing any number of entries. In theory the file consisted of a number of header areas describing what operating system should read the section, and an offset to the next section. Systems would read down the list until they found the most appropriate one.
However, this left a problem with backward compatibility. If the file started with a header, even if it was for the original switches, TopView and DESQview would not be able to read it properly. The file was thus re-arranged with the first header appearing after the initial data, which left the first 253 bytes of the file in the same format as before.
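The header-chain layout described above can be sketched with a small walker. This is illustrative, not authoritative: the assumed record layout (a 16-byte ASCII section name followed by three 16-bit offsets/lengths) and the 0xFFFF end-of-chain sentinel follow common descriptions of the extended Windows PIF format, and the synthetic data in the usage example is invented.

```python
import struct

def walk_pif_sections(data, first_header_offset):
    """Walk the linked chain of PIF extension headers.

    Assumed header layout (22 bytes, little-endian):
      16-byte ASCII section name (NUL/space padded),
      uint16 file offset of the next header (0xFFFF = end of chain),
      uint16 file offset of this section's data,
      uint16 length of this section's data.
    """
    off = first_header_offset
    while off != 0xFFFF and off + 22 <= len(data):
        name, nxt, doff, dlen = struct.unpack_from('<16sHHH', data, off)
        yield name.rstrip(b'\x00 ').decode('ascii', 'replace'), data[doff:doff + dlen]
        off = nxt

# Synthetic two-section "file" for illustration (not a real PIF).
blob = bytearray(128)
struct.pack_into('<16sHHH', blob, 0, b'WINDOWS 386', 40, 80, 4)   # first header
struct.pack_into('<16sHHH', blob, 40, b'LAST', 0xFFFF, 84, 2)     # last header
blob[80:84] = b'ABCD'
blob[84:86] = b'XY'
sections = list(walk_pif_sections(bytes(blob), 0))
```

A TopView-era reader that knows nothing of the headers simply reads the fixed-size legacy block at the start of the file and ignores the chain entirely.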
Notes
Creating a program information file for a DOS-based program creates a shortcut to the program executable. All the settings saved in the PIF are contained in the shortcut.
Although a file in PIF format does not contain any executable code (it lacks the executable files' magic number "MZ"), Microsoft Windows handles all files with (pseudo-)executable extensions in the same manner: all .COM, .EXE, and .PIF files are analyzed by the ShellExecute function and run according to their content rather than their extension, meaning a file with the PIF extension can be used to transmit computer viruses.
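A minimal content sniff of the kind described, classifying by magic number rather than extension, might look like this (a toy illustration only; Windows' actual ShellExecute dispatch logic is far more involved):

```python
def classify_dos_binary(data):
    """Classify a DOS-era binary by content, ignoring its extension.

    'MZ' (or the historical byte-swapped 'ZM') marks an EXE header;
    anything else is treated here as a flat COM-style image. A toy
    sniffer for illustration -- real Windows dispatch is more involved.
    """
    if data[:2] in (b'MZ', b'ZM'):
        return 'exe'
    return 'com-style flat binary'
```

A renamed EXE would still be classified as an EXE here, which mirrors why a malicious payload with a .PIF extension can still run.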
The concept of program information files was also used under Digital Research operating systems such as Concurrent DOS, Multiuser DOS, System Manager and REAL/32. Using the PIFED command, the necessary program information got directly embedded into the .EXE or .COM executable file.
See also
Compatibility mode
References
The PIF format in various Windows versions
External links
Windows 98 Sample Program Information (.pif) Files on Microsoft Support
Dr. Dobb's Undocumented Corner – The PIF File Format, or, TopView (sort of) Lives!
Windows architecture
Executable file formats |
55008695 | https://en.wikipedia.org/wiki/AT%26T%20Internet | AT&T Internet | AT&T Internet is an AT&T brand of broadband internet service. Previously, AT&T Internet was branded as U-verse Internet and bundled with U-verse TV, which was spun off into the newly independent DirecTV in 2021. AT&T Internet plans powered by fiber-optic cable use the AT&T Fiber brand.
Services
AT&T delivers most internet service over a fiber-to-the-node (FTTN) or fiber-to-the-premises (FTTP) communications network. In the more common FTTN deployment, fiber-optic connections carry all data (internet, IPTV, and voice over IP) between the service provider and a distribution node. The remaining run from the node to the network interface device in the customer's home uses a copper-wire current loop that is traditionally part of the PSTN (public switched telephone network). In more recently constructed housing developments, AT&T uses an FTTP deployment—they run fiber-optic cable from their DSLAM all the way to an optical network terminal in the customer's home.
In areas where AT&T deploys internet through FTTN, they use high-speed digital subscriber lines with ADSL2+ or VDSL technology. Service offerings depend on the customer's distance to an available port in the distribution node, or the central office.
In so-called "fringe" areas, AT&T provides High Speed Internet through IP-DSLAM ADSL2+, which does not require pair bonding or a VRAD and operates at slower bitrates than pair-bonded VDSL2. In practice, VRADs are not installed in many older urban neighborhoods as AT&T prepares to abandon the fixed-line broadband market.
AT&T Internet provides internet access to computers connected on-premises via Ethernet cabling or Wi-Fi from the included residential gateway or DSL modem.
AT&T Fiber, also marketed as AT&T Internet powered by Fiber, provides fiber-to-the-home (FTTH) service in select markets. An early form of the service launched in the fall of 2013 branded as GigaPower and bundled with U-verse TV as "U-verse with GigaPower". It launched first in Austin, Texas, with 300 Mbit/s speeds; top download speeds have since increased to 1 Gbit/s (1000 Mbit/s). By 2019, AT&T's 100% fiber network powered by AT&T Fiber was live in 84 metro areas.
AT&T announced Internet 18 service (then called "Max 18") in November 2008, and Internet 24 (then called "Max Turbo") was announced in December 2009. Basic, Express, Pro, Elite and Max (VDSL) are usually available for self-installation. Max (ADSL2+), Max Plus, and Max Turbo can be self-installed if only one jack is connected for DSL (through a splitter installed by a technician), or splitter-free if no landline shares the pair. Conditions where higher speeds are still attainable through filters or quality wiring to more than one jack occur less often.
AT&T announced Internet 45 service (formerly "Power") on August 26, 2013. Internet 45 required two conditioned line pairs (pair bond) and a Motorola NVG589 VDSL2+ Gateway. AT&T charges a service fee to condition and pair bond the lines and install a new gateway, plus additional monthly charges.
Equipment
Line equipment
AT&T Internet uses the Alcatel-Lucent 7330 or 7340 Intelligent Services Access Manager (ISAM) shelf, also called a video-ready access device (VRAD), deployed either in a central office (CO) or to a neighborhood serving area interface (SAI). These models are both composed of circuit boards providing service, which are fed by fiber. FTTN (fiber to the node) systems use model 7330, which uses existing copper wiring to customers' homes, leading to distance limitations from the VRAD cabinet to the customer's home. The 7330 ISAM is an internet protocol DSL access multiplexer that supports VDSL and ADSL protocols. FTTP (fiber to the premises) systems use model 7340, mostly in areas such as new neighborhoods or large housing developments, where AT&T chooses to run fiber to the household, removing the distance limitations of copper. The 7340 then connects to a serving area interface, which distributes service to homes in the neighborhood, via a dual strand fiber, which then splits into 32 customer fiber pairs. The fiber pairs typically lead to a customer's residence at the network interface device.
The VRAD typically connects upstream to an Alcatel-Lucent 7450 Ethernet service switch in the central office hub, then to the headend video hub office.
Customer equipment
AT&T provides the customer premises equipment (leased for a monthly fee, or purchased with a 1-year warranty), and includes a wireless router and modem, which they call a residential gateway (RG) or internet gateway. They also provide TV receivers made by Cisco (through Scientific Atlanta) and Arris (from the former Motorola cable division) (including standard receivers, wireless receivers, and DVR receivers).
Those eligible for triple play (TV, Internet, and Phone) will use a VDSL2 transport link which uses one of the following modems:
2Wire 3600 (Deprecated)
2Wire 3800 (Deprecated)
2Wire 3801
Pace 5031NV
2Wire iNID (which comes with the 2Wire i3812V for the outside unit, the iPSU (Intelligent Power Supply Unit) which powers the i3812V, and one or more i38HG for internet access via wireless or ethernet connectivity inside the customer premises) (Deprecated)
Arris NVG589
Arris NVG599
Pace 5268AC
Arris BGW210
Along with the modems above, those eligible for fiber-to-the-home may have additional routers that could be used:
Nokia BGW320 (integrated ONT utilizing an SFP transceiver to provide optics; also has a standard ONT port)
Those who are eligible for double play (Internet and Phone) only, and aren't serviced by fiber-to-the-home, will use an ADSL2+ transport type which uses one of the following modems:
2Wire 2701HGV-B (the model number must contain a "V", otherwise it will not function with the AT&T Internet platform) (Deprecated)
Motorola 2210-02-1ATT (the AT&T Internet version of the 2210 and is black; the silver version is for PPPoE and not 802.1x) (Deprecated)
Motorola NVG510
Pace 5168NV (only RG that can support VoIP on a 1.5 Mbit/s profile and support bonded ADSL2+)
Currently four devices support bonded pair: the 2Wire iNID, Arris NVG589 and NVG599, and Pace 5268AC. The Motorola NVG589 originally replaced the 2Wire iNID for all bonded pair installs. The NVG599 and 5268AC both have replaced the NVG589 and are used interchangeably. These three devices are capable of both ADSL2+ and VDSL.
All AT&T Internet transport types use 802.1x authentication. This means only equipment on AT&T's approved list works with the AT&T Internet service, as other (non-AT&T) equipment cannot authenticate with AT&T DSLAMs and GPONs. Another side-effect of AT&T Internet's authentication protocol is the lack of bridge mode support (unlike standard DSL that uses PPPoE authentication, which is easily bridgeable). At best, the 2Wire/Pace routers support DMZ+ mode, while the Motorola devices support IP Passthrough. AT&T allows residential and business customers to pay for static IP addresses, which they support on all AT&T approved equipment (including the 2Wire/Pace and Motorola routers).
When AT&T launched IP-DSL (ADSL2+, double play only), they installed connections with either the 2Wire 2701HGV-B or Motorola 2210 (pairing the latter with a Cisco Linksys E1000 for residential customers, or an EdgeMarc 250AEW for business customers). The 2Wire 2701HGV-B was limited to a top speed of 6Mbit/s, while the Motorola 2210 was capable of higher speeds. In later installations, AT&T standardized on the Motorola NVG510, phasing out the other routers for new service installation.
When AT&T introduced the Internet 45 tier in 2013, installations were initially done with the iNID. AT&T later standardized on the Motorola NVG589, which supports pair-bonding for both ADSL2+ and VDSL2. AT&T also uses the NVG589 in some installations where the customer otherwise is too far from a node for service. Additionally, it supports an internal battery for those who subscribe to AT&T Phone service for battery backup during power failures. AT&T no longer supplies the battery to customers for any residential service.
References
AT&T brands
AT&T subsidiaries
American companies established in 2016
Broadband |
11208600 | https://en.wikipedia.org/wiki/D-37C | D-37C | The D-37C (D37C) is the computer component of the all-inertial NS-17 Missile Guidance Set (MGS) for accurately navigating to its target thousands of miles away. The NS-17 MGS was used in the Minuteman II (LGM-30F) ICBM. The MGS, originally designed and produced by the Autonetics Division of North American Aviation, could store multiple preprogrammed targets in its internal memory.
Unlike other methods of navigation, inertial guidance does not rely on observations of land positions or the stars, radio or radar signals, or any other information from outside the vehicle. Instead, the inertial navigator provides the guidance information using gyroscopes that indicate direction and accelerometers that measure changes in speed and direction. A computer then uses this information to calculate the vehicle's position and guide it on its course. Enemies could not "jam" the system with false or confusing information.
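The integration the paragraph describes, accelerations accumulated into velocity and then into position, can be illustrated with a toy one-dimensional dead-reckoning loop (a sketch only; a real inertial navigator works in three dimensions with gyro-stabilized axes and error compensation):

```python
def dead_reckon(accel_samples, dt):
    """Integrate acceleration samples (m/s^2) into velocity and position
    using simple Euler integration over a fixed time step dt (seconds)."""
    v = x = 0.0
    for a in accel_samples:
        v += a * dt   # acceleration -> velocity
        x += v * dt   # velocity -> position
    return x, v

# 100 samples of constant 1 m/s^2 at 0.125 s steps (12.5 s total):
x, v = dead_reckon([1.0] * 100, 0.125)
```

Because no external signal is consumed, nothing outside the vehicle can jam or spoof this computation, which is the property the article highlights.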
The Ogden Air Logistics Center at Hill AFB has been Program Manager for the Minuteman ICBM family since January 1959. The base has had complete logistics management responsibilities for Minuteman and the rest of the ICBM fleet since July 1965.
The D-37C computer consists of four main sections: the memory, the central processing unit (CPU), and the input and output units. These sections are enclosed in one case. The memory is a two-sided, fixed-head disk which rotates at 6000 rpm. It contains 7222 words of 27 bits. Each word contains 24 data bits and three spacer bits not available to the programmer. The memory is arranged in 56 channels of 128 words each plus ten rapid access channels of one to sixteen words. The memory also includes the accumulators and instruction register.
The MM II missile was deployed with a D-37C disk computer. Autonetics also programmed functional simulators for flight program development and testing, and the code inserter verifier that was used at Wing headquarters to generate the codes to go into the airborne computer. It became necessary to verify not only that the flight program software was correct, but also that there was no code that would lead toward an unauthorized or accidental launch. TRW, Inc. continued its role of independent verification, first called verification and validation and later nuclear safety cross check analysis (NSCCA). Logicon RDA was selected to perform the NSCCA of the targeting and execution plan programs developed by TRW.
When MM III was developed, Autonetics generated the guidance equations that were programmed into the D37D computer, which contained a hybrid explicit guidance system for the first time. A new class of program was required by the Joint Strategic Targeting Planning Staff to select targets for the multiple warhead system. The Missile Application Programs were developed for these functions.
The next major update to the operational software was made under the Guidance Replacement Program. Autonetics (later acquired by The Boeing Co.) developed the necessary software for the new flight computer.
Functional description
This section was excerpted from the original document, "Minuteman" D-37C Digital Computer System Depot Overhaul. Autonetics, Division of North American Rockwell, Inc. Anaheim, California. FET-D-120-D37/4.
Control Unit
The control unit interprets and processes all machine functions and consists of a location counter, the instruction register, and the phase register.
Location Counter - The location counter determines the channel from which the next instruction is to be obtained.
Instruction Register - The instruction register holds the instruction to be executed by the computer. This instruction defines the type of operation to be performed such as add, subtract, etc.; specifies the location address of the operand when necessary and indicates the sector address of the next instruction.
Phase Register - The phase register consists of three flip-flops which may be set to one of eight possible states to indicate the phase of flight. It also serves as a selector switch to determine which group of voltage inputs are to be sampled and as an index register for a modify-flagged instruction. The state of the phase register is available as the stage reference outputs.
Arithmetic Unit
The arithmetic unit consists of three registers: the accumulator (A), lower accumulator (L), and the number register (N). Only the A and L registers are addressable.
Accumulator (A-register) - The accumulator serves as the main register of the computer and holds the results of all arithmetic operations. This register serves as an output register for telemetry and character outputs.
Lower Accumulator (L-register) - This register is used for certain arithmetic, input, logical operations or for rapid access storage.
Number Register (N-register) - This register is used by the logic of the computer during multiplication and division and is not addressable.
Input Unit
The discrete input lines generally serve as communication lines from external equipment. There are three sets of "on - off" type signals:
One set samples 24 input signals.
One set samples 19 external input signals and 5 flip-flops from within the computer.
One set samples 21 input signals, two flip-flops and the logical "or" of 7 discrete output signals.
Program Load - The main input for loading numerical data and instructions into the computer memory is a punched tape (paper or mylar). Information can be entered into the computer at a maximum rate of 800 five-bit codes per second from a photo-electric tape reader. Data can be entered manually from a keyboard if a computer manual control panel (CMPC) is available.
Detector - The detector input is an "on - off" type signal received from an external source and indicates the working status of a specified piece of external equipment. The detector input monitor can be "reset" by means of a special instruction.
Incremental - The incremental inputs are basically independent of program control and consist of seven resolver type, two variable incremental type, and one pulse type. These inputs are accumulated in the two four-word input buffer loops (V&R).
Voltage - The computer is capable of converting one of 32 dc voltage inputs into an 8-bit binary number under program control. Analog voltages are grouped into four sets of eight inputs each. The range is ±10 volts with an accuracy of ±200 mV.
Cable - Cable inputs are serial messages of up to 96 bits in length entered into one of four words of the C-loop. Maximum data rate is 1600 bits per second. Cable input operation is begun by executing the Enable Cable Input instruction and proceeds basically independent of program control.
Radio - Radio inputs are serial messages of unlimited length entered into one word of the C-loop. After 24 bits are accumulated, the information is transferred to channel MX Sector 054 and the loop is prepared to accept another 24 bits. Maximum input data rate is 100 bits per second. The operation is begun by an instruction and proceeds basically independent of program control.
External Reset - Master Reset (Mr), Enable Write (Ewc), Initiate Load (Fsc) for checkout only, Halt Prime (K'hc), Run Prime (K'rc), Single Cycle Prime (K'sc).
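The 8-bit voltage conversion described above can be sketched as a simple quantizer. The sign-plus-magnitude scaling below is an assumption for illustration; note that an 8-bit code over ±10 V gives a step of roughly 80 mV, comfortably inside the stated ±200 mV accuracy:

```python
def volts_to_code(v):
    """Quantize a voltage in the ±10 V range to an 8-bit signed code.
    The scaling (127 counts = full scale) is an assumed encoding,
    not the documented D-37C conversion."""
    v = max(-10.0, min(10.0, v))           # clamp to the input range
    return round(v / 10.0 * 127)

def code_to_volts(code):
    """Inverse mapping, useful for checking quantization error."""
    return code / 127 * 10.0
```

Round-tripping any in-range voltage stays within one quantization step (about 79 mV), well inside the ±200 mV specification.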
Output Unit
Discrete - The discrete outputs provide two independent sets of output lines (32 and 15) for a total of 47 "on - off" type signals. The outputs are modified under program control and are sent to equipment external to the computer.
Voltage - There are four dc voltage output lines available with each proportional to an 8-bit number including the sign. These lines are updated at the rate of 9.27 volts per 32 word times. The range is ±10 volts with an accuracy of ±200 mV.
Single Character - The single character output provides four-bit characters suitable for typewriter, tape punch or other similar output equipment. A parity check bit and two timing bits are issued automatically with each character.
Cable - The cable output is a serial message of up to 96 bits in length transmitted from the four-word C-loop. The maximum data rate is 1600 bits per second. The operation is begun by execution of the Enable Cable Output (ECO) instruction and proceeds basically independent of program control.
Binary - There are four pairs of outputs which can be used to control external equipment such as gyros. The output states are automatically updated under program control every 10 ms. The output is in the form of +1 or -1.
Telemetry - A timing signal is issued under program control which signifies that the accumulator contains information which is to be read by the external equipment receiving the timing signal.
Miscellaneous - These signals include Parity/Verify error signal, mode indication and stage reference.
Memory Unit
The D-37C computer memory consists of a rotating magnetic disk driven by a synchronous motor at 6000 rpm. Adjacent to the disk are two fixed head plates which house the read and write heads. The disk has a thin magnetic oxide coating on both sides for storing information. The disk is supported by air bearings generated by its own rotation. The disk is divided into tracks or channels of 128 words each for main memory. A total capacity of 7222 words may be contained in the 56 channels of 128 sectors, six 4-word loops, one 8-word loop, one 16-word loop and six 1-word loops.
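The stated figures are mutually consistent and can be checked with a little arithmetic: at 6000 rpm one revolution takes 10 ms, so a 128-word channel gives a word time of about 78 µs (matching the 78 µs add time in the specifications), and the average main-memory access is half a revolution, 5,000 µs. The capacity also sums to exactly 7222 words:

```python
RPM = 6000
WORDS_PER_CHANNEL = 128

rev_us = 60_000_000 / RPM                  # one revolution: 10,000 µs
word_time_us = rev_us / WORDS_PER_CHANNEL  # ~78.1 µs per word
avg_main_access_us = rev_us / 2            # half a revolution on average

# 56 main channels plus the rapid-access loops listed above:
capacity = 56 * 128 + 6 * 4 + 1 * 8 + 1 * 16 + 6 * 1
```

The same half-rotation reasoning explains why the short rapid-access loops are so much faster than the 128-word main channels: a word in an n-word loop recirculates every n word times.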
Programming
The computer uses a full 24-bit instruction word and data word. Data is represented in one of two fashions, as a 23-bit binary fraction (full word) or as a 10-bit fraction (split word). The two formats are shown in the figure. Instructions also have two formats, either flagged or unflagged as indicated in the figure. A list with all of the available instructions with numeric and mnemonic codes follows. For more information on programming see:
Kee, W. T. Programming Manual for the D-37C Computer. Anaheim, California, Autonetics, Division of North American Rockwell, Inc., 30 January 1965.
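The full-word data format, a sign bit followed by a 23-bit binary fraction, can be illustrated with a small fixed-point encoder/decoder. The exact sign convention of the D-37C is not reproduced here; two's complement is an assumption for illustration, and passing frac_bits=9 models a 10-bit split-word fraction:

```python
def encode_fraction(x, frac_bits=23):
    """Encode a fraction in [-1, 1) as a two's-complement fixed-point word
    of frac_bits + 1 bits (assumed representation, for illustration)."""
    word_bits = frac_bits + 1
    return round(x * (1 << frac_bits)) & ((1 << word_bits) - 1)

def decode_fraction(w, frac_bits=23):
    """Decode a two's-complement fixed-point word back to a float."""
    word_bits = frac_bits + 1
    if w & (1 << frac_bits):              # sign bit set -> negative value
        w -= 1 << word_bits
    return w / (1 << frac_bits)
```

For example, 0.5 encodes as 0x400000 in the 24-bit full-word form, and values round-trip exactly whenever they are representable in the chosen number of fraction bits.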
D-37C Computer Instructions
D-17B Comparison
Both the D-17B and the D-37C computers were designed and built by Autonetics, then a division of North American Aviation, later a division of Boeing, for the real time guidance and control of a Minuteman ICBM from launch to detonation. The D-17B is a component of the NS-10Q missile guidance system for the Minuteman I, while the D-37C is a component of the NS-17 missile guidance system for the Minuteman II. There are many basic similarities between the two designs. They are both synchronous, serial machines with fixed head disks for primary memory. They have two-address instructions, half and whole word precision, and many similar instruction operator codes. The differences in the two computers are based mainly upon their differing technologies. The D-17B was built in 1962 using primarily diode-resistor logic and diode-transistor logic as needed to realize its logic circuits. On the other hand, the D-37C was built in 1964 using small scale integrated circuits made by Texas Instruments with discrete components only in the internal power supplies.
Specifications
MINUTEMAN ADVANCED D-37B
MANUFACTURER
Autonetics Division of North American Aviation
APPLICATIONS
Missile guidance and control
PROGRAMMING AND NUMERICAL SYSTEM
Internal number system: Binary
Binary digits/word: 27
Arithmetic system: Fixed point
ARITHMETIC UNIT
Operation times, excluding storage access (microseconds):
Add: 78
Multiply: 1,016
Divide: 2,030
Arithmetic mode: Serial
Timing: Synchronous
Operation: Sequential
STORAGE
Disk (general purpose channels): 6,912 words, 5,000 microseconds average access
Disk (rapid access loops): 29 words
1-word loop: 40 microseconds
4-word loop: 160 microseconds
8-word loop: 320 microseconds
16-word loop: 640 microseconds
POWER, SPACE, WEIGHT, AND SITE PREPARATION
Power, computer 0.169 kW
Volume, computer 0.40 cu ft
Weight, computer 26 lbs
Power supply
Jerrold Foutz, President, SMPS Technology, was the responsible engineer for the Minuteman D-37B guidance and control computer power supply study program which defined the state-of-art techniques later used in one of the first integrated-circuit military computers. These techniques included high-speed flat-pack power transistors and diodes (the first silicon power devices that could switch at 20 kHz and higher), high frequency DC-DC converters (100 kHz reduced to 20 kHz for reliability safety margins), high frequency pulse-width-modulated power supplies (20 kHz), metal substrate multilayer circuit boards (removing eight watts per cubic inch in a space environment with 40°C rise, junction to system heat sink), and radiation circumvention techniques that removed all electrical power from the power distribution system, including decoupling capacitors, in less than 1 microsecond and restored it to the specified voltage in a few microseconds upon command. He was responsible for developing these concepts from exploratory development through to the production design. The basic power supply configuration was maintained in later Minuteman missiles whereas other components underwent major redesigns. Also developed, but not used, was a complete liquid dielectric cooling system based on phase change. This study verified, for the first time, that such a system could work in zero-gravity, and that the liquid dielectric showed no compatibility problems with the chosen electronic components over a test period lasting eight years.
See also
D-17B
D37D
Minuteman (missile)
Inertial navigation system
References
Tony C. Lin. Development of U.S. Air Force Intercontinental Ballistic Missile Weapon Systems. Journal of Spacecraft and Rockets, vol. 40, no. 4, 2003. pp. 491–509.
Dennis C. Reguli. Conversion of the D-37C Computer for General Purpose Applications. Air Force Institute of Technology, Wright-Patterson AFB, Ohio, School of Engineering, Master's Thesis, 1974. 171 pp.
Minuteman D-37C Computer Logic Breakdown. (Technical Memorandum 64-343-2-8). Anaheim, California. Autonetics, Division of North American Rockwell, Inc.
Minuteman D-37C Digital Computer System Depot Overhaul. Anaheim, California, Autonetics, Division of North American Rockwell, Inc. FET-D-120-D37/4.
Martin H. Weik. A Fourth Survey of Domestic Electronic Digital Computing Systems. Ballistic Research Laboratories, Aberdeen Proving Ground, MD, Report No. 1227, January 1964.
Jerrold Foutz, President, SMPS Technology.
Military computers
Missile guidance |
20214001 | https://en.wikipedia.org/wiki/M.%20Lynne%20Markus | M. Lynne Markus | M. Lynne Markus (born 1950) is an American Information systems researcher, and John W. Poduska, Sr. Chair of Information Management, Bentley University, who has made fundamental contributions to the study of enterprise systems and inter-enterprise systems, IT and organizational change, and knowledge management.
Education
Markus received her B.S. in 1972 from the University of Pittsburgh, and her PhD in Organizational Behavior in 1979 from the Case Western Reserve University.
Career and research
She was formerly a member of the Faculty of Business at the City University of Hong Kong (as Chair Professor of Electronic Business), the Peter F. Drucker Graduate School of Management at Claremont Graduate University, the Anderson Graduate School of Management (UCLA), and the MIT Sloan School of Management.
Markus' research interests are in the fields of "effective design, implementation and use of information systems within and across organizations; the risks and unintended consequences of information technology use; and innovations in the governance and management of information technology."
Her work in these areas has been published in several high-impact peer-reviewed journals, and set the stage for much of the future work in these areas. She is one of the most widely cited researchers in the field of information systems.
Her article "The Technology Shaping Effects of E-Collaboration Technologies – Bugs and Features" was selected as the best article published in 2005 in the International Journal of e-Collaboration. The article "Industry-Wide Information Systems Standardization as Collective Action: The Case of the U.S. Residential Mortgage Industry", which she co-authored, was selected as the paper of the year for 2006 in the journal MIS Quarterly.
Awards and honours
Best article published in 2005 in the International Journal of e-Collaboration.
Paper of the year for 2006 in the journal MIS Quarterly.
2008 Leo Award for Exceptional Lifetime Achievement in Information Systems by the Association for Information Systems.
Selected publications
Markus, M. Lynne, and Robert I. Benjamin. 1997. The Magic Bullet Theory In IT-Enabled Transformation, Sloan Management Review, 38(2): 55-68.
Markus, M. Lynne. 1983. Power, Politics, and MIS Implementation, Communications of the ACM, 26(6): 430-444.
Markus, M. Lynne. 1987. Toward a 'Critical Mass' Theory of Interactive Media: Universal Access, Interdependence, and Diffusion, Communications Research, 14(5): 491-511.
Markus, M. Lynne and Daniel Robey. 1988. Information Technology and Organizational Change: Causal Structure in Theory and Research, Management Science, 34(5): 583-598.
Ortiz de Guinea, Ana and M. Lynne Markus. 2009. Why break the habit of a lifetime? Rethinking the roles of intention, habit, and emotion in continuing information technology use, MIS Quarterly, 33(3): 433-444.
References
External links
M. Lynne Markus at bentley.edu
Living people
American sociologists
American women sociologists
Information systems researchers
MIT Sloan School of Management faculty
Year of birth uncertain
21st-century American women |
45350085 | https://en.wikipedia.org/wiki/Visual%20computing | Visual computing | Visual computing is a generic term for all computer science disciplines dealing with images and 3D models, such as computer graphics, image processing, visualization, computer vision, virtual and augmented reality and video processing. Visual computing also includes aspects of pattern recognition, human computer interaction, machine learning and digital libraries. The core challenges are the acquisition, processing, analysis and rendering of visual information (mainly images and video). Application areas include industrial quality control, medical image processing and visualization, surveying, robotics, multimedia systems, virtual heritage, special effects in movies and television, and computer games.
History and overview
Visual computing is a fairly new term, which acquired its current meaning around 2005, when the International Symposium on Visual Computing first convened. Areas of computer technology concerning images, such as image formats, filtering methods, color models, and image metrics, share many mathematical methods and algorithms. When computer scientists working in disciplines that involve images, such as computer graphics, image processing, and computer vision, noticed that their methods and applications increasingly overlapped, they began using the term "visual computing" to describe these fields collectively. Programming methods for graphics hardware, techniques for handling huge data sets, textbooks and conferences, and the scientific communities and industry working groups of these disciplines likewise intermixed more and more.
Furthermore, applications increasingly needed techniques from more than one of these fields concurrently. Generating very detailed models of complex objects requires image recognition, 3D sensors, and reconstruction algorithms, and displaying these models believably requires realistic rendering techniques with complex lighting simulation. Real-time graphics is the basis for usable virtual and augmented reality software. A good segmentation of the organs is the basis for interactive manipulation of 3D visualizations of medical scans. Robot control requires the recognition of objects as well as a model of the robot's environment. And all computing devices need ergonomic graphical user interfaces.
Although many problems are considered solved within the scientific communities of the sub-disciplines making up visual computing (mostly under idealistic assumptions), one major challenge of visual computing as a whole is the integration of these partial solutions into applicable products. This includes dealing with many practical problems, such as addressing a multitude of hardware, the use of real data (which is often erroneous and/or gigantic in size), and operation by untrained users. In this respect, visual computing is more than just the sum of its sub-disciplines: it is the next step towards systems fit for real use in all areas using images or 3D objects on the computer.
Visual computing disciplines
At least the following disciplines are sub-fields of visual computing. More detailed descriptions of each of these fields can be found on the linked special pages.
Computer graphics and computer animation
Computer graphics is a general term for all techniques that produce images with the help of a computer. Transforming descriptions of objects into images is called rendering, which always involves a compromise between image quality and run-time.
Image analysis and computer vision
Techniques that can extract content information from images are called image analysis techniques. Computer vision is the ability of computers (or of robots) to recognize their environment and to interpret it correctly.
Visualization and visual analytics
Visualization is used to produce images that communicate messages. The data may be abstract or concrete, often with no a priori geometrical component. Visual analytics describes the discipline of interactive visual analysis of data, also described as “the science of analytical reasoning supported by the interactive visual interface”.
Geometric modeling and 3D-printing
Representing objects for rendering requires special methods and data structures, which are subsumed under the term geometric modeling. In addition to descriptive and interactive geometric techniques, sensor data are increasingly used to reconstruct geometrical models. Algorithms for the efficient control of 3D printers also belong to the field of visual computing.
Image processing and image editing
In contrast to image analysis, image processing manipulates images to produce better images. “Better” can have very different meanings depending on the respective application. Image processing also has to be distinguished from image editing, which describes the interactive manipulation of images based on human validation.
Virtual and augmented reality
Techniques that produce the feeling of immersion in a fictive world are called virtual reality (VR). Requirements for VR include head-mounted displays, real-time tracking, and high-quality real-time rendering. Augmented reality enables the user to see the real environment in addition to the virtual objects that augment this reality; its requirements for rendering speed and tracking precision are significantly higher.
Human computer interaction
The planning, design, and use of interfaces between people and computers is part of every system involving images. Due to the high bandwidth of the human visual channel (the eye), images are also a preferred element of ergonomic user interfaces in any system, making human-computer interaction an integral part of visual computing.
Footnotes
External links
Microsoft Research Group Visual Computing
Visual Computing at NVidia
Visual Computing Group at Harvard University
Visual Computing Group at Brown University
Visual Computing Center at KAUST
Applied Research in Visual Computing (Fraunhofer IGD)
Institute of Visual Computing (Hochschule Bonn-Rhein-Sieg, Sankt Augustin)
VRVis Research Center for Virtual Reality and Visualisation (Vienna, Austria)
Visual Computing Group @ HTW Berlin (Germany)
Image processing
Computer graphics |
9250039 | https://en.wikipedia.org/wiki/Steven%20Bender | Steven Bender | Steven Lee Bender (September 7, 1950, Russell, Kansas – March 5, 2010) was a serial entrepreneur and founder of both Altamira Group (Genuine Fractals) and iMagic Software (typing pattern recognition). Bender made contributions to digital imaging and Photoshop, and to authentication for distributed systems by helping to turn passwords into a biometric akin to fingerprints. Information technology analyst Rob Enderle of the Enderle Group commented in January 2007 that this new technology is "a compelling solution in a world where identity theft and illegal access are the greatest growing threats to a business or family".
The patented and patent-pending system developed by Bender and co-founder Howard Postley is based on muscle memory: people exhibit reliable patterns hidden inside the simple act of typing a password. By using these patterns, it is possible to distinguish real users from impostors. This system extends the science often called keystroke dynamics.
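As an illustrative sketch only (not the patented system, whose internals are not described here), keystroke-dynamics matching can be reduced to comparing the intervals between keystrokes of a password attempt against a mean/standard-deviation profile enrolled from earlier samples; the function names and the z-score threshold below are assumptions:

```python
def flight_times(timestamps):
    """Intervals (ms) between successive key presses for one password entry."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def enroll(samples):
    """Build a per-interval (mean, std dev) profile from several typing samples."""
    profile = []
    n = len(samples)
    for intervals in zip(*(flight_times(s) for s in samples)):
        mean = sum(intervals) / n
        var = sum((x - mean) ** 2 for x in intervals) / n
        # Guard against a zero std dev when all samples agree exactly.
        profile.append((mean, var ** 0.5 or 1.0))
    return profile

def matches(profile, attempt, z_max=2.5):
    """Accept the attempt if every interval lies within z_max std devs."""
    return all(abs(x - m) / s <= z_max
               for (m, s), x in zip(profile, flight_times(attempt)))
```

Real systems also use dwell times (how long each key is held down), larger enrollment sets, and more robust statistics than this toy z-score test.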
In the 1990s, Bender's team developed Genuine Fractals at Altamira Group. Genuine Fractals was a plug-in for Photoshop that, for the first time, made images completely resolution-independent, meaning a single image could be scaled from postage stamp to IMAX without loss of quality. Genuine Fractals won Best Product of the Year (EDDY) from Macworld Magazine in 1997 and 1998, and is now an industry standard, on version 5 from onOne Software.
References
-- "Key Sequence Rhythms" USA Patent 7,206,938.
—MacWorld 1997 EDDY winners, Genuine Fractals wins Best Graphics Plug-in. Notable also because Genuine Fractals was the first product developed on PC and ported to Mac to win an EDDY
—Steven Bender Bio, iMagic Software website, enterprise software vendor specializing in human authentication via typing rhythms.
—Description of Trustable Passwords from CA.com, formerly Computer Associates
—The importance of a biometric logon, from the Candid CIO blog
—Article on the founding of iMagic Software and the first customer, Cottage Hospital, Santa Barbara, from the Santa Barbara News Press, cover article.
—Obituary, from the Santa Ynez Valley Journal. (Scroll down)
External links
The Enderle Group
Description of Genuine Fractals features and importance
Genuine Fractals version 5 from onOne Software
Genuine Fractals from Adobe.com
American computer businesspeople
2010 deaths
1950 births
People from Russell, Kansas |
710100 | https://en.wikipedia.org/wiki/Any%20key | Any key | Computer programmers historically used "Press any key to continue" (or a similar text) as a prompt to the user when it was necessary to pause processing. The system would resume after the user pressed any keyboard button.
History
Early computers were typically operated using mechanical teleprinters, which provided a continuous printed record of their output. However, during the 1970s, these became obsolete and were replaced with visual display units, and text was lost once it scrolled off the top of the screen. To compensate, programs typically paused operation after displaying one screen of data, so that the user could observe the results and then press a key to move to the next screen.
A similar pause was also required when some physical action was required from the user, such as inserting a floppy disk or loading a printer with paper.
These prompts were commonplace on text-based operating systems prior to the development of graphical user interfaces, which typically included scrollbars to enable the user to view more than one screen/window of data. They are therefore no longer required as a means of paginating output, but the graphical equivalent (such as a modal dialog box containing the text "Click OK to continue") is still used for hardware interactions.
The prompt ("any key") is not strictly accurate, in that one is required to press a key which generates some sort of character. For the vast majority of computer systems, pressing modifier keys or lock keys would not cause processing to resume, as they do not produce an actual character that the program could detect.
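A minimal sketch of such a pause prompt, assuming a POSIX terminal (the termios and tty modules are Unix-only) and hypothetical function names:

```python
import sys
import termios
import tty

def raw_read_char():
    """Read one character from the terminal without waiting for Enter."""
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)            # raw mode: each keypress is delivered at once
        return sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)

def press_any_key(read_char=raw_read_char,
                  prompt="Press any key to continue . . . "):
    """Pause until a key that generates a character is pressed.

    Modifier and lock keys (Shift, Ctrl, Caps Lock) produce no character
    on their own, so they do not end the pause.
    """
    sys.stdout.write(prompt)
    sys.stdout.flush()
    ch = read_char()              # blocks until one character arrives
    sys.stdout.write("\n")
    return ch
```

The reader is passed in as a parameter so the pause logic can be exercised without a real terminal; on Windows a similar effect is usually achieved with the msvcrt module instead of termios.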
Some Samsung remote controls for DVD players, such as that of the DVD-R130, have included an "anykey" in their interface. It is used to view the status of the DVD being watched.
Cultural significance
A 1982 Apple Computer manual for developers warned:
There are reports from as early as 1988 that some users have searched for such a key labelled "any", and called technical support when they have been unable to find it. The computer company Compaq even edited their FAQ to explain that the "any" key does not exist, and at one point considered replacing the command "Press any key" with "Press return key".
The concept of the "any key" has become a popular piece of computer-related humor, and was used as a gag on The Simpsons, in the seventh-season episode "King-Size Homer".
Plastic "ANY keys" with adhesive backings are available as novelty gifts.
In ex-USSR computer slang
A slang word in the Russian language has appeared due to the phrase "press any key to continue" and other similar phrases; it refers to computer system administrators and technical support workers who must assist users struggling with PC-related difficulties that are often trivial, such as "press any key to continue" messages. The word is often considered derogatory, contrasting anyone it is applied to with someone considered to be a real system administrator or higher-level technical support worker. A related slang verb means 'to perform usually simple computer administration and support'.
In Russian-language computer jargon, the term (anykey) is sometimes associated with the reset button of a computer. Office workers more experienced in English and computers are said to have placed stickers with the words "any key" printed on them over reset buttons, causing colleagues to press the reset button when a "press any key" message appeared. Others have attributed the association to the behaviour of novice computer users, who might press the reset button when faced with a message they do not expect, rather than reading it and following its instructions. Murphy's law-like situations may also have contributed to the term's association with the reset button, as users may not trust the computer to perform the stated action correctly after a key is pressed, for instance due to a software bug, and instead opt to reset it entirely if there is no other possible action to take.
References
Computer keys
Computer humor |
62391933 | https://en.wikipedia.org/wiki/Department%20of%20Computer%20Science%20of%20TU%20Darmstadt | Department of Computer Science of TU Darmstadt | The Department of Computer Science is a department of the Technische Universität Darmstadt. With a total of 36 professorships and about 3,700 students in 12 study courses, the Department of Computer Science is the largest department of the university. The department shapes the two research profile areas "Cybersecurity (CYSEC)" and "Internet and Digitization (InDi)" of the university.
Like the history of the university, the history of the department is shaped by pioneers. The beginnings of computer science, artificial intelligence and business informatics in Germany go back to the department.
History
Beginnings of computer science in Germany
In 1928, Alwin Walther was appointed professor of mathematics at the Technische Hochschule Darmstadt. Walther established the Institute for Practical Mathematics (IPM) there, which was part of the Department of Mathematics and Natural Sciences. In Germany, the beginnings of computer science go back to this institute. The institute was concerned with automating computing using mechanical and electromechanical devices and developing machines that could be used to solve mathematical problems. One of the earliest results was the System Darmstadt slide rule, which was widely used in mechanical engineering. Another development was an electromechanical integration system. After the Second World War, the institute concentrated increasingly on the development of electronic computer systems. Due to the reputation that TH Darmstadt had at that time in automatic computation research, the first congress on the subject of computer science (electronic calculators and information processing) held in German-speaking countries with international participation took place at TH Darmstadt in October 1955. The Darmstadt Electronic Calculator (DERA), which was completed in 1959, was created with the help of the German Research Foundation (DFG). At that time, the computer capacity was unique in Europe. Two decades before the invention of programming languages, algorithms were tested on the computing station and successfully used to process problems from industry. In 1956, the first students at DERA were able to deal with the problems of automatic calculating machines. At the same time, the first programming lectures and practical courses were offered at TH Darmstadt. In 1957, Walther made sure that TH Darmstadt got an IBM 650, which was the most powerful computer at that time. Thus TH Darmstadt was also the first university in Germany with a mainframe computer. 
In 1961, in response to Walther's efforts, the German Computer Center (DRZ) was founded in Darmstadt, the first mainframe computer center in Germany with which TH Darmstadt entered into a cooperation to train mathematical-technical assistants.
Electrical engineering also had a major influence on computer science at the Technische Hochschule Darmstadt (TH Darmstadt). In 1964, Robert Piloty was appointed to the chair of data technology at TH Darmstadt. In the 1960s, Germany lacked competitiveness in the field of data processing. To counteract this, the Federal Committee for Scientific Research adopted a programme for the promotion of research and development in the field of data processing for public tasks on 26 April 1967. The advisory board, which consisted mainly of representatives of universities and non-university research institutions, was responsible for the implementation of the programme. At the seventh meeting of the advisory board on 15 November 1967, Karl Ganzhorn, who at the time was responsible for research and development at IBM Germany, signalled the problems of industry in finding skilled personnel. The director of the Institute for Information Processing at TH Darmstadt, Piloty, pointed out that the German universities were responsible for training qualified personnel. As a result, a committee was formed, chaired by Piloty. The committee formulated recommendations for the training of computer scientists, which provided for the establishment of a course of studies in computer science at several universities and technical colleges. At TH Darmstadt, Piloty worked with Winfried Oppelt on a study plan for "Computer Science" with a strong engineering character. There was already another curriculum, named "Diplom-Ingenieur Informatik (Mathematik)", which came from the Faculty of Mathematics and Physics and placed a stronger emphasis on software engineering. However, the Faculty of Electrical Engineering was the driving force, which is why in the same year the first computer science course of study in Germany was established at the Faculty of Electrical Engineering on the basis of Piloty's and Oppelt's study regulations.
The first diploma thesis was written in 1971, the first doctoral thesis in 1975 and the first habilitation in 1978.
In the spring of 1969, Hartmut Wedekind and Robert Piloty had travelled through the USA together for several weeks to study the faculties of computer science there. On July 7, 1969, the Founding Committee for Computer Science (GAI) was established to constitute the Department of Computer Science. Later, the committee was replaced by a provisional department conference. This conference met for the first time on 15 May 1972, so that on that day the Department of Computer Science was officially established. Wedekind became its first dean. Piloty was awarded the Konrad Zuse Medal for his achievements in 1989. In 1969, graduates of TH Darmstadt founded Software AG. Today it is one of the largest IT companies in Europe. One of the founders was Peter Schnell, who was chairman of Software AG for many years and today, with his Software AG Foundation, is one of the largest donors in Germany.
Business Informatics
The history of business informatics goes back to Peter Mertens, who studied industrial engineering at the Technische Hochschule Darmstadt (TH Darmstadt). His habilitation thesis was the first habilitation thesis on business informatics in the German-speaking world. In 1968, Peter Mertens was appointed to the first chair in the German-speaking countries focusing on economic data processing at the Johannes Kepler University Linz. In the same year, Hartmut Wedekind, former systems consultant at IBM Germany, represented the Chair of Business Administration at TH Darmstadt for the first time. Two years later, he was appointed to the Chair of Business Administration and Data Processing at TH Darmstadt. Wedekind worked on database systems and their operational applications and, as early as 1971, headed the "Data Management Systems I" research group, which dealt with databases in the operational context. It was the first larger research group to deal with the topics of business informatics. In 1976, TH Darmstadt introduced the first course of studies in business informatics in Germany.
Artificial Intelligence
The history of artificial intelligence goes hand in hand with the appointment of Wolfgang Bibel, who had been rejected by professors at the Technical University of Munich because they did not believe in the future of artificial intelligence. In the winter semester of 1985/1986, Bibel first represented the chair at the Technische Hochschule Darmstadt (TH Darmstadt) as a deputy professor; the university later appointed him to it. Hans-Jürgen Hoffmann, Professor for Programming Languages and Translators, was involved in the deputy professorship. Bibel accepted the call to TH Darmstadt on 1 October 1988 and became Professor of Intellectics at the Department of Computer Science. Bibel is one of the founders of artificial intelligence in Germany and Europe. He built up the necessary institutions, conferences, and scientific journals and provided the necessary research programmes to establish the field of artificial intelligence. For the academic year 1991/1992 he took over the office of Dean of the Department of Computer Science of TH Darmstadt. During this time he chaired three appointment commissions, among them those of Oskar von Stryk and Karsten Weihe. In this time, he also built up his research group and made the Technische Universität Darmstadt (TU Darmstadt) one of the leading universities for artificial intelligence worldwide. The most outstanding scientific project was the National Priority Program Deduction, funded by the German Research Foundation (DFG). The project led to Germany assuming a leading position in artificial intelligence. He has been professor emeritus since 2004 and gave his farewell lecture on February 13, 2004. By 2017, twenty-five of his doctoral students or staff had become professors, so that a large share of today's German AI researchers are graduates of TU Darmstadt. For his achievements he was honored by the Gesellschaft für Informatik as one of the ten most influential minds in German AI history.
He was also one of the first Fellows of the Association for the Advancement of Artificial Intelligence (AAAI).
The Centre for Cognitive Science (CCS) was founded at TU Darmstadt by Constantin Rothkopf, Professor of Psychology of Information Processing. Rothkopf became its founding director. Research groups from various disciplines work at the Centre. At the same time Kristian Kersting, Professor of Artificial Intelligence and Machine Learning, founded the initiative Artificial Intelligence at TU Darmstadt (AI•DA), a unique model that coordinates different research groups to advance the development of artificial intelligence. Kersting was awarded in 2019 for his scientific achievements as a Fellow of the European Association for Artificial Intelligence (EurAI) and as a Fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS).
In 2019, the TU Darmstadt was selected as a founding location of ELLIS with the aim of establishing a top AI research institute. The decision, made by international scientists, was based on the scientific excellence in the field.
IT Security
In 1996, Johannes Buchmann was appointed to the Chair of Theoretical Computer Science. The appointment is regarded as the birth of IT security at the Technische Hochschule Darmstadt (TH Darmstadt). Three years later, Darmstadt's universities and research institutions founded the Competence Center for Applied Security Technology (CAST), the largest network for cyber security in the German-speaking world. It was initially a forum, which was transformed into an independent association in 2003. The second professorship for IT security followed in 2001. Claudia Eckert, who also headed the Fraunhofer Institute for Secure Information Technology (Fraunhofer SIT) from 2001 to 2011, was appointed Professor of Information Security at the Technische Universität Darmstadt. The professorship was endowed by the Horst Görtz Foundation. IT security was institutionalized in 2002 with the founding of the Darmstadt Center for IT Security (DZI), which became the Center for Advanced Security Research Darmstadt (CASED) in 2008. Buchmann and Eckert were in charge of the project. Buchmann was the founding director of CASED. In 2010, Michael Waidner became director of Fraunhofer SIT. The European Center for Security and Privacy by Design (EC SPRIDE) was founded in 2011 as a result of the efforts of Buchmann and Waidner. CASED and EC SPRIDE were part of LOEWE, the research excellence program of the state of Hesse.
In 2012, Intel established the Intel Collaborative Research Institute for Secure Computing at the Technische Universität Darmstadt. It was the first collaborative research institute for IT security that Intel established outside the United States. Two years later, the German Research Foundation (DFG) established the Collaborative Research Centre "CROSSING - Cryptography-Based Security Solutions" at the Technische Universität Darmstadt, which deals with cryptography-based security solutions. The first speaker of CROSSING was Buchmann.
In 2015, CASED and EC SPRIDE merged to form today's Center for Research in Security and Privacy (CRISP), the largest research institution for IT security in Europe. In the same year, the German Research Foundation established the Graduate School for Privacy and Trust for Mobile Users on the initiative of Max Mühlhäuser. One year later, the Federal Ministry of Finance decided to make the Darmstadt region an outstanding location for the digital transformation of the economy. The Federal Ministry of Finance has established the centers "Digital Hub Cybersecurity" and "Digital Hub FinTech" in the region, which are to serve the networking of companies, research institutions and start-ups. CRISP was upgraded to the National Research Center for Applied Cyber Security on January 1, 2019.
Johannes Buchmann and his team founded the field of post-quantum cryptography internationally. In a worldwide competition organized by the National Institute of Standards and Technology, the XMSS signature method developed by Buchmann and his team became the first international standard for post-quantum cryptography in 2018. XMSS is the first future-proof and practical signature procedure with minimal security requirements. Buchmann was awarded the Konrad Zuse Medal in 2017 for his achievements.
Fraunhofer Institute in Darmstadt
The history of the Fraunhofer Institute for Secure Information Technology (Fraunhofer SIT) dates back to 1961, when the German Computer Center (DRZ) was founded in Darmstadt on the initiative of Alwin Walther. At that time, the German Computer Center was equipped with one of the most powerful mainframe computers in Germany, making it the first mainframe data center in the country. A particular feature of the DRZ was that it could be used by universities and scientific institutions for research purposes. As the ARPANET became more and more widespread, communication between machines became the focus of research at the DRZ. In 1973, the DRZ merged with other research institutions in this field to form the Gesellschaft für Mathematik und Datenverarbeitung (GMD). The GMD founded the Institute for Remote Data Transmission, which was renamed the Institute for Telecooperation Technology in 1992. Under the direction of Heinz Thielmann, the institute increasingly dealt with IT security issues; with the advent of the Internet, IT security became ever more important, so that in 1998 it was renamed the Institute for Secure Telecooperation. In 2001 the GMD merged with the Fraunhofer Society. In 2004, the Institute for Secure Telecooperation became the Fraunhofer Institute for Secure Information Technology (Fraunhofer SIT). The founding director was Claudia Eckert, who was also Professor for Information Security at the Technische Universität Darmstadt.
Graphical Data Processing
In 1975, José Luis Encarnação founded the Research Group Graphic Interactive Systems (GRIS) at the Department of Computer Science of the Technische Hochschule Darmstadt. In 1977 he and his research group introduced the Graphical Kernel System (GKS) as the first ISO standard for computer graphics (ISO/IEC 7942). GKS allows graphics applications to run device-independently: images could be created and manipulated, and for the first time they were portable between systems. In 1984 Encarnação founded the Center for Computer Graphics in Darmstadt. A working group resulting from this cooperation was taken over by the Fraunhofer Society, and the Fraunhofer Institute for Computer Graphics Research (Fraunhofer IGD) was founded in 1987. The founding director of the Fraunhofer IGD was José Luis Encarnação. The institute was one of the first research institutes to deal with internet technologies. José Luis Encarnação was awarded the Konrad Zuse Medal for his achievements in 1997.
Research
Research priorities
The research focuses of the department include:
Computational Engineering and Robotics
Data Science
IT Security
Massively Parallel Software Systems
Networks and Distributed Systems
Visual Computing
Research grants
According to the funding report 2018 of the German Research Foundation (DFG), the Technische Universität Darmstadt received the highest number of competitive grants in the field of computer science in the period under review from 2014 to 2016. In a competitive selection process, the DFG selects the best research projects from researchers at universities and research institutions and finances them.
Location
The Department of Computer Science is spread over several locations, but the buildings are located in or around the city center of Darmstadt.
Contests
The search and rescue robot Hector (Heterogeneous Cooperating Team Of Robots) of the Technische Universität Darmstadt competed in 2014 in the category "Rescue Robot" in the RoboCup, the oldest and world's largest competition for intelligent robots in various application scenarios, and took first place there.
In 2017, the Argonaut robot, developed by a team led by Oskar von Stryk, won the ARGOS Challenge for intelligent inspection robots on oil and gas platforms, which the company Total S.A. had launched. The prize was half a million euros. Argonaut is a variant of the Taurob tracker and the first fully autonomous, mobile inspection robot for oil and gas plants.
In 2018, Hector competed at the World Robot Summit in Tokyo in the category "Plant Disaster Prevention Challenge" and won 1st place.
References
1972 establishments in Germany
Technische Universität Darmstadt
5940590 | https://en.wikipedia.org/wiki/Linuxcare | Linuxcare | Linuxcare is an American IT services company founded in San Francisco in 1998 by Dave Sifry, Arthur Tyde and Dave LaDuke. The company's initial goal was to be "the 800 number for Linux" and operate 24 hours a day. Due to the dot-com bubble of the early millennium years, this version of Linuxcare morphed into Levanta and eventually sold in 2008.
Linuxcare Bootable Toolbox
In 1999, Linuxcare developed the Linuxcare Bootable Toolbox, also known as the Linuxcare BBC, or Bootable Business Card. The BBC was a Live CD, a bootable Linux distribution designed to be run entirely from the CD. In 1999, this was a very new concept, and was preceded by only one other Linux distribution designed exclusively to be run from CD, DemoLinux. While DemoLinux was designed to show the whole desktop experience of a Linux distribution, the Linuxcare BBC was designed to be used mainly as a utility CD, and was the first Live CD with this focus.
The BBC distribution was under 50MB, and designed to fit on a mini CD shaped like a standard business card. It included utilities designed to assist system administrators, and was primarily a text console operating system, but a minimal Blackbox X11 UI was included.
Linuxcare produced an initial launch of an unversioned release, pressed as business card CDs, and distributed them at LinuxWorld 1999. Versions 1.2, 1.5 and 1.6 were later released online, and pressed and released at other conventions and Linux user groups from 1999 to 2001.
LNX-BBC
On May 8, 2001, Seth Schoen announced that the original three developers of the Linuxcare Bootable Toolbox had left Linuxcare to fork the project into a new community project, named LNX-BBC. Release 1.6, released in May 2001, served as a transition release between Linuxcare and LNX-BBC, with both projects offering the same release on their respective sites. LNX-BBC produced three more BBC releases: 1.618 in August 2001, 2.0 in January 2003, and 2.1 on May 1, 2003. Early versions were assembled by hand, while later versions utilized GAR, a software build system built around GNU Automake. LNX-BBC was discontinued after the 2.1 release.
2011 company relaunch
The Linuxcare brand was repurchased by Arthur Tyde and incorporated as an LLC by Dr. Scott S. Elliott and his partners in the state of California. The new company provides IT services to businesses related to cloud computing. Linuxcare LLC has offices in San Francisco and Manila.
References
External links
Linuxcare homepage (dead link)
LNX-BBC homepage (site now hosts unrelated advertising)
GAR Architecture (dead link)
Linux companies
27764 | https://en.wikipedia.org/wiki/Systems%20engineering | Systems engineering | Systems engineering is an interdisciplinary field of engineering and engineering management that focuses on how to design, integrate, and manage complex systems over their life cycles. At its core, systems engineering utilizes systems thinking principles to organize this body of knowledge. The individual outcome of such efforts, an engineered system, can be defined as a combination of components that work in synergy to collectively perform a useful function.
Issues such as requirements engineering, reliability, logistics, coordination of different teams, testing and evaluation, maintainability and many other disciplines necessary for successful system design, development, implementation, and ultimate decommission become more difficult when dealing with large or complex projects. Systems engineering deals with work-processes, optimization methods, and risk management tools in such projects. It overlaps technical and human-centered disciplines such as industrial engineering, process systems engineering, mechanical engineering, manufacturing engineering, production engineering, control engineering, software engineering, electrical engineering, cybernetics, aerospace engineering, organizational studies, civil engineering and project management. Systems engineering ensures that all likely aspects of a project or system are considered and integrated into a whole.
The systems engineering process is a discovery process that is quite unlike a manufacturing process. A manufacturing process is focused on repetitive activities that achieve high quality outputs with minimum cost and time. The systems engineering process must begin by discovering the real problems that need to be resolved, and identifying the most probable or highest impact failures that can occur – systems engineering involves finding solutions to these problems.
History
The term systems engineering can be traced back to Bell Telephone Laboratories in the 1940s. The need to identify and manipulate the properties of a system as a whole, which in complex engineering projects may greatly differ from the sum of the parts' properties, motivated various industries, especially those developing systems for the U.S. Military, to apply the discipline.
When it was no longer possible to rely on design evolution to improve upon a system and the existing tools were not sufficient to meet growing demands, new methods began to be developed that addressed the complexity directly. The continuing evolution of systems engineering comprises the development and identification of new methods and modeling techniques. These methods aid in a better comprehension of the design and developmental control of engineering systems as they grow more complex. Popular tools that are often used in the systems engineering context were developed during these times, including USL, UML, QFD, and IDEF0.
In 1990, a professional society for systems engineering, the National Council on Systems Engineering (NCOSE), was founded by representatives from a number of U.S. corporations and organizations. NCOSE was created to address the need for improvements in systems engineering practices and education. As a result of growing involvement from systems engineers outside of the U.S., the name of the organization was changed to the International Council on Systems Engineering (INCOSE) in 1995. Schools in several countries offer graduate programs in systems engineering, and continuing education options are also available for practicing engineers.
Concept
Systems engineering signifies only an approach and, more recently, a discipline in engineering. The aim of education in systems engineering is to formalize various approaches simply and in doing so, identify new methods and research opportunities similar to that which occurs in other fields of engineering. As an approach, systems engineering is holistic and interdisciplinary in flavour.
Origins and traditional scope
The traditional scope of engineering embraces the conception, design, development, production and operation of physical systems. Systems engineering, as originally conceived, falls within this scope. "Systems engineering", in this sense of the term, refers to the building of engineering concepts.
Evolution to broader scope
The use of the term "systems engineer" has evolved over time to embrace a wider, more holistic concept of "systems" and of engineering processes. This evolution of the definition has been a subject of ongoing controversy, and the term continues to apply to both the narrower and broader scope.
Traditional systems engineering was seen as a branch of engineering in the classical sense, that is, as applied only to physical systems, such as spacecraft and aircraft. More recently, systems engineering has evolved to take on a broader meaning, especially when humans are seen as an essential component of a system. Checkland, for example, captures the broader meaning of systems engineering by stating that 'engineering' "can be read in its general sense; you can engineer a meeting or a political agreement."
Consistent with the broader scope of systems engineering, the Systems Engineering Body of Knowledge (SEBoK) has defined three types of systems engineering: (1) Product Systems Engineering (PSE) is the traditional systems engineering focused on the design of physical systems consisting of hardware and software. (2) Enterprise Systems Engineering (ESE) pertains to the view of enterprises, that is, organizations or combinations of organizations, as systems. (3) Service Systems Engineering (SSE) has to do with the engineering of service systems. Checkland defines a service system as a system which is conceived as serving another system. Most civil infrastructure systems are service systems.
Holistic view
Systems engineering focuses on analyzing and eliciting customer needs and required functionality early in the development cycle, documenting requirements, then proceeding with design synthesis and system validation while considering the complete problem, the system lifecycle. This includes fully understanding all of the stakeholders involved. Oliver et al. claim that the systems engineering process can be decomposed into
a Systems Engineering Technical Process, and
a Systems Engineering Management Process.
Within Oliver's model, the goal of the Management Process is to organize the technical effort in the lifecycle, while the Technical Process includes assessing available information, defining effectiveness measures, creating a behavior model, creating a structure model, performing trade-off analysis, and creating a sequential build and test plan.
Although several models are used in industry, depending on the application, all of them aim to identify the relation between the various stages mentioned above and to incorporate feedback. Examples of such models include the Waterfall model and the VEE model (also called the V model).
Interdisciplinary field
System development often requires contribution from diverse technical disciplines. By providing a systems (holistic) view of the development effort, systems engineering helps mold all the technical contributors into a unified team effort, forming a structured development process that proceeds from concept to production to operation and, in some cases, to termination and disposal. In an acquisition, the holistic integrative discipline combines contributions and balances tradeoffs among cost, schedule, and performance while maintaining an acceptable level of risk covering the entire life cycle of the item.
This perspective is often replicated in educational programs, in that systems engineering courses are taught by faculty from other engineering departments, which helps create an interdisciplinary environment.
Managing complexity
The need for systems engineering arose with the increase in complexity of systems and projects, in turn exponentially increasing the possibility of component friction, and therefore the unreliability of the design. When speaking in this context, complexity incorporates not only engineering systems, but also the logical human organization of data. At the same time, a system can become more complex due to an increase in size as well as with an increase in the amount of data, variables, or the number of fields that are involved in the design. The International Space Station is an example of such a system.
The development of smarter control algorithms, microprocessor design, and analysis of environmental systems also come within the purview of systems engineering. Systems engineering encourages the use of tools and methods to better comprehend and manage complexity in systems. Some examples of these tools can be seen here:
System architecture,
System model, Modeling, and Simulation,
Optimization,
System dynamics,
Systems analysis,
Statistical analysis,
Reliability analysis, and
Decision making
Taking an interdisciplinary approach to engineering systems is inherently complex since the behavior of and interaction among system components is not always immediately well defined or understood. Defining and characterizing such systems and subsystems and the interactions among them is one of the goals of systems engineering. In doing so, the gap that exists between informal requirements from users, operators, marketing organizations, and technical specifications is successfully bridged.
Scope
One way to understand the motivation behind systems engineering is to see it as a method, or practice, to identify and improve common rules that exist within a wide variety of systems. Keeping this in mind, the principles of systems engineering – holism, emergent behavior, boundary, et al. – can be applied to any system, complex or otherwise, provided systems thinking is employed at all levels. Besides defense and aerospace, many information and technology based companies, software development firms, and industries in the field of electronics & communications require systems engineers as part of their team.
An analysis by the INCOSE Systems Engineering center of excellence (SECOE) indicates that optimal effort spent on systems engineering is about 15–20% of the total project effort. At the same time, studies have shown that systems engineering essentially leads to reduction in costs among other benefits. However, no quantitative survey at a larger scale encompassing a wide variety of industries has been conducted until recently. Such studies are underway to determine the effectiveness and quantify the benefits of systems engineering.
Systems engineering encourages the use of modeling and simulation to validate assumptions or theories on systems and the interactions within them.
Methods that allow early detection of possible failures, as in safety engineering, are integrated into the design process. At the same time, decisions made at the beginning of a project whose consequences are not clearly understood can have enormous implications later in the life of a system, and it is the task of the modern systems engineer to explore these issues and make critical decisions. No method guarantees that today's decisions will still be valid when a system goes into service years or decades after it was first conceived. However, there are techniques that support the process of systems engineering. Examples include soft systems methodology, Jay Wright Forrester's System dynamics method, and the Unified Modeling Language (UML)—all currently being explored, evaluated, and developed to support the engineering decision process.
Education
Education in systems engineering is often seen as an extension to the regular engineering courses, reflecting the industry attitude that engineering students need a foundational background in one of the traditional engineering disciplines (e.g., aerospace engineering, civil engineering, electrical engineering, mechanical engineering, manufacturing engineering, industrial engineering, chemical engineering)—plus practical, real-world experience to be effective as systems engineers. Undergraduate university programs explicitly in systems engineering are growing in number but remain uncommon, the degrees including such material most often presented as a BS in Industrial Engineering. Typically programs (either by themselves or in combination with interdisciplinary study) are offered beginning at the graduate level in both academic and professional tracks, resulting in the grant of either a MS/MEng or Ph.D./EngD degree.
INCOSE, in collaboration with the Systems Engineering Research Center at Stevens Institute of Technology maintains a regularly updated directory of worldwide academic programs at suitably accredited institutions. As of 2017, it lists over 140 universities in North America offering more than 400 undergraduate and graduate programs in systems engineering. Widespread institutional acknowledgment of the field as a distinct subdiscipline is quite recent; the 2009 edition of the same publication reported the number of such schools and programs at only 80 and 165, respectively.
Education in systems engineering can be taken as Systems-centric or Domain-centric:
Systems-centric programs treat systems engineering as a separate discipline and most of the courses are taught focusing on systems engineering principles and practice.
Domain-centric programs offer systems engineering as an option that can be exercised with another major field in engineering.
Both of these patterns strive to educate the systems engineer who is able to oversee interdisciplinary projects with the depth required of a core-engineer.
Systems engineering topics
Systems engineering tools are strategies, procedures, and techniques that aid in performing systems engineering on a project or product. The purposes of these tools vary from database management, graphical browsing, simulation, and reasoning to document production, neutral import/export, and more.
System
There are many definitions of what a system is in the field of systems engineering. Below are a few authoritative definitions:
ANSI/EIA-632-1999: "An aggregation of end products and enabling products to achieve a given purpose."
DAU Systems Engineering Fundamentals: "an integrated composite of people, products, and processes that provide a capability to satisfy a stated need or objective."
IEEE Std 1220-1998: "A set or arrangement of elements and processes that are related and whose behavior satisfies customer/operational needs and provides for life cycle sustainment of the products."
INCOSE Systems Engineering Handbook: "homogeneous entity that exhibits predefined behavior in the real world and is composed of heterogeneous parts that do not individually exhibit that behavior and an integrated configuration of components and/or subsystems."
INCOSE: "A system is a construct or collection of different elements that together produce results not obtainable by the elements alone. The elements, or parts, can include people, hardware, software, facilities, policies, and documents; that is, all things required to produce systems-level results. The results include system level qualities, properties, characteristics, functions, behavior and performance. The value added by the system as a whole, beyond that contributed independently by the parts, is primarily created by the relationship among the parts; that is, how they are interconnected."
ISO/IEC 15288:2008: "A combination of interacting elements organized to achieve one or more stated purposes."
NASA Systems Engineering Handbook: "(1) The combination of elements that function together to produce the capability to meet a need. The elements include all hardware, software, equipment, facilities, personnel, processes, and procedures needed for this purpose. (2) The end product (which performs operational functions) and enabling products (which provide life-cycle support services to the operational end products) that make up a system."
Systems engineering processes
Systems engineering processes encompass all creative, manual, and technical activities necessary to define the product and to convert a system definition into a sufficiently detailed system design specification for product manufacture and deployment. Design and development of a system can be divided into four stages, each with different definitions:
task definition (informative definition),
conceptual stage (cardinal definition),
design stage (formative definition), and
implementation stage (manufacturing definition).
Depending on their application, tools are used for various stages of the systems engineering process:
Using models
Models play important and diverse roles in systems engineering. A model can be defined in several ways, including:
An abstraction of reality designed to answer specific questions about the real world
An imitation, analogue, or representation of a real world process or structure; or
A conceptual, mathematical, or physical tool to assist a decision maker.
Together, these definitions are broad enough to encompass physical engineering models used in the verification of a system design, as well as schematic models like a functional flow block diagram and mathematical (i.e., quantitative) models used in the trade study process. This section focuses on the last.
The main reason for using mathematical models and diagrams in trade studies is to provide estimates of system effectiveness, performance or technical attributes, and cost from a set of known or estimable quantities. Typically, a collection of separate models is needed to provide all of these outcome variables. The heart of any mathematical model is a set of meaningful quantitative relationships among its inputs and outputs. These relationships can be as simple as adding up constituent quantities to obtain a total, or as complex as a set of differential equations describing the trajectory of a spacecraft in a gravitational field. Ideally, the relationships express causality, not just correlation. Furthermore, the methods by which these models are efficiently and effectively managed and used to simulate systems are also key to successful systems engineering activities. However, diverse domains often present recurring problems of modeling and simulation for systems engineering, and new advancements aim to cross-fertilize methods among distinct scientific and engineering communities, under the title of 'Modeling & Simulation-based Systems Engineering'.
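As a minimal sketch of the simplest kind of quantitative relationship mentioned above, a trade-study model can roll constituent quantities up into system-level totals. The component names, masses, and costs below are hypothetical.

```python
# Illustrative sketch only: a roll-up model that sums constituent quantities
# into system totals, the simplest quantitative relationship in a trade study.
# All component names and figures are hypothetical.

components = {
    # name: (mass in kg, unit cost in USD)
    "structure": (120.0, 40_000),
    "power":     (35.0,  25_000),
    "avionics":  (18.0,  60_000),
}

def roll_up(parts):
    """Sum constituent masses and costs to obtain system-level totals."""
    total_mass = sum(mass for mass, _ in parts.values())
    total_cost = sum(cost for _, cost in parts.values())
    return total_mass, total_cost

total_mass, total_cost = roll_up(components)  # 173.0 kg, 125,000 USD
```

A real trade-study model would typically relate many more outcome variables, but the structure (estimable inputs mapped through explicit relationships to outputs) is the same.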
Modeling formalisms and graphical representations
Initially, when the primary purpose of a systems engineer is to comprehend a complex problem, graphic representations of a system are used to communicate a system's functional and data requirements. Common graphical representations include:
Functional flow block diagram (FFBD)
Model-based design
Data flow diagram (DFD)
N2 chart
IDEF0 diagram
Use case diagram
Sequence diagram
Block diagram
Signal-flow graph
USL function maps and type maps
Enterprise architecture frameworks
A graphical representation relates the various subsystems or parts of a system through functions, data, or interfaces. Any or all of the above methods are used in an industry based on its requirements. For instance, the N2 chart may be used where interfaces between systems are important. Part of the design phase is to create structural and behavioral models of the system.
Once the requirements are understood, it is now the responsibility of a systems engineer to refine them, and to determine, along with other engineers, the best technology for a job. At this point starting with a trade study, systems engineering encourages the use of weighted choices to determine the best option. A decision matrix, or Pugh method, is one way (QFD is another) to make this choice while considering all criteria that are important. The trade study in turn informs the design, which again affects graphic representations of the system (without changing the requirements). In an SE process, this stage represents the iterative step that is carried out until a feasible solution is found. A decision matrix is often populated using techniques such as statistical analysis, reliability analysis, system dynamics (feedback control), and optimization methods.
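The weighted-choice calculation at the heart of a decision matrix can be sketched as follows; the criteria, weights, and option scores are invented for illustration.

```python
# Hedged sketch of a weighted decision matrix for a trade study.
# Criteria, weights, and option scores are hypothetical.

weights = {"cost": 0.5, "reliability": 0.3, "schedule": 0.2}

options = {
    "design_a": {"cost": 7, "reliability": 9, "schedule": 6},
    "design_b": {"cost": 9, "reliability": 6, "schedule": 8},
}

def weighted_score(scores, weights):
    """Weighted sum of an option's criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

# Pick the option with the highest weighted score.
best = max(options, key=lambda name: weighted_score(options[name], weights))
# Here design_b scores higher (about 7.9 vs about 7.4).
```

A Pugh matrix proper compares options against a datum with +/0/− ratings rather than numeric scores, but the weighted-sum form above is the common variant used when criteria can be scored quantitatively.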
Other tools
Systems Modeling Language (SysML), a modeling language used for systems engineering applications, supports the specification, analysis, design, verification and validation of a broad range of complex systems.
Lifecycle Modeling Language (LML) is an open-standard modeling language designed for systems engineering that supports the full lifecycle: conceptual, utilization, support and retirement stages.
Related fields and sub-fields
Many related fields may be considered tightly coupled to systems engineering. The following areas have contributed to the development of systems engineering as a distinct entity:
Cognitive systems engineering
Cognitive systems engineering (CSE) is a specific approach to the description and analysis of human-machine systems or sociotechnical systems. The three main themes of CSE are how humans cope with complexity, how work is accomplished by the use of artifacts, and how human-machine systems and socio-technical systems can be described as joint cognitive systems. CSE has since its beginning become a recognized scientific discipline, sometimes also referred to as cognitive engineering. The concept of a Joint Cognitive System (JCS) has in particular become widely used as a way of understanding how complex socio-technical systems can be described with varying degrees of resolution. The more than 20 years of experience with CSE has been described extensively.
Configuration management
Like systems engineering, configuration management as practiced in the defense and aerospace industry is a broad systems-level practice. The field parallels the taskings of systems engineering; where systems engineering deals with requirements development, allocation to development items and verification, configuration management deals with requirements capture, traceability to the development item, and audit of development item to ensure that it has achieved the desired functionality that systems engineering and/or Test and Verification Engineering have proven out through objective testing.
Control engineering
Control engineering and its design and implementation of control systems, used extensively in nearly every industry, is a large sub-field of systems engineering. The cruise control on an automobile and the guidance system for a ballistic missile are two examples. Control systems theory is an active field of applied mathematics involving the investigation of solution spaces and the development of new methods for the analysis of the control process.
Industrial engineering
Industrial engineering is a branch of engineering that concerns the development, improvement, implementation and evaluation of integrated systems of people, money, knowledge, information, equipment, energy, material and process. Industrial engineering draws upon the principles and methods of engineering analysis and synthesis, as well as mathematical, physical and social sciences together with the principles and methods of engineering analysis and design to specify, predict, and evaluate results obtained from such systems.
Interface design
Interface design and its specification are concerned with assuring that the pieces of a system connect and inter-operate with other parts of the system and with external systems as necessary. Interface design also includes assuring that system interfaces can accept new features, covering mechanical, electrical and logical interfaces, including reserved wires, plug-space, command codes and bits in communication protocols. This is known as extensibility. Human-Computer Interaction (HCI) or Human-Machine Interface (HMI) is another aspect of interface design, and is a critical aspect of modern systems engineering. Systems engineering principles are applied in the design of communication protocols for local area networks and wide area networks.
Mechatronic engineering
Mechatronic engineering, like systems engineering, is a multidisciplinary field of engineering that uses dynamical systems modeling to express tangible constructs. In that regard it is almost indistinguishable from Systems Engineering, but what sets it apart is the focus on smaller details rather than larger generalizations and relationships. As such, both fields are distinguished by the scope of their projects rather than the methodology of their practice.
Operations research
Operations research supports systems engineering. The tools of operations research are used in systems analysis, decision making, and trade studies. Several schools teach SE courses within the operations research or industrial engineering department, highlighting the role systems engineering plays in complex projects. Operations research, briefly, is concerned with the optimization of a process under multiple constraints.
Performance engineering
Performance engineering is the discipline of ensuring a system meets customer expectations for performance throughout its life. Performance is usually defined as the speed with which a certain operation is executed, or the capability of executing a number of such operations in a unit of time. Performance may be degraded when operations queued for execution are throttled by limited system capacity. For example, the performance of a packet-switched network is characterized by the end-to-end packet transit delay, or the number of packets switched in an hour. The design of high-performance systems uses analytical or simulation modeling, whereas the delivery of high-performance implementation involves thorough performance testing. Performance engineering relies heavily on statistics, queueing theory and probability theory for its tools and processes.
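As an illustration of the queueing-theory tools mentioned above, the classic M/M/1 single-server queue gives closed-form estimates of utilization and mean transit delay. The arrival and service rates below are hypothetical.

```python
# Illustrative sketch: closed-form results for the M/M/1 single-server queue,
# a basic queueing-theory model of transit delay. Rates below are hypothetical.

def mm1_metrics(arrival_rate, service_rate):
    """Return (utilization, mean number in system, mean time in system)
    for an M/M/1 queue; requires arrival_rate < service_rate."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate >= service rate")
    rho = arrival_rate / service_rate      # utilization
    n = rho / (1 - rho)                    # mean number in system
    w = 1 / (service_rate - arrival_rate)  # mean time in system (e.g. seconds)
    return rho, n, w

# e.g. 80 packets/s arriving at a link that serves 100 packets/s:
rho, n, w = mm1_metrics(80.0, 100.0)  # rho = 0.8, n about 4.0, w = 0.05 s
```

Real networks rarely satisfy the M/M/1 assumptions exactly, which is why analytical models like this are typically complemented by simulation and performance testing.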
Program management and project management
Program management (or programme management) has many similarities with systems engineering, but has broader-based origins than the engineering ones of systems engineering. Project management is also closely related to both program management and systems engineering.
Proposal engineering
Proposal engineering is the application of scientific and mathematical principles to design, construct, and operate a cost-effective proposal development system. Basically, proposal engineering uses the "systems engineering process" to create a cost-effective proposal and increase the odds of a successful proposal.
Reliability engineering
Reliability engineering is the discipline of ensuring a system meets customer expectations for reliability throughout its life; i.e., it does not fail more frequently than expected. Next to prediction of failure, it is just as much about prevention of failure. Reliability engineering applies to all aspects of the system. It is closely associated with maintainability, availability (dependability or RAMS preferred by some), and logistics engineering. Reliability engineering is always a critical component of safety engineering, as in failure modes and effects analysis (FMEA) and hazard fault tree analysis, and of security engineering.
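A standard calculation in reliability engineering is the series/parallel roll-up: a series system works only if every component works, while a fully redundant parallel system fails only if every component fails. The component reliabilities below are hypothetical.

```python
from math import prod

# Hedged sketch of the classic series/parallel reliability roll-up;
# component reliabilities below are hypothetical.

def series_reliability(rs):
    """A series system works only if every component works."""
    return prod(rs)

def parallel_reliability(rs):
    """A fully redundant parallel system fails only if every component fails."""
    return 1 - prod(1 - r for r in rs)

# Two components, each 0.9 reliable:
series_reliability([0.9, 0.9])    # about 0.81
parallel_reliability([0.9, 0.9])  # about 0.99
```

The contrast between the two results illustrates why redundancy is a basic design lever for meeting reliability requirements.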
Risk Management
Risk management, the practice of assessing and dealing with risk, is one of the interdisciplinary parts of systems engineering. In development, acquisition, or operational activities, trading off risk against cost, schedule, and performance involves iterative configuration management, traceability, and evaluation across domains and the system lifecycle, which requires the interdisciplinary technical approach of systems engineering. Systems engineering has risk management define, tailor, implement, and monitor a structured process for risk management that is integrated into the overall effort.
Safety engineering
The techniques of safety engineering may be applied by non-specialist engineers in designing complex systems to minimize the probability of safety-critical failures. The "System Safety Engineering" function helps to identify "safety hazards" in emerging designs, and may assist with techniques to "mitigate" the effects of (potentially) hazardous conditions that cannot be designed out of systems.
Scheduling
Scheduling is one of the systems engineering support tools, both as a practice and as an item assessed among interdisciplinary concerns under configuration management. In particular, the direct relationship of resources, performance features, and risk to the duration of a task, and the dependency links among tasks and their impacts across the system lifecycle, are systems engineering concerns.
Security engineering
Security engineering can be viewed as an interdisciplinary field that integrates the community of practice for control systems design, reliability, safety, and systems engineering. It may involve such sub-specialties as the authentication of system users, system targets, and other entities: people, objects, and processes.
Software engineering
From its beginnings, software engineering has helped shape modern systems engineering practice. The techniques used in the handling of the complexities of large software-intensive systems have had a major effect on the shaping and reshaping of the tools, methods and processes of Systems Engineering.
See also
Arcadia (engineering)
Control engineering
Design review (U.S. government)
Engineering management
Engineering information management
Enterprise systems engineering
Industrial engineering
Interdisciplinarity
List of production topics
List of requirements engineering tools
List of systems engineers
List of types of systems engineering
Management cybernetics
Model-based systems engineering
Operations management
Structured systems analysis and design method
System of systems engineering (SoSE)
System accident
Systems architecture
Systems development life cycle
Systems thinking (e.g. theory of constraints, value-stream mapping)
System information modelling
References
Further reading
Blockley, D. Godfrey, P. Doing it Differently: Systems for Rethinking Infrastructure, Second Edition, ICE Publications, London, 2017.
Buede, D.M., Miller, W.D. The Engineering Design of Systems: Models and Methods, Third Edition, John Wiley and Sons, 2016.
Chestnut, H., Systems Engineering Methods. Wiley, 1967.
Gianni, D. et al. (eds.), Modeling and Simulation-Based Systems Engineering Handbook, CRC Press, 2014 at CRC
Goode, H.H., Robert E. Machol System Engineering: An Introduction to the Design of Large-scale Systems, McGraw-Hill, 1957.
Hitchins, D. (1997) World Class Systems Engineering at hitchins.net.
Lienig, J., Bruemmer, H., Fundamentals of Electronic Systems Design, Springer, 2017.
Malakooti, B. (2013). Operations and Production Systems with Multiple Objectives. John Wiley & Sons.
MITRE, The MITRE Systems Engineering Guide(pdf)
NASA (2007) Systems Engineering Handbook, NASA/SP-2007-6105 Rev1, December 2007.
NASA (2013) NASA Systems Engineering Processes and Requirements NPR 7123.1B, April 2013 NASA Procedural Requirements
Oliver, D.W., et al. Engineering Complex Systems with Models and Objects. McGraw-Hill, 1997.
Ramo, S., St.Clair, R.K. The Systems Approach: Fresh Solutions to Complex Problems Through Combining Science and Practical Common Sense, Anaheim, CA: KNI, Inc, 1998.
Sage, A.P., Systems Engineering. Wiley IEEE, 1992.
Sage, A.P., Olson, S.R., Modeling and Simulation in Systems Engineering, 2001.
SEBOK.org, Systems Engineering Body of Knowledge (SEBoK)
Shermon, D. Systems Cost Engineering, Gower publishing, 2009
Shishko, R., et al. NASA Systems Engineering Handbook. NASA Center for AeroSpace Information, 2005.
Stevens, R., et al. Systems Engineering: Coping with Complexity. Prentice Hall, 1998.
US Air Force, SMC Systems Engineering Primer & Handbook, 2004
US DoD Systems Management College (2001) Systems Engineering Fundamentals.'' Defense Acquisition University Press, 2001
US DoD Guide for Integrating Systems Engineering into DoD Acquisition Contracts, 2006
US DoD MIL-STD-499 System Engineering Management
External links
ICSEng homepage.
INCOSE homepage.
INCOSE UK homepage
PPI SE Goldmine homepage
Systems Engineering Body of Knowledge
Systems Engineering Tools – list of systems engineering tools
AcqNotes DoD Systems Engineering Overview
NDIA Systems Engineering Division
Engineering disciplines |
1696362 | https://en.wikipedia.org/wiki/Optical%20burst%20switching | Optical burst switching | Optical burst switching (OBS) is an optical networking technique that allows dynamic sub-wavelength switching of data. OBS is viewed as a compromise between the yet unfeasible full optical packet switching (OPS) and the mostly static optical circuit switching (OCS). It differs from these paradigms because OBS control information is sent separately in a reserved optical channel and in advance of the data payload. These control signals can then be processed electronically to allow the timely setup of an optical light path to transport the soon-to-arrive payload. This is known as delayed reservation.
Purpose
The purpose of optical burst switching (OBS) is to dynamically provision sub-wavelength granularity by optimally combining electronics and optics. OBS considers sets of packets with similar properties, called bursts; its granularity is therefore finer than that of optical circuit switching (OCS). OBS provides more bandwidth flexibility than wavelength routing but requires faster switching and control technology. OBS can be used to realize dynamic end-to-end all-optical communications.
Method
In OBS, packets are aggregated into data bursts at the edge of the network to form the data payload. Various assembly schemes based on time and/or size exist (see burst switching), and several edge router architectures have been proposed. OBS features a separation between the control plane and the data plane: a control signal (also termed a burst header or control packet) is associated with each data burst. The control signal is transmitted out of band on a separate wavelength termed the control channel and is processed electronically at each OBS router, whereas the data burst is transmitted in all-optical form from one end of the network to the other. The data burst can cut through intermediate nodes, and data buffers such as fiber delay lines may be used. In OBS, data is transmitted with full transparency to the intermediate nodes in the network. After a burst has passed a router, the router can accept new reservation requests.
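The time- and size-based assembly schemes described above can be sketched as follows. The class name, thresholds, and interface are illustrative assumptions, not part of any OBS standard:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BurstAssembler:
    """Hybrid time/size-based burst assembly at an OBS edge router (illustrative).

    A burst is emitted when either the accumulated payload reaches max_bytes
    or the oldest queued packet has waited at least max_delay seconds,
    whichever comes first.
    """
    max_bytes: int = 9000
    max_delay: float = 0.005                            # seconds
    _queue: List[int] = field(default_factory=list)     # queued packet sizes
    _first_arrival: Optional[float] = None

    def add_packet(self, size: int, now: float):
        """Queue one packet; return an assembled burst (list of sizes) or None."""
        if not self._queue:
            self._first_arrival = now
        self._queue.append(size)
        if sum(self._queue) >= self.max_bytes or (now - self._first_arrival) >= self.max_delay:
            burst, self._queue = self._queue, []
            return burst
        return None

asm = BurstAssembler(max_bytes=4000, max_delay=0.005)
assert asm.add_packet(1500, now=0.000) is None   # below both thresholds
assert asm.add_packet(1500, now=0.001) is None
burst = asm.add_packet(1500, now=0.002)          # size threshold crossed
assert burst == [1500, 1500, 1500]
```

Real edge routers would also attach the assembled burst's length and offset time to the outgoing control packet; that bookkeeping is omitted here.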
Advantages of OBS over OPS and OCS
Advantages over OCS
More efficient bandwidth utilization – In an OCS system, a lightpath must be set up from source to destination in the optical network. If the data transmission duration is short relative to the set up time, bandwidth may not be efficiently utilized in the OCS system. In comparison, OBS does not require end-to-end lightpath set up, and therefore may offer more efficient bandwidth utilization compared to an OCS system. This is similar to the advantage offered by packet switching over circuit switching.
Advantages over OPS
Remove throughput limitation – Optical buffer technology has not matured enough to enable low cost manufacturing and widespread use in optical networks. Core optical network nodes are likely to either be unbuffered or have limited buffers. In such networks, delayed reservation schemes such as Just Enough Time (JET) are combined with electronic buffering at edge routers to reserve bandwidth. Using JET can create a throughput limitation in an edge router in an OPS system. This limitation can be overcome by using OBS.
Furthermore, there must be a guardband in the data channel between packets or bursts, so that core optical router data planes have adequate time to switch packets or bursts. If the guardband is large relative to the average packet or burst size, then it can limit data channel throughput. Aggregating packets into bursts can reduce guardband impact on data channel throughput.
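The guardband effect can be made concrete with a back-of-the-envelope calculation; the line rate, guardband duration, and packet size below are illustrative assumptions, not measurements:

```python
def channel_efficiency(payload_bytes: float, rate_bps: float, guard_s: float) -> float:
    """Fraction of data-channel time spent carrying payload rather than guardband."""
    tx_time = payload_bytes * 8 / rate_bps
    return tx_time / (tx_time + guard_s)

RATE = 10e9      # assumed 10 Gbit/s data channel
GUARD = 1e-6     # assumed 1 microsecond switching guardband

# A lone 1500-byte packet spends nearly half of its slot in the guardband...
per_packet = channel_efficiency(1500, RATE, GUARD)
# ...while a burst of 100 such packets amortizes the same guardband.
per_burst = channel_efficiency(100 * 1500, RATE, GUARD)

assert per_packet < 0.6 < 0.99 < per_burst
```

With these numbers, per-packet switching yields roughly 55% channel efficiency while a 100-packet burst exceeds 99%, which is the aggregation benefit the paragraph above describes.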
Reduce processing requirements and core network energy consumption – A core optical router in an OBS network may face reduced control-plane requirements compared to one in an OPS network: a core optical router in an OPS network must perform processing operations for every arriving packet, whereas in an OBS network the router performs processing operations once per arriving burst, which contains several packets. Therefore, fewer processing operations per packet are required in an OBS network core optical router than in an OPS network. Consequently, the energy consumption, and potentially the carbon footprint, of a core optical router in an OPS network is likely to be larger than that of an OBS network router for the same amount of data.
This advantage may be offset by the fact that an OBS network edge router is likely to be more complex than an OPS network edge router, due to the possible need for a burst assembly/aggregation and a sorting stage. Consequently, energy consumption at the edge of an OBS network may be higher than in an OPS network.
See also
Burst switching
Optical mesh network
References
Further reading
Baldine I, et al., 2003, "Just-in-Time Optical Burst Switching Implementation in the ATDnet All-Optical Networking Testbed" Proceedings of the Global Telecommunications Conference (GLOBECOM 2003), San Francisco, USA.
Chen, Yang; Qiao, Chunming and Yu, Xiang; "Optical Burst Switching (OBS): A New Area in Optical Networking Research", IEEE Network Magazine, Vol. 18 (3), pp. 16–23, May–June 2004.
Gauger, C.; 2003, "Projects and Test Beds Related to OBS in Europe", Proceedings of the 2nd International Workshop on Optical Burst Switching, IEEE Globecom, San Francisco, USA.
de Vega, Miguel; "Modeling Future All-Optical Networks without Buffering Capabilities", PhD Thesis, Université libre de Bruxelles, Brussels, Belgium, 2008.
Jue, Jason P. and Vokkarane, Vinod M.; Optical Burst Switched Networks, Springer, Optical Networks Series, 2005.
Garcia, Nuno; "Architectures and Algorithms for IPv4/IPv6-Compliant Optical Burst Switching Networks", PhD Thesis, University of Beira Interior, Covilhã, Portugal, 2008.
M. Maier, "Optical Switching Networks", Cambridge University Press, 2008.
R. Rajaduray, S. Ovadia, D. J. Blumenthal, "Analysis of an edge router for span-constrained optical burst switched (OBS) networks", IEEE Journal of Lightwave Technology, November 2004, pp. 2693–2705
R. Rajaduray, D. J. Blumenthal, S. Ovadia, “Impact of Burst Assembly Parameters on Edge Router Latency in an Optical Burst Switching Network”, Paper MF3 LEOS 2003 Annual Meeting, Oct 26 – 30, Tucson, Arizona
R. Rajaduray, "Unbuffered and Limited-Buffer All-Optical Networks", PhD dissertation, University of California Santa Barbara, December 2005
S. Ovadia, C. Maciocco, M. Paniccia, R. Rajaduray “Photonic Burst Switching (PBS) Architecture for Hop and Span-Constrained Optical Networks”, S24-S32 IEEE Comms Magazine Nov 2003
Fiber-optic communications
Network protocols |
54594603 | https://en.wikipedia.org/wiki/Angelfish%20software | Angelfish software | Angelfish Software is an on-premises, self-hosted web analytics application which allows organizations to monitor how users interact with websites and web-based applications. Angelfish can use web server logs or JavaScript page tags to create reports.
First released in 2013, Angelfish Software was created in response to Google's cancellation of Urchin and the lack of options that existed for on-premises web analytics software.
Angelfish is a popular solution for tracking Intranet and SharePoint environments, and has significant interest from organizations that are required to protect website visitor data due to regulations, or cannot use Google Analytics due to data privacy concerns.
See also
Web analytics
Information Privacy
List of web analytics software
References
External links
http://www.analyticsmarket.com/blog/website-analytics-software-review, Web Analytics Software Review.
Business software
Web analytics
Web applications |
1400884 | https://en.wikipedia.org/wiki/Charles%20E.%20Leiserson | Charles E. Leiserson | Charles Eric Leiserson is a computer scientist, specializing in the theory of parallel computing and distributed computing, and particularly practical applications thereof. As part of this effort, he developed the Cilk multithreaded language. He invented the fat-tree interconnection network, a hardware-universal interconnection network used in many supercomputers, including the Connection Machine CM5, for which he was network architect. He helped pioneer the development of VLSI theory, including the retiming method of digital optimization with James B. Saxe and systolic arrays with H. T. Kung. He conceived of the notion of cache-oblivious algorithms, which are algorithms that have no tuning parameters for cache size or cache-line length, but nevertheless use cache near-optimally. He developed the Cilk language for multithreaded programming, which uses a provably good work-stealing algorithm for scheduling. Leiserson coauthored the standard algorithms textbook Introduction to Algorithms together with Thomas H. Cormen, Ronald L. Rivest, and Clifford Stein.
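The cache-oblivious idea mentioned above can be illustrated with the classic divide-and-conquer matrix transpose: no cache size or line length appears anywhere in the code, yet the recursion eventually produces subproblems that fit in any cache level. The sketch below is a textbook illustration of the technique, not Leiserson's own code:

```python
def co_transpose(A, B, ri=0, ci=0, n=None, m=None):
    """Cache-obliviously write the transpose of A (n x m) into B (m x n).

    The larger dimension is split in half recursively; the base case is
    small enough to fit in any cache, with no tuning parameter.
    """
    if n is None:
        n, m = len(A), len(A[0])
    if n <= 2 and m <= 2:                       # tiny base case
        for i in range(n):
            for j in range(m):
                B[ci + j][ri + i] = A[ri + i][ci + j]
    elif n >= m:                                # split rows
        h = n // 2
        co_transpose(A, B, ri, ci, h, m)
        co_transpose(A, B, ri + h, ci, n - h, m)
    else:                                       # split columns
        h = m // 2
        co_transpose(A, B, ri, ci, n, h)
        co_transpose(A, B, ri, ci + h, n, m - h)

A = [[1, 2, 3], [4, 5, 6]]
B = [[0] * 2 for _ in range(3)]
co_transpose(A, B)
assert B == [[1, 4], [2, 5], [3, 6]]
```

In a language with real two-dimensional memory layout (C, Cilk), each recursive half touches contiguous cache lines, which is what gives the near-optimal cache behavior; Python lists only demonstrate the recursion structure.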
Leiserson received a B.S. degree in computer science and mathematics from Yale University in 1975 and a Ph.D. degree in computer science from Carnegie Mellon University in 1981, where his advisors were Jon Bentley and H. T. Kung.
He then joined the faculty of the Massachusetts Institute of Technology, where he is now a Professor. In addition, he is a principal in the Theory of Computation research group in the MIT Computer Science and Artificial Intelligence Laboratory, and he was formerly Director of Research and Director of System Architecture for Akamai Technologies. He was Founder and Chief Technology Officer of Cilk Arts, Inc., a start-up that developed Cilk technology for multicore computing applications. (Cilk Arts, Inc. was acquired by Intel in 2009.)
Leiserson's dissertation, Area-Efficient VLSI Computation, won the first ACM Doctoral Dissertation Award. In 1985, the National Science Foundation awarded him a Presidential Young Investigator Award. He is a Fellow of the Association for Computing Machinery (ACM), the American Association for the Advancement of Science (AAAS), the Institute of Electrical and Electronics Engineers (IEEE), and the Society for Industrial and Applied Mathematics (SIAM). He received the 2014 Taylor L. Booth Education Award from the IEEE Computer Society "for worldwide computer science education impact through writing a best-selling algorithms textbook, and developing courses on algorithms and parallel programming." He received the 2014 ACM-IEEE Computer Society Ken Kennedy Award for his "enduring influence on parallel computing systems and their adoption into mainstream use through scholarly research and development." He was also cited for "distinguished mentoring of computer science leaders and students." He received the 2013 ACM Paris Kanellakis Theory and Practice Award for "contributions to robust parallel and distributed computing."
See also
Thinking Machines Corporation
Thomas H. Cormen
Ronald L. Rivest
Clifford Stein
References
Further reading
External links
Home page
Brief Biography
Charles Leiserson Playlist Appearance on WMBR's Dinnertime Sampler radio show October 27, 2004
American computer scientists
Theoretical computer scientists
Massachusetts Institute of Technology faculty
Fellows of the Association for Computing Machinery
Living people
Researchers in distributed computing
Yale University alumni
Carnegie Mellon University alumni
1953 births
American chief technology officers |
1571728 | https://en.wikipedia.org/wiki/Dardanians%20%28Trojan%29 | Dardanians (Trojan) | The Dardanoi (; its anglicized modern terms being Dardanians or Dardans) in classical writings were a people closely related to the Trojans, an ancient people of the Troad, located in northwestern Anatolia. The Dardanoi derived their name from Dardanus, the mythical founder of Dardania, an ancient city in the Troad. Rule of the Troad was divided between Dardania and Troy. Homer makes a clear distinction between the Trojans and the Dardanoi. However, "Dardanoi"/"Dardanian" later became essentially metonymous–– or at least is commonly perceived to be so–– with "Trojan", especially in the works of Vergil such as the Aeneid.
Dardanoi and Trojans
The Royal House of Troy was also divided into two branches, that of the Dardanoi and that of the Trojans (their city being called Troy, or sometimes Ilion/Ilium). The House of the Dardanoi (its members being the Dardanids, ; ) was older than the House of Troy, but Troy later became more powerful. Aeneas is referred to in Virgil's Aeneid interchangeably as a Dardanian or as a Trojan, but strictly speaking, Aeneas was of the branch of the Dardanoi. Many rulers of Rome, for example Julius Caesar and Augustus, claimed descent from Aeneas and the Houses of Troy and Dardania. Homer adds the epithet Dardanid (Δαρδανίδης) to Priam and to other prominent characters denoting that they are members of the house of the Dardanoi.
Homer writes;
The Dardanians were led by brave Aeneas, whom the fair Aphrodite, a goddess bedded with a mortal man, bore to Anchises in the mountains of Ida. He was not alone, for with him were the two sons of Antenor, Archilochus and Acamas, both skilled in all the arts of war.
The strait of the Dardanelles was named after the Dardanoi, who lived in the region.
Origins
The ethnic affinities of the Dardanoi, and of the Trojans, and the nature of their language remain a mystery. The remains of their material culture reveal close ties with Luwian, other Anatolian groups, and Thracians. The Dardanoi were linked by ancient Greek and Roman writers with the Illyrian people of the same name who lived in the Balkans (i.e. the Dardani), a notion supported by a number of parallel ethnic names found both in the Balkans and Anatolia that are considered too great to be a mere coincidence (e.g. Eneti and Enetoi, Bryges and Phryges, Moesians and Mysians). Archaeological finds from the Troad dating back to the Chalcolithic period show striking affinity to archaeological finds known from the same era in Muntenia and Moldavia, and there are other traces which suggest close ties between the Troad and the Carpatho-Balkan region of Europe. Archaeologists in fact have stated that the styles of certain ceramic objects and bone figurines show that these objects were brought into the Troad by Carpatho-Danubian colonists; for example, certain ceramic objects have been shown to have Cucuteni origins.
Variations of the name
Words used by Homer are:
Dardaniōnes, Δαρδανίωνες denotes Trojans in general
Dardanioi, Δαρδάνιοι, same as above
Dardanidai, Δαρδανίδαι, descendants of Dardanus; in Latin sometimes also used for Trojan women in the Aeneid
Dardanoi, Δάρδανοι, descendants of Dardanus, but also Trojan descendants of Assarakos
See also
Iliad
References
External links
The Iliad, translated in English
The Iliad, the original ancient Greek text
Ancient peoples of Anatolia
Trojans |
55302860 | https://en.wikipedia.org/wiki/Flowmon%20Networks | Flowmon Networks | Flowmon Networks is a privately held technology company which develops network performance monitoring and network security products utilizing information from traffic flow. Its Flowmon product series consists of network monitoring probes, collectors for flow data (NetFlow, IPFIX and other standards) analysis and software modules which extend probes and collectors by analytical features for network behavior anomaly detection, network awareness application performance management, DDoS detection and mitigation and traffic recording.
History
The origins of the company date back to 2002, when a group of scientists under the CESNET association began work on programmable hardware in the Liberouter project. While participating in the development project for GEANT2, the Liberouter team developed a prototype network monitoring probe called FlowMon. It became the basis of the Invea-Tech company, which was founded in 2007 as a spin-off by Masaryk University, Brno University of Technology, and the UNIS company on the basis of a technology transfer from the CESNET association. With this prototype network monitoring solution, the company was incubated as a start-up in the South Moravian Innovation Centre's technology incubator programme.
In May 2013 it was recognized by Gartner as the only European vendor on the Network Behavior Analysis (NBA) Market. To strengthen its position on the NBA market, Invea-Tech acquired another Czech company called AdvaICT specialized in network behavior analysis.
In 2014 the company was recognized as one of the fastest growing technology companies in CE region by Deloitte. In 2015 Invea-Tech was split into Flowmon Networks and Netcope Technologies.
In 2016, Flowmon Networks was recognized by Gartner in Magic Quadrant for Network Performance Monitoring and Diagnostics (NPMD).
In January 2016 the company purchased FerretApps company to supplement its application monitoring solution.
In 2020, Flowmon Networks was acquired by Kemp Technologies to enable "early detection of advanced threats and network anomalies along with complete active feedback loops for remediation."
Research Activities
Since its beginning, the company has participated in international research and development projects focused on developing new measurement, analysis, and data protection techniques across networks, such as DEMONS and ACEMIND. Flowmon also cooperates on local R&D projects with CESNET, Liberouter, and Masaryk University, endorsed by the funds of the Technology Agency of the Czech Republic.
References
Technology companies established in 2007
Information technology companies of the Czech Republic |
54206591 | https://en.wikipedia.org/wiki/Fediverse | Fediverse | The Fediverse (a portmanteau of "federation" and "universe") is an ensemble of federated (i.e. interconnected) servers that are used for web publishing (i.e. social networking, microblogging, blogging, or websites) and file hosting, but which, while independently hosted, can communicate with each other.
On different servers (instances), users can create so-called identities. These identities are able to communicate over the boundaries of the instances because the software running on the servers supports one or more communication protocols which follow an open standard. As an identity on the fediverse, users are able to post text and other media, or to follow posts by other identities. In some cases, users can even show or share data (video, audio, text, and other files) publicly or to a selected group of identities and allow other identities to edit other users' data (such as a calendar or an address book).
History
In 2008, the social network identi.ca was founded by Evan Prodromou. He published the software GNU social under a free license (the GNU Affero General Public License, AGPL). It defined the OStatus protocol. Besides the identi.ca server, there were only a few other instances, run by individuals for their own use. This changed in 2011–12, when identi.ca switched to other software called pump.io and several new GNU social instances were created. At the same time as GNU social, other projects like Friendica, Hubzilla, Mastodon, and Pleroma integrated the OStatus protocol, thus extending the fediverse (though Mastodon and Pleroma have since dropped OStatus in favor of ActivityPub). In the meantime, other communication protocols evolved, which were integrated to varying degrees into the platforms.
In January 2018, the W3C presented the ActivityPub protocol, aiming to improve the interoperability between the platforms. , this protocol was supported by thirteen platforms (see table below), and was the dominant protocol used in the fediverse.
Communication protocols used in the fediverse
These communication protocols, which implement open standards, are used in the fediverse:
ActivityPub
Diaspora Network
OStatus
Zot & Zot/6
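Of these, ActivityPub is the dominant protocol; a federated post is essentially a JSON activity delivered to followers' inboxes. The snippet below builds a minimal `Create` activity wrapping a public `Note`, which are real types from the ActivityStreams vocabulary; the actor URL and helper function are invented for illustration:

```python
import json

ACTOR = "https://example.social/users/alice"   # hypothetical actor URL

def make_note_activity(text: str, note_id: str) -> dict:
    """Build a minimal ActivityPub Create activity wrapping a public Note."""
    note = {
        "id": f"{ACTOR}/notes/{note_id}",
        "type": "Note",
        "attributedTo": ACTOR,
        "content": text,
        # The special Public collection makes the post visible to everyone.
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    }
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "id": f"{ACTOR}/activities/{note_id}",
        "type": "Create",
        "actor": ACTOR,
        "object": note,
        "to": note["to"],
    }

activity = make_note_activity("Hello, fediverse!", "1")
assert activity["type"] == "Create" and activity["object"]["type"] == "Note"
print(json.dumps(activity, indent=2))
```

A real server would additionally sign this payload and POST it to each follower's inbox endpoint; those delivery details are omitted here.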
Fediverse software platforms
The software spanning the fediverse are FOSS. Some of them vaguely resemble Twitter in style (for example, Mastodon, Misskey, GNU social, and Pleroma, which are similar in their microblogging function), while others include more communication and transaction options that are instead comparable to Google+ or Facebook (such as is the case with Friendica and Hubzilla).
The following software platforms span the fediverse by using the listed communication protocols:
User statistics
A number of developers publish live statistics about the fediverse on monitoring sites like the-federation.info. The statistics on these sites are an indication of usage levels, not a complete record, as they can only aggregate data from instances that use the NodeInfo protocol to publish usage statistics. There is no guarantee that all instances are known to these sites, and some instances may disable NodeInfo, or use software that hasn't implemented it. Some of these sites include data from any federated software that publishes it using NodeInfo, not just fediverse software.
See also
Comparison of software and protocols for distributed social networking
References
Further reading
2019. The disinformation landscape and the lockdown of social platforms
2019. Challenges in the Decentralised Web: The Mastodon Case
2018. Recommending Users: Whom to Follow on Federated Social Networks
2018. Multi-task dialog act and sentiment recognition on Mastodon
2015. FCJ-190 Building a Better Twitter: A Study of the Twitter Alternatives GNU social, Quitter, rstat.us, and Twister
2015. The Case for Alternative Social Media
Microblogging
Free software
Social networks
2008 introductions |
51344878 | https://en.wikipedia.org/wiki/Thomas%20G.%20Dietterich | Thomas G. Dietterich | Thomas G. Dietterich is emeritus professor of computer science at Oregon State University. He is one of the pioneers of the field of machine learning. He served as executive editor of Machine Learning (journal) (1992–98) and helped co-found the Journal of Machine Learning Research. In response to the media's attention on the dangers of artificial intelligence, Dietterich has been quoted for an academic perspective to a broad range of media outlets including National Public Radio, Business Insider, Microsoft Research, CNET, and The Wall Street Journal.
Among his research contributions were the invention of error-correcting output coding for multi-class classification, the formalization of the multiple-instance problem, the MAXQ framework for hierarchical reinforcement learning, and the development of methods for integrating non-parametric regression trees into probabilistic graphical models.
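Error-correcting output coding reduces a K-class problem to several binary problems: each class is assigned a binary codeword, one binary classifier is trained per bit, and a prediction is decoded to the class whose codeword is nearest in Hamming distance. A minimal decoding sketch, with a made-up codebook and classifier outputs:

```python
def hamming(a, b):
    """Number of positions at which two equal-length bit vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def ecoc_decode(bit_predictions, codebook):
    """Return the class whose codeword is nearest (in Hamming distance)
    to the vector of binary classifier outputs."""
    return min(codebook, key=lambda cls: hamming(bit_predictions, codebook[cls]))

# Illustrative 4-class codebook over 6 binary classifiers; its minimum
# pairwise Hamming distance is 4, so any single-bit error is corrected.
codebook = {
    "A": [0, 0, 0, 0, 0, 0],
    "B": [0, 1, 1, 1, 0, 1],
    "C": [1, 0, 1, 1, 1, 0],
    "D": [1, 1, 0, 0, 1, 1],
}

# One binary classifier misfires (last bit), yet decoding still recovers "C".
assert ecoc_decode([1, 0, 1, 1, 1, 1], codebook) == "C"
```

The training side, fitting one binary classifier per codeword column, is independent of this decoding step and is omitted.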
Biography and education
Thomas Dietterich was born in South Weymouth, Massachusetts, in 1954. His family later moved to New Jersey and then again to Illinois, where Tom graduated from Naperville Central High School. Dietterich then entered Oberlin College and began his undergraduate studies. In 1977, Dietterich graduated from Oberlin with a degree in mathematics, focusing on probability and statistics.
Dietterich spent the following two years at the University of Illinois, Urbana-Champaign. After those two years, he began his doctoral studies in the Department of Computer Science at Stanford University. Dietterich received his Ph.D. in 1984 and moved to Corvallis, Oregon, where he was hired as an assistant professor in computer science. In 2013, he was named "Distinguished Professor". In 2016, Dietterich retired from his position at Oregon State University.
Throughout his career, Dietterich has worked to promote scientific publication and conference presentations. For many years, he was the editor of the MIT Press series on Adaptive Computation and Machine Learning. He also held the position of co-editor of the Morgan Claypool Synthesis Series on Artificial Intelligence and Machine Learning. He has organized several conferences and workshops including serving as Technical Program Co-Chair of the National Conference on Artificial Intelligence (AAAI-90), Technical Program Chair of the Neural Information Processing Systems (NIPS-2000) and General Chair of NIPS-2001. He served as founding President of the International Machine Learning Society and he has been a member of the IMLS Board since its founding. He is currently also a member of the Steering Committee of the Asian Conference on Machine Learning.
Research interests
Professor Dietterich is interested in all aspects of machine learning. There are three major strands of his research. First, he is interested in the fundamental questions of artificial intelligence and how machine learning can provide the basis for building integrated intelligent systems. Second, he is interested in ways that people and computers can collaborate to solve challenging problems. And third, he is interested in applying machine learning to problems in the ecological sciences and ecosystem management as part of the emerging field of computational sustainability.
Over his career, he has worked on a wide variety of problems ranging from drug design to user interfaces to computer security. His current focus is on ways that computer science methods can help advance ecological science and improve our management of the Earth's ecosystems. This passion has led to several projects including research in wildfire management, invasive vegetation and understanding the distribution and migration of birds. For example, Dietterich's research is helping scientists at the Cornell Lab of Ornithology answer questions like: How do birds decide to migrate north? How do they know when to land and stopover for a few days? How do they choose where to make a nest? Tens of thousands of volunteer birdwatchers (citizen scientists) all over the world contribute data to the study by submitting their bird sightings to the eBird website. The amount of data is overwhelming – in March 2012 they had over 3.1 million bird observations. Machine learning can uncover patterns in data to model the migration of species. But there are many other applications for the same techniques which will allow organizations to better manage our forests, oceans, and endangered species, as well as improve traffic flow, water systems, the electrical power grid, and more.
Dangers of AI: an academic perspective
Dietterich has argued that the most realistic risks about the dangers of artificial intelligence are basic mistakes, breakdowns and cyberattacks, and the fact that it simply may not always work, rather than machines that become super powerful or destroy the human race. Dietterich considers machines becoming self-aware and trying to exterminate humans to be more science fiction than scientific fact. But to the extent that computer systems are given increasingly dangerous tasks, and asked to learn from and interpret their experiences, he said they may simply make mistakes. Instead, much of the work done in the AI safety community does indeed focus around accidents and design flaws.
Positions held
2014–2016: President, Association for the Advancement of Artificial Intelligence (AAAI).
2013–present: Distinguished Professor of computer science, Oregon State University.
2011–present: Chief Scientist, BigML, Corvallis, OR.
2005–present: Director of Intelligent Systems Research, School of Electrical Engineering and Computer Science, Oregon State University.
2006–2008: Chief Scientist, Smart Desktop, Inc., Seattle, WA.
2004–2005: Chief Scientist, MyStrands, Inc., Corvallis, OR.
1995-2013: Professor of computer science, Oregon State University.
1998–1999: Visiting Senior Scientist, Institute for the Investigation of Artificial Intelligence, Barcelona, Spain. (Sabbatical leave position)
1988–1995: Associate Professor of computer science, Oregon State University.
1991–1993: Senior Scientist, Arris Pharmaceutical Corporation, S. San Francisco, CA.
1985–1988: Assistant Professor of computer science, Oregon State University.
1979–1984: Research Assistant, Heuristic Programming Project, Department of Computer Science, Stanford University.
1979 (Summer): Member of Technical Staff, Bell Telephone Laboratories, Naperville, Illinois. Computer-to-computer file transfer and micro-code distribution to remote switching systems.
1977 (Summer): Assistant to the Director of Planning and Research, Oberlin College, Oberlin, Ohio. Developed institutional planning database.
Awards and honors
Thomas Dietterich was honored by Oregon State University in the spring of 2013 as a "Distinguished Professor" for his work as a pioneer in the field of machine learning and for being one of the most highly cited scientists in his field. He has also earned exclusive "Fellow" status in the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science, and the Association for Computing Machinery. Over his career, he obtained more than $30 million in research grants, helped build a world-class research group at Oregon State, and created three software companies. He also co-founded two of the field's leading journals and was elected the first president of the International Machine Learning Society.
His other awards and honors include:
ACM Distinguished Lecturer, 2012–2013
Fellow, American Association for the Advancement of Science, 2007
Oregon State University, College of Engineering Collaboration Award, 2004
Winner, JAIR Award for Best Paper in Previous Five Years, 2003
Fellow, Association for Computing Machinery, elected 2003
Oregon State University, College of Engineering Research Award, 1998
Fellow, Association for the Advancement of Artificial Intelligence, elected 1994
NSF Presidential Young Investigator, 1987–92
Nominated for Carter Award for Graduate Teaching, 1987, 1988
IBM Graduate Fellow, 1982, 1983
Upsilon Pi Epsilon, 1996
Sigma Xi, 1979–present
State Farm Companies Foundation Fellowship, 1978
Member, Board of Trustees, Oberlin College, 1977–1980
Graduation with Honors in Mathematics, Oberlin College, 1977
Phi Beta Kappa, 1977
National Merit Scholar, 1973
Selected publications
Liping Liu, Thomas G. Dietterich, Nan Li, Zhi-Hua Zhou (2016). Transductive Optimization of Top k Precision. International Joint Conference on Artificial Intelligence (IJCAI-2016). pp. 1781–1787. New York, NY
Md. Amran Siddiqui, Alan Fern, Thomas G. Dietterich, Shubhomoy Das (2016). Finite Sample Complexity of Rare Pattern Anomaly Detection. Uncertainty in Artificial Intelligence (UAI-2016). New York, NY
Alkaee-Taleghan, M., Hall, K., Crowley, M., Albers, H. J., Dietterich, T. G. (2015). PAC Optimal MDP Planning for Ecosystem Management. Journal of Machine Learning Research, 16, 3877-3903
Thomas Dietterich, Eric Horvitz (2015). Viewpoint: Rise of Concerns about AI: Reflections and Directions. Communications of the ACM, 58(10) 38-40
Dietterich, T. G. (2009). Machine Learning in Ecosystem Informatics and Sustainability. Abstract of Invited Talk. Proceedings of the 2009 International Joint Conference on Artificial Intelligence (IJCAI-2009). Pasadena, CA
Dietterich, T. G., Bao, X., Keiser, V., Shen, J. (2010). Machine Learning Methods for High Level Cyber Situation Awareness. pp. 227–247 in Jajodia, S., Liu, P., Swarup, V., Wang, C. (Eds.) Cyber Situational Awareness, Springer.
Dietterich, T. G., Domingos, P., Getoor, L., Muggleton, S., Tadepalli, P. (2008). Structured machine learning: the next ten years. Machine Learning. 73(1) 3-23. DOI: 10.1007/s10994-008-5079-1
Dietterich, T. G., Bao, X. (2008). Integrating Multiple Learning Components Through Markov Logic. Twenty-Third Conference on Artificial Intelligence (AAAI-2008). 622-627
Dietterich, T. G. (2007). Machine Learning in Ecosystem Informatics. Proceedings of the Tenth International Conference on Discovery Science. Lecture Notes in Artificial Intelligence Volume 4755, Springer, Berlin
Dietterich, T. G. Learning and Reasoning. Technical report, School of Electrical Engineering and Computer Science, Oregon State University.
Dietterich, T. G. (2003). Machine Learning. In Nature Encyclopedia of Cognitive Science, London: Macmillan, 2003.
Dietterich, T. G. (2002). Machine Learning for Sequential Data: A Review. In T. Caelli (Ed.) Structural, Syntactic, and Statistical Pattern Recognition; Lecture Notes in Computer Science, Vol. 2396. (pp. 15–30). Springer-Verlag
Dietterich, T. G. (2002). Ensemble Learning. In The Handbook of Brain Theory and Neural Networks, Second edition, (M.A. Arbib, Ed.), Cambridge, MA: The MIT Press, 2002. 405-408.
Dietterich, T. G. (2000). The Divide-and-Conquer Manifesto. In Algorithmic Learning Theory 11th International Conference (ALT 2000) (pp. 13–26). New York: Springer-Verlag.
Dietterich, T. G. (2000). Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research, 13, 227-303.
Dietterich, T. G. (2000). Machine Learning. In David Hemmendinger, Anthony Ralston and Edwin Reilly (Eds.), The Encyclopedia of Computer Science, Fourth Edition, Thomson Computer Press. 1056-1059.
Dietterich, T. G. (2000). An Overview of MAXQ Hierarchical Reinforcement Learning. In B. Y. Choueiry and T. Walsh (Eds.) Proceedings of the Symposium on Abstraction, Reformulation and Approximation SARA 2000, Lecture Notes in Artificial Intelligence (pp. 26–44), New York: Springer Verlag.
References
External links
Thomas Dietterich's home page
Thomas Dietterich – Oregon State University faculty profile
Elon Musk & Thomas Dietterich on AI Safety
Thomas Dietterich Distinguished Professor Lecture
Research Bio
Curriculum Vita
Tom Dietterich Oral History Interview
Oregon State University faculty
Stanford University alumni
1954 births
Living people
Machine learning researchers
Fellows of the American Association for the Advancement of Science
Fellows of the Association for Computing Machinery
Fellows of the Association for the Advancement of Artificial Intelligence
Presidents of the Association for the Advancement of Artificial Intelligence |
42511426 | https://en.wikipedia.org/wiki/Arrington%20Jones | Arrington Jones | Arrington Jones III (born February 16, 1959) is a former American football running back who played one season with the San Francisco 49ers of the National Football League (NFL). He was drafted by the San Francisco 49ers in the fifth round of the 1981 NFL Draft. He played college football at Winston-Salem State University and attended John Marshall High School in Richmond, Virginia. Jones was also a member of the Washington Federals of the United States Football League (USFL). He was a member of the San Francisco 49ers team that won Super Bowl XVI and has been a coach on several collegiate teams.
College career
Jones played for the Winston-Salem State Rams of Winston-Salem State University from 1978 to 1981.
Professional career
Jones was selected by the San Francisco 49ers of the NFL with the 122nd pick in the 1981 NFL Draft. He played in one game for the 49ers during the 1981 season. He fumbled two kickoff returns in the season-opening game at Detroit, the site of that season's Super Bowl. He was released after that game and never played another NFL game. The 49ers went on to win Super Bowl XVI against the Cincinnati Bengals on January 24, 1982.
Jones was a member of the USFL's Washington Federals during the 1983 off-season. He was released by the Federals on February 22, 1983.
Coaching career
Jones served as special teams coordinator for the Virginia State Trojans of Virginia State University from 1986 to 1990. He was then assistant head coach and offensive coordinator from 1990 to 2000. The Trojans won the CIAA Championship in 1995.
He was offensive coordinator and recruiting coordinator for the Winston-Salem State Rams from 2001 to 2003. The Rams appeared in the CIAA Championship Game in 2001.
Jones served as head coach and offensive coordinator for the Virginia Union Panthers of Virginia Union University from 2004 to 2007, accruing a 21–21 record. He led the Panthers to a 9–1 regular season record in 2007, winning the CIAA Eastern Division Championship. The Panthers also earned a berth in the 2007 Pioneer Bowl, losing to the Tuskegee Golden Tigers. He was named the 2007 CIAA Coach of the Year. Jones resigned in March 2008, citing personal reasons.
Jones was the offensive coordinator and quarterbacks coach of the Delaware State Hornets of Delaware State University from 2011 to 2014.
Head coaching record
References
External links
Just Sports Stats
1959 births
Living people
American football running backs
Delaware State Hornets football coaches
San Francisco 49ers players
Virginia Union Panthers football coaches
Virginia State Trojans football coaches
Winston-Salem State Rams football coaches
Winston-Salem State Rams football players
Sportspeople from Richmond, Virginia
Coaches of American football from Virginia
Players of American football from Richmond, Virginia
African-American coaches of American football
African-American players of American football
21st-century African-American people
20th-century African-American sportspeople |
6728904 | https://en.wikipedia.org/wiki/Michael%20Fleischhacker | Michael Fleischhacker | Michael Fleischhacker (born May 26, 1969 in Friesach, Carinthia) is an Austrian journalist. He was director and editor-in-chief of Austrian daily Die Presse from 2004 until 2012.
Early life
Fleischhacker was raised in Sankt Lambrecht in Styria. He graduated from Admont Abbey High School and studied theology, philosophy, and German philology in Graz, but did not complete a degree.
Newspaper career
In April 1991 Fleischhacker joined the Kleine Zeitung as an editor for foreign politics. In 1994 he became an editor in the editor-in-chief's office, from 1995 until 1997 he was managing editor, and in 1998/99 he became deputy editor-in-chief. In addition, as publishing director he was responsible for strategic development and new media.
In 2000 he joined the Austrian daily Der Standard, where he was managing editor until 2001. From 2002 he worked at the Austrian daily Die Presse, becoming editor-in-chief and chief executive in 2004. In October 2012 he left Die Presse to become a freelance journalist, contributing regularly to various Austrian daily newspapers.
In July 2014 he joined the Austrian private network ServusTV as host of the talk show Talk im Hangar-7.
From 2014 he also worked for NZZ Österreich, where he was primarily responsible for establishing an online presence; he later became editor-in-chief and co-chief executive.
Personal life
Fleischhacker is the father of five children.
Publications
References
External links
World Diversity Leadership Summit Profile of Fleischhacker
Fleischhacker's blog on the Die Presse Website (German)
Fleischhacker Michael
Austrian newspaper editors
Living people
1969 births
Die Presse editors |
20297788 | https://en.wikipedia.org/wiki/Etherpad | Etherpad | Etherpad (previously known as EtherPad) is an open-source, web-based collaborative real-time editor, allowing authors to simultaneously edit a text document, and see all of the participants' edits in real-time, with the ability to display each author's text in their own color. There is also a chat box in the sidebar to allow meta communication.
First launched in November 2008, the software was acquired by Google in December 2009 and released as open source later that month. Further development is coordinated by the Etherpad Foundation.
Features and implementation
Anyone can create a new collaborative document, known as a "pad". Each pad has its own URL, and anyone who knows this URL can edit the pad and participate in the associated chats. Password-protected pads are also possible. Each participant is identified by a color and a name.
The software auto-saves the document at regular, short intervals, but participants can permanently save specific versions (checkpoints) at any time. Merging of changes is handled by operational transform. A "time slider" feature allows anyone to explore the history of the pad. The document can be downloaded in plain text, HTML, Open Document, Microsoft Word, or PDF format.
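The operational-transform step can be illustrated with the classic insert-against-insert case. The sketch below is a toy model of the idea, not Etherpad's actual Easysync implementation; the sample document text and the fixed site-priority tie-break are illustrative assumptions:

```python
def apply_insert(doc: str, pos: int, text: str) -> str:
    """Apply a single insert operation to a document string."""
    return doc[:pos] + text + doc[pos:]

def transform_pos(pos: int, other_pos: int, other_len: int,
                  other_wins_ties: bool) -> int:
    """Shift an insert position to account for a concurrent insert that
    landed at or before it (ties broken by a fixed site priority)."""
    if other_pos < pos or (other_pos == pos and other_wins_ties):
        return pos + other_len
    return pos

doc = "etherpad"
a_pos, a_text = 0, "A-"   # user A inserts at the front
b_pos, b_text = 8, "-B"   # user B inserts at the end, concurrently

# Apply the two concurrent ops in either order, transforming the
# later-arriving op against the one already applied:
one = apply_insert(apply_insert(doc, a_pos, a_text),
                   transform_pos(b_pos, a_pos, len(a_text), True), b_text)
two = apply_insert(apply_insert(doc, b_pos, b_text),
                   transform_pos(a_pos, b_pos, len(b_text), False), a_text)
print(one, two, one == two)  # both replicas converge to "A-etherpad-B"
```

Whichever order the concurrent inserts arrive in, transforming the later one against the earlier one makes both replicas converge to the same text, which is what lets every participant edit simultaneously.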
Automated markup of JavaScript code was made available shortly after the launch.
Etherpad itself is implemented in JavaScript, on top of the AppJet platform, with the real-time functionality achieved through Comet streaming.
At the time of its launch, Etherpad was the first web application of its kind to achieve true real-time performance, a feat previously only achieved by desktop applications such as SubEthaEdit (for Mac), Gobby, or MoonEdit (both cross-platform). Existing collaborative web editors at the time could only achieve near-real-time performance.
The client-side text editor in Etherpad and its Etherpad Lite fork is implemented using Appjet's in-browser text editor, written in JavaScript.
Launch
Etherpad was launched on November 19, 2008 by David Greenspan, Aaron Iba, and J.D. Zamfirescu (the latter two being former Google employees).
They were later joined by former Googler Daniel Clemens and designer David Cole. The original website was etherpad.com.
Etherpad was covered by Slashdot on November 21, 2008, resulting in server slowdown and downtime. This led the developers to temporarily revert the tool to closed beta, not allowing new pads to be created (but providing full and unrestricted access to the existing ones), while the server infrastructure was being improved.
After the rewrite of the software was completed, the new version went live on 29 January 2009, and on February 3, the site became again open to all.
Acquisition
When Google Wave was announced, the Etherpad team wrote on their blog comparing the two platforms and stating that the minimalist and targeted Etherpad interface could be an advantage in some use cases.
Still, on 4 December 2009, Etherpad announced on its blog that it had been acquired by Google for integration into Google Wave. Existing Etherpad users would receive invites for Google Wave.
On 31 March 2010, Etherpad announced that creation of new pads would be allowed until April 14 (pad creation was still allowed as of April 18, though) and existing pads could still be accessed and used until May 14. Options for download/export were available. The Etherpad service terminated on May 14.
Open-source
Google released the source code for Etherpad under the Apache License version 2.0 on December 17, 2009.
Subsequently, Google asked the Etherpad code maintainers to remove JSMin from its code tree due to a clause in its license stating, "The Software shall be used for Good, not Evil," which is not compatible with the open source licenses allowed on Google Code.
After the release of the software as open source, a number of people have set up Etherpad servers, as clones of the original website. Soon after, users and programmers of Etherpad, after an initial meeting, created the Etherpad Foundation to coordinate further development. Their website maintains a list of a growing number of sites that run the Etherpad software.
Etherpad Lite
Etherpad Lite is an almost complete rewrite of the original Etherpad software, based on different technical foundations and written by different authors.
While the original Etherpad is written in Java and Scala and has quite demanding system requirements, Etherpad Lite is written in server-side JavaScript using node.js. The original realtime synchronization library (called Easysync) remains the same.
Etherpad Lite has some distinctive features which are not available in the original version:
An HTTP API which allows the user to interact with the pad contents, and with user and group management
A jQuery plugin exists which helps embedding the collaborative editor in other sites
Clients for PHP, Python, Ruby, JavaScript, Java, Objective-C and Perl which interface with the API.
More than 50 plugins, among them email_notifications, invite_via_email, offline_edit, fileupload, tables or rtc for video calls based on WebRTC.
Etherpad Lite offers a number of export formats, including LaTeX but not Markdown, although an official plugin adds Markdown export. Etherpad Lite supports many natural languages; localization is achieved collaboratively through translatewiki.net.
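As a sketch of how the HTTP API mentioned above might be driven, the helper below builds calls in Etherpad Lite's `/api/1/<method>` style; the host, port, API key, and pad ID are illustrative assumptions (a real deployment reads the key from the server's APIKEY.txt):

```python
from urllib.parse import urlencode

def api_url(base: str, apikey: str, method: str, **params: str) -> str:
    """Build an Etherpad Lite style API call such as /api/1/createPad."""
    query = urlencode({"apikey": apikey, **params})
    return f"{base}/api/1/{method}?{query}"

# Illustrative values only:
base = "http://localhost:9001"
key = "s3cret"
print(api_url(base, key, "createPad", padID="meeting-notes"))
# -> http://localhost:9001/api/1/createPad?apikey=s3cret&padID=meeting-notes
print(api_url(base, key, "getText", padID="meeting-notes"))
# Actually fetching the result would be e.g.:
#   urllib.request.urlopen(url).read()
```

The language clients listed above (PHP, Python, Ruby, and so on) wrap exactly this kind of request/response exchange.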
See also
Collaborative real-time editor
Real-time text
Sync.in — an application based on Etherpad
References
Collaborative real-time editors
Google acquisitions
Java platform
Free software programmed in JavaScript |
48690401 | https://en.wikipedia.org/wiki/Solent%20Sea%20Steam%20Packet%20Company | Solent Sea Steam Packet Company | The Solent Sea Steam Packet Company, later the Solent Steam Packet Company, operated ferry services between Lymington and Yarmouth on the Isle of Wight between 1841 and 1884.
History
In early 1841, the company purchased Glasgow from the Lymington, Yarmouth, Cowes and Portsmouth Steam Packet Company; after refitting, the vessel was deployed on the service between Lymington and Yarmouth, operating three or four passages a day.
In March 1841 they entered into a contract with the Post Office for the conveyance of mail between Lymington and Yarmouth.
By 1842, the company had acquired another vessel, Solent, which was running from Lymington to Yarmouth, Cowes, Ryde and Portsmouth.
In 1858, Red Lion was added to the fleet to handle additional traffic brought by the railway. The company changed its name to the Solent Steam Packet Company in 1861.
A second Solent replaced the first on 3 November 1863. Mayflower, built in Newcastle, joined the fleet on 6 July 1866; she was tastefully fitted out and comfortable. As well as plying to Yarmouth, she made excursion runs to Bournemouth, but was disposed of after 1878.
On 1 July 1884, the London and South Western Railway bought out the Solent Steam Packet Company's fleet of two paddle steamers, Solent and Mayflower, four horse and cargo boats, and other boats and property, paying £2,750.
Ships
The vessels operated by the Solent Sea Steam Packet Company were:
References
Ferry transport in England
Isle of Wight
Defunct shipping companies of the United Kingdom
Lymington
British companies established in 1841
Transport companies established in 1841 |
61369833 | https://en.wikipedia.org/wiki/Isaiah%20Mobley | Isaiah Mobley | Eric Isaiah Mobley (born September 24, 1999) is an American college basketball player for the USC Trojans of the Pac-12 Conference. He attended Rancho Christian School in Temecula, California, where he was a five-star recruit and McDonald's All-American.
High school career
Mobley attended Rancho Christian School in Temecula, California. As a freshman, he won the CIF Southern Section (CIF-SS) Division 5A title, the school's first in any sport, and led his team to the CIF Division V Southern California Regional final. After averaging 16.2 points and 10.4 rebounds per game, Mobley shared CIF-SS Division 5A player of the year honors and made The Press-Enterprise All-Area second team. Over the summer, he played for the Compton Magic, one of the top travel teams in the country.
As a sophomore, Mobley was joined on the Rancho Christian basketball team by his younger brother Evan Mobley. He helped his team reach the CIF-SS Division 2A semifinal. In his junior season, Mobley averaged 19.9 points, 11.3 rebounds, and four assists per game, was named The Press-Enterprise player of the year, and made the USA Today All-USA California second team and MaxPreps Junior All-American honorable mention team. He guided Rancho Christian to a 29–5 record and a CIF-SS Open Division playoff appearance. As a senior, Mobley averaged 19.4 points, 13.6 rebounds, and 3.8 assists per game, helping his team to a 26–6 record. He earned honorable mention on the MaxPreps All-American and USA Today All-USA teams, while making the All-USA California first team. Mobley played in the 2019 McDonald's All-American Game. At one point in high school, Mobley was projected to be the second pick of the 2020 NBA Draft, just as his brother Evan was projected to be in the 2021 NBA Draft.
Recruiting
Mobley received offers from several NCAA Division I programs, including San Diego State and Nevada, before starting high school. On May 28, 2018, as a high school junior, he committed to play college basketball for USC. By the end of his high school career, Mobley was considered a consensus five-star recruit and the best 2019 class prospect in California.
College career
In his debut for USC, Mobley had 17 points and seven rebounds to lead the Trojans to a 77–48 victory over Florida A&M. He had 15 points and nine rebounds in a 101–79 loss to Marquette on November 29, 2019. Mobley made eight starts as a freshman and averaged 6.2 points and 5.3 rebounds per game. As a sophomore, he averaged 9.9 points and 7.3 rebounds per game. On April 17, 2021, he declared for the 2021 NBA draft while maintaining his college eligibility; he withdrew from the draft in July on the day of the deadline.
Career statistics
College
Season    Team   GP   GS   MPG    FG%    3P%    FT%    RPG   APG   SPG   BPG   PPG
2019–20   USC    31    8   20.3   .474   .286   .521   5.3   1.0   .6    .6    6.2
2020–21   USC    32   32   28.0   .472   .436   .545   7.3   1.6   .4    .9    9.9
Career           63   40   24.2   .473   .373   .536   6.3   1.3   .5    .8    8.0
Personal life
Mobley's father Eric played college basketball for Cal Poly Pomona and Portland and played professionally in China, Indonesia, Mexico, and Portugal. He later coached Amateur Athletic Union (AAU) basketball for 11 years. In 2018, he was hired as assistant basketball coach for USC. Mobley was a high school teammate of his younger brother Evan Mobley, who has been considered the best player in the 2020 class.
References
External links
USC Trojans bio
1999 births
Living people
American men's basketball players
African-American basketball players
Basketball players from San Diego
McDonald's High School All-Americans
Power forwards (basketball)
USC Trojans men's basketball players
21st-century African-American sportspeople |
701776 | https://en.wikipedia.org/wiki/Online%20casino | Online casino | Online casinos, also known as virtual casinos or Internet casinos, are online versions of traditional ("brick and mortar") casinos. Online casinos enable gamblers to play and wager on casino games through the Internet. It is a prolific form of online gambling.
Some online casinos claim higher payback percentages for slot machine games, and some publish payout percentage audits on their websites. Assuming that the online casino is using an appropriately programmed random number generator, table games like blackjack have an established house edge. The payout percentage for these games is established by the rules of the game.
Types
Online casinos are broadly divided into two categories based on the software they use: web-based and download-only casinos. Traditionally, online casinos would include only one of the two platforms. However, with advanced technological changes, an online casino can now accommodate both.
Web-based
Web-based online casinos (also known as no-download casinos) are websites where users may play casino games without downloading software to their local computer. A stable internet connection is required for a seamless gaming experience, as all graphics, sounds, and animations are loaded through the web. Most online casinos allow gameplay through an HTML interface; previously this was done through browser plugins such as Flash Player, Shockwave Player, or Java.
Download-based
Download-based online casinos require the download of the software client in order to play and wager on the casino games offered. The online casino software connects to the casino service provider and handles contact without browser support. Download-based online casinos generally run faster than web-based online casinos since the graphics and sound programs are cached by the software client, rather than having to be loaded from the Internet. On the other hand, the initial download and installation of the casino's software take time. As with any download from the Internet, the risk of the program containing malware exists, which makes it less popular among skeptical casino players.
Games
Virtual
Also known as software-based online casino games, the outcome of these games is determined using a pseudorandom number generator (PRNG). This software ensures that every deal of a card, the outcome of a dice throw, and the results produced by the spinning of a slot machine or roulette wheel are random and unpredictable. PRNGs use a set of mathematical instructions known as an algorithm to generate a long stream of numbers that gives the impression of true randomness. While this is not the same as true random number generation (computers are incapable of this without an external input source), it provides results that satisfy all but the most stringent requirements for true randomness.
When implemented correctly, a PRNG algorithm such as the Mersenne Twister will ensure that the games are both fair and unpredictable. However, usually, the player has to trust that the software has not been rigged to increase the house edge, as its inner workings are invisible to the user. Properly regulated online casinos are audited externally by independent regulators to ensure that their win percentages are in line with the stated odds, and this can provide a degree of assurance to the player that the games are fair, assuming the player trusts the regulator.
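To make the point concrete, the sketch below uses Python's `random` module, whose default generator is the Mersenne Twister mentioned above; it is an illustration of seeded-PRNG behavior, not any casino's actual implementation:

```python
import random

def spin_roulette(rng: random.Random) -> int:
    """One spin of a single-zero (European) wheel: a pocket from 0 to 36."""
    return rng.randrange(37)

# The same seed always reproduces the same sequence -- which is why a real
# implementation must seed from an external entropy source, and why
# regulators audit the generator rather than trusting it blindly.
rng_a = random.Random(2024)
rng_b = random.Random(2024)
spins_a = [spin_roulette(rng_a) for _ in range(5)]
spins_b = [spin_roulette(rng_b) for _ in range(5)]
print(spins_a == spins_b)                  # True: identical seeds, identical streams
print(all(0 <= s <= 36 for s in spins_a))  # True: every result is a valid pocket
```

The deterministic replay shown here is exactly what external auditors exploit: given the seed and the algorithm, every past outcome can be reproduced and checked against the stated odds.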
Live dealer
Live dealer casino games are the complete opposite of software-based games. Instead of depending on software to determine the outcome of the roulette spin, dice throw, or deal of a card, these games depend on real-time results. This is possible as the games are streamed in real-time from a land-based casino or a studio recreated to mimic a land-based casino.
To ensure that players have an easy time playing these games and that the land-based environment is fully recreated, software developers include innovative features such as chat. This enables a player to type a message to the dealer, who can respond verbally. The live chat feature can also be used to communicate with other players seated at the table, following a set of rules laid down by the casino.
The results of the physical transactions by the dealer, such as the outcome of the roulette wheel spin or the dealing of cards, are translated into data that can be utilized by the software by means of optical character recognition (OCR) technology. This enables the player to interact with the game in much the same way as they would with a virtual casino game, except for the fact that the results are determined by real-life actions rather than automated processes.
These games are a lot more expensive for websites to host than virtual games, as they involve a heavier investment in technology and staffing. A live casino studio typically employs one or more cameramen, several croupiers running the various games, an information technology manager to ensure that any technical hitches are dealt with swiftly, and a pit boss that acts as an adjudicator in case of disputes between players and croupiers.
In most cases, this requires at least a three-room setup, comprising a live studio, a server/software room, and an analyst’s room. The configuration of these rooms varies from casino to casino, with some having several gaming tables in one room, and some having a single table in each room.
The high running costs involved with operating live dealer games is the reason why online casinos only tend to offer a handful of the most popular games in this format, such as roulette, blackjack, sic bo, and baccarat. In comparison, the running costs associated with virtual games are very low, and it is not uncommon for online casinos to offer hundreds of different virtual casino games to players on their site.
Online casinos vary in their approach to the hosting of live games, with some providing live games via their own television channel, and others offering the games exclusively via their website. In the case of televised games, players can often use their mobile phone or television remote controls to place bets instead of doing so via a computer connected to the internet. The most common live dealer games offered at online casinos are baccarat, blackjack, and roulette.
Examples
A typical selection of gambling games offered at an online casino might include:
Baccarat
Blackjack
Craps
Roulette
Sic bo
Slot machines
Poker
Keno
Bingo
Bonuses
Many online casinos offer sign-up bonuses to new players making their first deposit, and often on subsequent play as well. These bonuses are a form of marketing that may incur a cost (potentially justifiable in order to attract a new player who may return and deposit many more times), since the casino is essentially giving away money in return for a commitment from the player to wager a certain minimum amount before they are allowed to withdraw. Since all casino games have a house edge, the wagering requirements ensure that the player cannot simply walk away with the casino's money immediately after claiming the bonus. These wagering requirements are commonly set to be sufficiently high that the player has a negative expectation, exactly as if they had deposited and not claimed a bonus.
Casinos may choose to restrict certain games from fulfilling the wagering requirements, either to restrict players from playing low-edge games or to restrict 'risk-free' play (betting for instance both red and black on roulette), thereby completing the wagering requirement with a guaranteed profit after the bonus is taken into account.
Welcome
The Welcome bonus is a deposit match bonus on the first deposit ever made in the casino or casino group. Welcome bonuses sometimes come in packages and may be given to match the first two or three deposits (First Deposit Welcome Bonus, Second Deposit Welcome Bonus, etc.). They can also be tied to specific games, such as the Welcome Slots Bonus or the Welcome Table Games Bonus. The casino may also offer Welcome bonuses for high rollers who make an initial deposit above the standard amount limit.
Referral
There are two types of Referral bonuses: one for the Referee and one for the Referrer. The Referee gets a bonus when he or she registers an account at the casino and mentions the Referrer. The Referrer gets a bonus when the Referee completes all the requirements, such as making the deposit and wagering it a certain number of times.
Cashback or insurance
Cashback or Insurance bonuses are offered as a percentage of all losses in the player's previous gaming activity. Typically, only deposits that were not matched with bonuses count towards this bonus. There are also websites that offer casino cashback payments based on losses incurred while playing with one or more online casinos; these deals are usually paid back to players by the casino portal that offers them.
No-deposit
The most popular form of bonus is one that can be claimed without the need to deposit any of the player's own money - known as a no deposit bonus. These bonuses are used as acquisition tools by casinos wishing to attract new players. No deposit bonuses don't always take the form of real cash, as exemplified below.
Non-cashable
Non-cashable bonuses may be called "sticky" or "phantom" bonuses. In both cases, the bonus forms part of the player's balance, but cannot be cashed out. The difference between cashable and phantom bonuses comes at cashout time: a phantom bonus is deducted from the player's balance at the moment a withdrawal request is placed. For example, suppose a player deposits $100, receives a $100 bonus, plays, and finishes the wagering requirement with a balance of $150. If the bonus is sticky, the player will be able to withdraw just $50; if the bonus is cashable, the whole balance is available for withdrawal.
Comp points
Comps are commonly available at land-based casinos, but also exist online. Comp points can usually be exchanged for cash, prizes, or other comps. The amount of cash given per wager is usually very small and often varies with game selection. A casino might offer three comp points for each $10 wagered on slots and one comp point for each $10 wagered on blackjack. The casino might give $1 for each 100 comp points. This example is equivalent to returning 0.3% of wagers on slots and 0.1% of wagers on blackjack. In addition, online casinos may offer comps such as free tickets to online tournaments, free slots online, tickets to other special events, extra bonuses, souvenirs, and payback.
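The comp-point arithmetic above reduces to a simple rate calculation; a minimal sketch using the figures from the example (real comp schedules vary by casino):

```python
def comp_return_rate(points_per_wager: float, wager: float,
                     dollars_per_redemption: float,
                     points_per_redemption: float) -> float:
    """Fraction of each wagered dollar returned to the player as comps."""
    points_per_dollar = points_per_wager / wager
    dollars_per_point = dollars_per_redemption / points_per_redemption
    return points_per_dollar * dollars_per_point

# 3 comp points per $10 on slots, redeemed at $1 per 100 points:
slots = comp_return_rate(3, 10, 1, 100)       # 0.3% of slot wagers
# 1 comp point per $10 on blackjack, same redemption rate:
blackjack = comp_return_rate(1, 10, 1, 100)   # 0.1% of blackjack wagers
print(f"{slots:.3%}  {blackjack:.3%}")
```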
Hunting
Bonus hunting (also known as bonus bagging or bonus whoring) is a type of advantage gambling where turning a profit from casino, sportsbook and poker room bonus situations is mathematically possible. For example, the house edge in blackjack is roughly 0.5%. If a player is offered a $100 cashable bonus requiring $5000 in wagering on blackjack with a house edge of 0.5%, the expected loss is $25. Therefore, the player has an expected gain of $75 after claiming the $100 bonus.
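The expected-value reasoning behind bonus hunting is easy to reproduce (a sketch using the figures from the text; the function name is illustrative):

```python
def expected_gain(bonus, wagering_requirement, house_edge):
    """Expected profit from a cashable bonus after completing the
    wagering requirement against a game with the given house edge."""
    expected_loss = wagering_requirement * house_edge
    return bonus - expected_loss

# A $100 cashable bonus requiring $5000 in wagering on blackjack
# with a 0.5% house edge: expected loss $25, expected gain $75.
print(expected_gain(100, 5000, 0.005))
```

The calculation assumes the player plays close to optimal strategy, since the house edge figure only holds under those conditions.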
Disputes
A large portion of online casino disputes relates to bonuses. Casinos may label players who win using bonuses as "bonus abusers." Both players and casinos may commit fraud. An example of player fraud is creating multiple accounts and using the accounts to claim a sign-up bonus several times. An example of casino fraud is changing terms of a bonus after a player has completed the wagering requirements, then requiring the player to meet the new bonus terms.
Legality
Online gambling legislation often has loopholes that result from the rapid development of the technology underpinning the industry. Some countries, including Belgium, Canada, Finland, and Sweden, have state gambling monopolies and do not grant licenses to foreign casino operators. Under their law, only operators licensed in these countries are considered legal. At the same time, these countries cannot prosecute foreign casino operators and can only block their sites. Players in these countries cannot be punished and can gamble at any site they can access.
Australia
The Australian Interactive Gambling Act 2001 (IGA) criminalizes the supply of online casino games by an operator anywhere in the world to persons located in Australia. It targets only operators of online gambling sites, resulting in the curious situation that it is not illegal for a player in Australia to access and gamble at an online casino. No operator has ever been charged with an offence under the IGA, and many online casinos accept Australian customers. In June 2016, the South Australian Government became the first state or territory in the world to introduce a 15% Place of Consumption Tax (POCT), modeled on the 2014 UK POCT.
Belgium
The Belgian Gaming Act went into effect in January 2011 and allows online gambling, but only under very strict conditions and surveillance.
Canada
The Canadian criminal code states that only provincial governments and charitable organizations licensed by provincial governments may operate a casino in Canada. It also prohibits residents from participating in any lottery scheme, game of chance, or gambling activity not licensed or operated by a provincial government. In 2010, the British Columbia Lottery Corporation launched Canada's first legal online casino, PlayNow, which is available to residents of British Columbia. The province of Quebec also operates a legal online casino through Loto-Québec.
Despite this legislation, the Kahnawake First Nation in Quebec has taken the position that it is a sovereign nation, able to enact its own gambling legislation, and has licensed and hosted nearly 350 gambling websites, without ever being prosecuted.
Germany
A state treaty on gambling between all 16 German states was ratified in 2008 and adopted in 2012. It regulates the restrictive handling of online gambling, including a basic state monopoly on public gambling with limited exceptions for a few commercial providers. Online gambling and other forms of public gambling that contravene these regulations are illegal in Germany. The state treaty, its implementation in contrast to the more lenient EU legislation, and possible further changes have been controversially discussed in public, in politics, and in the courts.
India
Online gambling is illegal in the state of Maharashtra under the "Bombay Wager Act". The most recent law to address online gambling is the Federal Information Technology Rules, under which Internet providers within India may block such illegal activities. Another act is the Public Gambling Act of 1867. States tend to operate on their own authority.
Online gambling legal issues in India are complicated in nature, as gambling in India is regulated by different states' laws while online gambling is a central subject. To ascertain the position of the Indian government, the Supreme Court of India sought the opinion of the central government in this regard, but it was declined. This has made playing online card games like rummy and poker legally risky.
United Kingdom
In the United Kingdom, the Gambling Act 2005 governs all matters of online gambling, permitting online betting sites to hold a Remote Gambling Licence in order to offer online betting to UK citizens. In 2014, the UK government put into law a further Gambling Act which, in addition to the original 2005 law, required offshore online gambling operators catering to UK players to obtain a UK licence. The new regulation required operators to pay a 15% Place of Consumption Tax (POCT), something that triggered an exodus of sorts of some operators from the UK. However, this exodus did not last long in most cases, as the benefits outweighed the stumbling blocks, the UK being a major market for online gambling.
In 2019 the United Kingdom Gambling Commission (UKGC) announced a series of new measures that apply to online and mobile casinos to reduce underage gambling with the aim of increasing fairness and transparency. The new measures will require casinos to have users verify their identity and age in order to gamble.
United States
In the United States, the legality of online gambling is debated and can vary from state to state. The Unlawful Internet Gambling Enforcement Act of 2006 (UIGEA) limits the ability of banks and payment processors to transact with internet gambling sites that are unlawful under any federal or state law. However, it does not define the legality or otherwise of an internet-based gambling site. It was commonly assumed that the Federal Wire Act prohibited all forms of online gambling. However, in December 2011, the United States Department of Justice released a statement clarifying that the Wire Act applied only to sports betting sites and not to online casinos, poker, or lottery sites, leaving the definition of legality up to individual states. Certain states such as Nevada, Delaware, and New Jersey have started the process of legalizing and regulating online gambling and it is expected that regulation will continue on a state by state basis.
See also
Problem gambling
eCOGRA
Online gambling
Notes
External links
Casinos
Casino |
38250946 | https://en.wikipedia.org/wiki/Quaid%20Software | Quaid Software | Quaid Software, Ltd. was a software publisher based in Toronto, Ontario. The company's best known product was Copywrite, which company president Robert McQuaid claimed was "for making legal backup copies of a protected program."
The company was the subject of a lawsuit claiming that the software was used for making illegal copies. The lawsuit was dismissed because Section 117 of the US Copyright Act specifically allows copying when:
the new copy is being made for archival (i.e., backup) purposes only;
you are the legal owner of the copy; and
any copy made for archival purposes is either destroyed, or transferred with the original copy, once the original copy is sold, given away, or otherwise transferred.
The court concluded that, because of federal copyright law, the provisions of the Louisiana License Act were preempted by the US Copyright Act and Vault's license agreement was unenforceable.
See also
Vault Corp. v. Quaid Software Ltd.
References
Companies based in Toronto
Software companies of Canada |
44948385 | https://en.wikipedia.org/wiki/Tavis%20Ormandy | Tavis Ormandy | Tavis Ormandy is an English computer security white hat hacker. He is currently employed by Google as part of their Project Zero team.
Notable discoveries
Ormandy is credited with discovering severe vulnerabilities in Libtiff, Sophos' antivirus software and Microsoft Windows.
With Natalie Silvanovich he discovered a severe vulnerability in FireEye products in 2015.
His findings with Sophos' products led him to write a 30-page paper entitled "Sophail: Applied attacks against Sophos Antivirus" in 2012, which concludes that the company was "working with good intentions" but is "ill-equipped to handle the output of one co-operative security researcher working in his spare time" and that its products shouldn't be used on high-value systems.
He also created an exploit in 2014 to demonstrate how a vulnerability in glibc known since 2005 could be used to gain root access on an affected machine running a 32-bit version of Fedora.
In 2016, he demonstrated multiple vulnerabilities in Trend Micro Antivirus on Windows related to the Password Manager, and vulnerabilities in Symantec security products.
In February 2017, he found and reported a critical bug in Cloudflare's infrastructure leaking user-sensitive data along with requests affecting millions of websites around the world which has been referred to as Cloudbleed (in reference to the Heartbleed bug that Google co-discovered).
References
External links
"Sophail: Applied attacks against Sophos Antivirus" - Ormandy's paper on insecurities in Sophos products
Google employees
Hackers
English computer programmers
Living people
Year of birth missing (living people) |
4778056 | https://en.wikipedia.org/wiki/BlueSoleil | BlueSoleil | BlueSoleil is a Bluetooth software/driver for Microsoft Windows, Linux and Windows CE. It supports Bluetooth chipsets from CSR, Broadcom, Marvell etc. Bluetooth dongles, PCs, Laptops, PDAs, PNDs and UMPCs are sometimes bundled with a version of this software albeit with limited functionality and OEM licensing. The software is rarely needed on modern computers, as well-functioning Bluetooth drivers for the most widely used Bluetooth chips have been available through Windows Update since Windows Vista.
BlueSoleil is developed by the Chinese firm IVT Corporation and the first version was released in 1999. In China, BlueSoleil is marketed as 1000Moons (千月).
Features
BlueSoleil features the following technologies:
Voice over IP
Advanced Audio Distribution Profile (A2DP) and Audio/Video Remote Control Profile (AVRCP)
Personal Area Network (PAN)
Basic Imaging Profile (BIP)
Cordless Telephony Profile (CTP)
Instant Messaging
Integration Phone tools as a profile
A demonstration version of BlueSoleil is available, restricting use after 2 MB of data transfer, approximately 1.5 minutes of high-quality audio or 2–4 hours of mouse use. The software must be purchased to enable unlimited use.
Interoperability
Over 30 million copies of BlueSoleil have been distributed. IVT has also established an interoperability testing centre, where it has built up a large library of Bluetooth products on the market in order to perform interoperability testing.
Various Bluetooth dongles are delivered with an obsolete or demonstration version of Bluesoleil. New versions are available as a standalone purchase from the vendor's website. Regardless of whether the bundled or the standalone version is purchased, the software enforces licensing restrictions which tie it to the address of a specific Bluetooth dongle.
BlueSoleil works with hardware from the main Bluetooth silicon vendors, such as Accelsemi, Atheros, CSR, Conwise, 3DSP, Broadcom, Intel, Marvell, NSC, RFMD, and SiRF, as well as baseband IP such as RivieraWaves BT IP.
If no Bluetooth dongle is attached to the PC, the Bluetooth logo is grey; it turns blue when a dongle is attached, and green when connected to another Bluetooth-enabled device.
References
External links
BlueSoleil website
IVT Corporation website
1999 software
Bluetooth software |
33313215 | https://en.wikipedia.org/wiki/Rusi%20Brij | Rusi Brij | Rusi Brij (17 November 1955 – 20 May 2009) was an Indian business executive and entrepreneur. He was Executive Director of Satyam Computers, and CEO and Vice-Chairman of Hexaware Technologies. In 2003, he was named by Bain & Co. as one of India's hottest dealmakers in software. Rusi was well known and respected for his amicable manner, mentoring skills, people-oriented approach, and a fine collection of vintage wines.
Rusi began his career with Living Media Limited, the leading magazine publishing house in India, where he was instrumental in setting up the first Market Research Unit in the publishing industry. He was the product manager for India Today, and also conceptualized and launched the Computers Today magazine. He entered the IT industry in 1986 when he joined Sonata Software, Bangalore.
He had over 25 years of varied experience in diverse portfolios ranging from M&A, international business development, and sales and marketing to project management and corporate planning. His experience also included a long tenure with Satyam Computer Services Ltd., culminating in his appointment as Executive Director and EVP. He was widely known as a marketing whiz at Satyam, where he once turned a $2 million deal into $20 million, the biggest deal of its kind in the Indian software industry at the time. He was instrumental in acquiring some of their largest customers, setting up many of their international operations, and serving as chairman of several of their joint ventures with Fortune 500 firms such as GE, IBM and Deutsche Sparkasse.
He left Satyam in 2001 to join the beleaguered Mumbai-based Hexaware Technologies. The merger of Aptech Inc., a software training firm, with Hexaware, a software development firm, had been slammed by Indian analysts. Morgan Stanley, while calling the venture "undoubtedly attractive", noted the uncertainty and volatility in the stock price as a major reason to be cautious. Prior to his joining Hexaware, the merger announcement had sent the stock tumbling from Rs. 450 per share to Rs. 90. Five of the company's six subsidiaries were making a loss, and revenues seemed stalled at $52 million. By 2007, revenues had crossed $250 million, with year-on-year growth consistently over 35%.
While at Hexaware, he guided the company into key markets including PeopleSoft, Travel & Transportation, and Financial Services. He principally focused on business strategy, M&A, leadership development, and investor relations. In 2004, he was elevated to Vice-Chairman of the board. Under his tenure, Hexaware grew from a small BPO outfit to one of the top 20 software companies in India. It was also ranked as the fastest growing mid-size company in India. By 2005, the company was listed at number 11 on NASSCOM's Top 20 Indian software firms.
In 2005, Hexaware faced problems after PeopleSoft, a key client, was acquired by Oracle. Hexaware was among the top five vendors of PeopleSoft and had set up operations under a Build-Operate-Transfer (BOT) agreement. After PeopleSoft's acquisition, however, this center was transferred to Oracle, along with a section of its employees. The loss of this center, which accounted for 14% of Hexaware's revenue, posed a major financial challenge. To offset the loss, Rusi spearheaded Hexaware's all-cash acquisition of FocusFrame for $34 million. By 2007, the company was growing at record highs again, with Q1 2007 revenue growing more than 50% over 2006.
Rusi left Hexaware in 2008 to focus on private venture capital activities. After his departure, former senior Wipro executive P. R. Chandrashekhar was appointed the new CEO. However, Rusi remained on the board as Vice-Chairman.
In 2002, Rusi co-founded DQ Entertainment, a Cannes and Palme d'Or award-winning animation company, with entrepreneur Tapaas Chakravarti. DQ is listed on both the London (FTSE) and Bombay (BSE) stock exchanges. Rusi was also a major investor in Karmic Lifesciences, an Indian clinical research organization.
Rusi Brij died on May 20, 2009 after a prolonged battle with multiple myeloma. He is survived by his wife and 2 children.
References
Indian chairpersons of corporations
1955 births
2009 deaths
Deaths from multiple myeloma
Deaths from cancer in India |
46934848 | https://en.wikipedia.org/wiki/Shaker%20Seed%20Company | Shaker Seed Company | The Shaker Seed Company was an American seed company that was owned and operated by the Shakers in the eighteenth and nineteenth centuries. In the latter part of the eighteenth century, many Shaker communities produced several vegetable seed varieties for sale. The company created innovations in the marketing of seeds, including distribution, packaging, and cataloging, all of which changed the horticultural business model forever.
The Mount Lebanon Shaker Village in New Lebanon, New York, was the most successful and the first to use the name Shaker Seed Company in advertising. As its stationery reveals, the company adopted the phrase "Experto crede" as its motto, noting its establishment in 1794.
Background
In August 1774, nine Shakers from England landed in New York City. In the fall of 1776 they settled in Watervliet, New York. Their religion soon spread throughout the Northeast, and around the year 1787 a headquarters was established at New Lebanon, New York. By the mid-19th century some eighteen major, long-term Shaker settlements had been established.
The Shakers were avid gardeners who saved the best seeds to cultivate the following year. Historian D.A. Buckingham states that Joseph Turner of Watervliet assigned about two acres of land in 1790 for the purpose of raising vegetable seeds to sell for an income. He is the first known Shaker to package seeds for sale, making him the first American seed salesman. The Watervliet Shakers were the first people in the United States to sell garden seeds commercially. About this same time the Shaker community at New Lebanon began selling their surplus seeds. However, it was not until 1795 that they set aside land for the purpose of seed production for sale to outsiders. Shakers also did this at Canterbury, New Hampshire, and Hancock, Massachusetts.
Sales
At the beginning of the nineteenth century, Shaker seed salesmen were one of the few sources of seeds for the American gardener. Seed sales was one of the Shakers' most successful enterprises, providing the greater portion of their total income. The Shaker seed business stemmed from their rural agricultural roots and sold mostly to small villages and farming communities in the northeastern United States. Their marketing techniques were state of the art. The Shaker Seed Company became known for high quality and fair prices. The Shakers provided useful things, garden seeds, at a time of need for American pioneers.
The Shakers of New Lebanon sold their own garden seeds from 1794. Commercial sales as "a prominent industry" began in 1800. At their zenith, the Shakers of New Lebanon sold over 37,000 pounds of seeds for a value of nearly $34,000 in a 25-year period in the mid-nineteenth century. About this same time the Canterbury Shaker Village in New Hampshire and the Enfield Shakers in Connecticut had joined the seed selling business as well with over a hundred acres dedicated just to seed production. The Mount Lebanon community was the most successful of all the Shaker communities in purveying seeds. From 1800 to 1880 the Shakers sold their seeds throughout North America, and the seeds were considered of the highest quality available. In many cases, the Shaker seeds were the only seed source for rural Americans.
New Lebanon sales records show that in the decade before and after 1800 the onion seed sold the best. Shaker peddler Artemas Markham showed in his records of 1795 that over 200 pounds of onion seeds were sold. In 1800 over 44 pounds of a variety of vegetable seeds sold, including mangelwurzel blood beet, carrot, cucumber, and summer squash, begetting $406 in income. Vegetables seeds were the main offering; however, flowers, herbs, and grasses also were available. The height of the Shaker seed business was in 1840, constituting at that point their chief industry.
The Shaker Seed Company of New Lebanon listed just over a dozen varieties of seeds in their early years. By 1873 they were offering eight different kinds of tomato, seven kinds of turnip, six kinds of lettuce, nine kinds of squash, eleven kinds of cabbage, sixteen kinds of peas, and fifteen kinds of beans. Their catalogs offered over a hundred kinds of seeds by 1890.
Paper envelope packaging
The Shakers are credited with developing the idea of putting seeds in small paper envelope-style packets to sell to the general public.
They introduced the innovation of placing tiny seeds in small paper envelopes bearing printed planting instructions for best results as well as storage and sometimes cooking suggestions. The Shakers were the first to use paper envelope-style packets as a strategy to sell and distribute seeds.
The concept itself is attributed to Shakers Josiah Holmes and Jonathan Holmes of the Sabbathday Lake Shaker Village. Before the development of the paper packets of seeds, the only way seeds were sold was in bulk in cloth sacks. The first seed envelope packets were made with plain brown paper with the seed variety name, where the seeds came from, and sometimes the grower's name. The first paper packets were pieces of paper cut into eight different sizes for the different seed types. The small paper envelopes were made by hand and folded and glued accordingly. Ebenezer Alden invented a printing block device for printing the envelopes by hand.
Specific machines were made early in the nineteenth century to speed the process of cutting and printing the packets. New Lebanon Shaker journals referred to the seed packet sizes as: pound-bag size, bean size, beet size, onion size, cucumber size, cucumber long size, radish size, and lettuce size. The small paper envelope packets filled with seeds were boxed in colorful wooden displays made by the Shakers and marketed throughout the United States in the nineteenth century.
General stores throughout the United States displayed these wooden boxes with various seed envelope packet "papers", as the Shakers called them. A typical box would hold 200 envelope packets that sold for five or six cents apiece. Shaker vendors had routes throughout the nation, many times a long distance from their home, but concentrated in the northeastern United States. Typically, the Shaker peddlers would deposit the wooden boxes of seed packet "papers" with the general stores in the spring on consignment and then in the fall gather them back up with their share of sales. Another method of distribution of the Shaker seeds was through mail-order.
The Shaker Seed Company at New Lebanon was the most industrious of all the Shaker communities for producing seeds. The seed envelopes they made between 1846 and 1870 averaged over a hundred thousand packets per year.
Demise
The Shaker philosophy encouraged excellence throughout their business practices, which was integral to their success. It also worked against them, as they ignored outside competitors who, with different commercial philosophies, competed based mainly on price. Improved, cheaper transportation methods opened the rural markets to the city commercial seed vendors, and competition then came about. The Shakers were unwilling to compete on price with the cheaper commercial dealerships, and their seed business deteriorated in the long run because of this.
In 1790, when the New Lebanon Shaker community developed their seed business, the population in the United States was just under four million. When nearly a century later the Shaker Seed Company ceased to exist as a seed business around 1890, the population of the nation was about fifty-two million.
Gallery
Notes
References
Sources
External links
Postcard from the Shaker Seed Company Mount Lebanon, New York
Shaker Seed Company Bill of Sale
Shaker Seed Boxes (Reproduction)
Defunct agriculture companies of the United States
Seed companies
Shakers
Watervliet, New York
American companies established in 1794
American companies disestablished in 1890 |
12965486 | https://en.wikipedia.org/wiki/SAE%20JA1002 | SAE JA1002 | Known as the "Software Reliability Program Standard", SAE JA1002 was published in January 2004 by the Society of Automotive Engineers. It is a standard that provides a framework for the management of software reliability within system reliability requirements. It is intended to serve the needs of industry organizations in meeting software product reliability objectives and can be employed in deliverables contracted between a customer and a supplier.
SAE JA1002 is based around the Software Reliability Plan and Software Reliability Case. The Software Reliability Case can be created or maintained to serve the needs of a support organization in sustaining reliability objectives and be used to supply the data needed by independent, regulatory, and/or third party certification bodies.
Notes and references
Arguing Security - Creating Security Assurance Cases
See also
SAE Home Page
Standards |
10099460 | https://en.wikipedia.org/wiki/TaxAct | TaxAct | TaxAct Holdings, Inc. is an American tax preparation software company based in Cedar Rapids, Iowa. The company offers its own software package "TaxAct" to individual tax filers, companies, and professional affiliates. The company was founded in 1998. Since 2012, TaxAct Holdings, Inc. has been a subsidiary of Blucora.
History
TaxAct Holdings, Inc. was founded in 1998 as 2nd Story Software by Lance Dunn, the former Vice President of software development at Parsons Technology. Later, Lance Dunn recruited Jerry McConnell and Alan Sperfslage as developers, and Cammie Greif as a marketer.
In 2000, a cloud-based version of TaxACT software was released. In 2004, TA Associates, a private equity firm based in Boston, acquired two-thirds of the company for $89 million.
In October 2010, H&R Block said it would pay $287.5 million in cash to acquire the parent firm of TaxAct. In May 2011 the U.S. Department of Justice attempted to stop the acquisition in an antitrust lawsuit. In November 2011, a federal judge sided with the Justice Department, and both companies mutually terminated the contract.
In January 2012, 2nd Story Software was sold to Seattle-based Blucora (formerly Infospace, Inc.) for more than $287 million. In 2013, the name was officially changed to TaxAct Holdings, Inc. Subsequently, in October of the same year, TaxAct acquired Balance Financial, a company specializing in personal finance tools and services.
In 2018, former Intuit executive Curtis Campbell was appointed as the President of the company.
TaxAct is a member of the Free File Alliance, a free federal tax preparation and electronic filing program for eligible individual taxpayers developed through a partnership between the IRS and a group of private sector tax software companies.
Product overview
TaxAct offers two product formats: TaxACT Online which is web-based tax preparation, and TaxACT Desktop. There is also a free mobile app called TaxACT Express which was released on January 17, 2014. The app is available for iOS and Android.
There are several online plans: Free, Basic+, Deluxe+, Premier+, Self-Employed+, and Pro. TaxAct offers a free version for basic returns.
In January 2017, TaxAct partnered with Miami-based startup Taxfyle. The partnership enables individual tax filers, small and medium-sized business owners, and independent contractors to connect with Certified Public Accountants and Enrolled Agents should they prefer to have their taxes completed by a licensed accounting professional.
In March 2018 TaxAct Online Freelancer Edition was awarded the title of "The best tax software for independent contractors" by Business Insider.
References
External links
TaxAct Official site
TaxAct IRS Free File Program
Financial services companies established in 1998
Tax software of the United States
2012 mergers and acquisitions
American corporate subsidiaries |
18934136 | https://en.wikipedia.org/wiki/Latitude%20ON | Latitude ON | Latitude ON is an instant-on computer system made by Dell. It is a combination of software and hardware developed by Dell and used in some of their Latitude laptops. The system is based on a dedicated ARM processor (Texas Instruments OMAP 3430) that runs a custom version of a Linux OS. It was announced on August 12, 2008, along with other laptops, including a potential competitor to the Asus Eee PC, and arrived a year later on 28 September 2009.
Latitude ON runs MontaVista Linux on an ARM-based subprocessor. This so-called MontaVista Montabello Mobile Internet Device Solution provides a customizable, Linux-based Mobile Internet Device (MID) platform; the laptop is able to boot almost instantly to view e-mail, documents, calendar, and contacts, and to access the Internet.
The first laptop models to include Latitude ON were the E4200 and E4300, released in February 2009. The latest model introduced so far is the Latitude Z600. Dell claims that battery life can be extended to days.
Latitude ON Reader is similar to Dell's MediaDirect where the software is located in a separate partition on the system hard drive and has a dedicated button to power on.
Versions
There are several versions of Latitude ON:
Dell Latitude ON | Reader - Dell's initial release of the technology. The Reader software resides on the main partition of the hard drive, boots in 15–25 seconds, uses the laptop's CPU, and provides read-only access to e-mail, calendar and contacts from the last synched version of the system's Outlook data. No Internet browser.
Dell Latitude ON | FLASH - runs on a flash module, but uses the system's CPU, without significantly increased battery life. Features: boots in 8–10 seconds, supports Wi-Fi and LAN, thin client capabilities with Web access or Citrix, VMware and RDP clients, multi-protocol IM, VoIP (Skype, using the built-in webcam if present), Microsoft Office document viewer and editor (requires Internet connection), Java, Adobe Flash.
Dell Latitude ON | ECM - boots in 2–3 seconds and runs on the dedicated sub-processor, with Wi-Fi or Mobile Broadband support. Includes document reader, and read/write access to e-mail, calendar and contacts, and Firefox, but without Adobe Flash or other media plugins. Supports Novell GroupWise and Cisco VPN. Long battery life (about 17 hours on a 6-cell battery).
See also
Splashtop
HyperSpace
References
External links
Latitude ON Comparison Chart
Embedded Linux distributions
Dell laptops
Linux distributions |
32648842 | https://en.wikipedia.org/wiki/Michael%20Holve | Michael Holve | Michael Holve (born November 16, 1967 in Huntington, New York) is an American author, photographer, programmer and Linux practitioner.
Linux, Solaris and Unix operating systems
Holve started one of the earliest Linux websites in 1994, which came to feature one of the first "Quickcam pages", automatically broadcasting a still image to the website every few minutes. It was one of the first instances of what would later be called "lifecasting", showing the world Holve's daily life. The Connectix QuickCam was new at the time, offering only a low-resolution black-and-white image, and getting it to work with Linux was often a challenge. In an effort to ease adoption of this new technology, Holve wrote a HOW-TO on the subject and distributed shell scripts to handle the task in the public domain. The feature was quite popular, attracting thousands of daily visitors from around the world.
The site went on to become popular, featuring articles in a HOW-TO format. One such article, "A Tutorial on Using Rsync" featured on the Rsync homepage almost since its inception. Another article became the de facto reference on using Epson Stylus printers with Linux. At its peak, "Everything Linux" logged up to 4,685 people and 1,838,184 hits a day.
The site featured a forum, which allowed a community to form.
Early contributions to Linux include several HOW-TOs on subjects ranging from multimedia, printing, window managers and customization of the desktop, scanners and the PalmPilot PDA.
Other notable websites included "Everything Mac" and "Everything Unix" which catered to their specific communities, though neither enjoyed the success of the Linux and Solaris communities.
"Everything Solaris" is one of the only remaining online Solaris community websites after Oracle's acquisition of Sun Microsystems.
Holve is linked to various Open Source projects for his work on documenting them, including Rsync, ProFTP, Apache, SANE, perltidy and Ghostprint.
Linux advocacy
Holve is a Linux advocate and Solaris insider. He was active during the 1990s and early 2000s and brought adoption of Linux to several companies as well as the State University of New York, Stony Brook. Projects included adoption of Linux as both a server and desktop platform for several companies, an early database cluster for a nascent global search engine and as the backbone of the SUNYSB Department of Family Medicine's Internet presence, including its first website.
Apache web server
Holve authored one of the first GUIs for managing the Apache web server: TkApache v1.0 was released into the public domain and dedicated to the Open Source and Linux communities at ApacheCon on October 15, 1998. The early success of TkApache led to the design of a next-generation tool, Mohawk. By that time, however, many GUI projects were underway (such as webmin) that had expanded into system-wide configuration interfaces, and further development of Mohawk was cancelled.
Software contributions to open source
TkApache - GUI for the Apache web server
Mohawk - GUI for the Apache web server
iVote - High-performance Perl/mod_perl visual voting system
CPU Status - Status of Sun (SPARC/Intel) system CPUs CGI
Photography
A current project is La Vida Leica!, an informational site for users of the Leica "M system", for which Holve has authored nearly 50 reviews and 30 articles. Several of the articles have been translated into Russian by - and posted on - Leica Camera Russia's blog.
OS X upgrade fiasco
When Apple introduced the OS X 10.1 update in 2001, there was controversy over modifying the CD so that it could be installed directly, rather than having to install 10.0 first and then upgrade. The hack first appeared on MacFixIt's forum. Holve went on to document the procedure further with a step-by-step HOW-TO, which earned him the ire of Apple's legal team. Considerable press coverage followed, and Holve received a cease and desist letter from Apple Inc.
Publications
Featured in Solaris 9 for Dummies
Featured in Building Embedded Linux Systems
Featured in The Quick Road to an Intranet Web Server: Apache and Linux make the task simple
Building Embedded Linux Systems by O'Reilly Media
References
External links
LitPixel
La Vida Leica
Everything Linux
Everything Mac (discontinued)
Everything Solaris (discontinued)
Xterra Firma
1967 births
Living people
People from Huntington, New York
American photographers
American computer programmers
American male writers |
302899 | https://en.wikipedia.org/wiki/No%20Silver%20Bullet | No Silver Bullet | "No Silver Bullet—Essence and Accident in Software Engineering" is a widely discussed paper on software engineering written by Turing Award winner Fred Brooks in 1986. Brooks argues that "there is no single development, in either technology or management technique, which by itself promises even one order of magnitude [tenfold] improvement within a decade in productivity, in reliability, in simplicity." He also states that "we cannot expect ever to see two-fold gains every two years" in software development, as there are in hardware development (Moore's law).
Summary
Brooks distinguishes between two different types of complexity: accidental complexity and essential complexity, a distinction related to Aristotle's classification. Accidental complexity relates to problems which engineers create and can fix; for example, the details of writing and optimizing assembly code or the delays caused by batch processing. Essential complexity is caused by the problem to be solved, and nothing can remove it; if users want a program to do 30 different things, then those 30 things are essential and the program must do them.
Brooks claims that accidental complexity has decreased substantially, and today's programmers spend most of their time addressing essential complexity. Brooks argues that this means shrinking all the accidental activities to zero will not give the same order-of-magnitude improvement as attempting to decrease essential complexity. While Brooks insists that there is no one silver bullet, he believes that a series of innovations attacking essential complexity could lead to significant improvements. One technology that had made significant improvement in the area of accidental complexity was the invention of high-level programming languages, such as Ada.
Brooks advocates "growing" software organically through incremental development. He suggests devising and implementing the main and subprograms right at the beginning, filling in the working sub-sections later. He believes that programming this way excites the engineers and provides a working system at every stage of development.
Brooks goes on to argue that there is a difference between "good" designers and "great" designers. He postulates that as programming is a creative process, some designers are inherently better than others. He suggests that there is as much as a tenfold difference between an ordinary designer and a great one. He then advocates treating star designers equally well as star managers, providing them not just with equal remuneration, but also all the perks of higher status: large office, staff, travel funds, etc.
The article, and Brooks's later reflections on it, "'No Silver Bullet' Refined", can be found in the anniversary edition of The Mythical Man-Month.
Related concepts
Brooks's paper has sometimes been cited in connection with Wirth's law, to argue that "software systems grow faster in size and complexity than methods to handle complexity are invented".
See also
History of software engineering
Software prototyping, one of the main strategies against essential complexity in "No Silver Bullet"
SOLID (object-oriented design)
Essential complexity (numerical measure of "structuredness")
References
Further reading
External links
No Silver Bullet — Essence and Accident in Software Engineering, by Frederick P. Brooks, Jr.
Software Engineering Principles—Steve McConnell's comments on the dichotomy, originally published in IEEE Software, Vol. 16, No. 2, March/April 1999
1986 documents
Academic journal articles
Software engineering papers
Software project management |
9648603 | https://en.wikipedia.org/wiki/AGDLP | AGDLP | AGDLP (an abbreviation of "account, global, domain local, permission") briefly summarizes Microsoft's recommendations for implementing role-based access controls (RBAC) using nested groups in a native-mode Active Directory (AD) domain: User and computer accounts are members of global groups that represent business roles, which are members of domain local groups that describe resource permissions or user rights assignments. AGUDLP (for "account, global, universal, domain local, permission") and AGLP (for "account, global, local, permission") summarize similar RBAC implementation schemes in Active Directory forests and in Windows NT domains, respectively.
Details
Role-based access controls (RBAC) simplify routine account management operations and facilitate security audits. System administrators do not assign permissions directly to individual user accounts. Instead, individuals acquire access through their roles within an organization, which eliminates the need to edit a potentially large (and frequently changing) number of resource permissions and user rights assignments when creating, modifying, or deleting user accounts. Unlike traditional access control lists, permissions in RBAC describe meaningful operations within a particular application or system instead of the underlying low-level data object access methods. Storing roles and permissions in a centralized database or directory service simplifies the process of ascertaining and controlling role memberships and role permissions. Auditors can analyze permissions assignments from a single location without having to understand the resource-specific implementation details of a particular access control.
RBAC in a single AD domain
Microsoft's implementation of RBAC leverages the different security group scopes featured in Active Directory:
Global security groups Domain security groups with global scope represent business roles or job functions within the domain. These groups may contain accounts and other global groups from the same domain, and they can be used by resources in any domain in the forest. They can be changed frequently without causing global catalog replication.
Domain local security groups Domain security groups with domain local scope describe the low-level permissions or user rights to which they are assigned. These groups can only be used by systems in the same domain. Domain local groups may contain accounts, global groups, and universal groups from any domain, as well as domain local groups from the same domain.
Global groups that represent business roles should contain only user or computer accounts. Likewise, domain local groups that describe resource permissions or user rights should contain only global groups that represent business roles. Accounts or business roles should never be granted permissions or rights directly, as this complicates subsequent rights analysis.
RBAC in AD forests
In multi-domain environments, the different domains within an AD forest may only be connected by WAN links or VPN connections, so special domain controllers called global catalog servers cache certain directory object classes and attribute types in order to reduce costly or slow inter-domain directory lookups. Objects cached by the global catalog servers include universal groups but not global groups, making membership look-ups of universal groups much faster than similar queries of global groups. However, any change to a universal group triggers (potentially expensive) global catalog replication, and changes to universal groups require forest-wide security rights inappropriate in most large enterprises. These two limitations prevent universal security groups from completely replacing global security groups as the sole representatives of an enterprise's business roles. Instead, RBAC implementations in these environments use universal security groups to represent roles across the enterprise while retaining domain-specific global security groups, as illustrated by the abbreviation AGUDLP.
RBAC in non-AD domains
Domains in Windows NT 4.0 and earlier only have global (domain-level) and local (non-domain) groups and do not support group nesting at the domain level. The abbreviation AGLP refers to these limitations as applied to RBAC implementations in older domains: Global groups represent business roles, while local groups (created on the domain member servers themselves) represent permissions or user rights.
Example
Given a shared folder, \\nyc-ex-svr-01\groups\bizdev; a business development group within the organization's marketing department, represented in Active Directory as the (existing) global security group "Business Development Team Member"; and a requirement that the entire group have read/write access to the shared folder, an administrator following AGDLP might implement the access control as follows:
Create a new domain local security group in Active Directory named "Change permission on \\nyc-ex-svr-01\groups\bizdev".
Grant that domain local group the NTFS "change" permission set (read, write, execute/modify, delete) on the "bizdev" folder. (Note that NTFS permissions are different from share permissions.)
Make the global group "Business Development Team Member" a member of the domain local group "Change permission on \\nyc-ex-svr-01\groups\bizdev".
To highlight the advantages of RBAC using this example, if the Business Development Team required additional permissions on the "bizdev" folder, a system administrator would only need to edit a single access control entry (ACE) instead of, in the worst case, editing as many ACEs as there are users with access to the folder.
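The nesting chain above can be sketched in a few lines of code. The following Python model is purely illustrative (the group names, data structures, and resolution function are invented for the example; real Active Directory administration goes through LDAP or management tools, not code like this), but it shows how the account → global → domain local → permission chain resolves an account's effective rights:

```python
# Illustrative model of the AGDLP nesting from the example above.
# Accounts belong to global groups (business roles).
global_groups = {
    "Business Development Team Member": {"alice", "bob"},
}

# Global groups belong to domain local groups (resource permissions).
domain_local_groups = {
    "Change permission on \\\\nyc-ex-svr-01\\groups\\bizdev": {
        "Business Development Team Member",
    },
}

# Each domain local group carries one ACE worth of rights.
acl = {
    "Change permission on \\\\nyc-ex-svr-01\\groups\\bizdev": {
        "read", "write", "execute", "delete",
    },
}

def effective_rights(account):
    """Resolve an account's rights by walking A -> G -> DL -> P."""
    rights = set()
    roles = {g for g, members in global_groups.items() if account in members}
    for dl_group, member_roles in domain_local_groups.items():
        if roles & member_roles:
            rights |= acl[dl_group]
    return rights

print(sorted(effective_rights("alice")))  # -> ['delete', 'execute', 'read', 'write']
```

Because rights attach only to the domain local group, granting the team an additional permission means editing a single entry in `acl`, mirroring the single-ACE edit the example describes.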
References
Windows administration |
2137644 | https://en.wikipedia.org/wiki/Quantum%20programming | Quantum programming | Quantum programming is the process of assembling sequences of instructions, called quantum programs, that are capable of running on a quantum computer. Quantum programming languages help express quantum algorithms using high-level constructs. The field is deeply rooted in the open-source philosophy and as a result most of the quantum software discussed in this article is freely available as open-source software.
Quantum instruction sets
Quantum instruction sets are used to turn higher level algorithms into physical instructions that can be executed on quantum processors. Sometimes these instructions are specific to a given hardware platform, e.g. ion traps or superconducting qubits.
cQASM
cQASM, also known as common QASM, is a hardware-agnostic quantum assembly language that guarantees interoperability between quantum compilation and simulation tools. It was introduced by the QCA Lab at TU Delft.
Quil
Quil is an instruction set architecture for quantum computing that first introduced a shared quantum/classical memory model. It was introduced by Robert Smith, Michael Curtis, and William Zeng in A Practical Quantum Instruction Set Architecture. Many quantum algorithms (including quantum teleportation, quantum error correction, simulation, and optimization algorithms) require a shared memory architecture.
OpenQASM
OpenQASM is the intermediate representation introduced by IBM for use with Qiskit and the IBM Q Experience.
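As a concrete illustration, the snippet below holds an OpenQASM 2.0 program for a two-qubit Bell state as a plain string (the quoted syntax follows the published OpenQASM 2.0 format; no particular SDK is assumed):

```python
# An OpenQASM 2.0 program preparing a Bell state, kept as a plain string
# so it can be inspected or handed to any OpenQASM-compatible toolchain.
bell_qasm = """OPENQASM 2.0;
include "qelib1.inc";
qreg q[2];
creg c[2];
h q[0];
cx q[0],q[1];
measure q[0] -> c[0];
measure q[1] -> c[1];
"""

print(bell_qasm)
```

The intermediate representation is deliberately simple: register declarations, gate applications, and measurements, one per line, so different front-ends and back-ends can exchange circuits.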
Blackbird
Blackbird is a quantum instruction set and intermediate representation used by Xanadu Quantum Technologies and Strawberry Fields. It is designed to represent continuous-variable quantum programs that can run on photonic quantum hardware.
Quantum software development kits
Quantum software development kits provide collections of tools to create and manipulate quantum programs. They also provide the means to simulate the quantum programs or prepare them to be run using cloud-based quantum devices.
SDKs with access to quantum processors
The following software development kits can be used to run quantum circuits on prototype quantum devices, as well as on simulators.
Ocean
An Open Source suite of tools developed by D-Wave. Written mostly in the Python programming language, it enables users to formulate problems in the Ising model and quadratic unconstrained binary optimization (QUBO) formats. Results can be obtained by submitting problems to an online quantum computer in Leap, D-Wave's real-time Quantum Application Environment, to customer-owned machines, or to classical samplers.
ProjectQ
An Open Source project developed at the Institute for Theoretical Physics at ETH, which uses the Python programming language to create and manipulate quantum circuits. Results are obtained either using a simulator, or by sending jobs to IBM quantum devices.
Qiskit
An Open Source project developed by IBM. Quantum circuits are created and manipulated using Python. Results are obtained either using simulators that run on the user's own device, simulators provided by IBM or prototype quantum devices provided by IBM. As well as the ability to create programs using basic quantum operations, higher level tools for algorithms and benchmarking are available within specialized packages. Qiskit is based on the OpenQASM standard for representing quantum circuits. It also supports pulse level control of quantum systems via QiskitPulse standard.
Forest
An Open Source project developed by Rigetti, which uses the Python programming language to create and manipulate quantum circuits. Results are obtained either using simulators or prototype quantum devices provided by Rigetti. As well as the ability to create programs using basic quantum operations, higher level algorithms are available within the Grove package. Forest is based on the Quil instruction set.
t|ket>
A quantum programming environment and optimizing compiler developed by Cambridge Quantum Computing that targets simulators and several quantum hardware back-ends, released in December 2018.
Strawberry Fields
An open-source Python library developed by Xanadu Quantum Technologies for designing, simulating, and optimizing continuous variable (CV) quantum optical circuits. Three simulators are provided - one in the Fock basis, one using the Gaussian formulation of quantum optics, and one using the TensorFlow machine learning library. Strawberry Fields is also the library for executing programs on Xanadu's quantum photonic hardware.
PennyLane
An open-source Python library developed by Xanadu Quantum Technologies for differentiable programming of quantum computers. PennyLane provides users the ability to create models using TensorFlow, NumPy, or PyTorch, and connect them with quantum computer backends available from IBMQ, Google Quantum, Rigetti, Honeywell and Alpine Quantum Technologies.
SDKs based on simulators
Public access to quantum devices is currently planned for the following SDKs, but not yet implemented.
Quantum Development Kit
A project developed by Microsoft as part of the .NET Framework. Quantum programs can be written and run within Visual Studio and VSCode.
Cirq
An Open Source project developed by Google, which uses the Python programming language to create and manipulate quantum circuits. Results are obtained using simulators that run on the user's own device.
Quantum programming languages
There are two main groups of quantum programming languages: imperative quantum programming languages and functional quantum programming languages.
Imperative languages
The most prominent representatives of the imperative languages are QCL, LanQ and Q|SI>.
QCL
Quantum Computation Language (QCL) is one of the first implemented quantum programming languages. The most important feature of QCL is the support for user-defined operators and functions. Its syntax resembles the syntax of the C programming language and its classical data types are similar to primitive data types in C. One can combine classical code and quantum code in the same program.
Quantum pseudocode
Quantum pseudocode proposed by E. Knill is the first formalized language for the description of quantum algorithms. It is tightly connected with a model of quantum machine called the Quantum Random Access Machine (QRAM).
Q#
A language developed by Microsoft to be used with the Quantum Development Kit.
Q|SI>
Q|SI> is a platform embedded in .Net language supporting quantum programming in a quantum extension of while-language. This platform includes a compiler of the quantum while-language and a chain of tools for the simulation of quantum computation, optimisation of quantum circuits, termination analysis of quantum programs, and verification of quantum programs.
Q language
Q Language is the second implemented imperative quantum programming language. Q Language was implemented as an extension of C++ programming language. It provides classes for basic quantum operations like QHadamard, QFourier, QNot, and QSwap, which are derived from the base class Qop. New operators can be defined using C++ class mechanism.
Quantum memory is represented by class Qreg.
Qreg x1; // 1-qubit quantum register with initial value 0
Qreg x2(2,0); // 2-qubit quantum register with initial value 0
The computation process is executed using a provided simulator. Noisy environments can be simulated using parameters of the simulator.
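What such a simulator does can be sketched in plain Python. The snippet below is not the Q Language implementation, only a minimal single-qubit statevector model of the kind of operation a class like QHadamard performs:

```python
import math

# Minimal single-qubit statevector sketch (illustrative only; real
# simulators handle multi-qubit registers, complex amplitudes, and noise).

def qreg(value=0):
    """1-qubit register: amplitudes for |0> and |1>."""
    return [1.0, 0.0] if value == 0 else [0.0, 1.0]

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

x1 = qreg(0)          # analogous to: Qreg x1;
x1 = hadamard(x1)     # equal superposition of |0> and |1>
print([round(amp, 3) for amp in x1])  # -> [0.707, 0.707]
```

Applying `hadamard` twice returns the register to its initial state (up to floating-point error), as expected of a self-inverse gate.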
qGCL
Quantum Guarded Command Language (qGCL) was defined by P. Zuliani in his PhD thesis. It is based on Guarded Command Language created by Edsger Dijkstra.
It can be described as a language of quantum programs specification.
QMASM
Quantum Macro Assembler (QMASM) is a low-level language specific to quantum annealers such as the D-Wave.
Scaffold
Scaffold is a C-like language that compiles to QASM and OpenQASM. It is built on top of the LLVM Compiler Infrastructure to perform optimizations on Scaffold code before generating a specified instruction set.
Silq
Silq is a high-level programming language for quantum computing with a strong static type system, developed at ETH Zürich.
Functional languages
Efforts are underway to develop functional programming languages for quantum computing. Functional programming languages are well-suited for reasoning about programs. Examples include Selinger's QPL, and the Haskell-like language QML by Altenkirch and Grattage. Higher-order quantum programming languages, based on lambda calculus, have been proposed by van Tonder, Selinger and Valiron and by Arrighi and Dowek.
QFC and QPL
QFC and QPL are two closely related quantum programming languages defined by Peter Selinger. They differ only in their syntax: QFC uses a flow chart syntax, whereas QPL uses a textual syntax. These languages have classical control flow but can operate on quantum or classical data. Selinger gives a denotational semantics for these languages in a category of superoperators.
QML
QML is a Haskell-like quantum programming language by Altenkirch and Grattage. Unlike Selinger's QPL, this language takes duplication, rather than discarding, of quantum information as a primitive operation. Duplication in this context is understood to be the operation that maps |x⟩ to |x⟩⊗|x⟩, and is not to be confused with the impossible operation of cloning; the authors claim it is akin to how sharing is modeled in classical languages. QML also introduces both classical and quantum control operators, whereas most other languages rely on classical control.
An operational semantics for QML is given in terms of quantum circuits, while a denotational semantics is presented in terms of superoperators, and these are shown to agree. Both the operational and denotational semantics have been implemented (classically) in Haskell.
LIQUi|>
LIQUi|> (pronounced liquid) is a quantum simulation extension on the F# programming language. It is currently being developed by the Quantum Architectures and Computation Group (QuArC) part of the StationQ efforts at Microsoft Research. LIQUi|> seeks to allow theorists to experiment with quantum algorithm design before physical quantum computers are available for use.
It includes a programming language, optimization and scheduling algorithms, and quantum simulators. LIQUi|> can be used to translate a quantum algorithm written in the form of a high-level program into the low-level machine instructions for a quantum device.
Quantum lambda calculi
Quantum lambda calculi are extensions of the classical lambda calculus introduced by Alonzo Church and Stephen Cole Kleene in the 1930s. The purpose of quantum lambda calculi is to extend quantum programming languages with a theory of higher-order functions.
The first attempt to define a quantum lambda calculus was made by Philip Maymin in 1996.
His lambda-q calculus is powerful enough to express any quantum computation. However, this language can efficiently solve NP-complete problems, and therefore appears to be strictly stronger than the standard quantum computational models (such as the quantum Turing machine or the quantum circuit model). Therefore, Maymin's lambda-q calculus is probably not implementable on a physical device.
In 2003, André van Tonder defined an extension of the lambda calculus suitable for proving correctness of quantum programs. He also provided an implementation in the Scheme programming language.
In 2004, Selinger and Valiron defined a strongly typed lambda calculus for quantum computation with a type system based on linear logic.
Quipper
Quipper was published in 2013. It is implemented as an embedded language, using Haskell as the host language; quantum programs written in Quipper are therefore Haskell programs using the provided libraries. For example, the following code implements preparation of a superposition state:

import Quipper

spos :: Bool -> Circ Qubit
spos b = do q <- qinit b
            r <- hadamard q
            return r
funQ
A group of undergraduate students at Chalmers University of Technology developed a functional quantum programming language in 2021. It is inspired by the quantum typed lambda calculus by Selinger and Valiron. The underlying quantum simulator is a part of a Haskell library by the same name. The following code implements superposition in funQ
spos : !(Bit -o QBit)
spos b = H (new b)
The same example in the Haskell library would be
import FunQ
spos :: Bit -> QM QBit
spos b = hadamard =<< new b
References
Further reading
External links
Curated list of all quantum open-source software projects
Bibliography on Quantum Programming Languages (updated in May 2007)
5th International Workshop on Quantum Physics and Logic
4th International Workshop on Quantum Programming Languages
3rd International Workshop on Quantum Programming Languages
2nd International Workshop on Quantum Programming Languages
Quantum programming language in Quantiki
QMASM documentation
pyQuil documentation including Introduction to Quantum Computing
Scaffold Source
Programming language classification
Programming paradigms
Quantum computing
Quantum information science |
6261192 | https://en.wikipedia.org/wiki/Atlas.ti | Atlas.ti | ATLAS.ti is a computer program used mostly, but not exclusively, in qualitative research or qualitative data analysis.
Description and usage
The purpose of ATLAS.ti is to help researchers uncover and systematically analyze complex phenomena hidden in unstructured data (text, multimedia, geospatial). The program provides tools that let the user locate, code, and annotate findings in primary data material, to weigh and evaluate their importance, and to visualize the often complex relations between them.
ATLAS.ti is used by researchers and practitioners in a wide variety of fields including anthropology, arts, architecture, communication, criminology, economics, educational sciences, engineering, ethnological studies, management studies, market research, quality management, psychology, sociology, and social work.
ATLAS.ti consolidates large volumes of documents and keeps track of all notes, annotations, codes and memos in all fields that require close study and analysis of primary material consisting of text, images, audio, video, and geo data.
In addition, it provides analytical and visualization tools designed to open new interpretative views on the material.
To support multi-method, multi-user projects across space and time (longitudinal studies), project data can be exported using XML. XML export mitigates the proprietary nature of most software systems, which is a mandatory requirement in scientific settings.
ATLAS.ti's XML schema (http://downloads.atlasti.com/atlasti_hu_2.2.xsd) influenced the development of the QuDEX language (http://dext.data-archive.ac.uk/schema/schema.asp) at University of Essex.
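The practical benefit of XML export is that project data can be read without the originating program. The sketch below uses Python's standard library parser on an invented fragment (the element and attribute names are hypothetical, not the actual ATLAS.ti schema):

```python
import xml.etree.ElementTree as ET

# Hypothetical exported-project fragment; any standard XML parser can
# read it, which is the portability argument for XML export.
project_xml = """<project name="fieldnotes">
  <document id="d1">
    <quotation code="trust">Interviewee describes trust in the team.</quotation>
  </document>
</project>"""

root = ET.fromstring(project_xml)
codes = [q.get("code") for q in root.iter("quotation")]
print(codes)  # -> ['trust']
```

A secondary analyst, or a different tool, can extract codes and quotations this way without access to the software that produced the file.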
Features overview
Coding of text, image, geo, audio and video materials (interactive and automated)
Full native PDF Support (original layout) without conversion
Geodata Integration
Text-to-media Synchronization
On-Board Transcription Engine
Interactive margin area with drag & drop linking, coding, merging
Multi-document view for constant comparisons
Search & retrieve functions (incl. Boolean, semantic, and proximity-based operators)
Visual model building and "mind mapping" using the Network Editor
Integrated visualizations: frequency bars in entity managers
Cloud tag view for codes
Creation and navigation of hyperlinks between resources (Hypertext)
Searching for textual patterns through documents and entities (Object Crawler)
Automatic coding (search - select - code)
Proximity analysis of coded data (Co-occurrence Explorer and Table)
Project data export to XML
Export to SPSS, HTML, CSV
Word frequency export to Excel
Unicode language support
Single file project backup and migration.
Survey import
Twitter import
100 step undo/redo
Development history
A prototype of ATLAS.ti was developed by Thomas Muhr at the Technical University of Berlin in the context of project ATLAS (1989–1992). The first commercial version of ATLAS.ti was released to the market in 1993 by the company Scientific Software Development, later ATLAS.ti Scientific Software Development GmbH.
The methodological roots of ATLAS.ti lie in, but are not restricted to, grounded theory and content analysis. ATLAS.ti is currently available as a Windows desktop version (version 9.0), a Mac desktop version (version 9.0), an Android mobile version, an iPad version, and a Cloud version.
See also
Computer-assisted qualitative data analysis software
Literature
External links
Forum about Atlas
1993 software
QDA software |
56970489 | https://en.wikipedia.org/wiki/What%20Is%20Love%3F%20%28EP%29 | What Is Love? (EP) | What Is Love? is the fifth extended play by South Korean girl group Twice. The EP was released on April 9, 2018, by JYP Entertainment and is distributed by Iriver. It includes the lead single of the same name produced by Park Jin-young. Twice members Jeongyeon, Chaeyoung, and Jihyo also took part in writing lyrics for two songs on the EP.
Composed of six tracks overall, the EP received positive reviews from several music critics. It was also a commercial success for the group, selling over 340,000 copies. With this, Twice became the first Korean female act and the fifth music act overall to earn a Platinum certification from the Korea Music Content Association.
An expanded reissue of the EP, titled Summer Nights, was released on July 9, 2018.
Background and release
On February 26, 2018, JYP Entertainment confirmed that Twice planned to release a new Korean album in April. On March 25, the agency announced that Twice would release their fifth EP titled What Is Love? on April 9. A promotion schedule for the mini-album was uploaded by the group on their official Twitter account. The first group image teaser was uploaded online the next day. On March 27, individual teaser photos featuring Nayeon, Jeongyeon, Momo, and Sana were released. On March 28, individual teaser photos featuring Jihyo, Mina, Dahyun, Chaeyoung, and Tzuyu were uploaded by the group. On the same day, JYP Entertainment confirmed that the album's lead single "What Is Love?" was written and produced by agency founder Park Jin-young, marking a year since Park last collaborated with Twice as a producer following the release of "Signal" in 2017. A part of the album's track list was later uploaded, revealing additional details crediting Lee Woo-min "collapsedone" for the arrangement of the lead single. On March 29, a second group image teaser was uploaded by the group. On the same day, a second teaser for the EP's track list was released, revealing two new tracks written by Twice members: "Sweet Talker" which was penned by Jeongyeon and Chaeyoung, and "Ho!" with lyrics written by Jihyo.
On March 30, a second set of individual teaser photos featuring Tzuyu, Chaeyoung, Dahyun, and Mina were uploaded. The full track list was also revealed by the group, announcing that the EP will feature six songs in total, with the sixth track "Stuck" being exclusive for the album's physical edition. On March 31, individual teaser photos featuring Jeongyeon, Momo, Sana, Jihyo, and Nayeon were uploaded. On April 1, the group uploaded an image containing the full lyrics for "What Is Love?". On the same day, member Nayeon uploaded a behind-the-scenes photo for the album in the group's official Instagram account, and also appeared at the 2018 KBO League to throw the ceremonial first pitch for the match between LG Twins and KIA Tigers.
On April 2, Twice released the first music video teaser for "What Is Love?" on JYP Entertainment's YouTube channel. On April 3, a second music video teaser clip featuring Chaeyoung, Dahyun, and Nayeon was released, revealing a part of the song's "question mark" point choreography. The following day, a third music video teaser clip featuring Momo, Sana, and Jeongyeon was uploaded. On April 5, a fourth music video teaser clip featuring Jihyo, Tzuyu, and Mina was released. On April 6, a fifth music video teaser clip featuring all Twice members was released, revealing a snippet of the title track. On April 7, the group uploaded an album highlight medley featuring snippets for all tracks. On April 8, Twice revealed the cover image for the album's digital release.
The EP was released as a digital download on various music sites on April 9, while the physical album was released the following day. "Stuck", initially exclusive to the physical album as a "CD-only" track, was released digitally on April 30.
On July 9, the group released an expanded reissue of the EP titled Summer Nights, which was supported by the lead single "Dance the Night Away".
Composition
What Is Love? is an EP consisting of six tracks overall. The title track, "What Is Love?", written and produced by Park Jin-young, is classified as a "hyper-pop" dance song that crosses over into the trap genre; its lyrics concern imagining and wondering about love while only ever learning about it through books, movies, or television dramas. "Sweet Talker", written by members Jeongyeon and Chaeyoung, heavily features synths mixed with percussion. "Ho!", penned by Jihyo, is described as a summer pop song that emphasizes a brass-driven band arrangement. "Deja Vu" is classified as a pop song built around a "la la la" hook and featuring a dubstep break. "Say Yes" is an acoustic-driven pop track with a heavy R&B influence that showcases the group's distinctive ballad sound.
The album's closing track, "Stuck", lyrically describes the feeling of falling in love. The track's lyricist took inspiration from the relationship between Twice and their fans.
Promotion
On the day of the album's release, Twice members Jeongyeon, Sana, and Tzuyu appeared on the KBS2 public talk show Hello Counselor, where they performed part of the title track "What Is Love?" live on broadcast for the first time. Later that day, all members of the group held a live showcase at the Yes24 Live Hall in Gwangjang-dong, Seoul, where they performed the full song for the first time.
The group promoted the album on several South Korean music programs, starting with an appearance on M Countdown on April 12, 2018. They also performed on KBS2's Music Bank on April 13, MBC's Show! Music Core on April 14, SBS's Inkigayo on April 15, and MBC M's Show Champion on April 18, where the title track received its first music show win. Music show promotions for What Is Love? ended on April 29. The title track achieved a total of 12 music show wins, the last coming on the Inkigayo episode aired on May 6.
Twice also appeared on You Hee-yeol's Sketchbook on April 15, where they performed "What Is Love?". They also performed the album's title track on a special broadcast of M Countdown for KCON Japan 2018 on April 19.
Critical reception
Josh Shim of The Kraze magazine gave the EP a positive review, rating it 9.5 out of 10 points and stating that "What Is Love? may be Twice’s best mini-album since their glorious debut EP The Story Begins, and it’s because it shows the most growth among all the group’s releases so far." He further praised the production on all the tracks and described the album as among the group's most cohesive releases overall. Music review site Popfection rated the EP 8.6 out of 10 points, calling it a "very solid release and comeback" from the group, and cited the title track alongside "Sweet Talker" and "Ho!" as the album's highlights. Nicole Moraleda of the South China Morning Post's Young Post described the album's track list as having "strong memorable hooks and cool soul-pop vibes", and said the EP dug deeper than conventional love songs.
Music review site Vibes of Silence gave a more lukewarm review, rating the EP 6.5 out of 10 points; it praised the album's cohesiveness but pointed out the need for more versatility from the group.
Commercial performance
On April 4, 2018, it was reported that pre-order sales for What Is Love? had reached over 350,000 copies, surpassing the more than 330,000 pre-order copies achieved by Twice's full-length album Twicetagram. Upon release, the album debuted at number 2 on the Gaon Album Chart while the title track topped the Gaon Digital Chart. The album also entered the Oricon Albums and Japan Digital Albums charts at numbers 2 and 3, respectively. The EP and its title track each entered the Billboard World Albums and World Digital Song Sales charts at number 3. What Is Love? was also Twice's first album to enter the Billboard Independent Albums chart, where it ranked at number 35.
What Is Love? became the third best-selling album in South Korea for the month of April, selling 335,235 copies. With this, Twice became the first female artist—and the fifth overall act—to earn a Platinum certification from the Korea Music Content Association for reaching sales of over 250,000 copies since certifications began in 2018. By year-end, the EP became the tenth best-selling album in South Korea in 2018, reaching 348,797 copies sold.
Track listing
Personnel
Credits adapted from album liner notes.
J. Y. Park "The Asiansoul" – producer
Lee Ji-young – direction and coordination (A&R)
Jang Ha-na – music (A&R)
Kim Yeo-joo (Jane Kim) – music (A&R)
Kim Ji-hyeong – production (A&R)
Cha Ji-yoon – production (A&R)
Choi A-ra – production (A&R)
Kim Je-na (Jenna Kim) – production (A&R)
Kim Bo-hyeon – design (A&R), album art direction and design
Kim Tae-eun – design (A&R), album art direction and design, and web design
Choi Jeong-eun – design (A&R) and album art direction and design
Lee So-yeon – design (A&R), album art direction and design, and web design
Eom Se-hee – recording and mixing engineer
Choi Hye-jin – recording and mixing engineer
No Min-ji – recording engineer
Lee Ju-hyeong – recording engineer, and vocal director, keyboard, Pro Tools operator, and digital editor (on "Say Yes")
Friday of Galactika – recording engineer, and vocal producer and background vocals (on "What Is Love?" and "Stuck")
Tony Maserati – mixing engineer
James Krausse – mixing engineer
Lim Hong-jin – mixing engineer
Ko Hyeon-jeong – mixing engineer
Master Key – mixing engineer
Kwon Nam-woo – mastering engineer
Naive Production – video director
Kim Young-jo – video executive producer
Yoo Seung-woo – video executive producer
Choi Pyeong-gang – video co-producer
Jang Deok-hwa at Agency PROD – photographer
Seo Yeon-ah – web design
Son Eun-hee at Lulu – hair and makeup director
Jung Nan-young at Lulu – hair and makeup director
Choi Ji-young at Lulu – hair and makeup director
Jo Sang-ki at Lulu – hair and makeup director
Zia at Lulu – hair and makeup director
Jeon Dal-lae at Lulu – hair and makeup director
Won Jung-yo at Bit&Boot – makeup director
Choi Su-ji at Bit&Boot – makeup director
Choi Hee-seon at F. Choi – style director
Seo Ji-eun at F. Choi – style director
Lee Ga-young at F. Choi – style director
Lee Jin-young at F. Choi – style director
Park Soo-young at F. Choi – style director
Park Jin-hee at F. Choi – style director
Han Jin-joo at F. Choi – style director
Shin Hyun-kuk – management and marketing director
Yoon Hee-so – choreographer
Kang Da-sol – choreographer
Kyle Hanagami – choreographer
Freemind Choi Young-joon – choreographer
Freemind Chae Da-som – choreographer
Choi Nam-mi – choreographer
Today Art – printing
Lee Woo-min "collapsedone" – all instruments, computer programming, guitar, synths, and piano (on "What Is Love?")
E.Na – background vocals (on "What Is Love?" and "Stuck")
Park Soo-min – background vocals (on "What Is Love?", "Sweet Talker" and "Ho!")
Erik Lidbom – all instruments, computer programming, and digital editor (on "Sweet Talker")
Armadillo – vocal director (on "Sweet Talker")
Jiyoung Shin NYC – additional editor (on "Sweet Talker", "Ho!", and "Deja Vu")
The Elev3n – all instruments, computer programming, and digital editor (on "Ho!")
Kim Seung-soo – vocal director (on "Ho!")
Mental Audio (Eirik Johansen and Jan Hallvard Larsen) – all instruments and computer programming (on "Deja Vu")
Hayley Aitken – background vocals (on "Deja Vu")
Anne Judith Wik – background vocals (on "Deja Vu")
Jowul – vocal director (on "Deja Vu")
Jeon Jae-hee – background vocals (on "Say Yes")
Jeok Jae – guitar (on "Say Yes")
Kim Byeong-seok – bass (on "Say Yes")
Frants – all instruments, computer programming, synth, bass, and drum (on "Stuck")
Shane – guitar (on "Stuck")
Locations
Recording
JYPE Studios, Seoul, South Korea
Mixing
Mirrorball Studios, North Hollywood, California
JYPE Studios, Seoul, South Korea
Koko Sound, Seoul, South Korea
Studio Instinct, Seoul, South Korea
821 Sound, Seoul, South Korea
Mastering
821 Sound Mastering, Seoul, South Korea
Photography
Imvely flagship store "Velyne", Seoul, South Korea
Charts
Weekly charts
Year-end charts
Certifications
Accolades
Release history
References
2018 EPs
Twice (group) EPs
JYP Entertainment EPs
Korean-language EPs
IRIVER EPs
Republic Records EPs

The Computers

The Computers are a British rock band from Exeter, England. Their sound initially fused hardcore punk and garage rock and progressed to a less heavy sound incorporating blues and soul. As of 2014, The Computers had released one mini-album (You Can't Hide From the Computers, 2008) and two albums, This Is the Computers (2011) and Love Triangles Hate Squares (2013), which registered 70 and 75 points, respectively, on the Metacritic rating scale.
Band history
The band was formed in the mid-2000s by singer/guitarist Alex Kershaw, bassist Nic Heron, guitarist Sonny Crawford and drummer Will Wright. They played their first show in August 2006 at The Cavern in Exeter with American hardcore punk band Paint It Black. Will Wright was replaced in 2009 by drummer Aidan Sinclair. Guitarist and pianist Fred Ansell joined in 2011. In February 2014 Nic Heron and Sonny Crawford announced their departure from the group. Guitarist James Mattock (formerly of Sharks) and bassist Thomas McMahon joined in 2014.
BBC Radio 1 Punk Show host Mike Davies invited them to play a live session on the show after having been a band for less than a year and soon The Computers, then a Black Flag-influenced hardcore punk band, signed to Fierce Panda, who released their debut mini-album You Can't Hide From the Computers in 2008, described by Kerrang! as "stylish UK hardcore" and "punk'n'roll that wants to dance as well as it does break stuff."
In 2010 the quartet recorded their debut album, This Is the Computers (11 songs, 24 minutes), with producer John Reis of Rocket from the Crypt, in just four days at his home in San Diego (straight to tape, without any computers involved). Preceded in February by the single "Group Identity", the album was released on 21 June 2011 by One Little Independent. The band rounded up the year by touring Britain with Gay for Johnny Depp.
Their second album, Love Triangles Hate Squares, which came out on 16 May 2013, took a more eclectic approach than their earlier work and received a mainly positive response. Some critics lauded Alex Kershaw's singing and the way the band's writing had "developed, with many of these witty, catchy songs recalling Elvis Costello and the Hives" (according to AllMusic). Other reviewers claimed the change of direction was derivative and showed its influences too clearly: "Love Triangles Hate Squares is a forceful blast of passion-fired pastiche, but never quite escapes feeling like a cheap holiday in other people's history", according to Classic Rock. DIY said "for all of the frontman’s dynamism, he can’t save a frustratingly slow, out-of-date computer".
In March 2015 the band entered the studio to begin recording their third studio album with producer David McEwan.
Band members
Current
Alex Kershaw – lead vocals, guitar
Fred Ansell – guitar, piano
Sonny Crawford – guitar
Thomas McMahon – bass guitar
Aidan Sinclair – drums
Previous
Nic Heron – bass
Will Wright – drums
James Mattock – guitar
Discography
Track Four/Is it Just Me? (Freakscene, double A-side single, 2007)
Teenage Tourettes Camp/Welcome To The Working Week (Elvis Costello cover)/Rebel Girl (Bikini Kill cover) (Freakscene, 7" single, 2008)
You Can't Hide From the Computers (Fierce Panda, mini-album, 2008)
This Is the Computers (One Little Independent, 2011)
The Computers Are Misfits (One Little Independent, Black Friday Ltd edition 10" of The Misfits covers, 2012)
Love Triangles Hate Squares (One Little Independent, 2013)
Elvis Vs Elvis (One Little Independent, Record Store Day Ltd Edition 10" compiled of Elvis Costello and Elvis Presley covers, 2013)
Live & Inconsolable (One Little Independent, Record Store Day Ltd edition Live album, 2014)
References
Musical quintets
British hardcore punk groups

Jacquez Green

D'Tanyian Jacquez "Quezi" Green (born January 15, 1976) is an American former college and professional football player who was a wide receiver and punt returner in the National Football League (NFL) for five seasons during the 1990s and early 2000s. Green played college football for the University of Florida, and earned All-American honors. He was a second-round pick in the 1998 NFL Draft, and played professionally for the Tampa Bay Buccaneers, the Washington Redskins and the Detroit Lions of the NFL.
Early years
Green was born in Fort Valley, Georgia in 1976. He attended Peach County High School in Fort Valley, and was a member of the Peach County Trojans high school football, basketball, and track and field teams. Green received all-state honors in football and basketball as a senior, and was also selected to play in the annual Georgia vs. Florida High School All-Star football game. Green played quarterback throughout high school, except for his junior season when the Peach County Trojans lost in the state title game; that season he played wide receiver and running back. He was also a member of the Peach County Trojans' state championship 4x100-meter relay team as a junior.
College career
Green accepted an athletic scholarship to attend the University of Florida in Gainesville, Florida, where he played wide receiver for coach Steve Spurrier's Florida Gators football team from 1995 to 1997. He was a three-year letterman and a member of the 1996 Gators' Bowl Alliance national championship team, when he had seven catches for seventy-nine yards in the Gators' 52–20 Sugar Bowl victory over the Florida State Seminoles. Against the Auburn Tigers in 1997, he scored a rare triple—throwing a touchdown pass, catching one and running for one. Green suffered a major injury when he dislocated his hip in the 1995 national championship game against the Nebraska Cornhuskers. He may best be remembered for a 58-yard reception from quarterback Doug Johnson late in the 1997 Florida-Florida State game that propelled the underdog Gators over the top-ranked Florida State Seminoles. He was a member of the Gators' Southeastern Conference (SEC) championship teams in 1995 and 1996, a first-team All-SEC selection and a consensus first-team All-American in 1997, and was one of the three finalists for the Biletnikoff Award. Green caught sixty-one passes for 1,024 yards and nine touchdowns as a junior before entering the NFL Draft.
Professional career
Green was a second-round draft choice (thirty-fourth pick overall) of the Tampa Bay Buccaneers in the 1998 NFL Draft, and he played for the Buccaneers for four seasons from to . Green's most productive seasons as a wide receiver were , when he caught fifty-six passes for 791 yards with three touchdowns (only ten starts), and , when he had fifty-one receptions for 773 yards. Before the season, he signed as a free agent with the Washington Redskins and re-united with former Florida Gators and Buccaneers teammate Reidel Anthony. He was released by the Redskins and signed by the Detroit Lions. Prior to the season, he signed with his former team, the Buccaneers, and retired. He ended his NFL career starting thirty-seven of the sixty-six games in which he played, registering 162 receptions for 2,311 yards and seven touchdowns.
Life after the NFL
Green said that he played Madden NFL every day, and that "I want to make it to the NFL just to play in this [game] this right here. More than I want to play in the NFL". He won the annual Madden Bowl in 2001 and 2002. Green served as the offensive coordinator for Gibbs High School in St. Petersburg, Florida for two successful seasons, and was the associate head coach and offensive coordinator at Lincoln High School in Tallahassee, Florida. Green helped lead Lincoln to the 2010 state championship and a state runner-up performance in 2012.
Green spent one year at Valdosta High School in Valdosta, Georgia, as their wide receivers coach before returning to Lincoln High School for three more seasons. He was previously the offensive coordinator at Godby High School in Tallahassee, Florida. Green helped spark a resurgence at Godby: after the program had won five games over the previous two seasons, the Godby Cougars went 10–2 during the 2017 season, and 23–3 over his two-year coaching stint.
In September 2017, Green was inducted into the University of Florida Football Hall of Fame.
Green was announced as the offensive coordinator for Manatee High School on July 10, 2019.
In May 2021, Green was named head football coach of the Manatee High School Hurricanes in Bradenton, Florida.
See also
History of the Tampa Bay Buccaneers
List of Detroit Lions players
List of Florida Gators football All-Americans
List of Florida Gators in the NFL Draft
List of Washington Redskins players
References
Bibliography
Carlson, Norm, University of Florida Football Vault: The History of the Florida Gators, Whitman Publishing, LLC, Atlanta, Georgia (2007).
Golenbock, Peter, Go Gators! An Oral History of Florida's Pursuit of Gridiron Glory, Legends Publishing, LLC, St. Petersburg, Florida (2002).
Hairston, Jack, Tales from the Gator Swamp: A Collection of the Greatest Gator Stories Ever Told, Sports Publishing, LLC, Champaign, Illinois (2002).
McCarthy, Kevin M., Fightin' Gators: A History of University of Florida Football, Arcadia Publishing, Mount Pleasant, South Carolina (2000).
Nash, Noel, ed., The Gainesville Sun Presents The Greatest Moments in Florida Gators Football, Sports Publishing, Inc., Champaign, Illinois (1998).
1976 births
Living people
People from Fort Valley, Georgia
Players of American football from Georgia (U.S. state)
American football wide receivers
American football return specialists
Florida Gators football players
All-American college football players
Tampa Bay Buccaneers players
Washington Redskins players
Detroit Lions players
Coaches of American football from Georgia (U.S. state)
High school football coaches in Florida
High school football coaches in Georgia (U.S. state)

Cvent

Cvent, Inc. is a publicly held software-as-a-service (SaaS) company that specializes in meetings, events, and hospitality management technology. The company offers web-based software for meeting site selection, online event registration, event management, email marketing, and web surveys.
History
Cvent was founded in September 1999 by Reggie Aggarwal. That same year it received $17 million in venture capital and grew its staff to 125 employees. Following the dot-com bubble burst and the September 11 attacks, Cvent faced near-bankruptcy and was forced to cut 80% of its staff.
The company became profitable again by 2003. In 2011, Cvent was growing by 50% a year and received $136 million of funding from New Enterprise Associates in July 2011, which, at the time, was the largest investment in a U.S. software company since 2007.
On June 13, 2012, Cvent announced the acquisition of Austin-based startup CrowdTorch, previously known as Seed Labs, for $4.2 million. Seven days later, it announced its acquisition of Portland-based application developer CrowdCompass for $10 million.
Cvent filed an S-1 with the U.S. Securities and Exchange Commission on July 8, 2013, proposing an initial public offering of 5.6 million shares. It went public on the New York Stock Exchange on August 9, 2013, at an initial price of $21. The company raised $117.6 million and received a market capitalization of more than a billion dollars. The IPO was referenced in regards to its use of the JOBS Act, which enabled the company to quickly offer an IPO.
In 2016, the company was acquired by the private equity firm Vista Equity Partners for US$1.65 billion. Ashok Trivedi, the co-founder of Mastech Digital and iGate, was an early investor in the company.
On May 23, 2018, Cvent announced that it had acquired Quickmobile, a Vancouver-based mobile event app developer.
On June 5, 2018, Cvent announced that it had acquired Kapow, an online booking platform for venues and experiences.
On October 16, 2018, Cvent announced that it had acquired Social Tables, an event diagramming, seating and collaboration platform based in Washington, D.C.
On May 22, 2019, Cvent announced that it had acquired Wedding Spot, a wedding venue sourcing platform that allows users to find venues based on budget, location, style and guest count. At the time of the acquisition, Wedding Spot, which was founded in 2013 in San Francisco, California, had partnerships with over 12,000 venues across the United States.
On June 10, 2019, Cvent announced that it had acquired mobile event technology provider DoubleDutch.
On July 20, 2021, The Wall Street Journal reported that Cvent was nearing a $5-billion-plus merger with a special-purpose acquisition company (SPAC) trading under the ticker DGNS.
Software and services
In July 2000 Cvent introduced its first SaaS product, a web-based tool for event planners to manage invitations and collect registration fees. In 2006, it introduced a product for conducting online surveys, followed two years later by the Cvent Supplier Network. The Supplier Network is a free online marketplace that connects meeting planners with venues and services. In 2009, the company began offering professional services.
An app development tool, CrowdTorch, was launched in 2009. Cvent also produces the Destination Guide, a free, 8,000-page online travel guide designed for meeting planners, with information about 800 different destinations. A "Strategic Meetings Management" product helps users manage meeting budgets.
External links
Official website
References
Companies based in McLean, Virginia
Cloud applications
Cloud computing providers
2013 initial public offerings
2016 mergers and acquisitions
Private equity portfolio companies
Software companies established in 1999
Software companies based in Virginia
Software companies of the United States
1999 establishments in Virginia

Avatar (2009 film)

Avatar (also marketed as James Cameron's Avatar) is a 2009 American epic science fiction film directed, written, produced, and co-edited by James Cameron and starring Sam Worthington, Zoe Saldana, Stephen Lang, Michelle Rodriguez, and Sigourney Weaver. The film is set in the mid-22nd century when humans are colonizing Pandora, a lush habitable moon of a gas giant in the Alpha Centauri star system, in order to mine the valuable mineral unobtanium. The expansion of the mining colony threatens the continued existence of a local tribe of Na'vi – a humanoid species indigenous to Pandora. The film's title refers to a genetically engineered Na'vi body operated from the brain of a remotely located human that is used to interact with the natives of Pandora.
Development of Avatar began in 1994, when Cameron wrote an 80-page treatment for the film. Filming was supposed to take place after the completion of Cameron's 1997 film Titanic, for a planned release in 1999; however, according to Cameron, the necessary technology was not yet available to achieve his vision of the film. Work on the language of the Na'vi began in 2005, and Cameron began developing the screenplay and fictional universe in early 2006. Avatar was officially budgeted at $237 million, due to a groundbreaking array of new visual effects Cameron achieved in cooperation with Weta Digital in Wellington. Other estimates put the cost between $280 million and $310 million for production and at $150 million for promotion. The film made extensive use of new motion capture filming techniques, and was released for traditional viewing, 3D viewing (using the RealD 3D, Dolby 3D, XpanD 3D, and IMAX 3D formats), and for "4D" experiences in selected South Korean theaters.
Avatar premiered in London on , 2009, and was released in the United States on to positive reviews, with critics highly praising its ground-breaking visual effects. During its theatrical run, the film broke several box office records and became the highest-grossing film at the time, as well as in the United States and Canada, surpassing Cameron's Titanic, which had held those records for twelve years. Avatar remained the highest-grossing film worldwide for nearly a decade until it was overtaken by Avengers: Endgame in 2019, before a Chinese re-release saw Avatar retake the top spot in March 2021. Adjusted for inflation, Avatar is the second highest-grossing movie of all time after Gone with the Wind with a total of more than $3 billion. It also became the first film to gross more than and the best-selling video title of 2010 in the United States. Avatar was nominated for nine Academy Awards, including Best Picture and Best Director, and won three, for Best Art Direction, Best Cinematography, and Best Visual Effects. The success of the film also led to electronics manufacturers releasing 3D televisions and caused 3D films to increase in popularity.
Following the film's success, Cameron signed with 20th Century Fox to produce four sequels: Avatar 2 and Avatar 3 have completed principal filming, and are scheduled to be released on December 16, 2022, and December 20, 2024, respectively; subsequent sequels are scheduled to be released on December 18, 2026, and December 22, 2028. Production took place relatively smoothly throughout the COVID-19 pandemic at Weta studios in Wellington, as New Zealand as a whole was much less affected by the pandemic than other countries. Several cast members are expected to return, including Worthington, Saldana, Lang, and Weaver.
Plot
In 2154, humans have depleted Earth's natural resources, leading to a severe energy crisis. The Resources Development Administration (RDA) mines a valuable mineral called unobtanium on Pandora, a densely forested habitable moon orbiting Polyphemus, a fictional gas giant in the Alpha Centauri star system. Pandora, whose atmosphere is poisonous to humans, is inhabited by the Na'vi, a species of , blue-skinned, sapient humanoids that live in harmony with nature and worship a mother goddess named Eywa.
To explore Pandora's biosphere, scientists use Na'vi-human hybrids called "avatars", operated by genetically matched humans. Jake Sully, a paraplegic former Marine, replaces his deceased identical twin brother as an operator of one. Dr. Grace Augustine, head of the Avatar Program, considers Sully an inadequate replacement but accepts his assignment as a bodyguard. While escorting the avatars of Grace and fellow scientist Dr. Norm Spellman, Jake's avatar is attacked by a thanator and flees into the forest, where he is rescued by Neytiri, a female Na'vi. Witnessing an auspicious sign, she takes him to her clan. Neytiri's mother Mo'at, the clan's spiritual leader, orders her daughter to initiate Jake into their society.
Colonel Miles Quaritch, head of RDA's private security force, promises Jake that the company will restore his legs if he gathers information about the Na'vi and the clan's gathering place, a giant tree called Hometree, which stands above the richest deposit of unobtanium in the area. When Grace learns of this, she transfers herself, Jake, and Norm to an outpost. Over the following three months, Jake and Neytiri fall in love as Jake grows to sympathize with the natives. After Jake is initiated into the tribe, he and Neytiri choose each other as mates. Soon afterward, Jake reveals his change of allegiance when he attempts to disable a bulldozer that threatens to destroy a sacred Na'vi site. When Quaritch shows a video recording of Jake's attack on the bulldozer to Administrator Parker Selfridge, and another in which Jake admits that the Na'vi will never abandon Hometree, Selfridge orders Hometree destroyed.
Despite Grace's argument that destroying Hometree could damage the biological neural network native to Pandora, Selfridge gives Jake and Grace one hour to convince the Na'vi to evacuate before commencing the attack. Jake confesses to the Na'vi that he was a spy, and they take him and Grace captive. Quaritch's men destroy Hometree, killing Neytiri's father (the clan chief) and many others. Mo'at frees Jake and Grace, but they are detached from their avatars and imprisoned by Quaritch's forces. Pilot Trudy Chacón, disgusted by Quaritch's brutality, frees Jake, Grace, and Norm, and airlifts them to Grace's outpost, but Grace is shot by Quaritch during the escape.
To regain the Na'vi's trust, Jake connects his mind to that of Toruk, a dragon-like predator feared and honored by the Na'vi. Jake finds the refugees at the sacred Tree of Souls and pleads with Mo'at to heal Grace. The clan attempts to transfer Grace from her human body into her avatar with the aid of the Tree of Souls, but she dies before the process can be completed. Supported by the new chief Tsu'tey, Jake unites the clan and tells them to gather all of the clans to battle the RDA. Quaritch organizes a pre-emptive strike against the Tree of Souls, believing that its destruction will demoralize the natives. On the eve of battle, Jake prays to Eywa, via a neural connection with the Tree of Souls, to intercede on behalf of the Na'vi.
During the subsequent battle, the Na'vi suffer heavy casualties, including Tsu'tey and Trudy, but are rescued when Pandoran wildlife unexpectedly join the attack and overwhelm the humans, which Neytiri interprets as Eywa's answer to Jake's prayer. Jake destroys a makeshift bomber before it can reach the Tree of Souls; Quaritch, wearing an AMP suit, escapes from his own damaged aircraft, then later finds and breaks open the avatar link unit containing Jake's human body, exposing it to Pandora's poisonous atmosphere. Quaritch prepares to slit the throat of Jake's avatar, but Neytiri kills Quaritch and saves Jake from suffocation, seeing his human form for the first time.
With the exceptions of Jake, Norm and a select few others, all humans are expelled from Pandora and sent back to Earth. Jake is permanently transferred into his avatar with the aid of the Tree of Souls.
Cast
Humans
Sam Worthington as Jake Sully, a disabled former Marine who becomes part of the Avatar Program after his twin brother is killed. His military background helps the Na'vi warriors relate to him. Cameron cast the Australian actor after a worldwide search for promising young actors, preferring relative unknowns to keep the budget down. Worthington, who was living in his car at the time, auditioned twice early in development, and he has signed on for possible sequels. Cameron felt that because Worthington had not done a major film, he would give the character "a quality that is really real". Cameron said he "has that quality of being a guy you'd want to have a beer with, and he ultimately becomes a leader who transforms the world". Worthington also briefly appears as Jake's deceased identical twin, Tommy. Cameron offered the role to Matt Damon, with a 10% stake in the film's profits, but Damon turned the film down because of his commitment to the Jason Bourne film series.
Stephen Lang as Colonel Miles Quaritch, the head of the mining operation's security detail. Fiercely consistent in his disregard for any life not recognized as human, he has a profound disregard for Pandora's inhabitants that is evident in both his actions and his language. Lang had unsuccessfully auditioned for a role in Cameron's Aliens (1986), but the director remembered Lang and sought him for Avatar. Michael Biehn, who had worked with Cameron in Aliens, The Terminator and Terminator 2: Judgment Day, was briefly considered for the role. He read the script and watched some of the 3-D footage with Cameron but was ultimately not cast.
Sigourney Weaver as Dr. Grace Augustine, an exobiologist and head of the Avatar Program. She is also Sully's mentor and an advocate of peaceful relations with the Na'vi, having set up a school to teach them English.
Michelle Rodriguez as Trudy Chacón, a combat pilot assigned to support the Avatar Program who is sympathetic to the Na'vi. Cameron had wanted to work with Rodriguez since seeing her in Girlfight.
Giovanni Ribisi as Parker Selfridge, the corporate administrator for the RDA mining operation. Although willing to destroy the Na'vi civilization to preserve the company's bottom line, he is reluctant to authorize attacks on the Na'vi that would taint his image, doing so only after Quaritch persuades him that they are necessary and will be humane. When the attacks are broadcast to the base, Selfridge displays discomfort at the violence.
Joel David Moore as Dr. Norm Spellman, a xenoanthropologist who studies plant and animal life as part of the Avatar Program. He arrives on Pandora at the same time as Jake and operates an avatar. Although he is expected to lead the diplomatic contact with the Na'vi, it turns out that Jake has the personality better suited to win the natives' respect.
Dileep Rao as Dr. Max Patel, a scientist who works in the Avatar Program and comes to support Jake's rebellion against the RDA.
Na'vi
Zoe Saldana as Neytiri, the daughter of the leaders of the Omaticaya (the Na'vi clan central to the story). She is attracted to Jake because of his bravery, though frustrated with him for what she sees as his naiveté and stupidity. She serves as Jake's love interest. The character, like all the Na'vi, was created using performance capture, and its visual aspect is entirely computer generated. Saldana has also signed on for potential sequels.
CCH Pounder as Mo'at, the Omaticaya's spiritual leader, Neytiri's mother, and consort to clan leader Eytukan.
Wes Studi as Eytukan, the Omaticaya's clan leader, Neytiri's father, and Mo'at's mate.
Laz Alonso as Tsu'tey, the finest warrior of the Omaticaya. He is heir to the chieftainship of the tribe. At the beginning of the film's story, he is betrothed to Neytiri.
Production
Origins
In 1994, director James Cameron wrote an 80-page treatment for Avatar, drawing inspiration from "every single science fiction book" he had read in his childhood as well as from adventure novels by Edgar Rice Burroughs and H. Rider Haggard. In , Cameron announced that after completing Titanic, he would film Avatar, which would make use of synthetic, or computer-generated, actors. The project would cost $100 million and involve at least six actors in leading roles "who appear to be real but do not exist in the physical world". Visual effects house Digital Domain, with which Cameron had a partnership, joined the project, which was supposed to begin production in mid-1997 for a 1999 release. However, Cameron felt that the technology had not caught up with the story and vision he intended to tell. He decided to concentrate on making documentaries and refining the technology for the next few years. It was revealed in a Bloomberg BusinessWeek cover story that 20th Century Fox had fronted $10 million to Cameron to film a proof-of-concept clip for Avatar, which he showed to Fox executives in .
In February 2006, Cameron revealed that his film Project 880 was "a retooled version of Avatar", a film that he had tried to make years earlier, citing the technological advances in the creation of the computer-generated characters Gollum, King Kong, and Davy Jones. Cameron had chosen Avatar over his project Battle Angel after completing a five-day camera test in the previous year.
Development
From January to April 2006, Cameron worked on the script and developed a culture for the film's aliens, the Na'vi. Their language was created by Dr. Paul Frommer, a linguist at USC. The Na'vi language has a lexicon of about 1000 words, with some 30 added by Cameron. The tongue's phonemes include ejective consonants (such as the "kx" in "skxawng") that are found in Amharic, and the initial "ng" that Cameron may have taken from Te Reo Māori. Actress Sigourney Weaver and the film's set designers met with Jodie S. Holt, professor of plant physiology at University of California, Riverside, to learn about the methods used by botanists to study and sample plants, and to discuss ways to explain the communication between Pandora's organisms depicted in the film.
From 2005 to 2007, Cameron worked with a handful of designers, including fantasy illustrator Wayne Barlowe and concept artist Jordu Schell, to shape the design of the Na'vi with paintings and physical sculptures when Cameron felt that 3-D brush renderings were not capturing his vision, often working together in the kitchen of Cameron's Malibu home. In , Cameron announced that he would film Avatar for a mid-2008 release and planned to begin principal photography with an established cast by . The following August, the visual effects studio Weta Digital signed on to help Cameron produce Avatar. Stan Winston, who had collaborated with Cameron in the past, joined Avatar to help with the film's designs. Production design for the film took several years. The film had two production designers and two separate art departments: one focused on the flora and fauna of Pandora, the other on human machines and human factors. In , Cameron was announced to be using his own Reality Camera System to film in 3-D. The system would use two high-definition cameras in a single camera body to create depth perception.
While these preparations were underway, Fox kept wavering in its commitment to Avatar because of its painful experience with cost overruns and delays on Cameron's previous picture, Titanic, even though Cameron rewrote the script to combine several characters and offered to cut his fee if the film flopped. Cameron installed a traffic light with the amber signal lit outside of co-producer Jon Landau's office to represent the film's uncertain future. In mid-2006, Fox told Cameron "in no uncertain terms that they were passing on this film," so he began shopping it around to other studios and approached Walt Disney Studios, showing his proof of concept to then chairman Dick Cook. However, when Disney attempted to take over, Fox exercised its right of first refusal. In , Fox finally agreed to commit to making Avatar after Ingenious Media agreed to back the film, which reduced Fox's financial exposure to less than half of the film's official $237 million budget. After Fox accepted Avatar, one skeptical Fox executive shook his head and told Cameron and Landau, "I don't know if we're crazier for letting you do this, or if you're crazier for thinking you can do this ..."
In December 2006, Cameron described Avatar as "a futuristic tale set on a planet 200 years hence ... an old-fashioned jungle adventure with an environmental conscience [that] aspires to a mythic level of storytelling". The press release described the film as "an emotional journey of redemption and revolution" and said the story is of "a wounded former Marine, thrust unwillingly into an effort to settle and exploit an exotic planet rich in biodiversity, who eventually crosses over to lead the indigenous race in a battle for survival". The story would be of an entire world complete with an ecosystem of phantasmagorical plants and creatures, and native people with a rich culture and language.
Estimates put the cost of the film at about $280–310 million to produce and an estimated $150 million for marketing, noting that about $30 million in tax credits would lessen the financial impact on the studio and its financiers. A studio spokesperson said that the budget was "$237 million, with $150 million for promotion, end of story."
Themes and inspirations
Avatar is primarily an action-adventure journey of self-discovery, set in the context of imperialism and deep ecology.
Cameron said his inspiration was "every single science fiction book I read as a kid" and that he wanted to update the style of Edgar Rice Burroughs' John Carter series. He acknowledged that Avatar shares themes with the films At Play in the Fields of the Lord, The Emerald Forest, and Princess Mononoke, which feature clashes between cultures and civilizations, and with Dances with Wolves, where a battered soldier finds himself drawn to the culture he was initially fighting against. He also cited Hayao Miyazaki's anime films such as Princess Mononoke as an influence on the ecosystem of Pandora.
In 2012, Cameron filed a 45-page legal declaration intended to "describe in great detail the genesis of the ideas, themes, storylines, and images that came to be Avatar." In addition to historical events (such as the European colonization of the Americas), his life experiences and several of his unproduced projects, Cameron drew connections between Avatar and his previous films. He cited his script and concept art for Xenogenesis, partially produced as a short film, as the basis for many of the ideas and visual designs in Avatar. He stated that Avatar's "concepts of a world mind, intelligence within nature, the idea of projecting force or consciousness using an avatar, colonization of alien planets, greedy corporate interests backed up by military force, the story of a seemingly weaker group prevailing over a technologically superior force, and the good scientist were all established and recurrent themes" from his earlier films, including Aliens, The Abyss, Rambo: First Blood Part II, The Terminator and Terminator 2: Judgment Day. He specifically mentioned the "water tentacle" in The Abyss as an example of an "avatar" that "takes on the appearance of...an alien life form...in order to bridge the cultural gap and build trust."
Cameron also cited a number of works by other creators as "reference points and sources of inspiration" for Avatar. These include two of his "favorite" films, 2001: A Space Odyssey, where mankind experiences an evolution after meeting alien life, and Lawrence of Arabia, where "an outsider...encounters and immerses into a foreign culture and then ultimately joins that group to fight other outsiders." Cameron said he became familiar with the concept of a human operating a "synthetic avatar" inside another world from George Henry Smith's short story "In the Imagicon" and Arthur C. Clarke's novel The City and the Stars. He said he learned of the term "avatar" by reading the cyberpunk novels Neuromancer by William Gibson and Islands in the Net by Bruce Sterling. The idea of a "world mind" originated in the novel Solaris by Stanislaw Lem. Cameron mentioned several other films about people interacting with "indigenous cultures" as inspiring him, including Dances With Wolves, The Man Who Would Be King, The Mission, The Emerald Forest, Medicine Man, The Jungle Book and FernGully. He also cited as inspiration the John Carter and Tarzan stories by Edgar Rice Burroughs and other adventure stories by Rudyard Kipling and H. Rider Haggard.
In a 2007 interview with Time magazine, Cameron was asked about the meaning of the term Avatar, to which he replied, "It's an incarnation of one of the Hindu gods taking a flesh form. In this film what that means is that the human technology in the future is capable of injecting a human's intelligence into a remotely located body, a biological body." Cameron also cited the Japanese cyberpunk manga and anime Ghost in the Shell, in terms of how humans can remotely control, and transfer their personalities into, alien bodies.
The look of the Na'vi – the humanoids indigenous to Pandora – was inspired by a dream that Cameron's mother had had long before he started work on Avatar. In her dream, she saw a blue-skinned woman 12 feet (3.7 m) tall, which he thought was "kind of a cool image". He also said, "I just like blue. It's a good color ... plus, there's a connection to the Hindu deities, which I like conceptually." He included similar creatures in his first screenplay (written in 1976 or 1977), which featured a planet with a native population of "gorgeous" tall blue aliens. The Na'vi were based on them.
For the love story between characters Jake and Neytiri, Cameron applied a star-crossed love theme, which he said was in the tradition of Romeo and Juliet. He acknowledged its similarity to the pairing of Jack and Rose from his film Titanic. An interviewer stated, "Both couples come from radically different cultures that are contemptuous of their relationship and are forced to choose sides between the competing communities." Cameron described Neytiri as his "Pocahontas," saying that his plotline followed the historical story of a "white outsider [who] falls in love with the chief's daughter, who becomes his guide to the tribe and to their special bond with nature." Cameron felt that whether or not the Jake and Neytiri love story would be perceived as believable partially hinged on the physical attractiveness of Neytiri's alien appearance, which was developed by considering her appeal to the all-male crew of artists. Although Cameron felt Jake and Neytiri do not fall in love right away, their portrayers (Worthington and Saldana) felt the characters did. Cameron said the two actors "had a great chemistry" during filming.
For the film's floating "Hallelujah Mountains", the designers drew inspiration from "many different types of mountains, but mainly the karst limestone formations in China." According to production designer Dylan Cole, the fictional floating rocks were inspired by Huangshan (also known as Yellow Mountain), Guilin, and Zhangjiajie, among other mountains around the world. Cameron noted the influence of these Chinese peaks on the design of the floating mountains.
To create the interiors of the human mining colony on Pandora, production designers visited the Noble Clyde Boudreaux oil platform in the Gulf of Mexico during . They photographed, measured and filmed every aspect of the platform, which was later replicated on-screen with photorealistic CGI during post-production.
Cameron said that he wanted to make "something that has this spoonful of sugar of all the action and the adventure and all that" but also have a conscience "that maybe in the enjoying of it makes you think a little bit about the way you interact with nature and your fellow man". He added that "the Na'vi represent something that is our higher selves, or our aspirational selves, what we would like to think we are" and that even though there are good humans within the film, the humans "represent what we know to be the parts of ourselves that are trashing our world and maybe condemning ourselves to a grim future".
Cameron acknowledges that Avatar implicitly criticizes the United States' role in the Iraq War and the impersonal nature of mechanized warfare in general. In reference to the use of the term shock and awe in the film, Cameron said, "We know what it feels like to launch the missiles. We don't know what it feels like for them to land on our home soil, not in America." He said in later interviews, "... I think it's very patriotic to question a system that needs to be corralled ..." and, "The film is definitely not anti-American."
A scene in the film portrays the violent destruction of the towering Na'vi Hometree, which collapses in flames after a missile attack, coating the landscape with ash and floating embers. Asked about the scene's resemblance to the September 11 attacks on the World Trade Center, Cameron said he had been "surprised at how much it did look like [September 11]".
Filming
Principal photography for Avatar began in in Los Angeles and Wellington. Cameron described the film as a hybrid with a full live-action shoot in combination with computer-generated characters and live environments. "Ideally at the end of the day the audience has no idea which they're looking at," Cameron said. The director indicated that he had already worked four months on nonprincipal scenes for the film. The live action was shot with a modified version of the proprietary digital 3-D Fusion Camera System, developed by Cameron and Vince Pace. In , Fox had announced that 3-D filming for Avatar would be done at 24 frames per second despite Cameron's strong opinion that a 3-D film requires a higher frame rate to make strobing less noticeable. According to Cameron, the film is composed of 60% computer-generated elements and 40% live action, as well as traditional miniatures.
Motion-capture photography lasted 31 days at the Hughes Aircraft stage in Playa Vista in Los Angeles. Live-action photography began in at Stone Street Studios in Wellington and was scheduled to last 31 days. More than a thousand people worked on the production. In preparation for filming, all of the actors underwent professional training specific to their characters, such as archery, horseback riding, firearm use, and hand-to-hand combat. They also received language and dialect training in the Na'vi language created for the film. Before shooting began, Cameron sent the cast to the Hawaiian tropical rainforests to get a feel for a rainforest setting before they worked on the soundstage.
During filming, Cameron made use of his virtual camera system, a new way of directing motion-capture filmmaking. The system shows the actors' virtual counterparts in their digital surroundings in real time, allowing the director to adjust and direct scenes just as if shooting live action. According to Cameron, "It's like a big, powerful game engine. If I want to fly through space, or change my perspective, I can. I can turn the whole scene into a living miniature and go through it on a 50 to 1 scale." Using conventional techniques, the complete virtual world cannot be seen until the motion-capture of the actors is complete. Cameron said this process does not diminish the value or importance of acting. On the contrary, because there is no need for repeated camera and lighting setups, costume fittings and make-up touch-ups, scenes do not need to be interrupted repeatedly. Cameron described the system as a "form of pure creation where if you want to move a tree or a mountain or the sky or change the time of day, you have complete control over the elements".
Cameron gave fellow directors Steven Spielberg and Peter Jackson a chance to test the new technology. Spielberg said, "I like to think of it as digital makeup, not augmented animation ... Motion capture brings the director back to a kind of intimacy that actors and directors only know when they're working in live theater." Spielberg and George Lucas were also able to visit the set to watch Cameron direct with the equipment.
To film the shots where CGI interacts with live action, a unique camera referred to as a "simulcam" was used, a merger of the 3-D fusion camera and the virtual camera systems. While filming live action in real time with the simulcam, the CGI images captured with the virtual camera or designed from scratch, are superimposed over the live action images as in augmented reality and shown on a small monitor, making it possible for the director to instruct the actors how to relate to the virtual material in the scene.
Due to Cameron's personal convictions about climate change, he allowed only plant-based (vegan) food to be served on set.
Visual effects
A number of innovative visual effects techniques were used during production. According to Cameron, work on the film had been delayed since the 1990s to allow the techniques to reach the necessary degree of advancement to adequately portray his vision of the film. The director planned to make use of photorealistic computer-generated characters, created using new motion capture animation technologies he had been developing in the 14 months leading up to .
Innovations include a new system for lighting massive areas like Pandora's jungle, a motion-capture stage or "volume" six times larger than any previously used, and an improved method of capturing facial expressions, enabling full performance capture. To achieve the face capturing, actors wore individually made skull caps fitted with a tiny camera positioned in front of the actors' faces; the information collected about their facial expressions and eyes is then transmitted to computers. According to Cameron, the method allows the filmmakers to transfer 100% of the actors' physical performances to their digital counterparts.
Besides the performance capture data which were transferred directly to the computers, numerous reference cameras gave the digital artists multiple angles of each performance. A technically challenging scene was near the end of the film when the computer-generated Neytiri held the live action Jake in human form, and attention was given to the details of the shadows and reflected light between them.
The lead visual effects company was Weta Digital in Wellington, at one point employing 900 people to work on the film. Because of the huge amount of data that needed to be stored, cataloged and made available to everybody involved, even on the other side of the world, a new cloud computing and digital asset management (DAM) system named Gaia was created by Microsoft especially for Avatar, allowing the crews to keep track of and coordinate all stages of the digital processing. To render Avatar, Weta used a server farm of 4,000 Hewlett-Packard servers with 35,000 processor cores, 104 terabytes of RAM and three petabytes of network-attached storage, running Ubuntu Linux, the Grid Engine cluster manager, and two Pixar tools: the RenderMan renderer and the Alfred queue management system. The render farm occupied the 193rd to 197th spots in the TOP500 list of the world's most powerful supercomputers. A new texturing and painting software system, called Mari, was developed by The Foundry in cooperation with Weta. Creating the Na'vi characters and the virtual world of Pandora required over a petabyte of digital storage, and each minute of the final footage for Avatar occupies 17.28 gigabytes of storage. Rendering a single frame of the film often took several hours. To help finish the special effects sequences on time, a number of other companies were brought on board, including Industrial Light & Magic, which worked alongside Weta Digital to create the battle sequences. ILM was responsible for the visual effects for many of the film's specialized vehicles and devised a new way to make CGI explosions. Joe Letteri was the film's visual effects general supervisor.
Music and soundtrack
Composer James Horner scored the film, his third collaboration with Cameron after Aliens and Titanic. Horner recorded parts of the score with a small chorus singing in the alien language Na'vi in .
He also worked with Wanda Bryant, an ethnomusicologist, to create a music culture for the alien race.
The first scoring sessions were planned to take place in early 2009. During production, Horner promised Cameron that he would not work on any other project except for Avatar and reportedly worked on the score from four in the morning until ten at night throughout the process. He stated in an interview, "Avatar has been the most difficult film I have worked on and the biggest job I have undertaken." Horner composed the score as two different scores merged into one. He first created a score that reflected the Na'vi way of sound and then combined it with a separate "traditional" score to drive the film.
British singer Leona Lewis was chosen to sing the theme song for the film, called "I See You". An accompanying music video, directed by Jake Nava, premiered , 2009, on MySpace.
Marketing
Promotions
The first photo of the film was released on , 2009, and Empire released exclusive images from the film in its October issue. Cameron, producer Jon Landau, Zoe Saldana, Stephen Lang, and Sigourney Weaver appeared at a panel, moderated by Tom Rothman, at the 2009 San Diego Comic-Con on . Twenty-five minutes of footage was screened in Dolby 3D.
Weaver and Cameron appeared at additional panels to promote the film, speaking on the 23rd and 24th respectively. James Cameron announced at the Comic-Con Avatar Panel that will be 'Avatar Day'. On this day, the trailer was released in all theatrical formats. The official game trailer and toy line of the film were also unveiled on this day.
The 129-second trailer was released online on , 2009.
A new 210-second trailer premiered in theaters on , 2009, and soon after premiered online on Yahoo! on , 2009, to positive reviews.
An extended version screened in IMAX 3D received overwhelmingly positive reviews. The Hollywood Reporter said that audience expectations were colored by "the [same] establishment skepticism that preceded Titanic" and suggested the showing reflected a desire for original storytelling. The teaser was among the most-viewed trailers in the history of film marketing, reaching first place among all trailers viewed on Apple.com, with 4 million views.
On October 30, to celebrate the opening of the first 3-D cinema in Vietnam, Fox allowed Megastar Cinema to screen an exclusive 16 minutes of Avatar for members of the press. The three-and-a-half-minute trailer of the film premiered live on , 2009, during a Dallas Cowboys football game at Cowboys Stadium in Arlington, Texas, on the Diamond Vision screen, one of the world's largest video displays, and to TV audiences viewing the game on Fox. It was said to be the largest live motion picture trailer viewing in history.
The Coca-Cola Company collaborated with Fox to launch a worldwide marketing campaign to promote the film. The highlight of the campaign was the website AVTR.com. Specially marked bottles and cans of Coca-Cola Zero, when held in front of a webcam, enabled users to interact with the website's 3-D features using augmented reality (AR) technology. The film was heavily promoted in an episode of the Fox Network series Bones in the episode "The Gamer In The Grease" (Season 5, Episode 9). Avatar star Joel David Moore has a recurring role on the program, and is seen in the episode anxiously awaiting the release of the film. A week prior to the American release, Zoe Saldana promoted the film on Adult Swim when she was interviewed by an animated Space Ghost. McDonald's had a promotion mentioned in television commercials in Europe called "Avatarize yourself", which encouraged people to go to the website set up by Oddcast, and use a photograph of themselves to change into a Na'vi.
Books
Avatar: A Confidential Report on the Biological and Social History of Pandora, a 224-page book in the form of a field guide to the film's fictional setting, the planet Pandora, was released by Harper Entertainment on , 2009.
It is presented as a compilation of data collected by the humans about Pandora and the life on it, written by Maria Wilhelm and Dirk Mathison. HarperFestival also released Wilhelm's 48-page James Cameron's Avatar: The Reusable Scrapbook for children. The Art of Avatar was released on , 2009, by Abrams Books. The book features detailed production artwork from the film, including production sketches, illustrations by Lisa Fitzpatrick, and film stills. Producer Jon Landau wrote the foreword, Cameron wrote the epilogue, and director Peter Jackson wrote the preface. In , Abrams Books also released The Making of Avatar, a 272-page book that detailed the film's production process and contains over 500 color photographs and illustrations.
In a 2009 interview, Cameron said that he planned to write a novel version of Avatar after the film was released. In , producer Jon Landau stated that Cameron plans a prequel novel for Avatar that will "lead up to telling the story of the movie, but it would go into much more depth about all the stories that we didn't have time to deal with", saying that "Jim wants to write a novel that is a big, epic story that fills in a lot of things". In August 2013 it was announced that Cameron hired Steven Gould to pen four standalone novels to expand the Avatar universe.
Video game
Cameron chose Ubisoft Montreal to create an Avatar game for the film in 2007. The filmmakers and game developers collaborated heavily, and Cameron decided to include some of Ubisoft's vehicle and creature designs in the film. James Cameron's Avatar: The Game was released on , 2009, for most home video game consoles (PlayStation 3, Xbox 360, Wii, Nintendo DS, iPhone) and Microsoft Windows, and for PlayStation Portable. A second game, Avatar: Frontiers of Pandora, was under development as of 2021.
Action figures and postage stamps
Mattel Toys announced in December 2009 that it would be introducing a line of Avatar action figures.
Each action figure would be made with a 3-D web tag, called an i-TAG, that consumers could scan using a webcam, revealing unique on-screen content exclusive to each specific action figure. A series of toys representing six different characters from the film was also distributed globally in McDonald's Happy Meals.
In December 2009, the French postal service La Poste released a special limited edition stamp based on Avatar, coinciding with the film's worldwide release.
Release and reception
Initial screening
Avatar premiered in London on , 2009, and was released theatrically worldwide from to 18. The film was originally set for release on , 2009, during filming, but was pushed back to allow more post-production time (the last shots were delivered in November) and to give theaters worldwide more time to install 3D projectors. Cameron stated that the film's aspect ratio would be 1.78:1 for 3D screenings and that a 2.39:1 image would be extracted for 2D screenings. However, a 3D 2.39:1 extract was approved for use with constant-image-height screens (i.e. screens that increase in width to display 2.39:1 films). During a 3D preview showing in Germany on , the film's DRM copy-protection system failed, and some of the copies delivered could not be watched at all in the theaters. The problems were fixed in time for the public premiere. Avatar was released in a total of 3,457 theaters in the US, of which 2,032 ran it in 3D. In total, 90% of all advance ticket sales for Avatar were for 3D screenings.
Internationally, Avatar opened on a total of 14,604 screens in 106 territories, of which 3,671 showed the film in 3D (producing 56% of the first weekend gross). The film was simultaneously presented in IMAX 3D format, opening in 178 theaters in the United States on . The international IMAX release included 58 theaters beginning on , with 25 more to be added in the coming weeks. The IMAX release was the company's widest to date, a total of 261 theaters worldwide. The previous IMAX record opening was Harry Potter and the Half-Blood Prince, which opened in 161 IMAX theaters in the US and about 70 internationally. 20th Century Fox Korea adapted and later released Avatar in a 4D version, which included "moving seats, smells of explosives, sprinkling water, laser lights and wind".
Box office
General
Avatar was released internationally on more than 14,000 screens. It earned $3,537,000 from midnight screenings domestically (United States and Canada), with the initial 3D release limited to 2,200 screens. The film earned $26,752,099 on its opening day, and $77,025,481 over its opening weekend, making it the second-largest December opening ever behind I Am Legend, the largest domestic opening weekend for a film not based on a franchise (topping The Incredibles), the highest opening weekend for a film entirely in 3D (breaking Up's record), the highest opening weekend for an environmentalist film (breaking The Day After Tomorrow's record), and the 40th-largest opening weekend in North America, despite a blizzard that blanketed the East Coast of the United States and reportedly hurt its opening weekend results. The film also set an IMAX opening weekend record, with 178 theaters generating approximately $9.5 million, 12% of the film's $77 million (at the time) North American gross on less than 3% of the screens.
International markets generating opening weekend tallies of at least $10 million included Russia ($19.7 million), France ($17.4 million), the UK ($13.8 million), Germany ($13.3 million), South Korea ($11.7 million), Australia ($11.5 million), and Spain ($11.0 million). Avatar's worldwide gross was US$241.6 million after five days, the ninth-largest opening-weekend gross of all time, and the largest for a non-franchise, non-sequel and original film. The 58 international IMAX screens generated an estimated $4.1 million during the opening weekend.
Revenues in the film's second weekend decreased by only 1.8% in domestic markets, a rare occurrence; it earned $75,617,183 to remain in first place at the box office, recording what was then the biggest second weekend of all time. The film experienced another marginal decrease in its third weekend, dropping 9.4% to $68,490,688 domestically and remaining in first place to set a third-weekend record. Avatar crossed the $1 billion mark on the 19th day of its international release, making it the first film to reach this mark in only 19 days. It became the fifth film to gross more than $1 billion worldwide, and the only film of 2009 to do so. In its fourth weekend, Avatar continued to lead the box office domestically, setting a new all-time fourth-weekend record of $50,306,217, and becoming the highest-grossing 2009 release in the United States. In the film's fifth weekend, it set the Martin Luther King Day weekend record, grossing $54,401,446, and set a fifth-weekend record with a take of $42,785,612. It held the top spot to set the sixth- and seventh-weekend records, earning $34,944,081 and $31,280,029 respectively. It was the fastest film to gross $600 million domestically, doing so on its 47th day in theaters.
On , it became the first film to earn over worldwide, and it became the first film to gross over in the U.S. and Canada, on , after 72 days of release. It remained at number one at the domestic box office for seven consecutive weeks – the most consecutive No. 1 weekends since Titanic spent 15 weekends at No.1 in 1997 and 1998 – and also spent 11 consecutive weekends at the top of the box office outside the United States and Canada, breaking the record of nine consecutive weekends set by Pirates of the Caribbean: Dead Man's Chest. By the end of its first theatrical release Avatar had grossed $749,766,139 in the U.S. and Canada, and $ in other territories, for a worldwide total of $.
Including the revenue from a re-release of Avatar featuring extended footage, Avatar grossed $760,507,625 in the U.S. and Canada, and $2,029,172,169 in other countries, for a worldwide total of $2,789,679,794. Avatar set a number of box office records during its release: on , 2010, it surpassed Titanic's worldwide gross to become the highest-grossing film of all time worldwide, 41 days after its international release and just two days after taking the foreign box office record. On , 47 days after its domestic release, Avatar surpassed Titanic to become the highest-grossing film of all time in Canada and the United States. It became the highest-grossing film of all time in at least 30 other countries and is the first film to earn over $2 billion in foreign box office receipts.
IMAX ticket sales account for $243.3 million of its worldwide gross, more than double the previous record.
Box Office Mojo estimates that after adjusting for the rise in average ticket prices, Avatar would be the 14th-highest-grossing film of all time in North America. Box Office Mojo also observes that the higher ticket prices for 3D and IMAX screenings have had a significant impact on Avatar's gross; it estimated, on , 2010, that Avatar had sold approximately tickets in North American theaters, more than any other film since 1999's Star Wars Episode I: The Phantom Menace. On a worldwide basis, when Avatar's gross stood at $2 billion just 35 days into its run, The Daily Telegraph estimated its gross was surpassed only by Gone with the Wind ($3.0 billion), Titanic ($2.9 billion), and Star Wars ($2.2 billion) after adjusting for inflation to 2010 prices, with Avatar ultimately winding up with $2.8 billion after subsequent re-releases. Reuters even placed it ahead of Titanic after adjusting the global total for inflation. The 2015 edition of Guinness World Records lists Avatar only behind Gone with the Wind in terms of adjusted grosses worldwide.
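The inflation adjustment described above amounts to simple ticket-price arithmetic: a historical gross is scaled by the ratio of today's average ticket price to the average price at the time, which is equivalent to comparing estimated admissions. A minimal sketch of that arithmetic, using illustrative figures rather than the actual averages these outlets used:

```python
# Back-of-the-envelope box-office arithmetic. The prices below are
# illustrative only, not the official averages used by Box Office Mojo.

def estimated_tickets(gross_usd, avg_ticket_price_usd):
    """Estimate admissions by dividing gross receipts by average ticket price."""
    return gross_usd / avg_ticket_price_usd

def adjusted_gross(gross_usd, price_then, price_now):
    """Restate a historical gross at today's average ticket price."""
    return gross_usd * (price_now / price_then)

# A $600M gross at an average price of $7.50 per ticket:
print(estimated_tickets(600_000_000, 7.50))   # 80000000.0 admissions
```

Because 3D and IMAX screenings carry higher prices, a 3D-heavy gross implies fewer admissions per dollar, which is why raw grosses overstate Avatar's attendance relative to older films.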
Commercial analysis
Before its release, various film critics and fan communities predicted the film would be a significant disappointment at the box office, in line with predictions made for Cameron's previous blockbuster Titanic. Criticism ranged from Avatar's budget to its concept and its use of 3-D "blue cat people". Slate magazine's Daniel Engber complimented the 3D effects, but criticized them for reminding him of certain CGI characters from the Star Wars prequel films and for having the "uncanny valley" effect. The New York Times noted that 20th Century Fox executives had decided to release Alvin and the Chipmunks: The Squeakquel alongside Avatar, calling it a "secret weapon" to cover any unforeseeable losses at the box office.
Box office analysts, on the other hand, estimated that the film would be a box office success. "The holy grail of 3-D has finally arrived," said an analyst for Exhibitor Relations. "This is why all these 3-D venues were built: for Avatar. This is the one. The behemoth." The "cautionary estimate" was that Avatar would bring in around $60 million in its opening weekend. Others guessed higher. There were also analysts who believed that the film's three-dimensionality would help its box office performance, given that recent 3D films had been successful.
Cameron said he felt the pressure of the predictions, but that pressure is good for film-makers. "It makes us think about our audiences and what the audience wants," he stated. "We owe them a good time. We owe them a piece of good entertainment." Although he felt Avatar would appeal to everyone and that the film could not afford to have a target demographic, he especially wanted hard-core science-fiction fans to see it: "If I can just get 'em in the damn theater, the film will act on them in the way it's supposed to, in terms of taking them on an amazing journey and giving them this rich emotional experience." Cameron was aware of the sentiment that Avatar would need significant "repeat business" just to make up for its budget and achieve box office success, and believed Avatar could inspire the same "sharing" reaction as Titanic. He said that film worked because, "When people have an experience that's very powerful in the movie theatre, they want to go share it. They want to grab their friend and bring them, so that they can enjoy it. They want to be the person to bring them the news that this is something worth having in their life."
After the film's release and unusually strong box office performance over its first two weeks, it was debated as the one film capable of surpassing Titanic's worldwide gross, and its continued strength perplexed box office analysts. Other films in recent years had been cited as contenders for surpassing Titanic, such as 2008's The Dark Knight, but Avatar was considered the first film with a genuine chance to do so, and higher ticket prices for 3D screenings did not fully explain its success to box office analysts. "Most films are considered to be healthy if they manage anything less than a 50% drop from their first weekend to their second. Dipping just 11% from the first to the third is unheard of," said Paul Dergarabedian, president of box-office analysis for Hollywood.com. "This is just unprecedented. I had to do a double take. I thought it was a miscalculation." Analysts predicted second place for the film's worldwide gross, but most were uncertain about it surpassing Titanic because "Today's films flame out much faster than they did when Titanic was released." Brandon Gray, president of Box Office Mojo, believed in the film's chances of becoming the highest-grossing film of all time, though he also believed it was too early to surmise because it had only played during the holidays. He said, "While Avatar may beat Titanic's record, it will be tough, and the film is unlikely to surpass Titanic in attendance. Ticket prices were about $3 cheaper in the late 1990s." Cameron said he did not think it was realistic to "try to topple Titanic off its perch" because it "just struck some kind of chord" and there had been other good films in recent years. He changed his prediction by mid-January. "It's gonna happen. It's just a matter of time," he said.
Although analysts have been unable to agree that Avatar's success is attributable to one primary factor, several explanations have been advanced. First, January is historically "the dumping ground for the year's weakest films", and this also applied to 2010.
Cameron himself said he decided to open the film in December so that it would have less competition from then to January. Titanic capitalized on the same January predictability, and earned most of its gross in 1998. Additionally, Avatar established itself as a "must-see" event. Gray said, "At this point, people who are going to see Avatar are going to see Avatar and would even if the slate was strong." Marketing the film as a "novelty factor" also helped. Fox positioned the film as a cinematic event that should be seen in the theaters. "It's really hard to sell the idea that you can have the same experience at home," stated David Mumpower, an analyst at BoxOfficeProphets.com. The "Oscar buzz" surrounding the film and international viewings helped. "Two-thirds of Titanic's haul was earned overseas, and Avatar [tracked] similarly ... Avatar opened in 106 markets globally and was No. 1 in all of them", and the markets "such as Russia, where Titanic saw modest receipts in 1997 and 1998, are white-hot today" with "more screens and moviegoers" than before.
According to Variety, films in 3D accumulated $1.3 billion in 2009, "a threefold increase over 2008 and more than 10% of the total 2009 box-office gross". The increased ticket price – an average of $2 to $3 per ticket in most markets – helped the film. Likewise, Entertainment Weekly attributed the film's success to 3D glasses, but also to its "astronomic word-of-mouth". Not only do some theaters charge up to $18.50 for IMAX tickets, but "the buzz" created by the new technology was the possible cause for sold-out screenings. Gray said Avatar having no basis in previously established material makes its performance remarkable and even more impressive. "The movie might be derivative of many movies in its story and themes," he said, "but it had no direct antecedent like the other top-grossing films: Titanic (historical events), the Star Wars movies (an established film franchise), or The Lord of the Rings (literature). It was a tougher sell ..." The Hollywood Reporter estimated that after a combined production and promotion cost of between $387–437 million, the film turned a net profit of $1.2 billion.
Critical reception
On review aggregator Rotten Tomatoes, 81% of 322 reviews are positive, and the average rating is 7.4/10. The site's consensus reads, "It might be more impressive on a technical level than as a piece of storytelling, but Avatar reaffirms James Cameron's singular gift for imaginative, absorbing filmmaking." On Metacritic — which assigns a weighted mean score — the film has a score of 83 out of 100 based on 35 critics, indicating "universal acclaim". Audiences polled by CinemaScore gave the film an average grade of "A" on an A+ to F scale. Every demographic surveyed was reported to give this rating. These polls also indicated that the main draw of the film was its use of 3D.
Roger Ebert of the Chicago Sun-Times called the film "extraordinary" and gave it four stars out of four. "Watching Avatar, I felt sort of the same as when I saw Star Wars in 1977," he said, adding that like Star Wars and The Lord of the Rings: The Fellowship of the Ring, the film "employs a new generation of special effects" and it "is not simply a sensational entertainment, although it is that. It's a technical breakthrough. It has a flat-out Green and anti-war message".
A. O. Scott of At The Movies also compared his viewing of the film to the first time he viewed Star Wars, and he said "although the script is a little bit ... obvious," it was "part of what made it work". Todd McCarthy of Variety praised the film, saying "The King of the World sets his sights on creating another world entirely in Avatar, and it's very much a place worth visiting." Kirk Honeycutt of The Hollywood Reporter gave the film a positive review. "The screen is alive with more action and the soundtrack pops with more robust music than any dozen sci-fi shoot-'em-ups you care to mention," he stated. Peter Travers of Rolling Stone awarded Avatar a three-and-a-half out of four star rating and wrote in his print review "It extends the possibilities of what movies can do. Cameron's talent may just be as big as his dreams." Richard Corliss of Time magazine thought that the film was "the most vivid and convincing creation of a fantasy world ever seen in the history of moving pictures." Kenneth Turan of the Los Angeles Times thought the film has "powerful" visual accomplishments but "flat dialogue" and "obvious characterization". James Berardinelli of ReelViews praised the film and its story, giving it four out of four stars; he wrote "In 3-D, it's immersive – but the traditional film elements – story, character, editing, theme, emotional resonance, etc. – are presented with sufficient expertise to make even the 2-D version an engrossing 2-hour experience."
Avatar's underlying social and political themes attracted attention. Armond White of the New York Press wrote that Cameron used "villainous American characters" to "misrepresent facets of militarism, capitalism, and imperialism". Russell D. Moore of The Christian Post concluded that "propaganda exists in the film" and stated "If you can get a theater full of people in Kentucky to stand and applaud the defeat of their country in war, then you've got some amazing special effects." Adam Cohen of The New York Times was more positive about the film, calling its anti-imperialist message "a 22nd-century version of the American colonists vs. the British, India vs. the Raj, or Latin America vs. United Fruit". Ross Douthat of The New York Times opined that the film is "Cameron's long apologia for pantheism [...] Hollywood's religion of choice for a generation now", while Saritha Prabhu of The Tennessean called the film a "misportrayal of pantheism and Eastern spirituality in general", and Maxim Osipov of The Hindustan Times, on the contrary, commended the film's message for its overall consistency with the teachings of Hinduism in the Bhagavad Gita. Annalee Newitz of io9 concluded that Avatar is another film that has the recurring "fantasy about race" whereby "some white guy" becomes the "most awesome" member of a non-white culture. Michael Phillips of the Chicago Tribune called Avatar "the season's ideological Rorschach blot", while Miranda Devine of The Sydney Morning Herald thought that "It [was] impossible to watch Avatar without being banged over the head with the director's ideological hammer." Nidesh Lawtoo believed that an essential, yet less visible social theme that contributed to Avatar's success concerns contemporary fascinations with virtual avatars and "the transition from the world of reality to that of virtual reality".
Critics and audiences have cited similarities with other films, literature or media, describing the perceived connections in ways ranging from simple "borrowing" to outright plagiarism. Ty Burr of The Boston Globe called it "the same movie" as Dances with Wolves. Like Dances with Wolves, Avatar has been characterized as being a "white savior" movie, in which a "backwards" native people is impotent without the leadership of a member of the invading white culture. Parallels to the concept and use of an avatar are in Poul Anderson's 1957 novelette "Call Me Joe", in which a paralyzed man uses his mind from orbit to control an artificial body on Jupiter. Cinema audiences in Russia have noted that Avatar has elements in common with the 1960s Noon Universe novels by Arkady and Boris Strugatsky, which are set in the 22nd century on a forested world called Pandora with a sentient indigenous species called the Nave. Various reviews have compared Avatar to the films FernGully: The Last Rainforest, Pocahontas and The Last Samurai. NPR's Morning Edition has compared the film to a montage of tropes, with one commentator stating that Avatar was made by "mixing a bunch of film scripts in a blender". Gary Westfahl wrote that "the science fiction story that most closely resembles Avatar has to be Ursula Le Guin's novella The Word for World Is Forest (1972), another epic about a benevolent race of alien beings who happily inhabit dense forests while living in harmony with nature until they are attacked and slaughtered by invading human soldiers who believe that the only good gook is a dead gook." The science fiction writer and editor Gardner Dozois said that along with the Anderson and Le Guin stories, the "mash-up" included Alan Dean Foster's 1975 novel, Midworld. Some sources saw similarities to the artwork of Roger Dean, which featured fantastic images of floating rock formations and dragons. 
In 2013, Dean sued Cameron and Fox, claiming that Pandora was inspired by 14 of his images. Dean sought damages of $50m. Dean's case was dismissed in 2014, and The Hollywood Reporter noted that Cameron has won multiple Avatar idea-theft cases.
Avatar received compliments from filmmakers, with Steven Spielberg praising it as "the most evocative and amazing science-fiction movie since Star Wars" and others calling it "audacious and awe inspiring", "master class", and "brilliant". Art director-turned-filmmaker Roger Christian is also a noted fan of the film. On the other hand, Duncan Jones said: "It's not in my top three James Cameron films. ... [A]t what point in the film did you have any doubt what was going to happen next?". For French filmmaker Luc Besson, Avatar opened the doors for him to create an adaptation of the graphic novel series Valérian and Laureline that technologically supports the scope of its source material; Besson even threw his original script in the trash and redid it after seeing the film. TIME ranked Avatar number 3 in its list of "The 10 Greatest Movies of the Millennium (Thus Far)", also earning it a spot on the magazine's All-Time 100 list, and IGN listed Avatar as number 22 on its list of the top 25 sci-fi movies of all time.
Accolades
At the 82nd Academy Awards, Avatar won Best Art Direction, Best Cinematography, and Best Visual Effects, and was nominated for a total of nine awards, including Best Picture and Best Director. At the 67th Golden Globe Awards, Avatar won Best Motion Picture – Drama and Best Director, and was nominated for two others. At the 36th Saturn Awards, Avatar won all ten awards it was nominated for: Best Science Fiction Film, Best Actor, Best Actress, Best Supporting Actor, Best Supporting Actress, Best Director, Best Writing, Best Music, Best Production Design and Best Special Effects.
The New York Film Critics Online honored the film with its Best Picture award. The film also won the Critics' Choice Awards of the Broadcast Film Critics Association for Best Action Film and several technical categories, out of nine nominations. It won two of the St. Louis Film Critics awards: Best Visual Effects and Most Original, Innovative or Creative Film. The film also won the British Academy of Film and Television Arts (BAFTA) award for Production Design and Special Visual Effects, and was nominated for six others, including Best Film and Director. The film has received numerous other major awards, nominations and honors.
Special Edition re-release
In July 2010, Cameron confirmed that there would be an extended theatrical re-release of the film on , 2010, exclusively in 3D theaters and IMAX 3D. Avatar: Special Edition includes an additional nine minutes of footage, all of which is CG, including an extension of the sex scene and various other scenes that were cut from the original theatrical film. This extended re-release resulted in the film's run time approaching the current IMAX platter maximum of 170 minutes, thereby leaving less time for the end credits. Cameron stated that the nine minutes of added scenes cost more than a minute to produce and finish. During its 12-week re-release, Avatar: Special Edition grossed an additional $10.74 million in North America and $22.46 million overseas for a worldwide total of $33.2 million.
Extended home media release
20th Century Fox Home Entertainment released the film on DVD and Blu-ray in the US on , 2010, and in the UK on . The US release was not on a Tuesday, as is the norm, but was timed to coincide with Earth Day. The first DVD and Blu-ray release does not contain any supplemental features other than the theatrical film and the disc menu, in order to make space for optimal picture and sound. The release also preserves the film's native 1.78:1 (16:9) format, as Cameron felt that was the best format in which to watch the film. The Blu-ray disc contains DRM (BD+ 5) which some Blu-ray players might not support without a firmware update.
Avatar set a first-day launch record in the U.S. for Blu-ray sales at 1.5 million units sold, breaking the record previously held by The Dark Knight (600,000 units sold). First-day DVD and Blu-ray sales combined were over four million units sold. In its first four days of release, sales of Avatar on Blu-ray reached 2.7 million in the United States and Canada, overtaking The Dark Knight to become the best-selling Blu-ray release ever in the region. The release broke the Blu-ray sales record in the UK the following week. In its first three weeks of release, the film sold a total of DVD and Blu-ray discs combined, a new record for sales in that period. As of , 2012, DVD sales (not including Blu-ray) totaled over units sold with in revenue. Avatar retained its record as the top-selling Blu-ray in the US market until January 2015, when it was surpassed by Disney's Frozen.
The Avatar three-disc Extended Collector's Edition on DVD and Blu-ray was released on , 2010. Three different versions of the film are present on the discs: the original theatrical cut (162 minutes), the special edition cut (170 minutes), and a collector's extended cut (178 minutes). The DVD set spreads the film across two discs, while the Blu-ray set presents it on a single disc. The collector's extended cut contains 8 more minutes of footage, thus making it 16 minutes longer than the original theatrical cut. Cameron mentioned, "you can sit down, and in a continuous screening of the film, watch it with the Earth opening". He stated the "Earth opening" is an additional 4 minutes of scenes that were in the film for much of its production but were ultimately cut before the film's theatrical release. The release also includes an additional 45 minutes of deleted scenes and other extras.
Cameron initially stated that Avatar would be released in 3D around , but the studio issued a correction: "3-D is in the conceptual stage and Avatar will not be out on 3D Blu-ray in November." In , Fox stated that the 3D version would be released some time in 2011. It was later revealed that Fox had given Panasonic an exclusive license for the 3D Blu-ray version and only with the purchase of a Panasonic 3DTV. The length of Panasonic's exclusivity period is stated to last until . On , Cameron stated that the standalone 3D Blu-ray would be the final version of the film's home release and that it was, "maybe one, two years out". On Christmas Eve 2010, Avatar had its 3D television world premiere on Sky.
On August 13, 2012, Cameron announced on Facebook that Avatar would be released globally on Blu-ray 3D. The Blu-ray 3D version was finally released on October 16, 2012.
Sequels
Two sequels to Avatar were initially confirmed after the success of the first film; this number was subsequently expanded to four. Their respective release dates were previously December 17, 2021, December 22, 2023, December 19, 2025, and December 17, 2027. Due to the impact of the COVID-19 pandemic on cinema in 2020, the four Avatar sequels' releases were delayed; their respective release dates are currently December 16, 2022, December 20, 2024, December 18, 2026, and December 22, 2028. Cameron is directing, producing and co-writing all four; Josh Friedman, Rick Jaffa, Amanda Silver, and Shane Salerno all took part in the writing process of all of the sequels before being assigned to finish the separate scripts, making the eventual writing credits for each film unclear.
Filming for the first two sequels began in September 2017. Sam Worthington, Zoe Saldana, Giovanni Ribisi, Joel David Moore, Dileep Rao, and CCH Pounder are all reprising their roles, as are Stephen Lang and Matt Gerald, despite the deaths of their characters in the first film. Sigourney Weaver is also returning, although she stated that she would play a different character.
New cast members include Cliff Curtis and Kate Winslet as members of the Na'vi reef people of Metkayina and Oona Chaplin as Varang, a "strong and vibrant central character who spans the entire saga of the sequels". Seven child actors will also portray pivotal new characters through the sequels: Jamie Flatters, Britain Dalton, and Trinity Bliss as Jake and Neytiri's children, Bailey Bass, Filip Geljo, and Duane Evans Jr. as free-divers of the Metkayina, and Jack Champion as a human. Although the last two sequels have been greenlit, Cameron stated in an interview on November 26, 2017, "Let's face it, if Avatar 2 and 3 don't make enough money, there's not going to be a 4 and 5".
On November 14, 2018, Cameron announced filming on Avatar 2 and 3 with the principal performance capture cast had been completed. In September 2020, Cameron confirmed that live action filming had been completed for 2 and was over 90% complete for 3.
Related media
Stage adaptation
Toruk – The First Flight is an original stage production by the Montreal-based Cirque du Soleil which ran between December 2015 and June 2019. Inspired by Avatar, the story is set in Pandora's past, involving a prophecy concerning a threat to the Tree of Souls and a quest for totems from different tribes. Audience members could download an app in order to participate in show effects. On January 18, 2016, it was announced via the Toruk Facebook page that filming for a DVD release had been completed and was undergoing editing.
Theme park attraction
In 2011, Cameron, Lightstorm, and Fox entered an exclusive licensing agreement with the Walt Disney Company to feature Avatar-themed attractions at Walt Disney Parks and Resorts worldwide, including a themed land for Disney's Animal Kingdom in Lake Buena Vista, Florida. The area, known as Pandora – The World of Avatar, opened on May 27, 2017.
Novels
Following the release of Avatar, Cameron initially planned to write a novel based on the film, "telling the story of the movie, but [going] into much more depth about all the stories that we didn't have time to deal with." In 2013, this plan was superseded by the announcement of four new novels set within the "Avatar expanded universe", to be written by Steven Gould. The books were due to be published by Penguin Random House, although since 2017, there has been no update on the planned book series.
See also
List of films featuring extraterrestrials
List of films featuring powered exoskeletons
Run of the Arrow
Red Scorpion
References
Further reading
A detailed analysis of the film's parallels with the teachings of the Vedas.
Lawtoo, Nidesh (2015). "Avatar Simulation in 3Ts: Techne, Trance, Transformation." Modern Fiction Studies 41.1, pp. 132–150.
External links
Official shooting script
2009 3D films
2000s science fiction action films
American science fiction adventure films
2000s action adventure films
2009 films
20th Century Fox films
American 3D films
American epic films
American films
American science fiction action films
BAFTA winners (films)
Best Drama Picture Golden Globe winners
Dune Entertainment films
English-language films
Environmental films
Fictional-language films
Films scored by James Horner
Films about cloning
Films about extraterrestrial life
Films about paraplegics or quadriplegics
Films about rebellions
Films about technology
Films about telepresence
Films directed by James Cameron
Films produced by James Cameron
Films produced by Jon Landau
Films set in the 22nd century
Films set on fictional moons
Films shot in Los Angeles
Films shot in Hawaii
Films shot in New Zealand
Films that won the Best Visual Effects Academy Award
Films whose art director won the Best Art Direction Academy Award
Films whose cinematographer won the Best Cinematography Academy Award
Films whose director won the Best Director Golden Globe
Holography in films
IMAX films
Lightstorm Entertainment films
Military science fiction films
Films using motion capture
Planetary romances
Rebellions in fiction
Rotoscoped films
Science fiction war films
Films with screenplays by James Cameron
Social science fiction films
American space adventure films
Transhumanism in film
Films about consciousness transfer
American action adventure films
Films set in forests
Fiction set in the 2140s
Fiction set in the 2150s
2009 science fiction films
Golden Eagle Award (Russia) for Best Foreign Language Film winners
John Diefenbaker Senior School
John Diefenbaker Senior School is a secondary school located in Hanover, Ontario, Canada. It is named after John Diefenbaker, a Prime Minister who was born in Neustadt. The school is part of the Bluewater District School Board.
Mascot - Trojan
School Colours - Purple and White
JDSS instrumental bands have travelled the world capturing numerous medals in competition.
In 1985 the JDSS Trojans captured the Grey County Football Championship.
In 1997 the JDSS Trojans defeated the OSCVI Falcons to capture the Grey County Football Championship.
Notable alumni
Daryl Shane, curler
Jamie Warren, country music singer
See also
List of high schools in Ontario
References
Educational institutions in Canada with year of establishment missing
High schools in Ontario
Schools in Grey County
Music Mouse
Music Mouse is algorithmic musical composition software developed by Laurie Spiegel.
Spiegel's best known and most widely used software, "Music Mouse - An Intelligent Instrument" (1986) is for Macintosh, Amiga and Atari computers. The "intelligent instrument" name refers to the program's built-in knowledge of chord and scale convention and stylistic constraints. Automating these processes allows the user to focus on other aspects of the music in real time. In addition to improvisations using this software, Spiegel composed several works for "Music Mouse", including Cavis muris in 1986, Three Sonic Spaces in 1989, and Sound Zones in 1990. She continued to update the program through Macintosh OS 9 and, as of 2021, it remained available for purchase or demo download from her website.
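The "intelligent instrument" idea described above can be illustrated by a small sketch: a continuous controller position is quantized to the nearest degree of a chosen scale, so every gesture automatically stays "in key". This is an illustration of the general technique only, not Spiegel's actual code; the scale tables and mapping below are invented:

```python
# Illustrative sketch of scale-constrained pitch mapping, the general idea
# behind an "intelligent instrument": a continuous position is quantized to
# a degree of a chosen scale, so the player can never leave the scale.
CHROMATIC = 12
SCALES = {
    "major":      [0, 2, 4, 5, 7, 9, 11],
    "pentatonic": [0, 2, 4, 7, 9],
}

def position_to_midi(x, scale="major", lowest_note=48):
    """Map a horizontal position in [0.0, 1.0) to a MIDI note in the scale."""
    degrees = SCALES[scale]
    steps = int(x * 2 * len(degrees))            # two octaves of scale steps
    octave, degree = divmod(steps, len(degrees))
    return lowest_note + octave * CHROMATIC + degrees[degree]

print(position_to_midi(0.0))   # 48 (C3, the scale root)
print(position_to_midi(0.5))   # 60 (C4, one octave up)
```

Automating the constraint in this way is what frees the performer to shape other dimensions of the music (voicing, tempo, articulation) in real time.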
See also
List of music software
Sources
External links
"Music Mouse", The Music Mouse website.
"Music Mouse Instruction Manual", The Music Mouse Instruction Manual and Tutorial by Laurie Spiegel.
"Music Mouse - An Intelligent Instrument - An Emulation" Tero Parviainen, Independent Software Developer
Music software
BitBake
BitBake is a make-like build tool with a special focus on distributions and packages for embedded Linux cross-compilation, although it is not limited to that. It is inspired by Portage, which is the package management system used by the Gentoo Linux distribution. BitBake existed for some time in the OpenEmbedded project until it was separated out into a standalone, maintained, distribution-independent tool. BitBake is co-maintained by the Yocto Project and the OpenEmbedded project.
BitBake recipes specify how a particular package is built. Recipes consist of the source URL (http, https, ftp, cvs, svn, git, local file system) of the package, its dependencies, and compile or install options. They also store the metadata for the package in standard variables. During the build process, recipes are used to track dependencies, perform native or cross-compilation of the package, and package it so that it is suitable for installation on the local or a target device. It is also possible to create complete images consisting of a root file system and kernel. As a first step in a cross-build setup, the framework will attempt to create a cross-compiler toolchain suited for the target platform.
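The recipe structure described above can be illustrated with a minimal, hypothetical recipe. The package name, source URL, and checksum placeholder are invented for illustration and do not refer to a real layer:

```
# Hypothetical recipe, hello_1.0.bb -- all names and URLs are invented.
SUMMARY = "Example hello-world application"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://COPYING;md5=..."   # placeholder checksum

# Where to fetch the source, and what must be built first
SRC_URI = "http://example.com/hello-${PV}.tar.gz"
DEPENDS = "zlib"

do_compile() {
    oe_runmake
}

do_install() {
    install -d ${D}${bindir}
    install -m 0755 hello ${D}${bindir}
}
```

A recipe like this would be built with `bitbake hello`: BitBake resolves the dependencies in DEPENDS, fetches SRC_URI, then runs the do_* tasks in order.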
See also
Buildroot
Yocto Project
OpenEmbedded
Openmoko
MontaVista Software
List of build automation software
References
External links
BitBake Homepage
BitBake User Manual (older)
Embedded Linux
Build automation
Free software programmed in Python |
44114452 | https://en.wikipedia.org/wiki/Rockchip%20RK3288 | Rockchip RK3288 | The Rockchip RK3288 is an ARM architecture System on Chip (SoC) from Rockchip. Released in August 2014, it was the first SoC to use the 32-bit ARM Cortex-A17 processor. It is a quad-core processor with a NEON coprocessor and hardware acceleration for video and 3D graphics. It is used in a number of Chromebooks and other low-power, low-performance devices.
Specifications
28 nm HKMG process.
Quad-core ARM Cortex-A17, up to 1.8 GHz
Quad-core ARM Mali-T760 MP4 GPU clocked at 650 MHz supporting OpenGL ES 1.1/2.0/3.0/3.1, OpenCL 1.1, Renderscript and Direct3D 11.1
High performance dedicated 2D processor
1080P video encoding for H.264 and VP8, MVC
4K H.264 and 10bits H.265 video decode, 1080P multi video decode
Supports 4Kx2K H.265 resolution
Dual-channel 64-bit DRAM controller supporting DDR3, DDR3L, LPDDR2 and LPDDR3
Up to 3840x2160 display output, HDMI 2.0
Supports dual-channel LVDS/dual-channel MIPI-DSI/eDP1.1
Hardware security system, supports HDCP 2.x
Embedded 13M ISP and MIPI-CSI2 interface
Related products
The RK3288-C is used in the "Veyron" board design of several Chromebooks, and powers all of the following devices:
GPD - GPD XD (handheld console)
Hisense Chromebook 11
Haier Chromebook 11 (and "edu" variant)
AKAI MPC Live
DENON DJ SC6000/SC6000M Prime
ASUS C201 Chromebook
ASUS Chromebook Flip C100
ASUS Chromebook Flip C100P
ASUS Chromebook Flip C100PA
ASUS Chromebit
ASUS Tinker Board and ASUS Tinker Board S
Boardcon EM3288 SBC, MINI3288 module
Radxa Rock 2 - System on Module type single board computer based on RK3288
Lenovo miniStation (game console)
Rikomagik MK902II (Android), MK902II LE (Linux) (netbox)
Rikomagik MK802 V5 (Android), MK802 V5 LE (Linux) (Stick PC)
AtGames Legends Ultimate Arcade Cabinet
Mqmaker - MiQi SBC (Linux, Android)
Headrush MX5 (guitar multi-FX unit)
References
ARM architecture |
573528 | https://en.wikipedia.org/wiki/Systems%20development%20life%20cycle | Systems development life cycle | In systems engineering, information systems and software engineering, the systems development life cycle (SDLC), also referred to as the application development life-cycle, is a process for planning, creating, testing, and deploying an information system. The systems development life cycle concept applies to a range of hardware and software configurations, as a system can be composed of hardware only, software only, or a combination of both. There are usually six stages in this cycle: requirement analysis, design, development and testing, implementation, documentation, and evaluation.
Overview
A systems development life cycle is composed of a number of clearly defined and distinct work phases which are used by systems engineers and systems developers to plan for, design, build, test, and deliver information systems. Like anything that is manufactured on an assembly line, an SDLC aims to produce high-quality systems that meet or exceed customer expectations, based on customer requirements, by delivering systems which move through each clearly defined phase, within scheduled time frames and cost estimates.
Computer systems are complex and often (especially with the recent rise of service-oriented architecture) link multiple traditional systems potentially supplied by different software vendors. To manage this level of complexity, a number of SDLC models or methodologies have been created, such as waterfall, spiral, Agile software development, rapid prototyping, incremental, and synchronize and stabilize.
SDLC can be described along a spectrum of agile to iterative to sequential methodologies. Agile methodologies, such as XP and Scrum, focus on lightweight processes which allow for rapid changes (without necessarily following the pattern of SDLC approach) along the development cycle. Iterative methodologies, such as Rational Unified Process and dynamic systems development method, focus on limited project scope and expanding or improving products by multiple iterations. Sequential or big-design-up-front (BDUF) models, such as waterfall, focus on complete and correct planning to guide large projects and risks to successful and predictable results. Other models, such as anamorphic development, tend to focus on a form of development that is guided by project scope and adaptive iterations of feature development.
In project management a project can be defined both with a project life cycle (PLC) and an SDLC, during which slightly different activities occur. According to Taylor (2004), "the project life cycle encompasses all the activities of the project, while the systems development life cycle focuses on realizing the product requirements".
The SDLC is not a methodology per se, but rather a description of the phases in the life cycle of a software application. In a broad sense, these phases are: investigation, analysis, design, build, test, implement, and maintenance and support. All software development methodologies follow the SDLC phases, but the method of doing so varies vastly between methodologies. In the Scrum framework, for example, one could say a single user story goes through all the phases of the SDLC within a single two-week sprint. Contrast this with the waterfall methodology, as another example, where every business requirement (recorded in the analysis phase of the SDLC in a document called the Business Requirements Specification) is translated into feature/functional descriptions (recorded in the design phase in a document called the Functional Specification) which are then all built in one go as a collection of solution features, typically over a period of three to nine months or more. These methodologies are different approaches, yet they both contain the SDLC phases in which a requirement is born, then travels through the life cycle phases, ending in the final phase of maintenance and support, after which the whole life cycle typically starts again for a subsequent version of the software application.
History and details
The product life cycle describes the process for building information systems in a very deliberate, structured and methodical way, reiterating each stage of the product's life. The systems development life cycle, according to Elliott & Strachan & Radford (2004), "originated in the 1960s, to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy data processing and number crunching routines".
Several systems development frameworks have been partly based on SDLC, such as the structured systems analysis and design method (SSADM) produced for the UK government Office of Government Commerce in the 1980s. Ever since, according to Elliott (2004), "the traditional life cycle approaches to systems development have been increasingly replaced with alternative approaches and frameworks, which attempted to overcome some of the inherent deficiencies of the traditional SDLC".
Phases
The system development life cycle framework provides a sequence of activities for system designers and developers to follow. It consists of a set of steps or phases in which each phase of the SDLC uses the results of the previous one.
The SDLC adheres to important phases that are essential for developers—such as planning, analysis, design, and implementation—and are explained in the section below. This includes evaluation of the currently used system, information gathering, feasibility studies, and request approval. A number of SDLC models have been created, including waterfall, fountain, spiral, build and fix, rapid prototyping, incremental, and synchronize and stabilize. The oldest of these, and the best known, is the waterfall model, a sequence of stages in which the output of each stage becomes the input for the next. These stages can be characterized and divided up in different ways, including the following:
Preliminary analysis: Begin with a preliminary analysis, propose alternative solutions, describe costs and benefits, and submit a preliminary plan with recommendations.
Conduct the preliminary analysis: Discover the organization's objectives and the nature and scope of the problem under study. Even if a problem refers only to a small segment of the organization itself, find out what the objectives of the organization itself are. Then see how the problem being studied fits in with them.
Propose alternative solutions: After digging into the organization's objectives and specific problems, several solutions may have been discovered. However, alternate proposals may still come from interviewing employees, clients, suppliers, and/or consultants. Insight may also be gained by researching what competitors are doing.
Cost benefit analysis: Analyze and describe the costs and benefits of implementing the proposed changes. In the end, the ultimate decision on whether to leave the system as is, improve it, or develop a new system will be guided by this and the rest of the preliminary analysis data.
Systems analysis, requirements definition: Define project goals into defined functions and operations of the intended application. This involves the process of gathering and interpreting facts, diagnosing problems, and recommending improvements to the system. Project goals will be further aided by analysis of end-user information needs and the removal of any inconsistencies and incompleteness in these requirements.
A series of steps followed by the developer include:
Collection of facts: Obtain end user requirements through documentation, client interviews, observation, and questionnaires.
Scrutiny of the existing system: Identify pros and cons of the current system in-place, so as to carry forward the pros and avoid the cons in the new system.
Analysis of the proposed system: Find solutions to the shortcomings described in step two and prepare the specifications using any specific user proposals.
Systems design: At this step, desired features and operations are described in detail, including screen layouts, business rules, process diagrams, pseudocode, and other documentation.
Development: The real code is written here.
Integration and testing: All the modules are brought together into a special testing environment, then checked for errors, bugs, and interoperability.
Acceptance, installation, deployment: This is the final stage of initial development, where the software is put into production and runs actual business.
Maintenance: During the maintenance stage of the SDLC, the system is assessed/evaluated to ensure it does not become obsolete. This is also where changes are made to initial software.
Evaluation: Some companies do not view this as an official stage of the SDLC, while others consider it to be an extension of the maintenance stage, and may be referred to in some circles as post-implementation review. This is where the system that was developed, as well as the entire process, is evaluated. Some of the questions that need to be answered include if the newly implemented system meets the initial business requirements and objectives, if the system is reliable and fault-tolerant, and if it functions according to the approved functional requirements. In addition to evaluating the software that was released, it is important to assess the effectiveness of the development process. If there are any aspects of the entire process (or certain stages) that management is not satisfied with, this is the time to improve.
Disposal: In this phase, plans are developed for discontinuing the use of system information, hardware, and software and making the transition to a new system. The purpose here is to properly move, archive, discard, or destroy information, hardware, and software that is being replaced, in a manner that prevents any possibility of unauthorized disclosure of sensitive data. The disposal activities ensure proper migration to a new system. Particular emphasis is given to proper preservation and archiving of data processed by the previous system. All of this should be done in accordance with the organization's security requirements.
In the following diagram, these stages of the systems development life cycle are divided in ten steps, from definition to creation and modification of IT work products:
Not every project will require that the phases be sequentially executed; however, the phases are interdependent. Depending upon the size and complexity of the project, phases may be combined or may overlap.
System investigation
First the IT system proposal is investigated. During this step, consider all current priorities that would be affected and how they should be handled. Before any system planning is done, a feasibility study should be conducted to determine if creating a new or improved system is a viable solution. This will help to determine the costs, benefits, resource requirements, and specific user needs required for completion. The development process can only continue once management approves of the recommendations from the feasibility study.
The following represent different components of the feasibility study:
Operational feasibility
Financial feasibility
Technical feasibility
Human factors feasibility
Legal/Political feasibility
Analysis
The goal of analysis is to determine where the problem is, in an attempt to fix the system. This step involves breaking down the system in different pieces to analyze the situation, analyzing project goals, breaking down what needs to be created, and attempting to engage users so that definite requirements can be defined.
Design
In systems design, the design functions and operations are described in detail, including screen layouts, business rules, process diagrams, and other documentation. The output of this stage will describe the new system as a collection of modules or subsystems.
The design stage takes as its initial input the requirements identified in the approved requirements document. For each requirement, a set of one or more design elements will be produced as a result of interviews, workshops, and/or prototype efforts.
Design elements describe the desired system features in detail, and they generally include functional hierarchy diagrams, screen layout diagrams, tables of business rules, business process diagrams, pseudo-code, and a complete entity-relationship diagram with a full data dictionary. These design elements are intended to describe the system in sufficient detail, such that skilled developers and engineers may develop and deliver the system with minimal additional design input.
Environments
Environments are controlled areas where systems developers can build, distribute, install, configure, test, and execute systems that move through the SDLC. Each environment is aligned with different areas of the SDLC and is intended to have specific purposes. Examples of such environments include the:
development environment, where developers can work independently of each other before trying to merge their work with the work of others;
common build environment, where merged work can be built, together, as a combined system;
systems integration testing environment, where a system's integration points to other upstream or downstream systems can be tested;
user acceptance testing environment, where business stakeholders can test against their original business requirements; and
production environment, where systems finally get deployed for final use by their intended end users.
Testing
The code is tested at various levels in software testing. Unit, system, and user acceptance testing are often performed. This is a grey area, as many different opinions exist as to what the stages of testing are and how much, if any, iteration occurs. Iteration is not generally part of the waterfall model, but the means to rectify defects and validate fixes prior to deployment is incorporated into this phase.
The following are types of testing that may be relevant, depending on the type of system under development:
Defect testing the failed scenarios, including
Path testing
Data set testing
Unit testing
System testing
Integration testing
Black-box testing
White-box testing
Regression testing
Automation testing
User acceptance testing
Software performance testing
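As an illustration of the unit-testing level listed above, the following is a minimal sketch using Python's standard unittest module. The function under test is invented for the example and does not come from any system discussed here:

```python
# Minimal unit-testing sketch using Python's standard unittest module.
# apply_discount is an invented example function, not from a real system.
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_case(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_boundary_values(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)
        self.assertEqual(apply_discount(80.0, 100), 0.0)

    def test_defect_path(self):
        # corresponds to "defect testing the failed scenarios" above
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(argv=["example"], exit=False)
```

Unit tests like these exercise a single module in isolation; the integration, system, and acceptance levels in the list above then test progressively larger assemblies of such units.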
Training and transition
Once a system has been stabilized through adequate testing, the SDLC ensures that proper training on the system is performed or documented before transitioning the system to its support staff and end users. Training usually covers operational training for those people who will be responsible for supporting the system as well as training for those end users who will be using the system after its delivery to a production operating environment.
After training has been successfully completed, systems engineers and developers transition the system to its final production environment, where it is intended to be used by its end users and supported by its support and operations staff.
Operations and maintenance
The deployment of the system includes various changes and enhancements before the decommissioning or sunset of the system. Maintaining the system is a very important aspect of SDLC. As key personnel change positions in the organization, new changes will be implemented. There are two approaches to system development: the traditional (structured) approach and the object-oriented approach. Information engineering includes the traditional system approach, which is also called the structured analysis and design technique. The object-oriented approach views an information system as a collection of objects that are integrated with each other to make a full and complete information system.
Evaluation
The final phase of the SDLC is to measure the effectiveness of the system and evaluate potential enhancements.
Systems analysis and design
Systems analysis and design (SAD) is the process of developing information technology systems (ITS) that effectively use hardware, software, data, processes, and people to support the company's business objectives. It is a process of planning a new business system or replacing an existing system by defining its components or modules to satisfy specific requirements. Systems analysis and design can be considered the meta-development activity, which serves to set the stage and bound the problem. SAD can be leveraged to set the correct balance among competing high-level requirements in the functional and non-functional analysis domains. Systems analysis and design interact strongly with distributed enterprise architecture, enterprise IT architecture, and business architecture, and rely heavily on concepts such as partitioning, interfaces, personae and roles, and deployment/operational modeling to arrive at a high-level system description. This high-level description is then further broken down into the components and modules which can be analyzed, designed, and constructed separately and integrated to accomplish the business goal. SDLC and SAD are cornerstones of full life cycle product and system planning.
Object-oriented analysis
Object-oriented analysis (OOA) is the process of analyzing a task (also known as a problem domain), to develop a conceptual model that can then be used to complete the task. A typical OOA model would describe computer software that could be used to satisfy a set of customer-defined requirements. During the analysis phase of problem-solving, a programmer might consider a written requirements statement, a formal vision document, or interviews with stakeholders or other interested parties. The task to be addressed might be divided into several subtasks (or domains), each representing a different business, technological, or other areas of interest. Each subtask would be analyzed separately. Implementation constraints, (e.g., concurrency, distribution, persistence, or how the system is to be built) are not considered during the analysis phase; rather, they are addressed during object-oriented design (OOD).
The conceptual model that results from OOA will typically consist of a set of use cases, one or more UML class diagrams, and a number of interaction diagrams. It may also include some kind of user interface mock-up.
The input for object-oriented design is provided by the output of object-oriented analysis. Realize that an output artifact does not need to be completely developed to serve as input of object-oriented design; analysis and design may occur in parallel, and in practice the results of one activity can feed the other in a short feedback cycle through an iterative process. Both analysis and design can be performed incrementally, and the artifacts can be continuously grown instead of completely developed in one shot.
Some typical (but common to all types of design analysis) input artifacts for object-oriented design:
Conceptual model: The conceptual model is the result of object-oriented analysis; it captures concepts in the problem domain. The conceptual model is explicitly chosen to be independent of implementation details, such as concurrency or data storage.
Use case: A use case is a description of sequences of events that, taken together, lead to a system doing something useful. Each use case provides one or more scenarios that convey how the system should interact with the users, called actors, to achieve a specific business goal or function. Use case actors may be end users or other systems. In many circumstances use cases are further elaborated into use case diagrams. Use case diagrams are used to identify the actor (users or other systems) and the processes they perform.
System sequence diagram: A system sequence diagram (SSD) is a picture that shows, for a particular scenario of a use case, the events that external actors generate, their order, and possible inter-system events.
User interface documentations (if applicable): Document that shows and describes the look and feel of the end product's user interface. It is not mandatory to have this, but it helps to visualize the end-product and therefore helps the designer.
Relational data model (if applicable): A data model is an abstract model that describes how data is represented and used. If an object database is not used, the relational data model should usually be created before the design, since the strategy chosen for object-relational mapping is an output of the OO design process. However, it is possible to develop the relational data model and the object-oriented design artifacts in parallel, and the growth of an artifact can stimulate the refinement of other artifacts.
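To make the artifacts above concrete, the following is a minimal sketch of how concepts from a hypothetical "place order" use case might later surface as classes during object-oriented design. The domain and all names are invented for illustration:

```python
# Invented example: two concepts from a hypothetical "place order" use
# case, expressed as classes. In OOA these would first appear in the
# conceptual model; implementation details are deferred to OOD.
from dataclasses import dataclass, field

@dataclass
class Product:
    name: str
    price: float

@dataclass
class Order:
    items: list = field(default_factory=list)

    def add(self, product, quantity=1):
        self.items.append((product, quantity))

    def total(self):
        return sum(p.price * q for p, q in self.items)

order = Order()
order.add(Product("widget", 2.50), 4)
print(order.total())  # prints 10.0
```

In practice the conceptual model would remain free of such implementation choices; the class bodies here represent the later, design-phase elaboration of the analysis artifacts.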
Life cycle
Management and control
The SDLC phases serve as a programmatic guide to project activity and provide a flexible but consistent way to conduct projects to a depth matching the scope of the project. Each of the SDLC phase objectives are described in this section with key deliverables, a description of recommended tasks, and a summary of related control objectives for effective management. It is critical for the project manager to establish and monitor control objectives during each SDLC phase while executing projects. Control objectives help to provide a clear statement of the desired result or purpose and should be used throughout the entire SDLC process. Control objectives can be grouped into major categories (domains), and relate to the SDLC phases as shown in the figure.
To manage and control any SDLC initiative, each project will be required to establish some degree of a work breakdown structure (WBS) to capture and schedule the work necessary to complete the project. The WBS and all programmatic material should be kept in the "project description" section of the project notebook. The WBS format is mostly left to the project manager to establish in a way that best describes the project work.
There are some key areas that must be defined in the WBS as part of the SDLC policy. The following diagram describes three key areas that will be addressed in the WBS in a manner established by the project manager. The diagram shows coverage spans numerous phases of the SDLC but the associated MCD has a subset of primary mappings to the SDLC phases. For example, Analysis and Design is primarily performed as part of the Acquisition and Implementation Domain and System Build and Prototype is primarily performed as part of delivery and support.
Work breakdown structured organization
The upper section of the work breakdown structure (WBS) should identify the major phases and milestones of the project in a summary fashion. In addition, the upper section should provide an overview of the full scope and timeline of the project and will be part of the initial project description effort leading to project approval. The middle section of the WBS is based on the seven systems development life cycle phases as a guide for WBS task development. The WBS elements should consist of milestones and "tasks" as opposed to "activities" and have a definitive period (usually two weeks or more). Each task must have a measurable output (e.g. document, decision, or analysis). A WBS task may rely on one or more activities (e.g. software engineering, systems engineering) and may require close coordination with other tasks, either internal or external to the project. Any part of the project needing support from contractors should have a statement of work (SOW) written to include the appropriate tasks from the SDLC phases. The development of a SOW does not occur during a specific phase of SDLC but is developed to include the work from the SDLC process that may be conducted by external resources such as contractors.
Baselines
Baselines are an important part of the systems development life cycle. These baselines are established after four of the five phases of the SDLC and are critical to the iterative nature of the model. Each baseline is considered a milestone in the SDLC.
functional baseline: established after the conceptual design phase.
allocated baseline: established after the preliminary design phase.
product baseline: established after the detail design and development phase.
updated product baseline: established after the production construction phase.
Complementary methodologies
Complementary software development methods to systems development life cycle are:
Software prototyping
Joint applications development (JAD)
Rapid application development (RAD)
Extreme programming (XP);
Open-source development
End-user development
Object-oriented programming
Strengths and weaknesses
Few people in the modern computing world would use a strict waterfall model for their SDLC, as many modern methodologies have superseded this thinking. Some will argue that the SDLC no longer applies to models like Agile computing, but it is still a term widely used in technology circles. The SDLC practice has advantages in traditional models of systems development that lend themselves more to a structured environment. The disadvantage of using the SDLC methodology arises when there is a need for iterative development (e.g. web development or e-commerce), where stakeholders need to review the software being designed on a regular basis.
A comparison of the strengths and weaknesses of SDLC:
An alternative to the SDLC is rapid application development, which combines prototyping, joint application development and implementation of CASE tools. The advantages of RAD are speed, reduced development cost, and active user involvement in the development process.
System lifecycle
The system lifecycle in systems engineering is a view of a system or proposed system that addresses all phases of its existence to include system conception, design and development, production and/or construction, distribution, operation, maintenance and support, retirement, phase-out and disposal.
Conceptual design
The conceptual design stage is the stage where an identified need is examined, requirements for potential solutions are defined, potential solutions are evaluated and a system specification is developed. The system specification represents the technical requirements that will provide overall guidance for system design. Because this document determines all future development, the stage cannot be completed until a conceptual design review has determined that the system specification properly addresses the motivating need.
Key steps within the conceptual design stage include:
Need identification
Feasibility analysis
System requirements analysis
System specification
Conceptual design review
Preliminary system design
During this stage of the system lifecycle, subsystems that perform the desired system functions are designed and specified in compliance with the system specification. Interfaces between subsystems are defined, as well as overall test and evaluation requirements. At the completion of this stage, a development specification is produced that is sufficient to perform detailed design and development.
Key steps within the preliminary design stage include:
Functional analysis
Requirements allocation
Detailed trade-off studies
Synthesis of system options
Preliminary design of engineering models
Development specification
Preliminary design review
For example, as the system analyst of Viti Bank, you have been tasked to examine the current information system. Viti Bank is a fast-growing bank in Fiji. Customers in remote rural areas find it difficult to access the bank's services; it takes them days or even weeks to travel to a location where those services are available. With the vision of meeting the customers' needs, the bank has requested your services to examine the current system and to come up with solutions or recommendations for how the current system can be improved to meet its needs.
Detail design and development
This stage includes the development of detailed designs that bring initial design work into a completed form of specifications. This work includes the specification of interfaces between the system and its intended environment, and a comprehensive evaluation of the system's logistical, maintenance, and support requirements. The detail design and development stage is responsible for producing the product, process, and material specifications and may result in substantial changes to the development specification.
Key steps within the detail design and development stage include:
Detailed design
Detailed synthesis
Development of engineering and prototype models
Revision of development specification
Product, process and material specification
Critical design review
Production and construction
During the production and/or construction stage the product is built or assembled in accordance with the requirements specified in the product, process and material specifications and is deployed and tested within the operational target environment. System assessments are conducted in order to correct deficiencies and adapt the system for continued improvement.
Key steps within the product construction stage include:
Production and/or construction of system components
Acceptance testing
System distribution and operation
Operational testing and evaluation
System assessment
Utilization and support
Once fully deployed, the system is used for its intended operational role and maintained within its operational environment.
Key steps within the utilization and support stage include:
System operation in the user environment
Change management
System modifications for improvement
System assessment
Phase-out and disposal
The effectiveness and efficiency of the system must be continuously evaluated to determine when the product has reached the end of its effective lifecycle. Considerations include: continued existence of an operational need, the match between operational requirements and system performance, the feasibility of system phase-out versus continued maintenance, and the availability of alternative systems.
See also
Application lifecycle management
Decision cycle
IPO Model
Software development methodologies
References
Further reading
Cummings, Haag (2006). Management Information Systems for the Information Age. Toronto: McGraw-Hill Ryerson.
Beynon-Davies, P. (2009). Business Information Systems. Basingstoke: Palgrave.
Computer World (2002). Retrieved June 22, 2006.
Management Information Systems (2005). Retrieved June 22, 2006.
External links
The Agile System Development Lifecycle
Pension Benefit Guaranty Corporation – Information Technology Solutions Lifecycle Methodology
DoD Integrated Framework Chart IFC (front, back)
FSA Life Cycle Framework
HHS Enterprise Performance Life Cycle Framework
The Open Systems Development Life Cycle
System Development Life Cycle Evolution Modeling
Zero Deviation Life Cycle
Integrated Defense AT&L Life Cycle Management Chart, the U.S. DoD form of this concept.
Systems engineering
Computing terminology
Software development process
Software engineering
27197818 | https://en.wikipedia.org/wiki/Palantir%20Technologies | Palantir Technologies

Palantir Technologies is a public American software company that specializes in big data analytics. Headquartered in Denver, Colorado, it was founded by Peter Thiel, Nathan Gettings, Joe Lonsdale, Stephen Cohen, and Alex Karp in 2003. The company's name is derived from The Lord of the Rings, where the magical palantíri were "seeing-stones," described as indestructible balls of crystal used for communication and to see events in other parts of the world.
The company is known for three projects in particular: Palantir Gotham, Palantir Apollo, and Palantir Foundry. Palantir Gotham is used by counter-terrorism analysts at offices in the United States Intelligence Community (USIC) and United States Department of Defense. In the past, Gotham was used by fraud investigators at the Recovery Accountability and Transparency Board, a former US federal agency which operated from 2009 to 2015. Gotham was also used by cyber analysts at Information Warfare Monitor, a Canadian public-private venture which operated from 2003 to 2012. Palantir Apollo is the operating system for continuous delivery and deployment across all environments. Palantir's SaaS is one of five offerings authorized for Mission Critical National Security Systems (IL5) by the U.S. Department of Defense. Palantir Foundry is used by corporate clients such as Morgan Stanley, Merck KGaA, Airbus, Wejo, Lilium, and Fiat Chrysler Automobiles NV.
Palantir's original clients were federal agencies of the USIC. It has since expanded its customer base to serve state and local governments, as well as private companies in the financial and healthcare industries.
History
2003–2008: Founding and early years
Though usually listed as having been founded in 2004, SEC filings state that Palantir was officially incorporated in May 2003 by Peter Thiel (co-founder of PayPal), who named the start-up after the "seeing stone" in Tolkien's legendarium. Thiel saw Palantir as a "mission-oriented company" which could apply software similar to PayPal's fraud recognition systems to "reduce terrorism while preserving civil liberties."
In 2004, Thiel bankrolled the creation of a prototype by PayPal engineer Nathan Gettings and Stanford University students Joe Lonsdale and Stephen Cohen. That same year, Thiel hired Alex Karp, a former colleague of his from Stanford Law School, as chief executive officer.
Headquartered in Palo Alto, California, the company initially struggled to find investors. According to Karp, Sequoia Capital chairman Michael Moritz doodled through an entire meeting, and a Kleiner Perkins executive lectured the founders about the inevitable failure of their company. The only early investments were $2 million from the U.S. Central Intelligence Agency's venture capital arm In-Q-Tel, and $30 million from Thiel himself and his venture capital firm, Founders Fund.
Palantir developed its technology over three years with computer scientists and analysts from intelligence agencies, through pilots facilitated by In-Q-Tel. The company stated that computers alone, using artificial intelligence, could not defeat an adaptive adversary. Instead, Palantir proposed pairing human analysts with software to explore data from many sources, an approach it called intelligence augmentation.
2009: GhostNet and the Shadow Network
In 2009 and 2010 respectively, Information Warfare Monitor used Palantir software to uncover the GhostNet and the Shadow Network. The GhostNet was a China-based cyber espionage network targeting 1,295 computers in 103 countries, including the Dalai Lama’s office, a NATO computer and various national embassies. The Shadow Network was also a China-based espionage operation that hacked into the Indian security and defense apparatus. Cyber spies stole documents related to Indian security and NATO troop activity in Afghanistan.
2010–2012: Expansion
In April 2010, Palantir announced a partnership with Thomson Reuters to sell the Palantir Metropolis product as "QA Studio" (a quantitative analysis tool).
On June 18, 2010, Vice President Joe Biden and Office of Management and Budget Director Peter Orszag held a press conference at the White House announcing the success of the Recovery Accountability and Transparency Board (RATB) in fighting fraud in the stimulus program. Biden credited the success to Palantir software deployed by the federal government, and announced that the capability would be rolled out to other government agencies, starting with Medicare and Medicaid.
The company's revenues were estimated at $250 million in 2011.
2013–2016: Additional funding
A document leaked to TechCrunch revealed that Palantir's clients as of 2013 included at least twelve groups within the U.S. government, including the CIA, the DHS, the NSA, the FBI, the CDC, the Marine Corps, the Air Force, the Special Operations Command, the United States Military Academy, the Joint Improvised-Threat Defeat Organization and Allies, the Recovery Accountability and Transparency Board and the National Center for Missing and Exploited Children. However, at the time, the United States Army continued to use its own data analysis tool. Also, according to TechCrunch, the U.S. spy agencies such as the CIA and FBI were linked for the first time with Palantir software, as their databases had previously been "siloed."
In September 2013, Palantir disclosed over $196 million in funding according to a U.S. Securities and Exchange Commission filing. It was estimated that the company would likely close almost $1 billion in contracts in 2014. CEO Alex Karp announced in 2013 that the company would not be pursuing an IPO, as going public would make "running a company like ours very difficult." In December 2013, the company began a round of financing, raising around $450 million from private funders. This raised the company's value to $9 billion, according to Forbes, with the magazine further explaining that the valuation made Palantir "among Silicon Valley’s most valuable private technology companies."
In December 2014, Forbes reported that Palantir was looking to raise $400 million in an additional round of financing, after the company filed paperwork with the Securities and Exchange Commission the month before. The report was based on research by VC Experts. If completed, Forbes stated, Palantir's funding could reach a total of $1.2 billion. As of December 2014, the company continued to have diverse private funders, including Ken Langone and Stanley Druckenmiller, In-Q-Tel of the CIA, Tiger Global Management, and Founders Fund, the venture capital firm operated by Peter Thiel, Palantir's chairman. As of December 2014, Thiel was Palantir's largest shareholder.
The company was valued at $15 billion in November 2014. In June 2015, Buzzfeed reported the company was raising up to $500 million in new capital at a valuation of $20 billion. By December 2015, it had raised a further $880 million, while the company was still valued at $20 billion. In February 2016, Palantir bought Kimono Labs, a startup which makes it easy to collect information from public facing websites.
In August 2016, Palantir acquired data visualization startup Silk.
2020
Palantir was one of four large technology firms to start working with the NHS on supporting COVID-19 efforts through the provision of software from Palantir Foundry, and by April 2020 several countries had used Palantir technology to track and contain the contagion. Palantir also developed Tiberius, software for vaccine allocation used in the United States.
In December 2020, Palantir was awarded a $44.4 million contract by the U.S. Food and Drug Administration, boosting its shares by about 21%.
Valuation
The company was valued at $9 billion in early 2014, with Forbes stating that the valuation made Palantir "among Silicon Valley's most valuable private technology companies". As of December 2014, Thiel was Palantir's largest shareholder. In January 2015, the company was valued at $15 billion after an undisclosed round of funding with $50 million in November 2014. This valuation rose to $20 billion in late 2015 as the company closed an $880 million round of funding. Palantir has never reported a profit. In 2018, Morgan Stanley valued the company at $6 billion.
Karp, Palantir's chief executive officer, announced in 2013 that the company would not pursue an IPO, as going public would make "running a company like ours very difficult". However, on October 18, 2018, the Wall Street Journal reported that Palantir was considering an IPO in the first half of 2019 following a $41 billion valuation. In July 2020, it was revealed the company had filed for an IPO.
It ultimately went public on the New York Stock Exchange through a direct public offering on September 30, 2020 under the ticker symbol "PLTR".
Investments
According to investment bank RBC Capital Markets, the company has invested over $400 million into nearly two dozen SPAC targets, while bringing many of those companies on as customers.
Products
Palantir Gotham
Palantir Gotham is Palantir's government offering. It is an evolution of Palantir's longstanding work in the United States Intelligence Community. More recently, Palantir Gotham has been used as a predictive policing system, which has elicited some controversy.
Palantir Metropolis
Palantir Metropolis (formerly known as Palantir Finance) was software for data integration, information management and quantitative analytics. The software connects to commercial, proprietary and public data sets and discovers trends, relationships and anomalies, including predictive analytics. Aided by 120 "forward-deployed engineers" from Palantir during 2009, Peter Cavicchia III of JPMorgan used Metropolis to monitor employee communications and alert the insider threat team when an employee showed any signs of potential disgruntlement; the insider alert team would further scrutinize the employee and possibly conduct physical surveillance after hours with bank security personnel. The Metropolis team used emails, download activity, browser histories, GPS locations from JPMorgan-owned smartphones, and transcripts of digitally recorded phone conversations to search, aggregate, sort, and analyze this information for specific keywords, phrases, and patterns of behavior. In 2013, Cavicchia may have shared this information with Frank Bisignano, who had become the CEO of First Data Corporation.
Palantir Apollo
Palantir Apollo is a continuous delivery system that manages and deploys Palantir Gotham and Foundry. Apollo was built out of the need for customers to use multiple public and private cloud platforms as part of their infrastructure. Apollo orchestrates updates to configurations and software in the Foundry and Gotham platforms using a micro-service architecture. This product allows Palantir to provide software as a service (SaaS) rather than to operate as a consulting company.
Palantir Foundry
Palantir Foundry was used by NHS England in dealing with the COVID-19 pandemic in England to analyse the operation of the vaccination programme. In June 2021, Foxglove, a tech-justice nonprofit, started a campaign against the company, arguing that "Their background has generally been in contracts where people are harmed, not healed." Clive Lewis MP, supporting the campaign, said Palantir had an "appalling track record."
Other
The company has designed, in part or in whole, a number of business and consumer products. For example, in 2014 it premiered Insightics, which according to the Wall Street Journal "extracts customer spending and demographic information from merchants’ credit-card records." It was created in tandem with credit processing company First Data.
Customers
Corporate use
Palantir Metropolis is used by hedge funds, banks, and financial services firms.
Palantir Foundry clients include Merck KGaA, Airbus and Ferrari.
Palantir partner Information Warfare Monitor used Palantir software to uncover both the GhostNet and the Shadow Network.
U.S. civil entities
Palantir's software is used by the Recovery Accountability and Transparency Board to detect and investigate fraud and abuse in the American Recovery and Reinvestment Act. Specifically, the Recovery Operations Center (ROC) used Palantir to integrate transactional data with open-source and private data sets that describe the entities receiving stimulus funds. Other clients as of 2019 included Polaris Project, the Centers for Disease Control and Prevention, the National Center for Missing and Exploited Children, the National Institutes of Health, Team Rubicon, and the United Nations World Food Programme.
In October 2020, Palantir began helping the federal government set up a system that will track the manufacture, distribution and administration of COVID-19 vaccines across the country.
U.S. military, intelligence, and police
Palantir Gotham is used by counter-terrorism analysts at offices in the United States Intelligence Community and United States Department of Defense, fraud investigators at the Recovery Accountability and Transparency Board, and cyber analysts at Information Warfare Monitor (responsible for the GhostNet and the Shadow Network investigation).
Other clients as of 2013 included DHS, NSA, FBI, CDC, the Marine Corps, the Air Force, Special Operations Command, West Point, the Joint IED Defeat Organization and Allies. However, at the time the United States Army continued to use its own data analysis tool. Also, according to TechCrunch, "The U.S. spy agencies also employed Palantir to connect databases across departments. Before this, most of the databases used by the CIA and FBI were siloed, forcing users to search each database individually. Now everything is linked together using Palantir."
U.S. military intelligence used the Palantir product to improve their ability to predict locations of improvised explosive devices in its war in Afghanistan. A small number of practitioners reported it to be more useful than the United States Army's program of record, the Distributed Common Ground System (DCGS-A). California Congressman Duncan D. Hunter complained of United States Department of Defense obstacles to its wider use in 2012.
Palantir has also been reported to be working with various U.S. police departments, for example accepting a contract in 2013 to help the Northern California Regional Intelligence Center build a controversial license-plate database for California. In 2012, the New Orleans Police Department partnered with Palantir to create a predictive policing program.
In 2014, US Immigration and Customs Enforcement (ICE) awarded Palantir a $41 million contract to build and maintain a new intelligence system called Investigative Case Management (ICM) to track personal and criminal records of legal and illegal immigrants. This application was originally conceived by ICE's office of Homeland Security Investigations (HSI), allowing its users access to intelligence platforms maintained by other federal and private law enforcement entities. The system reached its "final operation capacity" under the Trump administration in September 2017.
Palantir took over the Pentagon's Project Maven contract in 2019 after Google decided not to continue developing AI unmanned drones used for bombings and intelligence.
International Atomic Energy Agency
Palantir was used by the International Atomic Energy Agency (IAEA) to verify whether Iran was in compliance with the 2015 nuclear agreement.
Europe
The firm has contracts relating to patient data from the British National Health Service. In 2020 it was awarded an emergency, no-competition contract to mine COVID-19 patient data; the contract was valued at more than £23.5 million and was later extended for two more years. The firm was encouraged by Liam Fox "to expand their software business" in Britain.
The Danish POL-INTEL predictive policing project has been operational since 2017 and is based on the Gotham system. According to the AP the Danish system "uses a mapping system to build a so-called heat map identifying areas with higher crime rates." The Gotham system has also been used by German state police in Hesse and Europol.
Norwegian Customs uses Palantir Gotham to screen passengers and vehicles for control. Known inputs are pre-filed freight documents, passenger lists, the national Currency Exchange database (which tracks all cross-border currency exchanges), the Norwegian Welfare Administration's employer and employee registry, the Norwegian stock holder registry and 30 public databases from InfoTorg. InfoTorg provides access to more than 30 databases, including the Norwegian National Citizen registry, the European Business Register, the Norwegian DMV vehicle registry, and various credit databases. These databases are supplemented by the Norwegian Customs Department's own intelligence reports, including the results of previous controls. The system is also augmented by data from public sources such as social media.
Partnerships and contracts
International Business Machines
On February 8, 2021, Palantir and IBM announced a new partnership that would use IBM's hybrid cloud data platform alongside Palantir's operations platform for building applications. The product, Palantir for IBM Cloud Pak for Data, was expected to simplify the process of building and deploying AI-integrated applications with IBM Watson, helping businesses and other users interpret and use large datasets without needing a strong technical background. Palantir for IBM Cloud Pak for Data became generally available in March 2021.
Amazon (AWS)
On March 5, 2021, Palantir announced its partnership with Amazon AWS. Palantir's ERP Suite is now optimized to run on Amazon Web Services. One of the first notable successes of the ERP suite was with BP, which was able to save about $50 million in working capital within two weeks of onboarding the system.
Babylon Health
Palantir took a stake in Babylon Health in June 2021. Ali Parsa told the Financial Times that "nobody" has brought some of the tech that Palantir owns "into the realm of biology and health care."
Controversies
Algorithm development
I2 Inc sued Palantir in federal court, alleging fraud, conspiracy, and copyright infringement over Palantir's algorithm. Shyam Sankar, Palantir's director of business development, had used a private-eye company as a cutout for obtaining I2's code. I2 settled out of court for $10 million in 2011.
WikiLeaks proposals (2010)
In 2010, Hunton & Williams LLP allegedly asked Berico Technologies, Palantir, and HBGary Federal to draft a response plan to "the WikiLeaks Threat." In early 2011 Anonymous publicly released HBGary-internal documents, including the plan. The plan proposed that Palantir software would "serve as the foundation for all the data collection, integration, analysis, and production efforts." The plan also included slides, allegedly authored by HBGary CEO Aaron Barr, which suggested "[spreading] disinformation" and "disrupting" Glenn Greenwald’s support for WikiLeaks.
Palantir CEO Karp ended all ties to HBGary and issued a statement apologizing to "progressive organizations… and Greenwald … for any involvement that we may have had in these matters." Palantir placed an employee on leave pending a review by a third-party law firm. The employee was later reinstated.
Racial discrimination lawsuit (2016)
On September 26, 2016, the Office of Federal Contract Compliance Programs of the U.S. Department of Labor filed a lawsuit against Palantir alleging that the company discriminated against Asian job applicants on the basis of their race. According to the lawsuit, the company "routinely eliminated" Asian applicants during the hiring process, even when they were "as qualified as white applicants" for the same jobs. Palantir settled the suit in April 2017 for $1.7 million while not admitting wrongdoing.
British Parliament inquiry (2018)
During questioning in front of the Digital, Culture, Media and Sport Select Committee, Christopher Wylie, the former research director of Cambridge Analytica, said that several meetings had taken place between Palantir and Cambridge Analytica, and that Alexander Nix, the chief executive of SCL, had facilitated their use of Aleksandr Kogan's data which had been obtained from his app "thisisyourdigitallife" by mining personal surveys. Kogan later established Global Science Research to share the data with Cambridge Analytica and others. Wylie confirmed that both employees from Cambridge Analytica and Palantir used Kogan's Global Science Research data together in the same offices.
ICE Partnership (since 2014)
Palantir has come under criticism due to its partnership developing software for U.S. Immigration and Customs Enforcement. Palantir has responded that its software is not used to facilitate deportations. In a statement provided to the New York Times, the firm implied that because its contract was with HSI, a division of ICE focused on investigating criminal activities, it played no role in deportations. However, documents obtained by The Intercept show that this is not the case. According to these documents, Palantir's ICM software is considered 'mission critical' to ICE. Other groups critical of Palantir include the Brennan Center for Justice, National Immigration Project, the Immigrant Defense Project, the Tech Workers Coalition and Mijente. In one internal ICE report Mijente acquired, it was revealed that Palantir's software was critical in an operation to arrest the parents of undocumented migrant children.
On September 28, 2020, Amnesty International released a report criticizing Palantir's failure to conduct human rights due diligence around its contracts with ICE. The report scrutinized Palantir's human rights record, arguing that its contracts contributed to human rights violations of asylum-seekers and migrants.
"HHS Protect Now" and privacy concerns (since 2020)
The ongoing COVID-19 pandemic has prompted tech companies to respond to growing demand for citizen information from governments in order to conduct contact tracing and to analyze patient data. Consequently, data collection companies, such as Palantir, have been contracted to partake in pandemic data collection practices. Palantir's participation in "HHS Protect Now", a program launched by the United States Department of Health and Human Services to track the spread of the coronavirus, has attracted criticism from American lawmakers.
Palantir's participation in COVID-19 response projects re-ignited debates over its controversial involvement in tracking undocumented immigrants, especially its alleged effects on digital inequality and potential restrictions on online freedoms. Critics allege that confidential data acquired by HHS could be exploited by other federal agencies in unregulated and potentially harmful ways. Alternative proposals request greater transparency in the process to determine whether any of the data aggregated would be shared with the US Immigration and Customs Enforcement to single out undocumented immigrants.
Project Maven (since 2018)
After Google faced employee walkouts over its involvement in the Pentagon's Project Maven, a secret artificial intelligence program aimed at the unmanned operation of aerial vehicles, the contract was taken up by Palantir. Critics warn that the technology could lead to autonomous weapons that decide whom to strike without human input.
See also
Government by algorithm
References
External links
2003 establishments in California
Analytics companies
Big data companies
Business software companies
Companies based in Denver
American companies established in 2003
Data brokers
Criminal investigation
Software companies based in Colorado
Software companies of the United States
Government by algorithm
Software companies established in 2003
Companies listed on the New York Stock Exchange
Direct stock offerings
2550813 | https://en.wikipedia.org/wiki/Windows%20Preinstallation%20Environment | Windows Preinstallation Environment

Windows Preinstallation Environment (also known as Windows PE and WinPE) is a lightweight version of Windows used for the deployment of PCs, workstations, and servers, or for troubleshooting an operating system while it is offline. It is intended to replace MS-DOS boot disks and can be booted via USB flash drive, PXE, iPXE, CD-ROM, or hard disk. Traditionally used by large corporations and OEMs (to preinstall Windows client operating systems on PCs during manufacturing), it is now widely available free of charge via the Windows Assessment and Deployment Kit (WADK), formerly the Windows Automated Installation Kit (WAIK).
Overview
WinPE was originally intended to be used only as a pre-installation platform for deploying Microsoft Windows operating systems, specifically to replace MS-DOS in this respect. WinPE has the following uses:
Deployment of workstations and servers in large corporations as well as pre-installation by system builders of workstations and servers to be sold to end users.
Recovery platform to run 32-bit or 64-bit recovery tools such as Winternals ERD Commander or Windows Recovery Environment (Windows RE).
Platform for running third-party 32-bit or 64-bit disk cloning utilities.
The package can be used for developer testing or as a recovery CD/DVD for system administrators. Many customized WinPE boot CDs packaged with third-party applications for different uses are now available from volunteers via the Internet. The package can also be used as the base of a forensics investigation, either to capture a disk image or to run analysis tools without mounting any available disks and thus without changing their state.
Version 2.0 introduced a number of improvements and extended the availability of WinPE to all customers, not just corporate enterprise customers, via download and installation of Microsoft's Windows Automated Installation Kit (WAIK).
It was originally designed and built by a small team of engineers in Microsoft's Windows Deployment team, including Vijay Jayaseelan, Ryan Burkhardt, and Richard Bond.
Versions
The following versions are known to exist:
Derivatives
Windows Recovery Environment
Windows Recovery Environment (WinRE) is a set of tools based on Windows PE to help diagnose and recover from serious errors which may be preventing Windows from booting successfully. Windows RE is installed alongside Windows Vista and later, and may be booted from hard disks, optical media (such as an operating system installation disc) and PXE (e.g. Windows Deployment Services). A copy of Windows RE is included in the installation media of the aforementioned operating systems. It is a successor to the Recovery Console.
Features
Windows RE features include:
Automatic Repair: Automatically finds and fixes boot errors in the Windows Vista startup process caused by issues such as corruption of the Boot Configuration Data, disk and file system metadata, the Master Boot Record, or the Windows Registry, as well as issues caused by missing or damaged boot and system files, incompatible drivers, or damaged hardware. Prior to Windows 8, this mode was known as "Startup Repair." The executable image for Automatic Repair is startrep.exe.
System Restore: Same as the System Restore that is included in Windows, it allows a system's settings to be restored to those of a previous state.
System Image Recovery: Same as the Backup and Restore component of Windows, it allows restoring a previously created disk image.
Windows Memory Diagnostic Tool: Analyses the computer memory (RAM) for defects (not available on Windows 8 and later). The program does not run inside WinRE, but instead reboots the system and executes memtest.exe instead of loading the operating system. memtest.exe cannot be run inside Windows.
Windows Command Prompt: Gives command-line access to the file system, volumes and files. It can be used to run System File Checker (sfc /scannow) against an offline Windows installation and repair missing or corrupt files. Tools like robocopy, diskpart and DISM can be used to perform various system tasks like recovering or backing up files, managing partitions, and fix servicing-related issues respectively. In order to use the command prompt, the user must sign into an administrator account.
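As an illustration, the offline servicing described above can be sketched as a Windows RE command-prompt session. This is a hedged example: the drive letter D: and the user folder name are assumptions (inside Windows RE the offline Windows volume often has a letter other than C:), and exact behavior varies by Windows version.

```bat
rem Check and repair the file system of the offline Windows volume (assumed D:)
chkdsk D: /f

rem Run System File Checker against the offline installation
sfc /scannow /offbootdir=D:\ /offwindir=D:\Windows

rem Repair component-store corruption in the offline image with DISM
dism /Image:D:\ /Cleanup-Image /RestoreHealth

rem Back up a (hypothetical) user's files to an external drive E: with robocopy
robocopy D:\Users\Example E:\Backup\Example /E /COPY:DAT
```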
Starting with Windows Server 2012/Windows 8, the following additional options are added:
"Refresh" or "Reset": Both re-install Windows from a copy of the operating system on the hard drive. The "Refresh" operation maintains files, settings, and Windows Store apps (but not other programs), while "Reset" performs a factory reset of Windows, optionally formatting the hard drive and performing disk wiping. The Reset function does not perform a full reinstall; it merely performs a factory reset from a WIM image inside a hidden recovery partition. It is possible to create a custom WIM image based on which a Reset is performed.
Startup Settings: Enforces a series of safe settings during the startup.
Windows 10 adds the following:
Restore factory settings: Allows users who upgraded to Windows 10 to revert to their original operating system.
Go back to the previous build: Windows 10 is an operating system for which Microsoft occasionally releases newer builds. In the event that installation of a new build of Windows 10 becomes problematic, this option allows the user to revert to the previous build. It only appears if the previous build's files have not been deleted.
Volumes encrypted with BitLocker can be mounted if a recovery key is available.
Windows Recovery Environment can also be installed to a hard drive partition by OEMs, and customized with additional tools such as a separate system recovery tool for restoring the computer back to its original state. As of Windows Vista SP1, users can create their own bootable CD containing the recovery environment.
REAgentC
Windows includes the REAgentC command which is used to configure a Windows RE boot image and a push-button reset recovery image. It allows administration of recovery options and various customizations. The REAgentC tool can either be used on an offline Windows image or on a running Windows system. The command requires administrator privileges.
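For example, an administrator might inspect and configure Windows RE with commands like the following. This is a sketch: the recovery image path is an assumption, and the commands must be run from an elevated command prompt.

```bat
rem Show the current Windows RE status and boot image location
reagentc /info

rem Register a custom Windows RE boot image (hypothetical path)
reagentc /setreimage /path C:\Recovery\WindowsRE

rem Enable Windows RE on the running system
reagentc /enable
```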
Microsoft DaRT
Microsoft Diagnostics and Recovery Toolset (DaRT), sold as a part of Microsoft Desktop Optimization Pack, is yet another toolset based on Windows PE that performs diagnostics and recovery on an offline copy of Windows. It can manage files, edit the Windows Registry, uninstall previously installed Windows updates, scan the system for malware and restore deleted files.
See also
Live CD
Related software
BartPE
nLite and vLite
WinBuilder
Windows To Go
References
External links
Windows PE Technical Reference
Download Windows PE
Windows administration
Microsoft software
Operating system distributions bootable from read-only media
Live USB
Live CD |
1892539 | https://en.wikipedia.org/wiki/IBM%20OfficeVision | IBM OfficeVision | OfficeVision was an IBM proprietary office support application that primarily ran on IBM's VM operating system and its user interface CMS.
Other platform versions were available, notably OV/MVS and OV/400. OfficeVision provided e-mail, shared calendars, and shared document storage and management, and it provided the ability to integrate word processing applications such as Displaywrite/370 and/or the Document Composition Facility (DCF/SCRIPT). IBM introduced OfficeVision in their May 1989 announcement, followed by several other key releases later.
The advent of the personal computer and the client–server paradigm changed the way organizations looked at office automation. In particular, office users wanted graphical user interfaces. Thus e-mail applications with PC clients became more popular.
OfficeVision/2
IBM's initial answer was OfficeVision/2, a server-requestor system designed to be the strategic implementation of IBM's Systems Application Architecture. The server could run on OS/2, VM, MVS (XA or ESA), or OS/400, while the requester required OS/2 Extended Edition running on IBM PS/2 personal computers, or DOS. IBM also developed OfficeVision/2 LAN for workgroups, which failed to find market acceptance and was withdrawn in 1992. IBM began to resell Lotus Notes and Lotus cc:Mail as an OfficeVision/2 replacement. Ultimately, IBM solved its OfficeVision problems through the hostile takeover of Lotus Software for its Lotus Notes product, one of the two most popular products for business e-mail and calendaring.
IBM originally intended to deliver the Workplace Shell as part of the OfficeVision/2 LAN product, but in 1991 announced plans to release it as part of OS/2 2.0 instead:
IBM last week said some features originally scheduled to ship in OfficeVision/2 LAN will be bundled into the current release of the product, while others will be either integrated into OS/2 or delayed indefinitely... IBM's Workplace Shell, an enhanced graphical user interface, is being lifted from OfficeVision/2 LAN to be included in OS/2 2.0... The shell offers the capability to trigger processes by dragging and dropping icons on the desktop, such as dropping a file into an electronic wastebasket. Porting that feature to the operating system will let any application take advantage of the interface.
Users of IBM OfficeVision included the New York State Legislature and the European Patent Office.
Migration
IBM discontinued support of OfficeVision/VM as of October 6, 2003. IBM recommended that its OfficeVision/VM customers migrate to Lotus Notes and Lotus Domino environments, and IBM offered migration tools and services to assist. Guy Dehond, one of the beta-testers of the AS/400, developed the first migration tool. However, OfficeVision/MVS remained available for sale until March 2014, and was still supported until May 2015, and thus for a time was another migration option for OfficeVision/VM users. OfficeVision/MVS ran on IBM's z/OS operating system.
Earlier PROFS, DISOSS and Office/36
OfficeVision/VM was originally named PROFS (for PRofessional OFfice System) and was initially made available in 1981. Before that it was a PRPQ (Programming Request for Price Quotation), an IBM administrative term for non-standard software offerings with unique features, support and pricing. The first release of PROFS was developed by IBM in Poughkeepsie, NY, in conjunction with Amoco, from a prototype developed years earlier in Poughkeepsie by Paul Gardner and others. Subsequent development took place in Dallas. The editor XEDIT was the basis of the word processing function in PROFS.
PROFS itself was descended from an in-house system developed by IBM Poughkeepsie laboratory. Poughkeepsie developed a primitive in-house solution for office automation over the period 1970–1972; OFS (Office System), which evolved into PROFS, was developed by Poughkeepsie laboratory as a replacement for that earlier system and first installed in October 1974. Compared to Poughkeepsie's original in-house system, the distinctive new features added by OFS were a centralised database virtual machine (data base manager or DBM) for shared permanent storage of documents, instead of storing all documents in user's personal virtual machines; and a centralised virtual machine (mailman master machine or distribution virtual machine) to manage mail transfer between individuals, instead of relying on direct communication between the personal virtual machines of individual users. By 1981, IBM's Poughkeepsie site had over 500 PROFS users.
In 1983, IBM introduced release 2 of PROFS, along with auxiliary software to enable document interchange between PROFS, DISOSS, Displaywriter, IBM 8100 and IBM 5520 systems.
PROFS and its e-mail component, known colloquially as PROFS Notes, featured prominently in the investigation of the Iran-Contra scandal. Oliver North believed he had deleted his correspondence, but the system archived it anyway. Congress subsequently examined the e-mail archives.
OfficeVision/MVS originated from IBM DISOSS, and OfficeVision/400 from IBM Office/36.
IBM's European Networking Center (ENC) in Heidelberg, Germany, developed prototype extensions to OfficeVision/VM to support Open Document Architecture (ODA), in particular a converter between ODA and Document Content Architecture (DCA) document formats.
Earlier ODPS in Far East
OfficeVision/VM for the Far Eastern languages of Japanese, Korean and Chinese originated from IBM Office and Document Control System (ODPS), a DBCS-enabled port of PROFS with added document editing, storage and search functions, similar to Displaywrite/370. It was an integrated office system for the Asian languages that ran on IBM's mainframe computers under VM, offering such functions as email, calendar, and document processing and storage. IBM ODPS was later renamed IBM OfficeVision/VM, and an MVS version (using DISOSS) was not offered. After IBM's buyout of Lotus Development in 1995, ODPS users were recommended to migrate to Lotus Notes.
IBM ODPS was developed in IBM Tokyo Programming Center, located in Kawasaki, Japan, later absorbed into IBM Yamato Development Laboratory, in conjunction with IBM Dallas Programming Center in Westlake, Texas, U.S., where PROFS was developed, and other programming centers. It first became available in 1986 for Japanese, and then was translated into Korean by IBM Korea and into Traditional Chinese by IBM Taiwan. It was not translated into Simplified Chinese for mainland China.
IBM ODPS consisted of four software components:
The Office Support Program, or OFSP, was PROFS enabled to process the Double Byte Character Set of the Asian languages and added some more functions. It could handle email, address, scheduling, storing/search/distribution of documents, and switch to PROFS in English.
The Document Composition Program, or DCP, was a port of the Document Composition Facility, enabled for processing the Double Byte Character Sets with additional functions. It allowed preparation and printing of documents with a SCRIPT-type editing method.
The Document Composition Program/Workstation allowed preparation of documents on IBM 5550, PS/55 and other "workstations" (personal computers), that offered IBM Kanji System functions.
The Facsimile Program offered sending/receiving of facsimile data.
References
Further reading
OfficeVision
Email systems
IBM mainframe software
VM (operating system)
Discontinued software |
5066899 | https://en.wikipedia.org/wiki/FITS%20Liberator | FITS Liberator | The ESA/ESO/NASA FITS (Flexible Image Transport System) Liberator is a free software program for processing and editing astronomical science data in the FITS format to reproduce images of the universe. Version 3 and later are standalone programs; earlier versions were plugins for Adobe Photoshop. FITS Liberator is free software released under the BSD-3 license. The engine behind the FITS Liberator is NASA's CFITSIO library.
FITS has been a standard since 1982 and is recognized by the International Astronomical Union (IAU). While not limited solely to image data, archives in the FITS file format include images of stars, nebulae and galaxies produced by space-based and ground-based telescopes from around the world.
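As an illustration of the format's structure, a FITS header is a sequence of fixed 80-character "cards", each pairing a keyword with a value, terminated by an END card. The following sketch parses such a header (the sample cards are synthetic, and FITS Liberator itself relies on NASA's CFITSIO library rather than code like this):

```python
# Minimal sketch of parsing FITS header "cards" (synthetic sample data).
# Each card is an 80-character record: the keyword occupies columns 1-8,
# "= " in columns 9-10 marks a value card, and END terminates the header.
def parse_fits_header(header_text: str) -> dict:
    cards = [header_text[i:i + 80] for i in range(0, len(header_text), 80)]
    keywords = {}
    for card in cards:
        keyword = card[:8].strip()
        if keyword == "END":
            break
        if card[8:10] == "= ":
            # Strip an optional trailing comment introduced by "/"
            value = card[10:].split("/")[0].strip()
            keywords[keyword] = value
    return keywords

# Build a tiny synthetic header of three value cards plus END.
cards = [
    "SIMPLE  =                    T / conforms to FITS standard",
    "BITPIX  =                   16 / bits per data value",
    "NAXIS   =                    2 / number of data axes",
    "END",
]
header = "".join(c.ljust(80) for c in cards)
print(parse_fits_header(header))  # → {'SIMPLE': 'T', 'BITPIX': '16', 'NAXIS': '2'}
```

Real FITS files pack these cards into 2880-byte blocks and may carry far more metadata, but the keyword/value card layout shown here is the core of the standard.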
Although the first version of the software was an excellent tool used mainly by professional astronomers, efforts have been made to bring high quality astronomical images to the homes of amateur astronomers, educators and students. The FITS Liberator has become the industry standard for professional imaging scientists at the European Space Agency (ESA), the European Southern Observatory (ESO), and NASA. It uses images from the Hubble Space Telescope (HST) and the Very Large Telescope (VLT), among others, to craft beautiful colour astronomical images.
The first and second versions of the FITS Liberator were released in July 2004 and August 2005 respectively; the third version, v3.0.1, was released in February 2012. Starting with v3.0.1, FITS Liberator is a stand-alone product and Adobe Photoshop is no longer required for its use; the latest version is available for the three major operating systems.
Version 1
Version 1 of the ESA/ESO/NASA Photoshop FITS Liberator was completed in July 2004 by imaging scientists at the European Space Agency (ESA), the European Southern Observatory (ESO), and NASA. Version 1 allowed all types of FITS images to be opened and also some limited interaction with the images.
Version 2
The preview window
Histogram
Tools
Statistics
Advanced tools for scaling and stretch
The basic workflow is to open a FITS image, study it in the Preview window, adjust the black-and-white levels (6) to give a reasonable contrast and then set the input range for the scaling of the image by clicking the Auto Scaling Button (7). Now, different values of the Scaled Peak level can be tested to scale the image to better fit with one of the possible Stretch functions (8).
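The scaling and stretching described above can be sketched in a few lines (an illustrative simplification, not the FITS Liberator's actual implementation; the program offers a family of stretch functions, of which the asinh stretch used here is one example):

```python
import math

# Illustrative sketch (not FITS Liberator's actual code): scale pixel values
# between chosen black and white levels, then apply a non-linear stretch so
# faint detail becomes visible without saturating bright regions.
def stretch(pixels, black, white, fn=math.asinh):
    span = white - black
    out = []
    for p in pixels:
        x = min(max((p - black) / span, 0.0), 1.0)  # clip to [0, 1]
        out.append(fn(x * 10) / fn(10))             # normalised stretch
    return out

raw = [0.0, 5.0, 50.0, 500.0, 5000.0]  # hypothetical raw FITS pixel values
print(stretch(raw, black=0.0, white=5000.0))
```

Because the stretch is non-linear, the faint values near the black level are spread over a larger part of the output range than the bright ones, which is what makes dim nebulosity visible alongside bright stars.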
Version 2 of the ESA/ESO/NASA Photoshop FITS Liberator image processing software made it both easier and faster to create colour images from raw observations. Updates included:
FITS images with up to 4 billion grey scales can be processed (32 bit support).
FITS images with up to 500 million pixels or more can be processed (100 times larger than standard images from a digital camera).
Re-designed workflow and user interface. The plug-in remembers previous settings.
New options for advanced scaling and stretching.
An entire section of v2.0 was dedicated to metadata input and the user also had access to a text version of the original FITS header.
With the advent of the FITS Liberator v2.0, it became possible for people at home to create spectacular pictures like the iconic Hubble image Pillars of Creation in a matter of minutes.
An updated version of the software containing new scaling and stretching tools is available that allows better manipulation of astronomical images. An update in v2.1 is the ability for users to embed descriptive information about the image and what it shows directly within the final image. Based on a new standard for Astronomy Visualization Metadata (AVM), this information is stored in the standard manner comparable to that used by digital cameras to record exposure information.
The ESA/ESO/NASA WFPC2 Mosaicator: This new add-on tool works on Hubble WFPC2 images. It will generate a single WFPC2 mosaic file from one WFPC2 file containing four CCD images in individual planes.
The ESA/ESO/NASA FITS Concatenator: This Adobe Photoshop script will combine the metadata from two or more individual exposures after the FITS Liberation process.
Version 3
Version 3.0 of the FITS Liberator includes the following new features:
FITS Liberator is a stand-alone application, no longer requiring Adobe Photoshop.
Processing medium-sized images is up to 35% faster, thanks to significantly improved memory management.
Processing large images is also faster thanks to delayed application of stretch functions.
FITS Liberator saves TIFF files that open in almost any image processing software.
Version 4
Version 4.0 of the FITS Liberator was released on 4 March 2021 and includes the following new features:
64 bit operating systems are now supported.
MacOSX, Windows and Linux support.
Command Line Interface available for batch processing of FITS files.
Full support for big images (even greater than available memory).
Dark Mode.
Full screen.
Version 4 is being developed with the sponsorship of NSF, NOIRLab, Caltech IPAC, ESA, STScI and CFA.
The Team
The team that produced FITS Liberator 1.0 to 3.0.1 consists of:
Project Executive: Lars Lindberg Christensen
Technical Project Manager: Lars Holm Nielsen
Developers: Kaspar K. Nielsen & Teis Johansen
Scientific, technical support and testing: Robert Hurt & David de Martin
The team that produced FITS Liberator 4 consists of:
Developers: David Zambrano Lizarazo & Juan Fajardo Barrero
Scientific, technical support and testing: Robert Hurt
GUI & Logo Design: David Zambrano Lizarazo
Project Executive: Lars Lindberg Christensen
Technical Project Manager: Javier Enciso
How to use it
For experienced users there is a Quick Start Guide available at the FITS Liberator website.
You can try the FITS Liberator interface using an interactive tour available at the BitPointer website.
For new users a Full User Guide for the FITS Liberator 3, can be obtained from the European homepage of the NASA/ESA Hubble Space Telescope.
References
External links
The ESA/ESO/NASA FITS Liberator v3.0.1 spacetelescope.org
The FITS Liberator 4 page at NOIRLab
The FITS Liberator 4 page at BitPointer
NASA's CFITSIO
Free graphics software
Software using the BSD license |
222662 | https://en.wikipedia.org/wiki/VariCAD | VariCAD | VariCAD is a computer program for 3D/2D CAD and mechanical engineering which has been developed since 1988 in the Czech Republic. VariCAD runs on Windows and Linux. It features many tools for 3D modeling and 2D drafting. VariCAD provides support for parameters and geometric constraints, tools for shells, pipelines, sheet metal unbending and crash tests, assembly support, mechanical part and symbol libraries, calculations, bills of materials, and more.
The program includes a standard part library with screws, nuts, bearings etc. Additionally, it offers many calculation modules for, e.g., springs, beam torsion, volume, mass and center of gravity.
VariCAD allows editing of DWG files without conversion using the Open Design Alliance DWGdirect libraries. VariCAD supports the ISO industrial product data exchange format STEP/STP. A list of notable supported file formats is listed in the Comparison of CAD software article.
VariCAD is available for Windows and, for some time now, for Linux. With the addition of Unicode support, the user interface now also supports non-Latin character sets such as those used in the Japanese, Chinese and Russian languages.
VariCAD is available in English, German, Portuguese and Japanese languages.
VariCAD Viewer is a free proprietary computer program for viewing of 3D/2D CAD files. It runs on the Windows and Linux operating systems. Notable supported file formats are listed in the Comparison of CAD, CAM and CAE file viewers article.
See also
Comparison of CAD, CAM and CAE file viewers
Comparison of CAD editors for AEC
References
External links
VariCAD Homepage
Computer-aided design software
Computer-aided design software for Linux |
4026453 | https://en.wikipedia.org/wiki/Colony-forming%20unit | Colony-forming unit | A colony-forming unit (CFU, cfu, Cfu) is a unit used in microbiology. It estimates the number of bacteria or fungal cells in a sample which are viable, able to multiply via binary fission under the controlled conditions. Counting with colony-forming units requires culturing the microbes and counts only viable cells, in contrast with microscopic examination which counts all cells, living or dead. The visual appearance of a colony in a cell culture requires significant growth, and when counting colonies it is uncertain if the colony arose from one cell or a group of cells. Expressing results as colony-forming units reflects this uncertainty.
Theory
The purpose of plate counting is to estimate the number of cells present based on their ability to give rise to colonies under specific conditions of nutrient medium, temperature and time. Theoretically, one viable cell can give rise to a colony through replication. However, solitary cells are the exception in nature, and most likely the progenitor of the colony was a mass of cells deposited together. In addition, many bacteria grow in chains (e.g. Streptococcus) or clumps (e.g., Staphylococcus). Estimation of microbial numbers by CFU will, in most cases, undercount the number of living cells present in a sample for these reasons. This is because the counting of CFU assumes that every colony is separate and founded by a single viable microbial cell.
The plate count is linear for E. coli over the range of 30 to 300 CFU on a standard sized Petri dish. Therefore, to ensure that a sample will yield CFU in this range requires dilution of the sample and plating of several dilutions. Typically, ten-fold dilutions are used, and the dilution series is plated in replicates of 2 or 3 over the chosen range of dilutions. Often 100 µl is plated, but larger volumes of up to 1 ml are also used. Higher plating volumes increase drying times but often do not result in higher accuracy, since additional dilution steps may be needed. The CFU/plate is read from a plate in the linear range, and then the CFU/g (or CFU/mL) of the original is deduced mathematically, factoring in the amount plated and its dilution factor (e.g. CLSI VET01S).
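The back-calculation from a countable plate to the original sample can be sketched as follows (hypothetical counts; the function and its name are illustrative only):

```python
# Sketch of the back-calculation described above (hypothetical counts):
# CFU/mL of the original sample = mean colonies / (volume plated × dilution).
def cfu_per_ml(colony_counts, volume_ml, dilution):
    """colony_counts: replicate plate counts at one dilution (ideally 30-300),
    volume_ml: volume plated per plate, dilution: e.g. 1e-4 for the 10^-4 tube."""
    mean = sum(colony_counts) / len(colony_counts)
    return mean / (volume_ml * dilution)

# Three replicate plates of 0.1 mL each from the 10^-4 dilution tube:
print(cfu_per_ml([152, 148, 160], volume_ml=0.1, dilution=1e-4))
```

With a mean of about 153 colonies per plate, 0.1 mL plated and a 10^-4 dilution, the original sample is estimated at roughly 1.5 × 10^7 CFU/mL.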
An advantage to this method is that different microbial species may give rise to colonies that are clearly different from each other, both microscopically and macroscopically. The colony morphology can be of great use in the identification of the microorganism present.
A prior understanding of the microscopic anatomy of the organism can give a better understanding of how the observed CFU/mL relates to the number of viable cells per milliliter. Alternatively it is possible to decrease the average number of cells per CFU in some cases by vortexing the sample before conducting the dilution. However many microorganisms are delicate and would suffer a decrease in the proportion of cells that are viable when placed in a vortex.
Log notation
Concentrations of colony-forming units can be expressed using logarithmic notation, where the value shown is the base 10 logarithm of the concentration. This allows the log reduction of a decontamination process to be computed as a simple subtraction.
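For example (hypothetical concentrations):

```python
import math

# Sketch of the log notation described above (hypothetical counts): a
# disinfectant that takes a sample from 2.0e6 down to 4.0e2 CFU/mL achieves
# a log10 reduction computed by simple subtraction of the two log values.
before, after = 2.0e6, 4.0e2          # CFU/mL before and after treatment
log_reduction = math.log10(before) - math.log10(after)
print(round(log_reduction, 2))        # → 3.7, i.e. roughly a "3.7-log" reduction
```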
Uses
Colony-forming units are used to quantify results in many microbiological plating and counting methods, including:
The Pour Plate method wherein the sample is suspended in a Petri dish using molten agar cooled to approximately 40-45 °C (just above the point of solidification to minimize heat-induced cell death). After the nutrient agar solidifies the plate is incubated.
The Spread Plate method wherein the sample (in a small volume) is spread across the surface of a nutrient agar plate and allowed to dry before incubation for counting.
The Membrane Filter method wherein the sample is filtered through a membrane filter, then the filter placed on the surface of a nutrient agar plate (bacteria side up). During incubation nutrients leach up through the filter to support the growing cells. As the surface area of most filters is less than that of a standard Petri dish, the linear range of the plate count will be less.
The Miles and Misra Methods or drop-plate method wherein a very small aliquot (usually about 10 microliters) of sample from each dilution in series is dropped onto a Petri dish. The drop dish must be read while the colonies are very small to prevent the loss of CFU as they grow together.
However, with the techniques that require the use of an agar plate, no fluid solution can be used, because the purity of the specimen cannot be verified and it is not possible to count the cells one by one in the liquid.
Tools for counting colonies
Counting colonies is traditionally performed manually using a pen and a click-counter. This is generally a straightforward task, but can become very laborious and time-consuming when many plates have to be enumerated. Alternatively semi-automatic (software) and automatic (hardware + software) solutions can be used.
Software for counting CFUs
Colonies can be enumerated from pictures of plates using software tools. The experimenters would generally take a picture of each plate they need to count and then analyse all the pictures (this can be done with a simple digital camera or even a webcam). Since it takes less than 10 seconds to take a single picture, as opposed to several minutes to count CFU manually, this approach generally saves a lot of time. In addition, it is more objective and allows extraction of other variables such as the size and colour of the colonies.
OpenCFU is a free and open-source program designed to optimise user friendliness, speed and robustness. It offers a wide range of filters and control as well as a modern user interface. OpenCFU is written in C++ and uses OpenCV for image analysis.
NICE is a program written in MATLAB that provides an easy way to count colonies from images.
ImageJ and CellProfiler: Some ImageJ macros and plugins and some CellProfiler pipelines can be used to count colonies. This often requires the user to change the code in order to achieve an efficient work-flow, but can prove useful and flexible. One main issue is the absence of specific GUI which can make the interaction with the processing algorithms tedious.
In addition to software based on traditional desktop computers, apps for both Android and iOS devices are available for semi-automated and automated colony counting. The integrated camera is used to take pictures of the agar plate and either an internal or an external algorithm is used to process the picture data and to estimate the number of colonies.
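At their core, most of these tools threshold the plate image and then count connected groups of foreground pixels, each group taken to be one colony. The counting step can be sketched as follows (a synthetic 0/1 grid stands in for a thresholded photo; real tools such as OpenCFU use far more robust image analysis):

```python
# Toy sketch of what colony-counting software does after thresholding
# (synthetic 0/1 grid standing in for a plate photo): count connected
# components, each taken to be one colony.
def count_colonies(grid):
    rows, cols = len(grid), len(grid[0])
    seen = set()
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and (r, c) not in seen:
                count += 1
                stack = [(r, c)]          # flood-fill this colony
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols):
                        continue
                    if not grid[y][x]:
                        continue
                    seen.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

plate = [[0, 1, 1, 0, 0],
         [0, 1, 0, 0, 1],
         [0, 0, 0, 0, 1],
         [1, 0, 0, 0, 0]]
print(count_colonies(plate))  # → 3
```

This also shows why overlapping colonies are undercounted: once two colonies touch, they form a single connected component, which is one reason drop plates must be read while colonies are still small.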
Automated systems
Many of the automated systems are used to counteract human error, as research techniques that involve humans counting individual cells have a high chance of error. Because researchers regularly count the cells manually with the assistance of transmitted light, this error-prone technique can have a significant effect on the calculated concentration in the main liquid medium when the cells are in low numbers.
Completely automated systems are also available from some biotechnology manufacturers. They are generally expensive and not as flexible as standalone software since the hardware and software are designed to work together for a specific set-up.
Alternatively, some automatic systems use the spiral plating paradigm.
Some of the automated systems, such as the systems from MATLAB, allow the cells to be counted without having to stain them. This lets the colonies be reused for other experiments without the risk of killing the microorganisms with stains. However, a disadvantage of these automated systems is that it is extremely difficult to differentiate between the microorganisms and dust or scratches on blood agar plates, because both dust and scratches can create a highly diverse combination of shapes and appearances.
Alternative units
Instead of colony-forming units, the parameters Most Probable Number (MPN) and Modified Fishman Units (MFU) can be used. The Most Probable Number method counts viable cells and is useful when enumerating low concentrations of cells or enumerating microbes in products where particulates make plate counting impractical. Modified Fishman Units take into account bacteria which are viable, but non-culturable.
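The statistical idea behind MPN can be sketched for the simple single-dilution case (hypothetical numbers; real MPN determinations use several dilutions and published maximum-likelihood tables): if cells are randomly distributed, a tube of volume v is sterile with probability e^(−m·v), so the observed fraction of negative tubes yields an estimate of the concentration m.

```python
import math

# Single-dilution MPN estimate based on Poisson statistics (hypothetical
# numbers; an illustration of the principle, not a standard MPN table).
# A tube of volume v mL stays sterile with probability exp(-m*v), so the
# fraction of negative tubes can be solved for m (cells per mL).
def mpn_single_dilution(negative_tubes, total_tubes, volume_ml):
    p_negative = negative_tubes / total_tubes
    return -math.log(p_negative) / volume_ml

# 3 of 10 tubes inoculated with 1 mL each stayed sterile:
print(round(mpn_single_dilution(3, 10, 1.0), 2))  # → 1.2 cells per mL
```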
See also
Cell counting
Growth medium
Miles and Misra method
Most probable number
Replica plating
Viral plaque
References
Further reading
Microbiology terms
Biostatistics |
66441371 | https://en.wikipedia.org/wiki/Index%20of%20ancient%20Greece-related%20articles | Index of ancient Greece-related articles | This page lists topics related to ancient Greece.
0–9
226 BC Rhodes earthquake
426 BC Malian Gulf tsunami
464 BC Sparta earthquake
A
Aba
Abae
Abaris the Hyperborean
Abas
Abas (son of Lynceus)
Abderus
Ablerus (mythology)
Abolla
Abron (ancient Greece)
Absyrtus
Acacallis
Acacus
Academic skepticism
Academus
Acamantis
Acamas
Acamas (son of Antenor)
Acamas (son of Theseus)
Acantha
Acanthis
Acanthus
Acanthus of Sparta
Acarnan (son of Alcmaeon)
Acarnania
Acarnanian League
Acaste (Oceanid)
Acastus
Acatalepsy
Aceso
Acestor
Achaea (ancient region)
Achaea (Roman province)
Achaea Phthiotis
Achaean Leaders
Achaean League
Achaeans (Homer)
Achaeans (tribe)
Achaemenid destruction of Athens
Achaeus (general)
Achaeus
Achaeus of Eretria
Achaeus of Syracuse
Achelois
Acheloos Painter
Achelous
Acherdus
Acheron
Acherusia
Achilleion (Thessaly)
Achilleis (trilogy)
Achilles
Achilles and Patroclus
Achilles' heel
Achilles on Skyros
Achilles Painter
Achillicus
Achiroe
Achlys
Acis and Galatea
Acmon
Acmon of Phrygia
Acraea
Acratopotes
Acrion
Acrisius
Acrocorinth
Acropolis
Acropolis of Athens
Acrotatus (father of Areus I)
Acrotatus (king of Sparta)
Acroterion
Actaeon
Actaeus
Actor (mythology)
Acumenus
Acusilaus
Adamas (mythology)
Adeimantus of Collytus
Adeimantus of Corinth
Adephagia
Adiaphora
Adikia
Admete (Oceanid)
Admetus
Admetus (mythology)
Adonia
Adonis
Adrasteia
Adrasteia (mythology)
Adrastus
Adrastus (mythology)
Adrastus of Aphrodisias
Adrastus (son of Gordias)
Adrestia
Adultery in Classical Athens
Adymus of Beroea
Adyton
Aeaces (father of Polycrates)
Aeacus
Aeëtes
Aegaeon
Aegean civilization
Aegeoneus
Aegeus
Aegeus (hero)
Aegialeus (King of Argos)
Aegialeus (King of Sicyon)
Aegialeus (mythology)
Aegialeus (strategos)
Aegilia (Attica)
Aegimius
Aegipan
Aegis
Aegisthus
Aegitium
Aegium
Aegius
Aegle
Aegospotami
Aegys
Aeimnestus
Aeinautae
Aelius Nicon
Aeneas
Aenesidemus
Aeolians
Aeolic Greek
Aeolic order
Aeolus
Aeolus (Odyssey)
Aeolus (son of Hellen)
Aeolus (son of Poseidon)
Aepytus
Aerope
Aesacus
Aesara
Aeschines
Aeschines (physician)
Aeschines of Sphettus
Aeschylus
Aeschylus of Rhodes
Aeson
Aesop
Aesop's Fables
Aesymnetes
Aethalidae
Aethalides
Aether (classical element)
Aether (mythology)
Aethlius
Aethon
Aethra
Aethra (mother of Theseus)
Aetion
Aetius (philosopher)
Aetna
Aetnaeus
Aetolia
Aetolian campaign
Aetolian League
Aetolian War
Aetolus
Aetolus of Aetolia
Aexone
Against Androtion
Against Aristogeiton
Against Eratosthenes
Against Leptines
Against Meidias
Against Neaera
Against Simon
Against Spudias
Against Stephanos
Against the Sophists
Against the Stepmother for Poisoning
Against Timarchus
Against Timocrates
Agamede
Agamedes
Agamemnon
Agamemnon (Zeus)
Aganippe
Aganippe (naiad)
Agape
Agapenor
Agaptolemus
Agasias, son of Menophilus
Agasias of Arcadia
Agasicles
Agasthenes
Agatharchides
Agatharchus
Agathodaemon
Agathon
Agathos kai sophos
Agave
Agdistis
Agela
Ageladas
Agelaus
Ageneros
Agenor
Agenor (mythology)
Agenor of Argos
Agenor of Aetolia
Agenor of Psophis
Agenor of Troy
Agenorides
Agerochus
Ages of Man
Agesander of Rhodes
Agesarchus of Tritaea
Agesilaus I
Agesilaus II
Agesilaus (statesman)
Agesilaus (Xenophon)
Agesipolis I
Agesipolis II
Agesipolis III
Agetor
Agias of Sparta
Agis I
Agis II
Agis III
Agis IV
Aglaea
Aglaureion
Aglaurus
Aglaurus, daughter of Cecrops
Agnaptus
Agnodice
Agoge
Agon
Agonius
Agonothetes
Agora
Agora of the Competaliasts
Agoracritus
Agoraea
Agoraios Kolonos
Agoranomos
Agoranomus
Agreus and Nomios
Agriculture in ancient Greece
Agrionia
Agrionius
Agriopas
Agrippa (astronomer)
Agrius
Agrius (son of Porthaon)
Agrotera
Agyieus
Aiantis
Aidoneus
Aidos
Ainis
Aion
Air (classical element)
Aison (vase painter)
Ajax (play)
Ajax the Great
Ajax the Lesser
Akrotiri
Akrotiri Boxer Fresco
Alabandus
Alabastron
Alala
Alalcomenae (Boeotia)
Alalcomenes
Alalcomenia
Alastor
Alazon
Alcaeus
Alcaeus and Philiscus
Alcaeus of Mytilene
Alcaic stanza
Alcamenes
Alcamenes, son of Sthenelaides
Alcathous
Alcathous of Elis
Alces (mythology)
Alcestis
Alcibiades
Alcidamas
Alcidas
Alcimachus of Apollonia
Alcimedon
Alcimus
Alcinoe
Alcinous
Alcmaeon (mythology)
Alcmaeon in Corinth
Alcmaeon of Croton
Alcmaeonidae
Alcman
Alcmaeon
Alcmaeon, son of Megacles
Alcmaeon of Athens
Alcmene
Alcmenes
Alcmenor
Alcmeonis
Alcon
Alcyone
Alcyone (Pleiad)
Alcyone and Ceyx
Alcyoneus
Alcyoneus (son of Diomos)
Alea (Greek soldier)
Alecto
Alectryon
Alepotrypa cave
Aletes (Heraclid)
Aletes of Mycenae
Aletheia
Aleuadae
Aleuas
Aleus
Alexander (Aetolian general)
Alexander (artists)
Alexander of Pherae
Alexander of Rhodes
Alexander Sarcophagus
Alexanor
Alexiares and Anicetus
Alexicacus
Alexicles (general)
Alexicrates
Alexinus
Alexippus
Alexis (poet)
Alexis (sculptor)
Alexon
Algos
Alipherus
Alkimachos of Pydna
Allegory of the cave
Almops
Almus of Orchomenus
Aloadae
Alope
Alope (spring)
Alopece
Alpha
Alpheus (deity)
Alpos
Altamura Painter
Altar of Athena Polias
Altar of Hieron
Altar of the Chians
Altar of the Twelve Gods
Altar of Zeus Agoraios
Althaea
Althaemenes
Alypius of Alexandria
Alypus
Alytarches
Alyzeus
Amalthea (mythology)
Amantes (tribe)
Amarynceus
Amasis Painter
Amasis (potter)
Amathusia
Amazon statue types
Amazonius
Amazonomachy
Amazons
Ambracia
Ambrax
Ambrosia
Ambryon
Ambulia
Amechania
Ameinias of Athens
Ameinocles
Ameipsias
Amentum
Ammonius Saccas
Amoebaean singing
Amompharetus
Ampersand Painter
Amphiaraos Krater
Amphiareion of Oropos
Amphicleia
Amphictyon
Amphictyonic League
Amphictyonis
Amphidromia
Amphillogiai
Amphilochus I of Argos
Amphilochus II of Argos
Amphimachus I of Elis
Amphimachus II of Elis
Amphimachus of Caria
Amphimachus of Mycenae
Amphimedon
Amphinome
Amphinomus
Amphion
Amphion and Zethus
Amphipole
Amphiprostyle
Amphirho
Amphis
Amphisbaena
Amphithea
Amphithemis
Amphitrite
Amphitryon
Amphora
Amphora (unit)
Amphora of Hermonax in Würzburg
Amphoterus (son of Alcmaeon)
Ampyx
Amulet MS 5236
Amyclae
Amyclas
Amyclas of Sparta
Amycus (centaur)
Amydon
Amykos
Amykos Painter
Amynomachus
Amyntor
Amythaon
Anabasis (Xenophon)
Anacaea
Anacharsis
Anacleteria
Anacreon
Anactor
Anagnorisis
Anagyrus Painter
Anaideia
Anakes
Analatos Painter
Analogy of the divided line
Analogy of the sun
Anamnesis
Ananke
Anaphlystus
Anapos
Anathyrosis
Anax
Anax (mythology)
Anaxagoras
Anaxagoras (mythology)
Anaxagoras of Aegina
Anaxander
Anaxandra
Anaxandridas I
Anaxandridas II
Anaxandrides
Anaxarchus
Anaxibia
Anaxibius
Anaxidamus
Anaxilas (comic poet)
Anaximander
Anaximenes of Miletus
Anaxippus
Anaxis
Anaxo
Anaxo (daughter of Alcaeus)
Ancaeus (son of Poseidon)
Anchiale
Anchialus
Anchises
Ancient accounts of Homer
Ancient Agora of Athens
Ancient Corinth
Ancient Elis
Ancient Greece
Ancient Greece–Ancient India relations
Ancient Greece and wine
Ancient Greek
Ancient Greek accent
Ancient Greek architecture
Ancient Greek art
Ancient Greek astronomy
Ancient Greek boxing
Ancient Greek calendars
Ancient Greek clubs
Ancient Greek coinage
Ancient Greek comedy
Ancient Greek conditional clauses
Ancient Greek cuisine
Ancient Greek dialects
Ancient Greek flood myths
Ancient Greek folklore
Ancient Greek funeral and burial practices
Ancient Greek funerary vases
Ancient Greek grammar
Ancient Greek law
Ancient Greek literature
Ancient Greek medicine
Ancient Greek mercenaries
Ancient Greek military personal equipment
Ancient Greek Musical Notation
Ancient Greek nouns
Ancient Greek novel
Ancient Greek Numbers (Unicode block)
Ancient Greek Olympic festivals
Ancient Greek personal names
Ancient Greek philosophy
Ancient Greek phonology
Ancient Greek present progressive markers
Ancient Greek religion
Ancient Greek sculpture
Ancient Greek technology
Ancient Greek temple
Ancient Greek units of measurement
Ancient Greek verbs
Ancient Greek warfare
Ancient harbour of Samos
Ancient history of Cyprus
Ancient Macedonian army
Ancient Macedonian language
Ancient Magnesia
Ancient Olympic Games
Ancient Olympic pentathlon
Ancient Theatre of Epidaurus
Ancient theatre of Taormina
Ancient Thera
Ancient Thessaly
Ancyle
Andokides (potter)
Andokides (vase painter)
Andraemon
Andragathus
Androcleides
Androcydes (painter)
Androdamas
Androetas
Androgeos
Androgeus (Aeneid)
Androgeus (son of Minos)
Androlepsy
Andromache
Andromache (play)
Andromachus
Andromeda
Andron
Andron (physician)
Andronicus of Rhodes
Andropompus
Androtion
Androtion (historian)
Anemoi
Aneristus
Angele (deme)
Angelitos Athena
Angelos
Anius
Anniceris
Anonymus Londinensis
Anta
Anta capital
Antae temple
Antaea
Antaeus
Antaeus (physician)
Antalcidas
Antefix
Anteias
Antenor
Antenor (mythology)
Antenor of Troy
Antenor (writer)
Antenor Kore
Antenorides
Antepredicament
Anteros
Anthas
Anthedon (Boeotia)
Antheia
Anthesphoria
Anthesteria
Antheus
Anthippus
Anthousai
Anticlea
Anticlus
Anticrates
Antigenes
Antigenes (historian)
Antigone
Antigone (mythology)
Antigone (Euripides play)
Antigone (Sophocles play)
Antigonia
Antigonid dynasty
Antigonid Macedonian army
Antigonid–Nabataean confrontations
Antigonus (historian)
Antigonus (mythology)
Antigonus (physician)
Antigonus (sculptor)
Antigonus of Carystus
Antikyra
Antikythera
Antikythera Ephebe
Antikythera mechanism
Antikythera wreck
Antilochus
Antilochus (historian)
Antimachia
Antimachus
Antimachus (sculptor)
Antimenes Painter
Antimoerus
Antinoeis
Antinous of Ithaca
Antiochis (tribe)
Antiochus (admiral)
Antiochus (mythology)
Antiochus (physician)
Antiochus (sculptor)
Antiochus of Arcadia
Antiochus of Ascalon
Antiope
Antiope (Amazon)
Antipater
Antipater (astrologer)
Antipater of Acanthus
Antipater of Cyrene
Antipater of Sidon
Antipater of Tarsus
Antipater of Tyre
Antipatitis
Antipatrid dynasty
Antiperistasis
Antiphanes (comic poet)
Antiphanes of Argos
Antiphates
Antiphemus
Antiphilus
Antiphon (brother of Plato)
Antiphon (orator)
Antiphon (tragic poet)
Antiphon (writer)
Antiphon Painter
Antiphonus
Antiphus
Antisthenes
Antisthenes (Heraclitean)
Antisthenes of Rhodes
Antisthenes of Sparta
Antistrophe
Antonius
Antonius of Argos
Antorides
Anyte of Tegea
Anytos
Anytus
Aoede
Aoidos
Aon
Aorist
Aorist (Ancient Greek)
Aornum
Apanchomene
Apate
Apatheia
Apaturia
Apaturia (Greek mythology)
Apaturius
Apega of Nabis
Apeiron
Apella
Apellai
Apellaia
Apellas
Apelles
Apemius
Apesantius
Aphareus
Aphareus (writer)
Aphareus of Messenia
Apheidas
Aphidna
Aphneius
Aphrodisia
Aphrodite
Aphrodite of Knidos
Aphrodite of the Gardens
Aphrodite Pandemos
Aphrodite Urania
Aphroditus
Apis (Greek mythology)
Apis of Argos
Apis of Sicyon
Apobates Base
Apocatastasis
Apodektai
Apodicticity
Apollo
Apollo and Daphne
Apollo Citharoedus
Apollo of Mantua
Apollo of Piombino
Apollodoros (vase painter)
Apollodorus (general)
Apollodorus (painter)
Apollodorus (sculptor)
Apollodorus Logisticus
Apollodorus of Acharnae
Apollodorus of Athens
Apollodorus of Boeotia
Apollodorus of Carystus
Apollodorus of Cyrene
Apollodorus of Erythrae
Apollodorus of Phaleron
Apollodorus of Seleucia
Apollodorus of Tarsus
Apollonian and Dionysian
Apollonides (governor of Argos)
Apollonides (philosopher)
Apollonides of Boeotia
Apollonides of Cardia
Apollonides of Cos
Apollonides of Smyrna
Apollonides of Sparta
Apollonieis
Apollonis
Apollonius
Apollonius Cronus
Apollonius Molon
Apollonius of Acharnae
Apollonius of Aphrodisias
Apollonius of Chalcedon
Apollonius of Clazomenae
Apollonius of Laodicea
Apollonius of Perga
Apollonius of Tyana
Apollonius (son of Archias)
Apollonius (son of Chaeris)
Apollothemis
Apology (Xenophon)
Apomyius
Aponia
Apophantic
Aporia
Apotropaei
Apotrophia
Apoxyomenos
Apple of Discord
Apsines
Apulian vase painting
Arabius (mythology)
Aracus (admiral)
Arae
Araphen
Araros
Aratus of Sicyon
Arbius
Arcadia
Arcadian League
Arcadocypriot Greek
Arcas
Arcesilaus
Arcesilaus (mythology)
Arcesius
Archaeological Park of Dion
Archaeological site of Terpsithea Square
Archaic Greece
Archaic Greek alphabets
Archaic smile
Arche
Arche (mythology)
Archedicus
Archegetes
Archelaus (geographer)
Archelaus (Heraclid)
Archelaus (philosopher)
Archelaus (play)
Archelaus Chersonesita
Archelaus of Sparta
Archelochus
Archemachus
Archemachus of Euboea
Archermus
Archestratus
Archestratus (general)
Archestratus (music theorist)
Archias of Corinth
Archidamus (physician)
Archidamus I
Archidamus II
Archidamus III
Archidamus IV
Archidamus V
Archilochus
Archimedes
Archimedes Palimpsest
Archimelus
Archinus
Archinus (historian)
Archon
Archon basileus
Archytas
Archytas of Mytilene
Ardalus
Ardeas
Aregon
Areopagite constitution
Areopagus
Ares
Ares Borghese
Aresas
Arestor
Arete
Arete (mythology)
Aretes of Dyrrachium
Arethusa
Aretology
Areus I
Areus II
Arezzo 1465 vase
Argalus
Arge
Arges (Cyclops)
Argeus of Argos
Argia
Argileonis
Argiope
Argius
Argive vase painting
Argo
Argonautica
Argonautica Orphica
Argonauts
Argos
Argos (dog)
Argos panoply
Argos Theater
Argus (Argonaut)
Argus (Greek myth)
Argus (king of Argos)
Argus Panoptes
Agryle
Argyramoiboi
Argyraspides
Argyrocopeum
Arignote
Arimaspi
Arimneste
Arimnestos
Arimoi
Arion
Arion (mythology)
Ariphron
Arisbas
Arisbe
Aristaeus
Aristaeus the Elder
Aristagoras
Aristander of Paros
Aristarchus of Athens
Aristarchus of Colchis
Aristarchus of Samos
Aristarchus of Samothrace
Aristarchus of Sparta
Aristarchus of Tegea
Aristeas (sculptor)
Aristeia
Aristeides
Aristeus
Aristias
Aristides
Aristides of Thebes
Aristion
Aristion (physician)
Aristippus
Aristippus of Larissa
Aristippus the Younger
Aristo of Ceos
Aristocleidas
Aristocleides
Aristocles (sculptors)
Aristocles of Messene
Aristocracy
Aristodemus
Aristodemus of Cydathenaeum
Aristodemus of Miletus
Aristodemus of Sparta
Aristogeiton (orator)
Aristoi
Aristolaos
Aristomachos of Argos
Aristomachus
Aristomenes
Ariston (painter)
Ariston of Athens
Ariston of Sparta
Aristonymus
Aristophanes
Aristophanes (vase painter)
Aristophanes of Byzantium
Aristophon (comic poet)
Aristotelian ethics
Aristotelianism
Aristotle
Aristotle's biology
Aristotle's theory of universals
Aristotle's views on women
Aristotle's wheel paradox
Aristotle of Cyrene
Aristoxenus
Aristoxenus (physician)
Arithmetica
Arius Didymus
Arkalochori Axe
Arkesilas Cup
Arkesilas Painter
Arnaeus
Arrephorion
Arrephoros
Arrhephoria
Arrhichion
Arrian
Arsinoe
Arsis and thesis
Artas of Messapia
Artemidorus
Artemis
Artemision Bronze
Arundel marbles
Arura
Aryballos
Asbolus
Ascalaphus
Ascalaphus (son of Acheron)
Ascalaphus of Orchomenus
Ascanius
Asclepiad (title)
Asclepiades of Phlius
Asclepiades the Cynic
Asclepiodorus (painter)
Asclepiodotus (philosopher)
Asclepeion
Asclepius
Ascolia
Asebeia
Asia (mythology)
Asine (Messenia)
Asius
Asius of Samos
Asklepieion of Athens
Askos
Asopis
Asphodel Meadows
Aspis
Astacus (mythology)
Asteas
Asteria (Titaness)
Asterion (god)
Asterion (king of Crete)
Asterius
Asterodia
Asteropaios
Asterope
Asterope (Hesperid)
Astomi
Astra Planeta
Astraea
Astraeus
Astraeus (mythology)
Astris
Astronomical rings
Asty
Astyanax
Astydameia
Astylochus
Astymedusa
Astynomus
Astynous
Atalanta
Atas
Ateleia
Atene (deme)
Athamanians
Athena
Athena Alea
Athena Alkidemos
Athena Areia
Athena Painter
Athena Parthenos
Athena Promachos
Athenaeus
Athenaeus (musician)
Athenaeus Mechanicus
Athenian Band Cup by the Oakeshott Painter (MET 17.230.5)
Athenian coinage decree
Athenian coup of 411 BC
Athenian democracy
Athenian festivals
Athenian Grain-Tax Law of 374/3 B.C.
Athenian military
Athenian Revolution
Athenian sacred ships
Athenian Treasury
Athenians Project
Athenion of Maroneia
Athenodorus Cananites
Athenodorus of Soli
Athens
Atheradas of Laconia
Athmonum
Athos
Atimia
Atintanians
Atlantis
Atlas (architecture)
Atlas (mythology)
Atomism
Atrax (mythology)
Atrax (Thessaly)
Atreus
Atropos
Attaginus
Attalid dynasty
Attalus (general)
Attalus of Rhodes
Atthidographer
Attic calendar
Attic declension
Attic Greek
Attic helmet
Attic numerals
Attic orators
Attic talent
Attic War
Attic weight
Attica
Atticism
Atticus (philosopher)
Attis
Atymnius
Atys (son of Croesus)
Atys of Lydia
Augeas
Aulis
Auloniad
Aulos
Auridae
Autariatae
Autesion
Autochthe
Autochthon
Autokrator
Autolycus
Autolycus of Pitane
Automedon
Autonoe (mythology)
Autonoë of Thebes
Autonous
Axiochus (Alcmaeonid)
Axiochus (dialogue)
Axion
Axiotta
Axylus
Azan (mythology)
Azania
Azenia (Attica)
Azone
B
Babys
Bacchiadae
Bacchius of Tanagra
Bacchoi
Bacchylides
Baiake
Bakis
Balius and Xanthus
Ballista
Baltimore Painter
Banausos
Band cup
Band skyphos
Baptes
Barbiton
Barytone
Basileus
Basilides (Stoic)
Basilides of Tyre
Basilides the Epicurean
Basilinna
Bassaris
Bate (Attica)
Batea
Bathonea
Bathycles
Bathycles of Magnesia
Batiae
Batis of Lampsacus
Batrachomyomachia
Battiadae
Battle of Abydos
Battle of Aegina
Battle of Aegospotami
Battle of Alalia
Battle of Amorgos
Battle of Amphipolis
Battle of Arginusae
Battle of Artemisium
Battle of Asculum
Battle of Beneventum (275 BC)
Battle of Byzantium
Battle of Catana (397 BC)
Battle of Chaeronea (338 BC)
Battle of Chios (201 BC)
Battle of Cnidus
Battle of Corinth (146 BC)
Battle of Coronea (394 BC)
Battle of Coronea (447 BC)
Battle of Corupedium
Battle of Crannon
Battle of Cretopolis
Battle of Crocus Field
Battle of Cumae
Battle of Cunaxa
Battle of Cynoscephalae (364 BC)
Battle of Cynoscephalae
Battle of Cynossema
Battle of Cyzicus
Battle of Delium
Battle of Deres
Battle of Dyme
Battle of Embata
Battle of Ephesus (ca. 258 BC)
Battle of Eretria
Battle of Gabiene
Battle of Gaugamela
Battle of Gaza (312 BC)
Battle of Haliartus
Battle of Heraclea
Battle of Himera (409 BC)
Battle of Himera (480 BC)
Battle of Hysiae (417 BC)
Battle of Hysiae (c.669 BC)
Battle of Idomene
Battle of Ipsus
Battle of Issus
Battle of Lade
Battle of Lade (201 BC)
Battle of Lechaeum
Battle of Leontion
Battle of Leuctra
Battle of Lyncestis
Battle of Lysimachia
Battle of Mantinea (207 BC)
Battle of Mantinea (362 BC)
Battle of Mantinea (418 BC)
Battle of Marathon
Battle of Megalopolis
Battle of Megara
Battle of Mount Lycaeum
Battle of Munychia
Battle of Mycale
Battle of Myonessus
Battle of Mytilene (406 BC)
Battle of Naupactus
Battle of Naxos
Battle of Nemea
Battle of Notium
Battle of Oenophyta
Battle of Olpae
Battle of Orkynia
Battle of Orneae
Battle of Pandosia
Battle of Paraitakene
Battle of Paxos
Battle of Pharos
Battle of Phoenice
Battle of Phyle
Battle of Piraeus
Battle of Plataea
Battle of Plataea (323 BC)
Battle of Potidaea
Battle of Pydna
Battle of Pylos
Battle of Raphia
Battle of Rhium
Battle of Salamis
Battle of Salamis (306 BC)
Battle of Scarpheia
Battle of Sellasia
Battle of Sepeia
Battle of Spartolos
Battle of Sphacteria
Battle of Sybota
Battle of Syme
Battle of Tanagra (426 BC)
Battle of Tanagra (457 BC)
Battle of Tegyra
Battle of Thebes
Battle of Thermopylae
Battle of Thermopylae (279 BC)
Battle of Thermopylae (323 BC)
Battle of the Echinades (322 BC)
Battle of the Eurymedon
Battle of the Eurymedon (190 BC)
Battle of the Granicus
Battle of the Hellespont (321 BC)
Battle of the Hydaspes
Battle of the 300 Champions
Battle of the Fetters
Battle of the Great Foss
Battle of the Strait of Messina
Battle of the Tigris
Battus
Baubo
Baucis and Philemon
Belbina (Argolis)
Beldam Painter
Belemina
Bellerophon
Bellerophon Painter
Bellerophon (play)
Belly Amphora by the Andokides Painter (Munich 2301)
Bema
Bema of Phaidros
Bematist
Bembina (Argolis)
Bendidia
Bene (Crete)
Boeotian muses
Berenice (Epirus)
Berenicidae
Berlin Foundry Cup
Berlin Painter
Beroe
Besa (Attica)
Bessa (Locris)
Beta
Between Scylla and Charybdis
Bia
Biannus
Bias
Bias (son of Amythaon)
Bias of Priene
Bibliotheca (Pseudo-Apollodorus)
Bident
Bienor
Bilingual kylix by the Andokides painter
Bilingual vase painting
Bion of Abdera
Bion of Borysthenes
Bionnus
Bireme
Bisaltae
Bisaltes
Biston
Bistonis
Biton of Pergamon
Black-figure pottery
Black-glazed Ware
Black soup
Blond Kouros's Head of the Acropolis
BMN Painter
Boar's tusk helmet
Boebe (Thessaly)
Boedromia
Boeotarch
Boeotia
Boeotian Dancer's Group Kothon, Black Figure Tripod, 6th Century B.C.
Boeotian helmet
Boeotian shield
Boeotian Treasury
Boeotian vase painting
Boeotian War
Boeotus
Boeotus (son of Poseidon)
Boeotus of Sicyon
Boethus
Boethus of Sidon (Stoic)
Boios
Boium
Bolbe
Bolina
Bolina (Achaea)
Bomolochus
Book of Lemmas
Boreads
Borghese Gladiator
Borghese Vase
Bormus
Borus
Borysthenes
Borysthenis
Bosporan Kingdom
Botres
Bottiaea
Boudeion
Boukris
Boule
Bouleuterion
Bouleutic oath
Bounos
Boustrophedon
Bouzyges
Bowl of Hygieia
Boxer Stele Fragment from Kerameikos
Boxing Siana Cup
Brachyllas
Branchus (lover of Apollo)
Brangas
Brasidas
Brauroneion
Brea (Thrace)
Bremon
Bremusa
Brimo
Briseis
Briseus
Britomartis
Brizo
Bromius
Brontinus
Bronze Diskos Thrower Statue
Bronze Statuette of Athletic Spartan Girl
Broteas
Brothers Poem
Bryaxis
Brycus
Brygos
Brygos cup of Würzburg
Brygos Painter
Bryn Mawr Painter
Bryseae
Bryson of Achaea
Bryson of Heraclea
Buchetium
Bucolion
Bucolus
Budeia
Bularchus
Bull-Leaping Fresco
Bull of the Corcyreans
Bupalus and Athenis
Buphagus
Buphonia
Buprasium
Bura
Burgon Group
Burgon vase
Buskin
Butadae
Butades
Butes
Bybon
Byzantium
Byzas
C
C Painter
Caanthus
Cabeiri
Cabeiro
Cadmea
Cadmean victory
Cadmus
Cadmus of Miletus
Caduceus
Caduceus as a symbol of medicine
Caeneus
Caeretan hydria
Caerus
Calamis (4th century BC)
Calamis (5th century BC)
Calathus (basket)
Calchas
Calesius
Caletor
Caliadne
Callias III
Callichore
Callicles
Callicrates
Callicrates of Sparta
Callicratidas
Callidice
Callimachus (polemarch)
Callimedon
Callinus
Calliope
Calliphon
Calliphon of Croton
Callippides
Callippus
Callippus of Syracuse
Callirhoe (mythology)
Callirrhoe (daughter of Achelous)
Callirrhoe (Oceanid)
Callisthenes
Callisto
Callistratus (grammarian)
Callithyia
Callixenus
Calyce
Calydnus
Calydon
Calydon of Aetolia
Calydoneus
Calydonian boar hunt
Calypso (mythology)
Calyx-Krater by the artist called the Painter of the Berlin Hydria depicting an Amazonomachy
Cameirus (mythology)
Campanian vase painting
Campe
Canace
Canachus
Candalus
Candaon
Candaules
Canephoria
Canethus
Canopus
Canosa vases
Canthus
Cap of invisibility
Capaneus
Cape Matapan
Capture of Oechalia
Capys of Dardania
Car (Greek myth)
Car of Caria
Carcinus (writer)
Cardamyle
Cardia (Thrace)
Carius
Carmanor (of Crete)
Carmanor (son of Dionysus)
Carme (mythology)
Carneades
Carneia
Carneiscus
Carnus
Carpaea
Carpus of Antioch
Caryae
Caryatid
Caryatis
Carystius
Carystus
Cassandra
Cassandra (metaphor)
Cassandreia
Cassiopeia (mother of Andromeda)
Cassopaei
Cassope
Cassotis
Castalia
Castellani Painter
Castor of Rhodes
Catalogue of Ships
Catalogue of Women
Catamite
Catastasis
Categories (Aristotle)
Catharsis
Catoptrics
Catreus
Cattle of Helios
Caucon
Caucones
Caunos (mythology)
Cavalcade Painter
Cave of Euripides
Cave Sanctuaries of the Akropolis
Cebes
Cebren
Cebriones
Cecrops
Cecrops I
Cecrops II
Cedalion
Cedi (Attica)
Celaeneus
Celaeno
Celaeno (Pleiad)
Celaenus (mythology)
Celestial spheres
Celeus
Cella
Celtine
Celtus
Centaur
Centaurides
Central Greece
Centuripe ware
Cephale
Cephalion
Cephalus
Cephalus of Athens
Cephalus of Phocis
Cepheus (father of Andromeda)
Cepheus (king of Tegea)
Cephisia
Cephisodorus
Cephisodotus (general)
Cephisodotus the Elder
Cephisodotus the Younger
Cephisso
Cephissus
Cerambus
Cerameicus Painter
Cerameis
Ceraon
Cerastes
Cerberus
Cercaphus
Cercaphus (Heliadae)
Cercetes
Cercidas
Cercopes
Cercopes (epic poem)
Cercops
Cercyon
Cercyon of Eleusis
Cerdo (mythology)
Ceremonies of ancient Greece
Ceriadae
Cerinthus (Euboea)
Ceroessa
Ceryneian Hind
Ceryx
Cestria (Epirus)
Cestrinus
Ceto
Ceto (Greek myth)
Cettus
Cetus
Ceuthonymus
Ceyx of Trachis
Chabrias
Chaeremon
Chaeremon of Alexandria
Chaerephon
Chaeresilaus
Chaeron of Pellene
Chaeronea
Chaetus
Chalandriani
Chalceia
Chalcidian helmet
Chalcidianising cup
Chalciope
Chalcis
Chalcis (Aetolia)
Chalcis (Epirus)
Chalcis Decree
Chalcodon
Chalcon
Chaldean Oracles
Chalkaspides
Chalkidian pottery
Chalkidiki
Chalkotheke
Chamaeleon (philosopher)
Chaon
Chaonia
Chaonians
Chaos
Charadra (Epirus)
Charadra (Messenia)
Charadra (Phocis)
Chares of Athens
Chares of Lindos
Charicles
Chariclo
Charidemus
Charilaus
Chariot Allegory
Charioteer of Delphi
Charis (mythology)
Charisticary
Charites
Charitimides
Charition mime
Chariton
Charixene
Charmadas
Charmides
Charmides (dialogue)
Charmus
Charnabon
Charon
Charon's obol
Charondas
Charops
Charybdis
Chastieis
Chatsworth Head
Cheirisophus (general)
Chelys
Cheramyes
Chersias
Chersiphron
Chi (letter)
Chian wine
Chigi vase
Children of Heracles
Chiliarch
Chilon of Patras
Chilon of Sparta
Chion of Heraclea
Chione
Chione (daughter of Arcturus)
Chione (daughter of Boreas)
Chione (daughter of Callirrhoe)
Chionides
Chionis of Sparta
Chios
Chios (Caria)
Chirimachus
Chiron
Chiton
Chiusi Painter
Chlamys
Chloris
Chloris (nymph)
Chloris of Thebes
Choerilus (playwright)
Choerilus of Samos
Cholargos (deme)
Cholleidae
Choragic Monument of Nikias
Choragic Monument of Thrasyllos
Choral poetry
Choreia
Chorizontes
Chorus of the elderly in classical Greek drama
Chremonidean War
Chrestomathy
Chromia
Chromis
Chromius
Chronology of ancient Greek mathematicians
Chronos
Chrysanthis
Chrysaor
Chryse (Lesbos)
Chryse (mythology)
Chryse (ancient Greek placename)
Chryseis
Chryseis (mythology)
Chryselephantine sculpture
Chryselephantine statues at Delphi
Chryses (mythology)
Chryses of Troy
Chrysippe
Chrysippus
Chrysippus (Greek myth)
Chrysippus of Cnidos
Chrysippus of Elis
Chrysis Painter
Chrysogonus of Athens
Chrysondyon
Chrysothemis
Chrysus
Chthonia
Chthonic
Chthonius
Cicynna
Cilix
Cilla
Cilla (city)
Cimon
Cimon Coalemos
Cimon of Cleonae
Cinaethon of Sparta
Cineas
Cineas (Athenian)
Cinyras
Cipollino marble
Circe
Cisseus
Cistophorus
Cisus
Cissus (mythology)
Cithara
Citharode
City walls of Athens
Class of Cabinet des Médailles 218
Classical Greece
Classical mythology
Classical order
Classical sculpture
Claw of Archimedes
Cleander of Gela
Cleander of Sparta
Cleandridas
Cleanthes
Cleanthes (artist)
Clearchus of Rhegium
Clearchus of Soli
Clearchus of Sparta
Cleidemus
Cleinias
Cleinias of Tarentum
Cleisthenes
Cleisthenes (son of Sibyrtius)
Cleitagora
Cleite
Cleitus the Black
Cleitus the White
Cleobule
Cleobulina
Cleobulus
Cleocharia
Cleodaeus
Cleodora (nymph)
Cleodorus
Cleolaus
Cleolla
Cleomachus
Cleombrotus I
Cleombrotus II
Cleombrotus (regent)
Cleombrotus of Ambracia
Cleomedes
Cleomenean War
Cleomenes I
Cleomenes II
Cleomenes III
Cleomenes (seer)
Cleomenes the Cynic
Cleon
Cleon (mythology)
Cleon (sculptor)
Cleonaeus
Cleondas of Thebes
Cleonides
Cleonymus of Athens
Cleonymus of Sparta
Cleopatra (Danaid)
Cleophon (poet)
Cleophon (politician)
Cleostratus
Cleruchy
Climacteric year
Clinomachus
Clio
Clio (mythology)
Clipeus
Clitomachus (philosopher)
Clitophon (Athenian)
Clitophon (dialogue)
Clitorians
Clonia (nymph)
Clonius
Clothing in ancient Greece
Clotho
Clymene
Clymenus
Clytemnestra
Clytie
Clytie (Oceanid)
Clytius
Clytus
Cnemus
Cnidian Treasury
Coa vestis
Coan wine
Coastal Lamptrai
Cocalus
Cocytus
Codrus
Codrus Painter
Coele
Coeranus
Coeratadas
Coes of Mytilene
Coinage of Side
Coinage of the Social War (91–88 BC)
Colaeus
Collytus
Colonae (Leontis)
Colonides
Colonus (Attica)
Colophon
Colossus of Rhodes
Colossus of the Naxians
Colotes
Columbus Painter
Comaetho
Comast Group
Combe
Cometas
Cometes
Common Peace
Companion cavalry
Comus
Concentric spheres
Congress at the Isthmus of Corinth
Congress of Gela
Conisterium
Conon
Conon (mythographer)
Conon of Samos
Conservation and restoration of ancient Greek pottery
Conspiracy of Cinadon
Constitution of the Athenians (Pseudo-Xenophon)
Constitution of the Lacedaemonians
Constitution of the Athenians (Aristotle)
Contest of Cithaeron and Helicon
Contest of Homer and Hesiod
Conthyle
Contrapposto
Controversia
Coön
Copae
Copreus (mythology)
Copreus of Elis
Coprus
Corax of Syracuse
Cordax
Coresus
Corinthian bronze
Corinthian helmet
Corinthian order
Corinthian War
Corinthus
Coriscus of Scepsis
Corium (Crete)
Cornix
Coroebus
Coroebus of Elis
Coronaeus
Corone (Messenia)
Coroneia (Boeotia)
Coronis
Coronis (lover of Apollo)
Coronis (textual symbol)
Coronus
Coroplast (artisan)
Corpus Aristotelicum
Corus (mythology)
Corybas (mythology)
Corycia
Corydallus
Coryphaeus
Corythus
Cothocidae
Cotyla
Cotyttia
Counter-Earth
Cragaleus
Cranaus
Crantor
Crantor (mythology)
Craterus' ex voto
Crates (engineer)
Crates of Athens
Crates of Mallus
Crates of Thebes
Crateuas (physician)
Cratinus
Cratippus of Athens
Cratippus of Pergamon
Cratylus
Cratylus (dialogue)
Creon (king of Corinth)
Creon (king of Thebes)
Creonion
Creophylus of Samos
Crepidoma
Cres
Cresphontes
Crestonia
Cretan archers
Cretan Bull
Cretan War (205–200 BC)
Crete
Crete (mythology)
Cretea
Cretheus
Crethon
Creusa
Creusa (Naiad)
Creusa of Athens
Creusa of Corinth
Creusa of Troy
Criasus
Crinacus
Crinaeae
Crinagoras of Mytilene
Crinis
Crino
Crioa (Attica)
Crissa
Crisus
Critheïs
Crithote (Thrace)
Critias
Critias (dialogue)
Crito
Crito of Alopece
Critodemus
Critolaos of Megalopolis
Critolaus
Criton of Heraclea
Criton of Pieria
Crius
Croatian Apoxyomenos
Crobylus
Crocus (mythology)
Croeseid
Croesus
Crommyonian Sow
Cropia (Attica)
Crotalum
Crotopus
Crouching Satyr Eye-Cup
Crypteia
Ctesias
Ctesibius
Ctesicles
Ctesilochus
Ctesippus
Ctimene
Cuarius (Boeotia)
Cult of Artemis at Brauron
Cult of Dionysus
Cultural depictions of Medusa and Gorgons
Cumae
Curetes (tribe)
Cyamites
Cyaneae
Cyanippus
Cybele
Cychreides
Cychreus
Cycliadas
Cyclic Poets
Cyclopean masonry
Cyclopes
Cyclops (play)
Cycnus
Cycnus of Aetolia
Cycnus of Kolonai
Cycnus of Liguria
Cycnus (son of Ares)
Cydantidae
Cydathenaeum
Cydias
Cydon
Cylarabes
Cylix of Apollo
Cyllarus
Cyllene (Elis)
Cylon of Athens
Cynaegirus
Cynaethus
Cynegeticus
Cynicism (philosophy)
Cynisca
Cynortas
Cynosarges
Cynosura (Laconia)
Cynurus
Cynus
Cyparissus
Cyparissus (Phocis)
Cyphus
Cypria
Cypriot Bichrome ware
Cypselus
Cyrenaics
Cyrene
Cyropaedia
Cythera
Cytherus
Cytinium
Cytissorus
Cytorus
Cyzicus
D
Dactyls
Daduchos
Daedala
Daedalidae
Daedalion
Daedalus
Daemon
Daemones Ceramici
Daetor
Daidala
Daiphron
Damarchus
Damasen
Damasichthon
Damasichthon (King of Thebes)
Damasithymus
Damastor
Damo
Damocles
Damocrates
Demodocus (dialogue)
Damon and Pythias
Damon of Athens
Damysus
Danaë
Danaïdes
Danake
Danaus
Dancer of Pergamon
Dancers of Delphi
Dancing Satyr of Mazara del Vallo
Dandes of Argos
Daphnaie
Daphne
Daphnephoria
Daphnis
Daphnus
Dardanian invasion of Epirus
Dardanians
Dardanus
Dardanus of Athens
Dardanus (Scythian king)
Dardanus (son of Zeus)
Dares Phrygius
Darius Painter
Darius Vase
Dascylium (Caria)
Dascylus
Dassaretae
Data (Euclid)
Daulis
Daulis (mythology)
Daybreak Painter
De genio Socratis
De Interpretatione
Death in ancient Greek art
Decelea
Deception of Zeus
Declension of Greek nouns in Latin
Decline of Greco-Roman polytheism
Decree of Aristoteles
Decree of Dionysopolis
Decree of Philippi
Decree of Philippi, 242 BCE
Decree of Themistocles
Dedication of Nikandre
Defeat of Leonnatus by Antiphilus
Deferent and epicycle
Definitions (Plato)
Deianira
Deidamia (Greek myth)
Deidamia of Scyros
Deileon
Deimachus
Deimachus (mythology)
Deimos (deity)
Deinomenes
Deiochus
Deioneus
Deiopites
Deiphobus
Deiphontes
Deipneus
Deipnon
Deipylus
Deipyrus
Deiradiotae
Delian League
Delium
Delos
Delphi
Delphi Inscription
Delphic Hymns
Delphic maxims
Delphic Sibyl
Delphinia
Delphinion
Delphus
Delphyne
Delta (letter)
Demaenetus
Demaratus
Demarchos
Deme
Demeter
Demeter of Knidos
Demetrius (somatophylax)
Demetrius (son of Althaemenes)
Demetrius (son of Pythonax)
Demetrius of Alopece
Demetrius of Amphipolis
Demetrius of Magnesia
Demetrius of Phalerum
Demetrius of Scepsis
Demetrius the Cynic
Demetrius Lacon
Demiurge
Demiurge (magistrate)
Democedes
Demochares
Democles
Democoon
Democrates
Democrates of Aphidna
Democritus
Demodocus
Demoleon
Demoleon (mythology)
Demoleus
Demonax
Demonax (lawmaker)
Demonice of Aetolia
Demonicus of Pella
Demophilus of Thespiae
Demophon (seer)
Demophon of Athens
Demophon of Eleusis
Demoptolemus
Demosthenes
Demosthenes (general)
Demosthenes Philalethes
Depictions of the sacrifice of Iphigenia
Dercylidas
Dereium
Derveni Krater
Derveni papyrus
Descent of Perithous
Desmon of Corinth
Despinis Head
Despoina
Deucalion
Deucalion (mythology)
Deucalion (son of Minos)
Deuteragonist
Dexagoridas
Dexamenus
Dexippus
Dexippus of Cos
Dexithea (mythology)
Dia
Diacria
Diacria (Euboea)
Diades of Pella
Diadochi
Diadumenos
Diaeus
Diagoras of Melos
Diagoras of Rhodes
Diairesis
Diana of Gabii
Dianoia
Diaphorus (mythology)
Dias (mythology)
Diateichisma
Diaulos (architecture)
Diaulos (instrument)
Diaulos (running race)
Dicaearchus
Dicaearchus of Aetolia
Dicaeus
Dictys
Didascaly
Didyma
Didymus Chalcenterus
Didymus the Musician
Dienekes
Dieuches
Digamma
Diipetes
Dikastes
Dike
Diliad
Dimachae
Dimoetes
Dinarchus
Dinocrates
Dinon
Dinos
Dinos of the Gorgon Painter
Dinos Painter
Dinostratus
Dio of Alexandria
Diocles
Diocles (mathematician)
Diocles of Carystus
Diocles of Cnidus
Diocles of Corinth
Diocles of Magnesia
Diocles of Syracuse
Diocorystes
Diodorus Cronus
Diodorus of Adramyttium
Diodorus of Alexandria
Diodorus of Aspendus
Diodorus of Tyre
Diodorus Siculus
Diodotus (son of Eucrates)
Diodotus the Stoic
Dioedas
Diogenes
Diogenes and Alexander
Diogenes Laërtius
Diogenes of Apollonia
Diogenes of Athens (sculptor)
Diogenes of Athens (tragedian)
Diogenes of Babylon
Diogenes of Oenoanda
Diogenes of Phoenicia
Diogenes of Seleucia
Diogenes of Tarsus
Diogenianus
Diolkos
Diomea (Attica)
Diomede
Diomedes
Diomedes of Thrace
Dion
Dion, Pieria
Dione (mythology)
Dione (Titaness)
Dionysia
Dionysiakos
Dionysian Mysteries
Dionysius (ambassador)
Dionysius (Athenian commander)
Dionysius Chalcus
Dionysius of Byzantium
Dionysius of Chalcedon
Dionysius of Cyrene
Dionysius of Halicarnassus
Dionysius of Lamptrai
Dionysius of Miletus
Dionysius the Phocaean
Dionysius the Renegade
Dionysius Thrax
Dionysodorus
Dionysodorus (sophist)
Dionysus
Dionysus Aesymnetes
Dionysus Cup
Dionysus in comparative mythology
Diopeithes
Dioplethes
Diores
Dioscorides (Stoic)
Diosphos Painter
Diotima of Mantinea
Diotima's Ladder of Love
Diotimus the Stoic
Dioxippe
Dioxippus
Diphilus
Diphilus (physician)
Diphridas
Diphros
Diple (textual symbol)
Dipoenus and Scyllis
Dipylon Amphora
Dipylon inscription
Dipylon krater
Dipylon Master
Dirce
Disciples of Plotinus
Discobolus
Discophoros
Discourses of Epictetus
Dissoi logoi
Distyle
Distyle in antis
Dithyramb
Dium (Crete)
Dium (Euboea)
Dius
Diyllus
Dochmiac
Dodona
Dodona (Thessaly)
Dodone
Dodonian Zeus
Dodwell Painter
Dogmatic school
Dokimasia
Dokimasia Painter
Dolichos (race)
Doliones
Dolius
Dolon
Dolops
Dolos
Dorian invasion
Dorians
Doric Greek
Doric Hexapolis
Doric order
Doric Tetrapolis
Dorieus
Doris (Greece)
Doris (mythology)
Dorium
Dorus (Deucalionid)
Dorus (mythology)
Dory
Doryclus
Doryphoros
Doryssus
Douris (vase painter)
Dracanum
Dracius
Draco (lawgiver)
Draco (physician)
Draconian constitution
Dragon's teeth (mythology)
Dragons in Greek mythology
Drakaina
Dreros inscription
Droop cup
Drosera (naiad)
Dryad
Dryas (mythology)
Dryas of Calydon
Dryope
Dryope (daughter of Dryops)
Dryopes
Dryops (mythology)
Dryops of Oeta
Duel Painter
Dulichium
Duris of Samos
Dymas
Dyme
Dynamene
Dyscrasia
Dyskolos
Dysnomia (deity)
Dyssebeia
E
Eagle of Zeus
Earliest Greek democracies
Early life of Plato
Earth (classical element)
Earth and water
East Greek Bird Bowl
East Greek vase painting
Eastern pediment of the Temple of Zeus at Olympia
Ecbasus
Ecclesia
Ecdysia
Echea
Echecrates
Echecrates of Phlius
Echecrates of Thessaly
Echecratides
Echedemos
Echelidae
Echembrotus
Echemmon
Echemus
Echephron
Echestratus
Echetlus
Echetus
Echidna
Echion
Echion (painter)
Echius
Echo (mythology)
Echo Stoa
Echthroi
Eclectic school
Economics (Aristotle)
Economy of ancient Greece
Ecphantus the Pythagorean
Ectenes
Ecumene
Edinburgh Painter
Edonis
Education in ancient Greece
Eetion
Eetion (mythology)
Ego eimi
Eidolon
Eidothea
Eikas
Eikasia
Eileithyia
Eilesium
Eilissus
Eion
Eion (Argolis)
Eioneus
Eirene (artist)
Eirene (goddess)
Eiresidae
Eiresione
Eiron
Eitea (Acamantis)
Eitea (Antiochis)
Ekdromoi
Ekecheiria
Ekklesiasterion
Ekkyklema
Ekphrasis
Ekpyrosis
Elaea (Epirus)
Elaeus (Aetolia)
Elaeus (Attica)
Elaeus (Epirus)
Elaphebolia
Elasioi
Elasippus (mythology)
Elateia
Elateia (Epirus)
Elatus
Elbows Out
Eleatics
Electra
Electra (Euripides play)
Electra (Greek mythology)
Electra (Oceanid)
Electra (Pleiad)
Electra (Sophocles play)
Electryon
Electryone
Eleionomae
Eleius
Eleon
Eleos
Elephantis
Elephenor
Eleusinian Mysteries
Eleusinian Mysteries Hydria
Eleusinion
Eleusis
Eleusis (Boeotia)
Eleusis (mythology)
Eleusis Amphora
Eleuther
Eleutheria
Eleutherna Bridge
Elgin Amphora
Elgin Marbles
Elimiotis
Elis (city)
Ellopia
Elone
Elpenor
Elpinice
Elpinice (daughter of Herodes Atticus)
Elpis
Elymus
Elyrus
Elysium
Emathion
Empedocles
Empiric school
Empusa
Enalus
Enarephoros
Enarete
Enceladus (giant)
Enchiridion of Epictetus
Endeïs
Endius
Endoeus
Endoxa
Endymion
Enipeus
Enispe
Ennomus
Enodia
Enope (Greece)
Enorches
Entochus
Enyalius
Enyeus
Enyo
Eordaea
Eos
Epacria
Epactaeus
Epaminondas
Epaphus
Epeigeus
Epeius
Epeius of Phocis
Eperatus
Ephebic oath
Ephectics
Ephesia Grammata
Ephesian school
Ephesus
Ephialtes
Ephialtes of Trachis
Ephor
Ephorus
Ephyra (Aetolia)
Ephyra (Elis)
Epiales
Epic Cycle
Epicaste
Epicephisia
Epicharmus of Kos
Epicleas (admiral)
Epicles
Epicrates of Ambracia
Epicrates of Athens
Epictetus
Epicurea
Epicureanism
Epicurus
Epidamnos
Epidaurus
Epidaurus (mythology)
Epidaurus Limera
Epideictic
Epidoseis
Epidotes
Epieicidae
Epigamia
Epigenes, son of Antiphon
Epigenes of Athens
Epigenes of Byzantium
Epigenes of Sicyon
Epigeus
Epigoni
Epigoni (epic)
Epigoni (play)
Epigonion
Epigonus
Epigonus of Ambracia
Epigram of Amazaspos
Epigrams (Homer)
Epigrams (Plato)
Epihipparch
Epikleros
Epiktetos
Epilaus
Epilogism
Epimachus of Athens
Epimeliad
Epimenides
Epimetheus
Epinetron
Epinicus
Epinikion
Epinomis
Epione
Epiphanius of Petra
Epiphany
Epiphron
Epipole of Carystus
Epirote League
Epirus (ancient state)
Epirus (mythology)
Episkopoi
Episkyros
Epistates
Episteme
Epistles (Plato)
Epistrategos
Epistrophus
Epitadeus
Epitasis
Epitelidas of Laconia
Epithalamium
Epithets in Homer
Epitrepontes
Epizelus
Epoché
Epochus
Epode
Eponymous archon
Epopeus
Epopeus (king of Sicyon)
Epsilon
Equatorial ring
Erasinides
Erasinos
Erasistratus
Erastus of Scepsis
Erato
Erato (dryad)
Erato (mythology)
Eratosthenes (statesman)
Erchia
Erechtheion
Erechtheis
Erechtheus
Eretria Painter
Eretrian school
Ereuthalion
Ereuthus
Erginus
Erginus (Argonaut)
Erginus (king of Minyans)
Ergiscus
Ergoteles (potter)
Ergoteles of Himera
Ergotimos
Eriboea
Eribotes
Ericea
Erichthonius of Athens
Erichthonius of Dardania
Eridanos
Erigyius
Erikepaios
Erineus (city)
Erinna
Erinyes
Eriopis
Eris
Eroeadae (Antiochis)
Eroeadae (Hippothontis)
Eromenos
Eros
Eros (concept)
Erotes
Erotianus
Ersa
Erymanthian Boar
Erymanthus
Erymneus
Erysichthon of Attica
Erysichthon of Thessaly
Erytheia
Erytheia (mythology)
Erythrae (Boeotia)
Erythrae (Locris)
Erythraea (Crete)
Erythraean Sibyl
Erythras
Eryx
Eryxias (dialogue)
Eryximachus
Eta
Eteocles
Eteocles of Orchomenus
Eteoclus
Eteoneus
Eteonicus
Ethos
Euaemon
Euaeon of Lampsacus
Eualcides
Euboea
Euboea (mythology)
Euboean League
Euboean vase painting
Eubuleus
Eubulides
Eubulus (banker)
Eubulus (poet)
Eubulus (statesman)
Eucharides Painter
Eucheirus
Euchenor
Eucleia
Eucleidas
Eucleides
Euclid
Euclid's Elements
Euclid's Optics
Euclid of Megara
Euclidean algorithm
Euclidean geometry
Euctemon
Eudaemon
Eudaimonia
Eudamidas I
Eudamidas II
Eudamidas III
Eudemian Ethics
Eudemus
Eudemus of Rhodes
Eudoros
Eudorus of Alexandria
Eudoxus of Cnidus
Eudoxus of Cyzicus
Euenus
Euergetes
Euhemerus
Euippe
Euippe (daughter of Tyrimmas)
Eukarpia
Eulabeia (mythology)
Eulamius
Eumaeus
Eumedes
Eumelus
Eumelus of Corinth
Eumenes
Eumolpidae
Eumolpus
Eunapius
Euneus
Eunicus
Eunoia
Eunomia
Eunomus (admiral)
Eunomus (king of Sparta)
Eunostus
Eunostus (hero)
Eupalamus
Eupalinos
Eupatridae
Eupeithes
Euphantus
Eupheme
Euphemus
Euphemus (mythology)
Euphiletos Painter
Euphiletos Painter Panathenaic prize amphora
Euphorbos plate
Euphorbus
Euphorbus (physician)
Euphorion
Euphorion (playwright)
Euphorion of Chalcis
Euphraeus
Euphranor
Euphrates the Stoic
Euphron
Euphronios
Euphronios Krater
Euphrosyne
Eupolemeia
Eupolis
Eupompus
Euporia
Eupraxia
Eupyridae
Eureka (word)
Euripides
Euripus (Acarnania)
Euroea
Europa (consort of Zeus)
Europa (Greek myth)
Europs (mythology)
Eurotas
Euryale (Gorgon)
Euryalus
Euryalus (Phaeacian)
Euryanassa
Eurybarus
Eurybates
Eurybatus
Eurybia
Eurybiades
Eurybius
Eurybotas
Eurybus of Athens
Eurycleia of Ithaca
Euryclids
Eurycrates
Eurycratides
Eurycyda
Eurydamas
Eurydice
Eurydice (Greek myth)
Eurydice (daughter of Adrastus)
Eurydice of Argos
Eurydice of Mycenae
Eurydice of Pylos
Eurydice of Thebes
Euryleonis
Eurylochus of Same
Eurymachus
Eurymachus (Odyssey)
Eurymedon (mythology)
Eurymedon (strategos)
Eurymedon of Myrrhinus
Eurymedon the Hierophant
Eurymedon vase
Eurymedousa
Eurymenae (Epirus)
Eurymenes
Eurynome
Eurynome (Oceanid)
Eurynome of Megara
Eurynomos (daemon)
Eurynomus
Euryphon
Eurypon
Eurypyle
Eurypylus
Eurypylus of Cos
Eurypylus of Cyrene
Eurypylus of Thessaly
Eurypylus (son of Telephus)
Eurysaces
Eurysthenes
Eurysthenes (Pergamon)
Eurystheus
Eurythemista
Eurytion
Eurytion (king of Phthia)
Eurytios Krater
Eurytus
Eurytus (Pythagorean)
Eurytus of Sparta
Eurytus and Cteatus
Eurytus of Oechalia
Eusebeia
Eusorus
Eustathius of Cappadocia
Euterpe
Euthenia
Euthydemus (dialogue)
Euthydemus (Socratic literature)
Euthydemus (tyrant)
Euthydemus of Chios
Euthydikos Kore
Euthymenes
Euthymia (philosophy)
Euthymides
Euthyna
Euthynteria
Euthyphro
Euthyphro (prophet)
Euthyphro dilemma
Eutocius of Ascalon
Eutrapelia
Eutresis (Boeotia)
Eutresis culture
Eutychides
Eutychius Proclus
Euxantius
Euxippe
Evadne
Evaechme
Evagoras
Evander (philosopher)
Evenius
Evenor
Evenus
Evenus of Aetolia
Ever to Excel
Everes
Evippus
Ex voto of the Arcadians
Ex voto of the Lacedaemonians
Exeligmos
Exekias
Exomis
Expansion of Macedonia under Philip II
Eye-cup
F
Family tree of the Greek gods
Fiction set in ancient Greece
Fifth-century Athens
Fire (classical element)
First Alcibiades
First Ancient Theatre, Larissa
First Battle of Lamia
First declension
First Macedonian War
First Messenian War
First Peloponnesian War
First Persian invasion of Greece
First Philippic
First Sacred War
Fish plate
Food and diet in ancient medicine
For Phormion
Forced suicide
Foreign War
Form of the Good
Fortunate Isles
Foundry Painter
Four causes
Fourth Macedonian War
Fourth Philippic
Fragment from the tomb of Nikarete
François Vase
Free will in antiquity
Fronto of Emesa
Funeral games
Funeral oration
Funerary monument for an athlete
G
Gaddi Torso
Gadfly (mythology)
Gaia
Galanthis
Galatea (Greek myth)
Galatea (mythology)
Galaton
Galene
Gamelia
Gamma
Ganymede
Gargareans
Gargettus
Garum
Gastraphetes
Gate of Athena Archegetis
Gates of horn and ivory
Gegenees
Geison
Gela Painter
Gelanor
Gello
Gelon
Gelon of Laconia
Gelos
Geminus
Gemon
Generation of Animals
Genitive absolute
Genos
Genus (music)
Geocentric model
Geography of the Odyssey
Geometric art
Geomori (Athens)
Geoponici
Geraestus (Euboea)
Gerarai
Geras
Gerenia
Gerousia
Gertus
Geryon
Geryoneis
Getty kouros
Giants
Gigantomachy by the Suessula Painter
Gigonus
Gitanae
Gla
Glaphyrae
Glauce
Glaucetas
Glaucia
Glaucias (physician, 3rd century BC)
Glaucias (physician, 4th century BC)
Glaucias of Aegina
Glaucias of Athens
Glaucias of Macedon
Glaucippe
Glaucon
Glaucus
Glaucus (mythology)
Glaucus of Carystus
Glaucus of Chios
Glaucus of Corinth
Glaucus of Crete
Glaucus of Lycia
Glisas
Glossary of Stoicism terms
Glycon of Croton
Gnathia vases
Gnesippus
Gnomic poetry
Gnosis (artist)
Golden Fleece
Golden mean (philosophy)
Golden Verses
Golgos
Goltyr Painter
Gongylos
Gonoessa
Gordion cup
Gordius of Cappadocia
Gorge
Gorgias
Gorgias (dialogue)
Gorgidas
Gorgo, Queen of Sparta
Gorgon
Gorgon Painter
Gorgoneion
Gorgoneion Group
Gorgopas (2nd century BC)
Gorgopas (4th century BC)
Gorgophone
Gorgophone (Perseid)
Gorgophonus
Gorgus
Gorgythion
Gortyn
Gortyn code
Gortyna
Graea
Graeae
Graecians
Graecus
Graphe paranomon
Grave monument from Kallithea
Grave relief of Thraseas and Euandria
Grave stele (NAMA 7901)
Grave Stele of Dexileos
Grave Stele of Hegeso
Great Rhetra
Greco-Bactrian Kingdom
Greco-Persian Wars
Greco-Roman hairstyle
Greco-Roman relations in classical antiquity
Greece in the 5th century BC
Greece in the Roman era
Greek alphabet
Greek and Roman artillery
Greek baths
Greek Baths in ancient Olympia
Greek baths of Gela
Greek chorus
Greek city-state patron gods
Greek colonisation
Greek Dark Ages
Greek democracy
Greek diacritics
Greek divination
Greek drachma
Greek gardens
Greek hero cult
Greek Heroic Age
Greek inscriptions
Greek letters used in mathematics, science, and engineering
Greek love
Greek lyric
Greek mathematics
Greek mythology
Greek mythology in popular culture
Greek mythology in western art and literature
Greek numerals
Greek orthography
Greek primordial deities
Greek riddles
Greek sea gods
Greek terracotta figurines
Greek Theatre of Syracuse
Greek tragedy
Greek underworld
Greek words for love
Greek wrestling
Greeks in pre-Roman Gaul
Griffin Warrior Tomb
Group E (vase painting)
Group of Rhodes 12264
Gryllus, son of Xenophon
Gryton
Guneus
Gutta
Gylippus
Gylis
Gylon
Gymnasiarch
Gymnasium
Gymnasium at Delphi
Gymnitae
Gymnopaedia
Gynaeconomi
Gynaecothoenas
Gyrton (Thessaly)
H
Hades
Hadra vase
Hadrian's Library
Haemon
Haemon (mythology)
Haemus
Hagius
Hagnias
Hagnon of Tarsus
Hagnon, son of Nikias
Haimon Painter
Halae Aexonides
Halae Araphenides
Halaesus
Halasarna
Halcyon (dialogue)
Haliacmon (mythology)
Haliartus
Halie
Halirrhothius
Halitherses
Halizones
Haloa
Halteres
Hamadryad
Hamartia
Hamaxantia
Harma (Attica)
Harma (Boeotia)
Harmodius and Aristogeiton
Harmodius and Aristogeiton (sculpture)
Harmonia
Harmost
Harpalion
Harpalus
Harpalus (astronomer)
Harpalus (engineer)
Harpalus (son of Polemaeus)
Harpalyce
Harpalykos
Harpe
Harpina
Harpina (city)
Harpleia
Harpocration
Harpy
Harpy Tomb
Harrow Painter
Hasselmann Painter
Head of a Philosopher
Hebe (mythology)
Hecale (Attica)
Hecale (poem)
Hecamede
Hecataeus of Miletus
Hecate
Hecaterus
Hecato of Rhodes
Hecatomb
Hecatompedum
Hecatoncheires
Hector
Hecuba
Hecuba (play)
Hedea of Tralles
Hedone
Hedylogos
Hegemon of Thasos
Hegemone
Hegesandridas
Hegesias of Cyrene
Hegesias of Sinope
Hegesinus of Pergamon
Hegesippus (orator)
Hegesippus of Halicarnassus
Hegesistratus
Hegetoria
Hegetorides
Hegias
Hegias of Athens
Heidelberg Painter
Heimarmene
Hekatompedon temple
Helen of Troy
Helenus
Helepolis
Heleus
Heliadae
Heliades
Heliaia
Heliastic oath
Helicaon
Helice (mythology)
Helike
Halimus
Heliocentrism
Heliodorus (ambassador)
Heliodorus (metrist)
Heliodorus (surgeon)
Heliodorus of Athens
Heliodorus of Emesa
Heliodorus of Larissa
Helios
Helladic chronology
Hellanicus (mythology)
Hellanicus of Lesbos
Hellanodikai
Hellas
Helle
Hellen
Hellenic historiography
Hellenica
Hellenistic armies
Hellenistic art
Hellenistic glass
Hellenistic Greece
Hellenistic influence on Indian art
Hellenistic period
Hellenistic philosophy
Hellenistic portraiture
Hellenistic Prince
Hellenistic religion
Hellenistic theatre of Dion
Hellenization
Hellenotamiae
Hellespontine Sibyl
Hellespontophylakes
Hellotia
Helmetheus
Helos
Helos (Elis)
Helots
Hemera
Hemithea (mythology)
Hemithorakion
Henioche
Hepatizon
Hephaestia
Hephaestio
Hephaestion (grammarian)
Hephaestus
Hera
Hera Alexandros
Hera Ammonia
Heraclea (Acarnania)
Heraclea in Trachis
Heraclean Tablets
Heracleia (festival)
Heracleidae
Heracleides (409 BC)
Heracleides (415 BC)
Heracleides (admiral)
Heracleides (ambassador)
Heracleides of Byzantium
Heracleides of Cyme
Heracleides of Ephesus
Heracleides of Gyrton
Heracleides of Maroneia
Heracleides of Mylasa
Heracleides of Tarentum
Heracleides the Phocian
Heracles
Heracles Papyrus
Heracles Patroos
Heraclides (painter)
Heraclides (physician)
Heraclides of Aenus
Heraclides of Erythrae
Heraclides of Smyrna
Heraclides of Tarentum
Heraclides of Tarsus
Heraclides Ponticus
Heraclitus
Heraclitus (commentator)
Heraclitus the Paradoxographer
Heraclius the Cynic
Heraea (Arcadia)
Heraean Games
Heraeum (Thrace)
Heraion of Argos
Heraion of Perachora
Heraion of Samos
Heraklas
Herakles (Euripides)
Herald and Trumpet contest
Hercules and the lion of Nemea (Louvre Museum, L 31 MN B909)
Hercules and the Wagoner
Hercules at the crossroads
Herillus
Herm (sculpture)
Hermaea
Hermagoras of Amphipolis
Hermaphroditus
Hermarchus
Hermeneumata
Hermes
Hermes and the Infant Dionysus
Hermes Logios type
Hermes Ludovisi
Hermes Trismegistus
Hermias of Atarneus
Hermione
Hermione (Argolis)
Hermippe
Hermippus
Hermippus of Berytus
Hermippus of Smyrna
Hermocrates
Hermocrates (dialogue)
Hermodike I
Hermodike II
Hermodorus
Hermodorus of Salamis
Hermogenes (philosopher)
Hermogenes (potter)
Hermogenes of Priene
Hermonax
Hermotimus of Clazomenae
Hermotimus of Pedasa
Hermus
Hermus (Attica)
Hero
Hero and Leander
Herodicus
Herodorus
Herodorus of Megara
Herodotus
Herodotus (physician)
Heroic nudity
Heroön
Heroon at Nemea
Herophilos
Herophon
Herostratus
Herpyllis
Herse
Herse of Athens
Hesiod
Hesione
Hesione (mythology)
Hesione (Oceanid)
Hesperia
Hesperides
Hesperis
Hesperus
Hessus (Locris)
Hestia
Hestiaea (Attica)
Hestiaeus of Perinthus
Hesychius of Alexandria
Hetaira
Hicesius
Hicetaon
Hicetas
Hicetas of Leontini
Hiera Orgas
Hierapytna
Hierax (Spartan admiral)
Hiereiai
Hiero (Xenophon)
Hierocles (Stoic)
Hieromenia
Hieromneme
Hieronymus of Cardia
Hieronymus of Rhodes
Hierophant
Hierophylakes
Hieropoios
Hieros gamos
High Priestess of Athena Polias
Hilaeira
Himation
Himeraeus
Hippalcimus
Hippalectryon
Hippalus
Hipparchia of Maroneia
Hipparchic cycle
Hipparchicus
Hipparchus
Hipparchus (cavalry officer)
Hipparchus (dialogue)
Hipparchus (son of Peisistratos)
Hippasus
Hippasus (mythology)
Hippe
Hippeis
Hippias
Hippias (tyrant)
Hippias Major
Hippias Minor
Hippo (philosopher)
Hippobotus
Hippocampus
Hippocleides
Hippocoon
Hippocoon of Sparta
Hippocrates
Hippocrates, father of Peisistratos
Hippocrates (physicians)
Hippocrates of Athens
Hippocrates of Chios
Hippocrates of Gela
Hippocratic bench
Hippocratic Corpus
Hippocratic Oath
Hippocrene
Hippodamas
Hippodamia (mythology)
Hippodamia (daughter of Oenomaus)
Hippodamia (wife of Autonous)
Hippodamia (wife of Pirithous)
Hippodamus of Miletus
Hippodrome
Hippodrome of Olympia
Hippolochus (mythology)
Hippolochus of Troy
Hippolyta
Hippolyte
Hippolytus (Greek myth)
Hippolytus (play)
Hippolytus (son of Theseus)
Hippomedon
Hippomedon (Seven against Thebes)
Hippomedon of Sparta
Hippomenes
Hipponax
Hipponicus III
Hipponous
Hippotae
Hippotes
Hippothoe
Hippothoon
Hippothous
Hippotion
Hippotomadae
Histiaeotis
Histiaeus
Historia Plantarum (Theophrastus book)
Historicity of the Homeric epics
Histories (Herodotus)
History of Animals
History of Athens
History of Crete
History of ethics in Ancient Greece
History of Greece
History of Greek and Hellenistic Sicily
History of Macedonia (ancient kingdom)
History of medicine in Cyprus
History of Sparta
History of the Peloponnesian War
Hodoedocus
Homados
Homer
Homer's Ithaca
Homeric Greek
Homeric Hymns
Homeric prayer
Homeridae
Homerus of Byzantium
Homonoia
Homonoia (mythology)
Homosexuality in ancient Greece
Homosexuality in the militaries of ancient Greece
Hopleus
Hoplite
Hoplite formation in art
Hoplite phalanx
Hoplitodromos
Horae
Horkos
Horme
Horsehead Amphora
Horses Amphora
Horus (athlete)
Hubris
Humorism
Hyacinth
Hyacinthia
Hyacinthus the Lacedaemonian
Hyades
Hyagnis
Hyampolis
Hyamus
Hyas
Hybadae
Hybrias
Hybris
Hybristica
Hydaspes (mythology)
Hydna
Hydraulic telegraph
Hydraulis
Hydraulis of Dion
Hydria
Hydria (Paros)
Hyettus
Hyettus (Boeotia)
Hygieia
Hylates
Hyle
Hyle (Boeotia)
Hyle (Locris)
Hyllus
Hyllus (mythology)
Hylomorphism
Hypaethral
Hypate
Hypenus of Elis
Hyperanthes
Hyperasius
Hyperbatas
Hyperbius
Hyperbolus
Hyperborea
Hypereides
Hypereides (potter)
Hyperenor
Hyperes
Hyperetes
Hyperion
Hyperippe
Hypermnestra
Hypermnestra (mythology)
Hypermnestra of Aetolia
Hyperochus
Hyperphas
Hyperuranion
Hypnos
Hypobibazon Class
Hypodiastole
Hypokeimenon
Hyporchema
Hypothesis (drama)
Hypotrachelium
Hypsenor
Hypseus
Hypsicerus
Hypsicles
Hypsipyle
Hypsipyle (play)
Hyria (Boeotia)
Hyrieus
Hyrmine
Hyrmine (Elis)
Hyrtacina
Hyrtacus
Hysiae (Argolis)
Hysiae (Boeotia)
Hysminai
Hysmon
Hysplex
I
I know that I know nothing
Iacchus
Ialemus
Ialmenus
Ialysos (mythology)
Ialysus
Iambe
Iamenus
Iamidai
Iamus
Ianeira
Iapetus
Iapyx
Iardanus
Iardanus of Lydia
Iasion
Iaso
Iasus
Iasus (king of Argos)
Iatromantis
Ibycus
Icaria (Attica)
Icarius
Icarius (Athenian)
Icarius (Spartan)
Icarius of Hyperesia
Icarus
Iccus of Taranto
Ichnaea
Ichneutae
Ichor
Ichthyas
Ichthyocentaurs
Ictinus
Ida
Ida (mother of Minos)
Ida (nurse of Zeus)
Idaea
Idaean Dactyls
Idalion Tablet
Idas
Idas (mythology)
Idmon
Idmon (Argonaut)
Idomeneus of Crete
Idomeneus of Lampsacus
Idrias
Idyia
Idyma
Iliad
Ilione
Ilioneus
Ilioupersis Painter
Ilium (Epirus)
Iliupersis
Illyrian type helmet
Illyrian weaponry
Illyrius
Ilus
Ilus (son of Dardanus)
Ilus (son of Tros)
Imagines (work by Philostratus)
Imbrex and tegula
Imbrius
Imbrus
Immaradus
Impluvium
Inachorium
Inachus
Inatus
Incomposite interval
Indica (Arrian)
Indica (Ctesias)
Indo-Greek Kingdom
Infinitive
Ino
Interpretation of Dreams (Antiphon)
Invasions of Epidamnus
Io
Iobates
Iodame
Ioke
Iolaidas of Argos
Iolaus
Iolcus
Iole
Ion
Ion (dialogue)
Ion (play)
Ion of Chios
Ionian League
Ionian Revolt
Ionian School (philosophy)
Ionians
Ionic Greek
Ionic order
Ionic vase painting
Ionidae
Ionides
Iophon
Iota
Iota subscript
Iphianassa
Iphianassa (daughter of Agamemnon)
Iphianeira
Iphicles
Iphiclus
Iphicrates
Iphidamas
Iphigenia
Iphigenia in Aulis
Iphigenia in Tauris
Iphimedeia
Iphinoe
Iphis
Iphis (mythology)
Iphistiadae
Iphito
Iphitos
Iphitus of Oechalia
Iphthime
Ipnus
Ipotane
Ira (Messenia)
Iris
Iron Age Greek migrations
Irus
Isaeus
Isagoras
Ischys
Isindus
Ismarus (Thrace)
Ismene
Ismene (Asopid)
Ismenias
Ismenis
Ismenus
Isocrates
Isonoe
Isopoliteia
Istasus
Isthmia (ancient city)
Isthmian Games
Istron
Istrus (mythology)
Isus
Isus (Boeotia)
Isus (Megaris)
Isyllus
Italian School (philosophy)
Italus
Ithaca
Ithaca (polis)
Ithome
Ithome (Thessaly)
Ithoria
Iton (Thessaly)
Itonia
Itonus
Itylus
Ixion
Iynx
J
Jar (pelike) with Odysseus and Elpenor
Jason
Jason of Nysa
Jena Painter
Jocasta
Jockey of Artemision
Judgement of Paris
Judgement of Paris Amphora
Julianus the Egyptian
K
Kabiria Group
Kachrylion
Kai
Kairos
Kakia
Kakodaimonistai
Kalamos
Kale
Kalos inscription
Kalos kagathos
Kamares ware
Kamira
Kanathos
Kandaulos
Kanephoros
Kantharos
Kapheleis
Kappa
Karamuza
Karbasyanda
Kardaki Temple
Karpion
Karpos
Kasolaba
Kassel cup
Kasta Tomb
Katabasis
Katakekaumene
Katalepsis
Kathekon
Katolophyromai
Kaunos
Kausia
Kerameikos
Kerameikos steles
Kerch style
Keres
Kernos
Kerykes
Kestros (weapon)
Khalkotauroi
Kheriga
Khôra
Kiln
King Teucer
Kladeos
Klazomenai
Klazomenian sarcophagi
Klazomenian vase painting
Kleino (musician)
Kleitias
Kleitomachos (athlete)
Kleobis and Biton
Kleophon Painter
Kleophrades Painter
Kleophrades Painter Panathenaic prize amphora
Kleos
Kleroterion
Klismos
Knossos
Know thyself
Koalemos
Kobalos
Kodapeis
Koine Greek
Koine Greek grammar
Koinon
Koinon of Macedonians
Kolakretai
Koliorga
Kolonai
Kolonos Hill
Kolpos
Komast cup
Kommos (theatre)
Komos
Konos
Kopis
Kora of Sicyon
Korai of Ionia
Korai of the Acropolis of Athens
Kore (sculpture)
Kore of Lyons
Korkyra (mythology)
Korkyra (polis)
Korophaioi
Korybantes
Kottabos
Kotthybos
Kouloura
Kourion
Kouroi of Flerio
Kouros
Kouros of Apollonas
Kouros of Samos
Kouros of Tenea
Kourotrophos
Krater
Kratos
Kresilas
Kriophoros
Kritios
Kritios Boy
Krocylea
Kroisos Kouros
Krokinas of Larissa
Kronia
Krotos
KX Painter
KY Painter
Kyathos
Kybernis
Kydoimos
Kydonia
Kykeon
Kyklos
Kylix
Kylix depicting athletic combats by Onesimos
Kylix depicting Pentathletes
Kymopoleia
Kynodesme
Kyrbas
Kyrbissos
Kyrenia ship
Kyrios
L
Labda
Labdacus
Labotas
Lacedaemon
Lacedaemonius
Lachares
Laches (dialogue)
Laches (general)
Lachesis
Laciadae
Laconian vase painting
Laconic phrase
Laconicus
Laconophilia
Lacritus
Lacydes of Cyrene
Ladon (mythology)
Ladromus of Laconia
Laelaps
Laertes
Laestrygon
Laestrygonians
Lagoras
Laïs (physician)
Laius
Lakaina
Lamachus
Lamas (mythology)
Lambda
Lamedon (mythology)
Lamian War
Lamis
Lamiskos
Lamon (Crete)
Lampad
Lampadephoria
Lampetia
Lamponeia
Lamprocles
Lamprus
Lamprus of Erythrae
Lamptrai
Lampus
Land reform in Athens
Land reform in Sparta
Laocoön
Laocoon (mythology)
Laodamas
Laodamia
Laodamia of Phylace
Laodice
Laodice (daughter of Priam)
Laodicea (Arcadia)
Laodocus
Laomedon
Laomedon of Mytilene
Laonome
Laophonte
Laophoon
Laothoe
Laphria
Lapithaeum
Lapithes (hero)
Lapiths
Larnax
Las (Greece)
Lasaea
Lasion
Lasthenes
Lasus of Hermione
Late Greek
Latmus (town)
Law court (ancient Athens)
Law of abode
Laws (dialogue)
Leaena
Leagros Group
League of Corinth
League of Free Laconians
League of the Islanders
League of the Macedonians
Learchus
Lebedus
Lebes
Lebes Gamikos
Lechaeum
Leda
Ledon
Ledra
Lefkandi
Leimone
Leiocritus
Leitus
Lekhes
Lekythos
Lelantine War
Lelantos
Lelex
Lelex (mythology)
Lelex of Laconia
Lelex of Megara
Lemnian Athena
Lemnos
Lenaia
Lenobius
Lenormant Athena
Lenos (Elis)
Leo (mythology)
Leo of Phlius
Leochares
Leocrates
Leodamas of Thasos
Leodes
Leon (mathematician)
Leon of Salamis
Leon of Sparta
Leonidaion
Leonidas I
Leonidas II
Leonidas (physician)
Leonidas (sculpture)
Leonidas of Rhodes
Leonteus
Leonteus of Lampsacus
Leontiades
Leontiades (Thermopylae)
Leontichus
Leontion
Leontis
Leophron
Leos (mythology)
Leosthenes
Leosthenes (admiral)
Leotychidas
Lepreum
Lepreus
Lepsia
Lepsimandus
Leptines of Syracuse
Lerna
Lernaean Hydra
Lesbonax
Lesbos
Lesche
Lesche of the Knidians
Lesches
Lethe
Leto
Leucadius
Leuce
Leucippe
Leucippus
Leucippus (mythology)
Leucippus of Crete
Leucippus of Messenia
Leucippus of Sicyon
Leucon
Leuconoe (Attica)
Leucopeus
Leucothea
Leucothoe
Leucus
Leukaspides
Libanius
Libon
Libya
Libyan Sibyl
Lichas
Lichas (Spartan)
Licymnius
Life of Homer (Pseudo-Herodotus)
Lilaea
Lilaea (ancient city)
Limenius
Limnad
Limnae (Peloponnesus)
Limnae (Sparta)
Limnaea (Acarnania)
Limnaeus
Limnio
Limos
Lindos Chronicle
Lindus (mythology)
Linear A
Linear B
Linothorax
Linus (Argive)
Linus (mythology)
Linus of Thrace
Lion Gate
Lion of Amphipolis
Lion of Cithaeron
Lion Painter
Lip Cup
Lipara (mythology)
Liriope
Litae
Literary topos
Lithobolos
Litra
Little Iliad
Little-Master cup
Little Masters
Liturgy
Lityerses
Lochagos
Lochos
Locrian Greek
Locrians
Locris
Locrus
Logographer (history)
Logographer (legal)
Logos
Long Wall (Thracian Chersonese)
Long Walls
Longus
Lopadotemachoselachogaleokranioleipsanodrimhypotrimmatosilphiokarabomelitokatakechymenokichlepikossyphophattoperisteralektryonoptekephalliokigklopeleiolagoiosiraiobaphetraganopterygon
Lophis
Lotus-eaters
Lotus tree
Loutrophoros
Lower Ancyle
Lower Paeania
Lower Pergase
Lower Potamus
Lucanian vase painting
Lucian
Ludovisi Throne
Lupercus of Berytus
Lusia (Attica)
Lycaethus
Lycaon
Lycaon (king of Arcadia)
Lycaon (son of Priam)
Lycaste
Lycastus
Lycastus (Crete)
Lyceum
Lyceus
Lyciscus of Messenia
Lycius (sculptor)
Lyco of Iasos
Lyco of Troas
Lycomedes
Lycomedes (mythology)
Lycomedes of Mantinea
Lycomedes of Thebes
Lycophron
Lycophron (mythology)
Lycophron (sophist)
Lycophron of Corinth
Lycoreia
Lycorus
Lyctus
Lycurgeia
Lycurgus
Lycurgus (king of Sparta)
Lycurgus of Arcadia
Lycurgus of Athens
Lycurgus (of Nemea)
Lycurgus of Sparta
Lycurgus of Thrace
Lycus
Lycus (Thebes)
Lycus of Euboea
Lycus of Fortunate Isles
Lycus of Libya
Lydiadas of Megalopolis
Lydion
Lydos
Lydus
Lygdamis of Naxos
Lykaia
Lynceus
Lynceus of Argos
Lynceus of Messenia
Lynceus of Samos
Lyncus
Lyrceia
Lyrcus
Lyrcus (son of Abas)
Lyre
Lyrnessus
Lysander
Lysianassa
Lysias
Lysicles (4th century BC)
Lysicles (5th century BC)
Lysidice
Lysimache
Lysimachia (Aetolia)
Lysimachus
Lysinomus
Lysippe
Lysippides
Lysippides Painter
Lysippos
Lysis (dialogue)
Lysis of Taras
Lysistrata
Lysistratus
Lysithea (mythology)
Lysithous
Lyssa
Lysus
Lyttian War
M
Macar
Macareus (son of Aeolus)
Macareus of Rhodes
Macaria
Macedonia
Macedonian phalanx
Macelo (mythology)
Machai
Machanidas
Machaon
Machatas (sculptor)
Machatas of Aetolia
Machatas of Europos
Macistus
Macmillan aryballos
Madrid Painter
Maeandropolis
Maenad
Maenalus
Maeon
Maera (hound)
Magic in the Greco-Roman world
Magna Graecia
Magna Moralia
Magnes (mythology)
Magnes (comic poet)
Magnes (son of Aeolus)
Magnes (son of Argos)
Magnetes
Maia
Makedon
Makhaira
Makra Stoa
Makron
Malians
Mamercus of Catane
Mamertines
Mandrocleides
Mandrocles
Manes of Lydia
Maniae
Manika
Mannerists (Greek vase painting)
Mantias
Mantineia
Mantineia Base
Mantius
Manto
Manto (daughter of Tiresias)
Manumission inscriptions at Delphi
Marathon
Marathon (mythology)
Marathon Boy
Marathon tumuli
Mardonius (general)
Mares of Diomedes
Margites
Margos
Mariandynus
Marianus Scholasticus
Marinus of Neapolis
Marion, Cyprus
Maron
Maroneia (Attica)
Marpessa
Marpessa of Aetolia
Marpsius
Marriage in ancient Greece
Marsyas
Marsyas Painter
Marvels (Theopompus)
Maschalismos
Mases
Mask of Agamemnon
Mastos
Mastos Painter
Material monism
Mathematical text fragment (Berlin, Staatliche Museen, pap. 11529)
Matton
Maximus of Tyre
Meander
Measurement of a Circle
Mechane
Mechanics (Aristotle)
Mecisteus
Meda
Medea
Medea (play)
Medeon (Acarnania)
Medeon (Boeotia)
Medesicaste
Medici Vase
Medimnos
Medism
Medius (physician)
Medius of Larissa
Medon
Medus
Medusa
Megacles
Megacles of Epirus
Megaera
Megala Erga
Megalai Ehoiai
Megalopolis
Megalostrata (poet)
Megapenthes
Megapenthes (son of Menelaus)
Megapenthes (son of Proetus)
Megara
Megareus of Onchestus
Megareus of Thebes
Megarian decree
Megarian school
Megarian Treasury (Delphi)
Megarian Treasury (Olympia)
Megaris
Megaron
Megasthenes
Meges
Meges of Sidon
Megistias
Meidias
Meidias Painter
Meilichios
Melaenae
Melaina
Melampodia
Melampus
Melaneus (mythology)
Melaneus of Oechalia
Melanippe
Melanippe (daughter of Aeolus)
Melanippides
Melanippus
Melanthius
Melanthius (Odyssey)
Melantho
Melanthus
Melas
Meleager
Meleager of Gadara
Meleager of Skopas
Meleager Painter
Melera
Melete
Meletus
Melia (consort of Apollo)
Melia (consort of Inachus)
Meliae
Melian pithamphora
Melian relief
Meliboea
Melicertes
Melinoë
Melisseus
Melissus of Samos
Melite (Attica)
Melite (heroine)
Melite (mythology)
Melite (naiad)
Melpeia
Melpomene
Members of the Delian League
Memnon of Rhodes
Memorabilia (Xenophon)
Memphis
Memphis (daughter of Epaphus)
Menaechmus
Menander
Menander of Ephesus
Mene
Menecrates (sculptor)
Menecrates of Syracuse
Menecrates of Tralles
Menedemus
Menedemus of Pyrrha
Menedemus the Cynic
Menelaion
Menelaus
Menelaus (son of Lagus)
Menelaus of Alexandria
Menelaus of Pelagonia
Menemachus
Menemachus (mythology)
Menesaechmus
Menesthes
Menestheus
Menesthius
Menexenus
Menexenus (dialogue)
Menippe
Menippe and Metioche
Menippean satire
Menippus
Menippus (mythology)
Meniskos
Meno
Meno (general)
Meno's slave
Menodotus of Nicomedia
Menoeceus
Menoetius
Menon
Menon I of Pharsalus
Mental illness in ancient Greece
Mentes (King of the Cicones)
Mentes (King of the Taphians)
Metonic cycle
Mentor (Greek myth)
Mentor (Odyssey)
Mentor of Rhodes
Meridarch
Meriones
Mermerus
Mermerus and Pheres
Merope (Greek myth)
Merope (daughter of Oenopion)
Merope (Messenia)
Merope (Oedipus)
Merope (Pleiad)
Meropis
Merops
Merrythought cup
Mesangylon
Mesaulius
Mese
Mesoa
Mesogeia
Mesogeia Painter
Messa (Greece)
Messapian pottery
Messene
Messenia (ancient region)
Mesthles
Mestor
Metabasis paradox
Metageitnia
Metagenes
Metakosmia
Metanira
Metapa
Metaphysics (Aristotle)
Metaxy
Metempsychosis
Methe
Methodic school
Methon
Metic
Metion
Meton of Athens
Metope
Metope (mythology)
Metopes of the Parthenon
Metretes
Metrocles
Metrodora
Metrodorus (grammarian)
Metrodorus of Athens
Metrodorus of Chios
Metrodorus of Cos
Metrodorus of Lampsacus (the elder)
Metrodorus of Lampsacus (the younger)
Metrodorus of Scepsis
Metrodorus of Stratonicea
Metrological Relief
Metron of Pydna
Metroon
Metropolis (Doris)
Metropolis (Euboea)
Metropolis (Perrhaebia)
Metropolis (Thessaly)
Miasma (Greek mythology)
Micon
Micythus
Middle Gate (Piraeus)
Middle Platonism
Mideia
Midnight poem
Milesian school
Milesian tale
Miletus
Miletus (mythology)
Military Decree of Amphipolis
Military of Mycenaean Greece
Military tactics in Ancient Greece
Milo of Croton
Miltiades
Miltiades the Elder
Mimas (Aeneid)
Mimas (Giant)
Mimnermus
Mindarus
Mines of Laurion
Minoa
Minos
Minos (dialogue)
Minotaur
Minthe
Minyades
Minyans
Minyas
Minyas (poem)
Misenus
Misthophoria
Mithaecus
Mixing bowl with the exposure of baby Aegisthos
Mixobarbaroi
Mixolydian mode
Mnasagoras
Mnasippus
Mnasitheus of Sicyon
Mnason of Phocis
Mneme
Mnemosyne
Mnesarchus of Athens
Mnesikles
Mnesitheus
Modern understanding of Greek mythology
Moerocles
Molon labe
Molossians
Molpadia
Molurus
Molus (mythology)
Molus (Argive soldier)
Molus of Aetolia
Molus of Crete
Moly (herb)
Momus
Monimus
Monument of Prusias II
Monument of the Eponymous Heroes
Mopsus
Mopsus (Argonaut)
Mopsus (son of Manto)
Mora
Moral intellectualism
Moria
Moros
Morpheus
Mosaics of Delos
Moschion (physician)
Moschion (tragic poet)
Moschophoros
Mothax
Motya Charioteer
Mount Helicon
Mount Ida
Mount Kyllini
Mount Lykaion
Mount Oeta
Mount Olympus
Mount Parthenion
Mount Pentelicus
Mourning Athena
Movable nu
Mu (letter)
Munich Kouros
Munichia
Munichia (festival)
Munichus
Musaeus of Athens
Muscle cuirass
Muses
Museum of Ancient Greek Technology
Music of ancient Greece
Musical system of ancient Greece
Mycenae
Mycenae (Crete)
Mycenaean figurine on tripod
Mycenaean Greece
Mycenaean Greek
Mycenaean palace amphora with octopus (NAMA 6725)
Mycenaean pottery
Mycenaean religion
Mycene
Mydon
Mygdon of Bebryces
Mygdon of Phrygia
Mygdon of Thrace
Mygdonia
Myia
Myiagros
Mykonos
Mykonos vase
Myles
Myma
Mynes (mythology)
Myra
Myrmekes
Myrmex
Myrmidon (hero)
Myrmidon of Athens
Myrmidone
Myrmidons
Myron
Myron of Priene
Myrrhinus
Myrrhinutta
Myrtilus
Myrtis
Myrtis of Anthedon
Myrto
Myrto (mythology)
Myscellus
Mysius
Myson of Chenae
Mysus
Myth of Er
Mythos (Aristotle)
Mytilene
Mytilenean Debate
Mytilenean revolt
N
N Painter
Nabis
Naiad
Naiskos
Name vase
Names of the Greeks
Nana
Napaeae
Napaeus (mythology)
Narcissus
Narycus
Natural slavery
Naubolus
Naucrary
Naucratis Painter
Nauplius
Nausicaa
Nausinous
Nausiphanes
Nausithous
Navarch
Naxia (Caria)
Naxos (Crete)
Naxos (mythology)
Neaera
Neaira
Nealkes
Neandreia
Neanthes of Cyzicus
Neapolis (Chalcidice)
Neapolis (Thrace)
Neapolis (Thracian Chersonese)
Nearchos
Nearchus of Elea
Nearchus of Orchomenus
Nebris
Neck Amphora by Exekias (Berlin F 1720)
Necklace of Harmonia
Necromanteion of Acheron
Neikea
Nekyia
Neleides
Neleus
Neleus of Scepsis
Nemean Baths
Nemean Games
Nemean lion
Nemesis
Nemesis (philosophy)
Neo-Attic
Neobule
Neodamodes
Neon
Neon (Phocis)
Neoplatonism
Neoptolemus
Neopythagoreanism
Neorion
Neorion at Samothrace
Nepenthe
Nephalia
Nephalion
Nephele
Nericus
Neris (Cynuria)
Nerites
Neritum
Nesoi
Nessos of Chios
Nessos Painter
Nessus
Nestor
Nestor's Cup (Mycenae)
Nestor's Cup (mythology)
Nestor's Cup (Pithekoussai)
Nestor of Tarsus
Nete
New York Kouros
Nicaea
Nicaea (Locris)
Nicander
Nicander of Sparta
Nicanor of Cyrene
Nicanor Stigmatias
Nicarchus
Nicarchus (general)
Nicarete of Megara
Nichomachus
Nicias
Nicias of Nicaea
Nicippe
Nicobule
Nicochares
Nicocles (Paphos)
Nicocles (Salamis)
Nicocles of Sicyon
Nicodamus (sculptor)
Nicodorus of Mantineia
Nicomachus
Nicomachus (father of Aristotle)
Nicomachus (son of Aristotle)
Nicomachus of Thebes
Nicomedes (mathematician)
Nicomedes of Sparta
Nicophon
Nicopolis
Nicostratus
Nicostratus (comic poet)
Nicoteles of Cyrene
Nike
Nike of Callimachus
Nike of Paros
Nikosthenes
Nikosthenic amphora
Nikoxenos Painter
Nilus (mythology)
Nine Lyric Poets
Ninnion Tablet
Niobe
Niobe (Argive)
Niobid Painter
Niobids
Nireus
Nireus (mythology)
Nisa (Boeotia)
Nisa (Megaris)
Nisaea
Nisos
Nolan amphora
Nomia
Nomos (music)
Nomos (mythology)
Nonacris
Nonnus
Norakos
Northampton Group
Nostoi
Nostos
Notion
Noumenia
Nous
Nu (letter)
Nudium
Numenius of Apamea
Numenius of Heraclea
Numisianus
Nutrition in Classical Antiquity
Nycteïs
Nycteus
Nyctimene
Nyctimus
Nymph
Nymphaeum (Olympia)
Nymphai Hyperboreioi
Nymphis
Nymphodorus (physician)
Nymphodorus of Abdera
Nympholepsy
Nysa
Nysa (Boeotia)
Nysa (Euboea)
Nysiads
Nyx
O
Oa (Attica)
Oaxes
Obelism
Obol
Ocalea
Ocalea (town)
Oceanids
Ochimus
Ocridion
Octaeteris
Ocypete
Ocyrhoe
Odeon (building)
Odeon of Agrippa
Odeon of Athens
Odeon of Herodes Atticus
Odyssean gods
Odysseus
Odysseus Acanthoplex
Odysseus in the Underworld krater
Odyssey
Oea (Attica)
Oeae
Oeagrus
Oebalus
Oebotas of Dyme
Oeceus
Oechalia (Aetolia)
Oechalia (Arcadia)
Oechalia (Euboea)
Oechalia (Messenia)
Oechalia (Thessaly)
Oechalia (Trachis)
Oeconomicus
Oedipodea
Oedipus
Oedipus (Euripides)
Oedipus at Colonus
Oedipus Rex
Oeneon
Oeneus
Oeniadae
Oenochoe
Oenoe
Oenoe (Attica)
Oenoe (Corinthia)
Oenoe (Elis)
Oenoe (Icaria)
Oenoe (Marathon)
Oenomaus
Oenomaus of Gadara
Oenone
Oenopides
Oenopion
Oenotropae
Oenotrus
Oeonus
Oestrus (mythology)
Oesyme
Oetaea
Oetaei
Oetylus
Oeum
Oeum (Locris)
Oeum Cerameicum
Oeum Deceleicum
Ogyges
Ogygia
Oicles
Oikeiôsis
Oikistes
Oikonomos
Oikos
Oileus
Oinochoe by the Shuvalov Painter (Berlin F2414)
Oizys
Olaeis
Old Comedy
Old Greek
Old Man of the Sea
Old Oenia
Old Temple of Athena
Older Parthenon
Olen
Olenus
Olenus (Aetolia)
Olenus (Achaea)
Olethros
Olganos
Olive branch
Olive wreath
Olizon
Oloosson
Olophyxus
Olpae
Olpae (Locris)
Oltos
Olymos
Olympe
Olympia, Greece
Olympia Master
Olympiad
Olympias (trireme)
Olympic Truce
Olympic winners of the Archaic period
Olympiodorus the Elder
Olympiodorus the Younger
Olympus (musician)
Olynthiacs
Olynthus
Omega
Omicron
Omophagia
Omphale
Omphalos
Omphalos of Delphi
On a Wound by Premeditation
On Conoids and Spheroids
On Floating Bodies
On Horsemanship
On Ideas
On Justice
On Sizes and Distances (Hipparchus)
On Spirals
On the Chersonese
On the Crown
On the Equilibrium of Planes
On the False Embassy
On the Halonnesus
On the Heavens
On the Liberty of the Rhodians
On the Malice of Herodotus
On the Murder of Eratosthenes
On the Nature of Man
On the Navy Boards
On the Peace
On the Sacred Disease
On the Sizes and Distances (Aristarchus)
On the Sphere and Cylinder
On Virtue
Onasander
Onatas
Oncae
Onceium
Onchestos
Onchestos (mythology)
Oncius
Oneirocritica
Oneiros
Onesicritus
Onesilus
Onesimos
Onomacles
Onomacritus
Onomarchus
Onomasti komodein
Onomastus of Smyrna
Onthyrius
Ophelestes
Opheltes
Opheltes (mythology)
Opheltius
Ophion
Ophiotaurus
Ophiussa
Ophryneion
Opisthodomos
Opites
Ops (mythology)
Opsis
Opson
Opsophagos
Optative
Opuntian Locris
Opus (Elis)
Opus, Greece
Opus (mythology)
Orchamus
Orchomenus
Orchomenus (Arcadia)
Orchomenus (Boeotia)
Orchomenus (Euboea)
Orchomenus (Thessaly)
Oread
Oreithyia Painter
Oresas
Oresteia
Orestes
Orestes (play)
Orestes Pursued by the Furies
Orestheus
Orestis
Orgia
Oribasius
Oricum
Orientalizing period
Orion
Orithyia
Orithyia of Athens
Orithyia (Amazon)
Ormenium
Ormenus
Orneae
Orneus
Ornithomancy
Ornytion
Ornytus
Orobiae
Oropos (Epirus)
Orpheus
Orpheus and Eurydice
Orphic Egg
Orphism
Orphne
Orseis
Orsilochus
Orsinome
Orsippus
Orthe (Thessaly)
Orthostates
Orthotes
Orthrus
Orus (mythology)
Orya (play)
Oschophoria
Osmida
Ossa cave
Ostomachion
Ostracism
Otanes
Othorus
Othreis
Othryades
Othryoneus
Otrera
Otryne
Otus of Cyllene
Ourea
Ousia
Outis
Overline
Owl of Athena
Oxford Palmette Class
Oxyathres of Heraclea
Oxybeles
Oxygala
Oxylus
Oxylus (son of Haemon)
Oxyntes
Oxythemis of Coroneia
Ozolian Locris
P
Paean
Paean (god)
Paeania
Paeon
Paeon (father of Agastrophus)
Paeon (son of Antilochus)
Paeon (son of Poseidon)
Paeon of Elis
Paeonidae
Paestan vase painting
Paestum
Pagae
Pagondas
Paideia
Painter of Acropolis 606
Painter of Berlin A 34
Painter of Berlin 1686
Painter of Munich 1410
Painter of Nicosia Olpe
Painter of Palermo 489
Painter of the Berlin Dancing Girl
Painter of the Dresden Lekanis
Painter of the Vatican Mourner
Pair of athletes (Delphi)
Paired opposites
Palace of Nestor
Palaechthon
Palaestinus
Palaestra
Palaestra at Delphi
Palaestra at Olympia
Palaestra (mythology)
Palaikastro Kouros
Palamedes
Palici
Palioxis
Palladium
Pallake
Pallantides
Pallas
Pallas (Giant)
Pallas of Arcadia
Pallas (son of Evander)
Pallas (son of Pandion)
Pallas (Titan)
Pallene (Attica)
Pamboeotia
Pambotadae
Pammenes of Thebes
Pammon
Pamphaios
Pamphilus
Pamphylian Greek
Pamphylus
Pan
Pan Painter
Panacea
Panaenus
Panaetius
Panares
Panathenaic amphora
Panathenaic Games
Panathenaic Stadium
Panchaia (island)
Pancrates of Athens
Pandaie
Pandareus
Pandarus
Pandia
Pandia (festival)
Pandion (hero)
Pandion (mythology)
Pandion I
Pandion II
Pandionis
Pandora
Pandora's box
Pandora of Thessaly
Pandorus
Pandosia (Epirus)
Pandroseion
Pandrosus
Pandura
Panegyris
Panhellenic Games
Panhellenion
Panionium
Pankration
Panopeus
Panopeus (mythology)
Panoply
Panormus
Panther Painter
Panthoides
Panthous
Pantites
Pantodapoi
Panyassis
Paphos
Pappus of Alexandria
Papyrus Oxyrhynchus 221
Papyrus Oxyrhynchus 223
Papyrus Oxyrhynchus 224
Papyrus Oxyrhynchus 225
Papyrus Oxyrhynchus 226
Papyrus Oxyrhynchus 413
Parabasis
Paradox of the Court
Paragraphos
Paralia (Attica)
Parallel Lives
Paralus (ship)
Paralus and Xanthippus
Parauaea
Parergon
Parian Chronicle
Parian marble
Paris
Parmenides
Parmenides (dialogue)
Parmeniskos group
Paroikoi
Paros
Parrhasius (painter)
Parrhasius (son of Lycaon)
Partheniae
Parthenius of Nicaea
Parthenon
Parthenon Frieze
Parthenopeus
Participle
Pasicles of Thebes
Pasion
Pasiphaë
Pasithea
Passaron
Patera
Patreus
Patro the Epicurean
Patrocles (geographer)
Patroclus
Patroclus (admiral)
Pausanias (geographer)
Pausanias of Athens
Pausanias of Sicily
Pausanias of Sparta
Pausanias the Regent
Pausanias' description of Delphi
Pausias
Peace (play)
Peace of Antalcidas
Peace of Callias
Peace of Nicias
Peace of Philocrates
Peak sanctuaries
Pedanius Dioscorides
Pedasus
Pederasty in ancient Greece
Pedestal of Agrippa
Pediments of the Parthenon
Pegaeae
Pegasides
Pegasus
Peiraikos
Peirasia
Peirous
Peisander
Peisander (navarch)
Peisander (oligarch)
Peisenor
Peisistratus of Orchomenus
Peisistratus of Pylos
Peitharchia
Peitho
Pelagon
Pelanor
Pelasgia
Pelasgians
Pelasgic wall
Pelasgiotis
Pelasgus
Pelasgus of Argos
Peleces
Peleiades
Peleus
Peliades
Pelias
Peliganes
Pelike
Pelike with actors preparing
Pelinna
Pella
Pella curse tablet
Pellana
Pellene
Pelopia
Pelopia (daughter of Thyestes)
Pelopidas
Pelopion
Peloponnese
Peloponnesian League
Peloponnesian War
Pelops
Pelops (mythology)
Pelops (son of Agamemnon)
Pelops of Sparta
Peltast
Peneleos
Penelope
Penestai
Peneus
Pentathlon
Pentecontaetia
Penteconter
Pentele
Penthesilea
Penthesilea Painter
Pentheus
Penthilus of Mycenae
Penthus
Peplos
Peplos Kore
Pepromene
Peraea (Euboea)
Perdix (mythology)
Peregrinus Proteus
Pergamon
Pergamon Altar
Pergase
Periander
Peribolos
Pericles
Pericles the Younger
Pericles with the Corinthian helmet
Periclymenus
Periclytus
Perictione
Perieres
Perieres of Messenia
Perikeiromene
Perileos
Perimede
Perimedes
Perioeci
Peripatetic school
Peripatos (Akropolis)
Peripeteia
Periphas
Periphas (king of Attica)
Periphetes
Peripteros
Perispomenon
Peristasis
Peristhenes
Perithoedae
Peritrope
Perizoma Group
Peronai
Perrhaebi
Perrhaebia
Perrhidae
Persaeus
Perse
Perseides
Persephone
Persephone Painter
Perserschutt
Perses (brother of Hesiod)
Perses of Colchis
Perses (son of Perseus)
Perses (Titan)
Perseus
Perseus (geometer)
Perseus of Pylos
Persian Rider
Persica (Ctesias)
Petalism
Petasos
Peteon
Petraeus (mythology)
Pezhetairos
Phaeax (architect)
Phaeax (orator)
Phaedo
Phaedo of Elis
Phaedra
Phaedra complex
Phaedrus (Athenian)
Phaedrus the Epicurean
Phaenarete
Phaenias of Eresus
Phaenon
Phaenops
Phaethon
Phaethon of Syria
Phaethusa
Phaistos Disc
Phalaikos
Phalanthus of Tarentum
Phalerum
Phalerus
Phanas of Pellene
Phanes
Phanes (coin issuer)
Phanias (Athenian commander)
Phantasiai
Phantes
Phanto of Phlius
Phanus (mythology)
Phara
Pharae (Boeotia)
Pharae (Crete)
Pharis
Pharmakos
Phayllos of Croton
Phegaea (Aigeis)
Phegaea (Pandionis)
Phegeus
Phegeus of Psophis
Phegus
Pheidippides
Pheidon
Pheidon I
Phelloe
Pheme
Phemius
Phereclus
Pherecrates
Pherecydes of Athens
Pherecydes of Leros
Pherecydes of Syros
Pheres
Pherusa
Phi
Phiale of Megara
Phiale Painter
Phialo
Phidias
Phidippus
Phigalia
Philaemon
Philaenis
Philagrius of Epirus
Philaidae
Philammon
Philander (mythology)
Philemon (poet)
Philia
Philia (Greco-Roman magic)
Philinus of Athens
Philinus of Cos
Philip II of Macedon
Philip of Opus
Philippeioi
Philippeion
Philippi
Philippic
Philippides (comic poet)
Philippus of Chollidae
Philippus of Croton
Philiscus of Aegina
Philiscus of Corcyra
Philistus
Philo of Byzantium
Philo of Larissa
Philo the Dialectician
Philochorus
Philocles
Philoctetes
Philoctetes (Euripides play)
Philoctetes (Sophocles play)
Philodemus (mythology)
Philodice
Philoetius (Odyssey)
Philoi
Philolaus
Philomelus
Philon
Philonides of Laodicea
Philonoe
Philophrosyne
Philopoemen
Philosopher king
Philostratus
Philostratus of Lemnos
Philostratus the Younger
Philotas (Antiochid general)
Philotas (musician)
Philotes
Philotimo
Philoxenus (physician)
Philoxenus of Cythera
Philoxenus of Eretria
Philyllius
Philyra (mythology)
Philyra (Oceanid)
Phineus
Phintias
Phintys
Phlegethon
Phlegra
Phlegyas
Phlias
Phlius
Phlya
Phlyax play
Phobetor
Phobos
Phocais
Phocion
Phocis
Phocus
Phocus of Aegina
Phocus of Boeotia
Phocus of Corinth
Phoebe (Greek myth)
Phoebe of Messenia
Phoebe (Titaness)
Phoebidas
Phoenix (son of Agenor)
Phoenix (son of Amyntor)
Pholoe Painter
Pholus (mythology)
Phonoi
Phora
Phorbas
Phorbas (king of Argos)
Phorbas of Elis
Phorbas of Thessaly
Phorbus (mythology)
Phorcys
Phorcys of Phrygia
Phorminx
Phormio
Phoroneus
Phoronis (Hellanicus)
Phoros
Phradmon
Phrasikleia Kore
Phrasimus
Phrasius
Phratry
Phrearrhii
Phrenius
Phrike
Phrixus
Phronesis
Phrontis
Phrontis (son of Phrixus)
Phrourarch
Phryctoria
Phrygian helmet
Phrygillus
Phrygius
Phryne
Phrynichus (comic poet)
Phrynichus (oligarch)
Phrynichus (tragic poet)
Phrynon
Phrynos
Phrynos Painter
Phthia
Phthia (mythology)
Phthisis
Phthonus
Phye
Phylace (Thessaly)
Phylacides
Phylacus
Phylarch
Phylas
Phyle
Phyle (Attica)
Phyle Campaign
Phyle Cave
Phyleus
Phyllis (river god)
Phylo (Odyssey)
Phylonomus
Physcoa
Physcus
Physis
Phytalus
Pi (letter)
Pierian Spring
Pierus of Emathia
Pileus (hat)
Pimpleia
Pinakion
Pinax
Pindar
Pindar's First Olympian Ode
Pindus
Pioneer Group
Piraeus
Piraeus Apollo
Piraeus Artemis
Piraeus Athena
Piraeus Painter
Pirene (fountain)
Pirene (mythology)
Pirithous
Pisa
Pisidice
Pisistratus
Pisticci Painter
Pistis
Pistoxenos Painter
Pitane (Laconia)
Pithos
Pithus
Pitsa panels
Pittacus of Mytilene
Pittheus
Pitys (mythology)
Placenta cake
Plague of Athens
Plataea
Plato
Plato (comic poet)
Plato's five regimes
Plato's number
Plato's political philosophy
Plato's theory of soul
Plato's unwritten doctrines
Platonic Academy
Platonic epistemology
Platonic idealism
Platonic realism
Platonism
Pleiades
Pleione
Pleistarchus
Pleisthenes
Pleistoanax
Plethron
Pleuron
Pleuron of Aetolia
Plexippus
Plotheia
Plotinus
Plouto (Oceanid)
Ploutonion
Ploutonion at Hierapolis
Pluralist school
Plutarch
Pluto
Plutus
Plutus (play)
Plynteria
Pneuma
Pneuma (Stoic)
Pneumatic school
Pnyx
Podalirius
Podarces
Podes
Poeas
Poena
Poiesis
Polemarch
Polemarchus
Polemic
Polemocrates (physician)
Polemon
Polemon of Athens
Polemos
Poliporthes
Polis
Politarch
Politeia
Polites (friend of Odysseus)
Polites of Troy
Politics (Aristotle)
Polium
Polos
Polos Painter
Polus
Polyaenus of Lampsacus
Polyandrion
Polybius
Polybolos
Polybotes
Polybus (physician)
Polybus of Corinth
Polybus of Sicyon
Polybus (son of Antenor)
Polycaon
Polychares of Messenia
Polycles (155 BC)
Polycles (370 BCE)
Polycrates
Polyctor
Polydamas
Polydamas of Pharsalus
Polydamas of Skotoussa
Polydamna
Polydectes
Polydectes of Sparta
Polydorus
Polydorus of Sparta
Polydorus of Thebes
Polydorus of Troy
Polydorus (son of Astyanax)
Polygnotos (vase painter)
Polygnotus
Polyhymnia
Polyidus
Polyidus (poet)
Polyidus of Corinth
Polyidus of Thessaly
Polymatheia
Polymedes of Argos
Polymedon
Polymele
Polymestor
Polymnestus
Polypheides
Polyphemos Painter
Polyphemos reclining and holding a drinking bowl
Polyphemus
Polyphemus (Argonaut)
Polyphrasmon
Polypoetes
Polystratus
Polystratus the Epicurean
Polyxena
Polyxenidas
Polyxenus
Polyxo
Ponos
Pontic Group
Pontus
Poppy goddess
Porphyrion
Porthaon
Portico of the Aetolians
Porus (Attica)
Porus (mythology)
Poseidon
Poseidon of Melos
Posidippus (comic poet)
Posidippus (epigrammatic poet)
Posidonius
Potamides
Potamo of Mytilene
Potamoi
Potamon
Potamus (Attica)
Potamus Deiradiotes
Potnia
Potnia Theron
Potone
Pottery of ancient Greece
Pous
Prasiae
Pratinas
Praxagoras
Praxagoras of Athens
Praxias and Androsthenes
Praxidice
Praxiphanes
Praxiteles
Praxithea
Pre-Greek substrate
Pre-Socratic philosophy
Precepts of Chiron
Priam
Priam Painter
Priapus
Priasus
Priene
Priene Inscription
Priestess of Hera at Argos
Prince of the Lilies
Princeton Painter
Probalinthus
Probolê
Proboulos
Procles
Proclus
Procne
Procris
Prodicus
Prodromoi
Proetus
Proetus (son of Abas)
Prohairesis
Proioxis
Prokles (Pergamon)
Promachos
Promachus
Promachus of Macedon
Promachus of Pellene
Promanteia
Promedon
Prometheia
Prometheus
Prometheus Bound
Prometheus the Fire-Bringer
Prometheus Unbound (Aeschylus)
Pronax
Pronous
Pronunciation of Ancient Greek in teaching
Prophasis
Propylaea
Propylaea (Acropolis of Athens)
Prorrhesis
Prosodion
Prosody (Greek)
Prospalta (Attica)
Prostitution in ancient Greece
Prostyle
Prosymna
Prosymnus
Protagonist
Protagoras
Protagoras (dialogue)
Protesilaus
Proteus
Proteus (Greek myth)
Prothoenor
Prothous
Protogeneia
Protogenes
Protogeometric style
Protomachus (Athenian general)
Protostates
Providence Painter
Proxenus of Atarneus
Proxenus of Boeotia
Proxeny
Prytaneion
Prytanis (king of Sparta)
Psalacantha
Psamathe (Nereid)
Psaphis
Pseras
Pseudanor
Pseudo-Chalkidian vase painting
Pseudo-Demosthenes
Pseudo-Scymnus
Pseudodipteral
Pseudoperipteros
Psi
Psi and phi type figurine
Psiax
Psiloi
Psilosis
Psophis
Psophis (mythology)
Psychagogy
Psyche
Psychro Cave
Psykter
Ptelea (Attica)
Pterelaus
Pterelaus (son of Lelex)
Pterelaus (son of Taphius)
Pteruges
Ptolemais of Cyrene
Ptolemy (somatophylax)
Ptolemy of Epirus
Ptolemy of Thebes
Ptolichus
Ptoon Painter
Pyanopsia
Pygmalion
Pyknon
Pylades
Pylaemenes
Pylaeus
Pylaon
Pylene
Pylos Combat Agate
Pylus
Pyracmus of Euboea
Pyraechmes
Pyramus and Thisbe
Pyrausta
Pyre of Heracles
Pyrgoteles
Pyrilampes
Pyroeis
Pyrrha of Thessaly
Pyrrhic War
Pyrrhichios
Pyrrhichos
Pyrrho
Pyrrhonism
Pyrrhus of Epirus
Pyrrhus' invasion of the Peloponnese
Pyrrhus of Athens
Pythagoras
Pythagoras (boxer)
Pythagoras (sculptor)
Pythagoras of Laconia
Pythagoras the Spartan
Pythagorean astronomical system
Pythagorean interval
Pythagorean tuning
Pythagoreanism
Pythagoreion
Pytheas
Pythia
Pythian Games
Pythias
Pythion
Pythion of Megara
Pythius of Priene
Python
Python of Aenus
Pytia
Pyxis (vessel)
Q
Quadratrix of Hippias
Quantitative metathesis
Quintus Smyrnaeus
R
Rampin Rider
Rape in Greek mythology
Rape of Persephone
Rarus
Rational animal
Red Figure Pelike with an Actor Dressed as a Bird
Red-figure pottery
Reed Painter
Regina Vasorum
Regions of ancient Greece
Representation of women in Athenian tragedy
Republic (Plato)
Republic (Zeno)
Resting Satyr
Returns from Troy
Revelers Vase
Rhacius
Rhadamanthus
Rhadine and Leontichus
Rhamnus (Crete)
Rhaphanidosis
Rhapso
Rhapsode
Rharian Field
Rhaucus
Rhea
Rhebas (river)
Rhene
Rhesus (play)
Rhesus of Thrace
Rhetoric (Aristotle)
Rhexenor
Rhianus
Rhieia
Rhipe
Rhittenia
Rhium (Messenia)
Rho
Rhodian vase painting
Rhodius
Rhodope
Rhodos
Rhoecus
Rhoeo
Rhoiteion
Rhombus formation
Rhomos
Rhoptron
Rhynchus (Greece)
Rhytium
Rhyton
Riace bronzes
Rider Amphora
Rider Painter
Ring of Gyges
Ripheus
Rival Lovers
Rod of Asclepius
Roman–Greek wars
Romanization of Greek
Rough breathing
Royal formula of Parthian coinage
Rufus of Ephesus
Running in Ancient Greece
Rycroft Painter
S
Sabouroff head
Sacred Band of Thebes
Sacred caves of Crete
Sacred Gate
Sacred Way
Sacrificial tripod
Sacrificial victims of Minotaur
Sage
Salamis
Salamis Stone
Salamis Tablet
Salinon
Salmacis
Salmacis (fountain)
Salmoneus
Salpe
Salpinx
Same (Homer)
Sami
Samia (play)
Samian Sibyl
Samian vase painting
Samian War
Samothrace temple complex
Sampi
San (letter)
Sanctuary of Aphrodite Aphrodisias
Sanctuary of Aphrodite Paphia
Sanctuary of Apollo Maleatas
Sanctuary of Artemis Orthia
Sanctuary of Pandion
Sanctuary of the Mother of Gods and Aphrodite
Sanctuary of Zeus Polieus
Sangarius
Sannyrion
Saon
Sapphic stanza
Sappho
Sappho Painter
Sarissa
Sarissophoroi
Sarpedon
Sarpedon (Trojan War hero)
Satyr
Satyr play
Satyros
Satyrus the Peripatetic
Scamander
Scamander of Boeotia
Scamandrius
Scambonidae
Scaphe
Scaphism
Schedius
Scheria
Schoeneus
Schoenus (Boeotia)
Scholarch
School of Abdera
Science in classical antiquity
Sciritae
Sciritis
Sciron
Scirtonium
Scirtus (mythology)
Scirum
Scirus (Arcadia)
Scolus (Boeotia)
Scopas
Sculpture of a horse (Olympia B 1741)
Scylla
Scymnus
Scytale
Scythian archers
Second Alcibiades
Second Ancient Theatre, Larissa
Second Athenian League
Second Battle of Lamia
Second declension
Second Macedonian War
Second Messenian War
Second Persian invasion of Greece
Second Philippic
Second Sacred War
Second Temple of Hera (Paestum)
Second War of the Diadochi
Seikilos epitaph
Seisachtheia
Selene
Seleucus of Alexandria
Seleucus of Seleucia
Sellasia (Laconia)
Selloi
Semachidae
Semachos
Semele
Semonides of Amorgos
Semystra
Senex amans
Serapion of Alexandria
Serenus of Antinoöpolis
Sestos
Seven against Thebes
Seven Against Thebes
Seven Sages of Greece
Severe style
Shambling Bull Painter
Shield bearer
Shield of Achilles
Shield of Heracles
Ship of State
Shirt of Nessus
Shuvalov Painter
Siana Cup
Sibyl
Sibyl rock
Sibyna
Sibyrtius
Sicilian Expedition
Sicilian vase painting
Sicilian Wars
Sicinnus
Sicyon
Sicyon (mythology)
Sicyonian Treasury
Side (mythology)
Siege of Athens (287 BC)
Siege of Athens and Piraeus (87–86 BC)
Siege of Eretria
Siege of Gythium
Siege of Lamia
Siege of Lilybaeum (278 BC)
Siege of Mantinea
Siege of Medion
Siege of Megalopolis
Siege of Naxos (499 BC)
Siege of Perinthus
Siege of Plataea
Siege of Rhodes (305–304 BC)
Siege of Sparta
Siege of Syracuse (213–212 BC)
Siege of Syracuse (278 BC)
Siege of Syracuse (311–309 BC)
Siege of Syracuse (343 BC)
Siege of Syracuse (397 BC)
Siege of Thebes (292–291 BC)
Siege of Tyre (332 BC)
Sigma
Sikyonioi
Silanion
Silanus of Ambracia
Silenus
Sileraioi
Silloi
Silver age
Silver stater with a turtle
Sima
Simmias (explorer)
Simmias of Rhodes
Simmias of Syracuse
Simmias of Thebes
Simon of Athens
Simon the Shoemaker
Simonides of Ceos
Sinis
Sinon
Sinope
Sintice
Siphnian Treasury
Siren
Siren Painter
Sirras
Sisyphus
Sisyphus (dialogue)
Sisyphus Painter
Sithon
Six's technique
Skene
Skeptouchos
Skeuophoros
Skira
Skolion
Skyphos
Skyros
Skythes
Slavery in ancient Greece
Smikros
Smilis
Smooth breathing
Smyrna
Snub-nose painter
Social War (220–217 BC)
Social War (357–355 BC)
Sock and buskin
Socrates
Socrates of Achaea
Socrates the Younger
Socratic dialogue
Socratic method
Socratic problem
Socratic questioning
Socus
Sokles
Sollium
Solon
Solonian Constitution
Solymus
Somatophylakes
Sons of Aegyptus
Soos (king of Sparta)
Sophilos
Sophist
Sophist (dialogue)
Sophistic works of Antiphon
Sophocles
Sophron
Sophroniscus
Sophrosyne
Sopolis of Macedon
Soranus of Ephesus
Sosicrates
Sosigenes (Stoic)
Sosigenes of Alexandria
Sosigenes the Peripatetic
Sosipolis (god)
Sositheus
Sostratos of Aegina
Sostratos of Chios
Sostratus of Dyme
Sostratus of Pellene
Sostratus of Sicyon
Sosus of Pergamon
Sotades
Sotades of Crete
Sotades Painter
Soter
Soter (daimon)
Soteria (festival)
Soteria
Soteridas of Epidaurus
Sotion
Sotira (physician)
Sounion
Sounion Kouros
Sousta
South Italian ancient Greek pottery
South Stoa I (Athens)
Sparta
Sparta (mythology)
Spartan army
Spartan Constitution
Spartan hegemony
Spartan naval art: Ivory plaque
Spartia temple
Spartiate
Spartocid dynasty
Spartoi
Spercheides
Speusippus
Sphaeria
Sphaerics
Sphaerus
Sphendale
Spherical Earth
Sphettus
Sphodrias
Sphyrelaton
Spintharus of Corinth
Spool-shaped pyxis (NAMA 5225)
Sport in ancient Greek art
Sporus of Nicaea
Spoudaiogeloion
Spurious diphthong
Stadion (unit)
Stadion (running race)
Stadium at Nemea
Stadium at Olympia
Stadium of Delphi
Stag Hunt Mosaic
Stagira (ancient city)
Stamnos
Standing Youth (Munich SL 162)
Staphylus
Staphylus (son of Dionysus)
Staphylus of Naucratis
Stasander
Stasanor
Stasimon
Stasinus
Stasis
Stater
Statue of the priestess Aristonoe
Statue of Zeus at Olympia
Statuette of hoplite (Berlin Antiquities Collection Misc. 7470)
Steiria
Stele of Aristion
Stele of Arniadas
Stentor
Stephane
Sterope
Sterope (Pleiad)
Sterope of Tegea
Stesichorus
Stesicles
Stesimbrotos of Thasos
Stheneboea
Sthenelaidas
Sthenele
Sthenelus
Sthenelus (son of Capaneus)
Sthenelus of Mycenae
Sthennis
Stheno
Stichius (mythology)
Stichomythia
Stilbe
Stilbon
Stilpo
Stirrup jar
Stoa
Stoa Basileios
Stoa of Attalos
Stoa of Eumenes
Stoibadeion
Stoic categories
Stoic logic
Stoic passions
Stoic physics
Stoichedon
Stoicism
Strabo
Strangford Apollo
Strategos
Stratichus
Straticles
Strato of Lampsacus
Stratobates
Stratocles
Stratonice
Stratonice of Pontus
Stratonicus of Athens
Strattis
Strattis of Chios
Strombichides
Strongylion
Strophe
Strophius
Strymon
Studies on Homer and the Homeric Age
Stygne
Stylobate
Stymphalian birds
Stymphalus
Stymphalus (Arcadia)
Stymphalus (son of Elatus)
Styra
Styx
Subjunctive
Sublunary sphere
Substantial form
Successions of Philosophers
Sufax
Suicide of Ajax Vase
Suitors of Helen
Suitors of Penelope
Sukhumi stela
Superposed order
Susarion
Swallow song of Rhodes
Swing Painter
Syagrus (poet)
Sybaris (mythology)
Sybridae
Syceus
Sycophancy
Syennesis of Cyprus
Syleus (mythology)
Syloson (son of Calliteles)
Syme (mythology)
Symmoria
Symplegades
Sympoliteia
Symposium
Symposium (Xenophon)
Synedrion
Synizesis
Synoecism
Synoikia
Syntagmatarchis
Sypalettus
Syrinx
Syrtos
Syssitia
T
Taenarus
Tagmatarchis
Tagus
Tainia
Talares
Talaria
Talaus
Taleides Painter
Talos
Talos (inventor)
Talthybius
Tanagra figurine
Tanais Tablets
Tantalus
Tantalus (mythology)
Tantalus (son of Broteas)
Tantalus (son of Menelaus)
Taphians
Taphius
Taras
Taraxippus
Targitaos
Tarporley Painter
Tarquinia Painter
Tarrha
Tartarus
Tau
Taurus
Taxiarch
Taxiles (Pontic army officer)
Taygete
Techne
Tecmessa
Tectamus
Tegea
Tegea (Crete)
Tegeates
Tegyra
Tegyrios
Teichoscopy
Teithras
Talaemenes
Telauges
Telchines
Teleboans
Telecleia
Telecleides
Telecles
Teleclus
Teledice
Telegonus (son of Odysseus)
Telegony
Telemachus
Telemachy
Telemus
Teleon
Telephassa
Telephus
Telepylos
Teles of Megara
Telesarchus (military commander)
Telesarchus of Samos
Telesilla
Telesphorus (mythology)
Telestas
Telesterion
Telesto
Telete
Teleutias
Tellis of Sicyon
Tellus of Athens
Telmius
Telos
Telycrates
Temenos
Temenus
Temenus (mythology)
Temple C (Selinus)
Temple E (Selinus)
Temple F (Selinus)
Temple of Aphaea
Temple of Aphrodite at Acrocorinth
Temple of Aphrodite, Knidos
Temple of Aphrodite, Kythira
Temple of Aphrodite, Sparta
Temple of Aphrodite Urania
Temple of Apollo (Delphi)
Temple of Apollo (Syracuse)
Temple of Apollo Patroos
Temple of Artemis
Temple of Artemis, Corfu
Temple of Artemis Amarynthia
Temple of Artemis Ephesia
Temple of Asclepius, Epidaurus
Temple of Athena (Paestum)
Temple of Athena (Syracuse)
Temple of Athena Alea
Temple of Athena Lindia
Temple of Athena Nike
Temple of Athena Polias (Priene)
Temple of Concordia, Agrigento
Temple of Demeter Amphictyonis
Temple of Dionysus, Naxos
Temple of Dionysus Lysios
Temple of Hephaestus
Temple of Hera Lacinia
Temple of Hera, Mon Repos
Temple of Hera, Olympia
Temple of Hera Lacinia
Temple of Heracles, Agrigento
Temple of Isthmia
Temple of Olympian Zeus, Agrigento
Temple of Olympian Zeus, Athens
Temple of Poseidon, Sounion
Temple of Poseidon (Tainaron)
Temple of Poseidon (Taranto)
Temple of Sangri
Temple of the Delians
Temple of Zeus, Olympia
Temple of Zeus Kyrios
Ten Thousand
Tenages
Tenerus (son of Apollo)
Tenes
Tereus
Tereus (play)
Term logic
Termerus
Terpander
Terpsichore
Terpsimbrotos
Terpsion
Tetartemorion
Tethys
Tetradrachm
Tetrapharmacum
Tetrapharmakos
Tetrapolis (Attica)
Teucer
Teumessian fox
Teutamides
Teuthis
Teuthras
Teuthras (mythology)
Thalassa
Thalatta! Thalatta!
Thales (painter)
Thales of Miletus
Thales's theorem
Thaletas
Thalia (Grace)
Thalia (Muse)
Thalia (Nereid)
Thalia (nymph)
Thalpius (mythology)
Thalysia
Thamyris
Thanatos
Thanatos Painter
Thargelia
Thasian rebellion
Thasus
Thaumacus (mythology)
Thaumas
The Affecter
Theaetetus (dialogue)
Theaetetus (mathematician)
Theagenes of Megara
Theagenes of Patras
Theagenes of Rhegium
Theagenes of Thasos
Theages
Theandrios
Theano
Theano of Troy
Theano (philosopher)
Thearides
Theatre of ancient Greece
Theatre of Dionysus
Thebaid
Theban Cycle
Theban hegemony
Theban kings in Greek mythology
Theban–Spartan War
Theban Treasury (Delphi)
Thebe
Thebes
Thebes tablets
Theia
Theias
Theiodamas
Theios aner
Thelxinoë
Thelxion
Thelxion of Argos
Thelxion of Sicyon
Themacus
Themis
Themis of Rhamnous
Themiscyra
Themison of Eretria
Themison of Laodicea
Themison of Samos
Themison of Thera
Themista of Lampsacus
Themiste
Themisto
Themistoclean Wall
Themistocles
Theobule
Theoclymenus
Theodas of Laodicea
Theodectes
Theodorus of Cyrene
Theodorus of Samos
Theodorus the Atheist
Theodosius of Alexandria (grammarian)
Theodosius of Bithynia
Theogenes
Theognis
Theognis of Megara
Theogony
Theomachy
Theombrotus
Theon of Alexandria
Theon of Samos
Theon of Smyrna
Theophane
Theophiliscus
Theophilus (geographer)
Theophrastus
Theopompus
Theopompus (comic poet)
Theopompus of Sparta
Theorica
Theoris of Lemnos
Theorodokoi
Theoroi
Theory of forms
Theramenes
Therapeutae of Asclepius
Therapne
Theras
Theriac
Theriaca
Theristai
Therma
Thermopylae
Thermos
Thero
Theron of Acragas
Thersander
Thersander (Epigoni)
Thersilochus
Thersites
Theseia
Theseus
Theseus Painter
Theseus Ring
Thesmophoria
Thesmophoriazusae
Thespia
Thespiae
Thespis
Thespius
Thesprotia (polis)
Thesprotians
Thessalian League
Thessalian vase painting
Thessaliotis
Thessalus
Thessalus
Thestius
Thestor (mythology)
Theta
Thetis
Theudius
Thiasus
Thimbron (fl. 400–391 BC)
Third Macedonian War
Third man argument
Third Philippic
Third Sacred War
Thyreus (mythology)
Thirty Tyrants
Thirty Years' Peace
Thisbe (Boeotia)
Thoas
Thoas (king of Aetolia)
Thoas (king of Corinth)
Thoas (king of Lemnos)
Thoas (king of the Taurians)
Thoas (son of Jason)
Tholos
Tholos of Delphi
Thoön (mythology)
Thootes (mythology)
Thorae
Thorakitai
Thorax (Aetolia)
Thorax of Lacedaemonia
Thorax of Larissa
Thoricus
Thrace (mythology)
Thrasippus
Thrasos
Thrassa
Thrasybulus
Thrasybulus of Miletus
Thrasybulus of Syracuse
Thrasyllus
Thrasymachus
Thrasymachus of Corinth
Thrasymedes
Thrasymedes (mythology)
Thrax
Three Line Group
Three-phase firing
Thria (Attica)
Thriasian Plain
Thronium (Locris)
Thucydides
Thucydides, son of Melesias
Thule
Thumos
Thyatira
Thyestes
Thyestes (Euripides)
Thyia
Thymaridas
Thymbra
Thymiaterion
Thymochares
Thymoetadae
Thymoetes
Thyreophoroi
Thyreos
Thyrgonidae
Thyrsus
Tiasa
Timachidas of Rhodes
Timaea, Queen of Sparta
Timaeus (dialogue)
Timaeus (historian)
Timaeus of Locri
Timaeus the Sophist
Timaios of Elis
Timanthes
Timanthes of Cleonae
Timanthes of Sicyon
Timarchus of Miletus
Timarete
Timasitheus of Delphi
Timasitheus of Lipara
Timasitheus of Trapezus
Timeline of ancient Greece
Timeline of ancient Greek mathematicians
Timeline of Athens
Timeo Danaos et dona ferentes
Timocharis
Timoclea
Timocleidas
Timocles
Timocracy
Timocrates of Lampsacus
Timocrates of Rhodes
Timocrates of Syracuse
Timocreon
Timolaus of Cyzicus
Timoleon
Timomachus
Timon of Athens (person)
Timon of Phlius
Timophanes
Timotheus (aulist)
Timotheus (general)
Timotheus (sculptor)
Timotheus of Miletus
Timoxenos
Timycha
Tiphys
Tiresias
Tisamenus
Tisamenus (King of Thebes)
Tisamenus (son of Antiochus)
Tisamenus (son of Orestes)
Tisander
Tisias
Tisiphone
Titacidae
Titanomachy
Titanomachy (epic poem)
Titans
Tithonos Painter
Tithonus
Titias
Tityos
Tityos Painter
Tlepolemus
Tleson
Tmolus (father of Tantalus)
Tmolus (son of Ares)
Tolmides
Tomb of Menecrates
Toparches
Tower of the Winds
Toxaechmes
Toxeus
Toxotai
Trachis (Phocis)
Tractatus coislinianus
Tragasus
Tragic hero
Trambelus
Transcendentals
Treasuries at Olympia
Treasury of Atreus
Treasury of Cyrene
Treasury of the Acanthians
Treasury of the Massaliots (Delphi)
Treaty of Dardanos
Trechus (mythology)
Trial of Socrates
Trick at Mecone
Tricorythus
Trident of Poseidon
Trierarch
Trierarchy
Triglyph
Trigonon
Trinemeia
Triopas
Triopas of Argos
Triphylia
Triphylians
Tripolis (region of Laconia)
Triptolemos Painter
Triptolemos (play)
Triptolemus
Trireme
Tritaea (Achaea)
Tritaea (Locris)
Tritaea (Phocis)
Tritagonist
Triteia
Triton
Trittys
Trochilus
Troezen
Troglodytae
Troilus
Troilus of Elis
Trojan Battle Order
Trojan Horse
Trojan language
Trojan Leaders
Trojan War
Trophimoi
Trophonius
Tros
Tryphon
Tübingen Hoplitodromos Runner
Tunnel of Eupalinos
Twelve Olympians
Two-handled amphora (Boston 63.1515)
Tyche
Tychon
Tydeus
Tydeus Painter
Tyllus
Tympanum
Tymphaea
Tyndareus
Types of Women
Typhon
Typology of Greek vase shapes
Tyrannion of Amisus
Tyrannus (mythology)
Tyrant
Tyrmeidae
Tyro
Tyrrhenian amphorae
Tyrtaeus
U
Ucalegon
Ula (Caria)
Underworld Painter
Unity of opposites
Unmoved mover
Upper Agryle
Upper Ancyle
Upper Lamptrai
Upper Paeania
Upper Pergase
Upper Potamus
Upper World
Upsilon
Urania
Urania (mythology)
Uranium (Caria)
Uranus
Use of costume in Athenian tragedy
V
Valle dei Templi
Valley of the Muses
Vari Cave
Varrese Painter
Vasiliki ware
Velchanos
Venus de' Medici
Venus de Milo
Vergina Sun
Victorious Youth
Voidokilia beach
Vrysinas
W
Wall Paintings of Thera
Wandering womb
War against Nabis
Warfare in ancient Greek art
Warfare in Minoan Art
Warrior Vase
Wars of Alexander the Great
Wars of the Delian League
Wars of the Diadochi
Water (classical element)
Ways and Means (Xenophon)
Wedding of Ceyx
Wedding Painter
West Slope Ware
Wheel of fire
White ground technique
Wild Goat Style
Wine-dark sea (Homer)
Winged Gorgoneion (Olympia B 110)
Winged helmet
Winnowing Oar
Women in ancient Sparta
Women in Classical Athens
Women of Trachis
Works of Demosthenes
X
Xanthe (mythology)
Xanthias
Xanthika
Xanthippe
Xanthippe (mythology)
Xanthippus
Xanthippus of Carthage
Xanthius
Xanthos (King of Thebes)
Xanthus
Xanthus (historian)
Xenagoras (geometer)
Xenagoras (historian)
Xenagus
Xenarchos
Xenarchus (comic poet)
Xenarchus of Seleucia
Xenarius
Xenelasia
Xenia
Xeniades
Xenias of Arcadia
Xenias of Elis
Xenoclea
Xenokleides
Xenocles
Xenocrates
Xenocrates of Aphrodisias
Xenodice
Xenokles Painter
Xenon (tyrant)
Xenopatra
Xenophanes
Xenophilus
Xenophon
Xenophon (son of Euripides)
Xenophon of Aegium
Xenophon of Corinth
Xenophon of Ephesus
Xenos
Xerxes' Pontoon Bridges
Xi (letter)
Xiphos
Xoanon
Xuthus
Xypete
Xyston
Xystus
Y
Yona
YZ Group
Z
Zacynthus
Zagreus
Zakoros
Zaleucus
Zanes of Olympia
Zarax
Zarex
Zeleia
Zelus
Zeno (physician)
Zeno of Citium
Zeno of Cyprus
Zeno of Elea
Zeno of Rhodes
Zeno of Tarsus
Zeno's paradoxes
Zenobius
Zenodorus
Zenodotus
Zenodotus (Stoic)
Zereia
Zeta
Zeus
Zeus Areius
Zeus Georgos
Zeuxidamus
Zeuxippe
Zeuxippus
Zeuxippus of Heraclea
Zeuxippus of Sicyon
Zeuxis
Zeuxis of Tarentum
Zeuxo
Zmaratha
Zoilus
Zone (colony)
Zone (vestment)
Zopyron
Zopyrus (physician)
Zoster (Attica)
Zoster (costume)
Lists
Ancient Greek and Roman roofs
Ancient Greek cities
Ancient Greek monetary standards
Ancient Greek philosophers
Cynic
Epicurean
Platonist
Stoic
Ancient Greek playwrights
Ancient Greek poets
Ancient Greek temples
Ancient Greek theatres
Ancient Greek tribes
Ancient Greek tyrants
Ancient Greeks
Kings of Athens
Ancient Macedonians
Ancient Olympic victors
Greek mathematicians
Greek mythological creatures
Greek mythological figures
Greek place names
Greek phrases
Greek vase painters
Homeric characters
Kings of Sparta
Minor Greek mythological figures
Oracular statements from Delphi
Schools of philosophy
Speakers in Plato's dialogues
Stoae
Thracian Greeks
Trojan War characters
See also
Outline of ancient Greece
Timeline of ancient Greece
I
21751894 | https://en.wikipedia.org/wiki/GPXE | GPXE | gPXE is an open-source Preboot eXecution Environment (PXE) client firmware implementation and bootloader derived from Etherboot. It can be used to enable computers without built-in PXE support to boot from the network, or to extend an existing client PXE implementation with support for additional protocols. While standard PXE clients use TFTP to transfer data, gPXE client firmware adds the ability to retrieve data through other protocols like HTTP, iSCSI and ATA over Ethernet (AoE), and can work with Wi-Fi rather than requiring a wired connection.
gPXE development ceased in summer 2010, and several projects are migrating or considering migrating to iPXE as a result.
PXE implementation
gPXE can be loaded by a computer in several ways:
from media like floppy disk, USB flash drive, or hard disk
as a pseudo Linux kernel
as an ELF image
from an option ROM on a network card or embedded in a system BIOS
over a network as a PXE boot image
gPXE implements its own PXE stack, using a driver corresponding to the network card, or a UNDI driver if it was loaded by PXE itself. This makes it possible to use a PXE stack even if the network card has no boot ROM, by loading gPXE from a fixed medium.
Bootloader
Although its basic role was to implement a PXE stack, gPXE can be used as a full-featured network bootloader. It can fetch files from multiple network protocols, such as TFTP, NFS, HTTP or FTP, and can boot PXE, ELF, Linux, FreeBSD, multiboot, EFI, NBI and Windows CE images.
In addition, it is scriptable and can load COMBOOT and COM32 SYSLINUX extensions. This makes it possible, for instance, to build a graphical menu for network boot.
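For illustration, a minimal gPXE boot script might look like the following sketch (the server hostname and file paths are hypothetical examples, not defaults shipped with gPXE):

```
#!gpxe
dhcp net0
kernel http://boot.example.com/vmlinuz root=/dev/sda1
initrd http://boot.example.com/initrd.img
boot
```

The `dhcp` command configures the first network interface; the kernel and initial ramdisk are then fetched over HTTP rather than TFTP before control is handed to the loaded kernel with `boot`.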
See also
PXE
PXELINUX
iPXE
References
External links
etherboot.org - The Etherboot/gPXE Wiki
ROM-o-matic.net dynamically generates gPXE and Etherboot network booting images
Introduction to Network Booting and Etherboot
PXE dust: scalable day-to-day diskless booting (archived copy via Archive.org)
Network booting
Free boot loaders
Free network-related software |
17539252 | https://en.wikipedia.org/wiki/Security%20level%20management | Security level management | Security level management (SLM) comprises a quality assurance system for electronic information security.
The aim of SLM is to display the IT security status transparently across a company at any time, and to make IT security a measurable quantity. Transparency and measurability form the prerequisites for making IT security proactively monitorable, so that it can be improved continuously.
SLM is oriented towards the phases of the Deming Cycle/Plan-Do-Check-Act (PDCA) Cycle: within the scope of SLM, abstract security policies or compliance guidelines at a company are transposed into operative, measurable specifications for the IT security infrastructure. The operative aims form the security level to be reached.
The security level is checked permanently against the current performance of the security systems (malware scanner, patch systems, etc.). Deviations can be recognised early on and adjustments made to the security system.
SLM falls under the range of duties of the chief security officer (CSO), the chief information officer (CIO) or the chief information security officer (CISO), who report directly to the Executive Board on IT Security and data availability.
Classification
SLM is related to the disciplines of security information management (SIM) and security event management (SEM), which the analyst firm Gartner summarises in its Magic Quadrant for Security Information and Event Management (SIEM) and defines as follows:
"[…] SIM provides reporting and analysis of data primarily from host systems and applications, and secondarily from security devices — to support security policy compliance management, internal threat management and regulatory compliance initiatives. SIM supports the monitoring and incident management activities of the IT security organization […]. SEM improves security incident response capabilities. SEM processes near-real-time data from security devices, network devices and systems to provide real-time event management for security operations. […]"
SIM and SEM relate to the infrastructure for realising superordinate security aims, but are not descriptive of a strategic management system with aims, measures, revisions and actions to be derived from this. SLM unites the requisite steps for realising a measurable, functioning IT security structure in a management control cycle.
SLM can be categorised under the strategic panoply of IT governance, which, via suitable organisation structures and processes, ensures that IT supports corporate strategy and objectives. SLM allows CSOs, CIOs and CISOs to prove that SLM is contributing towards protecting electronic data relevant to processes adequately, and therefore makes a contribution in part to IT governance.
The Steps towards SLM
Defining the Security Level (Plan): Each company specifies security policies. The executive management defines aims in relation to the integrity, confidentiality, availability and authority of classified data. In order to be able to verify compliance with these specifications, concrete aims for the individual security systems at the company need to be derived from the abstract security policies. A security level consists of a collection of measurable limiting and threshold values.
Example: operative aims like "the anti-virus systems at our UK sites need to be up-to-date no longer than four hours after publication of the current definition" need to be derived from superordinate security policies like "our employees should be able to work without being interrupted."
Limiting and threshold values are to be specified separately and individually for different sites, locations and countries, because the IT infrastructure on-site and any other local determining factors need to be taken into consideration.
Example: office buildings in the UK are normally equipped with high-speed dedicated lines. It is wholly realistic here to limit the deadline for supplying all computers with the newest anti-virus definitions to a few hours. For a factory in Asia, with a slow modem link to the web, a realistic limiting value would have to be set that is somewhat higher.
The IT control framework Control Objectives for Information and Related Technology (COBIT) provides companies with instructions on transposing superordinate, abstract aims into measurable aims in a few steps.
Collecting and Analysing Data (Do): Information on the current status of the systems can be gleaned from the log files and status reports provided by individual anti-virus, anti-spyware or anti-spam consoles. Monitoring and reporting solutions that can analyse software applications from all vendors can simplify and accelerate data collection.
Checking the Security Level (Check): SLM prescribes continual reconciliation of the defined security level with the current measured values. Automated real-time reconciliation supplies companies with a permanent status report on the security status across all locations.
Adjusting the Security Structure (Act): Efficient SLM allows trend analyses and long-term comparative assessments to be made. Through the rolling observation of the security level, weak spots in the network can be identified early on and appropriate adjustments made proactively in the security systems.
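The Plan and Check steps above lend themselves to automation. A minimal sketch in Python, with hypothetical site names and limiting values drawn from the anti-virus example:

```python
from datetime import timedelta

# Security level (Plan): site-specific limiting values, following the
# UK office / Asian factory example above. Figures are hypothetical.
LIMITS = {
    "uk_office": timedelta(hours=4),
    "asia_factory": timedelta(hours=24),
}

def check_security_level(site, definition_age):
    """Check phase: reconcile a measured value (here, the age of the
    anti-virus definitions) against the defined security level."""
    return "ok" if definition_age <= LIMITS[site] else "deviation"
```

A deviation flagged here would then feed the Act phase, i.e. a proactive adjustment to the security structure.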
See also
Besides defining the specifications for engineering, introducing, operating, monitoring, maintaining and improving a documented information security management system, ISO/IEC 27001 also defines the specifications for implementing suitable security mechanisms.
ITIL, a collection of best practices for IT control processes, goes far beyond IT security. In relation, it supplies criteria for how Security Officers can conceive IT security as an independent, qualitatively measurable service and integrate it into the universe of business-process-oriented IT processes. ITIL also works from the top down with policies, processes, procedures and job-related instructions, and assumes that both superordinate, but also operative aims need to be planned, implemented, controlled, evaluated and adjusted.
External links
COBIT:
Summary and material from the German Chapter of the ISACA - German
Cobit 4.0 - German
ISO/IEC 27000
The ISO 27000 Directory
International Organization for Standardization
ITIL
"ITIL and Information Security" (ITIL und Informationssicherheit), Federal Office for Information Security (BSI), Germany - German
"How ITIL can improve Information Security", securityfocus.com – English
Official ITIL website of the British Office of Government Commerce - English
Data security |
21143560 | https://en.wikipedia.org/wiki/Fear%2C%20uncertainty%2C%20and%20doubt | Fear, uncertainty, and doubt | Fear, uncertainty, and doubt (often shortened to FUD) is a propaganda tactic used in sales, marketing, public relations, politics, polling and cults. FUD is generally a strategy to influence perception by disseminating negative and dubious or false information and a manifestation of the appeal to fear.
Definition
The term "fear, uncertainty, and doubt" appeared as far back as the 1920s, whereas the similar formulation "doubts, fears, and uncertainties" reaches back to 1693. By 1975, the term was appearing abbreviated as FUD in marketing and sales contexts as well as in public relations:
The abbreviation FUD is also alternatively rendered as "fear, uncertainty, and disinformation".
FUD was first used with its common current technology-related meaning by Gene Amdahl in 1975, after he left IBM to found his own company, Amdahl Corp.:
This usage of FUD to describe disinformation in the computer hardware industry is said to have led to subsequent popularization of the term.
As Eric Steven Raymond wrote:
By spreading questionable information about the drawbacks of less well-known products, an established company can discourage decision-makers from choosing those products over its own, regardless of the relative technical merits. This is a recognized phenomenon, epitomized by the traditional axiom of purchasing agents that "nobody ever got fired for buying IBM equipment". The aim is to have IT departments buy software they know to be technically inferior because upper management is more likely to recognize the brand.
Examples
Software producers
Microsoft
From the 1990s onward, the term became most often associated with Microsoft. Roger Irwin said:
In 1996, Caldera, Inc. accused Microsoft of several anti-competitive practices, including issuing vaporware announcements, creating FUD, and excluding competitors from participating in beta-test programs in order to destroy competition in the DOS market.
One of the claims was related to having modified Windows 3.1 so that it would not run on DR DOS 6.0, although there were no technical reasons for it not to work. This was caused by the so-called AARD code, an encrypted piece of code which had been found in a number of Microsoft programs. The code would fake nonsensical error messages if run on DR DOS, like:
If the user chose to press , Windows would continue to run on DR DOS without problems. While it had been already speculated in the industry that the purpose of this code was to create doubts about DR DOS's compatibility and thereby destroy the product's reputation, internal Microsoft memos published as part of the United States v. Microsoft antitrust case later revealed that the specific focus of these error messages was DR DOS. At one point, Microsoft CEO Bill Gates sent a memo to a number of employees, reading
Microsoft Senior Vice President Brad Silverberg later sent another memo, stating
In 2000, Microsoft settled the lawsuit out-of-court for an undisclosed sum, which in 2009 was revealed to be $280 million.
At around the same time, the leaked internal Microsoft "Halloween documents" stated "OSS [Open Source Software] is long-term credible… [therefore] FUD tactics cannot be used to combat it."
Open source software, and the Linux community in particular, are widely perceived as frequent targets of Microsoft's FUD:
Statements about the "viral nature" of the GNU General Public License (GPL).
Statements that "…FOSS [Free and open source software] infringes on no fewer than 235 Microsoft patents", before software patent law precedents were even established.
Statements that Windows Server 2003 has lower total cost of ownership (TCO) than Linux, in Microsoft's "Get-The-Facts" campaign. It turned out that they were comparing Linux on a very expensive IBM mainframe to Windows Server 2003 on an Intel Xeon-based server.
A 2010 video claimed that OpenOffice.org had a higher long-term cost of ownership, as well as poor interoperability with Microsoft's own office suite. The video featured statements such as "If an open source freeware solution breaks, who's gonna fix it?"
SCO v. IBM
The SCO Group's 2003 lawsuit against IBM, funded by Microsoft, claiming $5 billion in intellectual property infringements by the free software community, is an example of FUD, according to IBM, which argued in its counterclaim that SCO was spreading "fear, uncertainty, and doubt".
Magistrate Judge Brooke C. Wells wrote (and Judge Dale Albert Kimball concurred) in her order limiting SCO's claims: "The court finds SCO's arguments unpersuasive. SCO's arguments are akin to SCO telling IBM, 'sorry, we are not going to tell you what you did wrong because you already know...' SCO was required to disclose in detail what it feels IBM misappropriated... the court finds it inexcusable that SCO is... not placing all the details on the table. Certainly if an individual were stopped and accused of shoplifting after walking out of Neiman Marcus they would expect to be eventually told what they allegedly stole. It would be absurd for an officer to tell the accused that 'you know what you stole, I'm not telling.' Or, to simply hand the accused individual a catalog of Neiman Marcus' entire inventory and say 'it's in there somewhere, you figure it out.'"
Regarding the matter, Darl Charles McBride, President and CEO of SCO, made the following statements:
"IBM has taken our valuable trade secrets and given them away to Linux,"
"We're finding... cases where there is line-by-line code in the Linux kernel that is matching up to our UnixWare code"
"...unless more companies start licensing SCO's property... [SCO] may also sue Linus Torvalds... for patent infringement."
"Both companies [IBM and Red Hat] have shifted liability to the customer and then taunted us to sue them."
"We have the ability to go to users with lawsuits and we will if we have to." "It would be within SCO Group's rights to order every copy of AIX [IBM's proprietary UNIX] destroyed."
"As of Friday, [13] June [2003], we will be done trying to talk to IBM, and we will be talking directly to its customers and going in and auditing them. IBM no longer has the authority to sell or distribute IBM AIX and customers no longer have the right to use AIX software"
"If you just drag this out in a typical litigation path, where it takes years and years to settle anything, and in the meantime you have all this uncertainty clouding over the market..."
"Users are running systems that have basically pirated software inside, or stolen software inside of their systems, they have liability."
SCO stock skyrocketed from under a share to over in a matter of weeks in 2003. It later dropped to around —then crashed to under 50 cents on 13 August 2007, in the aftermath of a ruling that Novell owns the UNIX copyrights.
Apple
Apple's claim that iPhone jailbreaking could potentially allow hackers to crash cell phone towers was described by Fred von Lohmann, a representative of the Electronic Frontier Foundation (EFF), as a "kind of theoretical threat...more FUD than truth".
Security industry
FUD is widely recognized as a tactic to promote the sale or implementation of security products and measures. It is possible to find pages describing purely artificial problems. Such pages frequently contain links to purported demonstration source code that does not point to any valid location, and sometimes even links that "will execute malicious code on your machine regardless of current security software", leading to pages without any executable code.
The drawback to the FUD tactic in this context is that, when the stated or implied threats fail to materialize over time, the customer or decision-maker frequently reacts by withdrawing budgeting or support from future security initiatives.
FUD has also been utilized in technical support scams, which may use fake error messages to scare unwitting computer users, especially the elderly or computer-illiterate, into paying for a supposed fix for a non-existent problem or to avoid supposed criminal charges, such as for unpaid taxes or, in extreme cases, false accusations of illegal acts such as child pornography.
Caltex
The FUD tactic was used by Caltex Australia in 2003. According to an internal memo, which was subsequently leaked, they wished to use FUD to destabilize franchisee confidence, and thus get a better deal for Caltex. This memo was used as an example of unconscionable behaviour in a Senate inquiry. Senior management claimed that it was contrary to and did not reflect company principles.
Clorox
In 2008, Clorox was the subject of both consumer and industry criticism for advertising its Green Works line of allegedly environmentally friendly cleaning products using the slogan, "Finally, Green Works." The slogan implied both that "green" products manufactured by other companies, which had been available to consumers prior to the introduction of Clorox's Green Works line, had all been ineffective, and also that the new Green Works line was at least as effective as Clorox's existing product lines. The intention of this slogan and the associated advertising campaign has been interpreted as appealing to consumers' fears that products from companies with less brand recognition are less trustworthy or effective. Critics also pointed out that, despite its representation of Green Works products as "green" in the sense of being less harmful to the environment and/or consumers using them, the products contain a number of ingredients that advocates of natural products have long campaigned against using in household products due to toxicity to humans or their environment. All three implicit claims have been disputed, and some of their elements disproven, by environmental groups, consumer-protection groups, and the industry self-regulatory Better Business Bureau.
See also
References
Further reading
External links
Computer jargon
Marketing techniques
Microsoft criticisms and controversies
Propaganda techniques
Doubt
Fear |
1135347 | https://en.wikipedia.org/wiki/Roland%20MT-32 | Roland MT-32 | The Roland MT-32 Multi-Timbre Sound Module is a MIDI synthesizer module first released in 1987 by Roland Corporation. It was originally marketed to amateur musicians as a budget external synthesizer with an original list price of $695. However, it became more famous along with its compatible modules as an early de facto standard in computer music. Since it was made prior to the release of the General MIDI standard, it uses its own proprietary format for MIDI file playback.
Within Roland's family of Linear Arithmetic (LA) synthesizers, the multitimbral MT-32 series constitutes the budget prosumer line for computer music at home, the multitimbral D-5, D-10, D-20 and D-110 models constitute the professional line for general studio use, and the high-end monotimbral D-50 and D-550 models are for sophisticated multi-track studio work. It was the first product in Roland's line of Desktop Music System (DTM) packages in Japan.
Features
Like the Roland D-50 Linear Synthesizer, it uses Linear Arithmetic synthesis, a form of sample-based synthesis combined with subtractive synthesis, to produce its sounds. Samples are used for attacks and drums, while traditional synthesis assures the sustain phase of the sounds.
The original MT-32 comes with a preset library of 128 synth and 30 rhythm sounds, playable on 8 melodic channels and one rhythm channel. It also features a digital reverberation effect. Successors (see below) added a library of 33 sound effects. Because of the absence of a piano attack sample, it cannot play a convincing acoustic piano sound.
Sounds are created from up to 4 partials which can be combined in various ways (including ring modulation). With 32 partials available overall, polyphony depends on the tonal complexity of the music, and 8 to 32 notes can be played simultaneously.
The MT-32 by default assigns its parts 1-8 and R(hythm) to respond on input MIDI channels 2-9 and 10 respectively. Consequently, MIDI files using the popular channel 1 or the other channels 11-16 cannot have those parts played on the MT-32. However, the MT-32's melodic parts can be shifted down to respond to channels 1-8 using a button combination or through MIDI System Exclusive messages, enabling improved compatibility with non-MT-32-specific MIDI sequences.
Additionally, in 1993 Roland released the "GM2MT" SysEx pack, which can be used to reprogram the MT-32 and compatibles to match General MIDI specifications as closely as possible. 64 of the 128 patches (the limit of possible variations) are completely new or modified sounds, with additional sounds having been added to drum channel 10. Despite this, compatibility with GM is still limited by the lack of parts (9 on the MT-32, 16 per GM specification) and reversed panpot compared to MMA MIDI specifications. The utility was predated by a pack called "MT32GS", released by Mike Cornelius in 1992. The CM-Panion, by Gajits Music Software, was an Amiga editor which worked with the MT-32.
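As an illustration of such reprogramming, the sketch below builds a Roland "Data Set 1" (DT1) System Exclusive message of the kind used to reassign the parts' MIDI channels. The device ID (0x10), model ID (0x16) and the system-area address 0x10 0x00 0x0D are taken as assumptions from the published MT-32 MIDI implementation and should be verified against it before use:

```python
def roland_checksum(payload):
    """Roland SysEx checksum: address and data bytes plus the checksum
    must sum to a multiple of 128."""
    return (128 - sum(payload) % 128) % 128

def mt32_dt1(address, data):
    """Build an MT-32 'Data Set 1' (DT1) SysEx message:
    F0 41 <device> 16 12 <address> <data...> <checksum> F7."""
    payload = list(address) + list(data)
    return bytes([0xF0, 0x41, 0x10, 0x16, 0x12]
                 + payload + [roland_checksum(payload), 0xF7])

# Hypothetical example: reassign parts 1-8 and rhythm to MIDI channels
# 1-8 and 10 (channel values in the message are zero-based: 0-7 and 9).
msg = mt32_dt1([0x10, 0x00, 0x0D], [0, 1, 2, 3, 4, 5, 6, 7, 9])
```

The resulting byte string could then be sent to the unit through any MIDI library's raw SysEx output.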
MT-32 models
Two major revisions of the MT-32 were produced. Roland refers to them as MT-32 (Old / Without headphones) and MT-32 (New / With headphones).
MT-32 (old)
The LA32 sound generation chip is an 80-pin PGA. The control CPU is an Intel C8095-90 in ceramic DIP-48 package. The digital-to-analog converter (DAC) is a Burr-Brown PCM54; the input signal having a resolution of 15 bits (see below). Line-outs are unbalanced 1/4″ TS phone connector (separate left and right channels.) No headphone jack.
MT-32 with revision 0 PCB, used in units up to serial number 851399.
The PGA LA32 chip is later replaced with a 100-pin QFP type.
MT-32 with "old-type" revision 1 PCB, used in units with serial numbers 851400 - 950499.
MT-32 (new)
The control CPU is an Intel P8098. Same Digital-to-analog converter (DAC), but with 16 bits of input signal resolution (see below). A stereo 1/4″ TRS headphones jack is added.
MT-32 with "new-type" revision 1 PCB, used in units with serial numbers 950500 and up.
Roland MT-100: Combination of MT-32 and Roland PR-100 (Sequencer and 2.8" Quick-Disk). While it uses a MT-32 (New) PCB, the chassis is different.
MT-32 compatible models
To target computer users, Roland released a number of CM (Computer Music) modules. They came without an LCD display and had most buttons removed. CM modules are compatible with MT-32, but feature 33 additional sound effect samples which many games took advantage of. These sound effects cannot be heard on an MT-32.
Early models share a similar design to MT-32 (New). Control CPU is an Intel P8098 and DAC is a Burr-Brown PCM54.
Roland CM-32L: Released in 1989, this Roland CM has only a volume knob, a MIDI message indicator and a power-on indicator as external controls.
Roland CM-64: A combination of the CM-32L with the sample-based CM-32P, a cut-down "computer music" version of the Roland U-110. The CM-32P part plays on MIDI channels 11-16 which are not used by the CM-32L part.
Roland LAPC-I: ISA bus expansion card for IBM PCs and compatibles. Includes the MPU-401 interface.
In later models, the DAC is a Burr-Brown PCM55, and vibrato is noticeably faster.
Roland CM-32LN: Sound module for the NEC PC-98 series notebook computers, featuring a special connector for direct connection to the computer's 110-pin expansion port. Released in Japan only.
Roland CM-500: A combination of the CM-32LN with the Roland GS-compatible Roland CM-300, the "computer music" version of the Roland SC-55. Released around 1992.
Roland LAPC-N: C-Bus expansion card for the NEC PC-98 series of computers. Released in Japan only.
Roland RA-50: LA unit with CM-32L ROM (but not all CM-32L samples): requires a software workaround or hardware modification to work 100% as an MT-32.
Sound quality problems
Given the MT-32 was intended to be a relatively low-cost prosumer product, many corners were cut in the design of its DAC output. For example, the circuitry needed to properly calibrate the DACs was omitted, resulting in distortion of the analog signal.
Despite having the capabilities of a professional synthesizer module, the noisy output of the MT-32 caused it to be generally considered unsuitable for professional studio use, although it was considered sufficient for use as the sound engine within other Roland prosumer products of the period. For example, the first generation of E-series home keyboards produced by the company, the first being the E-20 (and its associated modular version the RA-50 arranger), use a highly modified MT-32 motherboard. However, an aftermarket modification was available from Real World Interfaces to improve the MT-32's sound quality and generally increase its suitability for professional use.
Digital overflow
The MT-32 and compatible modules use a parallel 16-bit DAC at a sampling rate of 32000 Hz. In order to improve the signal-to-noise ratio without investing in higher-quality components, the volume of the digital signal fed into the DAC is doubled by shifting all 15 non-sign-carrying data bits to the left, which amounts to multiplying the amplitude by two while keeping the noise floor constant at the analogue output.
However, if this doubled amplitude exceeds the amount that can be represented with 16 bits, an arithmetic overflow occurs, audible as a very loud popping or cracking noise that occurs whenever the original signal crosses +16384/-16384 (the value of bit 14 lost in the bit shift).
This bit shift is implemented differently between module generations. In first-generation modules, this bit shift is performed at the connection between the data bus and DAC:
Original (non-shifted) data bit # Connection
--------------------------------------------------------------------------------
15 14 13 12 11 10 09 08 07 06 05 04 03 02 01 00 Output of LA32 synthesizer chip
15 14 13 12 11 10 09 08 07 06 05 04 03 02 01 00 Input to reverberation chip
15 13 12 11 10 09 08 07 06 05 04 03 02 01 00 -- input to DAC
| |
| +- most significant data-carrying bit
+- sign bit
This means that the reverberation chip will not "see" the overflow noise and thus not reverberate it. However, since bit 14 is dropped completely, the effective resolution is reduced to 15 bits, and since the DAC's least significant bit is not connected at all and thus not changing with the sign, additional one-bit noise is produced, audible at low signal levels.
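The first-generation bit mapping above can be modelled directly in a few lines. This is a sketch of the digital wiring only (the unconnected DAC bit is modelled as 0, so the one-bit noise mentioned above does not appear):

```python
def dac_input_gen1(x):
    """First-generation wiring: DAC bit 15 takes the sign bit, DAC bits
    14..1 take original bits 13..0 (bit 14 is dropped), and the DAC's
    least significant bit is left unconnected (modelled here as 0)."""
    u = x & 0xFFFF                          # 16-bit two's-complement view
    v = (u & 0x8000) | ((u & 0x3FFF) << 1)  # keep sign, shift low 14 bits
    return v - 0x10000 if v & 0x8000 else v

# Below +/-16384 the shift is a clean doubling of the amplitude...
assert dac_input_gen1(1000) == 2000
assert dac_input_gen1(16383) == 32766
# ...but once the signal crosses +16384, bit 14 is lost and the output
# jumps discontinuously, producing the loud pop described above.
assert dac_input_gen1(16384) == 0
assert dac_input_gen1(-16385) == -2
```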
In second-generation modules, the bit shift is performed at the connection between the LA32 sound generation chip and the data bus:
Original (non-shifted) data bit # Connection
--------------------------------------------------------------------------------
15 13 12 11 10 09 08 07 06 05 04 03 02 01 00 14 output of LA32 synthesizer chip
15 13 12 11 10 09 08 07 06 05 04 03 02 01 00 14 input to reverberation chip
15 13 12 11 10 09 08 07 06 05 04 03 02 01 00 14 input to DAC
| |
| +- most significant data-carrying bit
+- sign bit
This means that the reverberation chip will "see" the overflow noise and thus reverberate it. However, since the DAC's least significant bit is connected and does change with the sign, the sound quality is improved slightly over the earlier implementation.
To prevent digital signal overflow and its audible result, the digital output volume must be kept low enough so that bit 14 will never be used. On the first generation MT-32, this can simply be done by selecting a lower main volume on the unit's front panel, which directly controls the software main volume setting, which in turn directly translates into the amplitude of the digital output signal. On later generation units, this does not work, as the main volume knob and the software main volume setting only modify the volume of the analogue output using voltage-controlled amplifiers and have little effect on the amplitude of the digital signal. To prevent signal overflow, each individual part's volume (controller #7) must be kept low instead.
A third-party solution
In the period of 1989 to 1993, Robin Whittle of Real World Interfaces offered aftermarket modifications to the MT-32 to address its sound quality issues, as well as improve the functionality of the reverberation unit, provide discrete analog outputs for the internal reverb send and reverb return, and provide battery backup of the MT-32's settings.
According to documentation written in 1990, these modifications were only available for the first-generation MT-32, and not the later "headphone" model or any of the other MT-32 derivatives.
Note that the RWI modifications were intended for those using the MT-32 professionally, and may cause some minor compatibility issues with video game soundtracks intended for a stock MT-32. In particular the changes to the reverb unit functionality will likely cause an RWI modified MT-32 to render reverberation differently from what was intended, with possibly detrimental effects.
Compatibility problems
First generation units, having control ROM versions below 2.00, require a 40 millisecond delay between system exclusive messages. Some computer games which were programmed to work with the compatible modules (see above) or later ROM versions that do not require this delay, fail to work with these units, producing incorrect sounds or causing the firmware to lock up due to a buffer overflow bug, requiring turning the unit off and on. However, some games were designed to exploit errors in earlier units, causing incorrect sound on later revisions.
Also, some games were written to use instruments not found in the MT-32 models, and require a compatible module, such as a CM-32L, for proper sound playback.
Music for PC games
Despite its original purpose as a companion to other professional MIDI equipment, the MT-32 became one of several de facto standards for PC game publishers. Sierra On-Line, a leading PC game publisher of the time, took an interest in the sound design of its games. Sierra secured a distribution deal to sell the MT-32 in the US, and invested heavily in giving its game titles (at the time) state-of-the-art sound by hiring professional composers to write in-game music. King's Quest IV, released in 1988, was the first Sierra title with a complete musical soundtrack scored on the MT-32.
The MT-32 with the necessary MPU-401 interface cost $550.00 to purchase from Sierra when it first sold the device. Although the MT-32's high price prevented it from dominating the end-user market of gamers, other PC publishers quickly followed Sierra's lead, expanding the role of music in their own game titles, with Roland supporting the industry by releasing CM modules for computer users. The MT-32 remained popular for musical composition well into the early 1990s, when the game industry began to shift toward CD audio.
The proliferation of the General MIDI standard, along with competition from less expensive "wavetable" sample-based soundcards, led to the decline of musical soundtracks using the MT-32's proprietary features. Games that played General MIDI tracks on the MT-32 initialized the MT-32's sound bank to approximate the General MIDI Level 1 (GM1) specification, but avoided any of the MT-32's hallmark music-synthesis features, adhering to GM1's rather limited set of controllers.
Emulation
Due to the popularity of the MT-32 as a music playback device for computer games, many modern sound cards provide a simple "MT-32 emulation mode", usually realized by way of a sound mapping consisting either of General MIDI instruments rearranged to roughly represent the MT-32's preset sound bank, or of samples directly recorded from the original unit. Later modules like most of the Roland Sound Canvas series, Yamaha MU-series and the Kawai GMega feature such limited MT-32 backwards-compatibility modes. Results are often considered poor, as the sampling technology used cannot reflect the pitch- and time-variable characteristics of the original synthesizer technology, and programming of custom sounds (see above) is not supported at all. One exception is the Orchid SoundWave 32 card released by Orchid Technology in 1994, whose on-board digital signal processor (DSP) allowed for a more faithful reproduction of the original sound characteristics.
More recently, there have been attempts at emulating the LA synthesizer technology in software using images of the original PCM and control ROMs. The most notable of these emulators is the open-source project Munt, which emulates the MT-32 hardware by way of a virtual device driver for Microsoft Windows, or a virtual MIDI device for OS X, BSD and Linux. It is also incorporated into ScummVM, an open-source adventure game interpreter, as of version 0.7.0. Munt is based on an earlier MT-32 Emulation Project, which was the source of a short-lived legal argument over distribution of the original ROM images with Roland Corporation, who manufactured the MT-32 and claims copyright on the ROM's data.
Roland offers emulation of classic synthesizers via the Roland Cloud subscription service. Support for the D-50 was notably added in June 2017.
References
External links
Roland MT-32 MIDI Implementation
Munt (MT-32/CM-32L emulator)
mp3 samples from SynthMania
Polynominal MT 32 advanced programming, audio test, manual and schematics
MT-32
MIDI standards
Products introduced in 1987 |
51178973 | https://en.wikipedia.org/wiki/Sinemia | Sinemia | Sinemia was a subscription-based service for discounted movie-ticket plans. Sinemia advertised a variety of subscription types, including those allowing users to watch movies in every format available (2D – 3D – IMAX – 4DX – DBOX – ScreenX) at any movie theater with no limitation on the dates or movie showtimes; however, the service was plagued with problems. Sinemia was the only international movie-ticket subscription service that operated in the UK, Canada, Turkey, and Australia alongside the US. Sinemia ceased operations in the US on April 26, 2019.
The company stated in its US bankruptcy filing that it was the subject of a pending FTC investigation, and a class-action lawsuit was filed against Sinemia over fees. Sinemia also came under fire for requiring its users to provide photo identification, social security numbers and other personal information; for terminating subscriber accounts without apparent cause; and for app errors which many believed were intentionally designed to slow use of the app, and which the company was able to selectively "fix" on a subscriber-by-subscriber basis depending upon how widely and frequently such customers complained on Twitter and other platforms.
Service
The Sinemia app could be downloaded on iPhone, Android and Windows Phone devices. Upon purchase of a plan, a user's Sinemia membership began; however, an activation period of ten days applied before users could use the service by choosing their preferred movie and theater in the Sinemia app, unless they paid an additional fee to start using the service right away. The app provided virtual credit card numbers members could use to purchase movie tickets online, although convenience fees charged by services like Atom and Fandango were billed to the member's credit card separately.
Sinemia's “Rollover” feature allowed users to roll over unused movie ticket credits to the next month. The feature was applicable for both one-person plans and family plans.
In March 2019, Sinemia introduced and heavily promoted the “Sinemia Limitless” feature, which claimed to allow users to purchase tickets with a one-time payment at the location of their choosing without using the Sinemia app.
This feature promised an option for moviegoers who prefer not to commit to subscriptions, but it charged a hefty $49.95 initiation fee and up to a $19.95 card activation fee, and many users claimed their memberships were not activated until many weeks after their purchase.
Despite Sinemia previously calling competitor Moviepass's "Unlimited plan" unsustainable, Sinemia nonetheless introduced its own Unlimited plan in the US, UK, Canada, and Australia in 2018, which claimed to allow users to watch a movie every day with their membership.
Additional Features
Advance Ticket Feature
Sinemia developed a feature allowing customers to buy movie tickets online in advance of the show date; however, its service fees (above and beyond the advertised monthly cost) became the source of much controversy (see section below).
Sinemia Social
Sinemia Social was a content platform for movie-related content such as news and specially curated movie lists, and the company planned to offer a database for actors, actresses, trailers, and movies.
History
In 2014, Sinemia was founded by entrepreneur Rıfat Oğuz.
In 2016, Sinemia received an investment from 500 Startups and raised $1.5M for its initial US expansion led by Revo Capital.
In 2018, Sinemia announced the Unlimited plan in the UK, US, Canada, and Australia, allowing the users to watch a movie every day.
In October 2018, movie-ticket subscription company Sinemia announced it would begin to offer its tech to movie theaters so they could craft their own personalized subscription plans through its software platform, Sinemia Enterprise.
In January 2019, Sinemia introduced the Rollover feature.
In March 2019, Sinemia introduced the “Limitless” feature.
On April 19, 2019, Sinemia faced a second class action lawsuit. This lawsuit claimed the company terminated user accounts for no reason and did not offer any type of refund.
On April 24, 2019, Bloomberg reported that Sinemia was considering closing down its subscription service to focus more attention on creating similar programs for theater chains, according to sources familiar with the matter.
On April 26, 2019 the company posted a letter on its website indicating that it will be "closing its doors and ending operations in the US effective immediately."
On April 27, 2019 the company declared bankruptcy and confirmed that it is under FTC investigation.
Controversy
FTC Investigation
On April 27, 2019, Sinemia confirmed that it was under "pending" Federal Trade Commission investigation. While the FTC would neither confirm nor deny the investigation, the FTC's role in "stopping unfair, deceptive or fraudulent practices in the marketplace" matches many of the complaints logged by users.
Class-Action Lawsuit
In November 2018, a class-action lawsuit was filed against Sinemia alleging a "bait and switch" scheme over a new $1.80 per ticket processing fee charged to customers who had prepaid annual memberships. "[Sinemia] lures consumers in by convincing them to purchase a purportedly cheaper movie subscription, and then adds undisclosed fees that make such purchases no bargain at all," the lawsuit claims. "Sinemia fleeces consumers with an undisclosed, unexpected, and not-bargained-for processing fee each time a plan subscriber goes to the movies using Sinemia's service." The lawsuit was amended in February 2019 to include more plaintiffs across 10 states.
Photo ID Demands, "Exit Scam" Behavior and Privacy Concerns
Beginning in late February 2019, customers found themselves unable to use the Sinemia app, instead seeing a message that Sinemia needed a copy of a driver's license, passport or other photo ID to "prevent fraud." In some cases, Sinemia required two forms of ID, leading to speculation about the company's motivation. In many cases, once the company obtained the photo ID, passport, etc., the accounts were soon terminated. Those who refused to provide photo identification were barred from using the service with no refund, leaving many customers compelled to hand over their identification documents despite the unusual and invasive nature of the demand. Some observers suggested that Sinemia was running an "exit scam", in which a company collects private information for sale on the dark web, then quickly ceases operations and disappears. Within 60 days, Sinemia announced that it was ceasing US operations, leaving customers exceptionally concerned about the fate of their photo IDs, credit card information, and other personal information, including dates of birth and, in some cases, Social Security numbers.
Account Terminations
In early March 2019, customers began reporting that their accounts were being terminated with no reason given. Despite outcry from many of those terminated that there was no misuse or fraud on their part, Sinemia claimed that it "has uncovered more than a thousand variations of fraud and has improved its fraud detection systems accordingly" and that it "detected fraudulent activities by a number of users whose memberships have been subsequently terminated due to violation of the Terms of Service." Many noted that they were terminated after the company updated its app to request "tips" when customers attempted to purchase movie tickets, and that once they refused to tip, their service was terminated. Sinemia subsequently offered partial refunds to customers "based on the difference between what you’ve paid to Sinemia and your spending which also includes the cost of the tickets you have received through your Sinemia membership," which for most terminated users ended up being zero, leading some to speculate that the company terminated users it was beginning to lose money on. The company claims that it has only removed 3% of user accounts since early March due to fraudulent activity, though it has provided no hard data to back up this claim. Publications such as Business Insider noted that they alone had received hundreds of complaints from customers about Sinemia problems, and many other sites reporting the "fraud" claims have dozens or hundreds of comments from irate terminated users, suggesting that the "3%" claim is highly understated. A survey conducted in the private Sinemia Chatter Facebook group (with 284 members total) showed that fully 80% had their accounts or service effectively terminated, whether through outright account termination, account termination followed by reinstatement quickly followed by error messages, or unusual error messages completely preventing the ability to use the app.
One user termed this latter condition, "soft termination," a way to kick off users and stop them from seeing their allotted movies without the affront of a "fraud" accusation that was yielding Sinemia so much bad press.
Account Reinstatements
Following the outcry from users who claimed they were terminated despite following all of the rules, Sinemia began selectively reinstating the accounts of the most vocal complainers, particularly those posting about Sinemia's business practices on social media. This was very quickly followed by unusual and never-before-seen app errors (see "Sinemia App Issues" below).
Sinemia App Issues
In March 2019, Sinemia users began reporting problems with the company's app. Complaints ranged from the inability to purchase tickets due to error messages within the app, to not being able to check in via the app when seeing the movie as required under the company's terms of service. One common error message was termed the "Error OK" message due to the placement of a single "Error" word and the OK button beneath it (see photo). Many reported this problem had been happening for days and weeks, and that the response to customer service inquiries was either slow, unhelpful, or non-existent, with some cancelling their subscriptions due to the lack of assistance with the app's problems. The app currently holds a 2.5/5 rating on Google Play and 1.1/5 on iTunes.
Terms-of-Service Changes
In April 2019, it was reported that Sinemia had changed its terms of service to prevent users from bringing or participating in any class-action lawsuit against the company.
IndieWire Interview
On April 2, 2019, IndieWire published an interview with Sinemia CEO Rifat Oguz which the interviewer described as "at times, contentious." When asked about the problems users experience with the app, Oguz stated, "The technology is always updated, currently we are updating almost every two days… so people always need to update and not use the outdated version," while also theorizing about some of the users who were unable to purchase tickets, "If you were a terminated user, you can’t use your app... Maybe they’re terminated but they don’t know that." Responding to reports of poor customer service, Oguz claimed, "We also increased our employee number maybe 30 percent this year, maybe 40 percent, so we’re growing and we’re advancing with increasing our customer support, increasing our team." When asked about the issue of requiring photo IDs from its users, he stated that this information "only stays 24 hours in our system" and promised that this information would not be shared, while also attributing the need for photo IDs to "the fraud that we are facing." When asked about the termination of user accounts with no reason offered for the terminations, Oguz stated, "We can of course try to do more, but right now, we need to fight the frauds to actually maintain the business."
See also
MoviePass
References
Ticket sales companies
Digital marketing companies of the United States
Defunct subscription services
2019 disestablishments in California |
128500 | https://en.wikipedia.org/wiki/West%20Fargo%2C%20North%20Dakota | West Fargo, North Dakota | West Fargo is a city in Cass County, North Dakota, United States. It is, as of the 2020 Census, the fifth largest city in the state of North Dakota with a population of 38,626, and it is one of the state's fastest growing cities. West Fargo was founded in 1926. The city is part of the Fargo-Moorhead, ND-MN Metropolitan Statistical Area.
Geography
West Fargo is located at (46.871749, -96.894966).
According to the U.S. Census Bureau, the city has a total area of , of which is land and is water.
Climate
This climatic region is typified by large seasonal temperature differences, with warm (and often humid) summers and cold (sometimes severely cold) winters. According to the Köppen Climate Classification system, West Fargo has a humid continental climate, abbreviated "Dfb" on climate maps.
Demographics
According to the 2008-2012 American Community Survey 5-Year Estimates, the ancestry is as follows:
German 46.2%
Norwegian 35.4%
Irish 7.6%
Swedish 6.2%
English 5.4%
French (except Basque) 3.8%
American 2.7%
Polish 2.6%
Russian 2.5%
Czech 2.4%
Subsaharan African 2.2%
Italian 1.7%
Scottish 1.3%
Danish 1.2%
2010 census
At the 2010 census, there were 25,830 people, 10,348 households and 6,823 families residing in the city. The population density was . There were 10,760 housing units at an average density of . The racial makeup was 93.5% White, 2.0% African American, 1.0% Native American, 1.4% Asian, 0.4% from other races, and 1.8% from two or more races. Hispanic or Latino of any race were 1.8% of the population.
There were 10,348 households, of which 36.2% had children under the age of 18 living with them, 52.8% were married couples living together, 9.1% had a female householder with no husband present, 4.0% had a male householder with no wife present, and 34.1% were non-families. 26.4% of all households were made up of individuals, and 6.6% had someone living alone who was 65 years of age or older. The average household size was 2.49 and the average family size was 3.04.
The median age was 32.6 years. 26.9% of residents were under the age of 18; 9.3% were between the ages of 18 and 24; 32.9% were from 25 to 44; 23.2% were from 45 to 64; and 7.8% were 65 years of age or older. The gender makeup of the city was 49.6% male and 50.4% female.
2000 census
At the 2000 census, there were 14,940 people, 5,771 households and 4,091 families residing in the city. The population density was 2,049.2 per square mile (791.3/km²). There were 5,968 housing units at an average density of 818.6 per square mile (316.1/km²). The racial makeup was 96.40% White, 0.42% African American, 1.04% Native American, 0.28% Asian, 0.02% Pacific Islander, 0.67% from other races, and 1.16% from two or more races. Hispanic or Latino of any race were 1.41% of the population.
The top six ancestry groups in the city are German (47.9%), Norwegian (39.7%), Irish (8.3%), Swedish (7.2%), French (5.2%), English (4.8%).
There were 5,771 households, of which 40.1% had children under the age of 18 living with them, 57.3% were married couples living together, 9.9% had a female householder with no husband present, and 29.1% were non-families. 23.7% of all households were made up of individuals, and 5.9% had someone living alone who was 65 years of age or older. The average household size was 2.59 and the average family size was 3.09.
29.2% of the population were under the age of 18, 8.9% from 18 to 24, 34.0% from 25 to 44, 21.2% from 45 to 64, and 6.7% who were 65 years of age or older. The median age was 32 years. For every 100 females, there were 97.4 males. For every 100 females age 18 and over, there were 94.4 males.
The median household income was $44,542 and the median family income was $51,765. Males had a median income of $32,105 and females $22,148. The per capita income was $19,368. About 4.7% of families and 6.3% of the population were below the poverty line, including 7.8% of those under age 18 and 14.8% of those age 65 or over.
Law and government
The City of West Fargo is governed by a Board of City Commissioners, which consists of the President of the Board (Mayor) and four City Commissioners. The current mayor of West Fargo as of 2018 is Bernie Dardis.
City Hall
Staff
Sharon Schacher retired in 2011 after 35 years as the City of West Fargo's finance director. Tina Fisk replaced Schacher in that year. Two new director positions were created: human resources, which was filled by Carmen Schroeder, and information technology, which was filled by James Anderson, both in 2011. In 2015, Tina Fisk was named city administrator, replacing Jim Brownlee.
Building
City Hall's official groundbreaking was held on May 9, 1975. In 2005, City Hall was renovated when the library moved to its new facility. City Hall's most recent renovation, concluded in 2016, brought building inspections and information technology under the same roof and included secure underground police parking. The $19 million renovation added 34,000 square feet to City Hall.
Police Department
The Police Department has grown from three officers in 1968 to 61 sworn officers as of 2018. "The West Fargo Police Department’s Mission is to provide quality service to residents and guests of West Fargo, ensuring a safe community by protecting their constitutional rights in the most professional manner possible." Police officers and other city employees enforce West Fargo city ordinances. As of 2017, Heith Janke is the Chief of Police; the previous chief was ousted by the city for inappropriate contact with city companies. The police department's community programs include Citizen Police Academy, Police Explorers Post 281, Night to Unite, Neighborhood Watch Program, TRIAD and Crime Free Multi-Housing.
Economic Development and Community Services
Business
"The Business Development Department connects new and existing business owners and operators with city officials, helping to pave the way for the growth and expansion that is making West Fargo part of North Dakota’s new economic frontier." Economic Development Director Matthew Marshall "says some of West Fargo’s growth is a reflection of the recent trendiness of the greater Fargo area, which has attracted young workers and families and translated into a low median age that businesses desire." Incentives for businesses include loans (PACE Loan and Flex PACE Loan) as well as "tax incentives for purchasing, leasing, or making improvements to real property located in a North Dakota renaissance zone."
Community
In 2015, West Fargo became a "North Dakota Cares" community. North Dakota Governor Jack Dalrymple has "pledged $500,000 from his executive budget" to support "service members, veterans, families and survivors."
Public Works
The Public Works Department oversees streets, sewer and water, sanitation and forestry for the city. "There are eight (8) existing wells within the City. The total pumping capacity of all wells together is 3,500 gallons per minute (5 million gallons per day)." In 2014, Chris Brungardt, the former assistant director, was appointed public works director, and the West Fargo City Commission unanimously approved a "contract with Twin Cities based Waste Management to start a no-sort recycling program in the city in April."
Fire Department
West Fargo Fire & Rescue is a combination department led by Chief Dan Fuller, of Danvers, MA. The department has 23 career and 45 part-time positions. The department has two stations and provides "all hazard" services including fire suppression, community risk reduction, basic life support EMS, hazmat, and technical rescue specialties such as water/ice rescue, high angle rope rescue and tactical EMS. Two ladders, three engines, two EMS rescues, a heavy rescue, a Battalion Chief truck, a hazmat trailer, two boats, a grass truck, a K9 truck, and seven administrative cars make up the fleet. The command structure includes a career Fire Chief, an Office Coordinator, three Deputy Chiefs (Risk Reduction, Operations, Professional Standards), an Emergency Preparedness Coordinator, one training chief, two Battalion Chiefs (one career, one part-time), one equipment services tech, seven Captains (three career, four part-time), as well as a Fire Department Chaplain. The department holds an ISO Class 3 rating.
Parks
The West Fargo Park District maintains 30 parks, bike paths, and facilities that include Scheels Soccer Complex, Veterans Memorial Arena, Rustad Recreation Center and Veterans Memorial Pool. A five-member park board oversees the Park District; Barb Erbstoesser is the executive director. West Fargo Winter Days, an annual event, includes a Silver Snowflake Search, sleigh rides and a chili cookoff.
Public Library
The West Fargo Public Library is located in the Clayton A. Lodoen Center at 215 3rd Street East in West Fargo. The library moved into this facility in 2005. Freda Hatten, the first Librarian of the West Fargo Public Library, retired in 1976. Carissa Hansen has served as Library Director since December 2019, following the retirement of Sandra Hannahs who served from 2007 to 2019. Before that, Miriam Arves had been the library director for 31 years. Beyond the circulation of physical items like books, the West Fargo Public Library offers a wide range of in-person and online services to patrons. In 2020, amid calls for worldwide social distancing due to the COVID-19 outbreak, the West Fargo Public Library translated their popular in-person programs into virtual programs calling this new collection of services "West Fargo Public Library at Home!"
Awards
In 2013, West Fargo was named City of the Year by the North Dakota League of Cities.
In 2013, The West Fargo City Commission received the American Public Works Association North Dakota Chapter Project of the Year Award for the city's "Storm Sewer Improvement District numbers 4044, 4046, and 4047".
In 2014, City Administrator Jim Brownlee was "named North Dakota League of Cities Outstanding City Employee of the Year".
In 2014, Library Director Sandra Hannahs was named North Dakota Librarian of the Year.
National recognition
West Fargo has been a Tree City for over 30 years. Tree City USA requirements include "maintaining a tree board or department, having a community tree ordinance, spending at least $2 per capita on urban forestry and celebrating Arbor Day".
In 2011, West Fargo was recognized as a Playful City USA by KaBOOM!. "Playful City USA is a national recognition program sponsored by the Humana Foundation, honoring cities and towns that champion efforts to make play a priority through establishing policy initiatives, infrastructure investments and innovative programming".
In 2015, West Fargo was named one of North Dakota's five safest cities according to The Safewise Report, which uses FBI Crime Report data to rank the safest cities.
In 2016, the West Fargo Public Library was named North Dakota's Amazing Library by MSN.com.
Transportation
West Fargo works with the North Dakota Department of Transportation, the Fargo-Moorhead Metropolitan Council of Governments (Metro COG), and Fargo Moorhead Metro Area Transit to meet the transportation needs of West Fargo citizens. In addition, the West Fargo Municipal Airport, 6 miles northwest of Fargo, is operated by the West Fargo Airport Authority and has a paved and lighted 3,300 x 50-foot runway.
Education
West Fargo School District
West Fargo School District serves the city of West Fargo, much of southwestern Fargo, the suburb of Reile's Acres, and the communities of Horace and Harwood. Seven West Fargo residents are elected to serve on the school board; these residents govern the school district and serve four-year terms.
The board voted unanimously on Monday, March 26, to hire Beth Slette, a 25-year veteran of the district and then assistant superintendent of elementary education for West Fargo. Slette took over for Dr. David Flowers, who had served as superintendent since 2010. Flowers was named the North Dakota Superintendent of the Year by the North Dakota Association of School Administrators in 2012. Holly Ripley, an assistant principal at West Fargo High School, was named the 2016 National Assistant Principal of the Year.
History
The West Fargo School District (then referred to as "School District No. 6 in Cass County") was formed on 9 October 1876. In January 1887, Nina Hall was hired to teach for two months. She was paid $40. "This first school was large enough to handle the pupils of the district until 1910 when it became necessary to build the Fairview School in the western part of the district. The two schools continued to operate until 1923." In 1922, North School was built, which included two classrooms and a gymnasium. The following year, Jennie Worman Colby became the first principal. In 1939, a new school building was built for grades 7-12. Today the building, The Clayton A. Lodoen Community Center, houses the West Fargo Community High, Clayton A. Lodoen Kindergarten Center, and West Fargo Public Library.
Growth
The City of West Fargo's growth has required the building of new schools to meet the needs of its students. Aurora Elementary School (located in the Eagle Run development in southern West Fargo) opened for the 2007–2008 school year, and Sheyenne 9th Grade Center opened on August 27, 2007 in response to the district's growing enrollment and overcrowding at West Fargo High School (2007 was the first year that freshmen were educated outside the High School since 1993). In March 2009, it was decided that the public would vote on whether the Sheyenne 9th Grade Center should become a second middle school for West Fargo. In January 2015, Superintendent David Flowers presented a 10-year RSP Associates demographics study which "predicts the district will continue to add between 400 and more than 600 students each year".
Schools
The school district operates two early childhood schools (Clayton A. Lodoen Kindergarten Center and Osgood Kindergarten Center), ten elementary schools (Aurora Elementary, Eastwood Elementary, Freedom Elementary, Harwood Elementary, Horace Elementary, Independence Elementary, L.E. Berger Elementary, Liberty 5th Grade, South Elementary, and Westside Elementary), two middle schools (Cheney Middle and Liberty Middle) and three high schools (West Fargo High School, Sheyenne High, and Community High).
Hulbert Aquatic Facility
In 2016, the school district began construction of an $18.5 million competitive pool facility at the L.E. Berger Elementary School. The facility will include the pool used for the USA Swimming trials for the 2016 Summer Olympics at the CenturyLink Center Omaha in which Michael Phelps and Ryan Lochte competed. The Omaha pool which was built by Myrtha Pools was dismantled after the competition and moved to West Fargo. It is named for the Hulbert family which donated $1 million for the project.
Technology
Since it began in 2009, the West Fargo Schools' Science, Technology, Engineering and Math (STEM) program has taken top honors in several competitions, including the Technology Student Association State Competition in 2012 and the Bison BEST competition in 2009; students also won first place for Best Web Page Design at the 2009 Frontier Trails BEST Regional Robotics Competition.
In 2015, "an education partnership" was "launched to help high school students in West Fargo, Fargo and Northern Cass school districts prepare for college and 21st century technical careers." While a business partnership already exists between West Fargo High School and Microsoft, Cass County Career and Technical Education Consortium hopes to expand to industries to include "agricultural science, diesel technology, health science, aviation, information technology and engineering".
A group of Liberty Middle School students won ‘Best of State’ in the 2014–15, 2015–16 and 2016–17 Verizon Innovative App Challenge.
National awards
In 2016, "West Fargo High School teacher Michelle Strand earned the Presidential Award of Excellence in Mathematics and Science Teaching as named by President Barack Obama."
North Dakota State Teacher of the Year Awards
1996 - Marcia Kenyon, Eastwood Elementary School
1998 - Vickie Boutiette, District Reading
2008 - Verna Rasmussen, Westside Elementary School
2013 - Andrea Noonan, Cheney Middle School
2014 - Aaron Knodel, West Fargo High School
2017 - Nanci Dauwen, Sheyenne High School
Churches
Catholic
Blessed Sacrament Catholic Church
Holy Cross Catholic Church
Lutheran
Faith Lutheran
Lutheran Church of the Cross
St. Andrew Lutheran Church
Shepherd of the Valley Lutheran Church
Triumph Lutheran Brethren Church
Methodist
Flame of Faith United Methodist Church
Presbyterian
Community Presbyterian Church
Other denominations
Changing Lives Tabernacle
Meadow Ridge Bible Chapel
New Beginnings Assembly of God
Prairie Heights
Red River Church
Shiloh Evangelical Free Church
Businesses
The West Fargo community supports businesses through the city of West Fargo and The Fargo Moorhead West Fargo Chamber of Commerce. The city of West Fargo supports business owners through The West Fargo Economic Development Advisory Committee, West Fargo Economic Development Department and City Assessor's Office. The Fargo Moorhead West Fargo Chamber of Commerce is a bi-state organization representing over 2,000 firms and 94,000 people. The Chamber supports its members through "advocacy, education, and engagement".
Technology companies with West Fargo locations include:
Applied Industrial Technologies
BNG Technologies
Data Technologies Inc.
High Point Networks
Network Center Communications
Norse Technologies
Razor Tracking
Red Chair Solutions
TrueIT
Digital Famous Media
Newspapers and magazines
West Fargo news is covered in several newspapers and magazines including:
Area Woman Magazine
Fargo Forum
Fargo Monthly
Prairie Business
West Fargo Pioneer, mailed free to every West Fargo resident
Annual events
Annual West Fargo events include:
Big Iron, an annual event located at the West Fargo Fairgrounds, features farm equipment and over 900 exhibit booths. Over 87,000 attendees took part in the three-day Big Iron in 2013.
Bonanzaville Pioneer Days includes a parade, food, demonstrations and tours.
Hamfest, an annual event located at the West Fargo Fairgrounds, features presentations and equipment for sale.
Nite to Unite, hosted by the West Fargo Police Department, is an annual community summer event. Past activities have included Police, Fire, FM Ambulance and Military demonstrations, free food, face painting, mascots and community service display booths.
Red River Valley Fair includes entertainment, arts and crafts shows, livestock, fireworks and a petting zoo.
The West Fargo Public Library hosts its Summer Reading Program to encourage reading for children, teens, and adults.
The West Fargo Shakers holds an annual New Year's Eve Party on December 31, all proceeds benefit the Back Pack Program.
West Fest, held in September, is a community event for all ages, including a softball tournament, a pancake feed, a parade, and firefighter's ball.
Sites of interest
Big Iron Farm Show
Bonanzaville, USA
Notable people
Anthony W. England, NASA astronaut
Jan Maxwell, Broadway actress and five-time Tony Award nominee
Tyler Roehl, former running back with the Seattle Seahawks and Minnesota Vikings
Matt Strahm, relief pitcher for the San Diego Padres
Alon Wieland, businessman and North Dakota state legislator
References
Further reading
Bicentennial West Fargo-Riverside History Book Committee. (1977). Thru the years to '76. West Fargo, N.D.: J & M Printing.
Cushing, N. (2003). West Fargo : A work in progress. Moorhead, Minn.: Dept. of Mass Communications, Minnesota State University Moorhead.
Dodge, R. (2009). Prairie murders : The true story of three murders and the loss of innocence in a small North Dakota town (1st ed.). St. Cloud, Minn.: North Star Press of St. Cloud.
Forness, P. (1994). Seasons : Pleasant pastures on the All-Muddy River (1st. ed.). Fargo, N.D.: Prairie House.
Heritage Publications (Hendrum, Minn.). (2003). A Century of the Red River Valley Fair. Hendrum, MN: Heritage Publications.
Witham, D. (2003). Sharing a legacy : The life & times of Donovan C. Witham. West Fargo, ND?: S.n.
Witham, D. (2011). Always with'em : A life to remember, musings on publishing, politics, and life in a small town. West Fargo, N.D.: Donovan C. Witham.
External links
City of West Fargo official website
City of West Fargo - YouTube
West Fargo Public Schools official website
West Fargo Public Schools - YouTube
West Fargo Public Library
The Fargo Moorhead West Fargo Chamber of Commerce
West Fargo Economic Development website
Cities in North Dakota
Cities in Cass County, North Dakota
Fargo–Moorhead
Populated places established in 1926 |
37829366 | https://en.wikipedia.org/wiki/GendBuntu | GendBuntu | GendBuntu is a version of Ubuntu adapted for use by France's National Gendarmerie. The Gendarmerie have pioneered the use of open source software on servers and personal computers since 2005 when it adopted the OpenOffice.org office suite, making the OpenDocument .odf format its nationwide standard.
Project
The GendBuntu project arose from Microsoft's decision to end development of Windows XP, which would inevitably have been replaced with Windows Vista or a later edition of Windows on government computers. This meant that the Gendarmerie would have incurred large expenses for staff retraining even if it had continued to use proprietary software.
One of the main aims of the GendBuntu project was for the organisation to become independent of proprietary software vendors and publishers, and to achieve significant savings in software costs (estimated at around two million euros per year).
Around 90% of the 10,000 computers purchased by the Gendarmerie per year are bought without an operating system, and have GendBuntu installed by the Gendarmerie's technical department. This has become one of the major incentives of the scheme for staff; transferring to GendBuntu from a proprietary system means the staff member receives a new computer with a widescreen monitor.
The main goal is to migrate 80,000 computers by the end of 2014, a date which coincides with the end of support for Microsoft Windows XP. 35,000 GendBuntu desktops and laptops have been deployed as of November 2011.
A major technical problem encountered during the development of the project was keeping the existing computer system online while the update took place, not only in metropolitan France but also in overseas Departments and Regions. It was solved partly by redistributing dedicated servers or workstations on Local Area Networks (depending on the number of employees working on each LAN) and with the use of an ITIL-compliant qualifying process.
An extensive IT support team helped to implement the changes. This included the "core team" at Gendarmerie headquarters at Issy-les-Moulineaux, the "running team" of four located at the Gendarmerie data center at Rosny-sous-Bois, and about 1,200 local support staff.
Timeline
2004 - OpenOffice.org software replaces 20,000 copies of the Microsoft Office suite on Gendarmerie computers, with the transfer of all 90,000 office suites being completed in 2005.
2006 - Migration begins to the Mozilla Firefox web browser, on 70,000 workstations, and to the Mozilla Thunderbird email client. The Gendarmerie follows the example of the Ministry of Culture in this decision. Other software follows, such as GIMP.
2008 - The decision is made to migrate to Ubuntu on 90% of the Gendarmerie's computers by 2016. Ubuntu is installed on 5,000 workstations across the country (one on each police station's LAN), primarily for training purposes.
2009 - Monitoring with Nagios begins
2010 - 20,000 computers ordered without a pre-installed operating system
January 2011 - Beginning of the large scale phasing in of GendBuntu 10.04 LTS
December 2011 - 25,000 computers deployed with GendBuntu 10.04 LTS
February 2013 - Upgrade from GendBuntu 10.04 LTS to GendBuntu 12.04 LTS. The local management and IT support teams phase in the upgrade so as not to disrupt the running of the police stations.
May 2013 - Target for end of the migration to GendBuntu 12.04 LTS - 35,000 computers upgraded.
December 2013 - 43,000 computers deployed with GendBuntu 12.04 LTS. TCO lowered by 40%.
February 2014 - Beginning of final stage of the migration of existing Windows XP computers to GendBuntu 12.04 LTS
June 2014 - Migration completed. 65,000 computers deployed with GendBuntu 12.04 LTS (total number of computers: 77,000)
March 2017 - Migration completed. 70,000 computers deployed with GendBuntu 14.04 LTS (total number of computers: 82,000)
May 2017 - Introduction of GendBuntu 16.04 LTS
June 2018 - 82% of PC workstations running GendBuntu 16.04 LTS
Early June 2019 - 90% of workstations running GendBuntu (approx. 77,000)
Spring 2019 - Migration to GendBuntu 18.04
See also
Canaima (operating system)
Inspur
LiMux
Nova (operating system)
Ubuntu Kylin
VIT, C.A.
References
External links
Presentation of Major Stéphane Dumond, French Gendarmerie Nationale, published December 26th, 2014
French police: we saved millions of euros by adopting Ubuntu, Ars Technica, March 12, 2009
Linux software projects
State-sponsored Linux distributions
Ubuntu derivatives
Law enforcement in France
Linux distributions
6193733 | https://en.wikipedia.org/wiki/PTGui | PTGui | PTGui is a panorama photo stitching program for Windows and macOS developed by New House Internet Services BV. PTGui was created as a GUI frontend to Helmut Dersch's Panorama Tools. It features its own stitching and blending engine along with compatibility to Panorama Tools. PTGui supports telephoto, normal, wide angle and fisheye lenses to create partial cylindrical up to full spherical panoramas. PTGui can handle multiple rows of images.
Originally released for Windows, version 6.0.3 introduced support for Mac OS X.
The 'free trial version' of PTGui is fully functional but creates panoramas with embedded visible watermarks.
PTGui Pro also includes HDR and tone mapping support.
See also
Hugin, an open source alternative also based on Panorama Tools
Further reading
Jacobs, Corinna - Interactive Panoramas: Techniques for Digital Panoramic Photography
Andrews, Philip - 360 Degree Imaging: The Photographer's Panoramic Virtual Reality Manual
References
External links
C (programming language) software
C++ software
Panorama software
Windows graphics-related software
MacOS graphics software
Photo stitching software
Photo software
Software that uses wxWidgets
HDR tone mapping software
52619100 | https://en.wikipedia.org/wiki/Zhou%20Zhi-Hua | Zhou Zhi-Hua | Zhou Zhi-Hua (; born November 20, 1973) is a Professor of Computer Science at Nanjing University. He is the Standing Deputy Director of the National Key Laboratory for Novel Software Technology, and Founding Director of the LAMDA Group. His research interests include artificial intelligence, machine learning and data mining.
Biography
Zhou Zhi-Hua received his B.Sc., M.Sc. and Ph.D. degrees in computer science from Nanjing University in 1996, 1998 and 2000, respectively, all with the highest honor. He joined the Department of Computer Science & Technology of Nanjing University as an Assistant Professor in 2001, promoted to Associate Professor in 2002 and Full Professor in 2003. He was appointed as Cheung Kong Professor in 2006.
Research
Zhou is known for significant contributions to ensemble learning, multi-label learning, and learning with partial supervision (semi-supervised learning, multi-instance learning, etc.). He has authored two books and published more than 150 scientific articles in leading journals and conferences. According to Google Scholar, his h-index is 102. He also holds 18 patents.
Services
Zhou founded the ACML (Asian Conference on Machine Learning) and has served as an Advisory Committee member of IJCAI (2015–2016), General Co-Chair of ICDM 2016, and Program Committee Co-Chair of the IJCAI 2015 Machine Learning track, among other roles. He has served on the editorial boards of many journals, including as Executive Editor-in-Chief of Frontiers of Computer Science. He is or was Chair of CCF-AI (2012–), Chair of the IEEE CIS Data Mining Technical Committee (2015–2016), and Chair of the CAAI Machine Learning Technical Committee (2006–2015). He founded LAMDA, a well-known research group in machine learning and data mining in China.
Awards
Zhou has received various awards and honors, including the National Natural Science Award of China (a premier science award in China), the IEEE ICDM Outstanding Service Award, the PAKDD Distinguished Contribution Award, the IEEE CIS Outstanding Early Career Award, and the Microsoft Professorship Award.
He is a Fellow of the ACM, AAAS, AAAI, IEEE, IAPR, IET/IEE and CCF.
Books
Ensemble Methods: Foundations and Algorithms. 2012
Machine Learning. (in Chinese). 2016
References
External links
Zhi-Hua Zhou homepage
Artificial intelligence researchers
1973 births
Living people
Fellows of the Association for the Advancement of Artificial Intelligence
Fellows of the International Association for Pattern Recognition
Fellow Members of the IEEE
Fellows of the Association for Computing Machinery
1374165 | https://en.wikipedia.org/wiki/Hong%20Kong%20Certificate%20of%20Education%20Examination | Hong Kong Certificate of Education Examination | The Hong Kong Certificate of Education Examination (HKCEE, 香港中學會考) was a standardised examination conducted between 1974 and 2011 by the Hong Kong Examinations and Assessment Authority (HKEAA), taken by most local students at the end of their five-year secondary education and awarding the Hong Kong Certificate of Education secondary school leaving qualification. The examination was discontinued in 2012 and its role has been taken over by the Hong Kong Diploma of Secondary Education as part of educational reforms in Hong Kong. It was considered the equivalent of the GCSE in the United Kingdom.
Overview
Students usually took the HKCEE at the end of their five-year period of secondary school in Hong Kong; it was compulsory for students who wanted to pursue further education, but some students took individual examinations to increase their chances of continuing their studies or to fulfil certain requirements of tertiary education programs. The final year in which school candidates were accepted was 2010. A total of 127,162 candidates entered the examination, 90,063 of them school candidates and 37,099 private candidates.
The HKCEE was conducted by the Hong Kong Examinations and Assessment Authority (HKEAA) from late February to June, with major subjects taken between mid-April and May, after the major-subject examinations of the Hong Kong Advanced Level Examination were completed. Oral examinations were conducted from late May to early July.
Examination results were released in early August, traditionally on the Wednesday after the first-round admissions of the Joint University Programmes Admissions System had been released. There were 39 subjects available in the HKCEE. Most day-school candidates took 6 to 8 subjects, with 10 being the upper limit. Apart from Chinese and English, which were taken by almost every school candidate, and language-specific subjects (French; Chinese History, Buddhist Studies and Putonghua, Chinese only; Literature in English and Word Processing and Business Communication, English only), all subjects could be taken in either Chinese or English. The same standards were applied in marking and grading regardless of the choice of language, and the language medium was not recorded on the results notices or certificates. It was, however, recorded on admission forms.
Purpose
After sitting the HKCEE and receiving their examination results, candidates could apply for a place in sixth form at local schools in Hong Kong. Moreover, to qualify for the Hong Kong Advanced Level Examination (HKALE), students had to meet certain HKCEE requirements as a prerequisite. The Joint University Programmes Admissions System (JUPAS), including the EAS subsystem under JUPAS, also considered students' HKCEE results as a requirement and as a decisive factor in admission. Thus, students' HKCEE results not only affected their applications to sixth form but also directly affected their chances of entering university after seventh form, something commonly overlooked by students until they entered sixth form. In JUPAS, most admissions programs gave HKCEE results about a 10–30% weight, and some as much as 50%, with the HKALE results providing the remainder. The HKCEE was thus the initial stage of the university entrance examination process.
The HKCEE was fully recognised internationally, including in New Zealand, Australia, the UK, the US and many other major countries. It was equivalent to Year 11 in Australia and in the UK.
For comparison, the Mathematics syllabus of HKCEE was equivalent to New Zealand's National Certificate of Educational Achievement Level 2 Mathematics at Form 6 (Year 12) level (excluding Calculus) rather than NCEA Level 1 or its predecessor the School Certificate examination, sat by the country's Form 5 (Year 11) students.
Additional Mathematics in the HKCEE was more advanced than NCEA Level 3 Mathematics with Calculus, sat by Form 7 (Year 13) students in New Zealand to gain university entrance in science and engineering. HKCEE's Additional Mathematics was also recognised by most of the programs in Hong Kong's universities as equivalent to HKALE Pure Mathematics.
For examination questions on the same topics, those in the HKCEE tended to use less clear wording and require more difficult manipulations than their NCEA counterparts.
In other subjects, such as the sciences like Chemistry and Physics, the syllabus covered in the HKCEE was similar to that of the SAT Subject Tests sat in Grade 12, but it was arguably easier to obtain a score of 760 on the SAT Subject Tests than a grade of A in the HKCEE, although Grade 12 was theoretically equivalent to Form 6 under the Hong Kong school system.
Grading and UK equivalence
The results of the HKCEE were expressed in terms of seven grades, A–U (or 5*–1 and U for Chinese and English), in all subjects other than French.
In the past, there were two other grades below UNCL: G and H. They were called "Grenade" and "Ladder".
Results below grade 'F' are designated as unclassified ("UNCL"), assigned either when candidates hand in unanswered or unintelligible paper(s), or when candidates are assumed to have cheated. Candidates not taking the exam are designated as Absent ('ABS') for that subject.
Before 2002 grades A – F were each divided into two "fine grades", making the original number of grades available twelve, from A(01) to F(12). The fine grades in both HKCEE and HKALE were lifted in 2002, as they were accused of being discriminatory to students.
Most of the results are graded "on the curve" but at the same time a cutoff score for each grade is also used. Obtaining an A is very difficult, especially for languages in the past system, where only about 1.5–3% of students received A's. On average, only the top 3–4% in each subject can get an A. The cutoff scores vary greatly from subject to subject and from year to year. To give a clearer picture, for Chinese, A-grades are sometimes given for candidates having scored 70 or above, while for Mathematics, an A invariably translates to a score in excess of 90. The cutoff scores are not released by the HKEAA publicly; the information is only available to teachers.
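The interaction between curve-based grading and cutoff scores described above can be illustrated with a short sketch. The percentage and cutoff below are invented for illustration, and grade_a_candidates is a hypothetical helper, not an HKEAA procedure (the actual cutoffs were never released publicly):

```python
# Hypothetical sketch of norm-referenced ("on the curve") grading:
# roughly the top 3-4% of candidates in a subject receive an A,
# subject also to a minimum cutoff score.

def grade_a_candidates(scores, top_fraction=0.035, cutoff=70):
    """Return the indices of candidates awarded grade A."""
    n_top = max(1, int(len(scores) * top_fraction))
    # Rank candidate indices by score, highest first ("on the curve")
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    # Keep only those in the top fraction who also clear the cutoff
    return {i for i in ranked[:n_top] if scores[i] >= cutoff}
```

Under this sketch, a cohort where nobody clears the cutoff yields no A grades at all, even though a top fraction always exists, which mirrors the combination of relative and absolute standards described above.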
Official statistics can be found on the HKEAA website: https://web.archive.org/web/20051124073914/http://www.hkeaa.edu.hk/doc/fd/2004cee/39-60.pdf
New grading system
Since 2007, as a stepping stone towards the grading system of the HKDSE (first held in 2012), a new grading system was introduced for the Chinese and English subjects. Under the new system there are seven grades: five numerical levels from 1 to 5, where 1 is the lowest and 5 the highest, plus two other grades, "5*" and "UNCL", for candidates with particularly outstanding and particularly poor performance respectively. The traditional "on-the-curve" system was not used, other than to distinguish the 5*'s from the 5's.
The point system caused confusion in its first year of implementation, since level 2, the passing level, counted as two points at some schools and at the HKEAA but as one point at the EMB and most schools. This not only confused Form 5 students but also some Form 6 students who repeated the exam to obtain better language results for JUPAS admissions. The problems with the points system led to changes in how the HKEAA calculated points in the following years.
Since the two syllabi in English were merged into one along with the new system, some schools worried that the level of English would be insufficient for the HKALE, expecting the HKEAA to decrease the difficulty so that students who had previously studied Syllabus A (mainly from non-EMI schools) could pass more easily. As a result, some schools rejected students with a level 2 in English.
The system was later used in all major subjects of the HKDSE.
Further studies
Students' results in the HKCEE and their conduct at school (behaviour as usually shown in the school's internal report) were the main admission factors for Secondary 6, the main route to university admission. In the EMB's official admission process, the minimum requirement was four passed subjects, or three passes with 5 marks (both excluding language subjects). Students with 14 marks or above, including passes in English and one other language subject, had an advantage, as they could be admitted in the first stage of Form 6 admissions. Students scoring 30 marks (the maximum) with L4/C in two designated language subjects (one of which had to be English) in their first attempt were permitted to apply for Direct Entry to the three major universities in Hong Kong (see below).
Requirements for sitting the HKALE were independent of Form 6 admissions, since they were managed by separate organisations. A student who met all the minimum requirements for sitting the HKALE also met all the requirements for applying for Form 6, but not vice versa. Schools could admit a student who had failed in language subjects, provided that the school bore the risk that the student might fail again in the following year and so be unable to sit the HKALE.
For admission to the four-year Higher Diploma programs at HKIVE and to degrees at the University of Hong Kong, the Chinese University of Hong Kong, and the Hong Kong University of Science and Technology, the best 7 subjects, instead of 6, were counted. In other cases, the best 5 or 6 subjects were counted. The Chinese University of Hong Kong, in addition, did not accept "combined certificates" (results obtained across more than one examination): its requirements had to be fulfilled in a single attempt (usually the first).
International recognition
HKEAA has been working closely with international agencies, overseas universities and colleges to promote recognition of HKEAA examinations. Standards of performance in the HKCEE and HKALE have for many years been benchmarked against standards in comparable subjects at British GCE O-Level and A/AS-Level. In the case of performance in the English Language, studies have been conducted to link standards of performance in HKCEE English Language (Syllabus B) and HKALE Use of English to standards in IELTS and TOEFL.
Starting from 2007, HKCEE standards-referenced reporting was adopted in Chinese language and English language subjects. The results in the two subjects have also been benchmarked against International General Certificate of Secondary Education (IGCSE) results.
The Hong Kong Diploma for Secondary Education Examination (HKDSE) will be conducted for the first time in 2012. To secure appropriate recognition of HKDSE qualifications, the HKEAA has been holding discussions with international agencies including University of Cambridge International Examinations (CIE), National Academic Recognition Information Centre (NARIC), Universities and Colleges Admissions Service (UCAS) in the UK and the Australian Education International (AEI) in Australia to conduct benchmarking and comparability research on HKDSE.
Although HKEAA examinations have been widely accepted, some universities have set particular criteria for admission of overseas students. For instance, the University of Cambridge in the UK has set out admission requirements for under-age or minor candidates concerning guardianship arrangements.
In cases where candidates wish to further their studies abroad, they may be required to take as one of the basic requirements certain unified examinations conducted by the examination authorities of that particular country. These include, for example, the National Higher Education Entrance Examination for Universities in Mainland China and the SAT for the United States.
UK NARIC is the UK's national agency for the UK Government and the official provider of information on a wide range of international qualifications and skills attained outside the UK.
Although NARIC is a national agency of the UK Government, institutions of higher education may make their own decisions on what foreign qualifications or study they will accept; UK NARIC has only an advisory role.
The two new HKCEE language subjects have been benchmarked against the International General Certificate of Secondary Education (IGCSE) by the Cambridge Assessment. The HKCEE results in Chinese Language and English Language are recognised as equivalent to the IGCSE results as follows:
Marking schemes
While the HKEAA published booklets of past examination papers for each subject at an affordable price, the marking schemes (i.e. official detailed solutions) of past examinations were never readily available to the public. The official argument from the HKEAA for not publishing the marking schemes was that doing so might be "pedagogically unsound" and would encourage "rote memorisation" by students. Nevertheless, students were often able to obtain these "restricted documents" by taking classes at cram schools. Hence, the HKEAA's policy indirectly denied less privileged students access to information about how examination papers were scored, which may have created a socio-economic bias in students' ability to obtain good results in the HKCEE. Moreover, according to media reports, some so-called "star tutors" managed to earn more than HK$7 million per annum (~US$900,000), leading some to speculate that the HKEAA's policy had indirectly transferred large amounts of wealth to these cram schools.
To deal with this problem, the HKEAA started to release the marking schemes together with the year's examination papers in 2003.
HKSAR Government Scholarship
The HKSAR Government Scholarship was a prestigious award associated with the HKCEE. The top 30–40 candidates in the examination received this scholarship each year. Counterintuitively, many 10A students failed to receive this award. This apparent paradox is mainly due to the methodology by which the scholarship was awarded: rather than counting the number of A's each student had, it was awarded on the basis of the highest total raw score attained in each student's seven best subjects. Consequently, many 9A, 8A, and even 7A students went on to win the award while 10A students did not. This was a major source of complaint from parents of 10A students, who felt cheated out of this coveted prize.
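The ranking methodology described above can be sketched in a few lines. All marks below are invented for illustration, and scholarship_score is a hypothetical helper; the actual HKEAA calculation details were not published:

```python
# Illustrative only: rank candidates by the sum of raw scores in their
# seven best subjects, rather than by the number of A grades.

def scholarship_score(raw_scores):
    """Sum of a candidate's seven highest raw subject marks."""
    return sum(sorted(raw_scores, reverse=True)[:7])

# A 10A candidate with uniformly modest raw marks can rank below a
# candidate with fewer A grades but very high marks in the best seven.
ten_a_candidate = [85] * 10        # ten A grades, modest raw marks
fewer_a_candidate = [98] * 7 + [60]  # seven near-perfect subjects, one weak
assert scholarship_score(fewer_a_candidate) > scholarship_score(ten_a_candidate)
```

This shows how a candidate with more A grades overall can still lose out under a top-seven raw-score total.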
Early Admissions Scheme
The Early Admissions Scheme (or simply "EAS"), a subsystem of the Joint University Programmes Admissions System (JUPAS) since 2003, allowed school candidates with 6 or more "A"s (distinctions) on their first attempt at the HKCEE, with level 4 or above in English Language, and level 4 or above in Chinese Language or "C" or above in French or Putonghua, to apply to the University of Hong Kong, the Chinese University of Hong Kong, or the Hong Kong University of Science and Technology after their Secondary 6 study, without the need to sit the HKALE.
Each year about 400–600 students entered these universities via this subsystem of JUPAS. A selection procedure existed in the scheme, but, unlike in the mainstream scheme of JUPAS, students were guaranteed a firm offer in the EAS regardless of the number of applicants. Students who met the EAS requirements did not need to apply through the scheme to enter the aforementioned universities (though most did), but if they did not, they had to take the HKALE and participate in the mainstream scheme in order to pursue their studies in Hong Kong.
There were also rare cases where students who were eligible for the EAS were given independent offers by universities outside the EAS.
List of subjects
Subjects in bold are the major examination subjects (which over 20% of students sit); most secondary schools will provide these curricula for students.
Accommodation and Catering Services 2
Additional Mathematics
Biology
Buddhist Studies 1
Chemistry
Chinese History
Chinese Language
Chinese Literature
Commerce
Computer and Information Technology
Design and Technology 5
Design and Technology (Alternative Syllabus) 5
Economic and Public Affairs 3
Economics 3
Electronics and Electricity
English Language
Fashion and Clothing 4
French
Geography
Government and Public Affairs
Graphical Communication
History
Home Economics (Dress and Design) 4
Home Economics (Food, Home and Family) 2
Integrated Humanities
Literature in English
Mathematics
Music
Physical Education
Physics
Principles of Accounts
Putonghua
Religious Studies 1
Science and Technology
Social Studies
Technological Studies 5
Travel and Tourism
Visual Art
Word Processing and Business Communication (English)
1. Buddhist Studies may not be taken with Religious Studies. Religious Studies was available in Protestant and Catholic versions on the same paper, which varied in the citations of the Bible; chapters that appear only in the Catholic version were not in the syllabus. Both subjects were open-book.
2. Accommodation and Catering Services may not be taken with Home Economics (Food, Home and Family).
3. Economics may not be taken with Economic and Public Affairs.
4. Home Economics (Dress and Design) may not be taken with Fashion and Clothing.
5. Design and Technology may not be taken with Design and Technology (Alternative Syllabus) or Technological Studies.
Cancelled/Renamed subjects
The year in parenthesis is the last year of examination.
Woodwork (1992)
German (2001)
Typewriting (2002)
Metalwork (2004)
Art (2005) (restructured as Visual Arts)
Computer Studies (2005), Information Technology (2005) (merged into Computer and Information Technology)
Textiles (2007)
Technical Drawing (2007)
Ceramics (2007)
Human Biology (2007) (Was an alternate syllabus for Biology)
Engineering Science (2007) (Was an alternate syllabus for Physics)
Planned developments
The Authority was gradually implementing school-based assessment in all subjects, to reduce the stress on students from studying for exams. Starting from 2006, school-based assessment was implemented in two subjects, Chinese History and History, replacing the previous multiple-choice paper in the public examinations.
In 2007, the curricula for Chinese and English were revised. The two subjects were no longer graded along the normal distribution curve but rather by criteria referencing (with the exception of the highest grade, the 5*). Numerical levels were used instead of the traditional letter grades.
The proposed revisions specific to Chinese included:
The removal of the 26 selected essays, excerpts from the classics, poems and ancient lyrics in the original curriculum, replacing them with a selection of reading materials by the teachers.
Independent reading comprehension and writing papers.
The addition of a listening comprehension examination.
The addition of a speaking (oral) examination.
The addition of a paper testing integrated skills.
The addition of a school-based assessment (SBA) scheme that accounts for 20% of the exam mark.
The proposed revisions specific to English included:
The abolition of the two separate syllabi. Before 2007, two syllabi coexisted: Syllabus B was an O-level course, while Syllabus A was easier and considered inferior. In the HKEAA/EMB's view, grades attained on Syllabus A were inferior to grades attained on Syllabus B (e.g. a C on Syllabus A was equivalent to an E on Syllabus B), except for Form 6 admission and HKALE requirements, in which the two syllabi were treated the same.
However, many universities and secondary schools argued that the gap should have been larger. According to an HKEAA report comparing students' Use of English results in the HKALE with their previous results in HKCEE English, the passing rate in Use of English among candidates who had received a C on Syllabus A was far lower than among those who had received an E on Syllabus B.
The abolition of the testing of grammar and language usage, which was once part of the old reading comprehension and usage paper. (Grammar would instead be tested alongside comprehension in the same section.)
A refined writing paper, now requiring two separate pieces of writing, one guided, one independent and more open-ended.
A refined reading comprehension paper, with questions requiring written answers instead of the old format in which every question was a multiple choice question.
A refined speaking skills paper, requiring more independent thinking than the previous routinised paper.
The addition of a school-based assessment scheme that accounts for 15% of the exam mark.
The HKEAA also announced that candidates who had sat the 2006 exam and wished to retake Chinese or English were to take the new syllabi. Concerns were raised about whether those candidates, who were used to the old syllabi, could adapt to the structure of the new syllabi in barely half a year.
Discontinuation
Owing to the transition from the seven-year curriculum (five years of secondary and two years of sixth form / matriculation) to a six-year curriculum of secondary education, the HKCEE and the HKALE were discontinued from 2012 and 2014 respectively and replaced with a new examination, the Hong Kong Diploma of Secondary Education Examination (HKDSE).
Controversies
1997 Theft of examination papers and marking schemes by HKEA senior officer
Several examiners reported to the HKEA that a candidate had scored almost full marks in almost every paper he sat, and that his answers were identical to those in the marking schemes, even including the typing mistakes. The HKEA suspected that someone had improperly obtained the question papers and marking schemes in advance of the examinations, and the case was referred to the ICAC for investigation. The ICAC found that the candidate had been an under-achiever at school and was not expected to get high grades. Further investigation revealed that he was the son of Mak Cheung Wah, then a senior subject officer of the HKEA. Mak had stolen his colleagues' keys and reproduced them, enabling him to open the safe deposits and photocopy the question papers and marking schemes in every subject for his son to read before the examinations. Mak's son memorised the answers and wrote them on the answer scripts to score high marks. The ICAC solved the case and arrested the pair only moments before the release of the results. Mak pleaded guilty to allowing unauthorised persons to have access to confidential examination documents. He was ordered by the Eastern Magistrates' Court to perform 220 hours of community service. He was also dismissed by the HKEA and lost his pension entitlement of HK$720,000. His son was disqualified by the HKEA and his grades were annulled. Since then, the HKEA and the ICAC have reviewed and revised the security arrangements for examination documents.
2005 English Language (Syllabus B) grading error
In 2005, the oral component of the Syllabus B English Language examination was incorrectly added to the total score because of a recent upgrade to the HKEAA computer system, and the supervisor in charge failed to double-check the results. Consequently, many candidates received an incorrect total score, which resulted in an incorrect final grade for the subject. The problem was so severe that some students wrongfully received an F grade (the second-lowest grade) in the oral section when they were supposed to receive an A (the highest grade). Since the final English mark was calculated by averaging the marks in the oral, reading comprehension, listening, and writing sections, an F in the oral would have seriously affected the final English mark even if the candidate did well in the other components.
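The arithmetic behind this cascading effect can be sketched as follows. The component marks below are invented for illustration, and equal weighting of the four components is an assumption:

```python
# Illustrative sketch: the subject mark is the average of four component
# marks (equal weights assumed), so a single wrongly recorded oral mark
# drags down the whole subject mark.

def english_subject_mark(oral, reading, listening, writing):
    """Average of the four component marks."""
    return (oral + reading + listening + writing) / 4

correct = english_subject_mark(oral=90, reading=85, listening=88, writing=86)
erroneous = english_subject_mark(oral=30, reading=85, listening=88, writing=86)
# The single mis-added component lowers the overall mark by 15 points.
assert correct - erroneous == 15.0
```

A 60-point error in one component translates to a 15-point drop in the averaged subject mark, easily enough to shift the candidate across several grade boundaries.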
In an attempt to mitigate the situation, the HKEAA publicly apologised and offered free rechecks of the oral component of the English Language subject for all candidates. Candidates who would have had a higher score if the error had not occurred received an upgrade. In total, the error affected 670 candidates: 422 had their oral component mark upgraded and 248 had their overall English subject grade upgraded. The knock-on effects reached 233 candidates who were eligible for sixth form.
However, the mistake was discovered far too late, when the admissions process was almost over. Since some candidates were unable to find a school for their matriculation because they had received an incorrect grade, the Education and Manpower Bureau was forced to increase the quotas at some schools to accommodate the affected students. HKEAA chairman Irving Koo assured the students that their education would not be affected by the error.
2005 English Language (Syllabus B) incident
A proofreading exercise in Paper 2 of the HKCEE English Language (Syllabus B) quoted a message adapted from an online forum. Some students posted angry messages on that forum after the exam, which led the HKEAA to hold several internal meetings and ban the use of messages from the online forum.
2006 English Language (Syllabus B) paper 2 incident
The HKCEE English Language (Syllabus B) 2006, the last examination for the syllabus, was administered on 4 May 2006. After Paper 2, Reading Comprehension and Usage, some candidates complained that the HKEAA, by sourcing all of the study materials from the Internet, had created the potential for candidates to look up the texts with electronic devices such as personal digital assistants and cellphones while they were in the toilet. The rumour was first spread on local forums and Young-M. The incident generated widespread public furore.
Numerous discussions were initiated on local forums. Candidates sitting for the paper demanded a retake of the paper and an apology from the HKEAA. Some candidates collaborated with political parties to hold protests against the HKEAA's decision not to readminister the paper. A protest was proposed for 31 May or 1 July.
On a local forum, a candidate threatened to sue the HKEAA by saying that justice needed to be defended. More than 100 complaints were received by the HKEAA regarding the incident.
The actual articles used in the exam:
, an article on the Para Para scene in Hong Kong, published by USA Today on 31 August 2001.
, a website about cyberbullying, had numerous grammatical and structural mistakes.
2007 Chinese Language paper 2 incident
In the 2007 HKCEE Chinese Language Paper 2 (Writing), Question 2, 'Lemon Tea', was suspected to have been leaked beforehand, since a tutor, Siu Yuen of King's Glory Education Centre, had given his students a sample article with a similar title, 'Iced Lemon Tea', well before the exam. This led to the assumption that the tutor had prior knowledge of the question in the actual exam.
Two students lodged a complaint to the. A spokesperson of the HKEAA stated that copying by candidates would result in no marks being given for the plagiarised parts.
2007 English Language questions leakage incident
Before the examination date of the English Language papers in 2007, "Mr. Ken", one of the well-known tutors at the Modern Education Centre in Hong Kong, called his students back to the tutorial centre a few days before the test to review some material on fashion. It was later found that the actual test covered some of the same topics and even contained some identical questions, fuelling worries of a possible leakage.
2008 Chinese Language incident
Controversies were in two papers: paper 1, comprehension, and paper 5, integrated skills.
In paper 1, the format in previous years was that two passages were supplied, one in Vernacular Chinese and one in Classical Chinese, and candidates were required to answer questions mainly in words. In 2008, however, the second passage was a Chinese poem instead. Also, half of the total score came from multiple choice questions, and in some questions the choices were said to be difficult to distinguish. For example, in question 5, the four choices were "believable" (可以相信), "affirmative" (肯定的是), "proved by facts" (事實證明) and "undeniable" (無可否認). Some teachers said that they could not make the relevant decisions in a short period of time or said that the paper required deduction skills and common sense rather than knowledge of Chinese, and some candidates said the paper was more like gambling than an examination. Some also commented that the passages were too difficult for CE level. Even a university professor admitted that he could not finish the exam in a reasonable amount of time. The first, Vernacular Chinese, passage was said to be of the difficulty of the Form 7 Advanced Level Examination Chinese Language and Culture, and the second, the Tang Dynasty poem Moonlight of Spring River (春江花月夜), was said to be of the difficulty of university Chinese. The HKEAA replied that papers can be set in any format and style.
In paper 5, candidates were required to listen to a recording and do various tasks, but both the recording and the data file were criticised. A Halloween advertisement was included in the recording, and some candidates afterward said that they felt uneasy or thought the radio channel was switched. In the data file, Chinese slang terms were discussed in an extract from a newspaper article and in a poster promoting reading. Some teachers and students criticised the materials for promoting slang terms, some students said that they had not heard of such terms and others said that the HKEAA had misconceptions about the use of the slang terms. The Authority responded that, as is the norm, some of the materials provided were deliberately false or worthless to promote critical thinking and choice-taking by students.
On YouTube and Internet forums, ringtones of the recording appeared after the paper, including the imitation of Cantopop duo Twins singing their song, "Lian Ai Da Guo Tian" (戀愛大過天, Love Is More Important Than Anything), and the recording of the Halloween advertisement.
Some candidates also suggested actions to protest the difficulty of the comprehension paper by wearing black clothes and staying on seats after the end of examinations on 2 May and 3 May 2008, when the English Language examinations were held.
Publishing
As a regular practice, the HKEAA published past papers, marking schemes and examination reports every year. In previous years, only past papers were available; most subjects put past papers of the previous 5 years in a joint edition (except English and Putonghua, which had a tape/CD). Marking schemes were to be given only to markers.
From 2003, the authority issued the examination report and the question papers in a combined annual volume, which included the question paper, suggested answers (changed to marking schemes from 2005), and candidates' performance and examination statistics; the price ranged from HK$20–45.
See also
Education in Hong Kong
Hong Kong Advanced Level Examination
References
External links
The Hong Kong Examinations and Assessment Authority
Institute of International Education – Information on HKCEE
School qualifications
Standardized tests
School examinations in Hong Kong |
62840918 | https://en.wikipedia.org/wiki/Testimony%20%28Veep%29 | Testimony (Veep) | "Testimony" is the ninth episode of the fourth season of Veep and the 37th episode overall. The episode was written by Sean Gray and Will Smith, and directed by Armando Iannucci. It first aired on June 7, 2015. The plot of this bottle episode follows President Meyer's staff undergoing hearings administered by the House Judiciary Committee regarding her campaign's federal data breach (from the third episode of the season, "Data"). They also must testify about allegations that Selina lobbied to kill her own bill, Families First (from the previous episode "B/ill"). She and her staffers scapegoat campaign consultant Bill Ericsson as the mastermind behind the data breach.
Iannucci received a nomination for Outstanding Directing for a Comedy Series at the Primetime Emmy Awards for the episode.
Synopsis
The episode opens on President Selina Meyer (Julia Louis-Dreyfus) giving a press conference in which she denies that she lobbied against her Families First bill.
Several of Selina's associates are shown being sworn in at House Judiciary Committee hearings: ex-staffers, Amy (Anna Chlumsky) and Dan (Reid Scott), are together with White House Aide Jonah (Timothy Simons); Sue (Sufe Bradshaw), her secretary, is alone; Ben (Kevin Dunn), her chief of staff, is alone; and her daughter Catherine (Sarah Sutherland) is in a private deposition.
Ben vehemently denies any intent to kill the bill. Amy and Dan state they were consultants hired to lobby people to vote against the bill.
Leigh Patterson (Jessie Ennis), a former White House aide, testifies that she was fired to conceal that someone used a confidential data breach to target bereaved parents for President Meyer's campaign. She states that only President Meyer's campaign consultant, Bill Ericsson (Diedrich Bader), and body man Gary Walsh (Tony Hale), knew about the data breach, while the President did not.
Ericsson and Kent (Gary Cole), Selina's campaign manager, appear together before the committee. Ericsson becomes visibly agitated when the committee members state that his name has consistently been brought up. Kent acknowledges that Leigh was fired because she was scapegoated.
Gary anxiously testifies in front of the committee. Selina, alone in a deposition, denies any knowledge of the campaign data breach. Gary denies having any contact with lobbyists, except occasionally because of Catherine's fiancé, Jason. Meanwhile, Selina refers to Jason as a consultant, and the interviewers correct her, noting that he is a lobbyist. Selina lies and states that she thinks that Catherine has split up with Jason.
Sue sits before the committee and denies that Selina had a meeting with Congressman Pierce, who cast the vote that killed the bill. When asked about voice memos, she states that the Press Secretary, Mike McLintock (Matt Walsh), is responsible for them.
Mike is sworn into the committee hearing. The Committee members note a witness saw him with Dan, Amy, and Congressman Pierce in the parking lot. Mike states the meeting was by chance.
Selina returns to her deposition and states that Catherine has confirmed she broke up with Jason.
In Ben's hearing, they play an incriminating voice memo where Selina asks about Dan and Amy. Ben denies it is related to Families First.
Now in a deposition, Gary admits he asked Dan and Amy to lobby against Families First and states that Bill Ericsson paid them for the job. In her deposition, Catherine also states Ericsson was responsible. One by one, each staffer is shown naming Ericsson as the responsible party.
Production
The episode was shot such that every scene appeared as taped footage from the in-story depositions and the congressional hearings. This is a departure from the show's usual cinema vérité style.
Critical reception
"Testimony" received positive critical reception. For Vulture, Daniel Kurland wrote in a review, "'Testimony' is not only one of Veep’s strongest entries...but is also a staggering accomplishment in comedic television in general." Kate Kulzick stated in a review for The A.V. Club, "It will be nice to get back to the show’s usual approach and aesthetic in the finale but “Testimony” is a refreshing and welcome change of pace for the series, giving the team at Veep the chance to flex new comedic muscles and demonstrate how talented, and versatile, they are."
Awards
Outgoing showrunner Armando Iannucci received a nomination for Outstanding Directing for a Comedy Series at the Primetime Emmy Awards for "Testimony".
References
External links
"Testimony" on HBO
2015 American television episodes
Veep (TV series) |
263659 | https://en.wikipedia.org/wiki/A20%20line | A20 line | The A20, or address line 20, is one of the electrical lines that make up the system bus of an x86-based computer system. The A20 line in particular is used to transmit the 21st bit on the address bus.
A microprocessor typically has a number of address lines equal to the base-two logarithm of the number of words in its physical address space. For example, a processor with 4 GB of byte-addressable physical space requires 32 lines, which are named A0 through A31. The lines are named after the zero-based number of the bit in the address that they are transmitting. The least significant bit is first and is therefore numbered bit 0 and signaled on line A0. A20 transmits bit 20 (the 21st bit) and becomes active once addresses reach 1 MB, or 2^20 bytes.
Overview
The Intel 8086, Intel 8088, and Intel 80186 processors had 20 address lines, numbered A0 to A19; with these, the processor can access 2^20 bytes, or 1 MB. Internal address registers of such processors only had 16 bits. To access a 20-bit address space, an external memory reference was made up of a 16-bit offset address added to a 16-bit segment number, shifted 4 bits so as to produce a 20-bit physical address. The resulting address is equal to segment × 16 + offset. There are many combinations of segment and offset that produce the same 20-bit physical address. Therefore, there were various ways to address the same byte in memory. For example, here are four of the 4096 different segment:offset combinations, all referencing the byte whose physical address is 0x000FFFFF (the last byte in 1 MB-memory space):
F000:FFFF
FFFF:000F
F555:AAAF
F800:7FFF
Referenced the last way, an increase of one in the offset yields F800:8000, which is a proper address for the processor, but since it translates to the physical address 0x00100000 (the first byte over 1 MB), the processor would need another address line for actual access to that byte. Since there is no such line on the 8086 line of processors, the 21st bit above, while set, gets dropped, causing the address F800:8000 to "wrap around" and to actually point to the physical address 0x00000000.
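The segment arithmetic and the wrap-around can be illustrated with a short Python sketch (an illustration of the address calculation described above, not code from any real system):

```python
def phys_8086(segment: int, offset: int) -> int:
    """20-bit physical address on an 8086: segment * 16 + offset,
    truncated to address lines A0-A19 (the carry into bit 20 is dropped)."""
    return (segment * 16 + offset) & 0xFFFFF

# The four segment:offset combinations above all reach the same byte:
for seg, off in [(0xF000, 0xFFFF), (0xFFFF, 0x000F),
                 (0xF555, 0xAAAF), (0xF800, 0x7FFF)]:
    assert phys_8086(seg, off) == 0x000FFFFF

# F800:8000 nominally names 0x00100000, but without an A20 line
# it wraps around to physical address 0:
assert phys_8086(0xF800, 0x8000) == 0x00000000
```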
When IBM designed the IBM PC AT (1984) machine, it decided to use the new higher-performance Intel 80286 microprocessor. The 80286 could address up to 16 MB of system memory in protected mode. However, the CPU was supposed to emulate an 8086's behavior in real mode, its startup mode, so that it could run operating systems and programs that were not written for protected mode. The 80286 did not force the A20 line to zero in real mode, however. Therefore, the combination F800:8000 would no longer point to the physical address 0x00000000, but to the address 0x00100000. As a result, programs relying on the address wrap around would no longer work. To remain compatible with such programs, IBM decided to correct the problem on the motherboard.
That was accomplished by inserting a logic gate on the A20 line between the processor and system bus, which got named Gate-A20. Gate-A20 can be enabled or disabled by software to allow or prevent the address bus from receiving a signal from A20. It is set to non-passing for the execution of older programs that rely on the wrap-around. At boot time, the BIOS first enables Gate-A20 when it counts and tests all of the system memory, and then disables it before transferring control to the operating system.
Originally, the logic gate was controlled through the Intel 8042 keyboard controller. Controlling it was a relatively slow process. Other methods have since been added to allow more efficient multitasking of programs that require this wrap-around with programs that access all of the system memory. There are multiple methods to control the A20 line.
Disconnecting A20 would not wrap all memory accesses above 1 MB, just those in the 1–2 MB, 3–4 MB, 5–6 MB, etc. ranges. Real-mode software cared only about the area slightly above 1 MB, so the Gate-A20 line was enough.
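This aliasing can be sketched in Python (illustrative only; a 24-bit address bus, as on the 80286, is assumed):

```python
A20_MASK = ~(1 << 20)          # force bit 20 (line A20) to zero

def a20_disabled(addr: int) -> int:
    """Physical address seen on a 24-bit bus with Gate-A20 non-passing."""
    return addr & A20_MASK & 0xFFFFFF

# Odd-numbered megabytes alias onto the even megabyte below them ...
assert a20_disabled(0x100000) == 0x000000   # 1-2 MB maps onto 0-1 MB
assert a20_disabled(0x300000) == 0x200000   # 3-4 MB maps onto 2-3 MB
# ... while even-numbered megabytes are unaffected:
assert a20_disabled(0x200000) == 0x200000
```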
Enabling the Gate-A20 line is one of the first steps that a protected-mode x86 operating system does in the bootup process, often before control has been passed to the kernel from the bootstrap (in the case of Linux, for example).
Virtual 8086 mode, introduced with the Intel 80386, allows the A20 wrap-around to be simulated by using the virtual memory facilities of the processor; physical memory may be mapped to multiple virtual addresses. Thus, the memory mapped at the first megabyte of virtual memory may be mapped again in the second megabyte of virtual memory. The operating system may intercept changes to Gate A20 and make corresponding changes to the virtual-memory address space, which also makes irrelevant the efficiency of Gate-A20 line toggling.
A20 gate
Controlling the A20 line was an important feature at one stage in the growth of the IBM PC architecture, as it added access to an additional 65,520 bytes (64 KB − 16 bytes) of memory in real mode, without significant software changes.
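The 65,520-byte figure follows from the segment arithmetic: with the gate open, the highest real-mode segment, 0xFFFF, reaches above the 1 MB boundary. A worked example in Python:

```python
# Segment 0xFFFF with A20 enabled (no 20-bit truncation):
hma_start = 0xFFFF * 16 + 0x0010   # 0x100000, first byte above 1 MB
hma_end   = 0xFFFF * 16 + 0xFFFF   # 0x10FFEF, highest reachable byte

assert hma_start == 0x100000
assert hma_end == 0x10FFEF
# 64 KB minus the 16 bytes at offsets 0x0000-0x000F, which lie below 1 MB:
assert hma_end - hma_start + 1 == 65536 - 16 == 65520
```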
In what was arguably a "hack", the A20 gate was originally part of the keyboard controller on the motherboard, which could open or close it depending on what behavior was desired.
The A20 gate is still present on many modern PCs, and the gate is initially closed right after boot. Modern protected-mode operating systems typically open the A20 gate early during the boot process and never close it again. Such operating systems do not have the compatibility reasons for keeping it closed, and they gain access to the full range of physical addresses available by opening it.
The Intel 80486 and Pentium added a special pin named A20M#, which when asserted low forces bit 20 of the physical address to be zero for all on-chip cache- or external-memory accesses. It was necessary, since the 80486 introduced an on-chip cache and so masking this bit in external logic was no longer possible. Software still needs to manipulate the gate and must still deal with external peripherals (the chipset) for that.
Support for the A20 gate was changed in the Nehalem microarchitecture (some sources incorrectly claim that A20 support was removed). Rather than the CPU having a dedicated A20M# pin that receives the signal whether or not to mask the A20 bit, the pin has been virtualized so that the information is sent from the peripheral hardware to the CPU using special bus cycles. From a software point of view, the mechanism works exactly as before, and an operating system must still program external hardware (which in turn sends the aforementioned bus cycles to the CPU) to disable the A20 masking.
Intel no longer supports the A20 gate, starting with Haswell. Page 271 of the Intel System Programmers Manual Vol. 3A from June 2013 states: "The functionality of A20M# is used primarily by older operating systems and not used by modern operating systems. On newer Intel 64 processors, A20M# may be absent."
A20 handler
The A20 handler is IBM PC memory manager software that controls access to the high memory area (HMA). Extended-memory managers usually provide this functionality. A20 handlers are named after the 21st address line of the microprocessor, the A20 line.
In DOS, HMA managers such as HIMEM.SYS have the "extra task" of managing A20. HIMEM.SYS provided an API for opening/closing A20. DOS itself could use the area for some of its storage needs, thereby freeing up more conventional memory for programs. That functionality was enabled by the DOS=HIGH or HIDOS=ON directives in the CONFIG.SYS configuration file.
Affected programs
Since 1980, the address wrap was internally used by 86-DOS and MS-DOS to implement the CP/M-style CALL 5 entry point in the Program Segment Prefix (PSP) (which partially resembles CP/M's zero page). This was, in particular, utilized by programs machine-translated from CP/M-80 through assembly language translators like Seattle Computer Products' TRANS86. The CALL 5 handler this entry point refers to resides at physical address 0x000000C0 (overlapping the entry for INT 30h and the first byte of INT 31h in the real mode interrupt vector table). However, by the design of CP/M-80, which loaded the operating system immediately above the memory available for the program to run in, the 8080/Z80 16-bit target address stored at offset 6 in the zero page could deliberately also be interpreted as the segment's memory size. In order to emulate this in DOS with its 8086 segment:offset addressing scheme, the far call entry point's 16-bit offset had to match this segment size (e.g. 0xFEF0), which is stored at offset 6 in the PSP, overlapping parts of the CALL 5. The only way to reconcile these requirements was to choose a segment value that, when added to 0xFEF0, results in an address of 0x001000C0, which, on an 8086, wraps around to 0x000000C0.
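Using the example offset 0xFEF0 named above, the required segment value can be derived arithmetically (an illustration only; the actual value depends on the DOS configuration):

```python
offset = 0xFEF0        # example segment-size/offset value from the text
target = 0x001000C0    # must wrap to the CALL 5 handler at 0x000000C0

# Solve segment * 16 + offset == target for the segment:
segment = (target - offset) >> 4
assert segment == 0xF01D

# With the 20-bit wrap-around (A20 disabled), the far call lands
# on the handler at physical address 0x000000C0:
assert (segment * 16 + offset) & 0xFFFFF == 0x000000C0
```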
A20 had to be disabled for the wraparound to occur and DOS programs using this interface to work. Newer DOS versions which can relocate parts of themselves into the HMA, typically craft a copy of the entry point at FFFF:00D0 in the HMA (which again resolves to physical 0x001000C0), so that the interface can work without regard to the state of A20.
One program known to use the CALL 5 interface is the DOS version of the Small-C compiler. Also, the SPELL utility in Microsoft's Word 3.0 (1987) is one of the programs depending on the CALL 5 interface to be set up correspondingly. Sun Microsystems' PC-NFS (1993) requires the CALL 5 fix-up as well.
Also, to save program space, a trick was used by some BIOS and DOS programmers, for example, to have one segment that has access to program data (such as from F800:0000 to F800:7FFF, pointing to the physical addresses 0x000F8000–0x000FFFFF), as well as the I/O data (such as the keyboard buffer) that was located in the first memory segment (with addresses F800:8000 to F800:FFFF pointing to the physical addresses 0x00000000 to 0x00007FFF).
This trick works as long as the code isn't executed in low memory, the first 64 KB of RAM, a condition that was always true in older DOS versions without load-high capabilities.
With the DOS kernel relocated into higher memory areas, low memory increasingly became available for programs, causing those depending on the wraparound to fail. The executable loaders in newer versions of DOS attempt to detect some common types of affected programs and either patch them on-the-fly to function also in low memory or load them above the first 64 KB before passing execution on to them. For programs, which are not detected automatically, LOADFIX or MEMMAX -L can be used to force programs to be loaded above the first 64 KB.
The trick was utilized by IBM/Microsoft Pascal itself as well as by programs compiled with it, including Microsoft's MASM. Other commonly used development utilities using this were executable compressors like Realia's Spacemaker (written by Robert B. K. Dewar in 1982 and used to compress early versions of the Norton Utilities) and Microsoft's EXEPACK (written by Reuben Borman in 1985) as well as the equivalent /E[XEPACK] option in Microsoft's LINK 3.02 and higher. When loaded in low memory, programs processed with EXEPACK would display a "Packed file is corrupt" error message.
Various third-party utilities exist to modify compressed executables either replacing the problematic uncompression routine(s) through restubbing, or attempting to expand and restore the original file.
Modern legacy BIOS boot loaders (such as GNU GRUB) use the A20 line. UEFI boot loaders use 32-bit protected mode or 64-bit long mode.
See also
Bug compatibility
Computer storage
High memory area (HMA)
LOADFIX (CONFIG.SYS directive) (PTS-DOS)
Boot loaders
References
Further reading
X86 memory management
Memory expansion
X86 architecture
IBM PC compatibles |
28486339 | https://en.wikipedia.org/wiki/Georgia%20Tech | Georgia Tech | The Georgia Institute of Technology, commonly referred to as Georgia Tech or, in the state of Georgia, as Tech, is a public research university and institute of technology in Atlanta, Georgia. It is part of the University System of Georgia and has satellite campuses in Savannah, Georgia; Metz, France; Athlone, Ireland; Shenzhen, China; and Singapore.
The school was founded in 1885 as the Georgia School of Technology as part of Reconstruction plans to build an industrial economy in the post-Civil War Southern United States. Initially, it offered only a degree in mechanical engineering. By 1901, its curriculum had expanded to include electrical, civil, and chemical engineering. In 1948, the school changed its name to reflect its evolution from a trade school to a larger and more capable technical institute and research university. Today, Georgia Tech is organized into six colleges and contains about 31 departments/units, with emphasis on science and technology. It is well recognized for its degree programs in computer science, engineering, and business.
Student athletics, both organized and intramural, are a part of student and alumni life. The school's intercollegiate competitive sports teams, the four-time football national champion Yellow Jackets, and the nationally recognized fight song "Ramblin' Wreck from Georgia Tech", have helped keep Georgia Tech in the national spotlight. Georgia Tech fields eight men's and seven women's teams that compete in the NCAA Division I athletics and the Football Bowl Subdivision. Georgia Tech is a member of the Coastal Division in the Atlantic Coast Conference.
History
Establishment
The idea of a technology school in Georgia was introduced in 1865 during the Reconstruction period. Two former Confederate officers, Major John Fletcher Hanson (an industrialist) and Nathaniel Edwin Harris (a politician and eventually Governor of Georgia), who had become prominent citizens in the town of Macon, Georgia after the Civil War, strongly believed that the South needed to improve its technology to compete with the industrial revolution occurring throughout the North. Because the American South of that era was populated mainly by agricultural workers and saw few technical developments, they argued that a technology school was needed.
In 1882, the Georgia State Legislature authorized a committee, led by Harris, to visit the Northeast to see firsthand how technology schools worked. They were impressed by the polytechnic educational models developed at the Massachusetts Institute of Technology and the Worcester County Free Institute of Industrial Science (now Worcester Polytechnic Institute). The committee recommended adapting the Worcester model, which stressed a combination of "theory and practice", the "practice" component including student employment and production of consumer items to generate revenue for the school.
On October 13, 1885, Georgia Governor Henry D. McDaniel signed the bill to create and fund the new school. In 1887, Atlanta pioneer Richard Peters donated to the state part of the site of a failed garden suburb called Peters Park. The site was bounded on the south by North Avenue, and on the west by Cherry Street. He then sold five adjoining acres of land to the state for US$10,000. This land was near Atlanta's northern city limits at the time of its founding, although the city has expanded several miles beyond it. A historical marker on the large hill in Central Campus notes that the site occupied by the school's first buildings once held fortifications to protect Atlanta during the Atlanta Campaign of the American Civil War. The surrender of the city took place on the southwestern boundary of the modern Georgia Tech campus in 1864.
Early years
The Georgia School of Technology opened in the fall of 1888 with two buildings. One building (now Tech Tower, an administrative headquarters) had classrooms to teach students; the second featured a shop with a foundry, forge, boiler room, and engine room, and was designed for students to work in and produce goods to sell and fund the school. The two buildings were equal in size to show the importance of teaching both the mind and the hands, though, at the time, there was some disagreement as to whether the machine shop should have been used to turn a profit.
On October 20, 1905, U.S. President Theodore Roosevelt visited Georgia Tech. On the steps of Tech Tower, Roosevelt delivered a speech about the importance of technological education. He then shook hands with every student.
Georgia Tech's Evening School of Commerce began holding classes in 1912. The evening school admitted its first female student in 1917, although the state legislature did not officially authorize attendance by women until 1920. Annie T. Wise became the first female graduate in 1919 and was Georgia Tech's first female faculty member the following year. In 1931, the Board of Regents transferred control of the Evening School of Commerce to the University of Georgia (UGA) and moved the civil and electrical engineering courses at UGA to Tech. Tech replaced the commerce school with what later became the College of Business. The commerce school would later split from UGA and eventually become Georgia State University. In 1934, the Engineering Experiment Station (later known as the Georgia Tech Research Institute) was founded by W. Harry Vaughan with an initial budget of $5,000 and 13 part-time faculty.
Modern history
Founded as the Georgia School of Technology, Georgia Tech assumed its present name in 1948 to reflect a growing focus on advanced technological and scientific research. Unlike most similarly named universities (such as the Massachusetts Institute of Technology and the California Institute of Technology), the Georgia Institute of Technology is a public institution.
During President Blake Ragsdale Van Leer's tenure, Tech went through significant change: it expanded its campus with new facilities, added new engineering courses, admitted its first female students to regular classes in 1952, and took the first steps toward integration. He stood up to Georgia governor Marvin Griffin's demand to bar Bobby Grier from participating in the 1956 Sugar Bowl game between Georgia Tech and Grier's University of Pittsburgh. After Van Leer's death, his wife, Ella Lillian Wall Van Leer, bought a house on campus and opened it to female students to support their success. In 1968 women could enroll in all programs at Tech; Industrial Management was the last program to open to women. The first women's dorm, Fulmer Hall, opened in 1969. Rena Faye Smith, appointed in 1969 by Dr. Ray Young as a research assistant in X-ray diffraction in the School of Physics, became the first female faculty member (research) in the School of Physics. She went on to earn a Ph.D. at Georgia State University and, as Rena Faye Norby, taught physics and instructional technology at Black Hills State University from 1997 to 2005. She served as a Fulbright Scholar in Russia in 2004–2005. Women constituted 30.3% of the undergraduates and 25.3% of the graduate students enrolled in Spring 2009.
In 1959, a meeting of 2,741 students voted by an overwhelming majority to endorse integration of qualified applicants, regardless of race. Three years after the meeting, and one year after the University of Georgia's violent integration, Georgia Tech became the first university in the Deep South to desegregate without a court order. There was little protest to this by Tech students; like the city of Atlanta described by former Mayor William Hartsfield, they seemed "too busy to hate". In the 1967–68 academic year 28 students out of 7,526 were black. In 1968, William Peace became the first black instructor and Marle Carter became the first black member of the homecoming court. In 1964, Dr. Calvin Huey became the first black player to play at Grant Field when he took the field for Navy. The first black person to play for Georgia Tech was Eddie McAshan in 1970.
In 1965 the university bought the former Pickrick Restaurant, a site of confrontation in the Civil Rights Movement, which it first used as a placement center. Later, it was known as the Ajax Building. The building was razed in 2009.
Similarly, there was little student reaction at Georgia Tech to the Vietnam War and United States involvement in the Cambodian Civil War. The student council defeated a resolution supporting the Vietnam Moratorium, and the extent of the Tech community's response to the Kent State shooting was limited to a student-organized memorial service, though the Institute was ordered closed for two days, along with all other University System of Georgia schools.
In 1988, President John Patrick Crecine pushed through a restructuring of the university. The Institute at that point had three colleges: the College of Engineering, the College of Management, and the catch-all COSALS, the College of Sciences and Liberal Arts. Crecine reorganized the latter two into the College of Computing, the College of Sciences, and the Ivan Allen College of Management, Policy, and International Affairs. Crecine never asked for input regarding the changes and, consequently, many faculty members disliked his top-down management style; despite this, the changes passed by a slim margin. Crecine was also instrumental in securing the 1996 Summer Olympics for Atlanta. A large amount of construction occurred, creating most of what is now considered "West Campus" for Tech to serve as the Olympic Village, and significantly gentrifying Midtown Atlanta. The Undergraduate Living Center, Fourth Street Apartments, Sixth Street Apartments, Eighth Street Apartments, Hemphill Apartments, and Center Street Apartments housed athletes and journalists. The Georgia Tech Aquatic Center was built for swimming events, and the Alexander Memorial Coliseum was renovated. The Institute also erected the Kessler Campanile and fountain to serve as a landmark and symbol of the Institute on television broadcasts.
In 1994, G. Wayne Clough became the first Tech alumnus to serve as the president of the Institute; he was in office during the 1996 Summer Olympics. In 1998, he separated the Ivan Allen College of Management, Policy, and International Affairs into the Ivan Allen College of Liberal Arts and returned the College of Management to "College" status (Crecine, the previous president, had demoted Management from "College" to "School" status as part of a controversial 1990 reorganization plan). His tenure focused on a dramatic expansion of the Institute, a revamped Undergraduate Research Opportunities Program, and the creation of an International Plan. On March 15, 2008, he was appointed secretary of the Smithsonian Institution, effective July 1, 2008. Dr. Gary Schuster, Tech's provost and executive vice president for Academic Affairs, was named interim president, effective July 1, 2008.
On April 1, 2009, G. P. "Bud" Peterson, previously the chancellor of the University of Colorado at Boulder, became the 11th president of Georgia Tech. On April 20, 2010, Georgia Tech was invited to join the Association of American Universities, the first new member institution in nine years. In 2014, Georgia Tech launched the first "massive online open degree" in computer science by partnering with Udacity and AT&T; a complete degree through that program costs students $7,000. It eventually expanded this program with its online master's in analytics in January 2017, as well as providing the option for advanced credits with a MicroMasters in collaboration with edX.
On January 7, 2019, President G.P. Bud Peterson announced his intention to retire. Angel Cabrera, former President of George Mason University and Georgia Tech alum, was named his successor on June 13, 2019. Cabrera took office on September 3, 2019.
Campus sections
The Georgia Tech campus is located in Midtown, an area slightly north of downtown Atlanta. Although a number of skyscrapers—most visibly the headquarters of AT&T, The Coca-Cola Company, and Bank of America—are visible from all points on campus, the campus itself has few buildings over four stories and has a great deal of greenery. This gives it a distinctly suburban atmosphere quite different from other Atlanta campuses such as that of Georgia State University.
The campus is organized into four main parts: West Campus, East Campus, Central Campus, and Technology Square. West Campus and East Campus are both occupied primarily by student living complexes, while Central Campus is reserved primarily for teaching and research buildings.
West Campus
West Campus is occupied primarily by apartments and coed undergraduate dormitories. Apartments include Crecine, Center Street, 6th Street, Maulding, Graduate Living Center (GLC), and Eighth Street Apartments, while dorms include Freeman, Montag, Fitten, Folk, Caldwell, Armstrong, Hefner, Fulmer, and Woodruff Suites. The Campus Recreation Center (formerly the Student Athletic Complex); a volleyball court; a large, low natural green area known as the Burger Bowl; and a flat artificial green area known as the CRC (formerly SAC) Fields are all located on the western side of the campus. In 2017, West Village, a multipurpose facility featuring dining options, meeting space, School of Music classrooms, and offices, opened on West Campus.
The Robert C. Williams Paper Museum is located on West Campus.
West Campus was formerly home to Under the Couch, which relocated to the Student Center in the fall of 2010. Also within walking distance of West Campus are several late-night eateries. West Campus was home to a convenience store, West Side Market, which closed following the opening of West Village in the fall of 2017. Due to limited space, all auto travel proceeds via a network of one-way streets which connects West Campus to Ferst Drive, the main road of the campus. Woodruff Dining Hall, or "Woody's", was the West Campus Dining Hall, before closing after the opening of West Village. It connected the Woodruff North and Woodruff South undergraduate dorms.
East Campus
East Campus houses all of the fraternities and sororities as well as most of the undergraduate freshman dormitories. East Campus abuts the Downtown Connector, granting residences quick access to Midtown and its businesses (for example, The Varsity) via a number of bridges over the highway. Georgia Tech football's home, Bobby Dodd Stadium, is located on East Campus, as is Georgia Tech basketball's home, McCamish Pavilion (formerly Alexander Memorial Coliseum).
Brittain Dining Hall is the main dining hall for East Campus. It is modeled after a medieval church, complete with carved columns and stained glass windows showing symbolic figures. The main road leading from East Campus to Central Campus is a steep ascending incline commonly known as "Freshman Hill" (in reference to the large number of freshman dorms near its foot). On March 8, 2007, the former Georgia State University Village apartments were transferred to Georgia Tech. Renamed North Avenue Apartments by the institute, they began housing students in the fall semester of 2007.
Central Campus
Central Campus is home to the majority of the academic, research, and administrative buildings. The Central Campus includes, among others: the Howey Physics Building; the Boggs Chemistry Building; the College of Computing Building; the Klaus Advanced Computing Building; the College of Design Building; the Skiles Classroom Building, which houses the School of Mathematics and the School of Literature, Media and Culture; the D. M. Smith Building, which houses the School of Public Policy; and the Ford Environmental Science & Technology Building. In 2005, the School of Modern Languages returned to the Swann Building, a 100-year-old former dormitory that now houses some of the most technology-equipped classrooms on campus. Intermingled with these are a variety of research facilities, such as the Centennial Research Building, the Microelectronics Research Center, the Neely Nuclear Research Center, the Nanotechnology Research Center, and the Petit Biotechnology Building.
Tech's administrative buildings, such as Tech Tower and the Bursar's Office, are also located on Central Campus, in the recently renovated Georgia Tech Historic District. The campus library, the Fred B. Wenn Student Center, and the Student Services Building ("Flag Building") are also located on Central Campus. The Student Center provides a variety of recreational and social functions for students, including a computer lab, a game room ("Tech Rec"), the Student Post Office, a music venue, a movie theater, the Food Court, and meeting rooms for various clubs and organizations. Adjacent to the eastern entrance of the Student Center is the Kessler Campanile (which is referred to by students as "The Shaft"). The former Hightower Textile Engineering building was demolished in 2002 to create Yellow Jacket Park. More greenspace now occupies the area around the Kessler Campanile for a more aesthetically pleasing look, in accordance with the official Campus Master Plan. In August 2011, the G. Wayne Clough Undergraduate Learning Commons opened next to the library and occupies part of the Yellow Jacket Park area.
Technology Square
Technology Square, also known as "Tech Square", is located across the Downtown Connector and embedded in the city east of East Campus. Opened in August 2003 at a cost of $179 million, the district was built over run-down neighborhoods and has sparked a revitalization of the entire Midtown area. Connected by the recently renovated Fifth Street Bridge, it is a pedestrian-friendly area comprising Georgia Tech facilities and retail locations. One complex contains the College of Business Building, holding classrooms and office space for the Scheller College of Business, as well as the Georgia Tech Hotel and Conference Center and the Georgia Tech Global Learning Center. The Scheller College of Business is also home to three large glass chandeliers made by Dale Chihuly. This is one of the few locations of Chihuly's works found in the state of Georgia.
Another part of Tech Square, the privately owned Centergy One complex, contains the Technology Square Research Building (TSRB), holding faculty and graduate student offices for the College of Computing and the School of Electrical and Computer Engineering, as well as the GVU Center, a multidisciplinary technology research center. The Advanced Technology Development Center (ATDC) is a science and business incubator, run by the Georgia Institute of Technology, and is also headquartered in Technology Square's Centergy One complex.
Other Georgia Tech-affiliated buildings in the area host the Center for Quality Growth and Regional Development, the Georgia Tech Enterprise Innovation Institute, the Advanced Technology Development Center, VentureLab, the Georgia Electronics Design Center and the new CODA (mixed-use development). Technology Square also hosts a variety of restaurants and businesses, including the headquarters of notable consulting companies like Accenture and also including the official Institute bookstore, a Barnes & Noble bookstore, and a Georgia Tech-themed Waffle House.
Satellite campuses
In 1999, Georgia Tech began offering local degree programs to engineering students in Southeast Georgia, and in 2003 established a physical campus in Savannah, Georgia. Until 2013, Georgia Tech Savannah offered undergraduate and graduate programs in engineering in conjunction with Georgia Southern University, South Georgia College, Armstrong Atlantic State University, and Savannah State University. The university further collaborated with the National University of Singapore to set up The Logistics Institute–Asia Pacific in Singapore. The campus now serves as the institute's hub for professional and continuing education and is home to the regional offices of the Georgia Tech Enterprise Innovation Institute, the Savannah Advanced Technology Development Center, and the Georgia Logistics Innovation Center.
Georgia Tech also operates a campus in Metz, in northeastern France, known as Georgia Tech Lorraine. Opened in October 1990, it offers master's-level courses in Electrical and Computer Engineering, Computer Science, and Mechanical Engineering, as well as Ph.D. coursework in Electrical and Computer Engineering and Mechanical Engineering. Georgia Tech Lorraine was the defendant in a lawsuit over the language used in its advertisements, which violated the Toubon Law.
The College of Design (formerly College of Architecture) maintains a small permanent presence in Paris in affiliation with the École d'architecture de Paris-La Villette, and the College of Computing has a similar program with the Barcelona School of Informatics at the Polytechnic University of Catalonia in Barcelona, Spain. There are additional programs in Athlone, Ireland; Shanghai, China; and Singapore. Georgia Tech was supposed to have set up two campuses for research and graduate education in the cities of Visakhapatnam and Hyderabad, Telangana, India by the year 2010, but the plans appear to have been put on hold.
Campus services
Georgia Tech Cable Network, or GTCN, is the college's branded cable source. Most non-original programming is obtained from Dish Network. GTCN currently has 100 standard-definition channels and 23 high-definition channels.
The Office of Information Technology, or OIT, manages most of the Institute's computing resources (and some related services such as campus telephones). With the exception of a few computer labs maintained by individual colleges, OIT is responsible for most of the computing facilities on campus. Student, faculty, and staff e-mail accounts are among its services. Georgia Tech's ResNet provides free technical support to all students and guests living in Georgia Tech's on-campus housing (excluding fraternities and sororities). ResNet is responsible for network, telephone, and television service, and most support is provided by part-time student employees.
Organization and administration
Georgia Tech's undergraduate and graduate programs are divided into six colleges. Collaboration among the colleges is frequent, as mandated by a number of interdisciplinary degree programs and research centers. Georgia Tech has sought to strengthen its undergraduate and graduate offerings in less technical fields, primarily those under the Ivan Allen College of Liberal Arts, which saw a 20% increase in admissions in 2008. Even in the Ivan Allen College, however, the Institute does not offer Bachelor of Arts and Master of Arts degrees, only Bachelor of Science and Master of Science degrees. Georgia Tech's honors program is highly selective and designed to cater to the most intellectually curious undergraduates from all six colleges.
Academics
Demographics
As of fall 2019, the student body consists of more than 36,000 undergraduate and graduate students, with graduate students making up 56% of the student body. Of the around 20,000 graduate students enrolled in 2019, about 60% were enrolled in online graduate degree programs. The student body at Georgia Tech is approximately 70% male and 30% female.
Enrollment of underrepresented groups is slowly increasing as Tech emphasizes diversity and inclusion. Tech's growing liberal arts programs, more holistic review of applicants, and outreach programs encouraging underrepresented students to consider careers in STEM have improved their presence on campus.
Around 50–55% of all Georgia Tech students are residents of the state of Georgia, around 20% come from outside the U.S., and 25–30% are residents of other U.S. states or territories. The top states of origin for all non-Georgia U.S. students are Florida, Texas, California, North Carolina, Virginia, New Jersey, and Maryland. Students at Tech represent all 50 states and 114 countries. The top three countries of origin for all international students are China, India, and South Korea.
Funding
The Georgia Institute of Technology is a public institution that receives funds from the State of Georgia, tuition, fees, research grants, and alumni contributions. In 2014, the Institute's revenue amounted to about $1.422 billion. Fifteen percent came from state appropriations and grants, while 20% originated from tuition and fees. Grants and contracts accounted for 55% of all revenue. Expenditures were about $1.36 billion. Forty-eight percent went to research and 19% went to instruction. The Georgia Tech Foundation runs the university's endowment and was incorporated in 1932. It includes several wholly owned subsidiaries that own land on campus or in Midtown and lease the land back to the Georgia Board of Regents and other companies and organizations. Assets totaled $1.882 billion and liabilities totaled $0.478 billion in 2014. As of 2007, Georgia Tech had the most generous alumni donor base, percentage-wise, of any public university ranked in the top 50. In 2015, the university received a $30 million grant from Atlanta philanthropist Diana Blank to build the "most environmentally-sound building ever constructed in the Southeast."
Rankings
In 2021 U.S. News & World Report named Georgia Tech 3rd worldwide for both its Bachelor's in Analytics and Master of Science in Business Analytics degree programs. Also in the 2021 Times Higher Education subject rankings, Georgia Tech ranked 12th for engineering and 13th for computer science in the world.
Tech's undergraduate engineering program was ranked 4th in the United States and its graduate engineering program ranked 8th by U.S. News & World Report for 2021. Tech's graduate engineering program rankings are aerospace (4th), biomedical/bioengineering (2nd), chemical (tied for 5th), civil (tied for 3rd), computer (tied for 6th), electrical (tied for 6th), environmental (tied for 5th), industrial (1st), materials (9th), mechanical (tied for 5th), and nuclear (9th). Tech's undergraduate computer science program ranked 5th and its graduate computer science program ranked 8th. Other graduate computer science program rankings are artificial intelligence (7th), theory (9th), systems (10th), and programming language (16th).
Also for 2021, U.S. News & World Report ranked Tech 4th in the United States for most innovative university.
Admissions
For its fall 2019 first-year undergraduate class, Georgia Tech accepted 18.8% of an institution-record 36,936 applicants. The in-state applicant acceptance rate was 37.7%, and the out-of-state applicant acceptance rate was 14.9%.
In 2017, Georgia Tech announced that valedictorians and salutatorians from Georgia's accredited public and private high schools with 50 or more graduates would be the only students offered automatic undergraduate admission, via its Georgia Tech Scholars Program.
Research
Facilities and classification
Georgia Tech is classified among "R1: Doctoral Universities – Very high research activity". Much of this research is funded by large corporations or governmental organizations. Research is organizationally under the Executive Vice President for Research, Stephen E. Cross, who reports directly to the institute president. Nine "interdisciplinary research institutes" report to him, with all research centers, laboratories and interdisciplinary research activities at Georgia Tech reporting through one of those institutes.
The oldest of those research institutes is a nonprofit research organization referred to as the Georgia Tech Research Institute (GTRI). GTRI provides sponsored research in a variety of technical specialties including radar, electro-optics, and materials engineering. Around forty percent (by award value) of Georgia Tech's research, especially government-funded classified work, is conducted through this counterpart organization. GTRI employs over 1,700 people and had $305 million in revenue in fiscal year 2014. The other institutes include: the Parker H. Petit Institute for Bioengineering & Bioscience, the Georgia Tech Institute for Electronics and Nanotechnology, the Georgia Tech Strategic Energy Institute, the Brook Byers Institute for Sustainable Systems, the Georgia Tech Manufacturing Institute, the Institute of Paper Science and Technology, Institute for Materials and the Institute for People and Technology.
Entrepreneurship
Many startup companies are produced through research conducted at Georgia Tech, with the Advanced Technology Development Center and VentureLab ready to assist Georgia Tech's researchers and entrepreneurs in organization and commercialization. The Georgia Tech Research Corporation serves as Georgia Tech's contract and technology licensing agency. Georgia Tech is ranked fourth for startup companies, eighth in patents, and eleventh in technology transfer by the Milken Institute. Georgia Tech and GTRI devote substantial space to research purposes, including the new $90 million Marcus Nanotechnology Building, one of the largest nanotechnology research facilities in the Southeastern United States, with extensive clean room space.
Georgia Tech encourages undergraduates to participate in research alongside graduate students and faculty. The Undergraduate Research Opportunities Program awards scholarships each semester to undergraduates who pursue research activities. These scholarships, called the President's Undergraduate Research Awards, take the form of student salaries or help cover travel expenses when students present their work at professional meetings. Additionally, undergraduates may participate in research and write a thesis to earn a "Research Option" credit on their transcripts. An undergraduate research journal, The Tower, was established in 2007 to provide undergraduates with a venue for disseminating their research and a chance to become familiar with the academic publishing process.
Recent developments include a proposed graphene antenna.
Georgia Tech and Emory University have a strong research partnership and jointly administer the Emory-Georgia Tech Predictive Health Institute. They also, along with Peking University, administer the Wallace H. Coulter Department of Biomedical Engineering. In 2015, Georgia Tech and Emory were awarded an $8.3 million grant by the National Institutes of Health (NIH) to establish a National Exposure Assessment Laboratory. In July 2015, Georgia Tech, Emory, and Children's Healthcare of Atlanta were awarded a four-year, $1.8 million grant by the Cystic Fibrosis Foundation in order to expand the Atlanta Cystic Fibrosis Research and Development Program. In 2015, the two universities received a five-year, $2.9 million grant from the National Science Foundation (NSF) to create new bachelor's, master's, and doctoral degree programs and concentrations in healthcare robotics, which will be the first program of its kind in the Southeastern United States.
The Georgia Tech Panama Logistics Innovation & Research Center is an initiative between the H. Milton Stewart School of Industrial and Systems Engineering, the Ecuador National Secretariat of Science and Technology, and the government of Panama that aims to enhance Panama's logistics capabilities and performance through a number of research and education initiatives. The center is creating models of country level logistics capabilities that will support the decision-making process for future investments and trade opportunities in the growing region and has established dual degree programs in the University of Panama and other Panamanian universities with Georgia Tech. A similar center in Singapore, The Centre for Next Generation Logistics, was established in 2015 and is a collaboration between Georgia Tech and the National University of Singapore. The Center will work closely with government agencies and the industry to perform research in logistics and supply chain systems for translation into innovations and commercialization to achieve transformative economic and societal impact.
Industry connections
Georgia Tech maintains close ties to the industrial world. Many of these connections are made through Georgia Tech's cooperative education and internship programs. Georgia Tech's Division of Professional Practice (DoPP), established in 1912 as the Georgia Institute of Technology Cooperative Division, operates the largest and fourth-oldest cooperative education program in the United States, and is accredited by the Accreditation Council for Cooperative Education. The DoPP is charged with providing opportunities for students to gain real-world employment experience through four programs, each targeting a different body of students. The Undergraduate Cooperative Education Program is a five-year program in which undergraduate students alternate between semesters of formal instruction at Georgia Tech and semesters of full-time employment with their employers.
The Graduate Cooperative Education Program, established in 1983, is the largest such program in the United States. It allows graduate students pursuing master's degrees or doctorates in any field to spend a maximum of two consecutive semesters working full- or part-time with employers. The Undergraduate Professional Internship Program enables undergraduate students—typically juniors or seniors—to complete a one- or two-semester internship with employers. The Work Abroad Program hosts a variety of cooperative education and internship experiences for upperclassmen and graduate students seeking international employment and cross-cultural experiences. While all four programs are voluntary, they consistently attract high numbers of students—more than 3,000 at last count. Around 1,000 businesses and organizations hire these students, who collectively earn $20 million per year.
Georgia Tech's cooperative education and internship programs have been externally recognized for their strengths. The Undergraduate Cooperative Education was recognized by U.S. News & World Report as one of the top 10 "Programs that Really Work" for five consecutive years. U.S. News & World Report additionally ranked Georgia Tech's internship and cooperative education programs among 14 "Academic Programs to Look For" in 2006 and 2007. On June 4, 2007, the University of Cincinnati inducted Georgia Tech into its Cooperative Education Hall of Honor.
Student life
Georgia Tech students benefit from many Institute-sponsored or related events on campus, as well as a wide selection of cultural options in the surrounding district of Midtown Atlanta, "Atlanta's Heart of the Arts". Just off campus, students can choose from several restaurants, including a half-dozen in Technology Square alone. Home Park, a neighborhood that borders the north end of campus, is a popular living area for Tech students and recent graduates.
Housing
Georgia Tech Housing is subject to a clear geographic division of campus into eastern and western areas that contain the vast majority of housing. East Campus is largely populated by freshmen and is served by Brittain Dining Hall and North Avenue Dining Hall. West Campus houses some freshmen, transfer, and returning students (upperclassmen), and is served by West Village. Graduate students typically live off-campus (for example, in Home Park) or on-campus in the Graduate Living Center or 10th and Home.
The Institute's administration has implemented programs in an effort to reduce the levels of stress and anxiety felt by Tech students. These include the Familiarization and Adaptation to the Surroundings and Environs of Tech (FASET) Orientation and the Freshman Experience (a freshman-only dorm life program to "encourage friendships and a feeling of social involvement"), both of which seek to help acclimate new students to their surroundings and foster a greater sense of community. As a result, the Institute's retention rates improved.
In recent years, Georgia Tech Housing has been at or over capacity. In Fall 2006, many dorms housed "triples", a project that put three residents into a two-person room. Certain pieces of furniture were not provided to the third resident in order to accommodate the third bed. When spaces became available in other parts of campus, the third resident was moved elsewhere. In 2013, Georgia Tech provided housing for 9,553 students, and housing was 98% occupied.
In the fall of 2007, the North Avenue Apartments were opened to Tech students. Originally built for the 1996 Olympics and belonging to Georgia State University, the buildings were given to Georgia Tech and have been used to accommodate Tech's expanding population. Georgia Tech freshmen were the first to inhabit the dormitories, in the winter and spring 1996 quarters, while much of East Campus was under renovation for the Olympics. The North Avenue Apartments (commonly known as "North Ave") are also noted as the first Georgia Tech buildings to rise above the top of Tech Tower. Open to second-year undergraduate students and above, the buildings are located on East Campus, across North Avenue and near Bobby Dodd Stadium, putting more upperclassmen on East Campus. Currently, the North Avenue Apartments East and North buildings are undergoing extensive renovation to the façade. During their construction, the bricks were not properly secured and thus were a safety hazard to pedestrians and vehicles on the Downtown Connector below.
Two programs on campus as well have houses on East Campus: the International House (commonly referred to as the I-House); and Women, Science, and Technology. The I-House is housed in 4th Street East and Hayes. Women, Science, and Technology is housed in Goldin and Stein. The I-House hosts an International Coffee Hour every Monday night that class is in session from 6 to 7 pm, hosting both residents and their guests for discussions.
Single graduate students may live in the Graduate Living Center (GLC) or at 10th and Home. 10th and Home is the designated family housing unit of Georgia Tech; residents are zoned to Atlanta Public Schools, including Centennial Place Elementary, Inman Middle School, and Grady High School.
Student clubs and activities
Several extracurricular activities are available to students, including over 350 student organizations overseen by the Office of Student Involvement. The Student Government Association (SGA), Georgia Tech's student government, has separate executive, legislative, and judicial branches for undergraduate and graduate students. One of the SGA's primary duties is the disbursement of funds to student organizations in need of financial assistance. These funds are derived from the Student Activity Fee that all Georgia Tech students must pay, currently $123 per semester. The ANAK Society, a secret society and honor society established at Georgia Tech in 1908, claims responsibility for founding many of Georgia Tech's earliest traditions and oldest student organizations, including the SGA.
Arts
Georgia Tech's Music Department was established as part of the school's General College in 1963 under the leadership of Ben Logan Sisk. In 1976, the Music Department was assigned to the College of Sciences & Liberal Studies, and in 1991 it was relocated to its current home in the College of Design. In 2009, it was reorganized into the School of Music. The Georgia Tech Glee Club, founded in 1906, is one of the oldest student organizations on campus, and still operates today as part of the School of Music. The Glee Club was among the first collegiate choral groups to release a recording of their songs. The group has toured extensively and appeared on The Ed Sullivan Show twice, providing worldwide exposure to "Ramblin' Wreck from Georgia Tech". Today, the modern Glee Club performs dozens of times each semester for many different events, including official Georgia Tech ceremonies, banquets, and sporting events. It consists of 40 to 60 members and requires no audition or previous choral experience.
The Georgia Tech Band Program, also in the School of Music, represents Georgia Tech at athletic events and provides Tech students with a musical outlet. It was founded in 1908 by 14 students and Robert "Biddy" Bidez. The marching band consistently fields over 300 members. Members of the marching band travel to every football game.
The School of Music is also home to a number of ensembles, such as the 80-to-90-member Symphony Orchestra, Jazz Ensemble, Concert Band, and Percussion and MIDI Ensembles. Students also can opt to form their own small Chamber Ensembles, either for course credit or independently. The contemporary Sonic Generator group, backed by the GVU and in collaboration with the Center for Music Technology, performs a diverse lineup of music featuring new technologies and recent composers.
Georgia Tech also has a music scene that is made up of groups that operate independently from the Music Department. These groups include four student-led a cappella groups: Nothin' but Treble, Sympathetic Vibrations, Taal Tadka, and Infinite Harmony. Musician's Network, another student-led group, operates Under the Couch, a live music venue and recording facility that was formerly located beneath the Couch Building on West Campus and is now located in the Student Center.
Many music, theatre, dance, and opera performances are held in the Ferst Center for the Arts. DramaTech is the campus' student-run theater. The theater has been entertaining Georgia Tech and the surrounding community since 1947. They are also home to Let's Try This! (the campus improv troupe) and VarietyTech (a song and dance troupe). Momocon is an annual anime/gaming/comics convention held on campus in March hosted by Anime O-Tekku, the Georgia Tech anime club. The convention has free admission and was held in the Student Center, Instructional Center, and surrounding outdoor areas until 2010. Beginning in 2011, the convention moved its venue to locations in Technology Square.
Student media
WREK is Georgia Tech's student-run radio station. Broadcast at 91.1 MHz on the FM band, the station is known as "Wrek Radio". The studio is on the second floor of the Student Center Commons. Broadcasting with 100 kW ERP, WREK is among the nation's most powerful college radio stations. In April 2007, a debate was held regarding the future of the radio station, with GPB and NPR as prospective purchasers. WREK maintained its independence after dismissing the notion with approval from the Radio Communications Board of Georgia Tech. The Georgia Tech Amateur Radio Club, founded in 1912, is among the oldest collegiate amateur radio clubs in the nation. The club provided emergency radio communications during several disasters, including numerous hurricanes and the 1985 Mexico earthquake.

The Technique, also known as the "Nique", is Tech's official student newspaper. It is distributed weekly during the Fall and Spring semesters (on Fridays), and biweekly during the Summer semester (with certain exceptions). It was established on November 17, 1911. Blueprint is Tech's yearbook, established in 1908. Other student publications include The North Avenue Review, Tech's "free-speech magazine"; Erato, Tech's literary magazine; The Tower, Tech's undergraduate research journal; and T-Book, the student handbook detailing Tech traditions. The offices of all student publications are located in the Student Services Building.
Greek life
Greek life at Georgia Tech includes over 50 active chapters of social fraternities and sororities. All of the groups are chapters of national organizations, including members of the North American Interfraternity Conference, National Panhellenic Conference, and National Pan-Hellenic Council. The first fraternity to establish a chapter at Georgia Tech was Alpha Tau Omega in 1888, before the school held its first classes. The first sorority to establish a chapter was Alpha Xi Delta in 1954. In 2019, 28% of undergraduate men and 33% of undergraduate women were active in Tech's Greek system.
Athletics
Georgia Tech teams are variously known as the Yellow Jackets, the Ramblin' Wreck, and the Engineers, but the official nickname is Yellow Jackets. They compete at the National Collegiate Athletic Association (NCAA) Division I level (Football Bowl Subdivision (FBS) for football), primarily in the Atlantic Coast Conference (ACC) for all sports since the 1979–80 season (a year after officially joining the conference before beginning conference play), and in the Coastal Division in any sports split into a divisional format since the 2005–06 season. The Yellow Jackets previously competed as a charter member of the Metro Conference from 1975–76 to 1977–78, as a charter member of the Southeastern Conference (SEC) from 1932–33 to 1963–64, as a charter member of the Southern Conference (SoCon) from 1921–22 to 1931–32, and as a charter member of the Southern Intercollegiate Athletic Association (SIAA) from 1895–96 to 1920–21. They also competed as an Independent from 1964–65 to 1974–75 and in the 1978–79 season. Men's sports include baseball, basketball, cross country, football, golf, swimming & diving, tennis, and track & field; women's sports include basketball, cross country, softball, swimming & diving, tennis, track & field, and volleyball.
The Institute mascots are Buzz and the Ramblin' Wreck. The Institute's traditional football rival is the University of Georgia; the rivalry is considered one of the fiercest in college football. The rivalry is commonly referred to as Clean, Old-Fashioned Hate, which is also the title of a book about the subject. There is also a long-standing rivalry with Clemson. Tech has seventeen varsity sports: football, women's and men's basketball, baseball, softball, volleyball, golf, men's and women's tennis, men's and women's swimming and diving, men's and women's track and field, and men's and women's cross country. Four Georgia Tech football teams were selected as national champions in news polls: 1917, 1928, 1952, and 1990. In May 2007, the women's tennis team won the NCAA National Championship with a 4–2 victory over UCLA, the first ever national title granted by the NCAA to Tech.
Fight songs
Tech's fight song "I'm a Ramblin' Wreck from Georgia Tech" is known worldwide. First published in the 1908 Blue Print, it was adapted from an old drinking song ("Son of a Gambolier") and embellished with trumpet flourishes by Frank Roman. Then-Vice President Richard Nixon and Soviet Premier Nikita Khrushchev sang the song together when they met in Moscow in 1958 to reduce the tension between them. As the story goes, Nixon did not know any Russian songs, but Khrushchev knew that one American song as it had been sung on The Ed Sullivan Show.
"I'm a Ramblin' Wreck" has had many other notable moments in its history. It is reportedly the first school song to have been played in space. Gregory Peck sang the song while strumming a ukulele in the movie The Man in the Gray Flannel Suit. John Wayne whistled it in The High and the Mighty. Tim Holt's character sings a few bars of it in the movie His Kind of Woman. There are numerous stories of commanding officers in Higgins boats crossing the English Channel on the morning of D-Day leading their men in the song to calm their nerves. It is played after every Georgia Tech score in a football game.
Another popular fight song is "Up With the White and Gold", which is usually played by the band preceding "Ramblin' Wreck". First published in 1919, "Up with the White and Gold" was also written by Frank Roman. The song's title refers to Georgia Tech's school colors and its lyrics contain the phrase, "Down with the Red and Black", an explicit reference to the school colors of the University of Georgia and the then-budding Georgia Tech–UGA rivalry.
Club sports
Georgia Tech participates in many non-NCAA sanctioned club sports, including airsoft, boxing, crew, cricket, cycling (winning three consecutive Dirty South Collegiate Cycling Conference mountain bike championships), disc golf, equestrian, fencing, field hockey, gymnastics, ice hockey, kayaking, lacrosse, paintball, roller hockey, soccer, rugby union, sailing, skydiving, swimming, table tennis, triathlon, ultimate, water polo, water ski, and wrestling. Many club sports take place at the Georgia Tech Aquatic Center, where swimming, diving, water polo, and the swimming portion of the modern pentathlon competitions for the 1996 Summer Olympics were held. In 2018, the first annual College Club Swimming national championship meet was held at the McAuley Aquatic Center and the hosts, the Georgia Tech Swim Club, were crowned the first-ever club swimming and diving national champions.
Traditions
Georgia Tech has a number of legends and traditions, some of which have persisted for decades. The most notable of these is the popular but rare tradition of stealing the 'T' from Tech Tower. Tech Tower, Tech's historic primary administrative building, has the letters "TECH" hanging atop it on each of its four sides. There have been several attempts by students to orchestrate complex plans to steal the huge symbolic letter T, and on occasion they have carried this act out successfully.
One of the cherished holdovers from Tech's early years is the steam whistle, which blew five minutes before the hour, every hour from 7:55 a.m. to 5:55 p.m. However, starting in the fall semester of 2017, due to a new classroom scheduling template, the whistle no longer adheres to this convention and follows a modified schedule. The whistle also blows every spring during the "When the Whistle Blows" remembrance ceremony. The faculty newspaper is named The Whistle.
School colors
Georgia Tech students hold a heated, long and ongoing rivalry with the University of Georgia, known as Clean, Old-Fashioned Hate. The first known hostilities between the two institutions trace back to 1891. The University of Georgia's literary magazine proclaimed UGA's colors to be "old gold, black, and crimson". Dr. Charles H. Herty, then President of the University of Georgia, felt that old gold was too similar to yellow and that it "symbolized cowardice". After the 1893 football game against Tech, Herty removed old gold as an official color. Tech would first use old gold for their uniforms, as a proverbial slap in the face to UGA, in their first unofficial football game against Auburn in 1891. Georgia Tech's school colors would henceforth be old gold and white.
In April 2018, Georgia Tech went through a comprehensive brand redefinition, solidifying Tech Gold and White as the primary school colors, while Navy Blue serves as the contrasting secondary color. The decision to move forward with gold, white, and blue is rooted in history: the first mention of official Georgia Tech class colors came in the Atlanta Constitution in 1891 (white, blue, and gold), and the first GT class ring, in 1894, also featured gold, white, and blue.
Mascots
The official mascot of Georgia Tech is Buzz, costumed in plush to look like a yellow jacket. Buzz enters football games at the sound of swarming yellow jackets and proceeds to do a flip on the fifty-yard-line GT logo. He then bull rushes the goal post and has been known to knock it out of alignment before football games. Buzz is also notorious for crowd surfing and general light-hearted trickery among Tech and rival fans.
The Ramblin' Wreck was the first official mascot of Georgia Tech. It is a 1930 Ford Model A Sports Coupe. The Wreck has led the football team onto the field every home game since 1961. The Wreck features a gold and white paint job, two gold flags emblazoned with the words "To Hell With Georgia" and "Give 'Em Hell Tech", and a white soft top. The Wreck is maintained by the Ramblin' Reck Club, a selective student leadership organization on campus.
Spirit organizations
The Ramblin' Reck Club is charged with upholding all school traditions and creating new traditions such as the SWARM. The SWARM is a 900-member spirit group seated along the north end zone or on the court at basketball games. This is the group that typically features body painting, organized chants, and general fanaticism.
The marching band that performs at halftime and after big plays during the football season is clad in all white and sits next to SWARM at football games, providing a contrast of white and gold in the north end zone. The band is also the primary student organization on campus that upholds the tradition of RAT caps, wherein band freshmen wear the traditional yellow cap at all band events.
Fight songs and chants
The band plays the fight songs "Ramblin' Wreck from Georgia Tech" and "Up With the White and Gold" after every football score and between every basketball period. At the end of a rendition of either fight song, there is a series of drum beats followed by the cheer "Go Jackets" three times (each time followed by a second cheer of "bust their ass"), then a different drum beat and the cheer "Fight, Win, Drink, Get Naked!" The official cheer only includes "Fight, Win", but most present other than the band and cheerleaders will yell the extended version.
It is also tradition for the band to play "When You Say Budweiser" after the third quarter of football games and during the second-to-last official timeout of every basketball game. During the "Budweiser Song", all of the fans in the stadium alternate bending their knees and standing up straight. Other notable band songs are Michael Jackson's "Thriller", played at half-time at the Thrillerdome, and Ludacris' "Move Bitch", played for large gains in football. Another popular chant is called the Good Word, and it begins with asking, "What's the Good Word?" The response from all Tech faithful is, "To Hell With Georgia." The same question is asked three times, and then the follow-up is asked, "How 'bout them dogs?" And everyone yells, "Piss on 'em."
Notable people
There are many notable graduates, non-graduate former students and current students of Georgia Tech. Georgia Tech alumni are known as Yellow Jackets. According to the Georgia Tech Alumni Association:
The first class of 95 students entered Georgia Tech in 1888, and the first two graduates received their degrees in 1890. Since then, the institute has greatly expanded, with an enrollment of 14,558 undergraduates and 6,913 postgraduate students.
Many distinguished individuals once called Georgia Tech home, the most notable being Jimmy Carter, former President of the United States and Nobel Peace Prize winner, who briefly attended Georgia Tech in the early 1940s before matriculating at and graduating from the United States Naval Academy. Juan Carlos Varela, a 1985 industrial engineering graduate, was elected president of Panama in May 2014. Another Georgia Tech graduate and Nobel Prize winner, Kary Mullis, received the Nobel Prize in Chemistry in 1993. A large number of businesspeople (including but not limited to prominent CEOs and directors) began their careers at Georgia Tech. Some of the most successful of these are Charles "Garry" Betty (CEO Earthlink), David Dorman (CEO AT&T Corporation), Mike Duke (CEO Wal-Mart), David C. Garrett Jr. (CEO Delta Air Lines), and James D. Robinson III (CEO American Express and later director of The Coca-Cola Company).
Tech graduates have been deeply influential in politics, military service, and activism. Atlanta mayor Ivan Allen Jr. and former United States Senator Sam Nunn both made significant changes from within their elected offices. Former Georgia Tech President G. Wayne Clough was also a Tech graduate, the first Tech alumnus to serve in that position. Many notable military commanders are alumni: James A. Winnefeld Jr. served as the ninth Vice Chairman of the Joint Chiefs of Staff, Philip M. Breedlove served as the Commander, U.S. Air Forces in Europe, William L. Ball was the 67th Secretary of the Navy, John M. Brown III was the Commander of the United States Army Pacific Command, and Leonard Wood was Chief of Staff of the Army and a Medal of Honor recipient for helping capture the Apache chief Geronimo. Wood was also Tech's first football coach and (simultaneously) the team captain, and was instrumental in Tech's first-ever football victory in a game against the University of Georgia. Thomas McGuire was the second-highest-scoring American ace during World War II and a Medal of Honor recipient.
Numerous astronauts and National Aeronautics and Space Administration (NASA) administrators spent time at Tech. Most notably, retired Vice Admiral Richard H. Truly was the eighth administrator of NASA and later served as the president of the Georgia Tech Research Institute. John Young walked on the Moon as commander of Apollo 16, was the first commander of the Space Shuttle, and is the only person to have piloted four different classes of spacecraft. Georgia Tech has its fair share of noteworthy engineers, scientists, and inventors: Nobel laureate Kary Mullis developed the polymerase chain reaction, Herbert Saffir developed the Saffir–Simpson hurricane scale, and W. Jason Morgan made significant contributions to the theory of plate tectonics and geodynamics. In computer science, Krishna Bharat developed Google News, and D. Richard Hipp developed SQLite. Architect Michael Arad designed the World Trade Center Memorial in New York City.
Despite their highly technical backgrounds, Tech graduates are no strangers to the arts or athletic competition. Among them, comedian/actor Jeff Foxworthy of Blue Collar Comedy Tour fame and Randolph Scott both called Tech home. Several famous athletes have, as well; about 150 Tech students have gone into the National Football League (NFL), with many others going into the National Basketball Association (NBA) or Major League Baseball (MLB). Well-known American football athletes include all-time greats such as Joe Hamilton, Pat Swilling, Billy Shaw, and Joe Guyon, former Tech head football coaches Pepper Rodgers and Bill Fulcher, and recent students such as Calvin Johnson and Tashard Choice. Some of Tech's recent entrants into the NBA include Josh Okogie, Chris Bosh, Derrick Favors, Thaddeus Young, Jarrett Jack, and Iman Shumpert. Award-winning baseball stars include Kevin Brown, Mark Teixeira, Nomar Garciaparra, and Jason Varitek. In golf, Tech alumni include the legendary Bobby Jones, who founded The Masters, and David Duval, who was ranked the No. 1 golfer in the world in 1999.
See also
List of colleges and universities in metropolitan Atlanta
References
Further reading
External links
Georgia Tech Athletics website
Universities and colleges in Atlanta
Educational institutions established in 1885
Engineering universities and colleges in Georgia (U.S. state)
Midtown Atlanta
Technological universities in the United States
Universities and colleges accredited by the Southern Association of Colleges and Schools
Georgia Tech
1885 establishments in Georgia (U.S. state) |
51002505 | https://en.wikipedia.org/wiki/IEEE%20Rebooting%20Computing | IEEE Rebooting Computing | The Task Force on Rebooting Computing (TFRC), housed within the IEEE Computer Society, is the new home for the IEEE Rebooting Computing Initiative. Founded in 2013 by the IEEE Future Directions Committee, Rebooting Computing has provided an international, interdisciplinary environment where experts from a wide variety of computer-related fields can come together to explore novel approaches to future computing. IEEE Rebooting Computing began as a global initiative launched by IEEE that proposed to rethink the concept of computing through a holistic look at all aspects of computing, from the device itself to the user interface. As part of its work, IEEE Rebooting Computing provides access to various resources like conferences and educational events, feature and scholarly articles, reports, and videos.
History
IEEE Future Directions Committee established an "IEEE Rebooting Computing" working group in late 2012 with the broad vision of "rebooting" the entire field of computer technology. The activities of this working group are carried out by the IEEE Rebooting Computing Committee, a team of volunteers from ten participating IEEE Societies and Councils, in conjunction with IEEE Future Directions staff members.
The term "rebooting computing" was coined by IEEE Life Fellow, Peter Denning, as part of an early U.S. National Science Foundation-sponsored project focused on revamping computer education.
In order to achieve its goal of rebooting computing, IEEE Rebooting Computing hosted four invitation-only summits between 2013 and 2015 in Washington, D.C., and Santa Cruz, California. These summits addressed the future of computing from a holistic point of view.
In 2014, IEEE Rebooting Computing adopted its logo, consisting of an exploding infinity symbol. The logo is intended to suggest the absence of limits for future computing technology.
IEEE Rebooting Computing announced the signing of a Memorandum of Understanding (MOU) with the International Technology Roadmap for Semiconductors (ITRS) in March 2015. This led in May 2016 to the formation of the IEEE International Roadmap for Devices and Systems (IRDS), which incorporated the previous mission of ITRS in semiconductor device fabrication and expanded it to encompass alternative technologies, computer architectures, and system applications.
In September 2015, IEEE Rebooting Computing announced support for the National Strategic Computing Initiative (NSCI). Established under Executive Order 13702, issued by U.S. President Barack Obama in July 2015, the NSCI calls for a coordinated Federal strategy in high-performance computing (HPC) research, development, and deployment.
In October 2015, the National Nanotechnology Initiative (NNI), an interagency program of the U.S. government, announced a "Nanotechnology-Inspired Grand Challenge in Future Computing". A key document cited by NNI as part of this grand challenge is a white paper, co-sponsored by IEEE Rebooting Computing and ITRS, entitled Sensible Machines.
In 2017, the IEEE New Initiatives Committee renewed the mandate of the Rebooting Computing Initiative, with five major activities: the International Conference on Rebooting Computing (ICRC), IRDS, the Industry Summit on the Future of Computing, the Low-Power Image Recognition Challenge (LPIRC), and a Workshop on the Confluence of Artificial Intelligence and Cybersecurity. In 2018, a new activity was added to promote the development of quantum computing.
Purpose
The IEEE Rebooting Computing Task Force aims to help return the computing industry to exponential computer-performance scaling, which stalled in 2005 due to the energy inefficiencies of CMOS-based classical computers. Historically, computer processing power doubled roughly every 18 months as transistor densities increased. To alleviate challenges brought on by limitations in computer architectures and to sustain regular performance gains, the industry moved toward instruction-level parallelism and superscalar microprocessors. However, the rising costs associated with the greater power consumption of this approach signaled the end of Moore's Law, and IEEE introduced the IEEE Rebooting Computing initiative in response.
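The scale of the historical trend the initiative seeks to restore can be illustrated with a short calculation. This is an illustration only: the 18-month doubling period is the rule of thumb cited above, and the ten-year window is an arbitrary choice for the example.

```python
# Illustration of the historical "doubling every 18 months" trend.
# The doubling period and the ten-year window are assumptions for
# the sake of the example, not figures from the article.

def growth_factor(years, doubling_period_months=18):
    """Multiplicative performance gain after `years` of doubling
    every `doubling_period_months` months."""
    doublings = years * 12 / doubling_period_months
    return 2 ** doublings

# Gain over one decade of uninterrupted exponential scaling.
print(f"10-year gain: ~{growth_factor(10):,.0f}x")  # ~102x
```

A decade of uninterrupted 18-month doubling yields roughly a hundredfold performance gain, which conveys why the post-2005 stall is treated as an industry-wide problem.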
The initiative incorporates three fundamental pillars of rebooting computing: energy efficiency, security, and the human–computer interface (HCI). Through these, it seeks to overcome setbacks and challenges relating to the deceleration of computational power and capacity. In turn, these efforts may also be applied in other technology sectors, such as the Internet of Things.
Current work
With the goal of identifying new directions in computing and aiding industry in returning to historical exponential scaling of computer performance, IEEE Rebooting Computing encompasses a variety of activities, products, and services. Among these efforts are an online web portal, technical community, publications, conferences, and events. IEEE Rebooting Computing also maintains a collaborative partnership with IRDS, as well as responding to and participating in national and international initiatives, the NSCI and the "Nanotechnology Inspired Grand Challenge for Future Computing".
IEEE Rebooting Computing Web Portal
The web portal is the primary online home for IEEE Rebooting Computing. The website provides relevant news, information, and resources to users, such as articles authored by IEEE experts and third-party publications. It also includes access to a list of both IEEE-sponsored and general industry conferences and events, videos, and historical data from IEEE Rebooting Computing's past summits.
IEEE Rebooting Computing Podcasts
The web portal also hosts the IEEE Rebooting Computing Podcast, which is a collection of interviews with leaders in the field, updated monthly. This collection is also hosted on the video website IEEE.tv.
IEEE Rebooting Computing Technical Community
IEEE Technical Communities are virtual communities for practitioners, subject matter experts, researchers, and other technology professionals interested in specific topic areas. Open to any interested individual, the IEEE Computer Society Rebooting Computing Technical Community serves as a venue for the distribution and dissemination of news, announcements, and other information from those societies and councils taking part in the IEEE Rebooting Computing initiative. An email newsletter is distributed monthly to several thousand community members, and includes free access to specially selected recent articles of interest from the IEEE Xplore library of journals and conference proceedings. IEEE membership is not required to become a member of the IEEE Rebooting Computing Technical Community.
IEEE Rebooting Computing conferences and events
IEEE Rebooting Computing sponsors, co-sponsors, and takes part in a variety of technology conferences and events worldwide. Conference and event programming is designed to stimulate discussion of existing and emerging technologies, including challenges, benefits, and opportunities. Typically lasting anywhere from a single day to a week or more, conference and event programming generally encompasses keynote addresses, panel discussions, paper presentations, poster sessions, tutorials, and workshops in one or more tracks.
IEEE Rebooting Computing Summits
During its first several years, the initiative's flagship event series was its Rebooting Computing Summits. The inaugural IEEE Rebooting Computing Summit was held in December 2013 in Washington, D.C. The event drew business and industry, government, and academic representatives both from the U.S. and internationally for a variety of plenary lectures and brainstorming sessions.
Based on the first event, a second IEEE Rebooting Computing Summit was held in May 2014 in Santa Cruz, California. Following a similar format to the first summit, a group of invited business and trade, academia, and government experts took part in discussing neuromorphic engineering, approximate computing, and adiabatic / reversible computing.
With the first two summits serving as the event's basis, IEEE Rebooting Computing held a third summit in October 2014, in Scotts Valley, California. The theme for the third summit was "Rethinking Structures of Computation", and focused on the topics of parallel computing, security, approximation, and Human-Computer Interaction. As part of the event, attendees took part in plenary talks, a poster session, and heard details of a new government initiative in future computing research.
A fourth IEEE Rebooting Computing Summit (RCS4), with a theme of "Roadmapping the Future of Computing: Discovering How We May Compute" was held in December 2015, in Washington D.C. The event included plenary talks and breakout groups in the three tracks of "Probabilistic/Approximate Computing", "Neuromorphic Computing", and "Beyond CMOS/3D Computing", with a fourth track on "Superconducting Computing". The summit also hosted speakers from other programs promoting future computing, both governmental and industrial, including DARPA, Intelligence Advanced Research Projects Activity (IARPA), ITRS, NSCI, Office of Science and Technology Policy (OSTP), and Semiconductor Research Corporation.
IEEE International Conference on Rebooting Computing
A larger, open conference, the IEEE International Conference on Rebooting Computing (ICRC 2016), was held in October 2016, in San Diego, California. The goal of ICRC 2016 was to discover and foster novel methodologies to reinvent computing technology, including new materials and physics, devices and circuits, system and network architectures, and algorithms and software. Proceedings of the event have been published by IEEE, and videos of many of the presentations are available online. The second conference in this series, ICRC 2017, was held in November 2017 in Washington, DC, as part of IEEE Rebooting Computing Week. A third conference in this series, ICRC 2018, was held in Washington, DC in November 2018. ICRC 2019 is being planned for November 2019, tentatively in the San Francisco Bay area.
IEEE Industry Summit on the Future of Computing
In November 2017, IEEE Rebooting Computing also sponsored a distinct one-day summit, following ICRC, which addressed similar topics but with a somewhat different focus and audience. This Industry Summit featured plenary presentations by industry, government, and academic leaders on what we can expect for new computer technologies in coming decades. For example, this featured a new public announcement from IBM Research on a breakthrough in quantum computing technology. Other topics of interest included artificial intelligence, machine learning, memory-driven computing, and heterogeneous computing. A second Industry Summit was held in 2018, and plans are to continue this again in November 2019.
Low-Power Image Recognition Challenges
In June 2015, IEEE Rebooting Computing held the first-ever Low-Power Image Recognition Challenge (LPIRC). Held as a one-day workshop during the 2015 Design Automation Conference in San Francisco, California, the competition aimed to assess the state of low-power approaches to object detection in images. The competition fielded competitors from four different countries and included teams from Carnegie Mellon, Rice University, Tsinghua University, and Huawei.
Before the competition, training data for detection was released from the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC). Source code of the referee system was released to the public in March 2015. For the competition, an intranet was established from which contestants retrieved the provided image files and to which they returned answers for the referee system. Teams were given 10 minutes to process images and were ranked by detection accuracy and energy usage.
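A joint ranking on accuracy and energy, as described above, is commonly collapsed into a single accuracy-per-energy score. The sketch below assumes a hypothetical mAP-per-watt-hour metric and invented sample entries; the article does not specify LPIRC's actual scoring formula.

```python
# Hypothetical ranking of LPIRC-style entries by accuracy per unit
# of energy. The metric (mAP / watt-hours) and the sample data are
# illustrative assumptions, not the competition's actual rules.

def score(accuracy_map, energy_wh):
    """Accuracy-per-energy score: higher is better."""
    return accuracy_map / energy_wh

entries = {
    "team_a": (0.42, 1.8),  # (detection mAP, energy in watt-hours)
    "team_b": (0.35, 1.1),
    "team_c": (0.50, 2.9),
}

# Sort team names by score, best first.
ranking = sorted(entries, key=lambda t: score(*entries[t]), reverse=True)
print(ranking)
```

Note how such a metric can favor a less accurate but far more frugal entry (here, team_b) over one with the highest raw accuracy, which is the point of a low-power challenge.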
A second LPIRC was held during the June 2016 Design Automation Conference in Austin, Texas. A third LPIRC was held in July 2017 as part of the Computer Vision and Pattern Recognition Conference (CVPR) in Honolulu, Hawaii. In 2018, two LPIRC competitions were held, one at CVPR in Salt Lake City, Utah in June, and a second online competition in November. These included major new sponsors Google and Facebook. LPIRC 2019 is being planned.
An overview of the first three years of LPIRC was presented at the 2018 IEEE Conference on Design Automation and Test in Europe.
IEEE Workshop on Cybersecurity and Artificial Intelligence
In October 2017, a three-day IEEE Confluence Event was held, bringing together leaders in the fields of cybersecurity and artificial intelligence/machine learning (AI/ML). This workshop was co-chaired by Dr. Dejan Milojicic, co-chair of Rebooting Computing. The aim was to develop a strategy to coordinate efforts to apply AI/ML to improve cybersecurity worldwide. Following this workshop, an IEEE Trend Paper was published entitled “Artificial Intelligence and Machine Learning Applied to Cybersecurity”, with recommendations for new standards and regulations for industry and government. A second workshop was held in November 2018, with plans to continue this effort in the future.
IEEE Quantum Computing Summit
With the growing interest and technological developments in quantum computing, IEEE determined in 2018 to expand its role in establishing metrics and benchmarks in this nascent field. This effort has been led by Dr. Erik DeBenedictis, one of the co-chairs of Rebooting Computing. An invitation-only summit was held in August 2018 in Atlanta, Georgia, with leaders from industry, academia, and government, and led to a white paper on the subject.
IEEE Rebooting Computing Week
Starting in 2017, “Rebooting Computing Week” was created to have a common location for annual conferences and workshops associated with Rebooting Computing. In 2017 and 2018, this was held in November in the Washington, DC area. Events in 2018 included ICRC, the Industry Summit, IRDS Workshop, the Cybersecurity Workshop, and the Quantum Computing Workshop. Plans for 2019 are to have Rebooting Computing Week in the San Francisco Bay area, during November.
Publications
As part of the initiative's work, IEEE Rebooting Computing members and societies regularly publish papers, manuscripts, journals and magazines, and other documents. Among the various IEEE publications IEEE Rebooting Computing contributes to or features articles from on its web portal are Computer; IEEE Journal on Emerging and Selected Topics in Circuits and Systems; IEEE Journal on Exploratory Solid-State Computational Devices and Circuits; IEEE Solid-State Circuits Magazine; IEEE Spectrum; and Proceedings of the IEEE.
In December 2015, Computer published a special issue on rebooting computing, with members of the IEEE Rebooting Committee as guest editors and contributors. In November 2016, the Italian online magazine Mondo Digitale published an article entitled "Rebooting Computing: Developing a Roadmap for the Future of the Computer Industry." In March 2017, Computing in Science and Engineering published a special issue on "The End of Moore's Law", addressing alternative approaches to maintaining exponential growth in performance, even as classic device scaling may be ending.
Since 2016, Computer has published a series of columns under the heading “Rebooting Computing”, coordinated by RC co-chair Dr. Erik DeBenedictis. Recent titles have included:
A Role for IEEE in Quantum Computing
Rebooting Computing to Avoid Meltdown and Spectre
Opportunities and Controversies of Reversible Computing
Computer Architecture's Changing Role in Rebooting Computing
IEEE Rebooting Computing also contributes to a variety of trade publications and news outlets, such as EE Times and Scientific Computing.
Participating IEEE societies
IEEE Rebooting Computing began as a multi-society effort, with participation from a cross-section of IEEE societies with interest in numerous aspects of computing, including circuits and systems design, architectures, design automation, magnetics, nanotechnology, reliability, and superconductors.
IEEE Societies and Councils taking part in the IEEE Rebooting Computing initiative:
IEEE Circuits and Systems Society
IEEE Components, Packaging, and Manufacturing Technology Society
IEEE Computer Society
IEEE Council on Electronic Design Automation
IEEE Technical Council on Superconductivity
IEEE Electron Devices Society
IEEE Magnetics Society
IEEE Nanotechnology Council
IEEE Reliability Society
IEEE Solid-State Circuits Society
Collaboration with ITRS and IRDS
IEEE Rebooting Computing has established a collaborative relationship with the ITRS, starting with an exchange of information in 2014. Following the signing of a formal collaboration agreement, IEEE Rebooting Computing and ITRS arranged and held joint international workshops in 2015 with the objective of identifying computer performance scaling challenges and establishing a roadmap to successfully restart computer performance scaling. IEEE Rebooting Computing further collaborated with ITRS on a new effort, known as ITRS 2.0, that extends beyond traditional Moore's Law scaling of chips to include roadmaps covering systems and applications.
ITRS Chairman Paolo Gargini said, "The ITRS shares IEEE Rebooting Computing's mission to restore computing to its historic exponential performance scaling trends so our society and future societies can benefit. Our agreement will ensure we help fundamentally shift the computer industry's focus, resources, time and attention on to new possibilities for computational performance."
On May 4, 2016, IEEE announced the launch of the "International Roadmap for Devices and Systems" (IRDS), operating as part of the IEEE Standards Association's (IEEE-SA) Industry Connections program. IRDS is sponsored by IEEE Rebooting Computing in consultation with the IEEE Computer Society and ITRS. IRDS will provide guidance on future trends in computer systems, architectures, software, chips, and other components across the entire computer industry, and is modeled on ITRS roadmaps that have previously guided the semiconductor industry during the Moore's Law era. The first IRDS Roadmap was released in the first quarter of 2018 on the IRDS Web Portal.
Influence and impact
Through its summits and conferences, other educational efforts, and engagement with government, the IEEE Rebooting Computing initiative has begun to influence both the technology industry and national policy efforts. The initiative is releasing the IRDS Roadmap of Future Computing, which includes the development of performance benchmarks and standards for new classes of computer systems.
Addressing roadblocks in future high-performance computing, also known as exascale computing, is a key area of focus for IEEE Rebooting Computing. The initiative has been actively pursuing and aiding the industry in making progress toward possible solutions such as specialized chip architectures, millivolt switches, and 3D integrated circuits, as noted by Dr. Erik DeBenedictis of Sandia National Laboratories in "Power Problems Threaten to Strangle Exascale Computing".
In February 2015, IEEE Rebooting Computing Senior Program Director Bichlien Hoang and co-author Sin-Kuen Hawkins received a "Best Presentation Award" for their paper, "How Will Rebooting Computing Help IoT". Presented at the 18th International Conference on Intelligence in Next Generation Networks (ICIN 2015) in Paris, France, the paper described IEEE Rebooting Computing's approach to addressing technical challenges generated by IoT and other key computing trends.
One of the key features of the future computing environment is its heterogeneous nature, combining different types of processors. In January 2018, the Office of Advanced Scientific Computing Research of the US Department of Energy held a Workshop on Extreme Heterogeneity. The invited plenary talk of the workshop was on the IEEE Rebooting Computing Initiative, and was presented by Prof. Tom Conte of Georgia Tech, co-chair of the Initiative.
Media coverage
Media coverage of IEEE Rebooting Computing's efforts has increased. In May 2016, a New York Times feature article on the technological and economic implications of the end of Moore's Law quoted IEEE Rebooting Computing Co-Chair Professor Thomas M. Conte of the Georgia Institute of Technology as saying, "The end of Moore's Law is what led to this. Just relying on the semiconductor industry is no longer enough. We have to shift and punch through some walls and break through some barriers."
Among other publications reporting on IEEE Rebooting Computing activities are EE Times; HPCWire; IEEE Spectrum; Inside HPC; Scientific Computing; SiliconANGLE; and VR World.
For example, in November 2018, Forbes Magazine published an article entitled, "IEEE Roadmaps Guide Future Memories and Applications" featuring IRDS. The 2017 ICRC was featured in a Spectrum news article entitled, "Four Strange New Ways to Compute".
See also
Association for Computing Machinery
Big Data
Computer architecture
Computer security
Computer vision
IEEE Cloud Computing
Supercomputer
References
External links
IEEE Rebooting Computing; 4:46 minutes
Rebooting Computing: Elie Track's Keynote at ΗΚΝ Student Leadership Conference 2016; 27:24 minutes
Approximate Computing: An Emerging Paradigm For Energy-Efficient Design
2nd IEEE Rebooting Computing Summit; 2:23 minutes
At IEEE's Rebooting the Computer Conference, A New Economy of Memory Abundance
OSTP Nanotechnology‐Inspired Grand Challenge: Sensible Machines
Computing
Cyberspace
American engineering organizations
Institute of Electrical and Electronics Engineers
International nongovernmental organizations
Piscataway, New Jersey |
56964307 | https://en.wikipedia.org/wiki/ABViewer | ABViewer | ABViewer is multifunctional software for working with AutoCAD DWG, DXF, PLT, STEP, IGES, STL and other 2D and 3D CAD files. The application allows creating and editing drawings, as well as saving them to AutoCAD DWG/DXF, PDF, JPG and a number of other vector and raster file formats.
The software was developed by CADSoftTools in 2003. Since that time, the program has been translated into more than 30 languages and now it supports more than 50 2D/3D vector and raster formats.
History
The early version of ABViewer was a viewer that also allowed merging CAD files, copying BMP and EMF images to the clipboard, and printing a group of files. Initially, the program supported 20 languages and was available in two versions: Standard and Professional. By 2007, ABViewer was no longer just a viewer: it had become a tool for viewing, editing and converting files, with a full set of professional editing tools and support for operations used in design and project work. As a converter, ABViewer made it possible to convert selected parts of a drawing. Today, depending on functionality, the program is available in three versions: Standard, Professional and Enterprise.
Features
Viewer
ABViewer allows viewing 2D and 3D drawings. It supports work with drawing layers and layouts; users can reposition drawings, i.e. zoom and rotate 3D models, as well as change the drawing display view and mode, and can navigate through files by means of the Thumbnails window. It is possible to hide/show texts and dimensions as well as measure drawings (both in 2D and 3D modes). The program not only displays drawings but also provides access to drawing properties and structure.
Editor
The Editor mode enables users to create drawings from scratch as well as edit loaded files. ABViewer offers a wide range of tools for working with CAD drawings: drawing tools (used to add entities), modification tools (for editing existing entities), different types of snap, and work with blocks and external references.
Saving and printing
ABViewer allows saving as well as printing drawings: saving to vector and raster file formats; saving DWG/DXF drawings to G-code; extended printing settings (multipage printing, print preview, plot settings). Batch operations with multiple files are available too: batch conversion and batch print.
Advanced features
ABViewer also provides additional functionality for work with drawings: conversion of PDF drawings into editable DWG files, the Redline mode for adding markups, comparison of DWG/DXF drawing revisions, georeferencing, work from the command line, LISP support as well as XML support.
Supported formats
ABViewer allows viewing more than 50 vector and raster 2D and 3D formats.
The program opens archived drawings too: ZIP, 7z, RAR, CAB, BZIP, TAR.
Localization
ABViewer is fully translated into the following languages: Chinese, Czech, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Brazilian Portuguese, Russian and Spanish. ABViewer documentation and detailed help system are available in English, German, French and Russian.
Reception
Ron LaFon of Cadalyst compared nine different CAD viewers in 2008, and ABViewer v6.2 was among them.
He stated that the program supported many languages (32 in total at that time), had a “clean and easy to understand” user interface and a variety of available features.
The Digital Engineering magazine named ABViewer (version 7) a “cost-efficient high-quality application” that could be used by engineers as well as by office workers. The Thumbnail visualization of the folder contents was highlighted in particular as this feature makes the search for the required files considerably easier.
Softpedia describes ABViewer version 14 as a modern tool that can help in a variety of tasks and is easy and user-friendly even for inexperienced users.
In addition, ABViewer is referenced as a tool for measuring, converting and viewing CAD files in a number of recent studies in different fields.
See also
CAD
DWG
ShareCAD, a free online service for viewing DWG and other CAD files
List of computer-aided technologies companies and their software products
Comparison of computer-aided design editors
References
External links
CADSoftTools official website
Computer-aided design software
Computer-aided design software for Windows
2003 software |
51241557 | https://en.wikipedia.org/wiki/Zapya | Zapya | Zapya (Chinese: 快牙; pinyin: kuai ya) is a peer-to-peer file sharing application that allows users to transfer files of any size and of any format without the need of an Internet connection. Dewmobile, Inc. initially conceived Kuai Ya in Silicon Valley, California in 2012 to target the Chinese market. However, demand for the application spread to neighboring countries such as Myanmar and Pakistan. When the international user base had grown to a reasonable size, Dewmobile created a separate application known as Zapya to publish on the Apple App Store and Google Play Store. While Kuai Ya and Zapya are similar to each other, they are distributed as different APKs and include different features in order to comply with Google Play Policies.
Zapya gained popularity in countries with low Internet penetration and poor Internet architecture because it allows users to share files without relying on an Internet or cellular data connection. The application is available on multiple platforms, including lower-end phone models, so that it is accessible to everyone. Users can transfer files of any kind and any size for free using a transfer method similar to Bluetooth and AirDrop. Some cellphone stores use Zapya's "Phone Replicate" feature to transfer the data from their customers' old phones to their new ones.
Impact
Zapya has become ingrained in Cuban youth culture due to limited Wi-Fi access in Cuba. The Miami Herald reported on 11 July 2015 on how Cuban tech start-ups use Zapya to overcome the lack of internet penetration and poor Internet architecture in Cuba. They also found that the youth of Cuba use Zapya as a free platform to talk to their friends and share funny videos and photos.
The popularity of Zapya in Cuba has only grown stronger over the years to the point that Cubans have coined the verb "zapyar" as a slang term to refer to sharing files. Cachivache Media deemed Zapya as "the network for the disconnected" in 2016. Travelers and students planning to study abroad in Cuba are recommended to download Zapya before going to the country.
Controversies
Temporary removal from Google Play Store
For a week in October 2019, Zapya was removed from Google Play Store and labeled as "harming other applications". It was determined that a third party software development kit (SDK) had been deemed harmful by the Google Play Policy team. Any applications that used this SDK were also deemed harmful. The SDK was promptly removed. Dewmobile issued a formal apology to users on 5 October 2019 and urged them to update to the new version that complied with Google Play Policies.
Provisional ban in China
Even though Kuai Ya complies with Chinese censorship restrictions, both it and Zapya are banned in the Chinese autonomous region of Xinjiang. In November 2019, the leaked China Cables revealed that the Chinese government's mass surveillance and predictive-policing program Integrated Joint Operation Platform (IJOP) had flagged 1.8 million users with Zapya and Kuai Ya on their phones for investigation as part of the crackdown on Uyghur Muslims. It is not known when the ban came into effect, but travelers and residents who are found with either application on their phones are now forced to uninstall them. Applications such as YouTube, Facebook, and Instagram are also banned in Xinjiang.
See also
Utility software
References
External links
Official website
File sharing software
Windows file sharing software
MacOS file sharing software
Android (operating system) software
IOS software |
67672534 | https://en.wikipedia.org/wiki/Formulary%20Book%20of%20Somogyv%C3%A1r | Formulary Book of Somogyvár | The Formulary Book of Somogyvár (, ) is a codex or formulary from the Kingdom of Hungary, which was written mainly in the second half of the 15th century and was expanded in the 16th century. Beside legal texts, the manuscript contains three annals which date back to the time of the Árpádian era, a genealogy of the Hungarian monarchs from Béla III to Ladislaus of Naples, a rhythmic list of kings and a record of events regarding the Ottoman–Habsburg wars in Hungary. The codex is kept in the Teleki Library in Târgu-Mureş, Romania.
Background
According to legal historian György Bónis, the 272-page document was written mainly between the 1460s and the end of the 1480s by an unidentified legal scholar of the royal court of King Matthias Corvinus. After his retirement, this scholar settled down in Somogyvár Abbey, an important place of authentication in the Kingdom of Hungary, where he copied and compiled his work with his own records from the royal court and his subsequent local legal practice. It is possible that this scholar is identical with the jurist John Izsó de Kékcse, who acted as secular notary and lawyer of the abbey in 1488. Following that, the formulary book and its three annales were extended and completed by two other unidentified authors who also resided in Transdanubia. Lastly, a fourth person possessed the text; he recorded some events of the Ottoman wars in the 16th century and acknowledged the legitimacy of John Zápolya during the civil war, while omitting any mention of Ferdinand of Habsburg.
The document made its way to Transylvania at an unknown time. Historian Dániel Bácsatyai considered that the Transylvanian Saxon pastor Michael Siegler possibly used the text when he wrote his historical work Chronologia rerum Hungaricum in the 1560s–1570s, since both authors give John Sigismund Zápolya's time of birth to the exact hour, besides other similarities regarding the 16th-century notes. Contrary to this, based on two attached copies of charters (issued in 1579 and c. 1592), Bónis argued that the Hungarian prelate István Szuhay brought the codex to the Principality of Transylvania when he was sent as an envoy to the court of Stephen Bocskai in the 1590s. The fate of the formulary book is unknown for the following two centuries. By 1794, the lawyer József Batz de Zágon possessed the codex. He donated it to the library of the Reformed Protestant High School in Marosvásárhely (legal predecessor of the Teleki Library) in 1811. György Bónis was the first historian to analyze the manuscript and determine the circumstances of its origin, in 1957, but he did not describe the text itself. László Solymosi provided a certified photocopy to the Diplomatic Photo Collection (DF) of the National Archives of Hungary. In the following decades, only the footnote on Palatine Thomas' death (from the 16th-century fourth author) received attention. Adrien Quéret-Podesta was the first scholar to analyze the texts of the three annals, in her 2009 study. Dániel Bácsatyai published the historical texts of the formulary book – the annals, the genealogy, the rhythmic list of kings and the 16th-century records – and translated them into Hungarian in 2019.
Content
Legal texts
The codex contains altogether 486 sections (copies of 446 charters, 3 clauses, a law text, 10 historical records, 24 notes and 2 additional charters from the 16th century). In terms of scope, two-fifths of the work consists of royal diplomas, between pages 119 verso and 226 verso (formulas no. 223–374). Other sections contain charters from both the royal chancellery and the places of authentication: from 1 recto to 119 recto (formulas no. 1–222) and from 227 recto to 272 verso (no. 375–456), covering three-fifths of the formulary book. The collection of charters does not follow a chronological order; the author copied each newly added diploma onto the next blank page. Based on the dates, György Bónis considered that the first author compiled the vast majority of his work between 1480 and 1486, just before the passing of the so-called Decretum maius, with which Matthias Corvinus ordered many previous contradictory decrees replaced with a systematic law-code. Bónis argued the formulary book is a valuable resource for presenting pre-Tripartitum legal life in Hungary, regarding private law, criminal law and litigation.
Bónis, after examining the content and form elements, defined the chapters of the formulary book as follows:
The author collected the documents to educate students and novice professionals, and provided the texts with a number of useful remarks, similarly to the 14th-century Ars Notarialis. The author tried to gather the entire material of a single lawsuit, so that the apprentice was able to trace all documents at all stages of the proceedings in a single case (for instance, the investigation stage, requests for postponement, sentencing after a long absence despite the summons, and applications for retrial). The author presents cases useful from the point of view of jurisprudence, covering as many ad hoc possibilities as possible (e.g. postponement of litigation due to the litigant's minority or participation in a military campaign, out-of-court settlement). For the primary purpose of illustrating case law, the author frequently modified the texts of the original diplomas, often deleting or changing the identity of the persons concerned. Multiple grammatical errors can be found in the texts as a result of repeated copying. György Bónis emphasized the lack of logical organization and consistency of the texts as well. The newer section, which contains documents from the places of authentication, reflects less instructional intent: after his retirement to the Somogyvár Abbey, legal practice mattered more than education, so the author collected and compiled a sample library for himself. Some organizing principles can be found here: lawyer advocates, last wills and testaments, pledges, petitions and omissions are each found in roughly one subchapter.
Historical texts
Of the 272 pages, 10 pages – between 258 recto and 267 verso – are related to historical and genealogical narratives, while the vast majority of the manuscript contains legal aids (for instance, a guide to recognizing non-authentic charters), texts and copies of authentic diplomas.
First annales
The first annales (called the "Christian Annals" by Quéret-Podesta) are found between 260 recto and the top of 262 recto (altogether five pages). Its first section contains Biblical events from the creation of the world (possibly based on Bede's chronica maiora) to the Acts of the Apostles and the subsequent history of the Roman Catholic Church (based on Regino of Prüm's Chronicon), thus placing Hungarian history in a universal context. The Biblical history contains 11 notes (5 from the Old Testament, 6 from the New Testament), while the history of the Catholic Church and early medieval Europe (until the reign of Charlemagne) is made up of 17 notes (altogether 28 notes). Hungarian events last from 993 (Stephen I's ascension to the Hungarian throne, which in fact occurred in 997) to 1291 (Andrew III's campaign against the Duchy of Austria). In the mid-16th century, the first annales were supplemented by a single footnote from the fourth author – the death and burial of Palatine Thomas in 1186.
Short notes on the events of the 11th century – mostly dealing with Hungarian saints – are related to the Annales Posonienses in their core material, according to Bácsatyai. The text does not refer to kings Stephen I and Ladislaus I as "saints" when it mentions their coronation and death, which testifies to the early origin of the annales. The annales accuse the "Hungarians" of killing Bishop Gerard of Csanád; thus, the original text could have been written in an ecclesiastical community where foreign priests lived. Only two notes narrate events from the 12th century: Stephen II's invasion of Dalmatia (1124) and the canonization of Ladislaus I ("1113", in fact 1192). Two-thirds of the Hungarian-related notes depict events from the 13th century. Dániel Bácsatyai considered this section the most valuable part of the entire formulary book. The text provides detailed genealogical data on Béla IV and his family. It uniquely gives the exact dates of the death of Queen Maria Laskarina (23 July 1270) and Béla, Duke of Slavonia (11 June 1269), while in the case of the king it gives a date of death one day earlier than the chronicle tradition – Friday, 2 May 1270 – which is also confirmed by the necrologium of the Oberalteich Abbey. Bácsatyai claimed the Hungarian chronicles put the date of his death at 3 May (also "Friday", which is, however, wrong) in retrospect, because of the feast of the Finding of the Holy Cross. A note also contains the date of the death of Francis of Assisi, as the single non-Hungarian event in this period. Therefore, Bácsatyai argued that this section of the first annales was originally written in the church of the Franciscans in Esztergom, where Béla IV and his family were also buried, so the exact dates of their deaths were known to the local friars. Historian Attila Zsoldos accepted this argument and, consequently, the reliability of the dates of death of the aforementioned royalties.
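Day-of-week claims like these can be checked mechanically with Zeller's congruence for the Julian calendar, which was in force in 13th-century Hungary. A minimal sketch (the function name is illustrative, not from the sources discussed):

```python
def julian_weekday(year, month, day):
    """Weekday of a Julian-calendar date via Zeller's congruence.

    January and February are counted as months 13 and 14 of the
    preceding year, as Zeller's formula requires.
    """
    if month < 3:
        month += 12
        year -= 1
    k, j = year % 100, year // 100  # year of the century, century
    h = (day + (13 * (month + 1)) // 5 + k + k // 4 + 5 - j) % 7
    return ["Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday"][h]

print(julian_weekday(1270, 5, 2))  # Friday  – the annales' date for Béla IV's death
print(julian_weekday(1270, 5, 3))  # Saturday – the chronicles' 3 May was not a Friday
print(julian_weekday(1290, 8, 6))  # Sunday  – Andrew III's coronation
```

The congruence confirms the internal consistency of the annales: 2 May 1270 was indeed a Friday, while the chronicle tradition's 3 May fell on a Saturday, and 6 August 1290 was a Sunday.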
Based on this, Bácsatyai considered the other notes on 13th-century events reliable too, though some differ from the accepted scholarly position. For instance, the first annales state that Béla IV was born in 1209, during a lunar eclipse. There is a scholarly consensus that the monarch was born in 1206, because, upon King Andrew II's initiative, Pope Innocent III had already appealed to the Hungarian prelates and barons on 7 June to swear an oath of loyalty to the King's future son. According to the pope's letter, this unnamed son was born by 29 November 1206. Bácsatyai claimed this son was an unidentified older brother of Béla, who died in childhood. According to him, Béla was born in either 1208 or 1209, when there were complete lunar eclipses visible in the territory of Hungary. In response, Zsoldos pointed out that Béla and his wife Maria married around 1220 and had already reached the age of majority by 1223, when King Andrew II persuaded Béla to separate from his wife, according to a letter of Pope Honorius III. There is also academic consensus – albeit based only on tradition (Mór Wertner) and not on a primary source – that Béla's younger brother, Coloman of Galicia, was indeed born in 1208; thus Bácsatyai's interpretation involving another unnamed Hungarian prince (born in 1206) is a fringe theory. The first annales narrate the civil war between Béla IV and his son Duke Stephen in a short sentence under the year 1267. Bácsatyai accepted this date, even though Hungarian historiography has uniformly placed the events from late 1264 to early 1265 since the seminal monograph of Gyula Pauler (A magyar nemzet története az Árpádházi királyok alatt, Vol. 1–2). Later, Bácsatyai also wrote a study for the journal Századok (2020), in which he sought to support the correctness of the year 1267 with foreign chronicles (for instance, the appendix of Jans der Enikel's Weltchronik) and set up a new chronology of the events, practically returning to the standpoint of pre-Pauler historiography.
Zsoldos, who had previously written the history of the civil war in 2007, contested this effort and argued that the narrations of certain royal charters – which make Bácsatyai's proposal unsustainable – are more reliable sources than foreign (mainly Austrian) chronicles, which contain many elements of fiction and deal only tangentially with the Hungarian civil war. The annales also name Duke Stephen's Cuman father-in-law "Semperchan". It is possible he is identical with Seyhan (Zayhan), whom Béla IV referred to as his "kinsman" in 1255. The first annales state that Andrew III was crowned king on 6 August 1290, a Sunday and the feast day of Pope Sixtus II. Bácsatyai accepted the reliability of the text, while the academic standpoint traditionally sets the date at 23 July based on references in the Illuminated Chronicle and the Steirische Reimchronik, which, however, are not free from difficulties of interpretation. Bácsatyai argued that Andrea Dandolo's chronicle confirms this date, according to which the coronation occurred during the feast of Saint Dominic (4 August). The historian considered that the Venetian chronicle misinterpreted the information and that the ceremony took place on the anniversary of the death of Dominic (6 August). Bácsatyai also analyzed the charters of Andrew III, examining the dates around which the number of the king's regnal years changes, which confirm the correctness of the date in the annales (6 August). Bácsatyai also emphasized that the text notes that Andrew was "jointly and unanimously elected king by the Hungarians", which would have been an inconceivable formula in the later 14th-century chronicle composition.
Second annales
The second annales (called the "Hunnic Annals" by Quéret-Podesta) are found between the lower four-fifths of 262 recto and the upper two-thirds of 263 verso (altogether four pages). The annales narrate events from Hunnic, Avar and Hungarian history, identifying the three peoples as a single Hungarian nation. Regarding the Huns, the work contains notes from the period between 337 (the Goths were driven into the Roman Empire by the Huns, marking the beginning of the Migration Period; in fact this occurred in 376) and 405 (in fact 451, Attila's march into Aurelianum). Regarding the Avars, the annales narrate the events between 503 (in fact 562, their failed attack on Austrasia) and 612 (the text conflates several events, some of which already apply to the Hungarian invasions of Europe). From Hungarian history, the annales refer to events from the period between 910 (a collection of several clashes of the Hungarian invasions from various years) and 1222 (the settlement of the Dominicans in Hungary). According to the analysis of Quéret-Podesta, the Hungarian section of the second annales contains 3 notes regarding the 10th century, 14 notes from the 11th century, 8 notes from the 12th century and finally 5 notes regarding the 13th century.
The original author of the second annales also utilized information from Regino of Prüm's Chronicon, but instead of the early history of Christianity – as the author of the first annales did – he focused on the barbarian past (the history of the Hunnic and Avar peoples). For the outline of the history of the Huns, the author also used Bede's chronica maiora as a source, and – based on the text, for instance the annales correctly refer to Attila's brother as Bleda instead of "Buda" – the corpus of the text was definitely written before Simon of Kéza's Gesta Hunnorum et Hungarorum (early 1280s). After philological research, Dániel Bácsatyai emphasized that the second annales utilized the continuation of the chronicle of Regino (edited by Adalbert of Magdeburg) independently of the well-known Hungarian chronicles: Anonymus' Gesta Hungarorum, Simon of Kéza's Gesta Hunnorum et Hungarorum and the 14th-century chronicle composition (e.g. the Illuminated Chronicle). According to Bácsatyai, the second annales are more directly related to the Chronicon: the work utilized an extract from Regino's work, which was also used by a hypothetical gesta (or annales) about early Hungarian history (the invasions of Europe). This gesta became a primary source for both Anonymus and the 14th-century chronicle composition for the events of the 10th century, independently of each other. Bácsatyai argued that the second annales – as the earliest example – prove that the question of Hunnic–Hungarian identity was already present in earlier Hungarian historiography, before the age of Anonymus.
Regarding the section on Hungarian history, notes until the 1160s are closely related to the text of the Annales Posonienses, but the second annales give the years much more accurately (within a year or two). The work contains far less unique information than the first annales. Its narrative, however, differs significantly at several points from other chronicles, for instance the Illuminated Chronicle; the late medieval chronicles, which mostly used texts written under kings descended from Álmos – a claimant to the Hungarian throne – preserved an unfavorable image of King Coloman and his rule. The second annales state that after the death of Ladislaus I in 1095, Coloman returned home "peacefully" from Poland and began to rule jointly with his younger brother Álmos. The second annales are also unique in the statement that Béla I obtained the Hungarian throne with a "violent hand" against his brother Andrew I in 1060; the surviving chronicles were all written during the time of the descendants of Béla I, where such a formulation of events is understandably not found. Géza I was called "Magnus" in those parts where the subsequent monarch was still a duke, in accordance with the inscriptions on the coins issued by Duke Géza, which well reflects the author's awareness (later chronicles, including the Illuminated Chronicle, erroneously claim the king received the epithet "Great" or "Magnus" because of his monarchical greatness after his death). One of the events in 13th-century history also deserves attention: under the year 1205, there is a truncated, unfinished sentence, according to which the young Ladislaus III "was [...] violently from Esztergom" and subsequently his uncle Andrew II was crowned king. It is known that the child monarch died in exile, after his mother, Constance of Aragon, fled to Austria, taking Ladislaus with her.
The annales also mentioned the brief reigns of the anti-kings Ladislaus II and Stephen IV (the rivals of their nephew Stephen III). According to Bácsatyai, there are philological parallels between the second annales and Alberic of Trois-Fontaines's chronicle regarding the list of Hungarian monarchs.
Third annales
The third annales (called the "Hungarian Annals" by Quéret-Podesta) are found between 265 verso and 266 verso (altogether three pages). The earliest text of the annales – copied by the original first author – contains elements only of Hungarian history, lasting from 1001 (the coronation of Stephen I) to 1464 (the coronation of Matthias Corvinus). The subsequent owners of the formulary book – three different hands can be distinguished – continued the text of the third annales. The second author preserved events from the year 1490 (the death of Matthias, the coronation of Vladislaus II and the first phase of the War of the Hungarian Succession). The third author contributed to the annales with a single note: the coronation of queen consort Anne of Foix-Candale in 1502. The owner of the fourth hand added the most entries: the events in this section last from 1516 (the death of Vladislaus II) to 1540 (the death of John Zápolya). According to Quéret-Podesta, 14 notes deal with events from the 11th century, 7 with the 12th century, 14 with the 13th century, only 3 with the 14th century and 9 notes with the 15th century, all written by the original author. The three other authors expanded the text with 2 notes from the 15th century (the second author) and 5 notes from the 16th century (1 by the third and 4 by the fourth author).
The text of the annales contains mostly genealogical data on the Hungarian monarchs (except for a large earthquake in the year 1092). Similarly to the first and second annals, the third annales are closely related to the Annales Posonienses. It is thus plausible that all four known annals (the Annales Posonienses and the three annals of the Formulary Book of Somogyvár) had a common source for the events of the 11th and 12th centuries, an annales now lost. It is the only annales in the formulary book that uses Arabic numerals (from the year 1048 onward). The third annales contain no unique information.
Miscellaneous records
A biographical and genealogical list of the Hungarian kings – written by the original first author – can be found on two separate pages; the first part is on the lower part of page 263 verso, immediately after the end of the second annales. The section continues, after a blank page, on 265 recto. The list contains biographical data from Béla III to Ladislaus V, but also includes Ladislaus of Naples (an unsuccessful claimant against Sigismund) and his sister Joanna II of Naples, the last monarch of the Capetian House of Anjou. The names of the monarchs were written in Blackletter (or Gothic) script. The author preserved the name of Andrew II as "Endre", the old Hungarian variant of his name. There are some errors in the lineage: for instance, the text incorrectly claims that Andrew III was the son of his immediate predecessor, Ladislaus IV. On the last page of historical notes (267 recto), various notes can be found rejecting the authenticity of certain royal charters issued by the Hungarian monarchs. Within this, there is also a rhythmic list of kings, running from Stephen I (1000) to Sigismund (1437). This text served as an aid for clerks of the chancellery and places of authentication, allowing them to easily determine which kings' letters of donation were considered valid or invalid at the time the formulary book was compiled.
The fourth author of the formulary book recorded some events of the Ottoman wars in the 15th and 16th centuries, which reflect his historiographical awareness. He began the chronology on page 265 recto, in the space left blank by the original author beneath the latter's own work, the biographical data of the kings of Hungary. This section lasts from 1438 (the Ottoman occupation of Szászsebes, today Sebeș, Romania) to 1469 (in fact 1467, Matthias' unsuccessful invasion of Moldavia). The author continued the chronology on 264 verso. It narrates the events from 1479 (the Battle of Breadfield) to 1567 (in fact 1566, the Siege of Szigetvár and the death of Suleiman the Magnificent).
References
Sources
Hungarian chronicles
Medieval Latin histories
15th-century history books
Legal history of Hungary
15th-century Latin books

Supply chain management

In commerce, supply chain management (SCM) is the management of the flow of goods and services between businesses and locations, and includes the movement and storage of raw materials, work-in-process inventory, and finished goods, as well as end-to-end order fulfillment from the point of origin to the point of consumption. Interconnected, interrelated, or interlinked networks, channels, and node businesses combine in the provision of products and services required by end customers in a supply chain.
Supply-chain management has been defined as the "design, planning, execution, control, and monitoring of supply-chain activities with the objective of creating net value, building a competitive infrastructure, leveraging worldwide logistics, synchronizing supply with demand and measuring performance globally".
SCM practice draws heavily on industrial engineering, systems engineering, operations management, logistics, procurement, information technology and marketing, and strives for an integrated, multidisciplinary, multimethod approach. Marketing channels play an important role in supply-chain management. Current research in supply-chain management is concerned with topics related to sustainability and risk management, among others. An important concept discussed in SCM is supply chain resilience. Some suggest that the "people dimension" of SCM, ethical issues, internal integration, transparency/visibility, and human capital/talent management are topics that have, so far, been underrepresented on the research agenda. Supply chain management (SCM) is the broad range of activities required to plan, control and execute a product's flow from materials to production to distribution in the most economical way possible. SCM encompasses the integrated planning and execution of processes required to optimize the flow of materials, information and capital in functions that broadly include demand planning, sourcing, production, inventory management and logistics – that is, storage and transportation.
Although it has the same goals as supply chain engineering, supply chain management is focused on a more traditional management- and business-based approach, whereas supply chain engineering is focused on a mathematical-model-based one.
Mission
Supply-chain management encompasses techniques aimed at coordinating all parts of the supply chain, from supplying raw materials to delivering or taking back products, while trying to minimize total costs given the conflicts that exist among the chain partners. An example of such a conflict is the interrelation between the sales department, which desires higher inventory levels to fulfill demand, and the warehouse, which desires lower inventories to reduce holding costs.
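The cost trade-off behind this conflict is classically illustrated by the economic order quantity (EOQ) model. The sketch below is illustrative only; the demand and cost figures are hypothetical and not taken from the text:

```python
import math

def total_cost(q, demand, order_cost, holding_cost):
    """Annual cost of ordering in lots of size q: ordering plus holding cost."""
    return order_cost * demand / q + holding_cost * q / 2

def eoq(demand, order_cost, holding_cost):
    """Economic order quantity: the lot size that minimizes total_cost."""
    return math.sqrt(2 * demand * order_cost / holding_cost)

# Hypothetical figures: 12,000 units/year demand, $100 per order,
# $4 per unit per year holding cost.
q_star = eoq(12000, 100, 4)  # ≈ 774.6 units

# A large lot of 2,000 units (fewer stock-outs between orders, as sales
# might prefer) and a small lot of 300 units (low holding cost, as the
# warehouse might prefer) both cost more overall than the balanced q_star.
print(round(total_cost(q_star, 12000, 100, 4)))  # ≈ 3098
print(round(total_cost(2000, 12000, 100, 4)))    # 4600
print(round(total_cost(300, 12000, 100, 4)))     # 4600
```

The point of the sketch is that "total cost" is minimized between the two departmental preferences, which is why coordination across the chain matters.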
Origin of the term and definitions
In 1982, Keith Oliver, a consultant at Booz Allen Hamilton, introduced the term "supply chain management" to the public domain in an interview for the Financial Times. In 1983, WirtschaftsWoche in Germany published for the first time the results of an implemented and so-called "Supply Chain Management project", led by Wolfgang Partsch.
In the mid-1990s, more than a decade later, the term "supply chain management" gained currency when a flurry of articles and books came out on the subject. Supply chains were originally defined as encompassing all activities associated with the flow and transformation of goods from raw materials through to the end user, as well as the associated information flows. Supply-chain management was then further defined as the integration of supply chain activities through improved supply-chain relationships to achieve a competitive advantage.
In the late 1990s, "supply-chain management" (SCM) rose to prominence, and operations managers began to use it in their titles with increasing regularity.
Other commonly accepted definitions of supply-chain management include:
The management of upstream and downstream value-added flows of materials, final goods, and related information among suppliers, company, resellers, and final consumers.
The systematic, strategic coordination of traditional business functions and tactics across all business functions within a particular company and across businesses within the supply chain, for the purposes of improving the long-term performance of the individual companies and the supply chain as a whole
A customer-focused definition is given by Hines (2004: p. 76): "Supply chain strategies require a total systems view of the links in the chain that work together efficiently to create customer satisfaction at the end point of delivery to the consumer. As a consequence, costs must be lowered throughout the chain by driving out unnecessary expenses, movements, and handling. The main focus is turned to efficiency and added value, or the end user's perception of value. Efficiency must be increased, and bottlenecks removed. The measurement of performance focuses on total system efficiency and the equitable monetary reward distribution to those within the supply chain. The supply-chain system must be responsive to customer requirements."
The integration of key business processes across the supply chain for the purpose of creating value for customers and stakeholders
According to the Council of Supply Chain Management Professionals (CSCMP), supply-chain management encompasses the planning and management of all activities involved in sourcing, procurement, conversion, and logistics management. It also includes coordination and collaboration with channel partners, which may be suppliers, intermediaries, third-party service providers, or customers. Supply-chain management integrates supply and demand management within and across companies. More recently, the loosely coupled, self-organizing network of businesses that cooperate to provide product and service offerings has been called the Extended Enterprise.
A supply chain, as opposed to supply-chain management, is a set of organizations directly linked by one or more upstream and downstream flows of products, services, finances, or information from a source to a customer. Supply-chain management is the management of such a chain.
Supply-chain-management software includes tools or modules used to execute supply chain transactions, manage supplier relationships, and control associated business processes. The overall goal of the software is to improve supply chain performance by monitoring a company’s supply chain network from end-to-end (suppliers, transporters, returns, warehousers, retailers, manufacturers, and customers).
In some cases, the supply chain includes the collection of goods after consumer use for recycling or the reverse logistics processes for returning faulty or unwanted products backwards to producers early in the value chain.
Functions
Supply-chain management is a cross-functional approach that includes managing the movement of raw materials into an organization, certain aspects of the internal processing of materials into finished goods, and the movement of finished goods out of the organization and toward the end consumer. As organizations strive to focus on core competencies and become more flexible, they reduce ownership of raw materials sources and distribution channels. These functions are increasingly being outsourced to other firms that can perform the activities better or more cost-effectively. The effect is to increase the number of organizations involved in satisfying customer demand, while reducing managerial control of daily logistics operations. Less control and more supply-chain partners lead to the creation of the concept of supply-chain management. The purpose of supply-chain management is to improve trust and collaboration among supply-chain partners, thus improving inventory visibility and the velocity of inventory movement. In practice, this involves communicating with vendors and suppliers, comparing their offers, and then placing orders.
Importance
Organizations increasingly find that they must rely on effective supply chains, or networks, to compete in the global market and networked economy. In Peter Drucker's (1998) new management paradigms, this concept of business relationships extends beyond traditional enterprise boundaries and seeks to organize entire business processes throughout a value chain of multiple companies.
In recent decades, globalization, outsourcing, and information technology have enabled many organizations, such as Dell and Hewlett Packard, to successfully operate collaborative supply networks in which each specialized business partner focuses on only a few key strategic activities. This inter-organisational supply network can be acknowledged as a new form of organisation. However, with the complicated interactions among the players, the network structure fits neither "market" nor "hierarchy" categories. It is not clear what kind of performance impacts different supply-network structures could have on firms, and little is known about the coordination conditions and trade-offs that may exist among the players. From a systems perspective, a complex network structure can be decomposed into individual component firms. Traditionally, companies in a supply network concentrate on the inputs and outputs of the processes, with little concern for the internal management workings of other individual players. Therefore, the choice of an internal management control structure is known to impact local firm performance.
In the 21st century, changes in the business environment have contributed to the development of supply-chain networks. First, as an outcome of globalization and the proliferation of multinational companies, joint ventures, strategic alliances, and business partnerships, significant success factors were identified, complementing the earlier "just-in-time", lean manufacturing, and agile manufacturing practices. Second, technological changes, particularly the dramatic fall in communication costs (a significant component of transaction costs), have led to changes in coordination among the members of the supply chain network.
Many researchers have recognized supply network structures as a new organisational form, using terms such as "Keiretsu", "Extended Enterprise", "Virtual Corporation", "Global Production Network", and "Next Generation Manufacturing System". In general, such a structure can be defined as "a group of semi-independent organisations, each with their capabilities, which collaborate in ever-changing constellations to serve one or more markets in order to achieve some business goal specific to that collaboration".
Supply chain management proved crucial in the 2019–2020 fight against the coronavirus (COVID-19) pandemic that swept across the world. During the pandemic, governments in countries with effective domestic supply chain management had enough medical supplies to support their needs and enough to donate their surplus to front-line health workers in other jurisdictions. Some organizations were able to quickly develop foreign supply chains in order to import much-needed medical supplies.
Supply-chain management is also important for organizational learning. Firms with geographically more extensive supply chains connecting diverse trading cliques tend to become more innovative and productive.
The security-management system for supply chains is described in ISO/IEC 28000 and ISO/IEC 28001 and related standards published jointly by the ISO and the IEC. Supply-Chain Management draws heavily from the areas of operations management, logistics, procurement, and information technology, and strives for an integrated approach.
Supply chain resilience
An important element of SCM is supply chain resilience, defined as "the capacity of a supply chain to persist, adapt, or transform in the face of change". For a long time, the interpretation of resilience in the sense of engineering resilience (= robustness) prevailed in supply chain management, leading to the notion of persistence. A popular implementation of this idea is given by measuring the time-to-survive and the time-to-recover of the supply chain, which allows weak points in the system to be identified.
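The time-to-survive/time-to-recover comparison can be sketched as a simple per-node check (all node names and durations below are hypothetical): a node is flagged as a weak point when recovering it takes longer than the rest of the chain can survive without it.

```python
# Flag weak points in a supply chain by comparing each node's
# time-to-recover (TTR) with the chain's time-to-survive (TTS)
# when that node is disrupted. Durations are in weeks; all node
# names and figures are hypothetical.
nodes = {
    "supplier_a": {"tts": 4, "ttr": 2},  # buffer outlasts recovery: resilient
    "port_hub":   {"tts": 1, "ttr": 6},  # recovery outlasts buffer: weak point
    "assembly":   {"tts": 3, "ttr": 3},  # breaks even: no lost throughput
}

weak_points = [name for name, t in nodes.items() if t["ttr"] > t["tts"]]
print(weak_points)  # → ['port_hub']
```

Such a table directs mitigation effort (extra buffer stock, dual sourcing) to exactly the nodes where TTR exceeds TTS.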
More recently, the interpretations of resilience in the sense of ecological resilience and social–ecological resilience have led to the notions of adaptation and transformation, respectively. A supply chain is thus interpreted as a social-ecological system that – similar to an ecosystem (e.g. a forest) – is able to constantly adapt to external environmental conditions and – through the presence of social actors and their capacity for foresight – also to transform itself into a fundamentally new system. This leads to a panarchical interpretation of a supply chain, embedding it into a system of systems and making it possible to analyze the interactions of the supply chain with systems that operate at other levels (e.g. society, political economy, planet Earth).
For example, these three components of resilience can be discussed for the 2021 Suez Canal obstruction, when a ship blocked the canal for several days. Persistence means to "bounce back"; in this example it is about removing the ship as quickly as possible to allow "normal" operations. Adaptation means to accept that the system has reached a "new normal" state and to act accordingly; here, this can be implemented by redirecting ships around the African cape or using alternative modes of transport. Finally, transformation means to question the assumptions of globalization, outsourcing, and linear supply chains and to envision alternatives; in this example this could lead to local and circular supply chains that no longer need global transportation routes.
Historical developments
Six major movements can be observed in the evolution of supply-chain management studies: creation, integration, and globalization, specialization phases one and two, and SCM 2.0.
Creation era
The term "supply chain management" was first coined by Keith Oliver in 1982. However, the concept of a supply chain in management was of great importance long before, in the early 20th century, especially with the creation of the assembly line. The characteristics of this era of supply-chain management include the need for large-scale changes, re-engineering, downsizing driven by cost reduction programs, and widespread attention to Japanese management practices. However, the term became widely adopted after the publication of the seminal book Introduction to Supply Chain Management in 1999 by Robert B. Handfield and Ernest L. Nichols, Jr., which sold over 25,000 copies and was translated into Japanese, Korean, Chinese, and Russian.
Integration era
This era of supply-chain-management studies was highlighted by the development of electronic data interchange (EDI) systems in the 1960s, and developed through the 1990s with the introduction of enterprise resource planning (ERP) systems. This era has continued to develop into the 21st century with the expansion of Internet-based collaborative systems. This era of supply-chain evolution is characterized by both increasing value added and reducing costs through integration.
A supply chain can be classified as a stage 1, 2 or 3 network. In a stage 1–type supply chain, systems such as production, storage, distribution, and material control are not linked and are independent of each other. In a stage 2 supply chain, these are integrated under one plan and enterprise resource planning (ERP) is enabled. A stage 3 supply chain is one that achieves vertical integration with upstream suppliers and downstream customers. An example of this kind of supply chain is Tesco.
Globalization era
The third movement of supply-chain-management development, the globalization era, can be characterized by the attention given to global systems of supplier relationships and the expansion of supply chains beyond national boundaries and into other continents. Although the use of global sources in organisations' supply chains can be traced back several decades (e.g., in the oil industry), it was not until the late 1980s that a considerable number of organizations started to integrate global sources into their core business. This era is characterized by the globalization of supply-chain management in organizations with the goal of increasing their competitive advantage, adding value, and reducing costs through global sourcing.
Specialization era (phase I): outsourced manufacturing and distribution
In the 1990s, companies began to focus on "core competencies" and specialization. They abandoned vertical integration, sold off non-core operations, and outsourced those functions to other companies. This changed management requirements, as the supply chain extended beyond the company walls and management was distributed across specialized supply-chain partnerships.
This transition also refocused the fundamental perspectives of each organization. Original equipment manufacturers (OEMs) became brand owners that required visibility deep into their supply base. They had to control the entire supply chain from above, instead of from within. Contract manufacturers had to manage bills of material with different part-numbering schemes from multiple OEMs and support customer requests for work-in-process visibility and vendor-managed inventory (VMI).
The specialization model creates manufacturing and distribution networks composed of several individual supply chains specific to producers, suppliers, and customers that work together to design, manufacture, distribute, market, sell, and service a product. This set of partners may change according to a given market, region, or channel, resulting in a proliferation of trading partner environments, each with its own unique characteristics and demands.
Specialization era (phase II): supply-chain management as a service
Specialization within the supply chain began in the 1980s with the inception of transportation brokerages, warehouse management (storage and inventory), and non-asset-based carriers, and has matured beyond transportation and logistics into aspects of supply planning, collaboration, execution, and performance management.
Market forces sometimes demand rapid changes from suppliers, logistics providers, locations, or customers in their role as components of supply-chain networks. This variability has significant effects on supply-chain infrastructure, from the foundation layers of establishing and managing electronic communication between trading partners to more complex requirements such as the configuration of processes and workflows that are essential to the management of the network itself.
Supply-chain specialization enables companies to improve their overall competencies in the same way that outsourced manufacturing and distribution has done; it allows them to focus on their core competencies and assemble networks of specific, best-in-class partners to contribute to the overall value chain itself, thereby increasing overall performance and efficiency. The ability to quickly obtain and deploy this domain-specific supply-chain expertise without developing and maintaining an entirely unique and complex competency in house is a leading reason why supply-chain specialization is gaining popularity.
Outsourced technology hosting for supply-chain solutions debuted in the late 1990s and has taken root primarily in transportation and collaboration categories. This has progressed from the application service provider (ASP) model from roughly 1998 through 2003, to the on-demand model from approximately 2003 through 2006, to the software-as-a-service (SaaS) model in focus today.
Supply-chain management 2.0 (SCM 2.0)
Building on globalization and specialization, the term "SCM 2.0" has been coined to describe both changes within supply chains themselves as well as the evolution of processes, methods, and tools to manage them in this new "era". The growing popularity of collaborative platforms is highlighted by the rise of TradeCard's supply-chain-collaboration platform, which connects multiple buyers and suppliers with financial institutions, enabling them to conduct automated supply-chain finance transactions.
Web 2.0 is a trend in the use of the World Wide Web that is meant to increase creativity, information sharing, and collaboration among users. At its core, the common attribute of Web 2.0 is to help navigate the vast information available on the Web in order to find what is being sought. It is the notion of a usable pathway. SCM 2.0 replicates this notion in supply chain operations. It is the pathway to SCM results, a combination of processes, methodologies, tools, and delivery options to guide companies to their results quickly as the complexity and speed of the supply chain increase due to global competition; rapid price fluctuations; changing oil prices; short product life cycles; expanded specialization; near-, far-, and off-shoring; and talent scarcity.
Business-process integration
Successful SCM requires a change from managing individual functions to integrating activities into key supply-chain processes. In an example scenario, a purchasing department places orders as its requirements become known. The marketing department, responding to customer demand, communicates with several distributors and retailers as it attempts to determine ways to satisfy this demand. Information shared between supply-chain partners can only be fully leveraged through process integration.
Supply-chain business-process integration involves collaborative work between buyers and suppliers, joint product development, common systems, and shared information. According to Lambert and Cooper (2000), operating an integrated supply chain requires a continuous information flow. However, in many companies, management has concluded that optimizing product flows cannot be accomplished without implementing a process approach. The key supply-chain processes stated by Lambert (2004) are:
Customer-relationship management
Customer-service management
Demand management
Order fulfillment
Manufacturing-flow management
Supplier-relationship management
Product development and commercialization
Returns management
Much has been written about demand management. Best-in-class companies have similar characteristics, which include the following:
Internal and external collaboration
Initiatives to reduce lead time
Tighter feedback from customer and market demand
Customer-level forecasting
One could suggest other critical supply-chain business processes combining the processes stated by Lambert, such as:
Customer service management process
Customer relationship management concerns the relationship between an organization and its customers. Customer service is the source of customer information. It also provides the customer with real-time information on scheduling and product availability through interfaces with the company's production and distribution operations. Successful organizations use the following steps to build customer relationships:
determine mutually satisfying goals for organization and customers
establish and maintain customer rapport
induce positive feelings in the organization and the customers
Inventory management
Inventory management is concerned with ensuring the right stock at the right levels, in the right place, at the right time, and at the right cost. Inventory management entails inventory planning and forecasting: forecasting supports inventory planning.
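As an illustration of how forecasting feeds inventory planning, a simple reorder-point calculation can be sketched as follows (the demand history, lead time, and service level below are hypothetical):

```python
import math
import statistics

def reorder_point(daily_demand_history, lead_time_days, z=1.65):
    """Reorder point = expected lead-time demand plus safety stock.

    Safety stock uses a normal-approximation service factor z
    (z = 1.65 targets roughly a 95% cycle service level).
    """
    mean = statistics.mean(daily_demand_history)     # forecast of daily demand
    sd = statistics.stdev(daily_demand_history)      # demand variability
    safety_stock = z * sd * math.sqrt(lead_time_days)
    return lead_time_days * mean + safety_stock

# Hypothetical week of daily demand and a five-day replenishment lead time.
rop = reorder_point([20, 22, 19, 21, 23, 18, 21], lead_time_days=5)
print(round(rop))  # → 109: reorder when on-hand stock falls to ~109 units
```

Better forecasts (lower demand variability) directly shrink the safety stock, which is the sense in which forecasting helps planning.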
Procurement process
Strategic plans are drawn up with suppliers to support the manufacturing flow management process and the development of new products. In firms whose operations extend globally, sourcing may be managed on a global basis. The desired outcome is a relationship where both parties benefit and a reduction in the time required for the product's design and development. The purchasing function may also develop rapid communication systems, such as electronic data interchange (EDI) and internet linkage, to convey possible requirements more rapidly. Activities related to obtaining products and materials from outside suppliers involve resource planning, supply sourcing, negotiation, order placement, inbound transportation, storage, handling, and quality assurance, many of which include the responsibility to coordinate with suppliers on matters of scheduling, supply continuity (inventory), hedging, and research into new sources or programs. Procurement has recently been recognized as a core source of value, driven largely by the increasing trends to outsource products and services, and the changes in the global ecosystem requiring stronger relationships between buyers and sellers.
Product development and commercialization
Here, customers and suppliers must be integrated into the product development process in order to reduce the time to market. As product life cycles shorten, the appropriate products must be developed and successfully launched with ever-shorter time schedules in order for firms to remain competitive. According to Lambert and Cooper (2000), managers of the product development and commercialization process must:
coordinate with customer relationship management to identify customer-articulated needs;
select materials and suppliers in conjunction with procurement; and
develop production technology in manufacturing flow to manufacture and integrate into the best supply chain flow for the given combination of product and markets.
Integration of suppliers into the new product development process was shown to have a major impact on product target cost, quality, delivery, and market share. Tapping into suppliers as a source of innovation requires an extensive process characterized by development of technology sharing, but also involves managing intellectual property issues.
Manufacturing flow management process
The manufacturing process produces and supplies products to the distribution channels based on past forecasts. Manufacturing processes must be flexible in order to respond to market changes and must accommodate mass customization. Orders are processed on a just-in-time (JIT) basis in minimum lot sizes. Changes in the manufacturing flow process lead to shorter cycle times, meaning improved responsiveness and efficiency in meeting customer demand. This process manages activities related to planning, scheduling, and supporting manufacturing operations, such as work-in-process storage, handling, and transportation, as well as the time-phasing of components, inventory at manufacturing sites, and maximum flexibility through geographic and final-assembly postponement of physical distribution operations.
Physical distribution
This concerns the movement of a finished product or service to customers. In physical distribution, the customer is the final destination of a marketing channel, and the availability of the product or service is a vital part of each channel participant's marketing effort. It is also through the physical distribution process that the time and space of customer service become an integral part of marketing. Thus it links a marketing channel with its customers (i.e., it links manufacturers, wholesalers, and retailers).
Outsourcing/partnerships
This includes not just the outsourcing of the procurement of materials and components, but also the outsourcing of services that traditionally have been provided in-house. The logic of this trend is that the company will increasingly focus on those activities in the value chain in which it has a distinctive advantage and outsource everything else. This movement has been particularly evident in logistics, where the provision of transport, storage, and inventory control is increasingly subcontracted to specialists or logistics partners. Also, managing and controlling this network of partners and suppliers requires a blend of central and local involvement: strategic decisions are taken centrally, while the monitoring and control of supplier performance and day-to-day liaison with logistics partners are best managed locally.
Performance measurement
Experts found a strong relationship from the largest arcs of supplier and customer integration to market share and profitability. Taking advantage of supplier capabilities and emphasizing a long-term supply-chain perspective in customer relationships can both be correlated with a firm's performance. As logistics competency becomes a critical factor in creating and maintaining competitive advantage, measuring logistics performance becomes increasingly important, because the difference between profitable and unprofitable operations becomes narrower. A.T. Kearney Consultants (1985) noted that firms engaging in comprehensive performance measurement realized improvements in overall productivity. According to experts, internal measures are generally collected and analyzed by the firm, including cost, customer service, productivity, asset measurement, and quality. External performance is measured through customer perception measures and "best practice" benchmarking.
Warehousing management
To reduce a company's costs and expenses, warehousing management is concerned with storage, reduction of labor costs, timely dispatch and delivery, adequate loading and unloading facilities, and inventory management systems.
Workflow management
Integrating suppliers and customers tightly into a workflow (or business process) and thereby achieving an efficient and effective supply chain is a key goal of workflow management.
Theories
There are gaps in the literature on supply-chain management studies at present (2015): there is no theoretical support for explaining the existence or the boundaries of supply-chain management. A few authors, such as Halldorsson et al., Ketchen and Hult (2006), and Lavassani et al. (2009), have tried to provide theoretical foundations for different areas related to supply chain by employing organizational theories, which may include the following:
Resource-based view (RBV)
Transaction cost analysis (TCA)
Knowledge-based view (KBV)
Strategic choice theory (SCT)
Agency theory (AT)
Channel coordination
Institutional theory (InT)
Systems theory (ST)
Network perspective (NP)
Materials logistics management (MLM)
Just-in-time (JIT)
Material requirements planning (MRP)
Theory of constraints (TOC)
Total quality management (TQM)
Agile manufacturing
Time-based competition (TBC)
Quick response manufacturing (QRM)
Customer relationship management (CRM)
Requirements chain management (RCM)
Dynamic Capabilities Theory
Dynamic Management Theory
Available-to-promise (ATP)
Supply Chain Roadmap
Optimal Positioning of the Delivery Window (OPDW)
However, the unit of analysis of most of these theories is not the supply chain but rather another system, such as the firm or the supplier-buyer relationship. Among the few exceptions is the relational view, which outlines a theory for considering dyads and networks of firms as a key unit of analysis for explaining superior individual firm performance (Dyer and Singh, 1998).
Organization and governance
The management of supply chains involves a number of specific challenges regarding the organization of relationships among the different partners along the value chain. Formal and informal governance mechanisms are central elements in the management of supply chains. Particular combinations of governance mechanisms may impact the relational dynamics within the supply chain. The need for interdisciplinarity in SCM research has been pointed out by academics in the field.
Supply chain centroids
In the study of supply-chain management, the concept of centroids has become a useful economic consideration. In mathematics and physics, a centroid is the arithmetic mean position of all the points in a plane figure. For supply chain management, a centroid is a location with a high proportion of a country's population and a high proportion of its manufacturing, generally within 500 miles (805 km). In the US, two major supply chain centroids have been defined, one near Dayton, Ohio, and a second near Riverside, California.
The centroid near Dayton is particularly important because it is closest to the population center of the US and Canada. Dayton is within 500 miles of 60% of the US population and manufacturing capacity, as well as 60% of Canada's population. The region includes the interchange between I-70 and I-75, one of the busiest in the nation, with 154,000 vehicles passing through per day, of which 30–35% are trucks hauling goods. In addition, the I-75 corridor is home to the busiest north-south rail route east of the Mississippi River.
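The population-weighted centroid described above is simply a weighted arithmetic mean of locations, which can be sketched in a few lines of Python. The coordinates and weights below are invented placeholders for illustration, not real demographic data.

```python
# Sketch: a supply chain "centroid" as the weighted mean position of
# demand points. Each point is (x, y, weight) -- e.g. (longitude,
# latitude, relative demand). All values here are hypothetical.

def weighted_centroid(points):
    """Return the weighted arithmetic mean of (x, y, weight) tuples."""
    total = sum(w for _, _, w in points)
    x = sum(px * w for px, _, w in points) / total
    y = sum(py * w for _, py, w in points) / total
    return x, y

# Made-up demand points: (longitude, latitude, relative weight)
demand_points = [
    (-84.0, 39.8, 5.0),
    (-87.6, 41.9, 3.0),
    (-83.0, 40.0, 2.0),
]

print(weighted_centroid(demand_points))
```

In practice a network-design study would weight candidate points by population, manufacturing output, or freight volume rather than the arbitrary weights used here.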
A supply chain is the network of all the individuals, organizations, resources, activities and technology involved in the creation and sale of a product. A supply chain encompasses everything from the delivery of source materials from the supplier to the manufacturer through to its eventual delivery to the end user. The supply chain segment involved with getting the finished product from the manufacturer to the consumer is known as the distribution channel.
Wal-Mart strategic sourcing approaches
In 2010, Wal-Mart announced a big change in its sourcing strategy. Initially, Wal-Mart relied on intermediaries in the sourcing process: it bought only 20% of its stock directly, with the rest purchased through intermediaries. The company came to realize that the presence of many intermediaries in product sourcing was actually increasing costs in the supply chain. To cut these costs, Wal-Mart decided to do away with intermediaries and started sourcing its goods directly from suppliers. Eduardo Castro-Wright, then Vice President of Wal-Mart, set an ambitious goal of buying 80% of all Wal-Mart goods directly from suppliers. Wal-Mart started purchasing fruits and vegetables on a global scale, interacting directly with the suppliers of these goods. The company later engaged the suppliers of other goods, such as clothing and home electronics appliances, directly and eliminated the importing agents. The purchaser, in this case Wal-Mart, can easily direct suppliers on how to manufacture certain products so that they are acceptable to consumers. Through direct sourcing, Wal-Mart thus obtains exactly the product quality it expects, since it engages the suppliers in producing these products, ensuring quality consistency. Using agents in the sourcing process, by contrast, in most cases leads to inconsistency in the quality of the products, since the agents source the products from different manufacturers of varying quality.
Wal-Mart managed to source 80% of its stock directly; this greatly reduced the role of intermediaries and cut costs by 5–15%, as the markups introduced by middlemen in the supply chain were eliminated, saving approximately $4–15 billion. This strategy of direct sourcing not only helped Wal-Mart reduce costs in the supply chain but also improved supply chain activities by boosting efficiency throughout the entire process. In other words, direct sourcing reduced the time it takes the company to source and stock products. The presence of intermediaries elongated the procurement process, which sometimes led to delays in the supply of commodities to the stores, with customers finding empty shelves. Wal-Mart adopted this strategy by centralizing the entire procurement and sourcing process, setting up four global merchandising points for general goods and clothing. The company instructed all suppliers to bring their products to these central points, which are located in different markets. The procurement team assesses the quality brought by the suppliers, buys the goods, and distributes them to the various regional markets. Procurement and sourcing at centralized places helped the company consolidate its suppliers.
The company has established four centralized points, including an office in Mexico City and one in Canada. A mere pilot test of combining the purchase of fresh apples across the United States, Mexico, and Canada led to savings of about 10%. As a result, the company intended to increase the centralization of its procurement in North America for all its fresh fruits and vegetables. Centralizing the procurement process at various points where suppliers meet the procurement team is thus the latest strategy the company is implementing, and early signs show that it will cut costs and improve the efficiency of the procurement process.
Strategic vendor partnerships are another strategy the company uses in the sourcing process. Wal-Mart realized that to ensure consistency in the quality of the products it offers to consumers, and to maintain a steady supply of goods in its stores at lower cost, it had to create strategic vendor partnerships with its suppliers. Wal-Mart identified and selected suppliers who met its demand and at the same time offered the best prices for the goods. It then formed strategic relationships with these vendors by offering and assuring long-term, high-volume purchases in exchange for the lowest possible prices. The company has thus managed to source its products from the same suppliers in bulk, but at lower prices. This enables the company to offer competitive prices for the products in its stores and to maintain a competitive advantage over competitors whose goods are more expensive.
Another sourcing strategy Wal-Mart uses is maintaining efficient communication with its vendor networks, which is necessary to improve the material flow. The company keeps in regular contact with all its suppliers and agrees on dates when goods will be needed, so that the suppliers are ready to deliver on time. Efficient communication between the company's procurement team and its inventory management team enables the company to source goods and fill its shelves on time, without delays and empty shelves. In other words, the company realized that to ensure a steady flow of goods into the store, suppliers have to be informed early enough so that they can act accordingly and avoid delays in delivery. Efficient communication is thus another tool Wal-Mart uses to make the supply chain more efficient and to cut costs.
Cross-docking is another strategy Wal-Mart uses to cut costs in its supply chain. Cross-docking is the process of transferring goods directly from inbound trucks to outbound trucks. When trucks from the suppliers arrive at the distribution centers, most are not offloaded for storage in the distribution centers or warehouses; the goods are transferred directly to another truck designated to deliver them to specific retail stores for sale. Cross-docking helps save storage costs: previously, the company incurred considerable costs for storing goods from suppliers in its warehouses and distribution centers while awaiting the distribution trucks to the retail stores in the various regions.
Tax-efficient supply-chain management
Tax-efficient supply-chain management is a business model that considers the effect of tax in the design and implementation of supply-chain management. As the consequence of globalization, cross-national businesses pay different tax rates in different countries. Due to these differences, they may legally optimize their supply chain and increase profits based on tax efficiency.
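The underlying arithmetic can be sketched as follows: for a given pre-tax margin, compare the after-tax profit across jurisdictions. The country names and tax rates below are hypothetical assumptions for illustration, not actual statutory rates, and real tax planning involves far more than a single rate comparison.

```python
# Illustrative sketch of tax-efficient supply-chain design: choose the
# jurisdiction that maximizes after-tax profit for a given pre-tax profit.
# Country labels and tax rates are invented, not real statutory rates.

def best_jurisdiction(pre_tax_profit, tax_rates):
    """Return (country, after_tax_profit) with the highest after-tax profit."""
    best = max(tax_rates, key=lambda c: pre_tax_profit * (1 - tax_rates[c]))
    return best, pre_tax_profit * (1 - tax_rates[best])

rates = {"Country A": 0.30, "Country B": 0.19, "Country C": 0.12}
print(best_jurisdiction(1_000_000, rates))
```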
Sustainability and social responsibility in supply chains
Supply chain networks are the veins of an economy, but the health of these veins is dependent on the well-being of the environment and society. Supply-chain sustainability is a business issue affecting an organization's supply chain or logistics network, and is frequently quantified by comparison with SECH ratings, which use a triple bottom line incorporating economic, social, and environmental aspects. While SECH ratings are defined as social, ethical, cultural, and health footprints, the more commonly used ESG moniker stands for Environmental, Social and Governance. Consumers have become more aware of the environmental impact of their purchases and of companies' ratings and, along with non-governmental organizations (NGOs), are setting the agenda, and beginning to push, for transitions to more sustainable approaches such as organically grown foods, anti-sweatshop labor codes, and locally produced goods that support independent and small businesses. Because supply chains may account for over 75% of a company's carbon footprint, many organizations are exploring ways to reduce this and thus improve their profile.
For example, in July 2009, Wal-Mart announced its intentions to create a global sustainability index that would rate products according to the environmental and social impacts of their manufacturing and distribution. The index is intended to create environmental accountability in Wal-Mart's supply chain and to provide motivation and infrastructure for other retail companies to do the same.
It has been reported that companies are increasingly taking environmental performance into account when selecting suppliers. A 2011 survey by the Carbon Trust found that 50% of multinationals expect to select their suppliers based upon carbon performance in the future and 29% of suppliers could lose their places on 'green supply chains' if they do not have adequate performance records on carbon.
The US Dodd–Frank Wall Street Reform and Consumer Protection Act, signed into law by President Obama in July 2010, contained a supply chain sustainability provision in the form of the Conflict Minerals law. This law requires SEC-regulated companies to conduct third party audits of their supply chains in order to determine whether any tin, tantalum, tungsten, or gold (together referred to as conflict minerals) is mined or sourced from the Democratic Republic of the Congo, and create a report (available to the general public and SEC) detailing the due diligence efforts taken and the results of the audit. The chain of suppliers and vendors to these reporting companies will be expected to provide appropriate supporting information.
Incidents like the 2013 Savar building collapse with more than 1,100 victims have led to widespread discussions about corporate social responsibility across global supply chains. Wieland and Handfield (2013) suggest that companies need to audit products and suppliers and that supplier auditing needs to go beyond direct relationships with first-tier suppliers. They also demonstrate that visibility needs to be improved if supply cannot be directly controlled and that smart and electronic technologies play a key role to improve visibility. Finally, they highlight that collaboration with local partners, across the industry and with universities is crucial to successfully managing social responsibility in supply chains.
Circular supply-chain management
Circular Supply-Chain Management (CSCM) is "the configuration and coordination of the organisational functions marketing, sales, R&D, production, logistics, IT, finance, and customer service within and across business units and organizations to close, slow, intensify, narrow, and dematerialise material and energy loops to minimise resource input into and waste and emission leakage out of the system, improve its operative effectiveness and efficiency and generate competitive advantages". By reducing resource input and waste leakage along the supply chain and configuring it to enable the recirculation of resources at different stages of the product or service lifecycle, potential economic and environmental benefits can be achieved. These include, for example, decreased material and waste-management costs and reduced emissions and resource consumption.
Components
Management components
SCM components are the third element of the four-square circulation framework. The level of integration and management of a business process link is a function of the number and level of components added to the link. Consequently, adding more management components or increasing the level of each component can increase the level of integration of the business process link.
Literature on business process reengineering, buyer-supplier relationships, and SCM suggests various possible components that should receive managerial attention when managing supply relationships. Lambert and Cooper (2000) identified the following components:
Planning and control
Work structure
Organization structure
Product flow facility structure
Information flow facility structure
Management methods
Power and leadership structure
Risk and reward structure
Culture and attitude
However, a more careful examination of the existing literature leads to a more comprehensive understanding of what should be the key critical supply chain components, or "branches" of the previously identified supply chain business processes—that is, what kind of relationship the components may have that are related to suppliers and customers. Bowersox and Closs (1996) state that the emphasis on cooperation represents the synergism leading to the highest level of joint achievement. A primary-level channel participant is a business that is willing to participate in responsibility for inventory ownership or assume other financial risks, thus including primary level components. A secondary-level participant (specialized) is a business that participates in channel relationships by performing essential services for primary participants, including secondary level components, which support primary participants. Third-level channel participants and components that support primary-level channel participants and are the fundamental branches of secondary-level components may also be included.
Consequently, Lambert and Cooper's framework of supply chain components does not lead to any conclusion about what are the primary- or secondary-level (specialized) supply chain components —that is, which supply chain components should be viewed as primary or secondary, how these components should be structured in order to achieve a more comprehensive supply chain structure, and how to examine the supply chain as an integrative one.
Reverse supply chain
Reverse logistics is the process of managing the return of goods, and may be considered an aspect of "aftermarket customer services". Any time money is taken from a company's warranty reserve or service logistics budget, one can speak of a reverse logistics operation. Reverse logistics also includes managing the return of goods from stores: the returned goods are sent back to a warehouse, which then either scraps them or sends them back to the supplier for replacement, depending on the warranty of the merchandise.
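The return-handling rule just described can be sketched as a minimal decision function; the field names and outcome strings below are illustrative assumptions, not a standard warehouse API.

```python
# Sketch of the reverse-logistics routing rule: a returned item is
# scrapped or sent back to the supplier for replacement depending on
# its warranty status. Fields and outcome labels are hypothetical.

def route_return(item):
    """Decide what the warehouse does with a returned item."""
    if item.get("under_warranty"):
        return "return to supplier for replacement"
    return "scrap"

print(route_return({"sku": "A-100", "under_warranty": True}))
print(route_return({"sku": "B-200", "under_warranty": False}))
```

A real system would add further branches (restock, refurbish, donate) and track each item against the warranty reserve mentioned above.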
Digitizing supply chains
Consultancies and media expect the performance efficacy of digitizing supply chains to be high. Additive manufacturing and blockchain technology have emerged as the two technologies with some of the highest economic relevance. The potential of additive manufacturing is particularly high in the production of spare parts, since its introduction can reduce warehousing costs of slowly rotating spare parts. Digitizing technology bears the potential to completely disrupt and restructure supply chains and enhance existing production routes.
In comparison, research on the influence of blockchain technology on the supply chain is still in its early stages. The conceptual literature has argued for a considerably long time that the highest performance efficacy is expected in the potential for automatic contract creation. Empirical evidence contradicts this hypothesis: the highest potential is expected in the arenas of verified customer reviews and certifications of product quality and standards.
In addition, the technological features of blockchains support transparency and traceability of information, as well as high levels of reliability and immutability of records.
Systems and value
Supply chain systems configure value for those that organize the networks. Value is the additional revenue over and above the costs of building the network. Co-creating value and sharing the benefits appropriately to encourage effective participation is a key challenge for any supply system. Tony Hines defines value as follows: "Ultimately it is the customer who pays the price for service delivered that confirms value and not the producer who simply adds cost until that point".
Global applications
Global supply chains pose challenges regarding both quantity and value. Supply and value chain trends include:
Globalization
Increased cross-border sourcing
Collaboration for parts of value chain with low-cost providers
Shared service centers for logistical and administrative functions
Increasingly global operations, which require increasingly global coordination and planning to achieve global optimums
Increasing involvement of midsized companies in complex problems
These trends have many benefits for manufacturers because they make possible larger lot sizes, lower taxes, and better environments (e.g., culture, infrastructure, special tax zones, or sophisticated OEM) for their products. There are many additional challenges when the scope of supply chains is global. This is because with a supply chain of a larger scope, the lead time is much longer, and because there are more issues involved, such as multiple currencies, policies, and laws. The consequent problems include different currencies and valuations in different countries, different tax laws, different trading protocols, vulnerability to natural disasters and cyber threats, and lack of transparency of cost and profit.
Roles and responsibilities
Supply chain professionals play major roles in the design and management of supply chains. In the design of supply chains, they help determine whether a product or service is provided by the firm itself (insourcing) or by another firm elsewhere (outsourcing). In the management of supply chains, supply chain professionals coordinate production among multiple providers, ensuring that production and transport of goods happen with minimal quality control or inventory problems. One goal of a well-designed and maintained supply chain for a product is to successfully build the product at minimal cost. Such a supply chain could be considered a competitive advantage for a firm.
Beyond design and maintenance of a supply chain itself, supply chain professionals participate in aspects of business that have a bearing on supply chains, such as sales forecasting, quality management, strategy development, customer service, and systems analysis. Production of a good may evolve over time, rendering an existing supply chain design obsolete. Supply chain professionals need to be aware of changes in production and business climate that affect supply chains and create alternative supply chains as the need arises.
In a research project undertaken by Michigan State University's Broad College of Business, with input from 50 participating organisations, the main issues of concern to supply chain managers were identified as capacity/resource availability, talent (recruitment), complexity, threats/challenges (supply chain risks), compliance and cost/purchasing issues. Keeping up with frequent changes in regulation was identified as a particular concern.
Supply-chain consultants may provide expert knowledge in order to assess the productivity of a supply-chain and, ideally, to enhance its productivity. Supply chain consulting involves the transfer of knowledge on how to exploit existing assets through improved coordination and can hence be a source of competitive advantage: the role of the consultant is to help management by adding value to the whole process through the various sectors from the ordering of the raw materials to the final product. In this regard, firms may either build internal teams of consultants to tackle the issue or engage external ones: companies choose between these two approaches taking into consideration various factors.
The use of external consultants is a common practice among companies. The whole consulting process generally involves the analysis of the entire supply-chain process, including the countermeasures or correctives to take to achieve a better overall performance.
Skills and competencies
Supply chain professionals need to have knowledge of managing supply chain functions such as transportation, warehousing, inventory management, and production planning. In the past, supply chain professionals emphasized logistics skills, such as knowledge of shipping routes, familiarity with warehousing equipment and distribution center locations and footprints, and a solid grasp of freight rates and fuel costs. More recently, supply-chain management extends to logistical support across firms and management of global supply chains. Supply chain professionals need to have an understanding of business continuity basics and strategies.
Certification
Individuals working in supply-chain management can attain professional certification by passing an exam developed by a third party certification organization. The purpose of certification is to guarantee a certain level of expertise in the field. The knowledge needed to pass a certification exam may be gained from several sources. Some knowledge may come from college courses, but most of it is acquired from a mix of on-the-job learning experiences, attending industry events, learning best practices with their peers, and reading books and articles in the field. Certification organizations may provide certification workshops tailored to their exams.
University rankings
The following North American universities rank high in their master's education in the SCM World University 100 ranking, which was published in 2017 and which is based on the opinions of supply chain managers: Michigan State University, Penn State University, University of Tennessee, Massachusetts Institute of Technology, Arizona State University, University of Texas at Austin and Western Michigan University. In the same ranking, the following European universities rank high: Cranfield School of Management, Vlerick Business School, INSEAD, Cambridge University, Eindhoven University of Technology, London Business School and Copenhagen Business School. In the 2016 Eduniversal Best Masters Ranking Supply Chain and Logistics the following universities rank high: Massachusetts Institute of Technology, KEDGE Business School, Purdue University, Rotterdam School of Management, Pontificia Universidad Catolica del Peru, Universidade Nova de Lisboa, Vienna University of Economics and Business and Copenhagen Business School.
Organizations
A number of organizations provide certification in supply chain management, such as the Council of Supply Chain Management Professionals (CSCMP), IIPMR (International Institute for Procurement and Market Research), APICS (the Association for Operations Management), ISCEA (International Supply Chain Education Alliance) and IoSCM (Institute of Supply Chain Management). APICS' certification is called Certified Supply Chain Professional (CSCP), and ISCEA's certification is called Certified Supply Chain Manager (CSCM). CISCM (Chartered Institute of Supply Chain Management) awards a certificate as Chartered Supply Chain Management Professional (CSCMP). Another body, the Institute for Supply Management, is developing a certification called Certified Professional in Supply Management (CPSM), focused on the procurement and sourcing areas of supply-chain management. The Supply Chain Management Association (SCMA) is the main certifying body for Canada, with its designations having global reciprocity; the designation Supply Chain Management Professional (SCMP) is the title of its supply chain leadership designation.
Topics addressed by selected professional supply chain certification programmes
The following table compares topics addressed by selected professional supply chain certification programmes.
See also
Barcode scanner
Beer distribution game
Bullwhip effect
Calculating demand forecast accuracy
Cold chain
Cost to serve
Customer-driven supply chain
Customer-relationship management
Demand-chain management
Distribution
Document automation
Ecodesk
Enterprise planning systems
Enterprise resource planning
Fair Stone standard
Industrial engineering
Information technology management
Integrated business planning
Inventory
Inventory control
Inventory control system
Inventory management software
LARG SCM
Liquid logistics
Logistic engineering
Logistics
Logistics management
Logistics Officer
Management accounting in supply chains
Management information system
Master of Science in Supply Chain Management
Military supply-chain management
Netchain analysis
Offshoring Research Network
Operations management
Order fulfillment
Procurement
Procurement outsourcing
Product quality risk in supply chain
Radio-frequency identification
Reverse logistics
Service management
Software configuration management (SCM)
Stock management
Strategic information system
Supply chain engineering
Supply-chain-management software
Supply-chain network
Supply-chain security
Supply chain
Supply management
Trade finance
Value chain
Vendor-managed inventory
Warehouse
Warehouse management system
Associations
INFORMS
Institute of Industrial Engineers
References
Further reading
Ferenc Szidarovszky and Sándor Molnár (2002) Introduction to Matrix Theory: With Applications to Business and Economics, World Scientific Publishing. Description and preview.
FAO, 2007, Agro-industrial supply chain management: Concepts and applications. AGSF Occasional Paper 17 Rome.
Haag, S., Cummings, M., McCubbrey, D., Pinsonneault, A., & Donovan, R. (2006), Management Information Systems For the Information Age (3rd Canadian Ed.), Canada: McGraw Hill Ryerson
Halldorsson, A., Kotzab, H., Mikkola, J. H., Skjoett-Larsen, T. (2007). Complementary theories to supply chain management. Supply Chain Management, Volume 12 Issue 4, 284-296.
Hines, T. (2004). Supply chain strategies: Customer driven and customer focused. Oxford: Elsevier.
Hopp, W. (2011). Supply Chain Science. Chicago: Waveland Press.
Kallrath, J., Maindl, T.I. (2006): Real Optimization with SAP® APO. Springer.
Kaushik K.D., & Cooper, M. (2000). Industrial Marketing Management. Volume 29, Issue 1, January 2000, Pages 65–83
Kouvelis, P.; Chambers, C.; Wang, H. (2006): Supply Chain Management Research and Production and Operations Management: Review, Trends, and Opportunities. In: Production and Operations Management, Vol. 15, No. 3, pp. 449–469.
Larson, P.D. and Halldorsson, A. (2004). Logistics versus supply chain management: an international survey. International Journal of Logistics: Research & Application, Vol. 7, Issue 1, 17-31.
Simchi-Levi D., Kaminsky P., Simchi-Levi E. (2007), Designing and Managing the Supply Chain, third edition, McGraw-Hill
Stanton, D. (2020), Supply Chain Management For Dummies, Second Edition. Wiley New York.
Houlihan, J.B. (1985), "International Supply Chain Management", International Journal of Physical Distribution & Materials Management, Vol. 15 No. 1, pp. 22-38.
Douglas J. Thomas, Paul M. Griffin, Coordinated supply chain management, European Journal of Operational Research, Volume 94, Issue 1, 1996, Pages 1-15.
Keah Choon Tan, "A framework of supply chain management literature", European Journal of Purchasing & Supply Management, vol. 7, no. 1, (2001), pp. 39-48.
Freight transport
Distribution (marketing)
Management by type
Essbase

Essbase is a multidimensional database management system (MDBMS) that provides a platform upon which to build analytic applications. Essbase began as a product from Arbor Software, which merged with Hyperion Software in 1998. Since Oracle Corporation acquired Hyperion Solutions Corporation in 2007, Oracle has marketed Essbase as "Oracle Essbase", both on-premises and in Oracle Cloud Infrastructure (OCI). Until late 2005, IBM also marketed an OEM version of Essbase as DB2 OLAP Server.
The database researcher E. F. Codd coined the term "on-line analytical processing" (OLAP) in a whitepaper that set out twelve rules for analytic systems (an allusion to his earlier famous set of twelve rules defining the relational model). This whitepaper, published by Computerworld, was somewhat explicit in its reference to Essbase features, and when it was later discovered that Codd had been sponsored by Arbor Software, Computerworld withdrew the paper.
In contrast to "on-line transaction processing" (OLTP), OLAP defines a database technology optimized for processing human queries rather than transactions. The results of this orientation were that multidimensional databases oriented their performance requirements around a different set of benchmarks (Analytic Performance Benchmark, APB-1) than that of RDBMS (Transaction Processing Performance Council (TPC)).
Hyperion renamed many of its products in 2005, giving Essbase an official name of Hyperion System 9 BI+ Analytic Services, but the new name was largely ignored by practitioners. The Essbase brand was later returned to the official product name for marketing purposes, but the server software still carried the "Analytic Services" title until it was incorporated into Oracle's Business Intelligence Foundation Suite (BIFS) product.
In August 2005, Information Age magazine named Essbase as one of the 10 most influential technology innovations of the previous 10 years, along with Netscape, the BlackBerry, Google, virtualization, Voice Over IP (VOIP), Linux, XML, the Pentium processor, and ADSL. Editor Kenny MacIver said: "Hyperion Essbase was the multi-dimensional database technology that put online analytical processing on the business intelligence map. It has spurred the creation of scores of rival OLAP products – and billions of OLAP cubes".
History and motivation
Essbase was originally developed to address the scalability issues associated with spreadsheets such as Lotus 1-2-3 and Microsoft Excel. Indeed, the patent covering (now expired) Essbase uses spreadsheets as a motivating example to illustrate the need for such a system.
In this context, "multi-dimensional" refers to the representation of financial data in spreadsheet format. A typical spreadsheet may display time intervals along column headings, and account names on row headings. For example:
If a user wants to break down these values by region, for example, this typically involves the duplication of this table on multiple spreadsheets:
An alternative representation of this structure would require a three-dimensional spreadsheet grid, giving rise to the idea that "Time", "Account", and "Region" are dimensions. As further dimensions are added to the system, it becomes very difficult to maintain spreadsheets that correctly represent the multi-dimensional values. Multidimensional databases such as Essbase provide a data store for values that exist, at least conceptually, in a multi-dimensional "hypercube".
Sparsity
As the number and size of dimensions increases, developers of multidimensional databases increasingly face technical problems in the physical representation of data. Say the above example was extended to add a "Customer" and "Product" dimension:
If the multidimensional database reserved storage space for every possible value, it would need to store 2,400,000,000 (4 × 4 × 3 × 10,000 × 5,000) cells. If the software maps each cell to a 64-bit floating point value, this equates to a memory requirement of 19.2 GB (2,400,000,000 cells × 8 bytes, or about 17.9 GiB). In practice, of course, the number of combinations of "Customer" and "Product" that contain meaningful values will be a tiny subset of the total space. This property of multi-dimensional spaces is referred to as sparsity.
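The back-of-the-envelope arithmetic above can be checked with a short script; the dimension sizes and the 8-byte (64-bit float) cell size follow the running example in the text:

```python
# Dimension sizes from the example: Time, Accounts, Region, Customer, Product.
DIM_SIZES = {"Time": 4, "Accounts": 4, "Region": 3,
             "Customer": 10_000, "Product": 5_000}
BYTES_PER_CELL = 8  # one 64-bit floating point value per cell

def dense_cell_count(dims):
    """Total cells if every combination of members is materialized."""
    total = 1
    for size in dims.values():
        total *= size
    return total

cells = dense_cell_count(DIM_SIZES)
raw_bytes = cells * BYTES_PER_CELL
print(cells, raw_bytes / 10**9)  # 2400000000 19.2
```

Only a tiny fraction of these cells would ever hold data, which is the sparsity problem the storage engines below are designed to exploit.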
Aggregation
OLAP systems generally provide for multiple levels of detail within each dimension by arranging the members of each dimension into one or more hierarchies. A time dimension, for example, may be represented as a hierarchy starting with "Total Time", and breaking down into multiple years, then quarters, then months. An Accounts dimension may start with "Profit", which breaks down into "Revenue" and "Expenses", and so on.
In the example above, if "Product" represents individual product SKUs, analysts may also want to report using aggregations such as "Product Group", "Product Family", "Product Line", etc. Similarly, for "Customer", natural aggregations may arrange customers according to geographic location or industry.
The number of aggregate values implied by a set of input data can become surprisingly large. If the Customer and Product dimensions are each in fact six "generations" deep, then 36 (6 × 6) aggregate values are affected by a single data point. It follows that if all these aggregate values are to be stored, the amount of space required is proportional to the product of the depth of all aggregating dimensions. For large databases, this can cause the effective storage requirements to be many hundred times the size of the data being aggregated.
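The counting argument above can be sketched in a few lines: one input cell contributes to an aggregate at every combination of hierarchy levels, so the number of affected combinations is the product of the dimension depths (36 for two six-generation dimensions, as in the text):

```python
from math import prod

def affected_level_combinations(depths):
    """Dimension-level combinations touched by a single input cell."""
    return prod(depths)

print(affected_level_combinations([6, 6]))     # 36, as in the text
print(affected_level_combinations([6, 6, 4]))  # a third, 4-deep dimension: 144
```

The multiplicative growth is why storing every aggregate can inflate a database to many times the size of its input data.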
Block storage (Essbase Analytics)
Since version 7, Essbase has supported two "storage options" which take advantage of sparsity to minimize the amount of physical memory and disk space required to represent large multidimensional spaces. The Essbase patent describes the original method, which aimed to reduce the amount of physical memory required without increasing the time required to look up closely related values. With the introduction of alternative storage options, marketing materials called this the Block Storage Option (Essbase BSO), later referred to as Essbase Analytics.
Put briefly, Essbase requires the developer to tag dimensions as "dense" or "sparse". The system then arranges data to represent the hypercube into "blocks", where each block comprises a multi-dimensional array made up of "dense" dimensions, and space is allocated for every potential cell in that block. Sparsity is exploited because the system only creates blocks when required. In the example above, say the developer has tagged "Accounts" and "Time" as "dense", and "Region", "Customer", and "Product" as "sparse". If there are, say, 12,000 combinations of Region, Customer and Product that contain data, then only 12,000 blocks will be created, each block large enough to store every possible combination of Accounts and Time. The number of cells stored is therefore 192,000 (4 × 4 × 12,000), requiring under two megabytes of memory (exactly 1,536,000 bytes), plus the size of the index used to look up the appropriate blocks.
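The savings from block storage in this example can be quantified directly; the figures below follow the running example (dense Accounts × Time blocks of 4 × 4 cells, 12,000 existing sparse combinations):

```python
DENSE_BLOCK_CELLS = 4 * 4              # Accounts (4) x Time (4) per block
EXISTING_BLOCKS = 12_000               # sparse combinations that hold data
FULL_CELLS = 4 * 4 * 3 * 10_000 * 5_000  # fully materialized hypercube

stored_cells = DENSE_BLOCK_CELLS * EXISTING_BLOCKS  # 192,000
stored_bytes = stored_cells * 8                     # 1,536,000 bytes
print(stored_cells, stored_bytes, FULL_CELLS // stored_cells)
# 192000 1536000 12500  -> a 12,500x reduction versus full materialization
```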
Because the database hides this implementation from front-end tools (i.e., a report that attempts to retrieve data from non-existent cells merely sees "null" values), the full hypercube can be navigated naturally, and it is possible to load values into any cell interactively.
Calculation engine
Users can specify calculations in Essbase BSO as:
the aggregation of values through dimensional hierarchies;
stored calculations on dimension members;
"dynamically calculated" dimension members; or
procedural "calculation scripts" that act on values stored in the database.
The first method (dimension aggregation) takes place implicitly through addition, or by selectively tagging branches of the hierarchy to be subtracted, multiplied, divided or ignored. Also, the result of this aggregation can be stored in the database, or calculated dynamically on demand—members must be tagged as "Stored" or "Dynamic Calc." to specify which method is to be used.
The second method (stored calculations) applies a formula to each calculated dimension member; when Essbase calculates that member, the result is stored against that member just like a data value.
The third method (dynamic calculation) is specified in exactly the same format as stored calculations, but calculates a result when a user accesses a value addressed by that member; the system does not store such calculated values.
The fourth method (calculation scripts) uses a procedural programming language specific to the Essbase calculation engine. This type of calculation may act upon any data value in the hypercube, and can therefore perform calculations that cannot be expressed as a simple formula.
A calculation script must also be executed to trigger the calculation of aggregated values or stored calculations as described above—a built-in calculation script (called the "default calculation") can be used to execute this type of calculation.
Aggregate storage (Enterprise Analytics)
Although block storage effectively minimizes storage requirements without impacting retrieval time, it has limitations in its treatment of aggregate data in large applications, motivating the introduction of a second storage engine, named Aggregate Storage Option (Essbase ASO) or more recently, Enterprise Analytics. This storage option makes the database behave much more like an OLAP database, such as SQL Server Analysis Services.
Following a data load, Essbase ASO does not store any aggregate values, but instead calculates them on demand. For large databases, where the time required to generate these values may become inconvenient, the database can materialize one or more aggregate "views", made up of one aggregate level from each dimension (for example, the database may calculate all combinations of the fifth generation of Product with the third generation of Customer), and these views are then used to generate other aggregate values where possible. This process can be partially automated, where the administrator specifies the amount of disk space that may be used, and the database generates views according to actual usage.
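The idea of materialized aggregate views can be illustrated with a toy example: precompute one mid-level view once, then derive higher-level aggregates from that much smaller view rather than from the leaf cells. All names and data below are invented for illustration and are not Essbase APIs:

```python
from collections import defaultdict

# Leaf cells keyed by (product_sku, customer_city) -- hypothetical data.
leaves = {("sku1", "bern"): 10, ("sku2", "bern"): 5,
          ("sku1", "basel"): 7, ("sku2", "basel"): 3}

sku_to_group = {"sku1": "groupA", "sku2": "groupA"}
city_to_region = {"bern": "CH", "basel": "CH"}

# Materialize the (product_group, customer_city) view once...
view = defaultdict(float)
for (sku, city), value in leaves.items():
    view[(sku_to_group[sku], city)] += value

# ...then answer a (product_group, customer_region) query from the view,
# never touching the leaves again.
region_totals = defaultdict(float)
for (group, city), value in view.items():
    region_totals[(group, city_to_region[city])] += value

print(dict(region_totals))  # {('groupA', 'CH'): 25.0}
```

In a real deployment the engine picks which views to materialize based on available disk space and observed query patterns, as the paragraph above describes.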
This approach has a major drawback in that the cube cannot be treated for calculation purposes as a single large hypercube, because aggregate values cannot be directly controlled, so write-back from front-end tools is limited, and complex calculations that cannot be expressed as MDX expressions are not possible.
Calculation engine
Essbase ASO can specify calculations as:
the aggregation of values through dimensional hierarchies; or
dynamically calculated dimension members.
The first method (dimension aggregation) basically duplicates the algorithm used by Essbase BSO.
The second method (dynamic calculations) evaluates MDX expressions against dimension members.
User interface
The majority of Essbase users work with Essbase data via an add-in for Microsoft Excel (previously also Lotus 1-2-3) known as Smart View. The Essbase Add-In is a standard plugin to Microsoft Excel and creates an additional menu that can be used to connect to Essbase databases, retrieve or write data, and navigate the cube's dimensions ("Zoom in", "Pivot", etc.).
In 2005, Hyperion began to offer a visualization tool called Hyperion Visual Explorer (HVE), which was an OEM version of Tableau Software. Tableau Software originated at Stanford University as a government-sponsored research project to investigate new ways for users to interact with relational and OLAP databases. Hyperion and Tableau together built the first versions of Tableau Software designed specifically for multidimensional (OLAP) databases. Oracle terminated the OEM arrangement with Tableau Software soon after the acquisition of Hyperion in 2007.
Most other well-known analytics vendors provide user-facing applications with support for Essbase, including:
Hyperion Analyzer (aka Hyperion System 9 BI+ Web Analysis)
Hyperion Reports (aka Hyperion System 9 BI+ Financial Reporting)
Hyperion Enterprise Reporting
Hyperion Business Intelligence (aka Hyperion System 9 BI+ Interactive Reporting, and Brio Interactive Reporting)
Hyperion SQR (aka Hyperion System 9 BI+ Production Reporting)
Alphablox
Arcplan dynaSight (aka Arcplan Enterprise)
Oracle Business Intelligence Suite Enterprise Edition (aka OBIEE, Siebel Analytics)
Dodeca Spreadsheet Management System
Dodeca Excel Add-In for Essbase
Reporting Suite
EV Analytics
APIs are available for C, Visual Basic and Java, and embedded scripting support is available for Perl. The standardised XML for Analysis protocol can query Essbase data sources using the MDX language.
In 2007, Oracle Corporation began bundling Hyperion BI tools into Oracle Business Intelligence Enterprise Edition Plus.
Administrative interface
A number of standard interfaces can administer Essbase applications:
ESSCMD, the original command line interface for administration commands;
MaxL, a "multi-dimensional database access language" which provides both a superset of ESSCMD commands, but with a syntax more akin to SQL, as well as support for MDX queries;
Essbase Application Manager, the original Microsoft Windows GUI administration client, compatible with versions of Essbase before 7.0;
Essbase Administration Services, later renamed Analytic Administration Services, and then back to 'Essbase Administration Services' in v. 9.3.1, the currently supported GUI administration client; and
Essbase Integration Server for maintaining the structure and content of Essbase databases based on data models derived from relational or file-based data sources.
Cloud offerings
Since 2017, Essbase Cloud has been available as part of the Oracle Analytics Cloud (OAC), a suite of analytics solutions that includes reports and dashboards, data visualization, inline data preparation, and mobile access.
Competitors
Essbase (HOLAP/MOLAP) has several significant competitors among OLAP and analytics products, among them SAP BPC, Microsoft SQL Server Analysis Services (MOLAP, HOLAP, ROLAP), IBM Cognos (ROLAP), IBM/Cognos/Applix TM1 (MOLAP), Oracle OLAP (ROLAP/MOLAP), MicroStrategy (ROLAP), and EXASolution (ROLAP).
Note also that all of the above competitors, including Essbase, can feed their cubes (fact and dimensional data) from heterogeneous relational sources (Microsoft SQL Server, Oracle, IBM DB2, Teradata, Access, etc.) or non-relational sources (Excel, text files, CSV files, etc.), except for Oracle OLAP, which may only use Oracle relational sourcing.
Export and/or product migration of Essbase
Two options can export Essbase cubes into other formats:
CubePort, a commercial conversion application, converts Essbase cubes to the Microsoft SQL Server Analysis Services product. This product performs an object-to-object translation of the components that make up an Essbase cube, including: the outline, member formulas, calc scripts, data loading (load rules), report scripts (to MDX queries), substitution variables, and the security model. It can extract from any platform version of Essbase, including Oracle/Hyperion Essbase on Windows, Unix, AIX, HP-UX, Solaris, IBM DB2 OLAP, or AS/400 ShowCase Essbase.
The OlapUnderground Outline Extractor performs a rudimentary export of the outline, though it does not directly create any new objects. The output is a simple text file that can be pulled indirectly into other OLAP products or used for other purposes, such as synchronizing outlines. The Outline Extractor is now maintained, supported, and distributed free of charge by Applied OLAP, Inc.
See also
OLAP
Oracle OLAP
Business Intelligence
Data warehouse
Hyperion Planning
Comparison of OLAP servers
References
External links
Essbase
Oracle EPM, BI & Data Warehousing
Oracle Essbase
Hyperion at Oracle
v21 documentation
v19.3 documentation
v11.1.2.4 documentation
v11.1.2.3 documentation
v11.1.1.3 documentation
v9.3.1 documentation
Online analytical processing
Oracle software |
433034 | https://en.wikipedia.org/wiki/Wide%20Mouth%20Frog%20protocol | Wide Mouth Frog protocol | The Wide-Mouth Frog protocol is a computer network authentication protocol designed for use on insecure networks (the Internet for example). It allows individuals communicating over a network to prove their identity to each other while also preventing eavesdropping or replay attacks, and provides for detection of modification and the prevention of unauthorized reading. This can be proven using Degano.
The protocol was first described under the name "The Wide-mouthed-frog Protocol" in the paper "A Logic of Authentication" (1990), which introduced Burrows–Abadi–Needham logic, and in which it was an "unpublished protocol ... proposed by" coauthor Michael Burrows. The paper gives no rationale for the protocol's whimsical name.
The protocol can be specified as follows in security protocol notation, where T_A and T_S denote timestamps and K_XY denotes a symmetric key shared by X and Y:
A → S: A, {T_A, B, K_AB} encrypted under K_AS
S → B: {T_S, A, K_AB} encrypted under K_BS
A, B, and S are the identities of Alice, Bob, and the trusted server respectively
T_A and T_S are timestamps generated by A and S respectively
K_AS is a symmetric key known only to A and S
K_AB is a generated symmetric key, which will be the session key of the session between A and B
K_BS is a symmetric key known only to B and S
Note that to prevent active attacks, some form of authenticated encryption (or message authentication) must be used.
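As a rough sketch of the two-message flow only: the helper names below (seal, open_sealed) are invented for this example, and the HMAC-based "seal" authenticates but does not hide the payload, so this is deliberately not a secure implementation (real deployments need authenticated encryption, as noted above):

```python
import hashlib
import hmac
import json
import time

def seal(key: bytes, payload: dict) -> bytes:
    """Serialize a payload and prepend an HMAC-SHA256 tag (authenticity only)."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(key, body, hashlib.sha256).digest()
    return tag + body

def open_sealed(key: bytes, blob: bytes) -> dict:
    """Verify the tag and recover the payload; reject tampered messages."""
    tag, body = blob[:32], blob[32:]
    if not hmac.compare_digest(tag, hmac.new(key, body, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return json.loads(body)

K_AS = b"key-shared-by-A-and-S"
K_BS = b"key-shared-by-B-and-S"
K_AB = "fresh-session-key"  # chosen entirely by A, as the text notes

# Message 1, A -> S: A, {T_A, B, K_AB} under K_AS
msg1 = ("A", seal(K_AS, {"T": time.time(), "peer": "B", "K_AB": K_AB}))

# S checks timestamp freshness (a global clock is assumed) and re-wraps for B.
sender, blob = msg1
inner = open_sealed(K_AS, blob)
assert abs(time.time() - inner["T"]) < 60, "stale timestamp"

# Message 2, S -> B: {T_S, A, K_AB} under K_BS
msg2 = seal(K_BS, {"T": time.time(), "peer": sender, "K_AB": inner["K_AB"]})
print(open_sealed(K_BS, msg2)["K_AB"])  # B recovers the session key
```

The sketch also makes the listed weaknesses concrete: both parties trust S's clock, S holds every long-term key, and nothing stops an attacker from replaying msg1 while its timestamp is still fresh.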
The protocol has several problems:
A global clock is required.
The server S has access to all keys.
The value of the session key is completely determined by A, who must be competent enough to generate good keys.
An attacker can replay messages within the period when the timestamp is valid.
A is not assured that B exists.
The protocol is stateful. This is usually undesired because it requires more functionality and capability from the server. For example, S must be able to deal with situations in which B is unavailable.
See also
Alice and Bob
Kerberos (protocol)
Needham–Schroeder protocol
Neuman–Stubblebine protocol
Otway–Rees protocol
Yahalom (protocol)
References
Computer access control protocols |
15011379 | https://en.wikipedia.org/wiki/Bowl%20Championship%20Series%20controversies | Bowl Championship Series controversies | The Bowl Championship Series (BCS) was a selection system designed, through polls and computer statistics, to determine a No. 1 and No. 2 ranked team in the NCAA Division I Football Bowl Subdivision (FBS). After the final polls, the two top teams were chosen to play in the BCS National Championship Game which determined the BCS national champion team, but not the champion team for independent voting systems (most notably the AP Poll). This format was intended to be "bowl-centered" rather than a traditional playoff system, since numerous FBS Conferences had expressed their unwillingness to participate in a play-off system. However, due to the unique and often esoteric nature of the BCS format, there had been controversy as to which two teams should play for the national championship and which teams should play in the four other BCS bowl games (Fiesta Bowl, Orange Bowl, Rose Bowl, and Sugar Bowl). In this selection process, the BCS was often criticized for conference favoritism, its inequality of access for teams in non-Automatic Qualifying (non-AQ) Conferences (most likely due to perceived strength of schedule), and perceived monopolistic, "profit-centered" motives. In terms of this last concern, Congress explored the possibility on more than one occasion of holding hearings to determine the legality of the BCS under the terms of the Sherman Anti-Trust Act, and the United States Justice Department also periodically announced interest in investigating the BCS for similar reasons.
Overview
A 2009 survey conducted by Quinnipiac University found that 63% of individuals interested in college football preferred a playoff system to the BCS, while only 26% supported the BCS status quo. Arguments from critics typically centered on the validity of BCS national championship pairings and its designated national champions. Many critics focused strictly on the BCS methodology itself, which employed subjective voting assessments, while others noted that undefeated teams could finish seasons without an opportunity to play in the national championship game. For example, in the last six seasons of Division I FBS football, there were more undefeated non-BCS champions than undefeated BCS champions. Other criticisms involved discrepancies in the allocation of monetary resources from BCS games, as well as the determination of non-championship BCS game participants, which need not comply with the BCS rankings themselves. Critics note that other sports and divisions of college football complete seasons without disputed national champions, an outcome they attribute to the use of the playoff format.
Critics argued that increasing the number of teams would increase the validity of team comparisons in conferences, which do not compete with one another during the regular season; teams typically only play three or four non-conference games, as the result of pre-determined schedules. BCS proponents view the possibility of expanded competitive post-season opportunities as negative. The primary delivery of this objection is a slippery slope argument rhetorically known as bracket creep. Implementation of a playoff system, proponents object, would lead to other, more serious consequences, such as the diminished value of the regular season, diminished value of the bowl tradition, or damage to the collegiate academic calendar year. Critics, including Republican congressman Joe Barton, have been quick to respond to these red herrings, noting that teams from non-AQ conferences are already excluded from the national championship and their inclusion would only improve the meaningfulness of the regular season.
A further criticism of the system was the institutionalized bias towards the six AQ conferences and Notre Dame, an independent team in football, at the deliberate expense of the five Division I-A/FBS BCS non-AQ conferences. During the BCS era (1998-2013), 11 non-AQ conference Division I-A/FBS teams finished the regular season undefeated (Tulane in 1998; Marshall in 1999; Utah in 2004 and 2008; Boise State in 2004, 2006, 2008 and 2009; Hawaii in 2007; and TCU in 2009 and 2010) without being given an opportunity to play in the national championship game. (Due to Mid-American Conference bowl tie-ins, the 1999 Marshall team was in danger of not going to any bowl game if it had lost the conference title game, despite its No. 11 final ranking.) Another problem was presented when more than one non-AQ conference team had an undefeated schedule in the same season. In 2008, Utah and Boise State both went undefeated. However, the BCS rules only provided for one automatic at-large BCS berth from teams in the non-AQ conferences. Therefore, a two-loss Ohio State team was chosen over Boise State for the Fiesta Bowl, and Boise State ended up outside of the BCS games. This problem arose again in 2009, with Boise State and TCU undefeated. The final BCS rankings saw TCU at No. 4 and Boise State at No. 6, which meant that only TCU was guaranteed a slot in the BCS bowls. However, the Broncos were not left out of the BCS bowl party this time, as they were chosen to face TCU in the Fiesta Bowl. Nonetheless, both Boise State and TCU finished the regular season unbeaten – in the case of Boise State, for the second year in a row, the fourth year out of six, and in 2006 finished as the only undefeated team in the nation – and never had a chance to play for a BCS national title.
Since, however, teams from non-AQ conferences play what are considered generally easier schedules than teams from AQ conferences, it is unclear whether this "bias" is merely a penalty based on schedule strength that can also apply to AQ conference teams (see, e.g., the 2007 Kansas team, below). A rejoinder would be that teams from non-AQ conferences only have so much control over their schedules, creating the possibility that such a team might in fact be one of the two best teams in the country, and might also have made a good-faith effort to play a challenging schedule, but might still be excluded from the national championship game. This could happen due to BCS teams turning them down in fear of an upset, or scheduling a traditionally strong school who turned out to be having a weak year. The 2009 TCU team is a counterexample, however. They defeated both Virginia and Clemson on the road, and won the rest of their games by an average of 31 points. They received a BCS bid to play against Boise State in the Fiesta Bowl. Critics, though, argue that TCU may have been more deserving to play Alabama in the BCS Championship Game than Texas. With wins over Clemson, BYU, and Utah, some journalists, including Football Writers Association of America Past-President Dennis Dodd, have cited the 2009 TCU team as an example of a non-AQ team, excluded from consideration for the national championship in spite of their performance against strong competition.
Teams from non-AQ conferences have been successful in BCS bowl games, but this has not affected the position of proponents that non-AQ conference teams are not on an equal level with teams from automatic qualifying conferences. Such "BCS Busters" went 5–3 in BCS bowl games, and 4–2 in BCS bowls against teams from AQ conferences: Utah won both its BCS bowl appearances in 2004 and 2008, Boise State won both of its appearances in 2006 and 2009, while TCU won a BCS bowl in 2010 after losing one in 2009 (to Boise State). Northern Illinois lost the Orange Bowl to a tough Florida State team 31-10. In the previous year the Clemson Tigers lost by 37 points to the West Virginia Mountaineers, with lower attendance and television ratings. The only team that could reasonably be described as "playing a weak schedule and then being exposed by a BCS team" is the 2007 Hawaii team, which was defeated by Georgia in the 2008 Sugar Bowl.
Another concern with the BCS was that a team could fail to win its conference championship, but still play in the BCS championship game. This happened in the 2001, 2003, and 2011 seasons. In 2001 Nebraska played Miami (Florida), after a blowout loss to Colorado in the Cornhuskers' final regular-season game and, therefore, did not play in the Big 12 Conference Championship game. In 2003 Oklahoma played LSU despite losing to Kansas State 35–7 in the Big 12 Conference title game. In 2011, Alabama was selected to play LSU in a rematch between the two programs despite losing the earlier matchup and not winning their division, let alone their conference. This entails that a team that could not even win their conference title is awarded the title of best team in the nation, despite the obvious presence of a better squad within their own conference, as was the case with Alabama in 2011. A rejoinder is that these situations actually reflect a virtue of the BCS system, which rewards teams for their performance throughout the entire season, thereby reinforcing the notion that, in college football, every game (rather than just conference championship games, or games late in the season) matters.
A similar criticism was that a team with similar or better arguments to another team can be left out of the BCS despite beating the other team. This happened between Miami (Florida) and Florida State University in 2000, where Miami beat Florida St. yet Florida St. went to the National Championship Game. The University of Washington also beat Miami and finished with an 11–1 record, further adding to the controversy. In 2008, the situation was repeated when one-loss Oklahoma was selected for the BCS Championship over one-loss Texas, which beat the Sooners during the regular season. Although not related to the title game, after the 2007 season, Kansas was chosen to go to the BCS Orange Bowl, even though they had lost to Missouri (who went to the non-BCS Cotton Bowl, despite only losing twice to Oklahoma, and being ranked higher than both Kansas and Big Ten Rose Bowl Representative Illinois, who Missouri beat). This, among other games in history, illustrates that late season losses are often more injurious than early season losses.
Finally, critics argued that a team could win a weak conference and be awarded an automatic berth over at-large teams that were perceived to be more deserving. Most of this criticism centered on the Big East after losing Miami, Virginia Tech, and Boston College to the ACC. In 2004, No. 21 Pittsburgh won the Big East with a record of 8–3 and was awarded an automatic bid because they won their conference at the expense of several runner-up teams with much better rankings, such as No. 5 California, No. 7 Georgia, and No. 8 Virginia Tech, that were left out. In 2008, undefeated No. 9 Boise State and No. 11 TCU were left out of the BCS while No. 19 Virginia Tech, winner of the ACC was given a BCS bowl berth. In 2010, Connecticut won the Big East with a record of 8–4 and was awarded an automatic bid to the Fiesta Bowl despite not being ranked in the top 25 of the BCS standings. As a result, the Mountain West Conference campaigned to receive an automatic bid while there were calls for the Big East to lose its AQ status. Another way to fix this problem would be to mandate that if a conference champion finishes with a low ranking (say below 12) that they forfeit their automatic bid, and get put into the pool of "at large" teams that the BCS can choose from. Thus, a higher ranked non-AQ team (or an extra team from other AQ conferences) could be selected by a bowl game. Another solution to this problem that was heavily considered prior to the cessation of the BCS was to eliminate AQ status and allow the bowl committees to select the participants they want provided that the school has a sufficiently high BCS ranking so that no BCS bowl would be forced to take a low-ranking AQ conference champion. This would also allow a third team from the same conference to participate in a BCS bowl, something that is prohibited by the current rules.
Questions regarding disparities in revenue sharing
In addition to concerns about the inclusion of non-AQ (Automatic Qualifying) conference teams in the five BCS bowls, some critics have noted the disparities between the amounts paid to the six AQ conferences and their respective schools, as opposed to other conferences and their own schools.
The official BCS website discusses the payouts for the 2009–2010 BCS bowls.
Since each AQ conference is guaranteed at least one representative to a BCS game, each conference will receive approximately $21.2 million, plus an additional $6 million should a second conference team be selected. Although each conference has its own arrangement for the distribution of these funds, the average income per school in each conference is as follows (One team selected/Two teams selected):
Atlantic Coast (14 teams): $1.767M / $2.667M
American Athletic (12 teams): $2.650M / $3.400M
Big Ten (12 teams): $1.927M / $2.473M
Big 12 (10 teams): $1.767M / $2.667M
Pacific-10 (12 teams): $2.120M / $2.720M
Southeastern (14 teams): $1.767M / $2.667M
With next season's realignment in the Big Ten, Big 12, and Pacific-10 (to be renamed the Pac-12) conferences, these numbers will be adjusted.
Notre Dame is guaranteed 1/74th of net revenues, or approximately $1.7 million. If selected to play in a BCS bowl, Notre Dame will receive $6 million.
Independent programs Army and Navy will each receive $100,000 for allowing their teams to participate in the selection for BCS bowls.
Nine percent, or approximately $12.35 million, is guaranteed in aggregate to Conference USA, the Mid-American, Mountain West, Sun Belt, and Western Athletic conferences. If a team from one of these five conferences plays in a BCS bowl, an additional nine percent (approximately $12.35M) will be given in aggregate to the conferences, and if a second team participates, those conferences will receive an additional $6.0M. These five conferences are composed of a total of 52 teams, broken down as follows:
Conference USA – 14 teams
Mid-American – 13 teams
Mountain West – 12 teams
Sun Belt – 11 teams
Therefore, if the payouts to these conferences were broken down equally per school (which is not the case), this would amount to an average of $237,500 per school. If one team from these conferences were to play in a BCS game, that figure would increase to $519,231 per school. Should two teams be selected, the average per school would rise to $634,615 per school.
As a result, in the best-case scenario schools from the non-AQ conferences would receive approximately 37% of the least of the schools in the AQ conferences, including Notre Dame. These numbers are not the actual amounts paid to each school, but are averaged over the total number of schools.
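The per-school figures quoted above can be reproduced with simple arithmetic, assuming the 52-school count and the equal-split simplification the text itself flags as unrealistic (the one-team and two-team per-school figures are taken from the text and worked backwards to totals):

```python
SCHOOLS = 52            # non-AQ FBS schools, per the text
BASE = 12_350_000       # guaranteed 9% aggregate share

# Guaranteed share, split equally.
print(round(BASE / SCHOOLS))  # 237500, matching the text

# Working backwards from the quoted per-school figures:
one_bid, two_bids = 519_231, 634_615
print(round(one_bid * SCHOOLS / 1e6))               # ~27 (total, one BCS team)
print(round((two_bids - one_bid) * SCHOOLS / 1e6))  # ~6, the $6.0M second-team bonus
```

The implied one-team total (about $27M) does not exactly equal the base plus one additional 9% share ($24.7M), suggesting the source's quoted figures include money not itemized in this passage.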
Each of the 14 conferences in the Football Championship Subdivision (formerly Division I-AA) will receive $250,000, or a total of $3.5M. The FCS consists of 122 football programs in 14 conferences, with 7 schools independent. (It is unclear whether the independent schools are included in the BCS payout.) As a result, although the actual distribution will vary significantly, each of the 122 schools will receive an average of $28,689. This represents 1/56th of the amount that Notre Dame (the team with the lowest guaranteed amount) would receive, and 1/209th of the amount Notre Dame would receive if it earns a BCS bid.
A breakdown of the BCS non-AQ revenue sharing conducted in 2010 shows the total amounts that the five non-AQ conferences received from all bowls. The coalition of C-USA, MAC, MWC, Sun Belt, and WAC conferences divides half of the BCS revenue equally amongst the five conferences, and the other half into 15 equal shares which are divided by performance. Since Boise State and TCU participated in the Fiesta Bowl, the coalition grossed a total of $24M. As a result, the conferences received and divided the following income (averages per school, which likely do not reflect actual amounts, are in parentheses):
MWC – $9.8M ($1,088,889)
WAC – $7.8M ($866,667)
C-USA – $2.8M ($233,333)
MAC – $2.1M ($161,538)
Sun Belt – $1.5M ($166,667)
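The per-school averages quoted in parentheses above can be reproduced with a short sketch. The football membership counts used here are an assumption based on the conferences' approximate 2009–10 rosters; they are not stated in the text, though they are consistent with the averages given:

```python
# Conference payout totals from the 2009-10 non-AQ revenue breakdown above,
# paired with assumed football membership counts for that season.
totals = {
    "MWC":      (9_800_000,  9),
    "WAC":      (7_800_000,  9),
    "C-USA":    (2_800_000, 12),
    "MAC":      (2_100_000, 13),
    "Sun Belt": (1_500_000,  9),
}

for conf, (payout, teams) in totals.items():
    # Average per school, rounded to the nearest dollar as in the article.
    print(f"{conf:8s} ${payout / teams:,.0f} average per school")

# The five conference totals sum to the $24M the coalition grossed.
assert sum(p for p, _ in totals.values()) == 24_000_000
```

Dividing each conference's total by its membership yields the figures in parentheses (e.g. $9.8M over nine MWC schools is about $1,088,889), though as the article notes, actual distributions within each conference differ from these averages.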
The 2008–2009 BCS Media Guide claims that over the first 10 years of the BCS arrangement, a total of $100 million was given to the then-50 non-AQ conference Football Bowl Subdivision schools and the 122 Football Championship Subdivision schools. This gives an average of $10M/year, or $58,803 per school per year. By comparison, each AQ conference (between eight and twelve schools) is guaranteed $18 million this year, an average of $1.66M per school across the 65 participating institutions.
The disparities between AQ conferences and non-AQ conferences continue outside the Bowl Championship Series to other bowls, but since the payouts for the five BCS bowls are so much greater than other bowls, the BCS has a major impact on revenue distribution paid to the various Football Bowl Subdivision (formerly Division I-A) schools. A 2003 study described the disparities between the different schools. In 2003, there were 24 bowls other than the BCS bowls, creating opportunities for 48 teams to participate in bowl games. Of these 48 teams, 33 were from AQ conferences.
In 2003, the Big Ten led all conferences with $31.9 million from its seven bowl appearances. By comparison, Conference USA, which led the non-AQ conferences with five bowl appearances, brought in a total of $5.75 million. TCU led all non-AQ schools with $1.37 million from its Liberty bowl appearance.
A similar study of 2000–2010 bowls shows that the SEC led all conferences with $40.46M from its ten bowl appearances. By comparison, the Mountain West Conference led all non-AQ conferences with $12.9M from its five bowls, including TCU's Fiesta Bowl appearance.
The BCS itself acknowledges the vast discrepancies between the conferences that automatically qualify (AQ conferences) and those that do not by drawing a comparison between BCS and non-BCS bowls. Its website notes that as a result of Utah's 2009 Sugar Bowl appearance, the MWC received $9.8 million. In contrast, the largest payout among the conference's contracted non-BCS bowls is the MAACO Bowl Las Vegas, which pays the two teams a total of $1.8 million. The Sugar Bowl thus paid the MWC over 10 times the best of what a non-BCS bowl offered. Had Utah not been offered the bid, the MWC would have suffered considerably in comparison.
As a result, there has been significant criticism regarding the revenue distribution by bowls, specifically the BCS due to its significantly higher payout to participating teams. This disparity coupled with the comparative difficulty for non-AQ conference teams to participate in BCS bowls, compounded by the uneven split even for non-AQ conference teams competing in a BCS bowl, have raised calls for further reform in the revenue distribution structure. These concerns have also called into question the underlying motivations of the BCS, insofar as revenue is concerned. These issues have been the center of some Congressional inquiries, the threat of a lawsuit by the Attorney General of Utah, and a recent law review article in the Harvard Journal of Sports and Entertainment Law concluding that the BCS violates federal antitrust law.
Finally, these figures cannot describe the vast differences in merchandise and other revenue that schools receive based on participating in higher visibility games, such as the BCS bowls.
Controversies by season
1998–99 season
The first year of the BCS ended in controversy when one-loss Kansas State finished third in the final BCS standings but was passed over for participation in BCS games in favor of Ohio State (ranked 4th) and two-loss Florida (8th). Instead, the Wildcats played in the less prestigious Alamo Bowl against Purdue. That offseason, the BCS adopted the "Kansas State Rule", which provides that the 3rd ranked team (or 4th ranked team if the 3rd ranked team has already qualified as a conference champion) in the final BCS standings is assured an invitation to a BCS game. The rule was first utilized in 2002–03, giving an automatic berth to USC. The rule was used eight times in all, with Texas earning automatic bids in 2004–05 and 2008–09, Ohio State earning an automatic bid in 2005–06, Michigan receiving an automatic bid in 2006–07, Stanford receiving an automatic bid in 2010–11 and 2011–12, Florida receiving an automatic bid in 2012–13, and Alabama receiving an automatic bid in 2013–14.
The year also provided the first hint of the system's inherent bias: Tulane went undefeated yet, owing to its conference affiliation, finished just 10th in the final BCS rankings and was passed over for a potential at-large bid.
1999–2000 season
In the second year of the BCS, Kansas State finished 6th in the BCS standings but once again received no invitation, instead being passed over in favor of Michigan (ranked 8th). Kansas State's predicament (along with that of undefeated Tulane the previous year and undefeated Marshall, denied this year) inaugurated the long-standing media controversies regarding the system. Michigan was also picked ahead of in-state and Big Ten rival Michigan State (ranked 9th), despite the Spartans defeating the Wolverines that year (34–31). The selection of Michigan, a perceived national brand, over Kansas State and Michigan State led to allegations that the BCS favored paydays over athletic integrity.
2000–01 season
Florida State (11–1, ACC champions) was chosen to play undefeated Oklahoma (12–0, Big 12 champions) in the Orange Bowl for the national championship, despite its one loss coming to another one-loss team, the Miami Hurricanes (10–1, Big East champions), who were ranked No. 2 in both human polls. Adding to the controversy, Miami's one loss came to yet another one-loss team, the Pac-10 champion Washington Huskies, leaving three teams with a legitimate claim to play Oklahoma in the national championship game. Florida State lost to Oklahoma 13–2, while Washington and Miami both easily won their BCS games, adding more fuel to the fire. Washington beat Purdue 34–24 in the Rose Bowl, and Miami beat Florida 37–20 in the Sugar Bowl. As a result of the controversy, the BCS was tweaked in the off-season: a "quality-win" bonus was added to the formula, giving extra credit for beating a top-ten team.
2001–02 season
In another controversial season, Nebraska was chosen as a national title game participant despite being ranked No. 4 in both human polls and not winning its conference. The Huskers went into their final regular season game at Colorado undefeated, but left Boulder with a 62–36 loss, and Colorado went on to win the Big 12 championship. However, the BCS computers, which accounted for 50% of the rankings, did not take the timing of a loss into account, so one-loss Nebraska came out ahead of two-loss Colorado and one-loss Oregon, the consensus No. 2 in both human polls (but 4th in the BCS). In the end, Nebraska beat Colorado for the No. 2 spot in the BCS poll by .05 points. Nebraska was routed in the national title game, 37–14, by Miami. Similarly, Oregon dominated Colorado in the Fiesta Bowl, 38–16. As a result, the computers' influence in determining the rankings was reduced from 50% to 33.3%, and the human polls' share increased to 66.6%.
2002–03 season
The 2002–03 season was controversial not for the championship game selections or non-AQ access to the BCS bowl games, but for the BCS allegedly ruining bowl traditions that long predated its existence. The Rose Bowl traditionally features the champions of the Big Ten and the Pac-10. However, Big Ten co-champion Ohio State, finishing No. 2 in the BCS, had qualified to play in the 2003 Fiesta Bowl for the national championship against Big East champion Miami.
After the national championship was set, the Orange Bowl had the next pick and invited No. 3 (#5 BCS) Iowa, who had shared the Big Ten title with Ohio State. When it was the Rose Bowl's turn to select, the best available team was No. 8 (#7 BCS) Oklahoma, the Big 12 champion. When it came time for the Orange Bowl and Sugar Bowl to make a second pick, both wanted Pac-10 co-champion USC. However, a BCS rule stated that if two bowls wanted the same team, the bowl with the higher payout had priority. The Orange Bowl immediately extended an at-large bid to the Trojans and paired them with the Hawkeyes in a Big Ten/Pac-10 "Rose Bowl East" matchup in the 2003 Orange Bowl. The Rose Bowl was left to pair Oklahoma with Pac-10 co-champion Washington State.

Rose Bowl committee executive director Mitch Dorger was not pleased with the results, and the 2003 Rose Bowl game had the lowest attendance and first non-sellout since 1944. This was the second consecutive Rose Bowl not to feature the Big Ten and Pac-10 together, as the previous season's game had served as the national championship; before that, the only times the Rose Bowl had not featured its two traditional conferences were in 1918 and 1919. The Rose Bowl ended with Oklahoma thumping Washington State 34–14, and the Orange Bowl ended with USC whacking Iowa 38–17. As for the Sugar Bowl, there was no controversy surrounding it: SEC champion Georgia and ACC champion Florida State were paired together, with Georgia winning the game 26–13. Meanwhile, Ohio State upset Miami in the Fiesta Bowl, 31–24, to win the national championship and finished as the only undefeated team in the nation, completing the first 14–0 season in college football history.
2003–04 season
The 2003–04 season saw three schools from BCS AQ conferences finishing the season with one loss (in fact, no Division I-A team finished the season undefeated, something that hadn't happened since 1996, two years before the advent of the BCS). The three schools in question were Oklahoma, LSU and USC.
USC was ranked No. 1 in both human polls, but was burdened by a 2.67 computer ranking due to a weaker schedule and one defeat, to unranked California, during the regular season. Meanwhile, Oklahoma had a perfect undefeated regular season playing a stronger schedule, then lost to a strong No. 8-ranked Kansas State in the Big 12 Championship Game, 35–7. The Sooners had been ranked first in both the human polls and the BCS rankings going into the conference championship week, but dropped to No. 3 in the human polls after the loss to the Wildcats. The Sooners were still ranked first in the computer rankings by a large enough margin to give them the top spot in the final BCS rankings (the computer rankings did not take the timing of a loss into account). LSU earned the second spot based on a stronger computer ranking than USC and a No. 2 human poll ranking, and went on to claim the BCS championship with a 21–14 win over Oklahoma in the Sugar Bowl. USC instead went to the Rose Bowl, where they easily defeated Big Ten champion Michigan (No. 4), and the AP proclaimed the Trojans national champions. The split in the polls left many LSU (13–1) and USC (12–1) fans displeased.
The college coaches involved in the Coaches Poll were contractually obligated to award their organization's trophy and first-place votes to the winner of the BCS championship game, LSU. However, for the first time in the history of the BCS, the BCS champion was not unanimous. LSU received 60 first-place votes, while USC garnered three. The three coaches who broke ranks, violating their contractual obligation by not voting LSU national champion, were Lou Holtz of South Carolina, along with two coaches from the conferences that played in that year's Rose Bowl: Mike Bellotti of the Pac-10's Oregon and Ron Turner of the Big Ten's Illinois.
2004–05 season
Undefeated teams
The 2004–05 regular season finished with five undefeated teams for the first time since 1979. Despite having perfect records, Auburn, Utah and Boise State were denied an opportunity to play for the BCS championship. Utah and Boise State's schedules were thought of as weaker than Auburn's (by virtue of playing in the weaker Mountain West and WAC, respectively). However, Utah was the first BCS non-AQ conference team to ever play in a BCS game.
Much of the debate centered on No. 3 Auburn, who went undefeated in the SEC, leading to debates over strength of schedule, a factor whose weight in the BCS formula had been reduced before the season. In the championship game, Oklahoma was crushed by USC, 55–19. Meanwhile, Auburn defeated ACC champion No. 9 Virginia Tech in the Sugar Bowl 16–13, and No. 6 Utah demolished Big East champion No. 21 Pittsburgh in the Fiesta Bowl, 35–7. This left three undefeated teams at the end of the season, with Auburn finishing at No. 2 and Utah at No. 4, and once again leaving many voters wondering who the real champion was.
Lobbying for votes
Another controversy occurred this season when voters jumped Texas over California in the final regular-season poll. Texas coach Mack Brown publicly lobbied voters to give Texas the final at-large bid. Although the Bears, as Pac-10 runner-up, normally would have had first crack at a Rose Bowl berth, Brown lobbied for and got that berth. Several Associated Press voters were besieged by fan emails and phone calls attempting to sway their votes, apparently spurred from Brown's pleas to rank Texas ahead of other "less deserving teams." California's cause was hurt when it was less than impressive in a 26–16 victory over 24-point underdog Southern Miss in Hattiesburg, Mississippi the night before bowl bids were extended.
Nine of the 65 AP voters moved Texas ahead of Cal, and three of them were from Texas. In the Coaches Poll, four voters moved Cal down to No. 7 and two to No. 8, when the week before none had them lower than No. 6. Meanwhile, two coaches moved Texas up to No. 3, even though they did not play that week. The Los Angeles Times wrote that accusations were raised about manipulated voting, but the individual votes were not released to prove or disprove the allegations. In the end, Texas gained 23 points on Cal in the AP Poll, and 43 points in the Coaches Poll. This allowed Texas to earn a BCS berth, finishing .0129 points ahead of Cal in the BCS standings after being .0013 points behind. In the Rose Bowl, Texas rallied to defeat Big Ten champion Michigan in a thriller, 38-37.
Following the game, Brown was heavily criticized for his lobbying of votes, while Cal coach Jeff Tedford called for all votes to be released to the public. The Golden Bears lost to Texas Tech in the Holiday Bowl, 45–31, hurting their cause. Cal played without two of the highest-performing receivers in the NCAA; however, their loss was attributed in many press reports to the Bears' disappointment over being denied their first Rose Bowl appearance in 45 years.
As a result of two straight years of controversy, the Associated Press removed its poll from the BCS formula, instead choosing to give out its own national championship trophy. The AP Poll was replaced by the Harris Interactive Poll the following year.
Reggie Bush improper benefits scandal
In the years following USC's 2004 season BCS title, it emerged that former Trojan running back Reggie Bush, who played for the team in 2004 (and 2005), had received illegal gifts in violation of NCAA rules. The subsequent NCAA investigation confirmed this, and the Trojan football program was hit with severe sanctions in the summer of 2010. Among these were the vacating of the last two wins of 2004 (including the Orange Bowl) and all wins in 2005 (they lost the Rose Bowl to Texas). After USC's appeal was denied, the BCS officially stripped them of the 2004 BCS title in 2011 and left the title for that year vacant.
2006–07 season
Going into the final poll, undefeated Boise State and four one-loss teams (Louisville, Michigan, Wisconsin and Florida) were up for a spot against undefeated Ohio State in the BCS National Championship game. Louisville (11–1, Big East champions), and Boise State were given less consideration because of a perceived lack of schedule strength, while Wisconsin (11–1) was two steps removed from Ohio State (they lost to Michigan, who lost to Ohio State, and Wisconsin and Ohio State did not play).
Michigan lost to Ohio State 42–39 in the regular season finale (giving the Buckeyes the Big Ten championship), but were still ranked ahead of Florida and behind USC going into the final ballot. Florida defeated Arkansas in the SEC Championship Game, and No. 2 USC lost to UCLA, leaving Michigan and Florida as one-loss teams who both claimed they deserved to play for the national championship. Ultimately, Florida passed Michigan into the No. 2 spot by a mere .0101 points. This small difference was a result of the human polls (USA Today's Coaches' Poll and Harris Interactive Poll) ranking Florida above Michigan, while the computer polls had the two teams tied for second.
Michigan, which was automatically guaranteed an at-large berth by virtue of its No. 3 ranking, went to the Rose Bowl, which they lost to USC 32–18. Florida won the national championship by impressively beating Ohio State, 41–14. Florida received all but one of the 65 first-place votes in the final Associated Press poll (the other went to Boise State, who won the Fiesta Bowl over Oklahoma). At the conclusion of the season, Wisconsin and Louisville both finished the season with one loss, while Boise State was the only undefeated team in the nation.
Because of a BCS rule allowing only two teams from each conference to play in BCS bowl games, highly ranked Wisconsin and Auburn were not eligible for selection to a BCS game. Wisconsin was excluded because Ohio State and Michigan represented the Big Ten, and Auburn was excluded because LSU and Florida represented the SEC, even though Auburn defeated LSU 7–3 and Florida 27–17 during the season. LSU earned the at-large bid on the strength of its 31–26 victory over SEC West champion Arkansas, while the Razorbacks defeated then No. 2 Auburn 27–10 on the road. Auburn's 37–15 loss at home to a reeling Georgia team also ended its chances at the BCS.
Even without the rule, Auburn would not have secured a berth, as Wisconsin would likely have received the final at-large bid. The final BCS poll had seven teams from the SEC and the Big Ten ranked in the top twelve, but by rule only two from each conference were eligible to play in BCS games, opening arguments that both conferences were over-ranked, that the Big Ten schedule does not produce a true conference champion, or that the limit of two teams from any one conference is inappropriate.
2007–08 season
In a season full of upsets, the top two teams in the polls lost on the same weekend each of the final two weeks, sending the BCS into chaos heading into the selection of the two teams to play for the national championship. On November 23, top-ranked LSU lost in triple overtime to Arkansas. This was the Tigers' second triple-overtime loss of the season, with the other to Kentucky. The following day, No. 4 Missouri beat No. 2 Kansas and took the top spot in the BCS for the following week. This created the interesting prospect of No. 1 Missouri playing its final game of the season as a three-point underdog against Oklahoma. On December 1, Missouri was defeated by Oklahoma in the Big 12 Championship Game. No. 2 West Virginia was also stunned at home by unranked Pittsburgh in the annual Backyard Brawl game. Meanwhile, Ohio State, who was idle for the final two weeks, climbed the rankings from No. 5 to No. 1. Hawaii capped off an undefeated season (the only such team going into the bowl season), beating Washington and securing a BCS appearance for the first time in school history. However, as with Boise State in the previous season, Hawaii did not play for the BCS championship because the Warriors' schedule was deemed too weak, adding to the ongoing controversy over the subjectivity of the strength-of-schedule element in the rankings and how easily it can be manipulated. In fact, with Hawaii's loss in the Sugar Bowl, the 2007–08 season was the first since the 2003–04 season (and only the second in the BCS era) with no teams finishing the entire season undefeated.
In another irony, No. 6 Missouri was shut out of the BCS entirely when No. 8 Kansas was selected as one of three at-large teams. The Tigers finished higher in the BCS standings and had defeated the Jayhawks a week before the Big 12 title game. However, Kansas received a bid to the Orange Bowl; Orange Bowl officials said that they picked Kansas because the Jayhawks had only one loss, while Missouri had two losses, both to Big 12 champion Oklahoma. Since BCS rules did not allow more than two teams from one conference to get a bid, Missouri was denied an at-large bid because of Kansas' invitation to the Orange Bowl and Oklahoma's invitation to the Fiesta Bowl. Instead, Missouri crushed Arkansas 38–7 in the Cotton Bowl. Kansas stunned No. 3 Virginia Tech in the Orange Bowl 24–21, and Oklahoma was trounced by West Virginia 48-28 in the Fiesta Bowl, making no clear argument either way.
Before "Championship Saturday", LSU was ranked No. 7 and Georgia was No. 4. However, after No. 1 Missouri and No. 2 West Virginia lost, LSU was catapulted to No. 2 based on a 21–14 win over No. 14 Tennessee in the SEC Championship Game. Many argued that the Bulldogs should not play for the national title because they didn't play for—let alone win—the SEC Championship. ESPN's Kirk Herbstreit served as point man for the attack on the Bulldogs, ironically one year after pleading for an Ohio State-Michigan rematch after the Wolverines failed to win their conference. The Bulldogs and Vols finished with identical 6–2 records atop the SEC East, but Tennessee represented the division in the championship game by virtue of beating Georgia 35–14 in October. Virginia Tech had been ranked No. 6, above LSU, but had to settle for the No. 3 slot, despite a convincing win over No. 11 Boston College in the ACC Championship Game. Voters were likely influenced by LSU's crushing 48–7 defeat of Virginia Tech early in the season. Computer rankings placed Virginia Tech (0.960) and LSU (0.950) No. 1, and No. 2, respectively. The top four teams in the BCS standings were No. 1 Ohio State, No. 2 LSU, No. 3 Virginia Tech, and No. 4 Oklahoma.
Ultimately, LSU defeated Ohio State 38–24, marking the second straight season that the Buckeyes lost the championship game to an SEC opponent. LSU received 60 of a possible 65 first-place votes in the final AP poll, the fewest for a BCS champion since 2004. Georgia, another SEC team, was second in the poll and received three first-place votes. The final two first-place votes went to USC and Kansas, ranked No. 3 and No. 7 respectively.
2008–09 season
In the Big 12 South division, there was a three-way tie for the division championship between Oklahoma, Texas, and Texas Tech (all one-loss teams). The winner of that division would likely play in the national championship game if it beat Missouri in the Big 12 Championship Game. Oklahoma lost to Texas 45–35, then Texas lost to Texas Tech 39–33, and then Texas Tech lost to Oklahoma 65–21. In the Big 12, the BCS standings were used to break this tie, causing the teams to jockey for votes in the human polls. In the end, Oklahoma edged out Texas for the right to represent the Big 12 South in the conference championship game. Despite the head-to-head loss to the Longhorns earlier in the season, the computer rankings ranked the Sooners' schedule ahead of the Longhorns'. Another BCS AQ conference, the SEC, uses the BCS standings merely to eliminate one team in a three-way tie and then uses head-to-head results to break the tie, a method which would have worked in Texas' favor.<ref>"Using the SEC model, if Oklahoma wins out, the Sooners and Texas Tech would probably be eliminated in a three-way tie. Texas would probably get the nod to play for the Big 12 championship by virtue of its 45–35 victory over Oklahoma on Oct. 11. Using the Big 12 model, Oklahoma's chances would be much better." Fit to be tied? Big 12 tie-breaker could determine national championship game berth , Birmingham News</ref>
Going into the conference championship games, only four teams—Alabama, Utah, Ball State and Boise State—were undefeated. However, in the event of an Alabama loss, Utah, Ball State and Boise State had no chance at a title game berth because their schedules were deemed too weak, once again igniting a controversy about schedule strength. As it turned out, Alabama lost to one-loss Florida in the SEC Championship Game, vaulting the Gators to the second spot in the final BCS rankings and a matchup in the title game against Oklahoma. Alabama fell to fourth, behind Texas. In addition, Ball State lost the MAC championship to Buffalo, which denied any chance they had at getting a BCS berth.
Utah and Boise State both finished in the top 15 of the BCS standings and were thus eligible for BCS at-large spots. It was generally understood, however, that only one team would get a berth, as it would be hard to justify allowing a second mid-major conference team into a BCS bowl over an AQ conference runner-up. This difficulty in "justifying" both non-automatic qualifying teams going to BCS bowls led a number of BCS critics to point to this situation as being reflective of the arrogance and assumption of higher quality of the AQ conferences that is not borne out by any statistics or their win-loss records, but rather is based on past records and reputations. Utah qualified automatically as the highest ranked (in the top 12) non-AQ conference champion and defeated Alabama in the Sugar Bowl. No. 9 Boise State and No. 11 TCU were matched up in the Poinsettia Bowl, marking the first time in history that a bowl featured two teams from non-AQ conferences ranked higher than both participants in a BCS bowl game in the same season (the Orange Bowl matched No. 12 Cincinnati and No. 19 Virginia Tech). TCU defeated Boise State 17–16, and Utah won the Sugar Bowl to finish as the nation's only undefeated team and ranked No. 2 in the AP poll.
After the season, the Mountain West Conference made a proposal at the BCS commissioners' annual spring meetings that a selection committee replace the polls and computers, an eight-team playoff system put in place, and changes to the automatic qualifier rules. On June 24, 2009, the BCS presidential oversight committee rejected the plan.
2009–10 season
By mid-October, it was clear that Florida and Alabama would face off in the 2009 SEC Championship Game, and the winner would play in the BCS title game. It was also generally believed that Texas would get the other spot if it won the 2009 Big 12 Championship Game, despite concerns about a weak non-conference schedule and a surprising lack of quality teams in the Big 12. Ultimately, in a repeat of the 2004–05 season, five teams finished the season undefeated—Alabama, Texas, Cincinnati, TCU, and Boise State. Going into the final weekend of the regular season, it was already certain that at least two teams would finish undefeated due to the SEC title game matchup between Alabama and Florida, as well as TCU having already completed an undefeated season.
Texas won the Big 12 title game, and with it a spot in the BCS title game, in controversial fashion. The game clock appeared to run out with Nebraska leading 12–10, but officials ruled that the time left on the clock was reviewable and ordered one second put back on the clock, allowing the Longhorns to kick a field goal for a 13–12 win, a result that left then-Nebraska coach Bo Pelini claiming it was part of a BCS conspiracy. Earlier, Alabama trounced Florida in the SEC title game to earn the other slot.
Boise State, Cincinnati and TCU all believed they had a chance at being in the championship game if Texas lost. However, despite a convincing season-opening win over eventual Pac-10 champion Oregon, Boise State's schedule was once again deemed too weak for a spot in the title game. TCU also thought it would have a shot, since by this time the Mountain West had been reckoned as the strongest non-AQ conference. Cincinnati, however, probably had the strongest claim of the three. Despite being ranked behind TCU going into championship weekend, the Bearcats were the undefeated champion of an AQ conference, rather than an at-large team like the Horned Frogs or Broncos. Indeed, any realistic chance of Boise State or TCU getting in the title game ended with Cincinnati's season-ending victory over Pittsburgh, which ensured that at least two teams from AQ conferences (Cincinnati and the SEC champion) would finish undefeated. Cincinnati passed TCU to finish 3rd in the final BCS standings, but with the margin as slim as it was and three of the six BCS computers having placed Texas behind Cincinnati but ahead of TCU, no conclusions can be drawn as to what might have happened if Texas had lost. Cincinnati was routed by Florida in the Sugar Bowl, 51-24, while Alabama won the national title over Texas, 37-21.
Non-AQ Bowl Selection Controversy
Unrelated to the title game was the controversy regarding the bowl selections. While Boise State, at No. 6, was able to earn an at-large berth, the announcement that it would be playing No. 4 TCU in the Fiesta Bowl caused a massive outcry and focused the controversy on the broader issue of truly fair access to bowl opportunities, rather than just appearances. Because the two "BCS Busters" would be matched up against each other, and would thereby be denied the opportunity to face a top team from one of the six BCS AQ conferences, instead replaying a non-BCS bowl from the previous year (see above), the BCS came off looking like "at best, a cowardly cartel". Placing two teams from non-AQ conferences in the same bowl also contradicted the previous assertion that non-AQ schools are less likely to receive at-large bids because the bowls prefer the superior drawing power of the big schools and their highly mobile fanbases—hence undefeated Boise State's omission from the BCS the previous year in favor of two-loss Ohio State. For this reason, some were calling this matchup the "Separate but Equal Bowl", or the "Fiasco Bowl."
The issue of far more consequence brought to the fore as a result of this game was that of access to equal and fair competition, the access to the chance to compete for and win the "Big Game" in the first place. There was a tremendous amount of criticism surrounding the 2010 Fiesta Bowl team pairing. Many argued that the BCS was terrified of a BCS non-AQ conference team defeating a BCS AQ conference team and bringing into question ever more starkly the entire premise of the BCS's existence, that teams from AQ conferences are somehow superior to non-AQ conference teams and are therefore more deserving to play for the "National Championship". A defeat of a top ranked AQ conference team would help affirm that this premise was false – as the impressive record of non-AQ teams in BCS Bowls (4–2 against BCS AQ teams) already hints at. Consequently, the BCS paired TCU and BSU together so that the possibility of an embarrassment of an AQ school, and by extension the entire system's validity, was eliminated. Boise State ended up beating TCU in the Fiesta Bowl, 17-10.
2010–11 season
During TCU's second undefeated regular season run in a row (their only loss being the 2010 Fiesta Bowl against Boise State), and while Boise State was still undefeated prior to losing to Nevada, E. Gordon Gee, the president of Ohio State and formerly president of two other BCS AQ conference schools, made public comments to the Associated Press stating that schools from BCS non-AQ conferences should not be allowed to compete for the BCS Championship. "I do know, having been both a Southeastern Conference president and a Big Ten president, that it's like murderer's row every week for these schools. We do not play the Little Sisters of the Poor. We play very fine schools on any given day. So I think until a university runs through that gauntlet that there's some reason to believe that they not be the best teams to [be] in the big ballgame." These comments sparked immediate criticism from commentators, coaches from non-AQ conferences and much of the general public.
Ironically, TCU went on to win the Rose Bowl over Wisconsin (who had defeated the Buckeyes earlier in the season), and billboards appeared in the Columbus area congratulating TCU on its win, signed by "The Little Sisters of the Poor" as a jibe at Dr. Gee's remarks. He nominally apologized for the remarks after the game, and later performed community service at a nursing home operated by a convent group known as the Little Sisters of the Poor, although he added that he had no idea the group existed when he made the comments.
The 2010 season found three teams, Oregon, Auburn and TCU all with undefeated records. The teams from the two automatic qualifying conferences, Oregon (Pac-10) and Auburn (SEC), were selected over the Horned Frogs for the 2011 National Championship game due to TCU's weak strength of schedule. Auburn defeated Oregon for the title, 22-19.
At this point, the controversy surrounding the BCS became a topic of conversation within the United States government. In 2008, U.S. Senator Orrin Hatch (R-Utah) had said that he would hold congressional hearings on the BCS after his Utah team failed to play in the national championship game. Following up on Senator Hatch's actions in the Senate, in April 2011 the Attorney General of Utah announced that he would be initiating a class-action antitrust lawsuit against the BCS, despite the fact that Utah was joining the Pacific-10 Conference, an automatic qualifying conference. In May 2011 the U.S. Justice Department sent a letter to the NCAA asking for a detailed explanation of why FBS football was the only NCAA sport in which the NCAA 1) did not have a playoff system in place to determine a champion and 2) had abrogated its responsibility to do so, ceding the authority to determine the NCAA champion to an outside group such as the BCS. The Justice Department's investigation and the Utah Attorney General's lawsuit are both aimed at forcing the BCS to open its books, which, as a non-profit, it is required to do every year but has never done, and at determining whether the BCS is an illegal trust or cartel under the Sherman Anti-Trust Act of 1890, the Clayton Anti-Trust Act of 1914 and the Robinson-Patman Anti-Price Discrimination Act. Two more state attorneys general are said to be considering joining the Utah lawsuit, and the Justice Department's investigation will probably include a minute and extensive examination of the Fiesta Bowl scandal as well as complete audits of the other BCS bowls, the BCS itself and possibly even the schools of the six BCS automatic qualification conferences.
The Fiesta Bowl scandal in particular was the catalyst that opened the BCS up to federal interest for the first time, largely because the government is concerned not only about the BCS's stifling of fair competition but, more importantly for the federal government, about the possibility of fraud and tax evasion if the BCS has violated the rules governing tax-exempt organizations and groups that control tax-exempt organizations. If the BCS bowls, which are each separate entities yet also part of the BCS as a whole, were to lose their tax-exempt status, they could be liable for back taxes totaling hundreds of millions of dollars. The Fiesta Bowl abuses – especially those regarding alleged illegal and improper political contributions, excessive executive compensation and unjustified reimbursement payments, and the making of excessive, interest-free and un-repaid loans – are precisely the types of abuses that would justify the Internal Revenue Service in stripping the BCS, each BCS bowl, and possibly even each AQ conference school (although that is highly unlikely) of their tax-exempt status. In the worst-case scenario the BCS could also be subject to forfeiture and seizure proceedings. While the worst penalties are unlikely to be enforced, even the milder ones, such as a determination that the BCS is a cartel or trust, would have devastating consequences for the BCS and the current system. The court could also order a remedy for the unfair competition inherent in the structure of the BCS, including ordering a playoff system and ordering the bowls to participate.
Despite Big Ten Commissioner Delany's assertion that if the BCS were to fold the conferences would "go back to the old system", if a court ordered a solution the conferences would have no choice in the matter and would be required – especially if a determination were made that the BCS is an illegal trust or cartel – to do whatever the court says, including submitting to federal oversight of the bowls' and bowl teams' finances and administration and conducting a 4-, 8- or 16-team playoff, or whatever other remedy the court ordered in its holding. The Department of Justice inquiry is far and away the most potentially dangerous legal situation that the BCS has faced to date.
In February 2012, former Fiesta Bowl chief executive John Junker pleaded guilty to one felony count of solicitation to commit a fraud scheme; he was later sentenced under the terms of his plea bargain. The plea dealt with the scheme in which the Fiesta Bowl solicited political donations to politicians from employees and then reimbursed them. Two people still with the Fiesta Bowl pleaded guilty to misdemeanor charges of making a prohibited campaign contribution, each paying fines and being placed on probation for one year.
On the field, for the first time, an ineligible-player situation contaminated two of the five BCS bowls in this season before they were played. In December 2010, five Ohio State players were implicated in an illegal-benefits scandal preceding the 2011 Sugar Bowl. Though the five players were suspended for five 2011-season games apiece, not only was Ohio State still allowed to play in the 2011 Sugar Bowl, but so were the five players. (This also resulted in Wisconsin playing in a Rose Bowl it otherwise would not have been allowed in, as Michigan State would have had the Big Ten's Rose Bowl berth had Ohio State been removed from the three-way tie that allowed Wisconsin to gain the berth.) After Ohio State defeated Arkansas, the scandal grew, including open deception by Ohio State coach Jim Tressel. As a result, Tressel was forced out and, on July 11, 2011, Ohio State vacated all of its wins in an effort to reduce its penalties.
2011–12 season
By late October, it was clear that the winner of the November 5 game between LSU and Alabama would win the SEC West title, and that team would get a spot in the BCS title game if it won the rest of its games and the 2011 SEC Championship Game. LSU defeated Alabama 9–6, putting it on the inside track for the championship game.
The identity of the other title game participant was less clear. Initially, Alabama's loss seemed to clear the way for Oklahoma State, which jumped to No. 2 in the BCS rankings. However, the Cowboys lost in double overtime at Iowa State on November 18, dropping them to fourth in the BCS rankings, while Alabama leaped to second. This raised the possibility of a rematch between the Tigers and Crimson Tide if both teams won out.
On the final weekend of the regular season, LSU routed Georgia 42-10 to win the SEC championship and clinch a berth in the national title game. A few hours later, Oklahoma State dismantled Oklahoma 44–10, in Stillwater, to win the Big 12 title, assuring it of no worse than a bid in the 2012 Fiesta Bowl (which hosts the Big 12 champion unless it finishes in the top two of the BCS rankings). A week earlier, Alabama finished its season with a 42–14 flogging of Auburn. While Oklahoma State had seemingly been eliminated from title contention two weeks earlier, the Cowboys reentered the discussion with their convincing defeat of the Sooners. Ultimately, Oklahoma State had the second-highest computer average, while Alabama finished second in both human polls. The Tide's human-poll lead over the Cowboys was large enough to place them second in the final BCS rankings by only .0086 of a point—the smallest margin between No. 2 and No. 3 in BCS history—sending them to the BCS title game against LSU and locking Oklahoma State into the Fiesta Bowl, in which they beat Stanford 41–38.
In the run-up to the title game, most AP Poll voters said that unless Alabama won impressively, they were at least willing to consider voting LSU as national champion even if Alabama won. At least three voters said they would definitely vote the Tigers No. 1 unless the Crimson Tide won decisively. This led to the possibility of a split national championship, as the Coaches Poll is contractually obligated to vote its national championship to the winner of the BCS title game. Ultimately, Alabama defeated LSU 21–0, and hours later was a near-unanimous choice as national champion, taking all but five first-place votes in the AP Poll. The "rematch" bowl increased calls for a requirement that any team qualifying for the national championship game must also have won its conference championship, either shared or outright. This idea was part of the discussions held after the end of the 2011 season as the BCS considered changes for the next BCS cycle and contract period.
Also for the first time, the idea of a "plus one" playoff was also discussed by the 13 conference athletic directors and Notre Dame's athletic director as they began to realize that the public opinion regarding a playoff had reached such a state that inaction might result in government action, based on the Sherman Anti-Trust Act.
The title game debate had a ripple effect on the Sugar Bowl. Normally, the Sugar Bowl gets the first pick of SEC teams. However, with LSU's and Alabama's selections to the title game, no other SEC teams were eligible for BCS bids. Ultimately, the Sugar Bowl selected No. 11 Virginia Tech and No. 13 Michigan, bypassing No. 7 Boise State, No. 8 Kansas State and No. 12 Baylor. The selection of Virginia Tech drew particular ire, since the Hokies had gone 1–2 against ranked teams, with the two losses coming by a combined 48 points, including a 38–10 rout at the hands of Clemson in the ACC Championship Game. Additionally, Michigan had only barely qualified for a BCS bid, finishing just two spots above the cutoff for a team from an AQ conference to get a bid without winning its conference. By at least one account, it was the lowest-ranked at-large team from an AQ conference to receive a bid in BCS history, with only Illinois in 2007 equaling the No. 13 spot (and Illinois at least had the "second-place clause" in its favor, as it was invited to the Rose Bowl with top-ranked Big Ten champion Ohio State headed for the National Championship Game). Michigan ended up beating Virginia Tech 23–20 in overtime.
Notably, this season marked the first since 2005 that no non-AQ teams were selected. Boise State was fifth in the initial BCS rankings, but its cause was significantly hobbled when it lost to TCU 36–35 on November 12, effectively handing the Mountain West title to the Horned Frogs. Houston appeared well on its way to a bid after an undefeated regular season placed them sixth in the next-to-last BCS rankings. However, the Cougars lost in the 2011 Conference USA Football Championship Game to Southern Miss. This left Boise State and TCU as the only non-AQ teams in serious contention for a bid. However, TCU's chances for a bid ended when they finished 18th in the final BCS rankings, and accepted an invitation to the Poinsettia Bowl against WAC champion Louisiana Tech Bulldogs, whom they defeated. There continued to be confusion and speculation in the press, however, about how TCU's BCS ranking was actually computed.
2012–13 season
Three bowl-eligible teams went into the weekend of November 17 still undefeated: Kansas State, Oregon, and Notre Dame (Ohio State was also undefeated, but was ineligible for the postseason due to NCAA penalties). Kansas State and Oregon's human-poll leads over Notre Dame were large enough that it would be very difficult for the Fighting Irish to overtake the Ducks and Wildcats if all three won out. However, Kansas State was routed by Baylor 52-24, while Oregon was upended by Stanford 17-14 in overtime. Hours earlier, Notre Dame defeated Wake Forest 38-0. When the BCS rankings were released the next day, Notre Dame vaulted to the top spot in the BCS standings, and locked up a berth in the national championship game a week later with a season-ending win over USC.
Those same rankings put Alabama at No. 2 behind Notre Dame, with Georgia close behind at No. 3. Both teams had already clinched berths in the 2012 SEC Championship Game. Once again, the SEC Championship Game became a de facto semifinal game for a national championship berth. With a thrilling come from behind victory, Alabama won 32-28, and easily defeated Notre Dame in the title game, 42-14.
Going into the final week of the season, three non-AQ teams were in contention for a BCS bid: Kent State, Northern Illinois and Boise State, ranked 17th, 20th and 21st, respectively. Northern Illinois defeated Kent State 44–37 in the MAC Championship Game, while Boise State closed out its season with a 27–21 win over Colorado State. The final BCS standings had Northern Illinois at 15th. Under BCS rules, a non-AQ team must finish 16th or higher in the BCS rankings and be ranked higher than at least one AQ champion to get a BCS berth. Since the Huskies were ranked ahead of two AQ conference champions — Big East champion Louisville (21st) and Big Ten champion Wisconsin (unranked) — this was enough to give them a berth, making Northern Illinois the first and only Mid-American Conference team ever to participate in a BCS game.
The inclusion of the Huskies over a higher-profile team from an AQ conference was criticized by analysts, most notably ESPN's Jesse Palmer, David Pollack and Kirk Herbstreit, who claimed Northern Illinois had not played a legitimate schedule. However, computer rankings showed that Northern Illinois had a stronger schedule than Boise State, as the weakness of the Mountain West due to the departures of TCU, BYU and Utah resulted in the Broncos having the lowest computer-ranking percentage of any team in the BCS standings. The Huskies earned a bid to the Orange Bowl, where they lost to Florida State, 31-10.
2013–14 season
Perhaps fittingly, the final year of the BCS produced no controversy at season's end. In mid-November, Alabama, Florida State, Ohio State and Baylor were all undefeated and ranked No. 1 through No. 4, respectively, with Alabama and Florida State holding the top two spots in the BCS poll. Baylor's national title hopes effectively ended on November 24 with a 49–17 thumping by Oklahoma State, and a week later, Alabama's hopes for a third straight title ended when it was upset by Auburn 34–28. When the BCS rankings were updated on December 1, Florida State moved up to No. 1, while Ohio State moved to second and Auburn jumped to third.
Any debate regarding the title game matchup ended when Ohio State lost 34-24 to Michigan State in the Big Ten Championship Game, while Auburn defeated Missouri 59-42 to win the SEC Championship. Auburn vaulted to second in the final rankings, setting up a title game matchup with Florida State. The Seminoles won the last ever BCS National Championship with a 34-31 win over the Tigers, thanks to a touchdown with 11 seconds remaining in the game.
Continuing its pattern of bypassing higher-ranked teams, the Sugar Bowl committee chose No. 11 Oklahoma over No. 10 Oregon to play against No. 3 Alabama. Both teams won their bowl games convincingly, with Oklahoma defeating Alabama 45–31 and Oregon dominating Texas 30–7 in the Alamo Bowl; as a result, no clear argument could be made for either selection.
Support
While there is substantial criticism aimed at the BCS system from coaches, media and fans alike, there is also some support for the system. Supporters claim there are several key advantages that the BCS has over a playoff system. Under the BCS, a single defeat is extremely detrimental to a team's prospects for a national championship. Supporters contend that this creates a substantial incentive for teams to do their best to win every game. Under a playoff system, front-running teams could be in a position of safety at the end of the regular season and could pull or greatly reduce their use of top players in order to protect them from injuries or give them recovery time (this happens frequently in the NFL). This is very unlikely to happen in the BCS system where a team in the running for a No. 1 or No. 2 ranking at the end of the year would be nearly certain to be punished in the polls enough for a loss that the team would be eliminated from contention.
Supporters also note that for all the controversy the BCS generates about which two teams are the best in the nation, it does ensure that when there is a clear-cut top two, the national championship will be decided on the field. For example, Miami (FL) and Ohio State in 2002 were the only undefeated teams in the nation; both teams had only a couple of close contests. Under the BCS system, these two teams got to play for the championship. Before the advent of the BCS, they would have never met on the field since Ohio State would have been contractually obligated to play in the Rose Bowl. Had they both won, there would have likely been a split national championship.
The NCAA, the governing organization of all collegiate sports, has no official process for determining its FBS (Div. 1-A) champion. Instead, FBS champions are chosen by what the NCAA calls in its official list of champions "selecting organizations".
In 1997, pursuant to a legally binding contract that is now being examined by the United States Department of Justice in the early stages of an investigation into whether the BCS is an illegal trust, all 119 (now 125) FBS universities chose the BCS as their sanctioned selecting organization. The legality of the underlying contracts that bind the schools and bowls to the BCS is now under considerable government and media scrutiny. Under the current, legally questionable contracts, the BCS:
"...is managed by the commissioners of the 11 NCAA Division I-A conferences, the director of athletics at the University of Notre Dame, and representatives of the bowl organizations.
"...is a five-game arrangement for post-season college football that is designed to match the two top-rated teams in a national championship game and to create exciting and competitive matchups between eight other highly regarded teams in four other games".
This contract has no effect on any other selecting organization; it operates only on its signatories—the member universities of the FBS. Fans or media might argue, opine and arrive at differing results from those of the BCS, but the universities (teams) are bound by the latter's processes.
Still, some proponents of the BCS recognize the inconsistency that the system offers. An article on BCSfootball.org titled "Playoff Smayoff! We Don't Need It" openly states, "...trust the process and we will get it right 80 percent of the time." As one sports writer argued, "Is it too much to ask for a system that gets it right every time?" instead of getting it right four out of five times? FBS football is the only sport in which the NCAA has not mandated a specific bracketed playoff system, with even Division I FCS conducting a playoff every year.
See also
College football playoff debate
College Football Playoff, the successor system to the Bowl Championship Series used to determine the NCAA Division I Football Bowl Subdivision national champion
References
External links
Congress to look into 'deeply flawed' BCS system
Further reading
Bowl Championship Series
College football controversies
Wake-on-LAN

Wake-on-LAN (WoL or WOL) is an Ethernet or Token Ring computer networking standard that allows a computer to be turned on or awakened by a network message.
The message is usually sent to the target computer by a program executed on a device connected to the same local area network. It is also possible to initiate the message from another network by using subnet directed broadcasts or a WoL gateway service.
Equivalent terms include wake on WAN, remote wake-up, power on by LAN, power up by LAN, resume by LAN, resume on LAN and wake up on LAN. If the computer being awakened is communicating via Wi-Fi, a supplementary standard called Wake on Wireless LAN (WoWLAN) must be employed.
The WoL and WoWLAN standards are often supplemented by vendors to provide protocol-transparent on-demand services, for example in the Apple Bonjour wake-on-demand (Sleep Proxy) feature.
History
In October 1996, Intel and IBM formed the Advanced Manageability Alliance (AMA). In April 1997, this alliance introduced the Wake-on-LAN technology.
Principle of operation
Ethernet connections, including home and work networks, wireless data networks and the Internet itself, are based on frames sent between computers. WoL is implemented using a specially designed frame called a magic packet, which is sent to all computers in a network, among them the computer to be awakened. The magic packet contains the MAC address of the destination computer, an identifying number built into each network interface card ("NIC") or other Ethernet device in a computer that enables it to be uniquely recognized and addressed on a network. Powered-down or turned-off computers capable of Wake-on-LAN contain network devices able to "listen" to incoming packets in low-power mode while the system is powered down. If a magic packet is received that is directed to the device's MAC address, the NIC signals the computer's power supply or motherboard to initiate system wake-up, in the same way that pressing the power button would do.
The magic packet is sent on the data link layer (layer 2 in the OSI model) and when sent, is broadcast to all attached devices on a given network, using the network broadcast address; the IP-address (layer 3 in the OSI model) is not used.
Because Wake-on-LAN is built upon broadcast technology, it can generally only be used within the current network subnet. There are some exceptions, though, and Wake-on-LAN can operate across any network in practice, given appropriate configuration and hardware, including remote wake-up across the Internet.
In order for Wake-on-LAN to work, parts of the network interface need to stay on. This consumes a small amount of standby power, much less than normal operating power. The link speed is usually reduced to the lowest possible speed to not waste power (e.g. a Gigabit Ethernet NIC maintains only a 10 Mbit/s link). Disabling wake-on-LAN when not needed can very slightly reduce power consumption on computers that are switched off but still plugged into a power socket. The power drain becomes a consideration on battery powered devices such as laptops as this can deplete the battery even when the device is completely shut down.
Magic packet
The magic packet is a frame that is most often sent as a broadcast and that contains anywhere within its payload 6 bytes of all 255 (FF FF FF FF FF FF in hexadecimal), followed by sixteen repetitions of the target computer's 48-bit MAC address, for a total of 102 bytes.
Since the magic packet is only scanned for the string above, and not actually parsed by a full protocol stack, it could be sent as payload of any network- and transport-layer protocol, although it is typically sent as a UDP datagram to port 0 (reserved port number), 7 (Echo Protocol) or 9 (Discard Protocol), or directly over Ethernet as EtherType 0x0842. A connection-oriented transport-layer protocol like TCP is less suited for this task as it requires establishing an active connection before sending user data.
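To make the layout concrete, the 102-byte packet described above can be built and sent in a few lines. The sketch below is illustrative rather than tied to any particular tool; the MAC address, broadcast address and port shown are placeholders to adjust for a real network.

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    """Build a WoL magic packet: 6 bytes of 0xFF followed by
    sixteen repetitions of the target's 48-bit MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be exactly 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255",
                      port: int = 9) -> None:
    """Send the packet as a UDP datagram to the broadcast address.
    Port 9 (Discard Protocol) is a conventional choice; the NIC only
    scans the payload, so the port number is otherwise irrelevant."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(make_magic_packet(mac), (broadcast, port))

# The packet itself is always 6 + 16 * 6 = 102 bytes.
packet = make_magic_packet("00:11:22:33:44:55")
```

Because the NIC merely scans for the byte pattern, the same payload could equally be carried directly in an Ethernet frame with EtherType 0x0842 rather than in UDP.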
A standard magic packet has the following basic limitations:
Requires destination computer MAC address (also may require a SecureOn password)
Does not provide a delivery confirmation
May not work outside of the local network
Requires hardware support of Wake-on-LAN on destination computer
Most 802.11 wireless interfaces do not maintain a link in low power states and cannot receive a magic packet
The Wake-on-LAN implementation is designed to be very simple and to be quickly processed by the circuitry present on the network interface card with minimal power requirement. Because Wake-on-LAN operates below the IP protocol layer, IP addresses and DNS names are meaningless and so the MAC address is required.
Subnet directed broadcasts
A principal limitation of standard broadcast wake-on-LAN is that broadcast packets are generally not routed. This prevents the technique being used in larger networks or over the Internet. Subnet directed broadcasts (SDB) may be used to overcome this limitation. SDB may require changes to intermediate router configuration. Subnet directed broadcasts are treated like unicast network packets until processed by the final (local) router. This router then broadcasts the packet using a layer-2 broadcast. This technique allows a broadcast to be initiated on a remote network but requires all intervening routers to forward the SDB. When preparing a network to forward SDB packets, care must be taken to filter packets so that only desired (e.g. WoL) SDB packets are permitted; otherwise the network may become a participant in DDoS attacks such as the Smurf attack.
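For example, the unicast-routable destination for an SDB wake-up is simply the remote subnet's directed broadcast address, which can be derived from its prefix. The prefix below is from the RFC 5737 documentation range, used purely as an illustration:

```python
import ipaddress

# Remote subnet containing the sleeping machine (example prefix).
subnet = ipaddress.ip_network("192.0.2.0/24")

# The magic packet is sent as ordinary routed UDP to this address;
# the last-hop router (if configured to forward SDB) then rebroadcasts
# it on the local segment at layer 2.
target = subnet.broadcast_address
print(target)  # 192.0.2.255
```

Whether the wake-up actually works then depends entirely on the intervening routers being willing to forward and rebroadcast the directed broadcast, as described above.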
Troubleshooting magic packets
Wake-on-LAN can be a difficult technology to implement, because it requires appropriate BIOS/UEFI, network card and, sometimes, operating system and router support to function reliably. In some cases, hardware may wake from one low power state but not from others. This means that due to hardware issues the computer may be waking up from the "soft off state" (S5) but doesn't wake from sleep or hibernation or vice versa. Also, it is not always clear what kind of magic packet a NIC expects to see.
In that case, software tools like a packet analyzer can help with Wake-on-LAN troubleshooting as they allow confirming (while the PC is still on) that the magic packet is indeed visible to a particular computer's NIC. The same magic packet can then be used to find out if the computer powers up from an offline state. This allows networking issues to be isolated from other hardware issues. In some cases they also confirm that the packet was destined for a specific PC or sent to a broadcast address and they can additionally show the packet's internals.
Starting with Windows Vista, the operating system logs all wake sources in the "System" event log. The Event Viewer and the powercfg.exe /lastwake command can retrieve them.
Security considerations
Unauthorized access
Magic packets are sent via the data link or OSI-2 layer, which can be used or abused by anyone on the same LAN, unless the L2 LAN equipment is capable of (and configured for) filtering such traffic to match site-wide security requirements.
Firewalls may be used to prevent clients among the public WAN from accessing the broadcast addresses of inside LAN segments, or routers may be configured to ignore subnet-directed broadcasts (see above).
Certain NICs support a security feature called "SecureOn". It allows users to store within the NIC a hexadecimal password of 6 bytes. Clients have to append this password to the magic packet. The NIC wakes the system only if the MAC address and password are correct. This security measure significantly decreases the risk of successful brute-force attacks by increasing the search space by 48 bits (6 bytes), up to 2⁹⁶ combinations if the MAC address is entirely unknown. However, any network eavesdropping will expose the cleartext password. Still, only a few NIC and router manufacturers support such security features.
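Under this scheme, a SecureOn packet is just the standard 102-byte magic packet with the 6-byte password appended, for 108 bytes in total. A minimal sketch (the MAC and password values here are arbitrary illustrations):

```python
def make_secureon_packet(mac: str, password: str) -> bytes:
    """Standard magic packet (6 x 0xFF + 16 x MAC) followed by the
    6-byte SecureOn password, 108 bytes in total."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    pw_bytes = bytes.fromhex(password.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6 or len(pw_bytes) != 6:
        raise ValueError("MAC and SecureOn password must each be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16 + pw_bytes

pkt = make_secureon_packet("00:11:22:33:44:55", "de:ad:be:ef:00:01")
```

Since the password travels in cleartext, this only raises the cost of blind guessing, not of eavesdropping.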
Abuse of the Wake-on-LAN feature only allows computers to be switched on; it does not in itself bypass password and other forms of security, and is unable to power off the machine once on. However, many client computers attempt booting from a PXE server when powered up by WoL. Therefore, a combination of DHCP and PXE servers on the network can sometimes be used to start a computer with an attacker's boot image, bypassing any security of the installed operating system and granting access to unprotected, local disks over the network.
Interactions with network access control
The use of Wake-on-LAN technology on enterprise networks can sometimes conflict with network access control solutions such as 802.1X or MAC-based authentication, which may prevent magic packet delivery if a machine's WoL hardware has not been designed to maintain a live authentication session while in a sleep state. Configuration of these two features in tandem often requires tuning of timing parameters and thorough testing.
Data privacy
Some PCs include technology built into the chipset to improve security for Wake-on-LAN. For example, Intel AMT (a component of Intel vPro technology), includes Transport Layer Security (TLS), an industry-standard protocol that strengthens encryption.
AMT uses TLS encryption to secure an out-of-band communication tunnel to an AMT-based PC for remote management commands such as Wake-on-LAN. AMT secures the communication tunnel with Advanced Encryption Standard (AES) 128-bit encryption and RSA keys with modulus lengths of 2,048 bits. Because the encrypted communication is out-of-band, the PC's hardware and firmware receive the magic packet before network traffic reaches the software stack for the operating system (OS). Since the encrypted communication occurs "below" the OS level, it is less vulnerable to attacks by viruses, worms, and other threats that typically target the OS level.
IT shops using Wake-on-LAN through the Intel AMT implementation can wake an AMT PC over network environments that require TLS-based security, such as IEEE 802.1X, Cisco Self Defending Network (SDN), and Microsoft Network Access Protection (NAP) environments. The Intel implementation also works for wireless networks.
Hardware requirements
Wake-on-LAN support is implemented on the motherboard of a computer and the network interface card, and is consequently not dependent on the operating system running on the hardware. Some operating systems can control Wake-on-LAN behaviour via NIC drivers. With older motherboards, if the network interface is a plug-in card rather than being integrated into the motherboard, the card may need to be connected to the motherboard by an additional cable. Motherboards with an embedded Ethernet controller which supports Wake-on-LAN do not need a cable. The power supply must meet ATX 2.01 specifications.
Hardware implementations
Older motherboards must have a WAKEUP-LINK header onboard connected to the network card via a special 3-pin cable; however, systems supporting the PCI 2.2 standard and with a PCI 2.2 compliant network adapter card do not usually require a Wake-on-LAN cable as the required standby power is relayed through the PCI bus.
PCI version 2.2 supports PME (Power Management Events). PCI cards send and receive PME signals via the PCI socket directly, without the need for a Wake-on-LAN cable.
Wake-on-LAN usually needs to be enabled in the Power Management section of a PC motherboard's BIOS/UEFI setup utility, although on some systems, such as Apple computers, it is enabled by default. On older systems the BIOS/UEFI setting may be referred to as WoL; on newer systems supporting PCI version 2.2, it may be referred to as PME (Power Management Events, which include WoL). It may also be necessary to configure the computer to reserve standby power for the network card when the system is shut down.
In addition, in order to get Wake-on-LAN to work, enabling this feature on the network interface card or on-board silicon is sometimes required. Details of how to do this depend upon the operating system and the device driver.
Laptops powered by the Intel Centrino Processor Technology or newer (with explicit BIOS/UEFI support) allow waking up the machine using wireless Wake on Wireless LAN (WoWLAN).
In most modern PCs, ACPI is notified of the "waking up" and takes control of the power-up. In ACPI, OSPM must record the "wake source", the device that is causing the power-up: the "soft" power switch, the NIC (via Wake-on-LAN), the cover being opened, a temperature change, etc.
The 3-pin WoL interface on the motherboard consists of pin-1 +5V DC (red), pin-2 ground (black), and pin-3 wake signal (green or yellow). Supplying pin-3 with +5V DC triggers the computer to power up, provided WoL is enabled in the BIOS/UEFI configuration.
Software requirements
Software which sends a WoL magic packet is referred to in different circles as both a "client" and a "server", which can be a source of confusion. While WoL hardware or firmware is arguably performing the role of a "server", web-based interfaces which act as a gateway through which users can issue WoL packets without downloading a local client often become known to users as "the Wake-on-LAN server". Additionally, software that administers WoL capabilities from the host OS side is sometimes loosely referred to as a "client", and machines running WoL generally tend to be end-user desktops and, as such, are "clients" in modern IT parlance.
Creating and sending the magic packet
Sending a magic packet requires knowledge of the target computer's MAC address. Software to send WoL magic packets is available for all modern platforms, including Windows, Macintosh and Linux, plus many smartphones. Examples include: Wake On LAN GUI, LAN Helper, Magic Packet Utility, NetWaker for Windows, Nirsoft WakeMeOnLAN, WakeOnLANx, EMCO WOL, Aquila Tech Wake on LAN, ManageEngine WOL utility, FusionFenix and SolarWinds WOL Tool. There are also web sites that allow a Magic Packet to be sent online without charge. Example source code for a developer to add Wake-on-LAN to a program is readily available in many computer languages.
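The magic packet itself has a simple, fixed layout: six bytes of 0xFF followed by the target's 48-bit MAC address repeated sixteen times, conventionally sent as a UDP broadcast to port 7 or 9. A minimal sketch in Python (the MAC address in the usage comment is a placeholder, not a real device):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """Magic packet: 6 bytes of 0xFF followed by the target MAC repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet over UDP; port 9 (Discard Protocol) is the common convention."""
    packet = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Example usage (placeholder MAC address):
# send_magic_packet("01:23:45:67:89:ab")
```

Because the NIC only inspects the payload for the 0xFF header and its own MAC, the datagram's port and protocol are largely arbitrary; UDP broadcast is used simply because it reaches a sleeping host that holds no IP lease.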
Ensuring the magic packet travels from source to destination
If the sender is on the same subnet (local network, aka LAN) as the computer to be awakened, there are generally no issues. When sending over the Internet, and in particular where a NAT (Network Address Translator) router, as typically deployed in most homes, is involved, special configuration is often required. For example, the router needs to assign a dedicated IP address to the computer to be controlled (aka a DHCP reservation). Also, since the controlled computer is "sleeping" except for standby power to part of its LAN card, it will typically not be registered at the router as having an active IP lease.
Further, the WoL protocol operates on a "deeper level" in the multi-layer networking architecture. To ensure the magic packet gets from source to destination while the destination is sleeping, the ARP Binding (also known as IP & MAC binding) must typically be set in a NAT router. This allows the router to forward the magic packet to the sleeping computer's MAC adapter at a networking layer below typical IP usage. In the NAT router, ARP binding requires just a dedicated IP number and the MAC address of the destination computer. There are some security implications associated with ARP binding (see ARP spoofing); however, as long as none of the computers connected to the LAN are compromised, an attacker must use a computer that is connected directly to the target LAN (plugged into the LAN via cable, or by breaking through the Wi‑Fi connection security to gain access to the LAN).
Most home routers are able to send magic packets to the LAN; for example, routers running the DD-WRT, Tomato or pfSense firmware have a built-in Wake-on-LAN client. OpenWrt supports both Linux WoL implementations, etherwake and wol.
Responding to the magic packet
Most WoL hardware functionality is blocked by default and needs to be enabled using the system BIOS/UEFI. Further configuration from the OS is required in some cases, for example via the network card properties in Device Manager on Windows operating systems.
Microsoft Windows
Newer versions of Microsoft Windows integrate WoL functionality into the Device Manager. This is available in the Power Management tab of each network device's driver properties. For full support of a device's WoL capabilities (such as the ability to wake from an ACPI S5 power off state), installation of the full driver suite from the network device manufacturer may be necessary, rather than the bare driver provided by Microsoft or the computer manufacturer. In most cases correct BIOS/UEFI configuration is also required for WoL to function.
The ability to wake from a hybrid shutdown state (S4) (aka Fast Startup) or a soft powered-off state (S5) is unsupported in Windows 8 and above, and Windows Server 2012 and above. This is because of a change in the OS behavior which causes network adapters to be explicitly not armed for WoL when shutdown to these states occurs. WOL from a non-hybrid hibernation state (S4) (i.e. when a user explicitly requests hibernation) or a sleep state (S3) is supported. However, some hardware will enable WoL from states that are unsupported by Windows.
Mac hardware (OS X)
Modern Mac hardware supports WoL functionality when the computer is in a sleep state, but it is not possible to wake up a Mac computer from a powered-off state.
The feature is controlled via the OS X System Preferences Energy Saver panel, in the Options tab. Marking the Wake for network access checkbox enables Wake-on-LAN.
Apple's Apple Remote Desktop client management system can be used to send Wake-on-LAN packets, but there are also freeware and shareware Mac OS X applications available.
On Mac OS X Snow Leopard and later, the service is called Wake on Demand or Bonjour Sleep Proxy and is synonymous with the Sleep Proxy Service. It comes enabled out of the box, but in previous versions of the operating system, the service needs to be enabled under the Energy Saver pane of System Preferences. The network interface card may allow the service to function only on Wi‑Fi, only on Ethernet, or both.
Linux
Wake-on-LAN support may be changed using a subfunction of the ethtool command.
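For example, a typical sequence might look like the following sketch (the interface name eth0 is a placeholder; real interface names vary, and the commands require appropriate privileges):

```shell
# Query the adapter's WoL capabilities and current setting:
# "Supports Wake-on" lists available modes, and "Wake-on: g" means
# wake on magic packet is enabled ("d" means disabled).
ethtool eth0 | grep -i wake

# Enable waking on magic packet (the "g" mode). This setting is often
# reset at reboot, so distributions persist it via a udev rule, a
# systemd unit, or the network interface configuration.
ethtool -s eth0 wol g
```
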
Other machine states and LAN wakeup signals
In the early days of Wake-on-LAN the situation was relatively simple: a machine was connected to power but switched off, and it was arranged that a special packet be sent to switch the machine on.
Since then many options have been added and standards agreed upon. A machine can be in seven power states from S0 (fully on) through S5 (powered down but plugged in) and disconnected from power (G3, Mechanical Off), with names such as "sleep", "standby", and "hibernate". In some reduced-power modes the system state is stored in RAM and the machine can wake up very quickly; in others the state is saved to disk and the motherboard powered down, taking at least several seconds to wake up. The machine can be awakened from a reduced-power state by a variety of signals.
The machine's BIOS/UEFI must be set to allow Wake-on-LAN. To allow wakeup from powered-down state S5, wakeup on PME (Power Management Event) is also required. The Intel adapter allows "Wake on Directed Packet", "Wake on Magic Packet", "Wake on Magic Packet from power off state", and "Wake on Link". Wake on Directed Packet is particularly useful as the machine will automatically come out of standby or hibernation when it is referenced, without the user or application needing to explicitly send a magic packet. Unfortunately in many networks waking on directed packet (any packet with the adapter's MAC address or IP address) or on link is likely to cause wakeup immediately after going to a low-power state. Details for any particular motherboard and network adapter are to be found in the relevant manuals; there is no general method. Knowledge of signals on the network may also be needed to prevent spurious wakening.
Unattended operation
For a machine which is normally unattended, precautions need to be taken to make the Wake-on-LAN function as reliable as possible. For a machine procured to work in this way, verifying Wake-on-LAN functionality is an important part of the purchase procedure.
Some machines do not support Wake-on-LAN after they have been disconnected from power (e.g., when power is restored after a power failure). Use of an uninterruptible power supply (UPS) will give protection against a short period without power, although the battery will discharge during a prolonged power-cut.
Waking up without operator presence
If a machine that is not designed to support Wake-on-LAN is left powered down after power failure, it may be possible to set the BIOS/UEFI to start it up automatically on restoration of power, so that it is never left in an unresponsive state. A typical BIOS/UEFI setting is AC back function which may be on, off, or memory. On is the correct setting in this case; memory, which restores the machine to the state it was in when power was lost, may leave a machine which was hibernating in an unwakeable state.
Other problems can affect the ability to start or control the machine remotely: hardware failure of the machine or network, failure of the BIOS/UEFI settings battery (the machine will halt when started before the network connection is made, displaying an error message and requiring a keypress), loss of control of the machine due to software problems (machine hang, termination of remote control or networking software, etc.), and virus infection or hard disk corruption. Therefore, the use of a reliable server-class machine with RAID drives, redundant power supplies, etc., will help to maximize availability. Additionally, a device which can switch the machine off and on again, controlled perhaps by a remote signal, can force a reboot which will clear problems due to misbehaving software.
For a machine not in constant use, energy can be conserved by putting the machine into low-power RAM standby after a short timeout period. If a connection delay of a minute or two is acceptable, the machine can timeout into hibernation, powered off with its state saved to disk.
Wake on Internet
The originator of the wakeup signal (magic packet) does not have to be on the same local area network (LAN) as the computer being woken. It can be sent from anywhere using:
A virtual private network (VPN), which makes the originator appear to be a member of the LAN.
The Internet with local broadcasting: some routers permit a packet received from the Internet to be broadcast to the entire LAN; the default TCP or UDP ports preconfigured to relay WoL requests are usually port 7 (Echo Protocol), port 9 (Discard Protocol), or both. This proxy setting must be enabled in the router, and port-forwarding rules may need to be configured in its embedded firewall to accept magic packets arriving from the Internet side on these restricted port numbers and to rebroadcast them on the local network (normally to the same ports and the same TCP or UDP protocol). Such routers may also be configurable to use different port numbers for this proxying service.
The Internet without local broadcasting: if (as is often the case) the firewall or router at the destination does not permit packets received from the Internet to be broadcast to the local network, Wake-on-Internet may still be achieved by sending the magic packet to a specified port of the destination's Internet address, having previously set the firewall or router to forward packets arriving at that port to the local IP address of the computer being woken. The router may require a reservation of the local IP address of the computer being woken in order to forward packets to it when it is not live.
See also
Alert on LAN
Alert Standard Format
Desktop and mobile Architecture for System Hardware
RTC Alarm
Wake-on-Ring: telephone line ring event
Conventional PCI pinout: Power Management Event (PME#) signal
Wired for Management
References
Computer-related introductions in 1997
Networking standards
BIOS
Unified Extensible Firmware Interface
Remote control
Ethernet
253655 | https://en.wikipedia.org/wiki/Sword-and-sandal | Sword-and-sandal | Sword-and-sandal, also known as peplum (plural: pepla), is a subgenre of largely Italian-made historical, mythological, or Biblical epics mostly set in the Greco-Roman or medieval period. These films attempted to emulate the big-budget Hollywood historical epics of the time, such as Ben-Hur, Cleopatra, Quo Vadis, The Robe, Spartacus, Samson and Delilah and The Ten Commandments. These films dominated the Italian film industry from 1958 until 1965, when they were eventually replaced by spaghetti Western and Eurospy films.
The term "peplum" (a Latin word referring to the Ancient Greek garment peplos), was introduced by French film critics in the 1960s. The terms "peplum" and "sword-and-sandal" were used in a condescending way by film critics. Later, the terms were embraced by fans of the films, similar to the terms "spaghetti Western" or "shoot-'em-ups". In their English versions, peplum films can be immediately differentiated from their Hollywood counterparts by their use of "clumsy and inadequate" English language dubbing. A 100-minute documentary on the history of Italy's peplum genre was produced and directed by Antonio Avati in 1977 entitled Kolossal: i magnifici Maciste (aka Kino Kolossal).
Italian epic films set in antiquity that were produced before the 1958 peplum wave proper, such as Fabiola (1949) and Ulysses (1954), have been called proto-peplum, and recent films set in such Greco-Roman times (made after the peplum wave ended in 1965) have been called neo-peplum.
Genre characteristics
Sword-and-sandal films are a specific class of Italian adventure films that have subjects set in Biblical or classical antiquity, often with plots based more or less loosely on Greco-Roman history or the other contemporary cultures of the time, such as the Egyptians, Assyrians, and Etruscans, as well as medieval times. Not all of the films were fantasy-based by any means. Many of the plots featured actual historical personalities such as Julius Caesar, Cleopatra, and Hannibal, although great liberties were taken with the storylines. Gladiators and slaves rebelling against tyrannical rulers, pirates and swashbucklers were also popular subjects.
As Robert Rushing defines it, peplum, "in its most stereotypical form, [...] depicts muscle-bound heroes (professional bodybuilders, athletes, wrestlers, or brawny actors) in mythological antiquity, fighting fantastic monsters and saving scantily clad beauties. Rather than lavish epics set in the classical world, they are low-budget films that focus on the hero's extraordinary body." Thus, most sword-and-sandal films featured a superhumanly strong man as the protagonist, such as Hercules, Samson, Goliath, Ursus or Italy's own popular folk hero Maciste. In addition, the plots typically involved two women vying for the affection of the bodybuilder hero: the good love interest (a damsel in distress needing rescue), and an evil femme fatale queen who sought to dominate the hero.
Also, the films typically featured an ambitious ruler who would ascend the throne by murdering those who stood in his path, and often it was only the muscular hero who could depose him. Thus, Maria Elena D'Amelio points out the hero's often political goal: "to restore a legitimate sovereign against an evil dictator."
Many of the peplum films involved a clash between two populations, one civilized and the other barbaric, which typically included a scene of a village or city being burned to the ground by invaders. For their musical content, most films contained a colorful dancing girls sequence, meant to underline pagan decadence.
Precursors of the sword-and-sandal wave (pre-1958)
Italian films of the silent era
Italian filmmakers paved the way for the peplum genre with some of the earliest silent films dealing with the subject, including the following:
The Sack of Rome (1905)
Agrippina (1911)
The Fall of Troy (1911)
The Queen of Nineveh (1911, directed by Luigi Maggi)
Brutus (1911)
Quo Vadis (1913, directed by Enrico Guazzoni)
Antony and Cleopatra (1913)
Cabiria (1914, directed by Giovanni Pastrone)
Julius Caesar (1914)
Saffo (Sappho, 1918, directed by Antonio Molinari)
The Crusaders (1918)
Fabiola (1918) directed by Enrico Guazzoni
Attila (1919, directed by F. Mari)
Venere (Venus, 1919, directed by Antonio Molinari)
Il mistero di Osiris (The Mystery of Osiris, 1919) directed by Antonio Molinari
Giuliano l'Apostata (1919, directed by Ugo Falena)
Giuditta e Oloferne (Judith and Holofernes, 1920) directed by Antonio Molinari
The Sack of Rome, (1920) directed by Enrico Guazzoni
Messalina, (1924) directed by Enrico Guazzoni
Gli ultimi giorni di Pompei (The Last Days of Pompeii, 1926) directed by Carmine Gallone and Amleto Palermi
The silent Maciste films (1914–1927)
The 1914 Italian silent film Cabiria was one of the first films set in antiquity to make use of a massively muscled character, Maciste (played by actor Bartolomeo Pagano), who served in this premiere film as the hero's slavishly loyal sidekick. Maciste became the public's favorite character in the film however, and Pagano was called back many times to reprise the role. The Maciste character appeared in at least two dozen Italian silent films from 1914 through 1926, all of which featured a protagonist named Maciste although the films were set in many different time periods and geographical locations.
Here is a complete list of the silent Maciste films in chronological order:
Cabiria (1914) introduced the Maciste character
Maciste (1915) "The Marvelous Maciste"
Maciste bersagliere ("Maciste the Ranger", 1916)
Maciste alpino ("Maciste The Warrior", 1916)
Maciste atleta ("Maciste the Athlete", 1917)
Maciste medium ("Maciste the Clairvoyant", 1917)
Maciste poliziotto ("Maciste the Detective", 1917)
Maciste turista ("Maciste the Tourist", 1917)
Maciste sonnambulo ("Maciste the Sleepwalker", 1918)
La Rivincita di Maciste ("The Revenge of Maciste", 1919)
Il Testamento di Maciste ("Maciste's Will", 1919)
Il Viaggio di Maciste ("Maciste's Journey", 1919)
Maciste I ("Maciste the First", 1919)
Maciste contro la morte ("Maciste vs Death", 1919)
Maciste innamorato ("Maciste in Love", 1919)
Maciste in vacanza ("Maciste on Vacation", 1920)
Maciste salvato dalle acque ("Maciste Rescued from the Waters", 1920)
Maciste e la figlia del re della plata ("Maciste and the Silver King's Daughter", 1922)
Maciste und die Japanerin ("Maciste and the Japanese", 1922)
Maciste contro Maciste ("Maciste vs. Maciste", 1923)
Maciste und die chinesische truhe ("Maciste and the Chinese Trunk", 1923)
Maciste e il nipote di America ("Maciste's American Nephew", 1924)
Maciste imperatore ("Emperor Maciste", 1924)
Maciste contro lo sceicco ("Maciste vs. the Sheik", 1925)
Maciste all'inferno ("Maciste in Hell", 1925)
Maciste nella gabbia dei leoni ("Maciste in the Lions' Den", 1926)
Il gigante delle Dolomiti ("The Giant from the Dolomite", released in 1927)
Italian fascist and post-war historical epics (1937-1956)
The Italian film industry released several historical films in the early sound era, such as the big-budget Scipione l'Africano (Scipio Africanus: The Defeat of Hannibal) in 1937. In 1949, the postwar Italian film industry remade Fabiola (which had been previously filmed twice in the silent era). The film was released in the United Kingdom and in the United States in 1951 in an edited, English-dubbed version. Fabiola was an Italian-French co-production like the following films The Last Days of Pompeii (1950) and Messalina (1951).
During the 1950s, a number of American historical epics shot in Italy were released. In 1951, MGM producer Sam Zimbalist cleverly used the lower production costs, use of frozen funds and the expertise of the Italian film industry to shoot the large-scale Technicolor epic Quo Vadis in Rome. In addition to its fictional account linking the Great Fire of Rome, the persecution of Christians in the Roman Empire and Emperor Nero, the film, following the novel Quo Vadis by the Polish writer Henryk Sienkiewicz, also featured a mighty protagonist named Ursus (Italian filmmakers later made several pepla in the 1960s exploiting the Ursus character). MGM also planned Ben-Hur to be filmed in Italy as early as 1952.
Riccardo Freda's Sins of Rome was filmed in 1953 and released by RKO in an edited, English-dubbed version the following year. Unlike Quo Vadis, there were no American actors or production crew. The Anthony Quinn film Attila (directed by Pietro Francisci in 1954), the Kirk Douglas epic Ulysses (co-directed by an uncredited Mario Bava in 1954) and Helen of Troy (directed by Robert Wise with Sergio Leone as an uncredited second unit director in 1955) were the first of the big peplum films of the 1950s. Riccardo Freda directed another peplum, Theodora, Slave Empress in 1954, starring his wife Gianna Maria Canale. Howard Hawks directed his Land of the Pharaohs (starring Joan Collins) in Italy and Egypt in 1955. Robert Rossen made his film Alexander the Great in Egypt in 1956, with a music score by famed Italian composer Mario Nascimbene.
The main sword-and-sandal period (1958-1965)
To cash in on the success of the Kirk Douglas film Ulysses, Pietro Francisci planned to make a film about Hercules, but searched unsuccessfully for years for a physically convincing yet experienced actor. His daughter spotted American bodybuilder Steve Reeves in the American film Athena and he was hired to play Hercules in 1957 when the film was made. (Reeves was paid $10,000 to star in the film).
The genre's instantaneous growth began with the U.S. theatrical release of Hercules in 1959. American producer Joseph E. Levine acquired the U.S. distribution rights for $120,000, spent $1 million promoting the film and made more than $5 million profit. This spawned the 1959 Steve Reeves sequel Hercules Unchained, the 1959 re-release of Cecil B. DeMille's Samson and Delilah (1949), and dozens of imitations that followed in their wake. Italian filmmakers resurrected their 1920s Maciste character in a brand new 1960s sound film series (1960–1964), followed rapidly by Ursus, Samson, Goliath and various other mighty-muscled heroes.
Almost all peplum films of this period featured bodybuilder stars, the most popular being Steve Reeves, Reg Park and Gordon Scott. Some of these stars, such as Mickey Hargitay, Reg Lewis, Mark Forest, Gordon Mitchell and Dan Vadis, had starred in Mae West's touring stage revue in the United States in the 1950s. Bodybuilders of Italian origin, on the other hand, would adopt English pseudonyms for the screen; thus, stuntman Sergio Ciani became Alan Steel, and ex-gondolier Adriano Bellini was called Kirk Morris.
To be sure, many of the films enjoyed widespread popularity among general audiences, and had production values that were typical for popular films of their day. Some films included frequent re-use of the impressive film sets that had been created for Ben-Hur and Cleopatra.
Although many of the bigger-budget pepla were released theatrically in the US, fourteen of them were released by Embassy Pictures directly to television in a syndicated package called The Sons of Hercules. Since few American viewers were familiar with Italian film heroes such as Maciste or Ursus, the characters were renamed and the films molded into a series of sorts by splicing on the same opening and closing theme song and newly designed voice-over narration that attempted to link the protagonist of each film to the Hercules mythos. These films ran on Saturday afternoons in the 1960s.
Peplum films were, and still are, often ridiculed for their low budgets and bad English dubbing. The contrived plots, poorly overdubbed dialogue, novice acting of the bodybuilder leads, and primitive special effects that were often inadequate to depict the mythological creatures on screen all conspire to give these films a certain camp appeal now. In the 1990s, several of them were the subjects of riffing and satire in the United States comedy series Mystery Science Theater 3000.
However, in the early 1960s, a group of French critics, mostly writing for the Cahiers du cinéma, such as Luc Moullet, started to celebrate the genre and some of its directors, including Vittorio Cottafavi, Riccardo Freda, Mario Bava, Pietro Francisci, Duccio Tessari, and Sergio Leone. Not only directors, but also some of the screenwriters, often put together in teams, worked past the typically formulaic plot structure to include a mixture of "bits of philosophical readings and scraps of psychoanalysis, reflections on the biggest political systems, the fate of the world and humanity, fatalistic notions of accepting the will of destiny and the gods, anthropocentric belief in the powers of the human physique, and brilliant syntheses of military treatises".
With reference to the genre's free use of ancient mythology and other influences, Italian director Vittorio Cottafavi, who directed a number of peplum films, used the term "neo-mythologism".
Hercules series (1958–1965)
A series of 19 Hercules movies were made in Italy in the late '50s and early '60s. The films were all sequels to the successful Steve Reeves peplum Hercules (1958), but with the exception of Hercules Unchained, each film was a stand-alone story not connected to the others. The actors who played Hercules in these films were Steve Reeves followed by Gordon Scott, Kirk Morris, Mickey Hargitay, Mark Forest, Alan Steel, Dan Vadis, Brad Harris, Reg Park, Peter Lupus (billed as Rock Stevens) and Mike Lane. In a 1997 interview, Reeves said he felt his two Hercules films could not be topped by another sequel, so he declined to do any more Hercules films.
The films are listed below by their American release titles, and the titles in parentheses are their original Italian titles with an approximate English translation. Dates shown are the original Italian theatrical release dates, not the U.S. release dates (which were years later in some cases).
Hercules (Le fatiche di Ercole / The Labors of Hercules, 1958) starring Steve Reeves
Hercules Unchained (Ercole e la regina di Lidia / Hercules and the Queen of Lydia, 1959) starring Steve Reeves
Goliath and the Dragon (La vendetta di Ercole / The Revenge of Hercules, 1960) starring Mark Forest as Hercules (Hercules' name was changed to Goliath when this film was dubbed in English and distributed in the U.S.)
Hercules Vs The Hydra (Gli amori di Ercole / The Loves of Hercules, 1960) co-starring Mickey Hargitay (as Hercules) and Jayne Mansfield
Hercules and the Captive Women (Ercole alla conquista di Atlantide / Hercules at the Conquest of Atlantis, 1961) starring Reg Park as Hercules (alternate U.S. title: Hercules and the Haunted Women)
Hercules in the Haunted World (Ercole al centro della terra / Hercules at the Center of the Earth, 1961) directed by Mario Bava, starring Reg Park as Hercules
Hercules in the Vale of Woe (Maciste contro Ercole nella valle dei guai / Maciste vs Hercules in the Vale of Woe) comedy starring Frank Gordon as Hercules, 1961
Ulysses Against the Son of Hercules (Ulisse contro Ercole / Ulysses vs. Hercules) starring Mike Lane as Hercules, 1962
The Fury of Hercules (La furia di Ercole / The Fury of Hercules) starring Brad Harris as Hercules, 1962 (alternate U.S. title: The Fury of Samson)
Hercules, Samson and Ulysses (Ercole sfida Sansone / Hercules Challenges Samson) starring Kirk Morris as Hercules, 1963
Hercules Against Moloch (Ercole contro Molock / Hercules vs. Molock) starring Gordon Scott as Hercules, 1963 (aka The Conquest of Mycenae)
Son of Hercules in the Land of Darkness (Ercole l'invincibile / Hercules the Invincible) starring Dan Vadis as Hercules, 1964 (this was originally a Hercules film that was re-titled for inclusion in the U.S. syndicated TV package The Sons of Hercules)
Hercules vs The Giant Warriors (il trionfo di Ercole / The Triumph of Hercules) starring Dan Vadis as Hercules, 1964 (alternate U.S. title: Hercules and the Ten Avengers)
Hercules Against Rome (Ercole contro Roma / Hercules vs. Rome) starring Alan Steel as Hercules, 1964
Hercules Against the Sons of the Sun (Ercole contro i figli del sole / Hercules vs. the Sons of the Sun) starring Mark Forest as Hercules, 1964
Samson and His Mighty Challenge (Ercole, Sansone, Maciste e Ursus: gli invincibili / Hercules, Samson, Maciste and Ursus: The Invincibles) starring Alan Steel as Hercules, 1964 (aka Combate dei Gigantes or Le Grand Defi)
Hercules and the Tyrants of Babylon (Ercole contro i tiranni di Babilonia / Hercules vs. the Tyrants of Babylon) starring Rock Stevens as Hercules, 1964
Hercules and the Princess of Troy (no Italian title) starring Gordon Scott as Hercules, 1965 (aka Hercules vs. the Sea Monster; this U.S./Italian co-production was made as a pilot for a Charles Band-produced TV series that never materialized, and it was later distributed as a feature film)
Hercules the Avenger (Sfida dei giganti / Challenge of the Giants) starring Reg Park as Hercules, 1965 (this film was composed mostly of re-edited footage from the two 1961 Reg Park Hercules films)
A number of English-dubbed Italian films that featured the word "Hercules" in the title were not made as Hercules movies originally, such as:
Hercules Against the Moon Men, Hercules Against the Barbarians, Hercules Against the Mongols and Hercules of the Desert were all originally Maciste films. (See "Maciste" section below)
Hercules and the Black Pirate and Hercules and the Treasure of the Incas were both re-titled Samson movies. (See "Samson" section below)
Hercules, Prisoner of Evil was actually a re-titled Ursus film. (See "Ursus" section below)
Hercules and the Masked Rider was actually a re-titled Goliath movie. (See "Goliath" section below)
None of these films in their original Italian versions involved the Hercules character in any way. Likewise, most of the Sons of Hercules movies shown on American TV in the 1960s had nothing to do with Hercules in their original Italian versions.
(see also The Three Stooges Meet Hercules (1962), an American-made genre parody starring peplum star Samson Burke as Hercules)
Goliath series (1959–1964)
The Italians used Goliath as the superhero protagonist in a series of adventure films (pepla) in the early 1960s. He was a man possessed of amazing strength, although he seemed to be a different person in each film. After the classic Hercules (1958) became a blockbuster sensation in the film industry, a 1959 Steve Reeves film Il terrore dei barbari (Terror of the Barbarians) was re-titled Goliath and the Barbarians in the U.S. The film was so successful at the box office, it inspired Italian filmmakers to do a series of four more films featuring a generic beefcake hero named Goliath, although the films were not related to each other in any way (the 1960 Italian peplum David and Goliath starring Orson Welles was not part of this series, since that movie was just a historical retelling of the Biblical story).
The titles in the Italian Goliath adventure series were as follows: (the first title listed for each film is the film's original Italian title along with its English translation, while the U.S. release title follows in bold type in parentheses)
Il terrore dei barbari / Terror of the Barbarians (1959) (Goliath and the Barbarians in the U.S.), starring Steve Reeves as Goliath (although he is referred to as "Emiliano" in the original Italian-language version)
Goliath contro i giganti / Goliath Against the Giants (Goliath Against the Giants) (1960) starring Brad Harris
Goliath e la schiava ribelle / Goliath and the Rebel Slave (Tyrant of Lydia Against The Son of Hercules) (1963) starring Gordon Scott
Golia e il cavaliere mascherato / Goliath and the Masked Rider (Hercules and the Masked Rider) (1964) starring Alan Steel (note: Goliath is referred to as "Hercules" in English-dubbed prints)
Golia alla conquista di Bagdad / Goliath at the Conquest of Baghdad (Goliath at the Conquest of Damascus, 1964) starring Rock Stevens (aka Peter Lupus)
The name Goliath was also inserted into the English titles of three other Italian pepla that were re-titled for U.S. distribution in an attempt to cash in on the Goliath craze, but these films were not originally made as "Goliath movies" in Italy.
Both Goliath and the Vampires (1961) and Goliath and the Sins of Babylon (1963) actually featured the famed Italian folk hero Maciste in the original Italian versions, but American distributors did not feel the name "Maciste" meant anything to American audiences.
Goliath and the Dragon (1960) was originally an Italian Hercules movie called The Revenge of Hercules, but it was re-titled Goliath and the Dragon in the U.S. since at the time Goliath and the Barbarians was breaking box-office records, and the distributors may have thought the name "Hercules" was trademarked by distributor Joseph E. Levine.
Maciste series (1960–1965)
There were a total of 25 Maciste films from the 1960s peplum craze (not counting the two dozen silent Maciste films made in Italy pre-1930). By 1960, seeing how well the two Steve Reeves Hercules films were doing at the box office, Italian producers decided to revive the 1920s silent film character Maciste in a new series of color/sound films. Unlike the other Italian peplum protagonists, Maciste found himself in a variety of time periods ranging from the Ice Age to 16th century Scotland. Maciste was never given an origin, and the source of his mighty powers was never revealed. However, in the first film of the 1960s series, he mentions to another character that the name "Maciste" means "born of the rock" (almost as if he was a god who would just appear out of the earth itself in times of need). One of the 1920s silent Maciste films was actually entitled "The Giant from the Dolomite", hinting that Maciste may be more god than man, which would explain his great strength.
The first title listed for each film is the film's original Italian title along with its English translation, while the U.S. release title follows in parentheses (note how many times Maciste's name in the Italian title is altered to an entirely different name in the American title):
Maciste nella valle dei re / Maciste in the Valley of the Kings (Son of Samson, 1960) a.k.a. Maciste the Mighty and Maciste the Giant, starring Mark Forest
Maciste nella terra dei ciclopi / Maciste in the Land of the Cyclops (Atlas in the Land of the Cyclops, 1961) starring Gordon Mitchell
Maciste contro il vampiro / Maciste Vs. the Vampire (Goliath and the Vampires, 1961) starring Gordon Scott
Il trionfo di Maciste / The Triumph of Maciste (Triumph of the Son of Hercules, 1961) starring Kirk Morris
Maciste alla corte del gran khan / Maciste at the Court of the Great Khan (Samson and the Seven Miracles of the World, 1961) starring Gordon Scott
Maciste, l'uomo più forte del mondo / Maciste, the Strongest Man in the World (Mole Men vs the Son of Hercules, 1961) starring Mark Forest
Maciste contro Ercole nella valle dei guai / Maciste Against Hercules in the Vale of Woe (Hercules in the Vale of Woe, 1961) starring Kirk Morris as Maciste; this was a satire/spoof featuring the comedy team of Franco and Ciccio
Totò contro Maciste / Totò vs. Maciste (no American title, 1962) starring Samson Burke; this was a comedy satirizing the peplum genre (part of the Italian "Totò" film series) and was never distributed in the U.S.; it is apparently not even available in English
Maciste all'inferno / Maciste in Hell (The Witch's Curse, 1962) starring Kirk Morris
Maciste contro lo sceicco / Maciste Against the Sheik (Samson Against the Sheik, 1962) starring Ed Fury
Maciste, il gladiatore piu forte del mondo / Maciste, the World's Strongest Gladiator (Colossus of the Arena, 1962) starring Mark Forest
Maciste contro i mostri / Maciste Against the Monsters (Fire Monsters Against the Son of Hercules, 1962) starring Reg Lewis
Maciste contro i cacciatori di teste / Maciste Against the Headhunters (Colossus and the Headhunters, 1962) a.k.a. Fury of the Headhunters, starring Kirk Morris
Maciste, l'eroe piu grande del mondo / Maciste, the World's Greatest Hero (Goliath and the Sins of Babylon, 1963) starring Mark Forest
Zorro contro Maciste / Zorro Against Maciste (Samson and the Slave Queen, 1963) starring Alan Steel
Maciste contro i mongoli / Maciste Against the Mongols (Hercules Against the Mongols, 1963) starring Mark Forest
Maciste nell'inferno di Gengis Khan / Maciste in Genghis Khan's Hell (Hercules Against the Barbarians, 1963) starring Mark Forest
Maciste alla corte dello zar / Maciste at the Court of the Czar (Atlas Against the Czar, 1964) a.k.a. Samson vs. the Giant King, starring Kirk Morris
Maciste, gladiatore di Sparta / Maciste, Gladiator of Sparta (Terror of Rome Against the Son of Hercules, 1964) starring Mark Forest
Maciste nelle miniere del re Salomone / Maciste in King Solomon's Mines (Samson in King Solomon's Mines, 1964) starring Reg Park
Maciste e la regina di Samar / Maciste and the Queen of Samar (Hercules Against the Moon Men, 1964) starring Alan Steel
La valle dell'eco tonante / Valley of the Thundering Echo (Hercules of the Desert, 1964) a.k.a. Desert Raiders; released in France as Maciste and the Women of the Valley; starring Kirk Morris
Ercole, Sansone, Maciste e Ursus: gli invincibili / Hercules, Samson, Maciste and Ursus: The Invincibles (Samson and His Mighty Challenge, 1964) starring Renato Rossini as Maciste (a.k.a. Combate dei Gigantes or Le Grand Défi)
Gli invincibili fratelli Maciste / The Invincible Maciste Brothers (The Invincible Brothers Maciste, 1964) a.k.a. The Invincible Gladiators, starring Richard Lloyd as Maciste
Maciste il Vendicatore dei Mayas / Maciste, Avenger of the Mayans (no American title, 1965) (Note: this Maciste film was made up almost entirely of re-edited stock footage from two older Maciste films, Maciste contro i mostri and Maciste contro i cacciatori di teste, so Maciste switches from Kirk Morris to Reg Lewis in various scenes; this movie is very scarce, since it was never distributed in the U.S. and is apparently not even available in English)
In 1973, the Spanish cult film director Jesus Franco directed two low-budget "Maciste films" for French producers: Maciste contre la Reine des Amazones (Maciste vs the Queen of the Amazons) and Les exploits érotiques de Maciste dans l'Atlantide (The Erotic Exploits of Maciste in Atlantis). The films had almost identical casts, both starring Val Davis as Maciste, and appear to have been shot back-to-back. The former was distributed in Italy as a "Karzan" movie (a cheap Tarzan imitation), while the latter film was released only in France with hardcore inserts as Les Gloutonnes ("The Gobblers"). These two films were totally unrelated to the 1960s Italian Maciste series.
Ursus series (1960–1964)
Following Buddy Baer's portrayal of Ursus in the classic 1951 film Quo Vadis, Ursus was used as a superhuman Roman-era character who became the protagonist in a series of Italian adventure films made in the early 1960s.
When the "Hercules" film craze hit in 1959, Italian filmmakers were looking for other muscleman characters similar to Hercules whom they could exploit, resulting in the nine-film Ursus series listed below. Ursus was referred to as a "Son of Hercules" in two of the films when they were dubbed in English (in an attempt to cash in on the then-popular "Hercules" craze), although in the original Italian films, Ursus had no connection to Hercules whatsoever. In the English-dubbed version of one Ursus film (retitled Hercules, Prisoner of Evil), Ursus was actually referred to throughout the entire film as "Hercules".
There were a total of nine Italian films that featured Ursus as the main character, listed below as follows: Italian title / English translation of the Italian title (American release title);
Ursus / Ursus (Ursus, Son of Hercules, 1960) Mighty Ursus (United Kingdom), starring Ed Fury
La Vendetta di Ursus / The Revenge of Ursus (The Vengeance of Ursus, 1961) starring Samson Burke
Ursus e la Ragazza Tartara / Ursus and the Tartar Girl (Ursus and the Tartar Princess, 1961) The Tartar Invasion, The Tartar Girl; starring Joe Robinson, Akim Tamiroff, Yoko Tani; directed by Remigio Del Grosso
Ursus Nella Valle dei Leoni / Ursus in the Valley of the Lions (Valley of the Lions, 1962) starring Ed Fury; this film revealed the origin story of Ursus
Ursus il gladiatore ribelle / Ursus the Rebel Gladiator (The Rebel Gladiators, 1962) starring Dan Vadis
Ursus Nella Terra di Fuoco / Ursus in the Land of Fire (Son of Hercules in the Land of Fire, 1963) Son of Atlas in the Land of Fire; starring Ed Fury
Ursus il terrore dei Kirghisi / Ursus, the Terror of the Kirghiz (Hercules, Prisoner of Evil, 1964) starring Reg Park
Ercole, Sansone, Maciste e Ursus: gli invincibili / Hercules, Samson, Maciste and Ursus: The Invincibles (Samson and His Mighty Challenge, 1964) starring Yan Larvor as Ursus (a.k.a. Combate dei Gigantes or Le Grand Défi)
Gli Invincibili Tre / The Invincible Three (Three Avengers, 1964) starring Alan Steel as Ursus
Samson series (1961–1964)
A character named Samson was featured in a series of five Italian peplum films in the 1960s, no doubt inspired by the 1959 re-release of the epic Victor Mature film Samson and Delilah. The character was similar to the Biblical Samson in the third and fifth films only; in the other three, he just appears to be a very strong man (not related at all to the Biblical figure).
The titles are listed as follows: Italian title / its English translation (U.S. release title in parentheses);
Sansone / Samson (Samson) 1961, starring Brad Harris, in France as Samson Against Hercules
Sansone contro i pirati / Samson Against the Pirates (Samson and the Sea Beast) 1963, starring Kirk Morris
Ercole sfida Sansone / Hercules Challenges Samson (Hercules, Samson and Ulysses) 1963, starring Richard Lloyd
Sansone contro il corsaro nero / Samson Against the Black Pirate (Hercules and the Black Pirate) 1963, starring Alan Steel
Ercole, Sansone, Maciste e Ursus: gli invincibili / Hercules, Samson, Maciste and Ursus: The Invincibles (Samson and His Mighty Challenge) 1964, starring Nadir Baltimore as Samson (a.k.a. Combate dei Gigantes or Le Grand Défi)
The name Samson was also inserted into the U.S. titles of six other Italian movies when they were dubbed in English for U.S. distribution, although these films actually featured the adventures of the famed Italian folk hero Maciste.
Samson Against the Sheik (1962), Son of Samson (1960), Samson and the Slave Queen (1963), Samson and the Seven Miracles of the World (1961), Samson vs. the Giant King (1964), and Samson in King Solomon's Mines (1964) were all re-titled Maciste movies, because the American distributors did not feel the name Maciste was marketable to U.S. filmgoers.
Samson and the Treasure of the Incas (a.k.a. Hercules and the Treasure of the Incas) (1965) sounds like a peplum title, but was actually a spaghetti Western.
The Sons of Hercules (TV syndication package)
The Sons of Hercules was a syndicated television show that aired in the United States in the 1960s. The series repackaged 14 randomly chosen Italian peplum films, unifying them with memorable opening and closing theme songs and a standard voice-over introduction that related the main hero of each film to Hercules in any way it could. In some areas, each film was split into two one-hour episodes, so the 14 films were shown as 28 weekly episodes. None of the films were ever theatrically released in the U.S.
The films are not listed in chronological order, since they were not really related to each other in any way. The first title listed below for each film was its American broadcast television title, followed in parentheses by the English translation of its original Italian theatrical title:
Ursus, Son of Hercules (Ursus) 1961, starring Ed Fury; released as Mighty Ursus in the United Kingdom
Mole Men vs the Son of Hercules (Maciste, the Strongest Man in the World) 1961, starring Mark Forest
Triumph of the Son of Hercules (The Triumph of Maciste) 1961, starring Kirk Morris
Fire Monsters Against the Son of Hercules (Maciste vs. the Monsters) 1962, starring Reg Lewis
Venus Against the Son of Hercules (Mars, God Of War) 1962, starring Roger Browne
Ulysses Against the Son of Hercules (Ulysses against Hercules) 1962, starring Mike Lane
Medusa Against the Son of Hercules (Perseus The Invincible) 1962, starring Richard Harrison
Son of Hercules in the Land of Fire (Ursus In The Land Of Fire) 1963, starring Ed Fury
Tyrant of Lydia Against The Son of Hercules (Goliath and the Rebel Slave) 1963, starring Gordon Scott
Messalina Against the Son of Hercules (The Last Gladiator) 1963, starring Richard Harrison
The Beast of Babylon Against the Son of Hercules (Hero of Babylon) 1963, starring Gordon Scott; a.k.a. Goliath, King of the Slaves
Terror of Rome Against the Son of Hercules (Maciste, Gladiator of Sparta) 1964, starring Mark Forest
Son of Hercules in the Land of Darkness (Hercules the Invincible) 1964, starring Dan Vadis
Devil of the Desert Against the Son of Hercules (Anthar the Invincible) 1964, starring Kirk Morris, directed by Antonio Margheriti; a.k.a. The Slave Merchants and Soraya, Queen of the Desert
Steve Reeves pepla (in chronological order of production)
Steve Reeves appeared in 14 pepla made in Italy from 1958 to 1964, and most of his films are highly regarded examples of the genre. His pepla are listed below in order of production, not in order of release. The U.S. release titles are shown below, followed by the original Italian title and its translation (in parentheses).
Hercules (1958) (Le fatiche di Ercole / The Labors of Hercules) actually filmed in 1957, released in Italy in 1958, released in the U.S. in 1959
Hercules Unchained (1959) (Ercole e la regina di Lidia / Hercules and the Queen of Lydia) released in the U.S. in 1960
Goliath and the Barbarians (1959) (Il terrore dei barbari / Terror of the Barbarians)
The Giant of Marathon (1959) (La battaglia di Maratona / The Battle of Marathon)
The Last Days of Pompeii (1959) (Gli ultimi giorni di Pompei / The Last Days of Pompeii)
The White Warrior (1959) (Hadji Murad il Diavolo Bianco / Hadji Murad, The White Devil) directed by Riccardo Freda
Morgan, the Pirate (1960) (Morgan, il pirata / Morgan, the Pirate)
The Thief of Baghdad (1960) (Il Ladro di Bagdad / The Thief of Baghdad)
The Trojan Horse (1961) (La guerra di Troia / The Trojan War)
Duel of the Titans (1961) (Romolo e Remo / Romulus and Remus)
The Slave (1962) (Il Figlio di Spartaco / Son of Spartacus)
The Avenger (1962) (La leggenda di Enea / The Legend of Aeneas) a.k.a. The Last Glory of Troy (this was a sequel to The Trojan Horse)
Sandokan the Great (1963) (Sandokan, la tigre di Mompracem / Sandokan, the Tiger of Mompracem) directed by Umberto Lenzi
Pirates of Malaysia (1964) (I pirati della Malesia / The Pirates of Malaysia) a.k.a. Sandokan, the Pirate of Malaysia and Pirates of the Seven Seas; this was a sequel to Sandokan the Great, directed by Umberto Lenzi
Other (non-series) Italian pepla
There were many 1950s and 1960s Italian pepla that did not feature a major superhero (such as Hercules, Maciste or Samson), and as such they fall into a miscellaneous category. Many were of the cappa e spada (swashbuckler) variety, though they often featured well-known characters such as Ali Baba, Julius Caesar, Ulysses, Cleopatra, the Three Musketeers, Zorro, Theseus, Perseus, Achilles, Robin Hood, and Sandokan. The first really successful Italian films of this kind were Black Eagle (1946) and Fabiola (1949).
Adventurer of Tortuga, The (1964), starring Guy Madison
Adventures of Mandrin, The (1952) a.k.a. Captain Adventure and Don Juan's Night of Love, starring Raf Vallone and Silvana Pampanini
Adventures of Scaramouche, The (1963) The Mask of Scaramouche, starring Gérard Barray and Gianna Maria Canale
Alexander The Great (1956), starring Richard Burton (U.S. film with music score by Mario Nascimbene)
Ali Baba and the Sacred Crown (1962) The Seven Tasks of Ali Baba, starring Richard Lloyd
Ali Baba and the Seven Saracens (1964) Sinbad Against the Seven Saracens, starring Gordon Mitchell
Alone Against Rome (1962) Vengeance of the Gladiators, starring Lang Jeffries and Rossana Podestà
Anthar the Invincible (1964) Devil of the Desert Against the Son of Hercules, starring Kirk Morris, directed by Antonio Margheriti
Antigone (1961) Rites for the Dead, starring Irene Papas, a Greek production
Arena, The (1974) Naked Warriors, directed by Steve Carver and Joe D'Amato, starring Pam Grier and Margaret Markov (a late entry in the genre)
Arms of the Avenger (1963) The Devils of Spartivento, starring John Drew Barrymore
Atlas (1961) Atlas, the Winner of Athena, directed in Greece by Roger Corman, starring Michael Forest
Attack of the Moors (1959) The Kings of France
Attack of the Normans (1962) The Normans, starring Cameron Mitchell
Attila (1954), directed by Pietro Francisci, starring Anthony Quinn and Sophia Loren
Avenger of the Seven Seas (1961) Executioner of the Seas, starring Richard Harrison and Michèle Mercier
Avenger of Venice, The (1963), starring Brett Halsey and Gianna Maria Canale
Bacchantes, The (1961), starring Pierre Brice and Akim Tamiroff
Balboa (Spanish, 1963) Conquistadors of the Pacific
Barabbas (1961) produced by Dino de Laurentiis, starring Anthony Quinn, filmed in Italy
Battle of the Amazons (1973) Amazons: Women of Love and War, Beauty of the Barbarian (directed by Alfonso Brescia)
Beatrice Cenci (1956) directed by Riccardo Freda
Beatrice Cenci (1969) directed by Lucio Fulci
Behind the Mask of Zorro (1966) The Oath of Zorro, Tony Russel
Bible, The (1966) ( La Bibbia), Dino de Laurentiis, Ennio Morricone music, filmed in Italy
Black Archer, The (1959)
Black Devil, The (1957) starred Gerard Landry
Black Duke, The (1963), starring Cameron Mitchell
Black Eagle, The (1946) Return of the Black Eagle, directed by Riccardo Freda
Black Lancers, The (1962) Charge of the Black Lancers, Mel Ferrer
Brennus, Enemy of Rome (1964) Battle of the Valiant, Gordon Mitchell
Burning of Rome, The (1963) The Magnificent Adventurer, Brett Halsey
Caesar Against the Pirates (1962) Gordon Mitchell
Caesar the Conqueror (1962), starring Cameron Mitchell
Captain Falcon (1958) Lex Barker
Captain from Toledo, The (1966)
Captain of Iron, The (1961) Revenge of the Mercenaries, Barbara Steele
Captain Phantom (1953)
Captains of Adventure (1961) starring Paul Muller, Gerard Landry
Caribbean Hawk, The (1963) Yvonne Monlaur
Carthage in Flames (1960)
Castilian, The (1963) Cesar Romero
Catherine of Russia (1963) directed by Umberto Lenzi
Cavalier in the Devil's Castle (1959)
Centurion, The (1962) The Conqueror of Corinth
Challenge of the Gladiator (1965) Peter Lupus
Cleopatra's Daughter (1960) The Tomb of the Kings, Debra Paget
Colossus and the Amazon Queen (1960), Ed Fury and Rod Taylor
Colossus of Rhodes, The (1960) directed by Sergio Leone
Conqueror of Atlantis (1965) The Kingdom in the Sand, Kirk Morris (U.S. dubbed version calls the hero "Hercules")
Conqueror of Maracaibo, The (1961)
Conqueror of the Orient (1961) starring Rik Battaglia
Constantine and the Cross (1960) Constantine the Great, starring Cornel Wilde
Coriolanus: Hero without a Country (1963) Thunder of Battle, Gordon Scott
Cossacks, The (1959) Edmund Purdom
Count of Monte Cristo, The (1961) Louis Jourdan
Damon and Pythias (1962) The Tyrant of Syracuse, Guy Williams
David and Goliath (1960) Orson Welles
Defeat of Hannibal, The (1937) Scipione l'Africano
Defeat of the Barbarians (1962) King Manfred
Desert Desperadoes (1959) Akim Tamiroff
Desert Warrior (1957) The Desert Lovers, Ricardo Montalban
Devil Made a Woman, The (1959) A Girl Against Napoleon
Devil's Cavaliers, The (1959)
Diary of a Roman Virgin (1974) Livia, una vergine per l'impero romano, directed by Joe D'Amato (used stock footage from The Last Days of Pompeii (1959) and The Arena (1974))
Dragon's Blood, The (1957) Sigfrido, based on the legend of the Nibelungen, special effects by Carlo Rambaldi
Duel of Champions (1961) Orazi e Curiazi, Alan Ladd
Erik the Conqueror (1961) Gli Invasori/ The Invaders, directed by Mario Bava, starring Cameron Mitchell
Esther and the King (1961) Joan Collins, Richard Egan
Executioner of Venice, The (1963) Lex Barker, Guy Madison
Fabiola (1949) The Fighting Gladiator
Falcon of the Desert (1965) The Magnificent Challenge, starring Kirk Morris
Fall of Rome, The (1961) directed by Antonio Margheriti
Fall of the Roman Empire, The (1964) U.S. production filmed in Spain, Sophia Loren
Fighting Musketeers, The (1961)
Fire Over Rome (1963)
Fury of Achilles, The (1962) Gordon Mitchell
Fury of the Pagans (1960) Fury of the Barbarians
Giant of Metropolis, The (1961) Gordon Mitchell (this peplum had a science fiction theme instead of fantasy)
Giant of the Evil Island (1965) Mystery of the Cursed Island, Peter Lupus
Giants of Rome (1964) directed by Antonio Margheriti, starring Richard Harrison
Giants of Thessaly (1960) directed by Riccardo Freda
Gladiator of Rome (1962) Battle of the Gladiators, Gordon Scott
Gladiators Seven (1962) The Seven Gladiators, Richard Harrison
Golden Arrow, The (1962) directed by Antonio Margheriti
Gold for the Caesars (1963) Jeffrey Hunter
Golgotha (1935) Behold the Man (made in France)
Guns of the Black Witch (1961) Terror of the Sea, Don Megowan
Hannibal (1959) Victor Mature
Hawk of the Caribbean (1963) The Caribbean Hawk
Head of a Tyrant (1959) Judith and Holophernes
Helen of Troy (1956) starring Jacques Sernas
Hero of Babylon (1963) The Beast of Babylon vs. the Son of Hercules, Gordon Scott
Hero of Rome (1964) The Colossus of Rome, Gordon Scott
Herod the Great (1958)
Huns, The (1960) Queen of the Tartars
Invasion 1700 (1962) With Iron and Fire, With Fire and Sword, Daggers of Blood
Ivanhoe, the Norman Swordsman (1971) La spada normanna, directed by Roberto Mauri
Invincible Gladiator, The (1961) Richard Harrison
Invincible Swordsman, The (1963)
The Iron Swordsman (1949) Count Ugolino, directed by Riccardo Freda
Jacob, The Man Who Fought With God (1963)
Kampf um Rom (1968) The Last Roman, starring Laurence Harvey, Honor Blackman, Orson Welles
Kindar, the Invulnerable (1965) Mark Forest
King of the Vikings (1960) The Prince in Chains
Knight of a Hundred Faces, The (1960) The Silver Knight, The Knight of a Thousand Faces, starring Lex Barker
Knights of Terror (1963) Terror of the Red Capes, Tony Russel
Knight Without a Country (1959) The Faceless Rider
Knives of the Avenger (1967) Viking Massacre, directed by Mario Bava, starring Cameron Mitchell
Last Gladiator, The (1963) Messalina Against the Son of Hercules
Last of the Vikings (1961), starring Cameron Mitchell
Legions of the Nile (1959) The Legions of Cleopatra
Lion of St. Mark, The (1964) Gordon Scott
Lion of Thebes, The (1964) Helen of Troy, Mark Forest
Loves of Salammbo, The (1960) Salambo
Magnificent Gladiator, The (1964) Mark Forest
Marco Polo (1962) Rory Calhoun
Marco the Magnificent (1965) Anthony Quinn, Orson Welles
Mars, God of War (1962) Venus Against the Son of Hercules
Masked Conqueror, The (1962)
Masked Man Against the Pirates, The (1965)
Mask of the Musketeers (1963) Zorro and the Three Musketeers, starring Gordon Scott
Massacre in the Black Forest (1967), starring Cameron Mitchell
Messalina (1960) Belinda Lee
Michael Strogoff (1956) Revolt of the Tartars
Mighty Crusaders, The (1958) Jerusalem Set Free, Gianna Maria Canale
Minotaur, The (1960) Theseus Against the Minotaur, The Warlord of Crete
Miracle of the Wolves (1961) Blood on his Sword, starring Jean Marais
Missione sabbie roventi (Mission Burning Sands) (1966) starring Renato Rossini, directed by Alfonso Brescia
Mongols, The (1961) directed by Riccardo Freda, starring Jack Palance
Musketeers of the Sea (1962)
My Son, The Hero (1961) Arrivano i Titani, The Titans
Mysterious Rider, The (1948) directed by Riccardo Freda
Mysterious Swordsman, The (1956) starring Gerard Landry
Nero and the Burning of Rome (1953) Nero and Messalina
Night of the Great Attack (1961) Revenge of the Borgias
Night They Killed Rasputin, The (1960) The Last Czar
Nights of Lucretia Borgia, The (1959)
Odyssey, The (1968) L'Odissea, Cyclops segment directed by Mario Bava; Samson Burke played Polyphemus the Cyclops
Old Testament, The (1962) starred Brad Harris
Perseus the Invincible (1962) Medusa vs. the Son of Hercules
Pharaoh's Woman, The (1960)
Pia of Ptolomey (1962)
Pirate and the Slave Girl, The (1959) Scimitar of the Saracen, Lex Barker
Pirate of the Black Hawk, The (1958) Gérard Landry
Pirate of the Half Moon (1957)
Pirates of the Coast (1960) Lex Barker
Pontius Pilate (1962) Basil Rathbone
Prince with the Red Mask, The (1955) The Red Eagle, starring Frank Latimore
Prisoner of the Iron Mask, The (1962) The Revenge of the Iron Mask
Pugni, Pirati e Karatè (1973) Fists, Pirates and Karate, directed by Joe D'Amato, starring Richard Harrison (a 1970s Italian spoof of pirate movies)
Queen for Caesar, A (1962) Gordon Scott
Queen of Sheba (1952) directed by Pietro Francisci
Queen of the Amazons (1960) Colossus and the Amazon Queen
Queen of the Nile (1961) Nefertiti, Vincent Price
Queen of the Pirates (1960) The Venus of the Pirates, Gianna Maria Canale
Queen of the Seas (1961) directed by Umberto Lenzi
Quo Vadis (1951) filmed in Italy, Sergio Leone asst. dir.
Rage of the Buccaneers (1961) Gordon, The Black Pirate, starring Vincent Price
Rape of the Sabine Women, The (1961) Romulus and the Sabines, Roger Moore
Red Cloak, The (1955) Bruce Cabot
Red Sheik, The (1962)
Revak the Rebel (1960) The Barbarians, Jack Palance
Revenge of Black Eagle, The (1951) Gianna Maria Canale
Revenge of Ivanhoe, The (1965) Rik Battaglia
Revenge of Spartacus, The (1965) Revenge of the Gladiators, Roger Browne
Revenge of the Barbarians (1960)
Revenge of the Black Eagle (1951) directed by Riccardo Freda
Revenge of the Conquered (1961) Drakut the Avenger
Revenge of the Gladiators (1961) starring Mickey Hargitay
Revenge of the Musketeers (1964) Dartagnan vs. the Three Musketeers, starring Fernando Lamas
Revolt of the Barbarians (1964) directed by Guido Malatesta
Revolt of the Mercenaries (1961)
Revolt of the Praetorians (1964) The Invincible Warriors, starring Richard Harrison
Revolt of the Seven (1964) The Spartan Gladiator, starring Helga Line
Revolt of the Slaves, The (1960) Rhonda Fleming
Robin Hood and the Pirates (1960) Lex Barker
Roland the Mighty (1956) Orlando, directed by Pietro Francisci
Rome Against Rome (1964) War of the Zombies
Rome 1585 (1961) The Mercenaries, Debra Paget
Rover, The (1967) The Adventurer, starring Anthony Quinn
Sack of Rome, The (1953) The Barbarians, The Pagans
Samson and Gideon (1965) Fernando Rey, Biblical film
Sandokan Fights Back (1964) Sandokan to the Rescue, The Revenge of Sandokan
Sandokan vs. the Leopard of Sarawak (1964) Throne of Vengeance
Saracens, The (1965) The Devil's Pirate, The Flag of Death, starring Richard Harrison
Saul and David (1964)
Scheherazade (1963) starring Anna Karina
Sea Pirate, The (1966) Thunder Over the Indian Ocean, Surcouf, Hero of the Seven Seas
Secret Mark of D'Artagnan, The (1962)
Secret Seven, The (1965) The Invincible Seven
Seven from Thebes (1964)
Seven Rebel Gladiators (1965) Seven Against All, starring Roger Browne
Seven Revenges, The (1961) The Seven Challenges, Ivan the Conqueror, starring Ed Fury
Seven Seas to Calais (1962) Sir Francis Drake, King of the Seven Seas, Rod Taylor
Seven Slaves Against the World (1964) Seven Slaves Against Rome, starring Roger Browne and Gordon Mitchell
Seven Tasks of Ali Baba, The (1962) Ali Baba and the Sacred Crown
Seventh Sword, The (1962) Brett Halsey
79 A.D., the Destruction of Herculaneum (1962) Brad Harris
Shadow of Zorro, The (1962)
Sheba and the Gladiator (1959) The Sign of Rome, Sign of the Gladiator, Anita Ekberg
Siege of Syracuse (1960) Tina Louise
Simbad e il califfo di Bagdad / Sinbad and the Caliph of Baghdad (1973) directed by Pietro Francisci
Sins of Rome (1953) Spartacus, directed by Riccardo Freda
Slave Girls of Sheba (1963) starring Linda Cristal
Slave of Rome (1960) starring Guy Madison
Slave Queen of Babylon (1963) Yvonne Furneaux
Slaves of Carthage, The (1956) The Sword and the Cross, Gianna Maria Canale (not to be confused with Mary Magdalene; see below)
Sodom and Gomorrah (1962) Rosanna Podesta, U.S./Italian film shot in Italy, co-directed by Sergio Leone
Son of Black Eagle (1968)
Son of Captain Blood, The (1962)
Son of Cleopatra, The (1965) Mark Damon
Son of d'Artagnan (1950) directed by Riccardo Freda
Son of El Cid, The (1965) 100 Horsemen, Mark Damon
Son of the Red Corsair (1959) a.k.a. Son of the Red Pirate, Lex Barker
Son of the Sheik (1961) Kerim, Son of the Sheik, starring Gordon Scott
Spartacus and the Ten Gladiators (1964) Ten Invincible Gladiators, starring Dan Vadis
Spartan Gladiator, The (1965) Tony Russel
Story of Joseph and his Brethren, The (1961)
Suleiman the Conqueror (1961)
Sword and the Cross, The (1958) Mary Magdalene
Sword in the Shadow, A (1961) starring Livio Lorenzon
Sword of Damascus, The (1964) The Thief of Damascus
Sword of El Cid, The (1962) The Daughters of El Cid
Sword of Islam, The (1961)
Sword of the Conqueror (1961) Rosamund and Alboino, Jack Palance
Sword for the Empire, A (1965) Sword of the Empire
Sword of the Rebellion, The (1964) The Rebel of Castelmonte
Swordsman of Siena (1962) The Mercenary
Sword of Vengeance (1961) La spada della vendetta
Sword Without a Country (1961) Sword Without a Flag
Tartars, The (1961) starring Victor Mature
Taras Bulba, The Cossack (1963) Plains of Battle
Taur, the Mighty (1963) Tor the Warrior, Taur, the King of Brute Force, starring Joe Robinson
Temple of the White Elephant (1965) Sandok, the Giant of the Jungle, Sandok, the Maciste of the Jungle (not a Maciste film, however, in spite of the alternate title)
Ten Gladiators, The (1963) starring Dan Vadis
Terror of the Black Mask (1963) The Invincible Masked Rider
Terror of the Red Mask (1960) starring Lex Barker
Terror of the Steppes (1964) The Mighty Khan, Kirk Morris
Tharus, Son of Attila (1962) Colossus and the Huns, Ricardo Montalban
Theodora, Slave Empress (1954) directed by Riccardo Freda
Thor and the Amazon Women (1963) starring Joe Robinson
Three Hundred Spartans, The (1963) starring Richard Egan; a U.S. film shot in Greece with Italian screenwriters
Three Swords for Rome (1965) starring Roger Browne
Three Swords of Zorro, The (1963)
Tiger of the Seven Seas (1962)
Treasure of the Petrified Forest, The (1965) starring Gordon Mitchell
Triumph of Robin Hood (1962) starring Samson Burke
Triumph of the Ten Gladiators (1965) starring Dan Vadis
Two Gladiators, The (1964) Fight or Die, starring Richard Harrison
Tyrant of Castile, The (1964) starring Mark Damon
Ulysses (1954) produced by Dino De Laurentiis, starring Kirk Douglas and Anthony Quinn
Virgins of Rome, The (1961) Amazons of Rome
Vulcan, Son of Jupiter (1962) Vulcan, Son of Jove, Gordon Mitchell, Richard Lloyd, Roger Browne
War Goddess (1973) The Bare-Breasted Warriors, Le guerriere dal seno nudo, directed by Terence Young
War Gods of Babylon (1962) The Seventh Thunderbolt, The Seven Glories of Assur
Warrior and the Slave Girl, The (1958) The Revolt of the Gladiators, starring Gianna Maria Canale
Warrior Empress, The (1960) Sappho, Venus of Lesbos, Kerwin Matthews, Tina Louise
White Slave Ship (1961) directed by Silvio Amadio
Women of Devil's Island (1962) starring Guy Madison
Wonders of Aladdin, The (1961) starring Donald O'Connor
Zorikan the Barbarian (1964) starring Dan Vadis
Zorro (1975) Alain Delon
Zorro and the Three Musketeers (1963) Mask of the Musketeers, starring Gordon Scott
Zorro in the Court of England (1970) starring Spiros Focás
Zorro at the Court of Spain (1962) The Masked Conqueror, George Ardisson
Zorro of Monterrey (1971) El Zorro de Monterrey, Carlos Quiney
Zorro, Rider of Vengeance (1971) Carlos Quiney
Zorro's Last Adventure (1970) La última aventura del Zorro, Carlos Quiney
Zorro the Avenger (1962) The Revenge of Zorro, Frank Latimore
Zorro the Avenger (1969) El Zorro justiciero, starring Fabio Testi
Zorro the Fox (1968) El Zorro, George Ardisson
Zorro, the Navarra Marquis (1969) Nino Vingelli
Zorro the Rebel (1966) Howard Ross
Zorro Against Maciste (1963) Samson and the Slave Queen (1963) starring Pierre Brice, Alan Steel
Gladiator films
Inspired by the success of Spartacus, a number of Italian peplums heavily emphasized the gladiatorial arena in their plots, making it almost a peplum subgenre in itself. One group of supermen known as "The Ten Gladiators" appeared in a trilogy, all three films starring Dan Vadis in the lead role.
Alone Against Rome (1962) Vengeance of the Gladiators
The Arena (1974) Naked Warriors, co-directed by Joe D'Amato, starring Pam Grier, Paul Muller and Rosalba Neri
Challenge of the Gladiator (1965) starring Peter Lupus ( Rock Stevens)
Fabiola (1949) The Fighting Gladiator
Gladiator of Rome (1962) Battle of the Gladiators, starring Gordon Scott
Gladiators Seven (1962) The Seven Gladiators, starring Richard Harrison
Invincible Gladiator, The (1961) Richard Harrison
Last Gladiator, The (1963) Messalina Against the Son of Hercules
Maciste, Gladiator of Sparta (1964) Terror of Rome Against the Son of Hercules
Revenge of Spartacus, The (1965) Revenge of the Gladiators, starring Roger Browne
Revenge of The Gladiators (1961) starring Mickey Hargitay
Revolt of the Seven (1964) The Spartan Gladiator, starring Tony Russel and Helga Line
Revolt of the Slaves (1961) Rhonda Fleming
Seven Rebel Gladiators (1965) Seven Against All, starring Roger Browne
Seven Slaves Against the World (1965) Seven Slaves Against Rome, The Strongest Slaves in the World, starring Roger Browne and Gordon Mitchell
Sheba and the Gladiator (1959) The Sign of Rome, Sign of the Gladiator, Anita Ekberg
Sins of Rome (1952) Spartacus, directed by Riccardo Freda
Slave, The (1962) Son of Spartacus, Steve Reeves
Spartacus and the Ten Gladiators (1964) Ten Invincible Gladiators, Dan Vadis
Spartan Gladiator, The (1965) Tony Russel
Ten Gladiators, The (1963) Dan Vadis
Triumph of the Ten Gladiators (1965) Dan Vadis
Two Gladiators, The (1964) Fight or Die, Richard Harrison
Ursus, the Rebel Gladiator (1963) Rebel Gladiators, Dan Vadis
Warrior and the Slave Girl, The (1958) The Revolt of the Gladiators, Gianna Maria Canale
Ancient Rome
Brennus, Enemy of Rome (1964) Battle of the Valiant, Gordon Mitchell
Caesar Against the Pirates (1962) Gordon Mitchell
Caesar the Conqueror (1962) Cameron Mitchell, Rik Battaglia
Carthage in Flames (1960) Cartagine in fiamme, directed by Carmine Gallone
Centurion The (1962) The Conqueror of Corinth
Colossus of Rhodes, The (1960) directed by Sergio Leone
Constantine and the Cross (1960) Constantine the Great
Coriolanus: Hero without a Country (1963) Thunder of Battle, Gordon Scott
Diary of a Roman Virgin (1974) Livia, una vergine per l'impero romano, directed by Joe D'Amato (used stock footage from The Last Days of Pompeii (1959) and The Arena (1974))
Duel of Champions (1961) Horatio and Curiazi, Alan Ladd
Duel of the Titans (1962) Romulus and Remus, Steve Reeves, Gordon Scott
Fall of Rome, The (1961) directed by Antonio Margheriti
Fire Over Rome (1963)
Giants of Rome (1963) directed by Antonio Margheriti, starring Richard Harrison
Gold for the Caesars (1963) Jeffrey Hunter
Hannibal (1959) Victor Mature
Hero of Rome (1964) The Colossus of Rome, Gordon Scott
Kampf um Rom (1968) The Last Roman, starring Laurence Harvey, Honor Blackman, Orson Welles
Last Days of Pompeii (1959) Steve Reeves
Massacre in the Black Forest (1967) Cameron Mitchell
Messalina (1960)
Nero and the Burning of Rome (1955) Nero and Messalina
Quo Vadis (1951) assistant director Sergio Leone
Rape of the Sabine Women, The (1961) Roger Moore
Revenge of Spartacus, The (1965) Roger Browne
Revenge of the Barbarians (1960)
Revolt of the Praetorians (1965) Richard Harrison
Rome Against Rome (1963) War of the Zombies
The Secret Seven (1965) The Invincible Seven
79 A.D., the Destruction of Herculaneum (1962) Brad Harris
Sheba and the Gladiator (1959) The Sign of Rome, Sign of the Gladiator, Anita Ekberg
Sins of Rome (1952) Spartaco, directed by Riccardo Freda
The Slave of Rome (1960) starring Guy Madison
Slaves of Carthage, The (1956) The Sword and the Cross, Gianna Maria Canale (not to be confused with Mary Magdalene)
Theodora, Slave Empress (1954) directed by Riccardo Freda
Three Swords for Rome (1965) Roger Browne
Virgins of Rome, The (1961) Amazons of Rome
Greek mythology
The Avenger (1962) Legend of Aeneas, Steve Reeves
Alexander The Great (1956) U.S. film with music score by Mario Nascimbene
Antigone (1961) Rites for the Dead, a Greek production
Bacchantes, The (1961)
Battle of the Amazons (1973) Amazons: Women of Love and War, Beauty of the Barbarian (directed by Alfonso Brescia)
The Colossus of Rhodes (1961) directed by Sergio Leone
Conqueror of Atlantis (1965) starring Kirk Morris
Damon and Pythias (1962) Guy Williams
Fury of Achilles (1962) Gordon Mitchell
Giant of Marathon (1959) (The Battle of Marathon) Steve Reeves
Giants of Thessaly (1960) directed by Riccardo Freda
Helen of Troy (1956) directed by Robert Wise
Hercules Challenges Samson (1963) Hercules, Samson and Ulysses
Lion of Thebes, The (1964) Helen of Troy, Mark Forest
Mars, God of War (1962) Venus Against the Son of Hercules
The Minotaur (1961) Theseus Against the Minotaur, The Warlord of Crete
My Son, the Hero (1961) Arrivano i Titani, The Titans
Odyssey, The (1968) Cyclops segment directed by Mario Bava; Samson Burke played Polyphemus the Cyclops
Perseus the Invincible (1962) Medusa vs. the Son of Hercules
Queen of the Amazons (1960) Colossus and the Amazon Queen
Seven from Thebes (1964) André Lawrence
Siege of Syracuse, The (1962) Tina Louise
Treasure of the Petrified Forest (1965) Gordon Mitchell (the plot involves Amazons)
Trojan Horse, The (1961) The Trojan War, Steve Reeves
Ulysses (1954) starring Kirk Douglas and Anthony Quinn
Vulcan, Son of Jupiter (1962) Vulcan, Son of Jove, Gordon Mitchell, Richard Lloyd, Roger Browne
Warrior Empress, The (1960) Sappho, Venus of Lesbos, Kerwin Matthews, Tina Louise
Barbarian and Viking films
Attack of the Normans (1962) The Normans, Cameron Mitchell
Attila (1954) directed by Pietro Francisci, Anthony Quinn, Sophia Loren
The Cossacks (1960)
Defeat of the Barbarians (1962) King Manfred
Dragon's Blood, The (1957) Sigfrido, based on the legend of the Niebelungen, special effects by Carlo Rambaldi
Erik the Conqueror (1961) The Invaders, directed by Mario Bava, starring Cameron Mitchell
Fury of the Pagans (1960) Fury of the Barbarians
Goliath and the Barbarians (1959) Terror of the Barbarians, Steve Reeves
The Huns (1960) Queen of the Tartars
Invasion 1700 (1962) With Iron and Fire, With Fire and Sword, Daggers of Blood
King of the Vikings (1960) The Prince in Chains
The Last of the Vikings (1961) starring Cameron Mitchell and Broderick Crawford
Marco Polo (1962) Rory Calhoun
Marco the Magnificent (1965) Anthony Quinn, Orson Welles
Michel Strogoff (1956) Revolt of the Tartars
The Mongols (1961) starring Jack Palance
Revak the Rebel (1960) The Barbarians, Jack Palance
Revolt of the Barbarians (1964) directed by Guido Malatesta
Roland the Mighty (1956) directed by Pietro Francisci
Saracens, The (1965) The Devil's Pirate, The Flag of Death
The Seven Revenges (1961) The Seven Challenges, Ivan the Conqueror, starring Ed Fury
Suleiman the Conqueror (1961)
Sword of the Conqueror (1961) Rosamund and Alboino, Jack Palance
Sword of the Empire (1964)
Taras Bulba, The Cossack (1963) Plains of Battle
The Tartars (1961) Victor Mature, Orson Welles
Terror of the Steppes (1963) The Mighty Khan, starring Kirk Morris
Tharus Son of Attila (1962) Colossus and the Huns, Ricardo Montalban
Zorikan the Barbarian (1964) Dan Vadis
Swashbucklers / pirates
Adventurer of Tortuga (1965) starring Guy Madison
Adventures of Mandrin, The (1960) Captain Adventure
Adventures of Scaramouche, The (1963) The Mask of Scaramouche, Gianna Maria Canale
Arms of the Avenger (1963) The Devils of Spartivento, starring John Drew Barrymore
At Sword's Edge (1952) dir. by Carlo Ludovico Bragaglia
Attack of the Moors (1959) The Kings of France
Avenger of the Seven Seas (1961) Executioner of the Seas, Richard Harrison
Avenger of Venice, The (1963) directed by Riccardo Freda, starring Brett Halsey
Balboa (Spanish, 1963) Conquistadors of the Pacific
Beatrice Cenci (1956) directed by Riccardo Freda
Beatrice Cenci (1969) directed by Lucio Fulci
Behind the Mask of Zorro (1966) The Oath of Zorro, Tony Russel
Black Archer, The (1959) Gerard Landry
Black Devil, The (1957) Gerard Landry
Black Duke, The (1963) Cameron Mitchell
Black Eagle, The (1946) Return of the Black Eagle, directed by Riccardo Freda
Black Lancers, The (1962) Charge of the Black Lancers, Mel Ferrer
Captain from Toledo, The (1966)
Captain of Iron, The (1962) Revenge of the Mercenaries, Barbara Steele
Captain Phantom (1953)
Captains of Adventure (1961) starring Paul Muller and Gerard Landry
Caribbean Hawk, The (1963) Yvonne Monlaur
Castilian, The (1963) Cesar Romero, U.S./Spanish co-production
Catherine of Russia (1962) directed by Umberto Lenzi
Cavalier in Devil's Castle (1959) Cavalier of Devil's Island
Conqueror of Maracaibo, The (1961)
Count of Monte Cristo, The (1962) Louis Jourdan
Devil Made a Woman, The (1959) A Girl Against Napoleon
Devil's Cavaliers, The (1959) The Devil's Riders, Gianna Maria Canale
Dick Turpin (1974) a Spanish production
El Cid (1961) Sophia Loren, Charlton Heston, U.S./ Italian film shot in Italy
Executioner of Venice, The (1963) Lex Barker, Guy Madison
Fighting Musketeers, The (1961)
Giant of the Evil Island (1965) Mystery of the Cursed Island, Peter Lupus
Goliath and the Masked Rider (1964) Hercules and the Masked Rider, Alan Steel
Guns of the Black Witch (1961) Terror of the Sea, Don Megowan
Hawk of the Caribbean (1963)
Invincible Swordsman, The (1963)
The Iron Swordsman (1949) Count Ugolino, directed by Riccardo Freda
Ivanhoe, the Norman Swordsman (1971) La spada normanna, directed by Roberto Mauri
Knight of a Hundred Faces, The (1960) The Silver Knight, starring Lex Barker
Knights of Terror (1963) Terror of the Red Capes, Tony Russel
Knight Without a Country (1959) The Faceless Rider
Lawless Mountain, The (1953) La montaña sin ley (stars Zorro)
Lion of St. Mark, The (1964) Gordon Scott
Mark of Zorro (1975) made in France, Monica Swinn
Mark of Zorro (1976) George Hilton
Masked Conqueror, The (1962)
Mask of the Musketeers (1963) Zorro and the Three Musketeers, starring Gordon Scott
Michael Strogoff (1956) Revolt of the Tartars
Miracle of the Wolves (1961) Blood on his Sword, starring Jean Marais
Morgan, the Pirate (1960) Steve Reeves
Musketeers of the Sea (1960)
Mysterious Rider, The (1948) directed by Riccardo Freda
Mysterious Swordsman, The (1956) starred Gerard Landry
Nephews of Zorro, The (1968) Italian comedy with Franco and Ciccio
Night of the Great Attack (1961) Revenge of the Borgias
Night They Killed Rasputin, The (1960) The Last Czar
Nights of Lucretia Borgia, The (1959)
Pirate and the Slave Girl, The (1959) Lex Barker
Pirate of the Black Hawk, The (1958)
Pirate of the Half Moon (1957)
Pirates of the Coast (1960) Lex Barker
Prince with the Red Mask, The (1955) The Red Eagle
Prisoner of the Iron Mask, The (1961) The Revenge of the Iron Mask
Pugni, Pirati e Karatè (1973) Fists, Pirates and Karate, directed by Joe D'Amato, starring Richard Harrison (a 1970s Italian spoof of pirate movies)
Queen of the Pirates (1961) The Venus of the Pirates, Gianna Maria Canale
Queen of the Seas (1961) directed by Umberto Lenzi
Rage of the Buccaneers (1961) Gordon, The Black Pirate, starring Vincent Price
Red Cloak, The (1955) Bruce Cabot
Revenge of Ivanhoe, The (1965) Rik Battaglia
Revenge of the Black Eagle (1951) directed by Riccardo Freda
Revenge of the Musketeers (1963) Dartagnan vs. the Three Musketeers, Fernando Lamas
Revenge of Spartacus, The (1965) Roger Browne
Revolt of the Mercenaries (1961)
Robin Hood and the Pirates (1960) Lex Barker
Roland, the Mighty (1956) directed by Pietro Francisci
Rome 1585 (1961) The Mercenaries, Debra Paget, set in the 1500s
Rover, The (1967) The Adventurer, starring Anthony Quinn
The Sack of Rome (1953) The Barbarians, The Pagans (set in the 1500s)
Samson vs. the Black Pirate (1963) Hercules and the Black Pirate, Alan Steel
Samson vs. the Pirates (1963) Samson and the Sea Beast, Kirk Morris
Sandokan Fights Back (1964) Sandokan to the Rescue, The Revenge of Sandokan, Guy Madison
Sandokan the Great (1964) Sandokan, the Tiger of Mompracem, Steve Reeves
Sandokan, the Pirate of Malaysia (1964) Pirates of Malaysia, Pirates of the Seven Seas, Steve Reeves, directed by Umberto Lenzi
Sandokan vs. the Leopard of Sarawak (1964) Throne of Vengeance, Guy Madison
Saracens, The (1965) The Devil's Pirate, The Flag of Death, starring Richard Harrison
Sea Pirate, The (1966) Thunder Over the Indian Ocean, Surcouf, Hero of the Seven Seas
Secret Mark of D'artagnan, The (1962)
Seven Seas to Calais (1961) Sir Francis Drake, King of the Seven Seas, Rod Taylor
Seventh Sword, The (1960) Brett Halsey
Shadow of Zorro (1962) Frank Latimore
Sign of Zorro, The (1952)
Sign of Zorro, The (1963) Duel at the Rio Grande, Sean Flynn
Son of Black Eagle (1968)
Son of Captain Blood (1962)
Son of d'Artagnan (1950) directed by Riccardo Freda
Son of El Cid, The (1965) Mark Damon
Son of the Red Corsair (1959) Son of the Red Pirate, Lex Barker
Son of Zorro (1973) Alberto Dell'Acqua
Sword in the Shadow, A (1961) starring Livio Lorenzon
Sword of Rebellion, The (1964) The Rebel of Castelmonte
Sword of Vengeance (1961) La spada della vendetta
Swordsman of Siena, The (1961) The Mercenary
Sword Without a Country (1960) Sword Without a Flag
Taras Bulba, The Cossack (1963) Plains of Battle
Terror of the Black Mask (1963) The Invincible Masked Rider
Terror of the Red Mask (1960) Lex Barker
Three Swords of Zorro, The (1963) The Sword of Zorro, Guy Stockwell
Tiger of the Seven Seas (1963)
Triumph of Robin Hood (1962) starring Samson Burke
Tyrant of Castile, The (1964) Mark Damon
White Slave Ship (1961) directed by Silvio Amadio
The White Warrior (1959) Hadji Murad, the White Devil, Steve Reeves
Women of Devil's Island (1962) starring Guy Madison
Zorro (1968) El Zorro, Zorro the Fox, George Ardisson
Zorro (1975) Alain Delon
Zorro and the Three Musketeers (1963) Gordon Scott
Zorro at the Court of England (1969) Spiros Focás as Zorro
Zorro at the Court of Spain (1962) The Masked Conqueror, Georgio Ardisson
Zorro of Monterrey (1971) El Zorro de Monterrey, Carlos Quiney
Zorro, Rider of Vengeance (1971) Carlos Quiney
Zorro's Last Adventure (1970) La última aventura del Zorro, Carlos Quiney
Zorro the Avenger (1962) The Revenge of Zorro, Frank Latimore
Zorro the Avenger (1969) El Zorro justiciero (1969) Fabio Testi
Zorro, the Navarra Marquis (1969) Nadir Moretti as Zorro
Zorro the Rebel (1966) Howard Ross
Zorro Against Maciste (1963) Samson and the Slave Queen (1963) starring Pierre Brice, Alan Steel
Biblical
Barabbas (1961) Dino de Laurentiis, Anthony Quinn, filmed in Italy
Bible, The (1966) Dino de Laurentiis, John Huston, filmed in Italy
David and Goliath (1960) Orson Welles
Desert Desperadoes (1956) plot involves King Herod
Esther and the King (1961) Joan Collins, Richard Egan
Head of a Tyrant, The (1959)
Herod the Great (1958) Edmund Purdom
Jacob, the Man Who Fought with God (1964) Giorgio Cerioni
Mighty Crusaders, The (1957) Jerusalem Set Free, Gianna Maria Canale
Old Testament, The (1962) Brad Harris
Pontius Pilate (1962) Jean Marais
The Queen of Sheba (1952), directed by Pietro Francisci
Samson and Gideon (1965) Fernando Rey
Saul and David (1963) Gianni Garko
Sodom and Gomorrah (1962) Rosanna Podesta, U.S./Italian film shot in Italy
Story of Joseph and his Brethren, The (1960)
Sword and the Cross, The (1958) Mary Magdalene, Gianna Maria Canale
Ancient Egypt
Cleopatra's Daughter (1960) starring Debra Paget
Legions of the Nile (1959) starring Linda Cristal
Pharaoh's Woman, The (1960) with John Drew Barrymore
Queen for Caesar, A (1962) Gordon Scott
Queen of the Nile (1961) Nefertiti, Vincent Price
Son of Cleopatra (1964) Mark Damon
Babylon / the Middle East
Ali Baba and the Seven Saracens (1962) Sinbad Against the 7 Saracens, starring Gordon Mitchell
Anthar, The Invincible (1964) Devil of the Desert Against the Son of Hercules, starring Kirk Morris, directed by Antonio Margheriti
Desert Warrior (1957) The Desert Lovers, Ricardo Montalban
Falcon of the Desert (1965) The Magnificent Challenge, starring Kirk Morris
Golden Arrow, The (1962) directed by Antonio Margheriti
Goliath at the Conquest of Baghdad (1964) Goliath at the Conquest of Damascus, Peter Lupus
Goliath and the Rebel Slave (1963) The Tyrant of Lydia vs. The Son of Hercules, Gordon Scott
Goliath and the Sins of Babylon (1963) Maciste, the World's Greatest Hero, Mark Forest
Hercules and the Tyrants of Babylon (1964)
Hero of Babylon (1963) The Beast of Babylon vs. the Son of Hercules, Gordon Scott
Kindar, the Invulnerable (1965) Mark Forest, Rosalba Neri
Missione sabbie roventi (Mission Burning Sands) (1966) starring Renato Rossini, directed by Alfonso Brescia
Red Sheik, The (1962)
Scheherazade (1963) starring Anna Karina
Seven Tasks of Ali Baba, The (1962) Ali Baba and the Sacred Crown, starring Richard Lloyd
Slave Girls of Sheba (1963) starring Linda Cristal
Slave Queen of Babylon (1962) Yvonne Furneaux
Son of the Sheik (1961) Kerim, Son of the Sheik, starring Gordon Scott
Sword of Damascus, The (1964) The Thief of Damascus
Sword of Islam, The (1961) a.k.a. Love and Faith; an Italian/ Egyptian co-production
Thief of Baghdad, The (1961) Steve Reeves
War Gods of Babylon (1962) The Seventh Thunderbolt
Wonders of Aladdin, The (1961) Donald O'Connor
The second peplum wave: the 1980s
After the peplum gave way to the spaghetti Western and Eurospy films in 1965, the genre lay dormant for close to 20 years. Then in 1982, the box-office successes of Jean-Jacques Annaud's Quest for Fire (1981), Arnold Schwarzenegger's Conan the Barbarian (1982) and Clash of the Titans (1981) spurred a second renaissance of sword and sorcery Italian pepla in the five years immediately following. Most of these films had low budgets, focusing more on barbarians and pirates so as to avoid the need for expensive Greco-Roman sets. The filmmakers tried to compensate for their shortcomings with the addition of some graphic gore and nudity. Many of these 1980s entries were helmed by noted Italian horror film directors (Joe D'Amato, Lucio Fulci, Luigi Cozzi, etc.) and many featured the actors Lou Ferrigno, Miles O'Keefe and Sabrina Siani. Here is a list of the 1980s pepla:
Adam and Eve (1983) Adamo ed Eva, la prima storia d'amore, contains stock footage from One Million Years B.C. (1966)
Ator, the Fighting Eagle (1983) Ator the Invincible, starring Miles O'Keefe and Sabrina Siani, directed by Joe D'Amato
Ator 2: The Blade Master (1985) Cave Dwellers, starring Miles O’Keefe, directed by Joe D’Amato
Ator 3: Iron Warrior (1986) Iron Warrior, starring Miles O'Keefe, directed by Alfonso Brescia (Joe D'Amato disowned this entry in the Ator saga, since it was done without his involvement)
Ator 4: Quest for the Mighty Sword (1989) Quest for the Mighty Sword, starring Eric Allan Kramer (as the son of Ator), Laura Gemser and Marisa Mell, directed by Joe D'Amato
Barbarian Master (1984) Sangraal, the Sword of Fire, Sword of the Barbarians, starring Sabrina Siani
The Barbarians (1987) The Barbarians and Company, semi-comedy starring Peter and David Paul, directed by Ruggero Deodato
The Cantabrians (1980) Los Cantabros, directed by Paul Naschy in Spain
Conqueror of the World (1983) I padroni del mondo / Fathers of the World, Master of the World (a barbarian movie set in prehistoric times) directed by Alberto Cavallone
Conquest (1983) Conquest of the Lost Land, starring Sabrina Siani, directed by Lucio Fulci
Hercules (1983) starring Lou Ferrigno and Sybil Danning, directed by Luigi Cozzi
Hercules 2 (1984) The Adventures of Hercules, starring Lou Ferrigno, directed by Luigi Cozzi
Hundra (1983) Italian/ Spanish Conan ripoff directed by Matt Cimber
The Invincible Barbarian (1982) Gunan, the Warrior, Gunan, King of the Barbarians, starring Sabrina Siani, directed by Franco Prosperi
Ironmaster (1983) The War of Iron, co-starring Luigi Montefiori, directed by Umberto Lenzi
The Seven Magnificent Gladiators (1983) starring Lou Ferrigno and Dan Vadis
She (1982) starring Sandahl Bergman and Gordon Mitchell
Sinbad of the Seven Seas (1988) starring Lou Ferrigno, directed by Luigi Cozzi
Thor, the Conqueror (1983) directed by Tonino Ricci
The Throne of Fire (1983) starring Sabrina Siani, directed by Franco Prosperi
Yor, the Hunter from the Future (1983) starring Reb Brown, directed by Antonio Margheriti (a barbarian film that has science fiction elements in the story)
A group of so-called "porno peplum" films were devoted to Roman emperors, especially, but not only, Caligula and Claudius' spouse Messalina:
Caligula (1979) directed by Tinto Brass
A Filha de Calígula (1981) directed by Ody Fraga
Caligula and Messalina (1981) directed by Bruno Mattei
My Nights with Messalina (1982) directed by Jaime J. Puig
Nerone and Poppea (1982) directed by Bruno Mattei
Caligula... The Untold Story (1982) directed by Joe D'Amato
Orgies of Caligula (1984) Caligula's Slaves, Roma, l'antica chiave dei sensi
See also
The Bible in film
References
Further reading
Diak, Nicholas, editor. The New Peplum: Essays on Sword and Sandal Films and Television Programs Since the 1990s. McFarland and Company, Inc. 2018.
Richard Dyer, "The White Man's Muscles", in R. Dyer, White, London: Routledge, 1997.
David Chapman, Retro Studs: Muscle Movie Posters from Around the World, Portland: Collectors Press, 2002.
Hervé Dumont, L'Antiquité au cinéma. Vérités, légendes et manipulations, Nouveau-Monde, 2009.
Florent Fourcart, Le Péplum italien (1946–1966): Grandeur et décadence d'une antiquité populaire, CinExploitation, 2012.
Maggie Gunsberg, "Heroic Bodies: The Culture of Masculinity in Peplums", in M. Gunsberg, Italian Cinema: Gender and Genre, Houndmills: Palgrave Macmillan, 2005.
Patrick Lucanio, With Fire and Sword: Italian Spectacles on American Screens, 1958–1968, Scarecrow Press, 1994.
Irmbert Schenk, "The Cinematic Support to Nationalist(ic) Mythology: The Italian Peplum 1910–1930", in Natascha Gentz and Stefan Kramer (eds.), Globalization, Cultural Identities and Media Representations, Albany, NY: State University of New York Press, 2006.
Stephen Flacassier, "Muscles, Myths and Movies", Rabbit's Garage, 1994.
External links
Films
The Avenger by Georgia Rivalta. Steve Reeves stars as Aeneas.
Hercules Unchained (Pietro Francisci, director.)
The Giant of Metropolis (starring Gordon Mitchell; Umberto Scarpelli, director)
Hercules and the Tyrants of Babylon (Domenico Paolella, dir.)
Images and discussion
The Many Faces of Hercules at Brian's Drive-In Theatre
PEPLVM - Images de l'Antiquité, par Michel Eloy (in French)
Cinéma & Histoire: L'Antiquité au Cinéma (in French), by Hervé Dumont.
Something Weird Video (source of peplum DVD's)
Santo And Friends (filmography of Mexican muscleman films)
Film genres
Historical fiction
Italian films by genre
20th century in Italy
Fantasy genres
Classical antiquity in modern art and culture
iOS 8

iOS 8 is the eighth major release of the iOS mobile operating system developed by Apple Inc., the successor to iOS 7. It was announced at the company's Worldwide Developers Conference on June 2, 2014, and released on September 17, 2014. It was succeeded by iOS 9 on September 16, 2015.
iOS 8 incorporated significant changes to the operating system. It introduced a programming interface for communication between apps, and "Continuity", a cross-platform (Mac, iPhone, and iPad) system that enables communication between devices in different product categories, such as the ability to answer calls and reply to SMS messages on the Mac and iPad. Continuity includes a "Handoff" feature that lets users start a task on one device and continue it on another. Other changes included Spotlight Suggestions, a new search feature that provides more detailed results; Family Sharing, which lets a family link their accounts together to share content, with one parent as the administrator holding permission controls; an updated keyboard with QuickType, which provides contextual predictive word suggestions; and Extensibility, which allows easier sharing of content between apps. Third-party developers gained additional ways to integrate their apps more deeply into the operating system, including support for widgets in the Notification Center and the ability to make keyboards that users can replace the default iOS keyboard with.
App updates in the release included the new Health app, which can aggregate data from different fitness apps and enables a Medical ID accessible from the lock screen in emergencies; support for iCloud Photo Library in the Photos app, which enables photos to be synchronized and stored in the cloud; and iCloud Drive, which lets users store files in the cloud and browse them across devices. In iOS 8.4, Apple updated its Music app with a streaming service called Apple Music, and a 24-hour radio station called Beats 1 (later renamed Apple Music 1).
Reception of iOS 8 was positive. Critics praised Continuity and Extensibility as major features enabling easier control and interaction between different apps and devices. They also liked the QuickType keyboard word suggestions, and highlighted Spotlight Suggestions for making the iPhone "almost a portable search portal for everything." However, reviewers noted that the full potential of iOS 8 would only be realized once third-party developers integrated their apps with the new features, particularly widgets in the Notification Center.
Roughly a week after release, iOS 8 had reached 46% of iOS usage share. In October 2014, it was reported that the adoption rate had "stalled," increasing by only "a single percentage point" from the previous month. The slow adoption was blamed on the large amount of free storage space required to install the upgrade, which was especially difficult for iPhones sold with 8 or 16 gigabytes of maximum storage space. The following December, iOS 8 had reached 63% usage share, a notable 16-percentage-point increase from the October measurement.
History
Introduction and initial release
iOS 8 was introduced at the company's Worldwide Developers Conference on June 2, 2014, with the first beta made available to conference attendees after the keynote presentation.
iOS 8 was officially released on September 17, 2014.
Updates
System features
Continuity
iOS 8 introduced Continuity, a cross-platform (Mac, iPhone, and iPad) system that enables communication between devices in different product categories. Continuity enables phone call functionality for the iPad and Mac, in which calls are routed through the iPhone to a secondary device, which then serves as a speakerphone. Continuity also brings SMS support to the iPad and Mac, an extension of the iMessage-only support in previous versions.
Continuity adds a feature called "Handoff" that lets users start a task on one device and continue it on another, such as composing an e-mail on the iPhone, continuing it on the iPad, and then sending it from the Mac. To support Handoff and Continuity, Macs needed the OS X Yosemite operating system, released in October 2014, as well as support for Bluetooth Low Energy.
Spotlight
iOS 8 introduced Spotlight Suggestions, a new search feature that integrates with many websites and services to show more detailed search results, including snippets of Wikipedia articles, local news, quick access to apps installed on the device, iTunes content, movie showtimes, nearby places, and info from various websites. Spotlight Suggestions are available on the iOS home screen as well as in the Safari web browser search bar.
Notifications
The drop-down Notification Center was redesigned to allow widget functionality. Third-party developers can add widget support to their apps, letting users see information in the Notification Center without having to open each respective app. Users can add, rearrange, or remove widgets at any time. Examples of widgets include a weather widget showing current conditions and a calendar widget showing upcoming events.
Notifications are now actionable, allowing users to reply to a message while it appears as a quick drop-down, or act on a notification through the Notification Center.
Keyboard
iOS 8 includes a new predictive typing feature called QuickType, which displays word predictions above the keyboard as the user types.
Apple now allows third-party developers to make keyboard apps that users can replace the default iOS keyboard with. For added privacy, Apple added a settings toggle called "Allow Full Access" that must be enabled before a keyboard can act outside its app sandbox, for example to synchronize keyboard data to the cloud. Third-party keyboards are not allowed to use Siri for voice dictation, and some secure text fields, such as password fields, do not accept third-party keyboard input.
Family Sharing
iOS 8 introduced Family Sharing, which allows up to six people to register unique iTunes accounts that are then linked together, with one parent becoming the administrator, controlling the overall experience. Purchases made on one account can be shared with the other family members, but purchases made by children under 13 years of age require parental approval. Purchases made by adults are not visible to the children at all.
Family Sharing also extends into apps. A shared album is automatically generated in the Photos app of each family member, allowing everyone to add photos, videos, and comments to a common place. An Ask to Buy feature allows any family member to request the purchase of items in the App Store, iTunes Store, and iBooks Store, as well as in-app purchases and iCloud storage, with the administrator having the option to approve or deny the purchase.
Multitasking
The multitasking screen shows a list of recently called and favorited contacts. The feature can be turned off in Settings.
Other
iOS 8 includes an additional data roaming option in Settings for European users, allowing greater control over data usage abroad.
The Siri personal voice assistant now has integrated Shazam support. Asking Siri "What song is this?" will identify what song is playing.
Wi-Fi calling has been added to allow mobile phone calls over Wi-Fi, provided the user's mobile carrier enables Voice over Wi-Fi functionality in its service.
App features
Photos and Camera
Camera app
The Camera app gets two new features: time-lapse and self-timer. Time-lapse records frames at much longer intervals than normal film frame rates and builds them into movies, showing events at a faster speed. Self-timer gives the user the option of a three-second or ten-second countdown before a photo is automatically taken. iPads can now take pictures in panoramic mode.
iCloud Photo Library
iOS 8 added iCloud Photo Library support to the Photos app, enabling photo synchronization between different Apple devices. Photos and videos were backed up in full resolution and in their original formats. This also meant that lower-quality versions of photos could be cached on the device rather than the full-size originals, potentially saving significant storage space on models with limited storage availability.
Search
The Photos app received improved search, with new search categorization options, including Nearby, One Year Ago, Favorites, and Home, based on geolocation and the date of photo capture.
Editing
Additionally, the Photos app gained more precise editing controls, including improved rotation; one-touch auto-enhancement tools; and deeper color adjustments, such as brightness, contrast, exposure, and shadows. There is also an option to hide a photo without deleting it.
Extensions
Apple added an Extensibility feature in iOS 8, that allows filters and effects from third-party apps to be accessed directly from within a menu in the standard Photos app, rather than having to import and export photos through each respective app to apply effects.
Camera Roll
In the initial release of iOS 8, Apple removed the "Camera Roll" feature from the Photos app. Camera Roll was an overview of all photos on the device; it was replaced by a "Recently Added" view displaying photos by how recently the user captured them.
The removal of Camera Roll sparked user complaints, and Apple returned the feature in the iOS 8.1 update.
Messages
In iOS 8, Messages gets new features for group conversations, including a Do Not Disturb mode that disables conversation notifications, as well as the ability to remove participants from the chat. A new Tap to Talk chat button lets users send quick voice comments to a recipient, and a Record button allows users to record short videos.
For interaction between two Apple users, the Messages app allows users to send short picture, video or audio clips with a 2-minute expiration time.
In the Settings app, the user has the option to have messages be automatically deleted after a certain time period.
Safari
In the Safari web browser, developers can now add support for Safari Password Sharing, which allows them to share credentials between the sites and apps they own, potentially cutting down on the number of times users need to type in credentials for their apps and services. The browser also adds support for the WebGL graphics API.
iCloud Drive
Similar in style to a file manager, iCloud Drive is a file hosting service that, once enabled in Settings, lets users save any kind of file in the app; the files are synchronized to other iOS devices, as well as the Mac.
App Store
In iOS 8, Apple updated App Store with an "Explore" tab providing improved app discovery, trending searches in the "Search" tab, and the ability for developers to bundle multiple apps into a single discounted package. New "preview" videos allow developers to visually show an app's function.
Health
HealthKit is a service that allows developers to make apps that integrate with the new Health app. The Health app primarily aggregates data from fitness apps installed on the user's device, except for steps and flights climbed, which are tracked through the motion processor on the user's iPhone. Users can enter their medical history in Medical ID, which is accessible on the lock screen, in case of an emergency.
HomeKit
HomeKit serves as a software framework that lets users set up their iPhone to configure, communicate with, and control smart-home appliances. By defining rooms, items and actions in the HomeKit service, users can trigger automatic actions in the house through a simple voice command to Siri or through apps.
Manufacturers of HomeKit-enabled devices are required to purchase a license, and all HomeKit products are required to have an encryption co-processor. Equipment manufactured without HomeKit-support can be enabled for use through a "gateway" product, such as a hub that connects between those devices and the HomeKit service.
Passbook
The Passbook app on iOS 8 was updated to include Apple Pay, a digital payment service, available on iPhone 6 and 6 Plus with the release of iOS 8.1.
Music
A new music streaming service, Apple Music, was introduced in the iOS 8.4 update. It allows subscribers to listen to an unlimited number of songs on demand. With the release of the music service, the standard Music app on iOS was revamped both visually and functionally to include Apple Music, as well as the 24-hour live radio station Beats 1.
Notes
Notes received rich text editing support, with the ability to bold, italicize or underline text; and image support, allowing users to post photos in the app.
Weather
The Weather app now uses weather data from The Weather Channel instead of Yahoo!. The app also received slight changes in the user interface. In March 2019, Yahoo ended support for the Weather app on iOS 7 and earlier.
Tips
iOS 8 added a new "Tips" app that shows tips and brief information about iOS features on a weekly basis.
Touch ID
iOS 8 allows Touch ID to be used in third-party apps.
Reception
iOS 8 received positive reviews. Brad Molen of Engadget highlighted Continuity as a major advancement for users with multiple Apple devices. He also praised the Extensibility feature, allowing apps to share data, and liked the support for third-party keyboards. However, Molen noted that some of the new introductions - Family Sharing, Continuity, and iCloud Drive - require further diving into the Apple ecosystem to work. He particularly enjoyed actionable notifications and third-party widget support in Notification Center. Charles Arthur of The Guardian also liked Extensibility, as well as the new QuickType word suggestions functionality in the iOS keyboard. He criticized the lack of an option for choosing default apps, and he also criticized the Settings menu for being confusing and unintuitive. Darrell Etherington of TechCrunch praised the improvements to iMessage, writing: "Best for me has been the ability to mute and leave group conversations, which is something I've been sorely missing since the introduction of group iMessage conversations." He liked the new search and editing features in Photos, and the QuickType feature in the keyboard, but particularly highlighted Spotlight Suggestions as "one of the better features of iOS 8, even if it's a small service addition," noting that "it makes your iPhone almost a portable search portal for everything." Martin Bryant of The Next Web wrote that "The real advances here are yet to come," adding that "Apple has included demonstrations of what can be done, but the true power of what's under the hood will be realized over the coming days, weeks and months" as third-party developers gradually incorporate new features into their apps.
On September 23, 2014, "roughly a week" after the release of iOS 8, user adoption of iOS 8 had reached 46%. In October 2014, Andrew Cunningham of Ars Technica reported that iOS 8 user adoption rate had "stalled," only climbing "a single percentage point" since the previous September measurement of 46%. Cunningham blamed the "over-the-air" update requiring 5 gigabytes to install, an "unusually large amount" that may have posed challenges to those using 8 gigabyte and 16 gigabyte devices. As an alternative, Apple offered the update via its iTunes software, but Cunningham noted that "An iTunes hookup is going to be even more out of the way these days than it was a few years ago, not least because Apple has spent the last three years coaching people to use their iDevices independently of their computers." In December, a new report from Ars Technica stated that iOS 8 usage had increased to 63%, up "a solid 16 percent."
Problems
App crash rate
A study by Apteligent (formerly Crittercism) found that the rate at which apps crashed in their tests was 3.56% on iOS 8, higher than the 2% found on iOS 7.1.
8.0.1 update issues
In September 2014, the iOS 8.0.1 update caused significant issues with Touch ID on iPhone 6 and cellular network connectivity on some models. Apple stated that affected users should reinstall the initial iOS 8 release until version 8.0.2 was ready.
iOS 8.0.2 was released one day after 8.0.1, with a fix for issues caused by the 8.0.1 update.
Miscellaneous bugs
Forbes published several articles focusing on problems in iOS 8 regarding Wi-Fi and battery, Bluetooth, and calendar.
"Effective power" text message crash
In May 2015, news outlets reported on a bug where receiving a text message with a specific combination of symbols and Arabic characters caused the Messages application to crash and the iPhone to reboot.
The bug, named "effective power," could potentially continuously reboot a device if the message was visible on the lock screen.
The flaw was exploited for the purpose of trolling, by intentionally causing others' phones to crash.
The bug was fixed in iOS 8.4, an update released in June 2015.
Hoaxes
In September 2014, a hoax Apple advertisement for an alleged feature of iOS 8 called "Wave" circulated on Twitter, which promised users that they would be able to recharge their iPhone by heating it in a microwave oven. This feature does not exist, and the media cited numerous people reporting on Twitter that they had destroyed their iPhone by following the procedure described in the advertisement.
Supported devices
With this release, Apple dropped support for the iPhone 4.
iPhone
iPhone 4S
iPhone 5
iPhone 5C
iPhone 5S
iPhone 6
iPhone 6 Plus
iPod Touch
iPod Touch (5th generation)
iPod Touch (6th generation)
iPad
iPad 2
iPad (3rd generation)
iPad (4th generation)
iPad Air
iPad Air 2
iPad Mini (1st generation)
iPad Mini 2
iPad Mini 3
References
External links
2014 software
Tablet operating systems |
41651502 | https://en.wikipedia.org/wiki/Intrahealth%20Systems%20Limited | Intrahealth Systems Limited | Intrahealth Systems Limited is a privately held company that develops, licenses, supports and sells electronic health record software, medical practice management software, and related services. Founded in Auckland, New Zealand, by Dr. Mark Matthews and Dr. Andrew Hall in 1997, the company moved its headquarters to the City of North Vancouver, British Columbia, Canada, in 2005. Apart from the head office, Intrahealth maintains offices in Auckland, New Zealand; Sydney; and Toronto, Canada.
History
Intrahealth was founded in 1997 and spent its early years developing Profile and moving towards becoming one of New Zealand's largest vendors of Practice Management Systems (PMS).
The ACT (Australian Capital Territory) Division of General Practice entered into an alliance with Intrahealth to supply clinical and patient management software for general practitioners in Canberra and surrounding areas in 2003.
In 2005, Intrahealth expanded into British Columbia, Canada after winning a significant contract with Fraser Health Authority. The expansion saw additional contracts awarded from Vancouver Coastal, Interior, and Intertribal Health Authorities between that year and 2006. Also, in its home country, Intrahealth won a contract with the New Zealand Defence Force which would see its software used for the medical records of the defence force's 10,000 employees.
Profile for iOS was announced as a portable version of its practice management system in 2012, combining both a live online mode with syncing for offline use.
In 2013, Intrahealth was selected as the sole approved EHR provider for New Brunswick's provincial program. The selection followed a rigorous procurement process conducted by Velante, a subsidiary of the New Brunswick Medical Society, in which 16 other vendors submitted proposals. Additionally, Profile was granted dual National Class certification as both an Electronic Medical Record (EMR) and Ambulatory EMR by Canada Health Infoway, making Intrahealth the first company to achieve this.
Products
Profile (Medical practice management software and Electronic health record system for Windows)
Profile for Mac (Medical practice management software and Electronic health record system for OS X)
Aero (Patient and Provider app platform for iOS and Android)
Accession (Patient web portal)
Maestro (Electronic messaging platform)
Insync
com.unity
See also
Medical practice management software
Electronic health record
Health information technology
References
External links
Software companies of New Zealand
Software companies of Canada
Software companies established in 1997
Companies based in North Vancouver
Electronic health record software companies
New Zealand companies established in 1997
Canadian companies established in 1997 |
33621747 | https://en.wikipedia.org/wiki/Jane%20Octavia%20Brookfield | Jane Octavia Brookfield | Jane Octavia Brookfield (25 March 1821 – 27 November 1896) was a literary hostess and writer, best known for her platonic friendship with William Makepeace Thackeray. She also wrote four novels; some critics have drawn parallels between the events in these novels and her relationship with Thackeray.
Biography
Brookfield was born on 25 March 1821, the youngest daughter of Sir Charles Abraham Elton, a former soldier. She lived with her seven sisters and five brothers, along with her father and mother Sarah in Clevedon Court, near Bristol. Sir Charles was a published author, writing an elegy about two of his sons who had drowned in the Bristol Channel, and was friends with both Charles Lamb and Samuel Taylor Coleridge.
In 1837, the family moved to Southampton, and due to Jane's height her father nicknamed her "Glumdalclitch". In 1838 she was courted by and became engaged to William Henry Brookfield, the priest at the local church, twelve years her senior. After he found a better job, as curate of St James's Church, Piccadilly, the couple married on 18 November 1841.
Jane maintained an influential literary salon, which included among others Thackeray and her husband's old college friend Alfred Tennyson. It was her close friendship with Thackeray for which she is best remembered, and in the mid-1840s they were on intimate terms. D. J. Taylor, in Brookfield's biography in the Oxford Dictionary of National Biography, states "the relationship between him and Jane was almost certainly not sexual (there may have been a chaste embrace or two ...)". Thackeray incorporated some of her characteristics into two of his characters: Amelia Sedley in Vanity Fair (1848), and Laura Bell in Pendennis (1850).
In 1851, William Brookfield barred Thackeray from further visits to or correspondence with Jane. The friendship was never renewed; Thackeray's health worsened steadily through the 1850s and he died in 1863.
Jane Brookfield wrote four novels that were published several years after Thackeray's death, all between 1868 and 1873. Some critics have seen echoes of the friendship with Thackeray in these novels. She ceased writing after her husband died in 1874.
Jane herself died in 1896 at the age of 75.
Family
The couple were survived by their two sons: Arthur Montagu Brookfield (1853–1940), who became a British Army officer, diplomat, author and Conservative politician who sat in the House of Commons from 1885 to 1903; and Charles Hallam Elton Brookfield (1857–1913), an actor.
Novels
Only George (1868)
Not Too Late (1868)
Influence (1871)
Not a Heroine (1873)
Notes
References
1821 births
1896 deaths
Women of the Victorian era
Daughters of baronets
British salon-holders |
258959 | https://en.wikipedia.org/wiki/Phone%20connector%20%28audio%29 | Phone connector (audio) | A phone connector, also known as phone jack, audio jack, headphone jack or jack plug, is a family of electrical connectors typically used for analog audio signals. The standard is that a plug (described as the male connector) will connect with a jack (described as female).
The phone connector was invented for use in telephone switchboards in the 19th century and is still widely used.
The phone connector is cylindrical in shape, with a grooved tip to retain it. In its original audio configuration, it typically has two, three, four or, occasionally, five contacts. Three-contact versions are known as TRS connectors, where T stands for "tip", R stands for "ring" and S stands for "sleeve". Ring contacts are typically the same diameter as the sleeve, the long shank. Similarly, two-, four- and five-contact versions are called TS, TRRS and TRRRS connectors respectively. The outside diameter of the "sleeve" conductor is 6.35 mm (1/4 in). The "mini" connector has a diameter of 3.5 mm and the "sub-mini" connector has a diameter of 2.5 mm. The "mini" connector has a length of 15 mm.
Other terms
Specific models, and connectors used in specific applications, may be termed e.g. stereo plug, headphone jack, microphone jack, aux input, etc. The 3.5 mm versions are commonly called mini-phone, mini-stereo, mini jack, etc.
In the UK, the terms jack plug and jack socket are commonly used for the respective male and female phone connectors. In the US, a stationary (more fixed) electrical connector is called a jack. The terms phone plug and phone jack sometimes refer to different genders of phone connectors, but also sometimes refer to the RJ11 and older telephone plugs and corresponding jacks that connect wired telephones to wall outlets.
Phone plugs and jacks are not to be confused with the similar terms phono plug and phono jack (or in the UK, phono socket) which refer to RCA connectors common in consumer hi-fi and audiovisual equipment. The 3.5 mm connector is, however, sometimes—but counter to the connector manufacturers' nomenclature—referred to as mini phono.
Historical development
Quarter-inch size
Modern phone connectors are available in three standard sizes. The original version descends from as early as 1877, when the first-ever telephone switchboard was installed at 109 Court Street in Boston in a building owned by Charles Williams, Jr.; or 1878, when an early switchboard was used for the first commercial manual telephone exchange in New Haven, Connecticut created by George W. Coy. The 1877 switchboard was last known to be located in the lobby of 185 Franklin Street, Boston.
In February 1884, C. E. Scribner was issued US Patent 293,198 for a "jack-knife" connector that is the origin of calling the receptacle a "jack". Scribner was issued U.S. Patents 262,701, 305,021, and 489,570 relating to an improved design that more closely resembles the modern plug. The current form of the switchboard plug was patented prior to 1902, when Henry P. Clausen received a patent on an improved design. It is still used today on mainstream musical equipment, especially on electric guitars.
Western Electric was the manufacturing arm of the Bell System, and thus originated or refined most of the engineering designs, including the telephone jacks and plugs which were later adopted by other industries, including the U.S. military.
By 1907, Western Electric had designed a number of models for different purposes, including:
Code No. 47 2-conductor plugs for use with type 3, 91, 99, 102, 103, 108, and 124 jacks—used for switchboards
Code No. 85 3-conductor plugs for use with type 77 jacks—used for the operator's head telephone
Code No. 103 twin 2-conductor plugs for use with type 91, and type 99 jacks—used for the operator's head telephone and chest transmitter (microphone)
Code No. 109 3-conductor plugs for use with jack 92 on telephone switchboards (with the same basic shape as the modern Bantam plugs)
Code No. 110, 3-conductor plug for use with jacks 49, 117, 118, 140, and 141 on switchboards
Code No. 112, twin 2-conductor plug for use with jacks 91 and 99—used for the operator's head telephone and chest, with a transmitter cutout key (microphone mute)
Code No. 116, 1-conductor plug for use with cordless jack boxes
Code No. 126, 3-conductor plug for use with type 132 and type 309 jacks on portable street railway sets
By 1950, the two main plug designs were:
WE-309 (compatible with -inch jacks, such as 246 jack), for use on high-density jack panels such as the 608A
WE-310 (compatible with -inch jacks, such as the 242)
Several modern designs have descended from those earlier versions:
B-Gauge standard BPO316 (not compatible with EIA RS-453)
EIA RS-453: Dimensional, Mechanical and Electrical Characteristics Defining Phone Plugs & Jacks, the standard for plugs of 6.35 mm (1/4 in) diameter, also found in IEC 60603-11:1992 Connectors for frequencies below 3 MHz for use with printed boards – Part 11: Detail specification for concentric connectors (dimensions for free connectors and fixed connectors).
Military variants
U.S. military versions of the Western Electric plugs were initially specified in Amendment No.1, MIL-P-642, and included:
M642/1-1
M642/1-2
M642/2-1
M642/2-2
M642/4-1
M642/4-2
MIL-P-642/2, also known as PJ-051. (Similar to Western Electric WE-310, and thus not compatible with EIA RS-453)
MIL-P-642/5A: Plug, Telephone (TYPE PJ-068) and Accessory Screws (1973), and MIL-DTL-642F: Plugs, Telephone, and Accessory Screws (2015), with 0.206 in (5.23 mm) diameter, also known by the earlier Signal Corps PL-68 designation. These are commonly used as the microphone jack for aviation radios, and on Collins S-line and many Drake amateur radios. MIL-DTL-642F states, "This specification covers telephone plugs used in telephone (including telephone switchboard consoles), telegraph, and teletype circuits, and for connecting headsets, handsets, and microphones into communications circuits."
Miniature size
The 3.5 mm or miniature size was originally designed in the 1950s as two-conductor connectors for earpieces on transistor radios, and remains a standard still used today. This roughly half-sized version of the original, popularized by the Sony EFM-117J radio (released in 1964), is still commonly used in portable applications. The three-conductor version became very popular with its application on the Walkman in 1979, as unlike earlier transistor radios, these devices had no speaker of their own; the usual way to listen to them was to plug in headphones. There is also an EIA standard for 0.141-inch miniature phone jacks.
The 2.5 mm or sub-miniature sizes were similarly popularized on small portable electronics. They often appeared next to a 3.5 mm microphone jack for a remote control on-off switch on early portable tape recorders; the microphone provided with such machines had the on-off switch and used a two-pronged connector with both the 3.5 and 2.5 mm plugs. They were also used for low-voltage DC power input from wall adapters. In the latter role they were soon replaced by coaxial DC power connectors. 2.5 mm phone jacks have also been used as the headset jacks on mobile telephones.
The 3.5 mm and 2.5 mm sizes are sometimes called 1/8 in and 3/32 in respectively in the United States, though those dimensions are only approximations. All sizes are now readily available in two-conductor (unbalanced mono) and three-conductor (balanced mono or unbalanced stereo) versions.
Four-conductor versions of the 3.5 mm plug and jack are used for certain applications. A four-conductor version is often used in compact camcorders and portable media players, providing stereo sound and composite analog video. It is also used for a combination of stereo audio, a microphone, and controlling media playback, calls, volume and/or a virtual assistant on some laptop computers and most mobile phones, and some handheld amateur radio transceivers from Yaesu. Some headphone amplifiers have used it to connect "balanced" stereo headphones, which require two conductors per audio channel as the channels do not share a common ground.
Broadcast usage
By the 1940s, broadcast radio stations were using Western Electric Code No. 103 plugs and matching jacks for patching audio throughout studios. This connector was used because of its use in AT&T's Long Line circuits for distribution of audio programs over the radio networks' leased telephone lines. Because of the large amount of space these patch panels required, the industry began switching to 3-conductor plugs and jacks in the late 1940s, using the WE Type 291 plug with WE type 239 jacks. The type 291 plug was used instead of the standard type 110 switchboard plug because the location of the large bulb shape on this TRS plug would have resulted in both audio signal connections being shorted together for a brief moment while the plug is being inserted and removed. The Type 291 plug avoids this by having a shorter tip.
Patch bay connectors
Professional audio and the telecommunication industry use a 4.4 mm (0.173 in) diameter plug, associated with trademarked names including Bantam, TT, Tini-Telephone, and Tini-Tel. They are not compatible with standard EIA RS-453/IEC 60603-11 1/4-inch jacks. In addition to a slightly smaller diameter, they have a slightly different geometry. The three-conductor TRS versions are capable of handling balanced line signals and are used in professional audio installations. Though unable to handle as much power, and less reliable than a 1/4 in jack, Bantam connectors are used for professional console and outboard patchbays in recording studio and live sound applications, where large numbers of patch points are needed in a limited space. The slightly different shape of Bantam plugs is also less likely to cause shorting as they are plugged in.
Less common
A two-pin version, known to the telecom industry as a "310 connector", consists of two 1/4-inch phone plugs at a centre spacing of 5/8 inch (16 mm). The socket versions of these can be used with normal phone plugs provided the plug bodies are not too large, but the plug version will only mate with two sockets at 5/8 inch centre spacing, or with line sockets, again with sufficiently small bodies. These connectors are still used today in telephone company central offices on "DSX" patch panels for DS1 circuits. A similar type of 3.5 mm connector is often used in the armrests of older aircraft, as part of the on-board in-flight entertainment system. Plugging a stereo plug into one of the two mono jacks typically results in the audio coming into only one ear. Adapters are available.
A short-barrelled version of the phone plug was used for 20th century high-impedance mono headphones, and in particular those used in World War II aircraft. These have become rare. It is physically possible to use a normal plug in a short socket, but a short plug will neither lock into a normal socket nor complete the tip circuit.
Less commonly used sizes, both diameters and lengths, are also available from some manufacturers, and are used when it is desired to restrict the availability of matching connectors, such as inside diameter jacks for fire safety communication in public buildings.
Aviation and US military connectors
US military phone connectors include both 0.25-in. (6.35 mm) and 0.21-in. (5.34 mm) diameter plugs, which both mate with the M641-series open frame jacks, exemplified by Switchcraft C11 and C12 series jacks. Military specifications and standards relating to phone connectors include MIL-STD 202, MIL-P-642/*, and MIL-J-641.
Commercial and general aviation (GA) civil airplane headset plugs are similar, but not identical. A standard 1/4-in. monaural plug, type PL-55 (both two-conductor phone plugs, also called PJ-055B, which mate with JK-24 and JK-34A jacks) is used for headphones. On many newer GA aircraft the headphone jack is a standard 1/4-in. phone connector wired in the standard unbalanced stereo configuration instead of the PJ-055 to allow stereo music sources to be reproduced.
Aviation headphones are paired with special tip-ring-sleeve, 3/16-in (0.206 in)/5.23-mm diameter plug, type PJ-068 (PL-68), for the microphone. The PJ-068 mates with a JK-33 jack (Switchcraft C-12B), and is similar to the Western Electric plug WE-109. In the microphone plug the Ring is used for the microphone hot and the sleeve is ground. The extra (tip) connection in the microphone plug is often left unconnected but is also sometimes used for various functions, most commonly an optional push-to-talk switch, but on some aircraft it carries headphone audio and on others a DC supply.
Military aircraft and civil helicopters have another type termed a U-174/U; these are also known as NATO plugs or Nexus TP120 phone plugs. They are similar to a 1/4-in. (6.35 mm) plug, but with a shorter shaft and an extra ring, i.e. four conductors in total, allowing two for the headphones (mono) and two for the microphone. There is a confusingly similar four-conductor British connector with a slightly smaller diameter and a different wiring configuration used for headsets in many UK military aircraft, often also referred to as a NATO or UK NATO connector.
Mono and stereo compatibility
The original application for the 6.35 mm (1/4 in) phone jack was in manual telephone exchanges. Many different configurations of these phone plugs were used, some accommodating five or more conductors, with several tip profiles. Of these many varieties, only the two-conductor version with a rounded tip profile was compatible between different manufacturers, and this was the design that was at first adopted for use with microphones, electric guitars, headphones, loudspeakers, and other audio equipment.
When a three-conductor version of the 6.35 mm plug was introduced for use with stereo headphones, it was given a sharper tip profile in order to make it possible to manufacture jacks that would accept only stereo plugs, to avoid short-circuiting the right channel of the amplifier. This attempt has long been abandoned, and now the convention is that all plugs fit all sockets of the same size, regardless of whether they are balanced or unbalanced, mono or stereo. Most 6.35 mm plugs, mono or stereo, now have the profile of the original stereo plug, although a few rounded mono plugs are still produced. The profiles of stereo miniature and sub-miniature plugs have always been identical to the mono plugs of the same size.
The results of this physical compatibility are:
If a two-conductor plug is inserted into a three-conductor socket, the result is that the ring (right channel) of the socket is grounded. This property is deliberately used in several applications. However, if equipment is not designed for such a use, grounding the right channel causes a short circuit which has the potential to damage an audio amplifier channel. In any case, any signal from the right channel is naturally lost in this scenario.
If a three-conductor plug is connected to a two-conductor socket, normally the result is to leave the ring of the plug unconnected. This open circuit is potentially dangerous to equipment utilizing vacuum tubes, but most solid-state devices will tolerate an open condition well. A three-conductor socket could be wired as an unbalanced mono socket to ground the ring in this situation, but the more conventional wiring is to leave the ring unconnected, exactly simulating a mono socket.
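These two rules can be summarized in a small model (a hypothetical sketch for illustration only; the contact names follow the TRS convention described earlier in this article):

```python
def mate(plug, socket):
    """Model which signals connect when a TS or TRS plug is inserted
    into a TS or TRS socket of the same diameter.

    plug, socket: "TS" (two-conductor mono) or "TRS" (three-conductor).
    Returns a dict describing the fate of each contact.
    """
    result = {"tip": "tip", "sleeve": "sleeve"}  # tip and sleeve always mate
    if socket == "TRS":
        if plug == "TRS":
            result["ring"] = "ring"          # right channel carried normally
        else:
            result["ring"] = "grounded"      # TS plug shorts the ring to the sleeve
    elif plug == "TRS":
        result["plug_ring"] = "unconnected"  # plug's ring is left open-circuit
    return result

# A mono plug in a stereo socket grounds the right channel:
assert mate("TS", "TRS")["ring"] == "grounded"
# A stereo plug in a mono socket leaves its ring floating:
assert mate("TRS", "TS")["plug_ring"] == "unconnected"
```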
Because of a lack of standardization in the past regarding the dimensions (length) given to the ring conductor and the insulating portions on either side of it in 6.35 mm (1/4 in) phone connectors and the width of the conductors in different brands and generations of sockets, there are occasional issues with compatibility between differing brands of plug and socket. This can result in a contact in the socket bridging (shorting) the ring and sleeve contacts on a phone connector.
General use
In the most common arrangement, consistent with the original intention of the design, the male plug is connected to a cable, and the female socket is mounted in a piece of equipment. A considerable variety of line plugs and panel sockets is available, including plugs suiting various cable sizes, right-angle plugs, and both plugs and sockets in a variety of price ranges and with current capacities up to 15 amperes for certain heavy-duty 1/4 in versions intended for loudspeaker connections.
Common uses of phone plugs and their matching sockets include:
Headphone and earphone jacks on a wide range of equipment. 6.35 mm (1/4 in) plugs are common on home and professional component equipment, while 3.5 mm plugs are nearly universal for portable audio equipment and headphones. 2.5 mm plugs are not as common, but are used on communication equipment such as cordless phones, mobile phones, and two-way radios, especially in the earliest years of the 21st century before the 3.5 mm became standard on mobile phones. The use of headphone jacks in smartphones is declining in favor of USB-C connectors and wireless Bluetooth solutions.
Consumer electronics devices such as digital cameras, camcorders, and portable DVD players use 3.5 mm connectors for composite video and audio output. Typically, a TRS connection is used for mono unbalanced audio plus video, and a TRRS connection for stereo unbalanced audio plus video. Cables designed for this use are often terminated with RCA connectors on the other end. Sony also used this style of connection as the TV-Out on some models of Vaio laptop.
Hands-free sets and headsets often use 3.5 mm or 2.5 mm connectors. TRS connectors are used for mono audio out and an unbalanced microphone (with a shared ground). Four-conductor TRRS phone connectors add an additional audio channel for stereo output. TRRS connectors used for this purpose are sometimes interoperable with TRS connectors, depending on how the contacts are used.
Microphone inputs on tape and cassette recorders, sometimes with remote-control switching on the ring. Early monaural cassette recorders mostly used a dual-pin version consisting of a 3.5 mm TS plug for the microphone and a 2.5 mm TS plug for remote control, which switches the recorder's power supply.
Patching points (insert points) on a wide range of equipment.
Personal computers, sometimes using a sound card plugged into the computer. Stereo 3.5 mm jacks are used for:
Line in (stereo)
Line out (stereo)
Headphones and loudspeaker out (stereo)
Microphone input (mono, usually with 5 V power available on the ring)
Older laptop computers generally have one jack for headphones and one mono jack for a microphone at microphone level.
LCD monitors with built-in speakers will need a cable with 3.5 mm male TRS plugs on each end to connect to the sound card.
Devices designed for surround output may use multiple jacks for paired channels (e.g. TRS for front left and right; TRRS for front center, rear center, and subwoofer; and TRS for surround left and right).
Eurorack, Moog and other modular synthesizers
Almost all electric guitars use a ¼ in mono jack as their output connector. Some makes (such as Shergold) use a stereo jack instead for stereo output, or a second stereo jack in addition to a mono jack (as with Rickenbacker).
Instrument amplifiers for guitars, basses and similar amplified musical instruments. ¼ in jacks are overwhelmingly the most common connectors for:
Inputs. A shielded cable with a mono ¼ in phone plug on each end is commonly termed a guitar cable or a patch cable, the first name reflecting this usage, the second the history of the phone plug's development for use in manual telephone exchanges.
Loudspeaker outputs
Line outputs
Foot switches and effects pedals. Stereo plugs are used for double switches (for example by Fender). There is little compatibility between makers.
Effects loops, which are normally wired as patch points
Electronic keyboards use jacks for a similar range of uses to guitars and amplifiers, and in addition:
Sustain pedals
Expression pedals
Electronic drums use jacks to connect sensor pads to the synthesizer module or MIDI encoder. In this usage, a change in voltage on the wire indicates a drum stroke.
TRS jacks are sometimes used for balanced connections for instance in compact or economy audio mixing desks for balanced microphone inputs. In some audio equipment, a TRS connection may be offered in addition to an XLR balanced line connector.
Loudspeaker connections for older or consumer sound reinforcement equipment. Speakon connectors are used in modern professional systems as they mate with greater contact area and thus carry higher current, lock in place and do not risk shorting out the amplifier upon insertion or disconnection. Some professional loudspeakers carry both Speakon and TRS connectors for compatibility. Heavy-duty ¼ in loudspeaker jacks are rated at 15 A maximum, which limits them to applications involving less than 1,800 watts.
Modular synthesizers commonly use monophonic cables for creating patches.
Quarter-inch phone connectors are widely used to connect external processing devices to insert points on mixing consoles. Two- or three-conductor phone connectors might be used in pairs as separate send and return jacks, or a single three-conductor phone jack might serve as both send and return, in which case the signals are unbalanced. A single combined send/return TRS insert jack saves both panel space and components, but the unbalanced connection may introduce a slight buzz. Insert points on mixing consoles may also be XLR, RCA or bantam TT (tiny telephone) jacks, depending on the make and model.
Some small electronic devices such as audio cassette players, especially in the cheaper price brackets, use a two-conductor 3.5 mm or 2.5 mm phone jack as a DC power connector.
Some photographic studio strobes have ¼ in or 3.5 mm jacks for the flash synchronization input. A camera's electrical flash output (PC socket or hot shoe adapter) is cabled to the strobe light's sync input jacks.
Some cameras use the 2.5 mm stereo jack for the connector for the remote shutter release (and focus activation); examples are Canon's RS-60E3 remote switch and Sigma's CR-21 wired remote control.
Some miniaturized electronic devices use 2.5 mm or 3.5 mm jacks as serial port connectors for data transfer and unit programming. This technique is particularly common on graphing calculators, such as the TI-83 series, and some types of amateur and two-way radio. In more modern equipment USB mini-B connectors are provided in addition to or instead of jack connectors. The second-generation iPod Shuffle from Apple has one TRRS jack which serves as headphone, USB, or power supply, depending on the connected plug.
The Atari 2600 (Video Computer System), the first widely popular home video game console with interchangeable software programs, used a 3.5 mm TS (two conductor) jack for 9 V 500 mA DC power.
The Apple Lisa personal computer used a three-conductor TRS phone connector for its keyboard.
The Sangean DCR-200 radio uses a wire aerial terminating with a 2.5 mm phone connector.
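The 1,800-watt limit quoted above for heavy-duty ¼ in loudspeaker jacks follows directly from the 15 A current rating; a quick check of the arithmetic, assuming a nominal 8 Ω loudspeaker load (the load impedance is an assumption for illustration, not stated in the text):

```python
# Power limit of a 15 A connector into a resistive load: P = I^2 * R.
# The 8-ohm loudspeaker impedance is an illustrative assumption.
current_limit_a = 15.0   # maximum rated current, amperes
load_ohms = 8.0          # assumed nominal loudspeaker impedance

max_power_w = current_limit_a ** 2 * load_ohms
print(max_power_w)  # 1800.0
```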
Computer sound
Personal computer sound cards, such as Creative Labs' Sound Blaster line, use a 3.5 mm phone connector as a mono microphone input, and deliver a 5 V bias voltage on the ring to power the FET preamplifier built into electret microphones. Adjustments may be required to achieve compatibility between equipment from different manufacturers.
The Apple PlainTalk microphone jack used on some older Macintosh systems is designed to accept an extended 3.5 mm three-conductor phone connector; in this case, the tip carries power for a preamplifier inside the microphone. It cannot accept a standard microphone without a preamp. If a PlainTalk-compatible microphone is not available, the jack can accept a line-level sound input.
Normally, 3.5 mm three-conductor sockets are used in computer sound cards for stereo output. Thus, for a sound card with 5.1 output, there will be three sockets to accommodate six channels: front left and right; surround left and right; and center and subwoofer. 6.1 and 7.1 channel sound cards from Creative Labs, however, use a single three-conductor socket (for the front speakers) and two four-conductor sockets. This is to accommodate rear-center (6.1) or rear left and right (7.1) channels without the need for additional sockets on the sound card.
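The socket layout described above for 7.1-channel cards can be sketched as a simple mapping; the socket names and the exact grouping of channels per TRRS socket are illustrative assumptions here, not vendor terminology:

```python
# Illustrative 7.1 sound card layout per the description above: one TRS
# socket (3 conductors) carries the front pair, and two TRRS sockets
# (4 conductors each) carry the remaining six channels.
sockets_7_1 = {
    "trs_socket":    ["front_left", "front_right"],
    "trrs_socket_1": ["center", "subwoofer", "rear_left"],
    "trrs_socket_2": ["surround_left", "surround_right", "rear_right"],
}

total_channels = sum(len(chans) for chans in sockets_7_1.values())
print(total_channels)  # 8, i.e. "7.1"
```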
Some portable computers have a combined 3.5 mm TRS-TOSLINK jack, supporting stereo audio output using a TRS connector, or TOSLINK (stereo or 5.1 Dolby Digital/DTS) digital output using a suitable optical adapter. Most iMac computers have this digital/analog combo output feature as standard, with early MacBooks having two ports, one for analog/digital audio input and the other for output. Support for input was dropped on various later models.
Some newer computers, such as Lenovo laptops, have 3.5 mm TRRS headset sockets, which are compatible with phone headsets and may be distinguished by a headset icon instead of the usual headphones or microphone icons. These are particularly used for voice over IP.
Video
Equipment requiring video with stereo audio input/output sometimes uses 3.5 mm TRRS connectors. Two incompatible variants exist, of 15 mm and 17 mm length, and using the wrong variant may either simply not work or may cause physical damage.
Attempting to fully insert the longer (17 mm) plug into a receptacle designed for the shorter (15 mm) plug may damage the receptacle, and may damage any electronics located immediately behind the receptacle. However, partially inserting the plug will work as the tip/ring/ring distances are the same for both variants.
Using the shorter plug in a socket designed for the longer connector will result in the plug not 'locking in', and may additionally result in wrong signal routing and/or a short circuit inside the equipment (e.g. the plug tip may cause the contacts inside the receptacle – tip/ring 1, etc. - to short together).
The shorter 15 mm TRRS variant is more common and fully physically compatible with 'standard' 3.5 mm TRS and TS connectors.
Recording equipment
Many small video cameras, laptops, recorders and other consumer devices use a 3.5 mm microphone connector for attaching a (mono/stereo) microphone to the system.
These fall into three categories:
Devices that use an unpowered microphone: usually a cheap dynamic or piezoelectric microphone. The microphone generates its own voltage, and needs no power.
Devices that use a self-powered microphone: usually a condenser microphone with internal battery-powered amplifier.
Devices that use a "plug-in powered" microphone: an electret microphone containing an internal FET amplifier. These provide a good quality signal, in a very small microphone. However, the internal FET needs a DC power supply, which is provided as a bias voltage for an internal preamp transistor.
Plug-in power is supplied on the same line as the audio signal, using an RC filter. The DC bias voltage supplies the FET amplifier (at a low current), while the capacitor decouples the DC supply from the AC input to the recorder. Typically, V=1.5 V, R=1 kΩ, C=47 μF.
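With the component values quoted above, the behaviour of the decoupling network is easy to sanity-check. Treating the 1 kΩ feed resistor as the dominant impedance seen by the capacitor (a simplifying assumption), the high-pass corner frequency f = 1/(2πRC) falls far below the audio band, so the bias feed does not attenuate the signal:

```python
import math

# Plug-in power decoupling network: R = 1 kOhm bias feed, C = 47 uF coupling.
R = 1_000    # ohms
C = 47e-6    # farads

f_c = 1 / (2 * math.pi * R * C)  # high-pass corner frequency in hertz
print(round(f_c, 2))  # about 3.39 Hz, well under the ~20 Hz audio lower limit
```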
If a recorder provides plug-in power and the microphone does not need it, everything will usually work correctly. In the converse case (the recorder provides no power, but the microphone needs it), no sound will be recorded. Neither misconfiguration will damage consumer hardware, but providing power when none is needed could destroy a broadcast-type microphone.
PDAs and mobile phones
Three- or four-conductor (TRS or TRRS) 2.5 mm and 3.5 mm sockets are common on older cell phones and newer smartphones respectively, providing mono (three conductor) or stereo (four conductor) sound and a microphone input, together with signaling (e.g., push a button to answer a call). These are used both for handsfree headsets (esp. mono audio plus mic, also stereo audio plus mic, plus signaling for call handling) and for (stereo) headphones (stereo audio, no mic). Wireless (connectorless) headsets or headphones usually use the Bluetooth protocol.
3.5 mm TRRS (stereo-plus-mic) sockets became particularly common on smartphones, and have been used e.g. by Nokia since 2006; they are often compatible with standard 3.5 mm stereo headphones. Some computers now also include a TRRS headset socket, compatible with headsets intended for smartphones.
There are multiple conflicting standards for TRRS connectors and their compatibility with three conductor TRS. The four conductors of a TRRS connector are assigned to different purposes by different manufacturers. Any 3.5 mm plug can be plugged mechanically into any socket, but many combinations are electrically incompatible. For example, plugging TRRS headphones into a TRS headset socket (or vice versa) or plugging TRRS headphones from one manufacturer into a TRRS socket from another may not function correctly, or at all. Mono audio will usually work, but stereo audio or microphone may not work, depending on wiring. Signaling compatibility depends both on wiring compatibility and the signals sent by the hands-free/headphones controller being correctly interpreted by the phone. Adapters that are wired for headsets will not work for stereo headphones and conversely. Further, as TTY/TDDs are wired as headsets, TTY adapters can also connect a 2.5 mm headset to a phone.
TRRS standards
Two different forms are frequently found, both of which place left audio on the tip and right audio on the first ring (for compatibility with stereo connectors). Where they differ is in the placement of the microphone and return contacts:
The first, which places the ground return on the sleeve and the microphone on the second ring, is standardized in OMTP and has been accepted as a national Chinese standard YDT 1885–2009. It is mostly used on older devices, such as older Nokia mobiles, older Samsung smartphones, and some Sony Ericsson phones, and products meant for the Chinese market. Headsets using this wiring may be indicated by black plastic separators between the rings.
The second, which reverses these contacts, with the microphone on the sleeve, is used by Apple's iPhone line, and has become the de facto TRRS standard, to maintain compatibility with these products. It is now used by HTC devices, recent Samsung, Nokia, and Sony phones, among others. This is referred to as CTIA/AHJ, and has the disadvantage that the mic will be shorted to ground if the body of the device is metal and the sleeve has a flange that contacts it. Headsets using this wiring may be indicated by white plastic separators between the rings.
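The two wirings above agree on the stereo contacts and differ only in the last two; laying the assignments out side by side makes the mic/ground swap explicit (contacts are listed in the conventional tip-to-sleeve order):

```python
# TRRS contact assignments, listed tip first, per the two standards above.
OMTP = {"tip": "left", "ring1": "right", "ring2": "mic", "sleeve": "ground"}
CTIA = {"tip": "left", "ring1": "right", "ring2": "ground", "sleeve": "mic"}

# Only the microphone and ground contacts differ between the standards:
swapped = [contact for contact in OMTP if OMTP[contact] != CTIA[contact]]
print(swapped)  # ['ring2', 'sleeve']
```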
If a CTIA headset is connected to a mobile phone with an OMTP interface, the missing ground effectively connects the speakers in out-of-phase series, resulting in no voice on typical popular music recordings, where the singers are mixed in the center. In this case, holding down the main microphone button shorts across the microphone and restores the ground, so that the correct sound may be audible.
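The "missing vocals" effect described above is simple difference-signal arithmetic: with no ground connection, the two drivers sit in out-of-phase series across the left and right lines, so the listener hears roughly L - R, and any centre-panned content (where L equals R) cancels. A toy numeric check:

```python
# CTIA headset on an OMTP socket: the speakers form an out-of-phase series
# path, reproducing approximately the difference of the two channels.
def heard(left, right):
    return left - right

print(heard(0.8, 0.8))  # 0.0 -> a centre-panned vocal cancels entirely
print(heard(0.8, 0.0))  # 0.8 -> a hard-panned instrument survives
```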
The 4-pole 3.5 mm connector is defined by the Japanese standard JEITA/EIAJ RC-5325A, "4-Pole miniature concentric plugs and jacks", originally published in 1993. 3-pole 3.5 mm TRS connectors are defined in JIS C 6560. See also JIS C 5401 and IEC 60130-8.
Interoperability
The USB Type-C Cable and Connector Specification Revision 1.1 specifies a mapping from a USB-C jack to a 4-pole TRRS jack, for the use of headsets, and supports both CTIA and OMTP (YD/T 1885–2009) modes. See Audio Adapter Accessory Mode (Appendix A). Some devices transparently handle many jack standards, and there are hardware implementations of this available as components.
Some devices apply voltage to the sleeve and second ring to detect the wiring, and switch the last two conductors to allow a device made to one standard to be used with a headset made to the other.
TRRRS standards
A new TRRRS standard for 3.5 mm connectors was developed and recently approved by the ITU-T. The new standard, called P.382 (formerly P.MMIC), outlines technical requirements and test methods for a 5-pole socket and plug configuration. Compared to the legacy TRRS standard, TRRRS provides one extra line that can be used to connect a second microphone, or to carry external power to or from the audio accessory.
P.382 requires compliant sockets and plugs to be backwards compatible with legacy TRRS and TRS connectors, so P.382-compliant TRRRS connectors should allow for seamless integration when used on new products. TRRRS connectors enable audio applications such as active noise cancelling and binaural recording, in which dual analogue microphone lines can be directly connected to a host device. The connector is commonly found on Sony phones from the Xperia Z1 through the XZ1, and on the Xperia 1 II.
Switch contacts
Panel-mounting jacks are often provided with switch contacts. Most commonly, a mono jack is provided with one normally closed (NC) contact, which is connected to the tip (live) connection when no plug is in the socket, and disconnected when a plug is inserted. Stereo sockets commonly provide two such NC contacts, one for the tip (left channel live) and one for the ring or collar (right channel live). Some designs of jack also have such a connection on the sleeve. As this contact is usually ground, it is not much use for signal switching, but could be used to indicate to electronic circuitry that the socket was in use.
Less commonly, some jacks are provided with normally open (NO) or change-over contacts, and/or the switch contacts may be isolated from the connector.
The original purpose of these contacts was for switching in telephone exchanges, for which there were many patterns. Two sets of change-over contacts, isolated from the connector contacts, were common. The more recent pattern of one NC contact for each signal path, internally attached to the connector contact, stems from their use as headphone jacks. In many amplifiers and equipment containing them, such as electronic organs, a headphone jack is provided that disconnects the loudspeakers when in use. This is done by means of these switch contacts. In other equipment, a dummy load is provided when the headphones are not connected. This is also easily provided by means of these NC contacts.
Other uses for these contacts have been found. One is to interrupt a signal path to enable other circuitry to be inserted. This is done by using one NC contact of a stereo jack to connect the tip and ring together when no plug is inserted. The tip is then made the output, and the ring the input (or vice versa), thus forming a patch point.
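The normalled patch point described above can be modelled as a two-state switch: with no plug inserted, the NC contact bridges tip (send) to ring (return) and the signal passes straight through; inserting a plug breaks the bridge and routes the signal via the external device. A minimal sketch (the function names are illustrative):

```python
# Normalled insert/patch point: the jack's NC switch contact joins the send
# (tip) to the return (ring) whenever no plug is present.
def patch_point(signal, plug_inserted, external_processor=None):
    if not plug_inserted:
        return signal                    # NC contact bridges send to return
    return external_processor(signal)    # plug breaks the bridge; the signal
                                         # passes through the inserted device

print(patch_point(1.0, False))                    # 1.0 (bypassed)
print(patch_point(1.0, True, lambda s: s * 0.5))  # 0.5 (processed externally)
```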
Another use is to provide alternative mono or stereo output facilities on some guitars and electronic organs. This is achieved by using two mono jacks, one for left channel and one for right, and wiring the NC contact on the right channel jack to the tip of the other, to connect the two connector tips together when the right channel output is not in use. This then mixes the signals so that the left channel jack doubles as a mono output.
Where a 3.5 mm or 2.5 mm jack is used as a DC power inlet connector, a switch contact may be used to disconnect an internal battery whenever an external power supply is connected, to prevent incorrect recharging of the battery.
A standard stereo jack is used on most battery-powered guitar effects pedals to eliminate the need for a separate power switch. In this configuration, the internal battery has its negative terminal wired to the sleeve contact of the jack. When the user plugs in a two-conductor (mono) guitar or microphone lead, the resulting short circuit between sleeve and ring connects an internal battery to the unit's circuitry, ensuring that it powers up or down automatically whenever a signal lead is inserted or removed. A drawback of this design is the risk of inadvertently discharging the battery if the lead is not removed after use, such as if the equipment is left plugged in overnight.
Balanced audio
When a phone connector is used to make a balanced connection, the two active conductors are both used for a monaural signal. The ring, used for the right channel in stereo systems, is used instead for the inverting input. This is a common use in small audio mixing desks, where space is a premium and they offer a more compact alternative to XLR connectors. Another advantage offered by TRS phone connectors used for balanced microphone inputs is that a standard unbalanced signal lead using a TS phone jack can simply be plugged into such an input. The ring (right channel) contact then makes contact with the plug body, correctly grounding the inverting input.
A disadvantage of using phone connectors for balanced audio connections is that the ground mates last and the socket grounds the plug tip and ring while the plug is being inserted or disconnected. This causes bursts of hum, cracks and pops, and may stress some outputs, as they will be short-circuited briefly, or for longer if the plug is left half in.
This problem does not occur when using the 'gauge B' (BPO) phone connector (PO 316) which although it is of 0.25 in (6.35 mm) diameter has a smaller tip and a recessed ring so that the ground contact of the socket never touches the tip or ring of the plug. This type was designed for balanced audio use, being the original telephone 'switchboard' connector and is still common in broadcast, telecommunications and many professional audio applications where it is vital that permanent circuits being monitored (bridged) are not interrupted by the insertion or removal of connectors. This same tapered shape used in the 'gauge B' (BPO) plug can be seen also in aviation and military applications on various diameters of jack connector including the PJ-068 and 'bantam' plugs. The more common straight-sided profile used in domestic and commercial applications and discussed in most of this article is known as 'gauge A'.
XLR connectors used in much professional audio equipment mate the ground signal on pin 1 first.
Unbalanced audio
Phone connectors with three conductors are also commonly used as unbalanced audio patch points (or insert points, or simply inserts), with the output on many mixers found on the tip (left channel) and the input on the ring (right channel). This is often expressed as "tip send, ring return". Other mixers have unbalanced insert points with "ring send, tip return". One advantage of this system is that the switch contact within the panel socket, originally designed for other purposes, can be used to close the circuit when the patch point is not in use. An advantage of the tip send patch point is that if it is used as an output only, a 2-conductor mono phone plug correctly grounds the input. In the same fashion, use of a "tip return" insert style allows a mono phone plug to bring an unbalanced signal directly into the circuit, though in this case the output must be robust enough to withstand being grounded. Combining send and return functions via single ¼ in TRS connectors in this way is seen in very many professional and semi-professional audio mixing desks, because it halves the space needed for insert jack fields which would otherwise need two jacks, one for send and one for return. The tradeoff is that unbalanced signals are more prone to buzz, hum and outside interference.
In some three-conductor TRS phone inserts, the concept is extended by using specially designed phone jacks that will accept a mono phone plug partly inserted to the first click and will then connect the tip to the signal path without breaking it. Most standard phone connectors can also be used in this way with varying success, but neither the switch contact nor the tip contact can be relied upon unless the internal contacts have been designed with extra strength for holding the plug tip in place. Even with stronger contacts, an accidental mechanical movement of the inserted plug can interrupt signal within the circuit. For maximum reliability, any usage involving first click or half-click positions will instead rewire the plug to short tip and ring together and then insert this modified plug all the way into the jack.
The TRS tip return, ring send unbalanced insert configuration is mostly found on older mixers. This allowed for the insert jack to serve as a standard-wired mono line input that would bypass the mic preamp. However, tip send has become the generally accepted standard for mixer inserts since the early-to-mid 1990s. The TRS ring send configuration is still found on some compressor sidechain input jacks such as the dbx 166XL.
In some very compact equipment, 3.5 mm TRS phone connectors are used as patch points.
Some sound recording devices use a three-conductor phone connector as a mono microphone input, using the tip as the signal path and the ring to connect a standby switch on the microphone.
Poor connections
Connectors that are tarnished, or that were not manufactured within tight tolerances, are prone to cause poor connections. Depending upon the surface material of the connectors, tarnished ones can be cleaned with a burnishing agent (typical for solid brass contacts) or contact cleaner (for plated contacts).
See also
Banana connector
Coaxial power connector
Notes
References
External links
The 19th Century plug that's still being used—BBC News
Audio engineering
Audiovisual connectors
Computer connectors
Telephone connectors
49977029 | https://en.wikipedia.org/wiki/Daniel%20Lopatin%20discography | Daniel Lopatin discography | Daniel Lopatin is a Brooklyn-based experimental musician who records primarily under the pseudonym Oneohtrix Point Never. Early in his career as both a solo artist and as a member of several groups, he released a number of LPs and extended plays on a variety of independent labels. In 2010, he signed to Editions Mego and released his major label debut Returnal. In 2011, he founded the record label Software. In 2013, Lopatin signed to British electronic label Warp Records and released his label debut R Plus Seven.
As Oneohtrix Point Never
Studio albums
Extended plays and cassettes
Transmat Memories (2008, Taped Sounds)
A Pact Between Strangers (2008, Gneiss Things)
Hollyr (2008, Sound Holes)
Ruined Lives (2008, Young Tapes)
Heart of a Champion (2008, Mistake By The Lake Tapes)
KGB Nights/Blue Drive (credited to KGB Man/Oneohtrix Point Never) (2009, Catholic Tapes)
Young Beidnahga (2009, Ruralfaune)
Caboladies/Oneohtrix Point Never Split (2009, NNA Tapes)
Scenes with Curved Objects (2009, Utmarken)
Dog in the Fog (2012, Software)
Commissions I (2014, Warp)
Commissions II (2015, Warp)
The Station (2018, Warp)
Love in the Time of Lexapro (2018, Warp)
Singles
"Sleep Dealer" (2011, Software)
"Replica" (2011, Software)
"Still Life" (2013, Warp)
"Problem Areas" (2013, Warp)
"Zebra" (2013, Warp)
"Sticky Drama" (2015, Warp)
"I Bite Through It" (2015, Warp)
"Mutant Standard" (2015, Warp)
"The Pure and the Damned" (2017, Warp)
"Black Snow" (2018, Warp)
"We'll Take It" (2018, Warp)
"The Station" (2018, Warp)
Compilation albums
Rifts 2-CD (2009, No Fun); 3-CD (2012, Software)
Drawn and Quartered (2013, Software)
The Fall into Time (2013, Software)
Soundtrack albums
Good Time (2017, Warp)
Uncut Gems (2019, Warp)
Miscellaneous
Music for Reliquary House / In 1980 I Was a Blue Square (split LP with Rene Hell) (2010, NNA Tapes)
Production and mixing work
Antony and the Johnsons – Swanlights (EP), 2010 (Producer on "Swanlights OPN Edit")
Nine Inch Nails – Hesitation Marks, 2013 ("Find My Way" remix on deluxe edition)
Pariah – IOTDXI (compilation), 2011 (Producer on "Orpheus" (Oneohtrix Point Never Subliminal Cops Edit))
ANOHNI – Hopelessness, 2016 (Co-production with Anohni and Hudson Mohawke)
DJ Earl – Open Your Eyes, 2016 (Co-production, keyboards)
ANOHNI – Paradise (EP), 2017 (Co-production with Anohni and Hudson Mohawke)
FKA twigs – MAGDALENE, 2019 (Producer on "daybed")
The Weeknd – After Hours, 2020 (Co-production, keyboards, "Save Your Tears" remix on deluxe edition)
The Weeknd – Dawn FM, 2022 (Co-production)
Video
Memory Vague (2009, Root Strata)
As Daniel Lopatin
Collaborations
Instrumental Tourist (collaboration with Tim Hecker) (2011, Software)
Film score
The Bling Ring (2013)
Partisan (2015)
Uncut Gems (Original Motion Picture Soundtrack) (2019)
Musical contributions
Ducktails – The Flower Lane (2013) (Synthesizer)
Real Estate – "Out of Tune" from Days (2011) (Synthesizer)
Moses Sumney - Græ (2020) (Synthesizer, Additional Production)
Production and mixing work
Autre Ne Veut – Anxiety, 2013 (Additional Production, Keyboards)
Ducktails – The Flower Lane, 2013 (Synthesizer)
Okkyung Lee, Lasse Marhaug, C. Spencer Yeh – Wake Up Awesome, 2013 (Executive Producer – as Daniel Lopatin)
Clinic – Free Reign, 2012 (Mixing)
Harald Grosskopf – Re-Synthesist, 2011 (feature on "Trauma 2010")
As Chuck Person
Cassette
Chuck Person's Eccojams Vol. 1 (as Chuck Person) (2010, The Curatorial Club)
Compilations
A.D.D. Complete (2012, Software)
As Dania Shapes
Studio albums
Soundsystem Pastoral (2006, Naivsuper)
Holograd (2008, Label Cities)
As a member of Ford & Lopatin/Games
Studio albums
Channel Pressure (2011, Software)
Extended plays
Everything Is Working (2010, Hippos in Tanks)
That We Can Play (2010, Hippos in Tanks)
Mixtapes
Spend the Night with... Games (2010)
Other projects
As a member of Infinity Window
Trans Fat (2008, Chocolate Monk)
Artificial Midnight (2009, Arbor)
As a member of Skyramps
Days of Thunder (2009, Wagon)
As a member of Total System Point Never
Power in That Which Compels You (2008, Snapped in Half)
As a member of Astronaut
Early Peril (2008, Insult)
Sans Noise Suitcase (2008, Housecraft)
As a member of Guys Next Door
"Behind the Wall" (2017, PC Music)
References
Discographies of American artists